The Potential of the Cell Processor for Scientific Computing

ABSTRACT
The slowing pace of commodity microprocessor performance improvements combined with ever-increasing chip power demands has become of utmost concern to computational scientists. As a result, the high performance computing community is examining alternative architectures that address the limitations of modern cache-based designs. In this work, we examine the potential of using the forthcoming STI Cell processor as a building block for future high-end computing systems. Our work contains several novel contributions. First, we introduce a performance model for Cell and apply it to several key scientific computing kernels: dense matrix multiply, sparse matrix vector multiply, stencil computations, and 1D/2D FFTs. The difficulty of programming Cell, which requires assembly-level intrinsics for the best performance, makes this model useful as an initial step in algorithm design and evaluation. Next, we validate the accuracy of our model by comparing results against published hardware results, as well as our own implementations on the Cell full system simulator. Additionally, we compare Cell performance to benchmarks run on leading superscalar (AMD Opteron), VLIW (Intel Itanium2), and vector (Cray X1E) architectures. Our work also explores several different mappings of the kernels and demonstrates a simple and effective programming model for Cell's unique architecture. Finally, we propose modest microarchitectural modifications that could significantly increase the efficiency of double-precision calculations. Overall results demonstrate the tremendous potential of the Cell architecture for scientific computations in terms of both raw performance and power efficiency.

INTRODUCTION
Over the last decade the HPC community has moved towards
machines composed of commodity microprocessors as
a strategy for tracking the tremendous growth in processor
performance in that market. As frequency scaling slows,
and the power requirements of these mainstream processors
continue to grow, the HPC community is looking for alternative
architectures that provide high performance on scientific
applications, yet have a healthy market outside the
scientific community. In this work, we examine the potential
of the forthcoming STI Cell processor as a building block for
future high-end computing systems, by investigating performance
across several key scientific computing kernels: dense
matrix multiply, sparse matrix vector multiply, stencil computations
on regular grids, as well as 1D and 2D FFTs.
Cell combines the considerable floating point resources required
for demanding numerical algorithms with a power-efficient
software-controlled memory hierarchy. Despite its
radical departure from previous mainstream/commodity processor
designs, Cell is particularly compelling because it
will be produced at such high volumes that it will be cost-competitive
with commodity CPUs. The current implementation
of Cell is most often noted for its extremely high performance
single-precision (SP) arithmetic, which is widely
considered insufficient for the majority of scientific applications
. Although Cell's peak double precision performance
is still impressive relative to its commodity peers (~14.6
Gflop/s at 3.2 GHz), we explore how modest hardware changes
could significantly improve performance for computationally
intensive DP applications.
This paper presents several novel results.
We present
quantitative performance data for scientific kernels that compares
Cell performance to leading superscalar (AMD Opteron),
VLIW (Intel Itanium2), and vector (Cray X1E) architectures
. We believe this study examines the broadest array
of scientific algorithms to date on Cell. We developed both
analytical models and lightweight simulators to predict kernel
performance that we demonstrated to be accurate when
compared against published Cell hardware results, as well as
our own implementations on the Cell full system simulator.
Our work also explores the complexity of mapping several
important scientific algorithms onto the Cell's unique architecture
in order to leverage the large number of available
functional units and the software-controlled memory. Additionally
, we propose modest microarchitectural modifications
that could increase the efficiency of double-precision
arithmetic calculations, and demonstrate significant performance
improvements compared with the current Cell implementation
.
Overall results demonstrate the tremendous potential of
the Cell architecture for scientific computations in terms of
both raw performance and power efficiency. We also conclude
that Cell's heterogeneous multi-core implementation
is inherently better suited to the HPC environment than
homogeneous commodity multicore processors.
RELATED WORK
One of the key limiting factors for computational performance
is off-chip memory bandwidth. Since increasing
the off-chip bandwidth is prohibitively expensive, many architects
are considering ways of using available bandwidth
more efficiently. Examples include hardware multithreading
or more efficient alternatives to conventional cache-based architectures
such as software-controlled memories. Software-controlled
memories can potentially improve memory subsystem
performance by supporting finely controlled prefetching
and more efficient cache-utilization policies that take advantage
of application-level information -- but do so with far
less architectural complexity than conventional cache architectures
. While placing data movement under explicit software
control increases the complexity of the programming
model, prior research has demonstrated that this approach
can be more effective for hiding memory latencies (including
cache misses and TLB misses) -- requiring far smaller cache
sizes to match the performance of conventional cache implementations
[17, 19]. The performance of software-controlled
memory is more predictable, thereby making it popular for
real-time embedded applications where guaranteed response
rates are essential.
Over the last five years, a plethora of alternatives to conventional
cache-based architectures have been suggested including
scratchpad memories [9,16,30], paged on-chip memories
[12, 17], and explicit three-level memory architectures
[18, 19]. Until recently, few of these architectural concepts
made it into mainstream processor designs, but the increasingly
stringent power/performance requirements for embedded
systems have resulted in a number of recent implementations
that have adopted these concepts. Chips like the
Sony Emotion Engine [20, 23, 29] and Intel's MXP5800 both
achieved high performance at low power by adopting three
levels (registers, local memory, external DRAM) of software-managed
memory. More recently, the STI Cell processor has
adopted a similar approach where data movement between
these three address spaces is explicitly controlled by the application.

Figure 1: Overview of the Cell processor: eight SPEs (256 KB local store each), the PowerPC core with 512 KB L2, the memory controller, and the I/O interfaces, connected by the EIB (4 rings, 8 bytes per core cycle; 25.6 GB/s to DRAM).
For predictable data access patterns the local
store approach is highly advantageous as it can be very efficiently
utilized through explicit software-controlled scheduling
. Deep pipelining of memory requests improves bandwidth utilization, while the
local store requires less power and has a faster access time than a large
cache, due in part to its lower complexity. If, however, the data access
pattern lacks predictability, then the advantages of software-managed memory are
lost. This more aggressive approach to memory architecture
was adopted to meet the demanding cost/performance
and real-time responsiveness requirements of Sony's upcoming
video game console. However, to date, an in-depth study
to evaluate the potential of utilizing the Cell architecture in
the context of scientific computations does not appear in the
literature.
CELL BACKGROUND
Cell [8,27] was designed by a partnership of Sony, Toshiba,
and IBM (STI) to be the heart of Sony's forthcoming PlayStation3
gaming system. Cell takes a radical departure from
conventional multiprocessor or multi-core architectures. Instead
of using identical cooperating commodity processors,
it uses a conventional high performance PowerPC core that
controls eight simple SIMD cores, called synergistic processing
elements (SPEs), where each SPE contains a synergistic
processing unit (SPU), a local memory, and a memory flow
controller. An overview of Cell is provided in Figure 1.
Access to external memory is handled via a 25.6GB/s
XDR memory controller. The cache coherent PowerPC core,
the eight SPEs, the DRAM controller, and I/O controllers
are all connected via 4 data rings, collectively known as the
EIB. The ring interface within each unit allows 8 bytes/cycle
to be read or written. Simultaneous transfers on the same
ring are possible. All transfers are orchestrated by the PowerPC
core.
Each SPE includes four single precision (SP) 6-cycle pipelined
FMA datapaths and one double precision (DP) half-pumped
(the double precision operations within a SIMD
operation must be serialized) 9-cycle pipelined FMA datapath
with 4 cycles of overhead for data movement [22]. Cell
has a 7 cycle in-order execution pipeline and forwarding network
[8]. IBM appears to have solved the problem of inserting
a 13 (9+4) cycle DP pipeline into a 7 stage in-order machine
by choosing the minimum effort/performance/power
solution of simply stalling for 6 cycles after issuing a DP
instruction. The SPE's DP throughput [14] of one DP instruction
every 7 (1 issue + 6 stall) cycles coincides perfectly
with this reasoning.
Thus for computationally intense algorithms like dense
matrix multiply (GEMM), we expect SP implementations to
run near peak whereas DP versions would drop to approximately
one fourteenth the peak SP flop rate [10]. Similarly,
for bandwidth intensive applications such as sparse matrix
vector multiplication (SpMV) we expect SP versions to be
between 1.5x and 4x as fast as DP, depending on density
and uniformity.
Unlike a typical coprocessor, each SPE has its own local
memory from which it fetches code and reads and writes
data. All loads and stores issued from the SPE can only
access the SPE's local memory. The Cell processor depends
on explicit DMA operations to move data from main memory
to the local store of the SPE. The limited scope of loads
and stores allows one to view the SPE as having a two-level
register file. The first level is a 128 x 128b single cycle register
file, while the second is a 16K x 128b six-cycle register
file. Data must be moved into the first level before it can be
operated on by instructions. Dedicated DMA engines allow
multiple DMA loads to run concurrently with the
SIMD execution unit, thereby mitigating memory latency
overhead via double-buffered DMA loads and stores. The
selectable length DMA operations supported by the SPE
are much like a traditional unit stride vector load. We exploit
these similarities to existing HPC platforms to select
programming models that are both familiar and tractable
for scientific application developers.
PROGRAMMING MODELS
The Cell architecture poses several challenges to programming
: an explicitly controlled memory hierarchy, explicit
parallelism between the 8 SPEs and the PowerPC, and a
quadword based ISA. Our goal is to select the programming
paradigm that offers the simplest possible expression of an
algorithm while being capable of fully utilizing the hardware
resources of the Cell processor.
The memory hierarchy is programmed using explicit DMA
intrinsics with the option of user programmed double buffering
to overlap data movement with computation on the
SPEs. Moving from a hardware managed memory hierarchy
to one controlled explicitly by the application significantly
complicates the programming model, and pushes it towards
a one sided communication model. Unlike MPI, the intrinsics
are very low level and map to half a dozen instructions.
This allows for very low software overhead and good performance
, but requires the user either to ensure
correct usage or to provide a higher-level interface or abstraction.
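To make the double-buffering idiom concrete, the sketch below uses the MFC intrinsics from spu_mfcio.h in the Cell SDK; the buffer size, tag assignment, and the compute_on() kernel are hypothetical placeholders rather than code from our implementations.

#include <spu_mfcio.h>

#define CHUNK 4096   /* bytes per DMA transfer; a single MFC request may move up to 16KB */

extern void compute_on(volatile float *data, int n);   /* user-supplied kernel (hypothetical) */

/* Stream nchunks chunks from effective address ea in main memory through
   two local-store buffers, overlapping each DMA with computation on the
   previously fetched chunk. */
void process_stream(unsigned long long ea, int nchunks)
{
    static volatile float buf[2][CHUNK / sizeof(float)] __attribute__((aligned(128)));
    int cur = 0, next = 1;

    /* Prime the pipeline: fetch the first chunk on tag 'cur'. */
    mfc_get(buf[cur], ea, CHUNK, cur, 0, 0);

    for (int i = 0; i < nchunks; i++) {
        /* Start the next transfer (if any) on the other tag. */
        if (i + 1 < nchunks)
            mfc_get(buf[next], ea + (unsigned long long)(i + 1) * CHUNK, CHUNK, next, 0, 0);

        /* Block only until the buffer we are about to use has arrived. */
        mfc_write_tag_mask(1 << cur);
        mfc_read_tag_status_all();

        compute_on(buf[cur], CHUNK / sizeof(float));

        cur ^= 1;   /* swap the roles of the two buffers */
        next ^= 1;
    }
}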
For programming the parallelism on Cell, we considered
three possible programming models: task parallelism with
independent tasks scheduled on each SPE; pipelined parallelism
where large data blocks are passed from one SPE to
the next; and data parallelism, where the processors perform
identical computations on distinct data. For simplicity, we
do not consider parallelism between the PowerPC and the
SPEs, so we can treat this as a homogeneous parallel machine
. Data pipelining may be suitable for certain classes
of algorithms and will be the focus of future investigation.
We adopt the data-parallel programming model, which is a
good match to many scientific applications and offers the
simplest and most direct method of decomposing the problem
. Data-parallel programming is quite similar to loop-level
parallelization afforded by OpenMP or the vector-like
multistreaming on the Cray X1E and the Hitachi SR-8000.
The focus of this paper is Cell architecture and performance
; we do not explore the efficacy of the IBM SPE XLC
compiler. Thus, we heavily rely on SIMD intrinsics and do
not investigate if appropriate SIMD instructions are generated
by the compiler. Although the produced Cell code may
appear verbose -- due to the use of intrinsics instead of C
operators -- it delivers readily understandable performance.
Our first Cell implementation, SpMV, required about a
month of learning the programming model, the architecture,
the compiler, the tools, and deciding on a final algorithmic
strategy. The final implementation required about 600 lines
of code. The next code development examined two flavors
of double precision stencil-based algorithms. These implementations
required one week of work and are each about
250 lines, with an additional 200 lines of common code. The
programming of these kernels on Cell required significantly
more effort than the scalar version's 15 lines, due
mainly to loop unrolling and intrinsics use. Although the
stencils are a simpler kernel, the SpMV learning experience
accelerated the coding process.
Having become experienced Cell programmers, we needed
only a single day to code, debug, and benchmark the single-precision
time-skewed stencil -- although it was virtually a complete
rewrite of the double-precision single-step version --
and to attain spectacular results of over 65 Gflop/s. This implementation
consists of about 450 lines, due once again to
unrolling and the heavy use of intrinsics.
SIMULATION METHODOLOGY
The simplicity of the SPEs and the deterministic behavior
of the explicitly controlled memory hierarchy make Cell
amenable to performance prediction using a simple analytic
model. Using this approach, one can easily explore multiple
variations of an algorithm without the effort of programming
each variation and running on either a fully cycle-accurate
simulator or hardware. With the newly released cycle accurate
simulator (Mambo), we have successfully validated our
performance model for SGEMM, SpMV, and Stencil Computations
, as will be shown in the subsequent sections.
Our modeling approach is broken into two steps commensurate
with the two phase double buffered computational
model. The kernels were first segmented into code-snippets
that operate only on data present in the local store of the
SPE. We sketched the code snippets in SPE assembly and
performed static timing analysis. The latency of each operation
, issue width limitations, and the operand alignment requirements
of the SIMD/quadword SPE execution pipeline
determined the number of cycles required. The in-order nature
and fixed local store memory latency of the SPEs makes
the analysis deterministic and thus more tractable than on
cache-based, out-of-order microprocessors.
In the second step, we construct a model that tabulates
the time required for DMA loads and stores of the operands
required by the code snippets. The model accurately reflects
the constraints imposed by resource conflicts in the
memory subsystem. For instance, concurrent DMAs issued
by multiple SPEs must be serialized, as there is only a single
DRAM controller. The model also presumes a conservative
fixed DMA initiation latency of 1000 cycles.
                 Cell SPE    Cell Chip     X1E (MSP)    AMD64         IA64
Architecture     SIMD        multi-core    multi-chip   super-scalar  VLIW
                             SIMD          vector
Clock (GHz)      3.2         3.2           1.13         2.2           1.4
DRAM (GB/s)      25.6        25.6          34           6.4           6.4
SP Gflop/s       25.6        204.8         36           8.8           5.6
DP Gflop/s       1.83        14.63         18           4.4           5.6
Local Store      256KB       2MB           --           --            --
L2 Cache         --          512KB         2MB          1MB           256KB
L3 Cache         --          --            --           --            3MB
Power (W)        3           ~40           120          89            130
Year             --          2006          2005         2004          2003

Table 1: Architectural overview of STI Cell, Cray X1E MSP, AMD Opteron, and Intel Itanium2. Estimated total Cell power and peak Gflop/s are based on the active SPEs/idle PowerPC programming model.

The model computes the total time by adding all the per-iteration
(outer loop) times, which are themselves computed
by taking the maximum of the snippet and DMA transfer
times. In some cases, the per-iteration times are constant
across iterations, but in others they vary between iterations
and are input-dependent. For example, in a sparse matrix, the
memory access pattern depends on the nonzero structure of
the matrix, which varies across iterations. Some algorithms
may also require separate stages which have different execution
times; e.g., the FFT has stages for loading data, loading
constants, local computation, transpose, local computation,
bit reversal, and storing the results.
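Stated compactly (our notation, not symbols from the original model description), the estimate takes the form

T_{\mathrm{total}} \;=\; \sum_{\mathrm{stages}} \sum_{i} \max\!\bigl(T^{\mathrm{compute}}_{i},\; T^{\mathrm{DMA}}_{i}\bigr), \qquad T^{\mathrm{DMA}}_{i} \;\approx\; 1000~\mathrm{cycles} \;+\; \mathrm{bytes}_i / \mathrm{BW}_{\mathrm{eff}},

where BW_eff is the effective bandwidth after serializing concurrent DMAs at the single DRAM controller, and the 1000-cycle term is the fixed DMA initiation latency assumed above.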
For simplicity we chose to model a 3.2GHz, 8 SPE version
of Cell with 25.6GB/s of memory bandwidth. This version
of Cell is likely to be used in the first release of the Sony
PlayStation3 [28]. The lower frequency had the simplifying
benefit that both the EIB and DRAM controller could deliver
two SP words per cycle. The maximum flop rate of
such a machine would be 204.8 Gflop/s, with a computational
intensity of 32 FLOPs/word. For comparison, we ran
these kernels on actual hardware of several leading processor
designs: the vector Cray X1E MSP, superscalar AMD
Opteron 248 and VLIW Intel Itanium2. The key architectural
characteristics are detailed in Table 1.
5.1
Cell+ Architectural Exploration
The Double Precision (DP) pipeline in Cell is obviously
an afterthought as video games have limited need for DP
arithmetic.
Certainly a redesigned pipeline would rectify
the performance limitations, but would do so at a cost of
additional design complexity and power consumption. We
offer a more modest alternative that can reuse most of the
existing circuitry. Based on our experience designing the VIRAM
vector processor-in-memory chip [12], we believe these
"Cell+" design modifications are considerably less complex
than a redesigned pipeline, consume very little additional
surface area on the chip, but show significant DP performance
improvements for scientific kernels.
In order to explore the limitations of Cell's DP issue bandwidth
, we propose an alternate design with a longer forwarding
network to eliminate all but one of the stall cycles
-- recall the factors that limit DP throughput as described
in Section 3. In this hypothetical implementation, called
Cell+, each SPE would still have the single DP datapath,
but would be able to dispatch one DP SIMD instruction
every other cycle instead of one every 7 cycles. The Cell+
design would not stall issuing other instructions and would
achieve 3.5x the DP throughput of the Cell (51.2 Gflop/s) by
fully utilizing the existing DP datapath; however, it would
maintain the same SP throughput, frequency, bandwidth,
and power as the Cell.
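As a quick consistency check (our arithmetic, based on the issue rates stated above and one 2-wide DP SIMD FMA per instruction):

\mathrm{Cell\ DP\ peak} = 8~\mathrm{SPEs} \times \frac{2 \times 2~\mathrm{flops}}{7~\mathrm{cycles}} \times 3.2~\mathrm{GHz} \approx 14.6~\mathrm{Gflop/s}, \qquad \mathrm{Cell{+}\ DP\ peak} = 8 \times \frac{2 \times 2}{2} \times 3.2 = 51.2~\mathrm{Gflop/s},

a 3.5x improvement, consistent with the figure quoted above.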
DENSE MATRIX-MATRIX MULTIPLY
We begin by examining the performance of dense matrix-matrix
multiplication, or GEMM. This kernel is characterized
by high computational intensity and regular memory
access patterns, making it extremely well suited for the
Cell architecture. We explored two storage formats: column
major and block data layout [26] (BDL). BDL is a two-stage
addressing scheme (block row/column, element sub
row/column).
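A minimal sketch of one common realization of such a two-stage address computation; the function name and the assumption that the block size b divides N evenly are ours, not details from the BDL reference.

#include <stddef.h>

/* Map element (i, j) of an N x N matrix stored in block data layout with
   b x b blocks: blocks are laid out row-major, and elements are row-major
   within each block. */
size_t bdl_index(size_t i, size_t j, size_t N, size_t b)
{
    size_t bi = i / b, bj = j / b;   /* stage 1: block row/column        */
    size_t oi = i % b, oj = j % b;   /* stage 2: element sub row/column  */
    size_t blocks_per_row = N / b;   /* assumes b divides N              */
    return ((bi * blocks_per_row + bj) * b + oi) * b + oj;
}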
6.1
Algorithm Considerations
For GEMM, we adopt what is in essence an outer loop
parallelization approach. Each matrix is broken into 8n x
n element tiles designed to fit into the memory available on
the Cell chip, which are in turn split into eight n x n element
tiles that can fit into the 8 SPE local stores. For the column
layout, the matrix will be accessed via a number of short
DMAs equal to the dimension of the tile -- e.g. 64 DMAs
of length 64. BDL, on the other hand, will require a single
long DMA of length 16KB.
Since the local store is only 256KB, and must contain
both the program and stack, program data in the local
store is limited to about 56K words. The tiles, when double
buffered, require 6n^2 words of local store (one tile from each
matrix) -- thus making 96^2 the maximum square tile size in
SP. Additionally, in column layout, there is added pressure
on the maximum tile size for large matrices, as each column
within a tile will be on a different page resulting in TLB
misses. The minimum size of a tile is determined by the
FLOPs to word ratio of the processor. In the middle, there
is a tile-size "sweet spot" that delivers peak performance.
The loop order was therefore chosen to minimize the average
number of pages touched per phase for a column major
storage format. The BDL approach, as TLB misses are of
little concern, allows us to structure the loop order to minimize
memory bandwidth requirements.
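The 96^2 bound quoted above follows from a simple budget calculation (our arithmetic, treating the ~56K-word figure as the available data space):

6n^2 \le 56{,}000 \;\Rightarrow\; n \le \sqrt{56{,}000/6} \approx 96.6,

so 96^2 is the largest square SP tile for which all three matrices can be double buffered in the local store.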
A possible alternate approach is to adapt Cannon's algorithm
[3] for parallel machines. Although this strategy could
reduce the DRAM bandwidth requirements by transferring
blocks via the EIB, for a column major layout, it could significantly
increase the number of pages touched. This will
be the subject of future work. Note that for small matrix
sizes, it is most likely advantageous to choose an algorithm
that minimizes the number of DMAs. One such solution
would be to broadcast a copy of the first matrix to all SPEs.
6.2
Single Precision GEMM Results
The Cell performance of GEMM based on our performance
model (referred to as Cell_pm) for large matrices is
presented in Table 2. SGEMM simulation data show that
32^2 blocks do not achieve sufficient computational intensity
to fully utilize the processor.
                Cell_pm+   Cell_pm   X1E    AMD64   IA64
DP (Gflop/s)    51.1       14.6      16.9   4.0     5.4
SP (Gflop/s)    --         204.7     29.5   7.8     3.0

Table 2: GEMM performance (in Gflop/s) for large square matrices on Cell, X1E, Opteron, and Itanium2. Only the best performing numbers are shown. Cell data based on our performance model is referred to as Cell_pm.

The choice of loop order
and the resulting increase in memory traffic prevents column
major 64^2 blocks from achieving a large fraction of peak
(over 90%) for large matrices. Only 96^2 block sizes provide
enough computational intensity to overcome the additional
block loads and stores, and thus achieve near-peak performance
-- over 200 Gflop/s. For BDL, however, 64^2 blocks
effectively achieve peak performance. Whereas we assume a
1000 cycle DMA startup latency in our simulations, if the
DMA latency were only 100 cycles, then the 64^2 column-
major performance would reach parity with BDL.
At 3.2GHz, each SPE requires about 3W [8]. Thus with
a nearly idle PPC and L2, Cell_pm achieves over 200 Gflop/s
for approximately 40W of power -- nearly 5 Gflop/s/Watt.
Clearly, for well-suited applications, Cell is extremely power
efficient.
6.3
Double Precision GEMM Results
A similar set of strategies and simulations were performed
for DGEMM. Although the time to load a DP 64^2 block is
twice that of the SP version, the time required to compute
on a 64^2 DP block is about 14x as long as the SP counterpart
(due to the limitations of the DP issue logic). Thus it is far
easier for DP to reach its peak performance -- a mere 14.6
Gflop/s. However, when using our proposed Cell+ hardware
variant, DGEMM performance jumps to an impressive 51
Gflop/s.
6.4
Performance Comparison
Table 2 shows a performance comparison of GEMM between
Cell_pm and the set of modern processors evaluated in
our study. Note the impressive performance characteristics
of the Cell processor, achieving 69x, 26x, and 7x speedups
for SGEMM compared with the Itanium2, Opteron, and
X1E respectively. For DGEMM, the default Cell processor
is 2.7x and 3.7x faster than the Itanium2 and Opteron. In
terms of power, the Cell performance is even more impressive
, achieving over 200x the efficiency of the Itanium2 for
SGEMM!
Our Cell_pm+ exploration architecture is capable, for large
tiles, of fully exploiting the DP pipeline and achieving over
50 Gflop/s. In DP, the Cell+ architecture would be nearly
10 times faster than the Itanium2 and nearly 30 times more
power efficient. Additionally, traditional micros (Itanium2,
Opteron, etc) in multi-core configurations would require either
enormous power saving innovations or dramatic reductions
in performance, and thus would show even poorer performance/power
compared with the Cell technology. Compared
to the X1E, Cell+ would be 3 times as fast and 9
times more power efficient.
The decoupling of main memory data access from the
computational kernel guarantees constant memory access
latency since there will be no cache misses, and all TLB accesses
are resolved in the communication phase. Matrix multiplication
is perhaps the best benchmark to demonstrate
Cell's computational capabilities, as it achieves high performance
by buffering large blocks on chip before computing
on them.
6.5
Model Validation
IBM recently released their in-house performance evaluation
of their prototype hardware [4]. On SGEMM, they
achieve about 201 Gflop/s, which is within 2% of our predicted
performance.
SPARSE MATRIX VECTOR MULTIPLY
At first glance, SpMV would seem to be a poor application
choice for the Cell since the SPEs have neither caches
nor word-granularity gather/scatter support. Furthermore,
SpMV has a relatively low O(1) computational intensity.
However, these considerations are perhaps less important
than the Cell's low functional unit and local store latency
(<2ns), the task parallelism afforded by the SPEs, the eight
independent load store units, and the ability to stream nonzeros
via DMAs.
7.1
Algorithmic Considerations
Two storage formats are presented in this paper: Compressed
Sparse Row (CSR) and Blocked Compressed Sparse
Row (BCSR). Only square BCSR was explored, and only
2x2 BCSR numbers will be presented here.
Future Cell
SpMV work will examine the entire BCSR space. Because
of the quadword nature of the SPEs, all rows within a CSR
tile are padded to a multiple of 4. This greatly simplifies
the programming model at the expense of increasing memory
traffic. Note that this is very different than 1x4 BCSR.
To perform a stanza gather operation the Cell utilizes the
MFC "get list" command, where a list of addresses/lengths
is created in local store. The MFC then gathers these stanzas
from the global store and packs them into the local store.
It is possible to make every stanza a single quadword; however,
without an accurate performance model of the MFC
"get list" command, one must resort to tiling to provide
a reasonable estimate for performance. For simplicity all
benchmarks were run using square tiles. The data structure
required to store the entire matrix is a 2D array of tiles,
where each block stores its nonzeros and row pointers as if
it were an entire matrix. We chose not to buffer the source
and destination vector tiles as this would result in a smaller
block size. These tradeoffs will be examined in future work.
Collectively the blocks are chosen to be no larger than ~36K
words in SP (half that in DP).
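For concreteness, a minimal sketch of such a tile container (field names and types are ours; the actual implementation is not published in this form):

#include <stdint.h>

/* One tile of the blocked matrix: it stores its nonzeros and local row
   pointers as if it were a stand-alone matrix. Rows are padded to a
   multiple of 4 values to match the SPEs' quadword granularity. */
typedef struct {
    uint32_t rows, cols;     /* tile dimensions (square tiles in practice)      */
    uint32_t nnz;            /* padded nonzero count                            */
    uint32_t *rowptr;        /* rows+1 entries: start of each row in val/colidx */
    uint16_t *colidx;        /* column index within the tile                    */
    float    *val;           /* nonzero values (SP; double for the DP kernel)   */
} spmv_tile;

/* The full matrix is a 2D array of tiles; tile (I, J) covers rows
   [I*TILE, (I+1)*TILE) and columns [J*TILE, (J+1)*TILE) of the matrix. */
typedef struct {
    uint32_t tile_rows, tile_cols;   /* number of tiles in each dimension       */
    spmv_tile *tiles;                /* tile_rows * tile_cols tiles, row-major  */
} spmv_matrix;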
The inner loop of CSR SpMV either requires significant
software pipelining, hefty loop unrolling, or an approach al-gorithmically
analogous to a segmented scan [1]. As there
are no conditional stores in the SPE assembly language, we
chose to partially implement a segmented scan, where the
gather operations are decoupled from the dot products. This
decoupled gather operation can be unrolled and software
pipelined, thereby completing in close to three cycles per
element (the ISA is not particularly gather friendly). It is
important to note that since the local store is not a write
back cache, it is possible to overwrite its contents without
fear of consuming DRAM bandwidth or corrupting the actual
arrays.
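The decoupling can be illustrated with a plain scalar sketch (ours), rather than the unrolled, software-pipelined SPE intrinsics the real kernel requires:

#include <stdint.h>

/* SpMV inner loop for one padded CSR row, split into two phases:
   (1) gather the needed source-vector elements into a contiguous buffer,
   (2) run a branch-free dot product over that buffer.
   On the SPE, phase (1) is what gets unrolled and software pipelined. */
static float row_dot(const float *val, const uint16_t *colidx,
                     int nnz_padded, const float *x, float *gathered)
{
    for (int j = 0; j < nnz_padded; j++)      /* phase 1: decoupled gather */
        gathered[j] = x[colidx[j]];

    float sum = 0.0f;
    for (int j = 0; j < nnz_padded; j++)      /* phase 2: dot product      */
        sum += val[j] * gathered[j];
    return sum;
}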
#   Name            N      NNZ    Comments
15  Vavasis         40K    1.6M   2D PDE problem
17  FEM             22K    1M     Fluid mechanics problem
18  Memory          17K    125K   Motorola memory circuit
36  CFD             75K    325K   Navier-Stokes, viscous flow
06  FEM Crystal     14K    490K   FEM stiffness matrix
09  3D Tube         45K    1.6M   3D pressure tube
25  Portfolio       74K    335K   Financial portfolio
27  NASA            36K    180K   PWT NASA matrix
28  Vibroacoustic   12K    177K   Flexible box structure
40  Linear Prog.    31K    1M     AA^T
--  7pt 64          256K   1.8M   64^3 7pt stencil

Table 3: Suite of matrices used to evaluate SpMV performance. Matrix numbers as defined in the SPARSITY suite are shown in the first column.

As the nonzeros are stored contiguously in arrays, it is
straightforward to stream them in via DMA. Here, unlike
the source and destination vectors, it is essential to double
buffer in order to maximize the SPEs computational
throughput. Using buffers of 16KB for SP allows for 2K
values and 2K indices for CSR, and 1K tiles for 2x2 BCSR.
Note that for each phase -- loading nonzeros and indices
-- there is the omnipresent 1000 cycle DMA latency overhead
in addition to the startup and finalize penalties (as in
traditional pipelining).
To partition the work among the SPEs, we implemented
a cooperative blocking model. By forcing all SPEs to work
on the same block, it is possible to broadcast the blocked
source vector and row pointers to minimize memory traffic.
One approach, referred to as PrivateY, was to divide work
among SPEs within a block by distributing the nonzeros
as evenly as possible. This strategy necessitates that each
SPE contains a private copy of the destination vector, and
requires an inter-SPE reduction at the end of each blocked
row.
The alternate method, referred to as PartitionedY,
partitions the destination vector evenly among the SPEs.
However there is no longer any guarantee that the SPEs'
computations will remain balanced, causing the execution
time of the entire tile to be limited by the most heavily
loaded SPE. Thus for load balanced blocks, the PartitionedY
approach is generally advantageous; however, for matrices
exhibiting irregular (uneven) nonzero patterns, we expect
higher performance using PrivateY.
Note that there is a potential performance benefit by writing
a kernel specifically optimized for symmetric matrices.
For these types of matrices, the number of operations can
effectively double relative to the memory traffic. However,
the algorithm must block two tiles at a time -- thus the symmetric
matrix kernel divides memory allocated for blocking
the vector evenly among the two submatrices, and performs
a dot product and SAXPY for each row in the lower triangle.
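A scalar sketch (ours) of the dot-product-plus-SAXPY strategy for a lower-triangle CSR matrix; it omits the two-tile blocking of the vector described above:

/* Symmetric SpMV, y = A*x, storing only the lower triangle (including the
   diagonal) in CSR. Each stored row i contributes a dot product to y[i]
   and a SAXPY into y for the mirrored upper-triangle entries. */
void spmv_symmetric(int n, const int *rowptr, const int *colidx,
                    const double *val, const double *x, double *y)
{
    for (int i = 0; i < n; i++) y[i] = 0.0;

    for (int i = 0; i < n; i++) {
        double sum = 0.0;
        for (int k = rowptr[i]; k < rowptr[i + 1]; k++) {
            int j = colidx[k];
            sum += val[k] * x[j];        /* dot product with stored row i */
            if (j != i)
                y[j] += val[k] * x[i];   /* SAXPY for the mirrored entry  */
        }
        y[i] += sum;
    }
}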
7.2
Evaluation Matrices
In order to effectively evaluate SpMV performance, we examine
a synthetic stencil matrix, as well as ten real matrices
used in numerical calculations from the BeBop SPARSITY
suite [11, 31] (four nonsymmetric and six symmetric). Table
3 presents an overview of the evaluated matrices.
7.3
Single Precision SpMV Results
Single and double precision tuned SpMV results for the
SPARSITY matrices are shown in Table 4. Surprisingly,
given Cell's inherent SpMV limitations, the SPARSITY
nonsymmetric matrices average over 4 Gflop/s, while
the symmetric matrices average nearly 8 Gflop/s. Unfortunately
, many of these matrices are so small that they utilize
only a fraction of the default tile size. Unlike the synthetic
matrices, the real matrices, which contain dense sub-blocks,
can exploit BCSR without unnecessarily wasting memory
bandwidth on zeros. As memory traffic is key, storing BCSR
blocks in a compressed format (the zeros are neither stored
nor loaded) would allow for significantly higher performance
if there is sufficient support within the ISA to either decompress
these blocks on the fly, or compute on compressed
blocks. This is an area of future research.
Overall results show that the PrivateY approach is generally
a superior partitioning strategy compared with PartitionedY
. In most cases, the matrices are sufficiently unbalanced
that the uniform partitioning of the nonzeros coupled
with a reduction requires less time than performing a
load-imbalanced calculation.
When using the PartitionedY approach, the symmetric kernel
is extremely unbalanced for blocks along the diagonal.
Thus, for matrices approximately the size of a single block,
the imbalance between SPEs can severely impair the performance
-- even if the matrix is uniform. In fact, symmetric
optimizations show only about 50% performance improvement
over running the nonsymmetric kernel on the symmetric
matrices.
Once again DMA latency plays a relatively small role in
this algorithm.
In fact, reducing the DMA latency by a
factor of ten results in only a 5% increase in performance.
This is actually a good result. It means that the memory
bandwidth is highly utilized and the majority of bus cycles
are used for transferring data rather than stalls.
On the whole, clock frequency also plays a small part in
the overall performance.
Solely increasing the clock frequency
by a factor of 2 (to 6.4GHz) provides only a 1%
increase in performance on the SPARSITY nonsymmetric
matrix suite. Similarly, cutting the frequency in half (to
1.6GHz) results in only a 20% decrease in performance. Simply
put, for the common case, more time is used in transferring
nonzeros and the vectors rather than computing on
them.
7.4
Double Precision SpMV Results
Results from our performance estimator show that single
precision SPMV is almost twice as fast as double precision,
even though the nonzero memory traffic only increases by
50%. This discrepancy is due to the reduction in the number
of values contained in a tile, where twice as many blocked
rows are present. For example, when using 16K^2 SP tiles on
a 128K^2 matrix, the 512KB source vector must be loaded 8
times. However, in DP, the tiles are only 8K^2 -- causing the
1MB source vector to be loaded 16 times, and thus resulting
in a much higher volume of memory traffic. Future work
will investigate caching mega blocks across SPEs to reduce
total memory traffic.
7.5
Performance Comparison
Table 4 compares Cell's estimated performance (the best
partitioning and blocking combination) for SpMV with results
from the Itanium2 and Opteron using SPARSITY,
a highly tuned sparse matrix numerical library, on the nonsymmetric
(top) and symmetric (bottom) matrix suites. X1E results
were gathered using a high-performance X1-specific SpMV
implementation [6].

SPARSITY nonsymmetric matrix suite
                Double Precision (Gflop/s)                        Single Precision (Gflop/s)
Matrix          Cell_FSS  Cell_pm+  Cell_pm  X1E    AMD64  IA64   Cell_pm  AMD64  IA64
Vavasis         3.79      3.17      3.06     0.84   0.44   0.46   6.06     0.70   0.49
FEM             4.28      3.44      3.39     1.55   0.42   0.49   5.14     0.59   0.62
Mem             2.21      1.69      1.46     0.57   0.30   0.27   2.79     0.45   0.31
CFD             1.87      1.52      1.44     1.61   0.28   0.21   2.33     0.38   0.23
Average         3.04      2.46      2.34     1.14   0.36   0.36   4.08     0.53   0.41

SPARSITY symmetric matrix suite
Matrix          Cell_FSS  Cell_pm+  Cell_pm  X1E    AMD64  IA64   Cell_pm  AMD64  IA64
FEM             --        6.79      6.32     3.12   0.93   1.14   12.37    1.46   1.37
3D Tube         --        6.48      6.06     2.62   0.86   1.16   11.66    1.36   1.31
Portfolio       --        1.83      1.60     2.99   0.37   0.24   3.26     0.42   0.32
NASA            --        1.92      1.66     3.30   0.42   0.32   3.17     0.46   0.40
Vibro           --        3.90      3.47     2.54   0.57   0.56   7.08     0.56   0.64
LP              --        5.17      4.87     1.27   0.47   0.63   8.54     0.55   0.92
Average         --        4.35      4.00     2.64   0.60   0.67   7.68     0.80   0.83

Synthetic matrices
Matrix          Cell_FSS  Cell_pm+  Cell_pm  X1E    AMD64  IA64   Cell_pm  AMD64  IA64
7pt 64 Stencil  2.20      1.44      1.29     --     0.30   0.29   2.61     0.51   0.32

Table 4: SpMV performance in single and double precision on the SPARSITY (top) nonsymmetric and (bottom) symmetric matrix suites, plus a synthetic stencil matrix. In each section, the first six data columns are double precision and the last three single precision. Note: Cell_FSS represents the actual implementation running on the cycle accurate full system simulator.
Considering that the Itanium2 and Opteron each have a
6.4GB/s bus compared to the Cell's 25.6GB/s DRAM bandwidth
-- one may expect that a memory bound application
such as SpMV would perform only four times better on the
Cell. Nonetheless, on average, Cell_pm is more than 6x faster
in DP and 10x faster in SP. This is because in order to
achieve maximum performance, the Itanium2 must rely on
the BCSR storage format, and thus waste memory bandwidth
loading unnecessary zeros. However, the Cell's high
FLOP to byte ratio ensures that the regularity of BCSR is
unnecessary, allowing it to avoid loading many of the superfluous
zeros. For example, in matrix #17, Cell uses more
than 50% of its bandwidth loading just the DP nonzero values
, while the Itanium2 utilizes only 33% of its bandwidth.
The rest of Itanium2's bandwidth is used for zeros and meta
data. It should be noted that whereas simulations on Cell involve
a cold start to the local store, the Itanium2 runs have the
additional advantage of a warm cache.
Cell's use of on-chip memory as a buffer is advantageous in
both power and area compared with a traditional cache. In
fact, Cell is 20 times more power efficient than the Itanium2
and 15 times more efficient than the Opteron for SpMV. For
a memory bound application such as this, multicore commodity
processors will see little performance improvement
unless they also scale memory bandwidth.
Comparing results with an X1E MSP is far more difficult
. For unsymmetric matrices, the Cell_pm performance on
average is twice that of the X1E. For symmetric matrices,
Cell_pm performs somewhere between half and triple the performance
of the X1E, but on average is 50% faster. The fact
that the X1E consumes about three times the power of Cell
guarantees that Cell, in double precision, is at least as power efficient
as the X1E.
7.6
Model Validation
Some might claim that matrix-matrix multiplication performance
can be easily predicted. Most, however, would
agree that SpMV is very difficult to predict. As seen in Table
4, we tested our implementation of the DP SpMV kernel
on the cycle accurate IBM full system simulator, referred
to as Cell_FSS. The actual implementation makes dynamic
blocking and partitioning decisions at run time, based on
the lessons learned while exploring optimization strategies
for the performance model; however, the current version
does not include the BCSR approach, and only pads rows
to the nearest even number.
The cycle accurate simulations with a superior implementation
proved to be about 30% faster than the initial performance
estimate, and average impressive results of more
than 3 Gflop/s for nonsymmetric matrices. The 30% discrepancy
disappears when static partitioning and blocking
strategies are used. We can clearly see how the actual implementation's
run time search for structure boosted performance
of the heat equation from about 1.3 Gflop/s to 2.2 Gflop/s --
achieving a 7x speedup over the Itanium2. Cell_FSS, for double
precision nonsymmetric matrices, is more than 8 times
faster than the Itanium2, and 27 times more power efficient.
These results confirm our performance model's predictive
abilities on complex kernels, and clearly demonstrate Cell's
performance superiority when compared with leading microarchitectural
approaches.

X_next[i,j,k,t+1] = X[i-1,j,k,t] + X[i+1,j,k,t] +
                    X[i,j-1,k,t] + X[i,j+1,k,t] +
                    X[i,j,k-1,t] + X[i,j,k+1,t] +
                    X[i,j,k,t]

X[i,j,k,t+1] = (dt^2/dx^2)(X[i-1,j,k,t] + X[i+1,j,k,t]) +
               (dt^2/dy^2)(X[i,j-1,k,t] + X[i,j+1,k,t]) +
               (dt^2/dz^2)(X[i,j,k-1,t] + X[i,j,k+1,t]) +
               X[i,j,k,t] - X[i,j,k,t-1]

Figure 2: Stencil kernels used in evaluation. Top: the Chombo heattut equation, which requires only the previous time step. Bottom: the Cactus WaveToy equation, which requires the two previous time steps.
STENCIL COMPUTATIONS
Stencil-based computations on regular grids are at the
core of a wide range of important scientific applications. In
these applications, each point in a multidimensional grid is
updated with contributions from a subset of its neighbors.
The numerical operations are then used to build solvers that
range from simple Jacobi iterations to complex multigrid
and block structured adaptive methods.
In this work we examine two flavors of stencil computations
derived from the numerical kernels of the Chombo [5]
and Cactus [2] toolkits. Chombo is a framework for computing
solutions of partial differential equations (PDEs) using
finite difference methods on adaptively refined meshes. Here
we examine a stencil computation based on Chombo's demo
application, heattut, which solves a simple heat equation
without adaptivity. Cactus is modular open source framework
for computational science, successfully used in many
areas of astrophysics. Our work examines the stencil kernel
of the Cactus demo, WaveToy, which solves a 3D hyperbolic
PDE by finite differencing. The heattut and WaveToy
equations are shown in Figure 2.
Notice that both kernels solve 7 point stencils in 3D for
each point. However, the heattut equation only utilizes values
from the previous time step, while WaveToy requires values
from the two previous time steps. Additionally, WaveToy
has a higher computational intensity, and can more
readily exploit the FMA pipeline.
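For reference, a plain scalar C version of the heattut-style update in Figure 2 (our reconstruction, not the Cell implementation, and without Chombo's actual scaling constants or boundary handling):

/* One Jacobi-style sweep of the 7-point stencil on an nx x ny x nz grid,
   reading time level t (src) and writing time level t+1 (dst).
   Boundary points are left untouched here for brevity. */
#define IDX(i, j, k, nx, ny) ((size_t)(k) * (nx) * (ny) + (size_t)(j) * (nx) + (i))

void heat_step(const double *src, double *dst, int nx, int ny, int nz)
{
    for (int k = 1; k < nz - 1; k++)
        for (int j = 1; j < ny - 1; j++)
            for (int i = 1; i < nx - 1; i++)
                dst[IDX(i, j, k, nx, ny)] =
                    src[IDX(i - 1, j, k, nx, ny)] + src[IDX(i + 1, j, k, nx, ny)] +
                    src[IDX(i, j - 1, k, nx, ny)] + src[IDX(i, j + 1, k, nx, ny)] +
                    src[IDX(i, j, k - 1, nx, ny)] + src[IDX(i, j, k + 1, nx, ny)] +
                    src[IDX(i, j, k, nx, ny)];
}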
8.1
Algorithmic Considerations
The algorithm used on Cell is virtually identical to that
used on traditional architectures except that the ISA forces
main memory loads and stores to be explicit, rather than
caused by cache misses and evictions. The basic algorithmic
approach to update the 3D cubic data array is to sweep
across the domain, updating one plane at a time. Since a
stencil requires both the next and previous plane, a minimum
of 4 planes must be present in the local stores: (z-1,t),
(z,t), (z+1,t), and (z,t+1). Additionally, bus utilization can
be maximized by double buffering the previous output plane
(z-1,t+1) with the next input plane (z+2,t).

Figure 3: Flow diagram for the heat equation. Left: queues implemented within each SPE perform only one time step. Right: the time-skewed version requires an additional circular queue to hold intermediate results.
In order to parallelize across SPEs, each plane of the 3D
domain is partitioned into eight overlapping blocks. Due
to the finite size of the local store memory, a straightforward
stencil calculation is limited to planes of 256^2 elements
plus ghost regions. Thus each SPE updates the core 256x32
points from a 258x34 slab (as slabs also contain ghost regions
).
To improve performance of stencil computations on cache-based
architectures, previous research has shown multiple
time steps can be combined to increase performance [13,
21, 32].
This concept of time skewing can also be effectively
leveraged in our Cell implementation.
By keeping
multiple planes from multiple time steps in the SPE simultaneously
, it is possible to double or triple the number of
stencils performed with almost no increase in memory traffic
; thus increasing computational intensity and improving
overall performance. Figure 3 details a flow diagram for the
heat equation, showing both the simple and time skewed
implementations.
Note that the neighbor communication required by stencils
is not well suited for the aligned quadword load requirements
of the SPE ISA -- i.e., unaligned loads must be emulated
with permute instructions. In fact, for SP stencils with
extensive unrolling, after memory bandwidth, the permute
datapath is the limiting factor in performance -- not the
FPU. This lack of support for unaligned accesses highlights
a potential bottleneck of the Cell architecture; however we
can partially obviate this problem for the stencil kernel via
data padding.
8.2
Stencil Kernel Results
The performance estimation for the heattut and WaveToy
stencil kernels is shown in Table 5. Results show that
as the number of time steps increases, a corresponding decrease
in the grid size is required due to the limited memory
footprint of the local store. In SP, the heat equation on the
Cell_pm is effectively computationally bound with two steps
of time skewing, resulting in over 41 Gflop/s. More specifically
, the permute unit becomes fully utilized as discussed
in Section 8.1.

Double Precision (Gflop/s)
Stencil   Cell_FSS  Cell_pm+ (2 step)  Cell_pm+  Cell_pm  X1E   AMD64  IA64
Heat      7.25      21.1               10.6      8.2      3.91  0.57   1.19
WaveToy   9.68      16.7               11.1      10.8     4.99  0.68   2.05

Single Precision (Gflop/s)
Stencil   Cell_FSS (4 step)  Cell_pm (2 step)  Cell_pm  X1E   AMD64  IA64
Heat      65.8               41.9              21.2     3.26  1.07   1.97
WaveToy   --                 33.4              22.3     5.13  1.53   3.11

Table 5: Performance of the heat equation and WaveToy stencils. X1E and Itanium2 experiments use 256^3 grids; the Opteron uses 128^3. Cell uses the largest grid that fits within the local stores. The (n step) columns denote time-skewed versions in which n time steps are computed.

In DP, however, the heat equation is truly
computationally bound for only a single time step, achieving
8.2 Gflop/s. Analysis also shows that in the Cell+ approach,
the heat equation is memory bound when using a single time
step, attaining 10.6 Gflop/s; with time skewing, the performance
of Cell+ DP jumps to over 21 Gflop/s.
We believe the temporal recurrence in the CACTUS WaveToy
example will allow more time skewing in single precision
at the expense of far more complicated code, and will be the
subject of future investigation.
8.3
Performance Comparison
Table 5 presents a performance comparison of the stencil
computations across our evaluated set of leading processors.
Note that stencil performance has been optimized for the
cache-based platforms as described in [15].
In single precision, for this memory bound computation,
even without time skewing, Cell_pm achieves 6.5x, 11x, and
20x speedup compared with the X1E, the Itanium2 and the
Opteron respectively.
Recall that the Cell has only four
times the memory bandwidth of the scalar machines, and 75% of
the bandwidth of the X1E, indicating that Cell's potential to
perform this class of computations in a much more efficient
manner is due to the advantages of software-controlled memory
for algorithms exhibiting predictable memory accesses.
In double precision, with 1/14th the floating point throughput,
Cell_pm achieves a 2x, 7x, and 14x speedup compared to
the X1E, the Itanium2, and the Opteron for the heat equation
-- a truly impressive result. Additionally, unlike the
Opteron and Itanium2, simple time skewing has the potential
to at least double the performance in either SP (either
version of Cell) or in DP on the Cell+ variant.
Finally, recall that in Section 7 we examined Cell SpMV
performance using 7-point stencil matrices.
We can now
compare those results with the structured grid approach presented
here, as the numerical computations are equivalent
in both cases. Results show that for two time step calculations
, the single precision structured grid approach achieves
a 23x advantage compared with the sparse matrix method.
This impressive speedup is attained through the regularity of
memory accesses, reduction of memory traffic (constants are
encoded in the equation rather than the matrix), the ability
to time skew (increased computational intensity), and the fact
that stencils on a structured grid don't require multiplications by
1.0 as a sparse matrix would. For double precision, the
stencil algorithm advantage is diminished to approximately
12x, due mainly to the lack of time skewing.
8.4
Model Validation
As with SpMV, we implemented an actual double precision
kernel on the full system simulator, with Cell_FSS results
shown in Table 5. At first, we were surprised that measured
performance fell short of our prediction by 13%. However,
upon closer examination it was discovered that the actual
Cell implementation prohibits dual issuing of DP instructions
with loads or permutes, even though it allows SP instructions
to be dual issued with loads or permutes. Thus for kernels with
streaming behavior, it is realistic to assume that one double
precision SIMD instruction can be executed every 8 cycles
-- instead of every 7 as we had predicted previously. This
discrepancy results in a 14% architectural performance reduction
, which corresponds very well to the 13% difference
observed in Table 5 between the predicted (Cell_pm)
and simulated (Cell_FSS) DP data.
Nonetheless, the actual DP Cell_FSS implementation of
our evaluated stencil kernel is about 13x faster, and nearly
30x more power efficient than the Opteron. We also developed
a SP version of the heat equation that allowed four
time-skewed stencil steps.
(Our original performance estimation
assumed one or two time steps.)
Results show
spectacular SP Cell_FSS performance of nearly 66 Gflop/s
-- more than 60x faster and 136x more power efficient compared
with the Opteron, even though Cell has only four times the
bandwidth and 20 times the single precision throughput.
FAST FOURIER TRANSFORMS
The FFT presents us with an interesting challenge: its
computational intensity is much less than matrix-matrix
multiplication and standard algorithms require a non-trivial
amount of data movement. Extensive work has been performed
on optimizing this kernel for both vector [24] and
cache-based [7] machines. In addition, implementations for
varying precisions appear in many embedded devices using
both general and special purpose hardware. In this section
we evaluate the implementation of a standard FFT algorithm
on the Cell processor.
9.1
Methods
We examine both the 1D FFT cooperatively executed
across the SPEs, and a 2D FFT whose 1D FFTs are each
run on a single SPE. In all cases the data appears in a single
array of complex numbers. Internally (within the local
stores) the data is unpacked into separate arrays, and a table
lookup is used for the roots of unity so that no runtime computation
of roots is required. As such, our results include
the time needed to load this table. Additionally, all results
are presented to the FFT algorithm and returned in natural
order (i.e. a bit reversal was required to unwind the permutation
process in all cases). Note that these requirements
have the potential to severely impact performance.
For simplicity we evaluated a naive FFT algorithm (no
double buffering and with barriers around computational
segments) for the single 1D FFT. The data blocks are distributed
cyclically to SPEs, 3 stages of local work are performed
, the data is transposed (basically the reverse of the
cyclic allocation), and then 9 to 13 stages of local computation
is performed (depending on the FFT size). At that
point the indices of the data on chip are bit-reversed to unwind
the permutation process and the naturally ordered result
copied back into main memory. Once again, we presume
a large DMA initiation overhead of 1000 cycles. However, a
Cell implementation where the DMA initiation overhead is
smaller would allow the possibility of much larger FFT calculations
(including out of core FFTs) using smaller block
transfers, with little or no slowdown using double buffering
to hide the DMA latency.
Before exploring the 2D FFT, we briefly discuss simultaneous
FFTs. For sufficiently small FFTs (<4K points in
SP) it is possible to both double buffer and round robin allocate
a large number of independent FFTs to the 8 SPEs.
Although there is lower computational intensity, the sheer
parallelism and double buffering allow for extremely high
performance (up to 76 Gflop/s).
Simultaneous FFTs form the core of the 2D FFT. In order
to ensure long DMAs, and thus validate our assumptions on
effective memory bandwidth, we adopted an approach that
requires two full element transposes. First, N 1D N-point
FFTs are performed for the rows storing the data back to
DRAM. Second, the data stored in DRAM is transposed
(columns become rows) and stored back to DRAM. Third
the 1D FFTs are performed on the columns, whose elements
are now sequential (because of the transpose). Finally a second
transpose is applied to the data to return it to its original
layout. Instead of performing an N point bit reversal for
every FFT, entire transformed rows (not the elements of the
rows) are stored in bit-reversed order (in effect, bit reversing
the elements of the columns). After the first transpose, a
decimation in frequency FFT is applied to the columns. The
columns are stored back in bit-reversed order -- in doing so,
the row elements are bit reversed. With a final transpose,
the data is stored back to memory in natural order and layout,
in less time than explicit per-FFT bit reversals would require.
9.2
Single Precision FFT Performance
Table 6 presents performance results for the Cell 1D and
2D FFT. For the 1D case, more than half of the total time is
spent just loading and storing points and roots of unity from
DRAM. If completely memory bound, peak performance is
approximately (25.6 GB/s / 8 bytes) * (5N log N / 3N), or
approximately 5.3 log N Gflop/s. This means performance is
limited to 64 Gflop/s for a 4K point SP FFT regardless of
CPU frequency. A clear area for future exploration is hiding
computation within the communication and the minimiza-tion
of the overhead involved with the loading of the roots
of unity.
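Spelled out (our rearrangement of the bound above, assuming 3N complex single-precision words of DRAM traffic and the standard 5N log2 N flop count):

\mathrm{Gflop/s} \;\lesssim\; \frac{25.6~\mathrm{GB/s}}{8~\mathrm{B/word}} \times \frac{5N\log_2 N~\mathrm{flops}}{3N~\mathrm{words}} \;=\; 3.2 \times \tfrac{5}{3}\log_2 N \;\approx\; 5.3\,\log_2 N,

so a 4K-point (log2 N = 12) SP FFT is limited to roughly 64 Gflop/s regardless of clock frequency.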
Unfortunately the two full element transposes, used in
the 2D FFT to guarantee long sequential accesses, consume
nearly 50% of the time. Thus, although 8K simultaneous
4K point FFTs achieve 76 Gflop/s (after optimizing away
the loading of the roots of unity), a 4K^2 2D FFT only reaches
46 Gflop/s -- an impressive figure nonetheless. Without the
bit reversal approach, the performance would have further
dropped to about 40 Gflop/s. The smaller FFTs shown in
the table exhibit even poorer performance.
9.3
Double Precision FFT Performance
When DP is employed, the balance between memory and
computation is changed by a factor of 7.
Double Precision (Gflop/s)
      N     Cell_pm+  Cell_pm  X1E   AMD64  IA64
1D    4K    12.6      5.6      2.92  1.88   3.51
      16K   14.2      6.1      6.13  1.34   1.88
      64K   --        7.56     --    0.90   1.57
2D    1K^2  15.9      6.6      6.99  1.19   0.52
      2K^2  16.5      6.7      7.10  0.19   0.11

Single Precision (Gflop/s)
      N     Cell_pm+  Cell_pm  X1E   AMD64  IA64
1D    4K    --        29.9     3.11  4.24   1.68
      16K   --        37.4     7.48  2.24   1.75
      64K   --        41.8     11.2  1.81   1.48
2D    1K^2  --        35.9     7.59  2.30   0.69
      2K^2  --        40.5     8.27  0.34   0.15

Table 6: Performance of 1D and 2D FFTs in DP (top) and SP (bottom). For large FFTs, Cell is more than 10 times faster in SP than either the Opteron or Itanium2. The Gflop/s number is calculated based on a naive radix-2 FFT algorithm; for 2D FFTs the naive algorithm computes 2N N-point FFTs.

This pushes a
slightly memory bound application strongly into the computationally
bound domain. The SP simultaneous FFT is
10 times faster than the DP version. On the upside, the
transposes required in the 2D FFT are now less than 20% of
the total time, compared with 50% for the SP case. Cell_pm+
finds a middle ground between the 4x reduction in computational
throughput and the 2x increase in memory traffic --
increasing performance by almost 2.5x compared with the
Cell for all problem sizes.
9.4
Performance Comparison
The peak Cell FFT performance is compared to a number
of other processors in the Table 6. These results are conservative
given the naive 1D FFT implementation we used
on Cell whereas the other systems in the comparison used
highly tuned FFTW [7] or vendor-tuned FFT implementations
[25]. Nonetheless, in DP, Cell
pm
is at least 12x faster
than the Itanium2 for a 1D FFT, and Cell
pm
+
could be as
much as 30x faster for a large 2D FFT. Cell+ more than
doubles the DP FFT performance of Cell for all problem
sizes. Cell performance is nearly at parity with the X1E in
double precision; however, we believe considerable headroom
remains for more sophisticated Cell FFT implementations.
In single precision, Cell is unparalleled.
Note that FFT performance on Cell improves as the number
of points increases, so long as the points fit within the
local store. In comparison, the performance on cache-based
machines typically reaches its peak at a problem size that is far
smaller than the on-chip cache size, and then drops precipitously
once the associativity of the cache is exhausted and
cache lines start getting evicted due to aliasing. Elimination
of cache evictions requires extensive algorithmic changes for
the power-of-two problem sizes required by the FFT algorithm,
but such evictions will not occur in Cell's software-managed
local store. Furthermore, we believe that even for
problems that are larger than the local store, 1D FFTs will
continue to scale much better on Cell than typical cache-based
superscalar processors with set-associative caches, since the
local store provides all of the benefits of a fully associative cache.
The FFT performance clearly underscores the advantages
of a software-controlled three-level memory architecture over
conventional cache-based architectures.
CONCLUSIONS AND FUTURE WORK
The Cell processor offers an innovative architectural approach
that will be produced in large enough volumes to be
cost-competitive with commodity CPUs. This work presents
the broadest quantitative study of Cell's performance on scientific
kernels and directly compares its performance to tuned
kernels running on leading superscalar (Opteron), VLIW
(Itanium2), and vector (X1E) architectures. We developed
an analytic framework to predict Cell performance on dense
and sparse matrix operations, stencil computations, and 1D
and 2D FFTs. Using this approach allowed us to explore
numerous algorithmic approaches without the effort of implementing
each variation. We believe this analytical model
is especially important given that the relatively immature software
environment currently makes Cell time-consuming to
program; the model proves to be quite accurate, because
the programmer has explicit control over parallelism and
features of the memory system.
Furthermore, we propose Cell+, a modest architectural
variant to the Cell architecture designed to improve DP behavior
. Overall results demonstrate the tremendous potential
of the Cell architecture for scientific computations in
terms of both raw DP and SP performance and power efficiency
. In addition, we show that Cell+ significantly outperforms
Cell for most of our evaluated DP kernels, while
requiring minimal microarchitectural modifications to the
existing design.
Analysis shows that Cell's three level software-controlled
memory architecture, which completely decouples main memory
load/store from computation, provides several advantages
over mainstream cache-based architectures. First, kernel
performance can be extremely predictable as the load
time from local store is constant. Second, long block transfers
can achieve a much higher percentage of memory bandwidth
than individual loads in much the same way a hardware
stream prefetch engine, once engaged, can fully consume
memory bandwidth. Finally, for predictable memory
access patterns, communication and computation can be
overlapped more effectively than conventional cache-based
approaches. Increasing the size of the local store or reducing
the DMA startup overhead on future Cell implementations
may further enhance the scheduling efficiency by enabling
more effective overlap of communication and computation.
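To illustrate the overlap described above, the sketch below shows the standard double-buffering idiom that a software-managed local store enables. It is a minimal example written for this discussion, not code from the study: dma_get()/dma_wait() are hypothetical stand-ins for the actual MFC DMA and tag-group-wait primitives, and compute() is an arbitrary per-block kernel.

/* Double-buffered streaming: fetch block i+1 while computing on block i.
 * dma_get()/dma_wait() are assumed helpers standing in for the real
 * MFC DMA primitives; compute() is an arbitrary per-block kernel.      */
#include <stddef.h>

#define BLOCK 4096                          /* SP elements per DMA block */

extern void dma_get(void *ls, unsigned long long ea,
                    size_t bytes, int tag);      /* assumed helper */
extern void dma_wait(int tag);                   /* assumed helper */
extern void compute(float *block, size_t n);     /* per-block work  */

static float buf[2][BLOCK];                 /* two local-store buffers   */

void stream_blocks(unsigned long long ea, size_t nblocks)
{
    if (nblocks == 0) return;
    int cur = 0;
    dma_get(buf[cur], ea, sizeof(buf[0]), cur);        /* prefetch block 0 */
    for (size_t i = 0; i < nblocks; i++) {
        int nxt = cur ^ 1;
        if (i + 1 < nblocks)                           /* start fetch of block i+1 */
            dma_get(buf[nxt], ea + (i + 1) * sizeof(buf[0]),
                    sizeof(buf[0]), nxt);
        dma_wait(cur);                                 /* block i is now resident  */
        compute(buf[cur], BLOCK);                      /* overlaps the pending DMA */
        cur = nxt;
    }
}

On the real hardware this pattern would be expressed with the SDK's MFC intrinsics and tag groups; the point here is only that decoupling DMA issue from completion is what makes the overlap explicit and schedulable by the programmer.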
There are also disadvantages to the Cell architecture for
kernels such as SpMV. With its lack of unaligned load support
, Cell must issue additional instructions simply to permute
data, yet still manages to outperform conventional
scalar processor architectures.
Even memory bandwidth
may be wasted since SpMV is constrained to use tiling to
remove the indirectly indexed accesses to the source vector
. The ability, however, to perform a decoupled gather,
to stream nonzeros, and Cell's low functional unit latency
all tend to hide this deficiency. Additionally, we see stencil
computations as an example of an algorithm that is heavily
influenced by the performance of the permute pipeline.
              Speedup vs.                   Power Efficiency vs.
Cell+      X1E     AMD64    IA64         X1E      AMD64     IA64
GEMM        3x     12.7x    9.5x          9x      28.3x     30.9x
SpMV      >2.7x    >8.4x   >8.4x        >8.0x    >18.7x    >27.3x
Stencil    5.4x    37.0x   17.7x        16.2x     82.4x     57.5x
1D FFT     2.3x    10.6x    7.6x         6.9x     23.6x     24.7x
2D FFT     2.3x    13.4x   30.6x         6.9x     29.8x     99.5x

              Speedup vs.                   Power Efficiency vs.
Cell       X1E     AMD64    IA64         X1E      AMD64     IA64
GEMM       0.8x     3.7x    2.7x         2.4x      8.2x     8.78x
SpMV       2.7x     8.4x    8.4x         8.0x     18.7x     27.3x
Stencil    1.9x    12.7x    6.1x         5.7x     28.3x     19.8x
1D FFT     1.0x     4.6x    3.2x         3.0x     10.2x     10.4x
2D FFT     0.9x     5.5x   12.7x         2.7x     12.2x     41.3x
Table 7: Double precision speedup and increase in
power efficiency of (Top) Cell+ and (Bottom) Cell,
relative to the X1E, Opteron, and Itanium2 for our
evaluated suite of scientific kernels. Results show an
impressive improvement in performance and power
efficiency.
Here, the lack of support for an unaligned load instruction
is a more significant performance bottleneck than either the
SP execution rate or the memory bandwidth.
For dense matrix operations, it is essential to maximize
computational intensity and thereby fully utilize the local
store.
However, if not done properly, the resulting TLB
misses adversely affect performance. For example, in the
GEMM kernel we observe that the BDL data storage format,
either created on the fly or beforehand, can ensure that
TLB misses remain a small issue even as on-chip memories
increase in size.
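To make the layout concrete, the sketch below converts a row-major matrix into block data layout so that each B x B tile is contiguous and can be fetched with a single long DMA. It is an illustrative sketch written for this discussion (the tile size, element type, and divisibility assumption are ours), not the conversion routine used in the study.

/* Copy a row-major n x n matrix into block data layout (BDL): each
 * b x b tile is stored contiguously, so one long DMA moves a tile.
 * Assumes b divides n; b and the element type are illustrative.     */
#include <stddef.h>

void to_bdl(const double *a, double *bdl, size_t n, size_t b)
{
    size_t nb = n / b;                                 /* tiles per dimension */
    for (size_t bi = 0; bi < nb; bi++)
        for (size_t bj = 0; bj < nb; bj++) {
            double *tile = bdl + (bi * nb + bj) * b * b;   /* contiguous tile */
            for (size_t i = 0; i < b; i++)
                for (size_t j = 0; j < b; j++)
                    tile[i * b + j] = a[(bi * b + i) * n + (bj * b + j)];
        }
}

Whether the conversion is done on the fly or beforehand, the effect noted above is the same: fetching a tile touches only a contiguous address range, and hence few pages, so TLB misses remain a small issue even as on-chip memories increase in size.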
Table 7 compares the advantage of Cell and Cell+ based
on the better of performance model or actual implementation
(where available) in terms of DP performance and
power efficiency for our suite of evaluated kernels and architectural
platforms. Observe that Cell+ has the potential to
greatly increase the already impressive performance characteristics
of Cell.
By using the insight gained in the development of our estimation
model, we developed an optimized SpMV version
that outperformed our initial predictions by 25% to 70%. If a
full system simulator could model the modest improvements
of our Cell+ variant, we feel confident that we could demonstrate
comparable improvements to DP performance as well.
We also note that DP stencil performance fell short of our
model by 13% due to previously unknown microarchitectural
limitations. However, time skewing showed a huge benefit
in SP, and we believe a similar benefit would be present in
DP on the Cell+ variant.
It is important to consider these performance differences
in the context of increasingly prevalent multi-core commodity
processors. The first generation of this technology will
instantiate at most two cores per chip, and thus will deliver
less than twice the performance of today's existing architectures
. This factor of 2x is trivial compared with Cell+'s
potential of 10-20x improvement.
While peak Cell DP performance is impressive relative to
its commodity peers, a fully utilizable pipelined DP floating
point unit would boost Cell (i.e. Cell+) performance and
efficiency significantly.
Acknowledgments
This work was supported by the Director, Office of Science,
of the U.S. Department of Energy under Contract No. DE-AC02
-05CH11231. The authors gratefully thank Bracy Elton
and Adrian Tate for their assistance in obtaining X1E
FFT performance data, and Eduardo D'Azevedo for providing
us with an optimized X1E SpMV implementation.
REFERENCES
[1] G. Blelloch, M. Heroux, and M. Zagha. Segmented
operations for sparse matrix computation on vector
multiprocessors. Technical Report CMU-CS-93-173,
CMU, 1993.
[2] Cactus homepage. http://www.cactuscode.org.
[3] L. Cannon. A Cellular Computer to Implement the
Kalman Filter Algorithm. PhD thesis, Montana State
University, 1969.
[4] Cell broadband engine architecture and its first
implementation. http://www-128.ibm.com/
developerworks/power/library/pa-cellperf/.
[5] Chombo homepage.
http://seesar.lbl.gov/anag/chombo.
[6] E. D'Azevedo, M. R. Fahey, and R. T. Mills.
Vectorized sparse matrix multiply for compressed row
storage format. In International Conference on
Computational Science (ICCS), pages 99-106, 2005.
[7] FFTW speed tests. http://www.fftw.org.
[8] B. Flachs, S. Asano, S. Dhong, et al. A streaming
processor unit for a cell processor. ISSCC Dig. Tech.
Papers, pages 134-135, February 2005.
[9] P. Francesco, P. Marchal, D. Atienza, et al. An
integrated hardware/software approach for run-time
scratchpad management. In Proceedings of the 41st
Design Automation Conference, June 2004.
[10] IBM Cell specifications.
http://www.research.ibm.com/cell/home.html.
[11] E.-J. Im, K. Yelick, and R. Vuduc. Sparsity:
Optimization framework for sparse matrix kernels.
International Journal of High Performance Computing
Applications, 2004.
[12] The Berkeley Intelligent RAM (IRAM) Project.
http://iram.cs.berkeley.edu.
[13] G. Jin, J. Mellor-Crummey, R. Fowler, et al.
Increasing temporal locality with skewing and
recursive blocking. In Proc. SC2001, 2001.
[14] J. Kahle, M. Day, H. Hofstee, et al. Introduction to
the cell multiprocessor. IBM Journal of R&D, 49(4),
2005.
[15] S. Kamil, P. Husbands, L. Oliker, et al. Impact of
modern memory subsystems on cache optimizations
for stencil computations. In ACM Workshop on
Memory System Performance, June 2005.
[16] M. Kandemir, J. Ramanujam, M. Irwin, et al.
Dynamic management of scratch-pad memory space.
In Proceedings of the Design Automation Conference,
June 2001.
[17] P. Keltcher, S. Richardson, S. Siu, et al. An equal area
comparison of embedded dram and sram memory
architectures for a chip multiprocessor. Technical
report, HP Laboratories, April 2000.
[18] B. Khailany, W. Dally, S. Rixner, et al. Imagine:
Media processing with streams. IEEE Micro, 21(2),
March-April 2001.
[19] M. Kondo, H. Okawara, H. Nakamura, et al. Scima: A
novel processor architecture for high performance
computing. In 4th International Conference on High
Performance Computing in the Asia Pacific Region,
volume 1, May 2000.
[20] A. Kunimatsu, N. Ide, T. Sato, et al. Vector unit
architecture for emotion synthesis. IEEE Micro, 20(2),
March 2000.
[21] Z. Li and Y. Song. Automatic tiling of iterative stencil
loops. ACM Transactions on Programming Language
Systems, 26(6), 2004.
[22] S. Mueller, C. Jacobi, C. Hwa-Joon, et al. The vector
floating-point unit in a synergistic processor element
of a cell processor. In 17th IEEE Annual Symposium
on Computer Arithmetic (ISCA), June 2005.
[23] M. Oka and M. Suzuoki. Designing and programming
the emotion engine. IEEE Micro, 19(6), November
1999.
[24] L. Oliker, R. Biswas, J. Borrill, et al. A performance
evaluation of the Cray X1 for scientific applications. In
Proc. 6th International Meeting on High Performance
Computing for Computational Science, 2004.
[25] ORNL Cray X1 evaluation.
http://www.csm.ornl.gov/dunigan/cray.
[26] N. Park, B. Hong, and V. Prasanna. Analysis of
memory hierarchy performance of block data layout.
In International Conference on Parallel Processing
(ICPP), August 2002.
[27] D. Pham, S. Asano, M. Bollier, et al. The design and
implementation of a first-generation cell processor.
ISSCC Dig. Tech. Papers, pages 184-185, February
2005.
[28] Sony press release. http://www.scei.co.jp/
corporate/release/pdf/050517e.pdf.
[29] M. Suzuoki et al. A microprocessor with a 128-bit CPU,
ten floating-point MACs, four floating-point dividers,
and an MPEG-2 decoder. IEEE Solid State Circuits,
34(1), November 1999.
[30] S. Tomar, S. Kim, N. Vijaykrishnan, et al. Use of local
memory for efficient java execution. In Proceedings of
the International Conference on Computer Design,
September 2001.
[31] R. Vuduc. Automatic Performance Tuning of Sparse
Matrix Kernels. PhD thesis, University of California
at Berkeley, 2003.
[32] D. Wonnacott. Using time skewing to eliminate idle
time due to memory bandwidth and network
limitations. In International Parallel and Distributed
Processing Symposium (IPDPS), 2000.
| GEMM;FFT;Cell processor;three level memory;SpMV;Stencil;sparse matrix |
192 | The Use of Mediation and Ontology Technologies for Software Component Information Retrieval | Component Based Development aims at constructing software through the inter-relationship between pre-existing components. However, these components should be bound to a specific application domain in order to be effectively reused. Reusable domain components and their related documentation are usually stored in a great variety of data sources. Thus, a possible solution for accessing this information is to use a software layer that integrates different component information sources. We present a component information integration data layer, based on mediators. Through mediators, domain ontology acts as a technique/formalism for specifying ontological commitments or agreements between component users and providers, enabling more accurate software component information search. | INTRODUCTION
Component Based Development (CBD) [1] aims at constructing
software through the inter-relationship between pre-existing
components, thus reducing the complexity, as well as the cost of
software development, through the reuse of exhaustively tested
components. Building new solutions by combining components
should improve quality and support rapid development, leading to
a shorter time-to-market. At the same time, nimble adaptation to
changing requirements can be achieved by investing only in key
changes of a component-based solution, rather than undertaking a
major release change. For these reasons, component technology is
expected by many to be the cornerstone of software production in
the years to come.
According to Jacobson, Griss and Jonsson [1], the effectiveness of
component reuse depends on the connectiveness among them and
their binding to specific application domains. The connectiveness
is one of the most discussed problems in CBD [6, 12]. Approaches
that deal with component interfaces
(one of premises for
connection between components) focus on their capability to
provide and request services. Although this interface aspect is
important in a CBD approach, other problems arise when trying to
connect components. The connectiveness also depends on the
execution environment, the heterogeneity of components, the
distance between them and the architecture that controls their
connections [1, 5, 12]. The architecture that governs the
connections between components also depends on the application
domain. Therefore, reuse possibilities increase when components
are bound to domain concepts. As stated by Krueger [1], while
retrieving reusable software components, it is advantageous to use
a terminology that is familiar to the domain. This approach
diverges from other software component retrieval proposals, such
as the Agora System [6] that bases the search only on the
component interfaces, covering solely the component
connectiveness problem, and the RIG initiative [10] that presents
an approach for domain repository integration with minor user
transparency and without web access.
Suppose that, in a typical component retrieval scenario, a software
developer wants to find software components to use in the
construction of an application under development. If he does not
know any other specialized service that provides information
about components, the natural search space will be the Internet.
Now, consider that this developer has no knowledge about the
available components. Thus, the following actions are necessary
to discover software components that satisfy his needs:
1. To locate information about components that can be stored in
distributed repositories. This could be typically done through
an Internet search engine that takes as input a few keywords
and returns a list of relevant resource descriptions that might
be available in these repositories. The success of this task
directly depends on the interest of the repository
administrators in publicizing their data and the precision of
the user while providing the keywords.
2. To determine usability of search results. Due to the
complexity in the analysis of component usefulness (i.e.,
considering the component domain, functionality and
connection possibilities based on architecture decisions), the
An interface of a component can be seen as a component's part that
defines its access points. These points allow clients (components
themselves) to access services provided by the component [12].
Thus, a naive Internet approach will not cope with the complexity
of software component retrieval. The previous actions require an
engine that combines the following three characteristics: (i)
Distribution and Heterogeneity software components can be
distributed and use different kinds of storage; (ii) Domain
Ontology - to organize component repositories within a domain in
order to ease its search; (iii) Software Component Information
Evolution to insert new information (including legacy
information).
However, as stated before, current software component retrieval
proposals either lack from heterogeneity or domain ontology. On
the other hand, many database projects [3,4,7,8,9] are particularly
concerned with distribution, heterogeneity (and ontology) found in
legacy databases. These projects are known as "multi-database" or
Heterogeneous and Distributed Data Base Systems (HDDS) [4].
One solution found in HDDS is the use of mediators
[2]
combined with ontology [14] to integrate, identify and retrieve
related legacy databases. Ontology in this context can be defined
as a vocabulary of terms and the relationship between them. For
each term, a definition must be created, using an informal
description, some concrete examples in the domain, and also a
formal specification of the relationships between terms, thus
forming a semantic network.
We believe that the HDDS technologies can be adapted to handle
software component repositories in the place of legacy databases.
Mediators can represent and integrate domain information
repositories (distributed and/or heterogeneous). Metadata found in
mediators can describe the repositories of components, presenting
the domain, their semantics, software architecture and interfaces.
Usually a query engine is available in HDDS and therefore ad hoc
queries over this metadata can be used to analyze the available
components. The organization of mediators with ontology drives
the user search along heterogeneous vocabulary.
Therefore, our main objective is to present a software component
information retrieval engine named Odyssey Mediation Layer
(OML) that combines the connectiveness of components and the
domain concept approach. We address both issues through the
adoption of mediation [2] and ontology [14] technologies,
respectively.
Our approach is motivated by a project that is being conducted in
the Legislative House domain. There are several applications that
can benefit from reusable information within this domain and
from other related ones, such as justice domain, criminal domain,
among others. Our users are not specialists in the latter domains,
only in the Legislative domain. However, it is important that
relevant reusable information from all related domains can be
presented to them, particularly when they are not aware of its
existence. Most components of legislative process applications
can be reused from the legislative domain (e.g., Proposal Creation,
Legislature Evaluation, Council Members referee, among others),
but sometimes it is worth looking at components from other
related domains such as justice domain. Our retrieval engine is
able to identify and suggest components from other related
2
According to Wiederhold [2], mediators are modules that encompass
layers of mediation services, connecting bases of heterogeneous and
distributed data (producers) to information systems (consumers).
domains in the same way as it suggests components from the
Legislative domain.
With our retrieval engine, the user search can rely on a controlled
vocabulary (ontology) composed of domain terms that are familiar
to him. Thus, the search is more focused and executed over
relevant available component information repositories. Besides,
the usefulness of the retrieved components is supported by the
bindings between ontology terms and related components. This
binding is accomplished by domain specialists together with
domain engineers during a domain engineering process [5, 17]
thus enforcing the biding precision. In order to address these
issues, the retrieval engine organizes component information
repositories within domain ontologies while preserving its original
characteristics of distribution and heterogeneity, all in a flexible
way.
The main contribution of our proposal is to provide an approach
for accessing software components through the use of ontologies
and mediators. Our innovative aspect is to provide flexibility,
transparency and accuracy in software component information
retrieval.
In order to present our approach, the paper is organized as
follows: Section 2 discusses the novelties of our proposal with
related works; Section 3 details the architecture of Odyssey
Mediation Layer; Section 4 shows a component retrieval example;
and Section 5 presents our concluding remarks.
RELATED WORKS
In a broader approach, several information retrieval systems use
semantic brokering with respect to their resources. These systems
include SIMS [3], TSIMMIS [4], InfoMaster [7], and Information
Manifold [8]. These systems work in the definition of some sort of
common vocabulary (similar to an ontology) to define objects in
their domain. Individual information sources that contain these
objects describe constraints on objects that they can provide, in
terms of this common vocabulary. The broker then uses these
constraints to determine how to process queries. These projects
deal with generic designs for database retrieval. Our approach
combines the mediation technology with specific domain ontology
[2] to integrate different software components data sources. The
main difference between these projects and ours is that our
approach is particularly concerned with software components.
Thus, we use an ontology, which is specifically constructed for
that. This ontology is specified during a Domain Engineering
process [5] tailored for this purpose. Hence, the ontology accuracy
is more efficient, and consequently the usefulness of the retrieved
components.
Another work worth mentioning is the InfoSleuth system[9]. It is
a large project, conducted by MCC, which uses an ontology
approach to retrieve information from distributed and
heterogeneous databases. InfoSleuth can be seen as a framework
that can be tailored for a given purpose. One interesting
application is the EDEN project in the environmental domain [9].
In this case, some tools of InfoSleuth were customized for this
project. Our project adopts a similar approach, where our
constructs are specific for software component information
retrieval.
The Agora System [6] describes a search engine for retrieving
reusable code components, such as JavaBeans and CORBA
components. Agora uses an introspection mechanism for
20
registering code components, through its interface. As a result, this
information may not be available at a certain time because the
repository is not running, or the information cannot be located. In
each of these cases, the interface information cannot be
successfully retrieved and indexed, and the component is not
registered in the AGORA index database. In our proposal, the use
of mediators provides the flexibility to access remote component
repositories. There is no need for an index phase, and the mediator
is able to capture updates that may occur in a remote repository
(i.e., the repository, using the translator
3
services, sends a message
with these updates to the mediator). Moreover, the mediator
metadata manager has access to all the ontological terms of a
given domain, facilitating the identification of the existence of a
component, even if its repository is out of service. In this case, the
user knows that the component exists and that he can retrieve it
latter. Moreover, new information is always associated to domain
terms within a given domain ontology, improving its accessibility
and reuse.
Ye and Fischer [18] presents an approach that provides an active
repository for components. The work emphasizes active delivery
of reuse information, helping on the identification of components
that developers did not even know that existed. Regarding this last
aspect, it is similar to our approach. Our component information
retrieval system also provides this functionality too, once it
accesses components from other domains based on semantic
similarity. The active repository functionality, although not
described in this paper, is also part of our work [16]. One aspect
that is different in our proposal is the retrieval of distributed
information, which is not mentioned in Ye and Fischer's work.
Another important work to mention is the RIG initiative [9], which
describes a reuse library interoperability approach. The idea of the
asset library interoperability is based on the storage of domain
information in several databases. These databases are static and
based on a unique global model. The integration requires that
information is stored according to this unique model. Therefore, if
any reuse database is to be integrated, it has to be translated to the
RIG model. Alternatively, the mediation approach creates a new
level of abstraction above the database model, allowing the insertion
and/or removal of repositories from the mediation structure without
the need for updates on the whole structure. Moreover, RIG lacks a
more effective search engine that provides searches based on
domain concepts and filtering of relevant information, including
Internet access, as we do in our work.
SPECIALIZING A MEDIATION ARCHITECTURE TO A COMPONENT INFORMATION RETRIEVAL ENGINE
The effectiveness of a software component information retrieval
engine is associated to its capacity to handle the distribution and
heterogeneity of software components, to organize component
repositories within a domain, and to enable software component
evolution. These requirements can be accomplished through a
software layer that is seen as a particular case of HDDS.
As stated before, mediators are modules that encompass layers of
mediation services, connecting bases of heterogeneous and
distributed data (producers) to information systems (consumers).
Hence, in order to be really useful in software component
information search this solution has to be tailored to component
3
A translator in this context is the same as a wrapper or adapter.
information retrieval, considering the component domain, its
semantics, architecture, and interfaces.
In a mediation architecture, as new sources of information are
aggregated to the mediation structure, the amount of information
to be modeled increases, frequently generating inconsistencies,
ambiguities, and conflicts in the represented information. One way
to deal with this problem is to partition the consumers and
producers' models and the structure of mediation by domain. The
description of these models, partitioned by domains, forms the so-called
domain ontology [14].
In the context of component information retrieval, the use of
mediators allows the information access to be carried
independently of the format and the operational platform where it
is stored. Therefore, the structure of a retrieval engine as a whole
can be flexible, since existing component data sources can be
added to the architecture in an easy way, with no need to convert
from the original information format (format form of
data/information source) to the format used by the reuse
environment. Another interesting feature of mediators within a
component information retrieval engine is that reusable
information is naturally organized by domain (Figure 1), which
facilitates the search for domain concepts, since specific domain
data is accessed in a search. Moreover, the use of mediators allows
the aggregation of information already stored in legacy databases,
without the need to transform the original database format.
In order to help on the correct choice of mediators for a given
domain, the mediator layer provides a specific ontology for each
domain. Therefore, this ontology must be specified by domain
specialists, facilitating the search for specific components, since
the ontology definition is directly connected to software
components within the domain.
The use of this layer in the Legislative Domain is particularly
interesting, since in Brazil as in other countries [15], there are
some legislative houses that are more up to date with software
technology than others. The former represents a reference source
of software components to several Legislative Houses. Without
this kind of layer, there exist some barriers for reusing
components among these houses, such as the distance, scarce
financial resources, and semantic conflicts among components (a
common component functionality can be identified differently in
each legislative house).
Figure 1 presents an example of a mediation layer configuration
for this specific application domain. Several mediators are
presented as sub-domains, such as State Legislative (SL) and
Municipal Legislative (ML) domains. The SL Mediator is
aggregated (P1) to ML, generating a more generic mediator that
combines the two domains. The latter can be used in cases where
information concerning the two domains is necessary. Each
mediator is connected to the related domain data sources that
contain reuse component information. The Justice Domain
Mediator may be accessed in cases where the user wants
components related to the Justice domain.
In order to provide an architecture that is able to handle the
requirements of component information search and retrieval, we
specified and implemented OML (Odyssey Mediation Layer) was
specified and implemented, based on mediation and ontology
technologies. OML is derived from a HDDS mediator, the
HIMPAR architecture [12], adding to it more precision and
semantics, using ontologies tailored to software component
information retrieval.
21
Retrieval Interface
Legislative
State
Domain Mediator
Justice
Domain
Mediator
Legislative
Domain
Mediator
Legislative
Municipal
Domain
Mediator
P1
P2
ORB
Component
Repository
Translator
Component
Repository
Component
Repository
Translator
P1 : aggregation
P2: association
Translator
Component
Repository
Translator
Figure 1 - An example of a mediation layer for the Legislative domain
User Interface
Service Manager
SM
Query Translator
Query Packer
Query
Decomposition
Query Manager(QM)
Ontology
Model
OntologyManager
Metadata Manager (MM)
Mediator
Translator
Translator
Service Manager
Metadata
Model
ORB Bus
Component Repository
Component Repository
ORB Bus
Reuse Tools
Figure 2- Architecture of the Odyssey Mediation Layer
The OML engine is part of a reuse environment, named Odyssey,
that deals with component based development of applications
within a given domain. Specifically, OML is part of a system of
agents that helps users in their search for reusable components
[16]. It uses intelligent mechanisms such as learning techniques
and user preferences in order to present in advance components
that the user does not even know that exist. The agent system
infers this information and presents the components to the user. It
is important to notice that although OML was built in the context
of the Odyssey project, it can be used standalone or integrated to
other tools, since OML offers a user interface (as seen in figures 3
through 8) and a CORBA IDL interface for its communication
with other tools.
Figure 2 presents OML, which comprises four levels: Interface,
Mediation Layer, ORB bus, and Translators. The Interface level is
implemented by the Service Manager (SM), which stores
metadata about available mediators, and is capable of creating
ontological bindings between related ontologies in order to query
several mediation layers. Also, SM is responsible for the creation
and modification of mediators. The Mediation Layer provides the
management of each mediator through the Metadata Manager
22
(MM), and provides access to mediators through the Query
Manager. At the ORB level, communication between the
mediation layer and translators is established through CORBA
standard services. Finally, the Translator level provides one
translator for each component repository in such a way that it can
participate in the Mediation Layer integration model.
3.1 Service Manager
The Service Manager (SM) stores metadata about mediators,
translators and data sources availability, and deals with
ontological commitments between related mediators (domains).
Schema 1 presents an overview of SM Metadata in ODL
notation, and Schema 2 provides the IDL interface to access
mediators through the CORBA bus.
class Object_Himpar
( extent Himpares)
{
attribute string Name;
}
class Mediator extends Object_Himpar
( extent Mediadores)
{
relationship list<Container> AssociatedDataSources
inverse DataSource::Medis;
attribute string Description;
attribute string KeyWords;
relationship list<Mediador> Super
inverse Mediador::*;
relationship list<Mediador> Spec
inverse Mediador::*;
relationship list<Mediador> Assoc
inverse Mediador::*;
attribute string BaseName;
relationship list<OntologyTerm> TermRel
inverse OntologyTerm::MediadorRel;
attribute String password;
}
class Wrapper extends Object_Himpar
( extent Wrappers)
{
attribute string description;
attribute string type;
relationship list<DataSource> Repositories
inverse DataSource::Trad;
}
class DataSource extends Object_Himpar
( extent DataSources)
{
attribute string owner;
relationship list<Mapping> Structure
inverse Mapping::Cont;
attribute string AbstractionLevel;
relationship Wrapper Trad
inverse Wrapper::Repositories;
relationship list<Mediator> Medis.
inverse Mediator:: AssociatedDataSources;
attribute String password;
}
class Mapping
{
attribute string DataSourceName;
attribute string map;
relationship DataSource Cont
inverse DataSource::Structure;
}
class OntologyTerm
( extent Terms)
{
attribute string Name;
relationship list<OntologyTerm> Synonym
inverse OntologyTerm::*;
relationship list< OntologyTerm > Hipernym
inverse OntologyTerm::*;
relationship list< OntologyTerm > Hiponym
inverse OntologyTerm::*;
relationship Mediator MediadorRel
inverse Mediator: TermRel;
}
class Component
( extent Components)
{
attribute string type;
}
Schema 1 Metadata of Service Manager
The ontological commitments between related mediators are all
done at SM level. SM provides the necessary metadata for this,
using the Mediator, DataSource, Mapping, OntologyTerm and
Component classes described in Schema2. The decision to
concentrate all ontological commitments at SM level was mainly
based on the SM available information. It knows the availability
of all OML components and thus is able to indicate which domain
the user could access. In order to use a given mediator within
OML, the administrator has to register the mediator, its related
data sources, and translators used by these data sources into SM.
Figures 3 and 4 present examples of the interfaces for doing this.
module Mediator
{
interface Access {
struct OntologyTerm {
string
Name;
string
Description;
};
struct object {
string
type;
string
definition;
};
typedef sequence<Ontology> ListOntology;
typedef sequence<object> ListObjects;
// Functions for the management of bases
string open_base (in string basename);
void close_base (in string basename);
// Functions for the management of ontologies
ListOntology retrieve_Ontology (in string
mediator-name);
// Functions for retrieve components
ListObjects queryMediator(in string query);
};
};
Schema 2 SM IDL interface to access mediators
Figure 3 shows some information about the Municipal Legislative
Mediator within SM. Some basic metadata are: i) the mediator
name, ii) the executable file that has to be loaded (if it is not
already loaded) by ORB, in order to respond to some request to
this specific mediator, iii) the keywords related to the mediator
23
(this information provides a fast and limited knowledge about the
contents of the mediator), iv) the password required by the
mediator to attend the request (if necessary), among others. Figure
4 presents a data source registration, associating a specific data
source, File System2, to the Municipal Legislative Mediator. We
may also register the available types of components in order to
know to which phase of the application development the
component belongs (analysis, architectural or implementation
see Figure 8).
One important characteristic of OML is to use domain ontology to
search for domain terms and its ontological relationship, within or
among various domains at different levels of abstraction. Thus,
SM has to capture the ontological model
4
of each mediator and
associate terms among them. For capturing each ontological
model, SM uses the ORB bus, through IDL retrieve_Ontology()
interface method (Schema 2), to access the specific mediator,
retrieving its ontological terms. Therefore, the ontological model
provides the main structure for dealing with domain ontology
relationships. Relationships involve semantic links such as
Hypernyms, Hyponyms, and Synonyms. A Synonym link
associates ontological terms in several domains that represent
synonyms for a particular ontological term. Hypernyms and
Hyponyms links relate ontological terms from various domains
that can be either more general or more specific than the current
one. Thus, it is possible to associate ontological terms from
multiple domains, providing accessibility for domain information.
Figure 5 presents the interface for the association of ontological
terms.
In a query formulation, SM accesses and retrieves all related
mediators, searching for component information that fulfills the
query semantics. The ontological information about requested
4
Each ontological term is specified as a domain term and a detailed
description of it, and relationships with other ontological terms, at
different levels of abstraction, are created (see section 3.2)
domains is transferred by the Broker (ORB) to the proper domain
(mediator), using the method queryMediator (in string query), and
correct ontological domain terms.
In the example shown, SM will query all mediators that are related
to the Municipal Legislative mediator. Thus, SM will transmit the
query to the Legislative Domain Mediator and Justice Domain
Mediator (see Figure 2). The retrieved components and their
corresponding match levels are shown, if each retrieved
component exactly matches the query or if the component
Figure 3 A Mediator Registration Example
Figure 4 A data source registration example
24
partially attends the request. OML registers this information and
presents how well the component attends the request (total or
some percentage in the latter case). In section 4, we present a
more concrete example of this kind of query.
3.2 Mediator Manager
Each mediator has its own metadata. This metadata represents the
ontological model of the domain. The Mediator manager (MM)
also stores relationships among the ontological terms and
components stored in data sources. This metadata provides the
capability to retrieve components related to this mediator
(domain). In order to provide this feature, MM also stores the
ontology metadata related to this domain. For each ontological
domain term, it is necessary to register its name, its type (in this
specific case a term that represents a functionality in the domain),
its importance within the domain and the related domain terms.
These terms permit the expansion of the query range within the
domain, i.e., if there is no component information on data sources
related to this specific ontological term, OML could query related
ontological terms in the same domain. Of course, this "shift" must
be reported to the user.
In order to relate each ontological term with its counterparts in
data sources, MM retrieves related information of data sources
from SM, using the ORB bus. The retrieved information is used
by MM to locate and retrieve software components from data
sources. Thus, we can associate each ontological term with its
related components, with the help of a specific translator.
A RETRIEVAL EXAMPLE USING OML
Consider a user who is developing an application to handle new
legislative proposals, among other characteristics, in the legislative
domain. He wants to know if he can use pre-existing software
components in his application. Thus, he can use the OML user
interface to know about the availability of this kind of
components, and to retrieve some candidates.
In our example (Figure 6), data source 2 has a binary software
component called "New Subject", and data source 1 has a Java
package (set of related classes) named "Proposal Creation". Both
data sources were mapped into the Legislative Municipal Domain
Mediator. Therefore, the Justice Domain mediator has an ontology
term named Justice Code that is mapped to a component named
Code Database.
These components are made available to the Legislative
Municipal Domain through the ontological term in the mediator
called "Creation of New Proposals", and are mapped to the above
components in data sources, i.e., data sources 1 and 2.
During the creation of new proposals within a Municipal
Legislative House, there are some cases when it is necessary to
consult justice database rules. This justice database can impose
some restrictions on a new proposal creation.
Figure 5 Multiple Ontology Association
25
When the Municipal Legislative Mediator was registered, the SM
administrator associated this mediator with the Justice Mediator.
This Justice Mediator provides software components used for the
development of applications in the Justice domain. Thus, when
our user accesses the SM interface in order to retrieve components
related to the creation of new proposals, he can choose to access
information from all related mediators, i.e., generic mediators,
specific mediators, associated mediators or all of them. Suppose
our user decides to retrieve information from the Legislative
Mediator and associated mediators, then he will access
components from the Legislative Mediator and Justice Mediator
(see Figure 2). The formulation of the query (Figure 7), selecting
the type of component to be retrieved (components belong to
analysis, architectural, codification or all phases of development),
and the result of this query is presented in Figure 8. Note that for
each component, a description of the retrieval is presented (in
Data Source 1
Data Source 2
Mediator Manager
ORB
Justice Rules
Proposal Creation
New Subject
Data Source 3
Rule Database
Mediator Manager
Creation of New
Proposals
Service Manager
Creation of
New Proposals
Hyponym
Justice
Rules
Figure 6 Retrieval Schema Example
Figure 7 Query Formulation
26
Figure 8, the description presented is related to the Rule Database
Access component) and the user can select one or more
components to retrieve.
Through the mediation structure, OML users can search for
components in a transparent and uniform way. In the above
example, users of OML do not have to know where components
are stored. Moreover, users do not have to query all component
repositories, using each specific repository query language format
(when a query language exists) to find where the needed
components are stored. They do not even have do know how to
access data sources.
The complexity for dealing with these heterogeneous repositories
is treated by OML. Without this layer, users would have to handle
these repositories individually, increasing the complexity of the
query and access. By using mediators, users can query specifically
the mediation metadata, using one single model. The mappings
between mediator metadata and translators redirect and
decompose the query to data sources 1, 2, and 3. Also, the
identification of components of the same domain that are in
different repositories can be detected at the time of their
registration in the mediation layer. Afterwards this is all
transparent to users.
CONCLUSIONS
This work addresses the interoperability problem between
component information repositories. An integration layer was
developed to help searching and identifying suitable reuse
components. This layer is based on mediators and ontologies to
provide the binding of different components to their domain
concepts. To assist the identification of related components and
their appropriate domain organization, each mediator encloses one
domain ontology and provides the mapping to their respective
repository of components.
Mediators provide a uniform view of the available components
organized in domain taxonomy. Domain ontologies are used to
help searching for reusable components information through the
representation of domain semantic concepts. Therefore, this
mediation layer promotes domain information integration and
provides mechanisms to translate component requests across
ontologies. The important aspect of our proposal is the use of
domain ontologies, for reusable component retrieval, in a concrete
situation, allowing users to express component requests at a higher
level of abstraction when compared to keyword based access or
component interface based access used in other proposals.
Without OML, users would have to access directly various
repositories, dealing with specific characteristics of each
repository. Therefore, the main contribution of this paper is to
show the potential of the technology of mediators, together with
ontology models, for dealing with components repositories
complexities, and organizing the manipulation of different
components within a domain ontology. Although, the mediation
technology is quite popular within HDDS, its adaptation using
domain ontologies for component information retrieval is
innovative.
OML is an operational interoperability architecture based on the
use of mediators, translators, and a CORBA communication
protocol, which is responsible for the connection among
translators and mediators in a distributed and heterogeneous
environment. It was constructed using the C++ language together
with the Visibroker ORB for C++. Currently, OML is being
extended in order to publish and search for components on the
Internet [19], based on XML standard.
References
[1]
Jacobson, I.; Griss, M.; Jonsson, P. : "Software Reuse:
Architecture, Process and Organization for Business
Success;" Addison Wesley Longman, May 1997.
Figure 8 Example of component information retrieval in OML
27
[2] Wiederhold, Gio; Jannink, Jan: "Composing Diverse
Ontologies;" 8th Working Conference on Database
Semantics (DS-8), Rotorua, New Zealand (DS-8) January
1999 (Final version to be published by
IFIP/Kluwer/Chapman&Hall).
[3]
Arens Y., Knoblock C.A., and Shen W.: Query
reformulation for dynamic information integration.
Journal of Intelligent Information Systems, 6(2):99130,
1996.
[4]
Molina, Garcia and et.al. :The TSIMMIS approach to
mediation: Data models and languages. Journal of
Intelligent Information System, 8(2), 1997.
[5] Braga, R.; Mattoso, M.; Werner, C.: "The Use of
Mediators for Component Retrieval in a Reuse
Environment," In: Proc. Technology of Object-Oriented
Languages and Systems Conference (TOOLS-30
USA'99), IEEE CS Press, Santa Barbara, pp.542-546,
August 1999.
[6]
Seacord, R.; Hissan, S.; Wallnau, K,: "Agora: A Search
Engine for Software Components," Technical Report
CMU/SEI-98-TR-011, August 1998.
[7]
Genesereth M.R., Keller A., and Duschka O.M.:
Infomaster: An Information Integration System. In
SIGMOD RECORD, Proceedings of the 97 ACM
SIGMOD International Conference on Management of
Data, pp. 539542, Tucson-Arizona, 1997.
[8]
Levy, Alon Y., Rajaraman, Anand, and Ordille, Joann J.:
Querying heterogeneous information sources using source
descriptions. In Proceedings of the 22nd VLDB
Conference, pp. 251262, Mumbai (Bombay), India,
1996.
[9]
Fowler, Jerry, Perry, Brad, Nodine, Marian, and
Bargmeyer, Bruce: Agent-Based Semantic Interoperability
in InfoSleuth , SIGMOD Record 28(1): pp. 60-67, 1999.
[10]
RIG; "Reusable Library Interoperability Group" at
http://www.asset.com/rig/, 1996.
[11] Pires, P.; Mattoso, M.: "A CORBA based architecture for
heterogeneous information source interoperability;"
Proceedings of Technology of Object-Oriented Languages
and Systems - TOOLS'25, IEEE CS Press, pp.33-49,
November 1997.
[12] Szyperski, C.: Component Software: Beyond Object
Oriented Programming, Addison Wesley, 1998
[13] Ram, S.: "Guest Editor's Introduction: Heterogeneous
Distributed Database Systems,"; IEEE Computer, Vol. 24
No.12, December 1991.
[14] Nieto, E. M.: OBSERVER: An Aproach for Query
Processing in Global Information Systems based on
Interoperation across Pre-existing Ontologies, Doctoral
Thesis, Universidade de Zaragoza, November 1998.
[15] Weinstein, P. C.: Ontology-based Metadata:
Transforming the MARC Legacy, Proceedings of the 1998
ACM 7
th
Internacional Conference on Information and
Knowledge Management, pp. 52-59, 1998.
[16] Braga, R.; Mattoso, M.; Werner, C.: "Using Ontologies
for Domain Information Retrieval," in DEXA 2000 DomE
Workshop, pp.100-104, September 2000.
[17] Braga, R.; Werner, C.; Mattoso, M.: "Odyssey: A Reuse
Environment based on Domain Models"; In: Proceedings
of IEEE Symposium on Application-Specific Systems and
Software Engineering Technology(ASSET'99), IEEE CS
Press, Richardson, Texas, pp.50-57, March 1999.
[18]
Ye, Y.; Fischer, G.: "Promoting Reuse with Active Reuse
Repository Systems," IEEE ICSR 2000, Vienna, pp.302-317
, June 2000
[19]
Pinheiro, R.; Costa, M.; Braga, R.; Mattoso, M; Werner,
C.; "Software Components Reuse Through Web Search
and Retrieval", Proceedings of the International
Workshop on Information Integration on the Web -
Technologies and Applications, Rio de Janeiro, Brazil,
2001 (to appear).
5
Each ontological term is specified as a domain term and a detailed
description of it, and relationships with other ontological terms, at
different levels of abstraction, are created (see section 3.2)
28 | Domain Engineering;Software Classification and Identification;Component Repositories;Component Based Engineering |
193 | Through Different Eyes Assessing Multiple Conceptual Views for Querying Web Services | We present enhancements for UDDI / DAML-S registries allowing cooperative discovery and selection of Web services with a focus on personalization. To find the most useful service in each instance of a request, not only explicit parameters of the request have to be matched against the service offers. Also user preferences or implicit assumptions of a user with respect to common knowledge in a certain domain have to be considered to improve the quality of service provisioning. In the area of Web services the notion of service ontologies together with cooperative answering techniques can take a lot of this responsibility. However, without quality assessments for the relaxation of service requests and queries a personalized service discovery and selection is virtually impossible. This paper focuses on assessing the semantic meaning of query relaxation plans over multiple conceptual views of the service ontology, each one representing a soft query constraint of the user request. Our focus is on the question what constitutes a minimum amount of necessary relaxation to answer each individual request in a cooperative manner. Incorporating such assessments as early as possible we propose to integrate ontology-based discovery directly into UDDI directories or query facilities in service provisioning portals. Using the quality assessments presented here, this integration promises to propel today's Web services towards an intuitive user-centered service provisioning. Categories and Subject Descriptors | INTRODUCTION
Web services are expected to provide an open platform not only
for electronic B2B interaction, but also for the provisioning of so-called
user-centered services, i.e. B2C services that can provide
useful information and a variety of service offers to support users
in a modern mobile lifestyle. Though the capabilities of such services
are still relatively simple, their sophistication will grow with
the improvement of (wireless) networks, bandwidths, and client
device capabilities. However, finding the adequate service for
subsequent use of each individual user becomes a more and more
demanding problem. Given the convergence of networks in forthcoming
(mobile) environments and the evolving innovative business
models for third party service deployment (e.g. NTT
DoCoMo's i-mode service certification/licensing model for mobile
service portals<A href="193.html#10"> [19]) the variety of services is even expected
to grow. Making an informed choice of the `right' service will
therefore include matching individual users' preferences or dislikes
against the concepts and capabilities of the services offered.
Usually the interaction process for Web services consists of three
distinct phases: a discovery of possible services, the selection of
the most useful, and the subsequent execution. In understanding
what a service actually offers the first two phases are crucial and
the general acceptance of user-centered services will depend on
the solutions of still demanding problems in interaction like cooperative
querying. As sh<A href="193.html#9">own in [4] and [5]<A href="193.html#10"> the discovery and selection
processes of user-centered Web services involves a high degree
of respect for user preferences to be flexible enough for real
world use. In that respect providing user-centered services
strongly differs from the well-defined capabilities of traditional
B2B services. As a running example of a typical user-centered
service we will use an extension of the cooperative restaurant
booking Web service <A href="193.html#9">presented in [4]: restaurant booking services
subscribe to the least general applicable node along a complex
service ontology for a number of characteristics. A service request
can then be performed including a choice of various individual
categories. However, the individual services offered will usually
only more or less match all the user's expectations. Ranking services
with respect to requests is thus an ongoing challenge, as is
also evident from the research areas of IR or Web search engines
for information provisioning.
Service providers almost always can anticipate some typical interactions
with their services. For our example typical tasks are
for instance booking a certain restaurant for a specific evening,
finding a suitable restaurant in the vicinity for lunch, etc. The
characteristics and input parameters for Web services for restaurant
booking thus usually contain a number of general input values
that can be specified in a service request/query: the name of
the restaurant, its location, its specific address, the type of cuisine,
the date and time for a booking, its price range or even third party
content like recommendations (e.g. the Zagat reviews). However,
from a service provisioning point of view the nature of these parameters
strongly differs. A user expecting to book a certain restaurant
on a specific evening will expect that the request may be
granted or may fail depending on current reservations of that restaurant
for the given date, but relaxing the constraints of the date
given or booking a different restaurant for the evening might simply
not do. In contrast a user simply wishing for a close-by restaurant
to have lunch will rarely provide such fixed terms as a restaurants
name, but rather use descriptive terms like a preferred cuisine
and an approximate location.
Figure 1: Concept of enhanced UDDI service registries
Distinguishing such query stereotypes like `book a table at the
`Chez Panisse' for the 12/3/03 8:00 pm' and `give me the name
and address of a Chinese restaurant in the commercial district of
San Francisco with medium price range' and the subsequent personalization
of service provisioning also needs different types of
input parameters. Whereas simple variables like the restaurant's
name or a certain category in a clear request can be handled in an
exact match fashion, more fuzzy attributes in a somewhat tentative
request like an approximate location or the choice of cuisine
have to be understood as a user`s preferences with respect to certain
concepts (soft constraints). In the area of the Semantic Web
the management of such concepts is usually done by the very
powerful tool of ontologies that describe a generalization hierarchy
of such concepts. In the course of this paper we will show
how to open up service provisioning to the better understanding,
adequate handling and quality assessment of each individual
user's intentions and preferences. The contribution of the paper
thus is twofold:
On one hand we relate the use of ontologies and the handling
of conceptual views like given by the Semantic Web to co-operatively
evaluating preferences for each specific user
On the other hand we show how to effectively deal with the
problem of relaxing multiple conceptual views for more
complex queries and give quality measures to assess the
most useful results for each specific user.
Both contributions can be expected to improve the service provisioning
of user-centered Web services and help to boost their
usability and thus subsequently their acceptance.
SEMANTIC REGISTRY ENHANCEMENTS
Today Web services are usually provided via an Internet wide
network of services registries given by the Universal Description
Discovery and Integration (UDDI) [21]. UDDI builds on the Web Service Definition Language WSDL [7] which features basic
information about providers of a service and technical service
invocation details. Even though UDDI has become the de facto
standard in the field it suffers from a major shortcoming: the information
offered on individual services is rather limited. A yellow
-page-style lookup mechanism provides the service interface
together with a short verbal description of what task the service
performs. Mainly targeted at human Web service experts and developers, UDDI, however, still lacks advanced query capabilities and cooperative matchmaking.
Research in the area of the Semantic Web seeks a solution to this
unsatisfying situation, e.g. [20][4]. Generally speaking, the Semantic
Web fosters a population of the Web with content and
services having formal semantics and rich service descriptions.
Several semantic frameworks for Web services are currently
emerging, with DAML-S [2] and W3C's recently established OWL-S [9], [10] initiative as the most prominent approaches. We
have built our previous work on DAML-S as a relatively mature
ontology-based approach to the description of Web services that
tries to provide a common ontology of services for the Semantic
Web. Building on top of DAML+OIL [8], the Web service representations
in DAML-S consist of a service profile for advertising
and discovering services, a process model giving a detailed description
of a service's operation and a service grounding providing
details on how to interoperate with services via message exchange
.
Figure 1 shows a schematic view of our semantically enriched
Web service provisioning concept. A user states personal needs
and preferences in an enhanced service request. The enhanced
UDDI registry matches this request against the descriptions of all
registered services. The actual matching can be carried out using
cooperative database technology as shown in [4]. The query is split into hard and soft constraints, where the hard constraints are processed as filter conditions, whereas the soft conditions can be
relaxed if necessary. If no user-specific preferences are given with
the service request, the relaxation follows the domain-specific
conceptual views of the service ontology given by the service
providers or portal operators. To distinguish between several possible
relaxations the quality assessment, which is the main aspect
of this paper, will evaluate the degree of match for each service
with respect to the original user query and offer all best matches.
After a certain implementation has been chosen, the service provider
will execute the service and deliver the result.
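To make this matching flow concrete, consider the following minimal Python sketch (our own illustration, not the registry implementation of [4]; all data structures and names are hypothetical). Hard constraints act as strict filters, and the soft constraints are handed to a separate relaxation step only if no exact match exists:

```python
from collections import namedtuple

Request = namedtuple("Request", ["hard", "soft"])   # hypothetical request structure

def provision(request, registered_services, relax):
    """Sketch of the enhanced registry's matching loop (illustration only)."""
    # Hard constraints act as strict filter conditions.
    candidates = [s for s in registered_services
                  if all(s.get(a) == v for a, v in request.hard.items())]
    # Soft constraints are tried as exact matches first ...
    perfect = [s for s in candidates
               if all(s.get(a) == v for a, v in request.soft.items())]
    if perfect:
        return perfect
    # ... and only relaxed along the service ontology if nothing matched.
    return relax(candidates, request.soft)

# Tiny usage example with made-up service records.
services = [{"name": "Chez Panisse", "cuisine": "organic", "district": "shopping"}]
req = Request(hard={"district": "shopping"}, soft={"cuisine": "fusion"})
print(provision(req, services, relax=lambda cands, soft: cands))  # trivial fallback relaxation
```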
From a service provisioning viewpoint centralized and publicly
available service ontologies can be understood as a default service
conceptualization or the most common service concept hierarchy,
i.e. encoding common and widely accepted knowledge or
world/domain knowledge. Due to the hierarchical nature of ontologies
a user asking for a specific service will also be served
with any more specific service concept subsumed by his request.
On the other hand, in the case where a best match to his initial
request is not available he/she might also be satisfied with more
general services from a super-class of the requested one. This is
determined by a relaxation step in the service ontology, i.e. a
generalization of concepts along the lines of the ontology.
ONTOLOGY-BASED WEB SERVICE DISCOVERY AND SELECTION
In this section we provide a brief overview of the use of ontologies
for relaxation of soft query constraints and show how common
knowledge serves as a default for cooperative retrieval.
Figure 2: Service Ontology for restaurant booking (cuisine concepts such as American, California, Cajun, Texmex, Seafood, Organic and Fusion and location concepts such as City, Center, Suburb, Commercial, Shopping and Cultural below the root concept `Thing', with restaurant services like `Akasaka', `Chez Panisse', `Avocado Garden', `Cesar' and `The Walrus' registered as instances; the shaded clipping marks the conceptual view `Cuisine').
3.1 Service Ontologies, User Preferences and Usage Patterns
The purpose of a service ontology is to describe the kinds of entities
available in a service repository and how they are related. To
this end service ontologies may include descriptions of service
classes, properties and their instances (the actual services that are
eventually selected for execution). A basic service ontology is
depicted in Figure 2. Here restaurant booking services are classified
according to their cuisine and their location in a city. For
instance the restaurant `Chez Panisse' serves `Organic' food with
`Organic' being a specialization of the `Californian' cuisine (as
well as of `American'). Furthermore the restaurant `Chez Panisse'
is located in the `Shopping' district which itself is part of the city
`Center'. We have used W3C's Web Ontology Language (OWL)
and its predecessor DAML+OIL to enrich DAML-S service profiles in Web service repositories [4][5]. Modeled in OWL the
most general `Restaurant' and `Location' concepts are anchored
in `owl:Thing' the most common concept of any ontology.
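As a concrete illustration of such a hierarchy, the following minimal Python sketch (our own, ignoring all OWL/DAML-S machinery; the assignment of `Avocado Garden' to `Organic' is assumed for illustration) registers services at their most specific concept, so that they are implicitly available under every more general concept as well:

```python
class Concept:
    """A node of the service ontology; services are registered at their most specific concept."""
    def __init__(self, name, parent=None):
        self.name, self.parent, self.children, self.services = name, parent, [], []
        if parent:
            parent.children.append(self)

    def all_services(self):
        """Services registered at this concept or at any of its sub-concepts."""
        found = list(self.services)
        for child in self.children:
            found.extend(child.all_services())
        return found

# The cuisine branch of the example ontology from Figure 2.
thing = Concept("Thing")
american = Concept("American", thing)
californian = Concept("Californian", american)
fusion, organic = Concept("Fusion", californian), Concept("Organic", californian)
fusion.services += ["The Walrus", "Cesar"]
organic.services += ["Chez Panisse", "Avocado Garden"]   # assignment assumed for illustration

print(californian.all_services())   # every Californian restaurant, found via its sub-concepts
```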
A restaurant booking service using the service ontology from
Figure 2 might on one hand assume that a user asking for `a restaurant
featuring American cuisine' will be well served by all
restaurants with e.g. Cajun, Californian or Texmex cuisine, since
they are all instantiations of American cuisines. On the other
hand, if a user asks for `a Californian fusion cuisine restaurant'
and no such service should be registered, implicitly relaxing the
query to all Californian restaurants and offering restaurants with
organic cuisine or Californian seafood to our user will be more
helpful than just stating the empty result. And even if a user has
different conceptions (e.g. cuisines being related based on their
flavors) or explicit preferences (a Chinese restaurant rather than
an Italian one), an ontology-based discovery/selection model is
still useful. [4] shows in detail how to deal with these cases by
overwriting the default ontology with an explicitly provided (or
implicitly derived) generalization hierarchy of a user, somewhat
similar to the view definitions proposed by [14]. Since for our
assessment framework here the exact kind and classes/values of
an ontology do matter less than its actual structure in each instance
, such overwritings by user specified conceptions are always
possible to facilitate.
We advocate the use of service ontologies together with a proprietary
notion of basic user preferences and typical service usage
patterns for the stepwise refinement of service requests in a cooperative
service provisioning environment. While the basic approach
and combination of ontologies, preferences and patterns is
published elsewhere [5], we will now concentrate on enhancements
to the relaxation along the lines of ontologies alone: unlike
the specialization of a request, a generalization can result in severe
changes of the initial query semantics. This is especially true,
if several relaxation steps have to be performed until a match can
be found. At the point of relaxing a constraint to the root of an
ontology, the respective constraint can even be considered as
entirely dropped. But nevertheless, since an ontology resembles
common knowledge (and thus implicit preferences), for high
quality service provisioning offering somewhat related features is
usually still a better default than just returning an empty result set.
3.2 Multiple Conceptual Views
Individual users might have quite specific ideas about differing
domain concepts (conceptions) or very clear expectations how to
be served differing from the usual domain assumptions (explicit
preferences), but also implicit preferences play an important part.
Consider for instance location-based services, e.g. for restaurant
booking. If a user asks to book `a Chinese restaurant for dinner',
the common domain knowledge tells us that this restaurant should
be in the vicinity (e.g. a 30 miles area) of his current or usual
whereabouts and we can add this information as an implicit constraint
for better provisioning quality. A user in San Francisco
would usually be annoyed by offers of Chinese restaurants in
Hong Kong no matter how good their actual quality or rating is. If
this general assumption would not hold, however, (e.g. if a user
wants to fly to Hong Kong in the morning and then have dinner
there) he or she would have stated this unusual detail already
within the query and had asked for a `Chinese restaurant in Hong
Kong for dinner'. Such explicit information within a service request
is provided due to the psychological notion that though
users want a service to know what is sensible (like they expect to
be served in human-human interaction), no user expects a service
to be clairvoyant. Thus, not only having further (explicit) knowledge
of a user, but also assuming typical behavior, concept hierarchies given by ontologies can be used as good default relaxation
hierarchies for user preferences. Should, however, some preferences
or a specific conception be given, the underlying ontology
has to be exchanged against the user-provided terms or concepts.
We introduce the notion of conceptual views on a service ontology
to account for all the different interests a user wants to express
in a service request. Such a conceptual view is modeled as a
clipping from the full service ontology that starts with the most
general concept associable with a specific user interest. For this
paper we will for the ease of understanding assume conceptual
views to be non-overlapping, tree-shaped clippings from the full
service ontology where each service is registered with the node
that describes its value with respect to the most specific characteristics
. An example conceptual view is indicated as a grey shaping
in figure 2: the view named `Cuisine' is basically a sub-ontology
only concerned with the classification of restaurants according to
the offered type of food. Whereas the restaurants `The Walrus',
`Chez Panisse', `Avocado Garden' and Cesar are classified as
being `Fusion' or `Organic' places the restaurant `Akasaka' is not
reachable in this view as it is only classified as located in the cultural
district. As we will discuss in the remainder of this paper
multiple conceptual views can be used to account for different
interests a user wants to express in a service request and the relative
importance between them. In the case of the restaurant booking
example it is conceivable that a user values the fulfillment of
location constraints over cuisine constraints if he/she is only up
for a quick work lunch. However, for the ambitious `hobby gourmet' this might be just the other way round on the weekend. Thus
we will need ways to assess the respective quality of different
relaxation schemes to allow users to make an informed choice.
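In the spirit of the sketches above, a conceptual view can be represented simply as the clipping of concepts (with their registered services) that belongs to one user interest; a service that is classified only in another view is then not reachable in it. A minimal illustration (our own; the seafood and organic assignments are only partly stated in the running example and are assumed here):

```python
# A conceptual view as a plain mapping: concept -> services registered at that concept.
cuisine_view = {
    "californian": [],
    "fusion":      ["The Walrus", "Cesar"],
    "organic":     ["Chez Panisse", "Avocado Garden"],   # assignment assumed for illustration
    "seafood":     ["The Mediterraneum"],                # assignment assumed for illustration
}

def reachable(service, view):
    """A service is reachable in a view iff it is registered at some concept of the view."""
    return any(service in services for services in view.values())

print(reachable("Chez Panisse", cuisine_view))   # True
print(reachable("Akasaka", cuisine_view))        # False - only classified by location
```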
RELAXING MULTIPLE ONTOLOGIES
Let us now focus on problems that arise when multiple soft constraints
have to be relaxed over the service ontology. We will first
look at our sample scenario, investigate the relaxation of conceptual
views and then discuss some quality considerations.
4.1 Relaxation Plans for Multiple Selection Predicates
Let us consider our restaurant booking service from above. A
typical query would be "Find a Californian fusion cuisine restaurant
in the commercial district of Berkeley". Here we will have to
deal with two soft constraints: the type of cuisine and the location.
Assuming that we do not have more specific information about
the user's preferences both constraints could be relaxed along the
two default conceptual views of the service ontology of figure 2.
Figure 3: The cuisine ontology with respective instances (concept `californian' with sub-concepts `fusion', `organic' and `seafood'; instances include `The Walrus', `Cesar', `Chez Panisse', `Avocado Garden', `Cucina Calabrese' and `The Mediterraneum').
Figures 3 and 4 show the full first two concept levels of the respective
conceptual views with some instances. So in figure 3 we
can for instance see that there is a service for a restaurant called
`The Walrus' which is classified as offering fusion cuisine and as
such, also offering Californian cuisine. The relaxation of query
predicates over such a view is straightforward. If the query predicate
specifies `fusion cuisine' restaurants, we would have the
choice between the respective instances, here `The Walrus' and
`Cesar'. If for some reason there would be no fusion cuisine restaurants
registered, or the instances cannot satisfy some other
constraints (like booking for a certain date), we will relax along
the ontology to the more general concept of Californian cuisine
and can also consider the restaurants that are registered under the
`organic' and `seafood' characterizations of California cuisine.
The technical problem of how to relax single conceptual views
with adequate query languages over cooperative database systems
is in detail addressed e.g. in [4] and [5]. The beauty of the design
is that all details of a UDDI or DAML-S style description for each
service together with some more characteristics (e.g. taken from
RDF statements of a restaurants homepage) can be stored in a
classic relational database by the service provider and can be
searched using a declarative query language extended by preference
constructors like shown in e.g. [13]. Thus an added-value
service using semantically meaningful content can be provided
quite easily.
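Relaxing a single soft constraint over such a view then amounts to walking up the hierarchy until services remain after all other filters. The following sketch (our own illustration on a simple parent-pointer representation, not the declarative Preference SQL approach of [13]) also reports how many generalization steps were needed:

```python
def relax_single(concept, parent, available):
    """Walk up the view from `concept` towards the root until services are found.

    parent:    dict mapping each concept to its more general concept (None for the root)
    available: dict mapping concepts to the services still selectable there (after hard filters)
    Returns (services, number_of_relaxation_steps)."""
    steps, current = 0, concept
    while current is not None:
        if available.get(current):
            return available[current], steps
        current = parent[current]    # generalize one step along the relaxation path
        steps += 1
    return [], steps                 # even the root concept had nothing left

# No `fusion' restaurant is free on the requested date, so the constraint is relaxed once.
parent = {"fusion": "californian", "californian": "american", "american": None}
available = {"californian": ["Avocado Garden", "The Mediterraneum"]}   # made-up availability
print(relax_single("fusion", parent, available))   # (['Avocado Garden', 'The Mediterraneum'], 1)
```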
Figure 4: The location ontology with respective instances (concept `city center' with sub-concepts `commercial', `cultural' and `shopping'; instances include `Tom's Grill', `Sushi and Soul', `Avocado Garden', `The Mediterraneum', `Akasaka', `Chez Panisse', `Cesar' and `Pizza My Heart').
A more serious problem arises when several soft query constraints
over various conceptual views have to be evaluated. Consider for
instance the query on a `fusion cuisine restaurant in the commercial
district'. We can easily verify that in our example even
though `The Walrus' and `Cesar' are fusion cuisine, they are not
registered in the commercial district. Likewise `Tom's Grill',
`Sushi and Soul' and `Avocado Garden' are in the commercial
district, but they do not offer fusion cuisine. Aiming at a cooperative
retrieval behavior we are left with three choices: relaxing the
constraint on the cuisine, relaxing the constraint on the location or
relaxing both constraints. When relaxing the cuisine to `californian' we would retrieve the service of `Avocado Garden', the only
Californian cuisine restaurant within the commercial district. Relaxing
the location-based ontology would result only in the service
of `Cesar' the only fusion cuisine restaurant in the city center
. Finally relaxing both constraints would result in all Californian
cuisine restaurants within the city center, i.e. `Avocado Garden'
, `The Mediterraneum', `Chez Panisse' and `Cesar'. Thus
different kinds of relaxation will usually result in essentially differing
answer sets. This problem will remain if more relaxation
steps have to be taken. If we have relaxed only one constraint we
could e.g. decide to relax this property even further or relax another
property of our service during the next step. Let us first
define the task of finding the `best' service under relaxation.
The Problem of Best-Matching Service Provisioning:
Given various characteristics that describe all services in a service
ontology, a concept hierarchy (conceptual view) for each characteristic
and a user request stating a number of hard and soft constraints
, the best-matching service is given by all services that:
- fulfill all the hard constraints
- fulfill all soft constraints with a minimum amount of relaxation
of characteristics with respect to a suitable quality measure
So given that each service is registered in one concept node for
each characteristic given by the respective conceptual view, and
all services not fulfilling the hard constraints have been filtered,
the problem of selection over multiple constraints comes down to
deciding what is `a minimum amount of relaxation'. Obviously
the basic task is finding a service that is registered with all concepts
(or any of their respective sub-concepts) as specified by the
query, a `perfect match'. But if there is no such service registered,
the decision what soft constraints to relax and how far they are
relaxed, is paramount for the quality of provisioning.
4.2 Basic Service Quality Considerations
Since the decision about the relaxation scheme is important for
the output, some way of considering which scheme to follow is
needed. Usually ontologies are of a qualitative nature. A superclass
/ subclass taxonomy is established, but there is no knowledge
of the `degree' or the relative distance between different
concepts. However, such knowledge could crucially change the
utility of certain relaxation plans. Relaxing more refined ontologies
or views will generally hurt the user preferences less than
relaxing already coarse views or ontologies. The less general the
concept, the more refined are the sets of objects that will be offered
to the user, and flooding the user with too much too general
content is avoided. Let us first take a closer look at merely qualitative
views on ontologies of comparable granularity, etc. (i.e.
relaxing one constraint is introducing the same amount of generalization
as relaxing any other) and then investigate ways to deal
with quantitative measures in the following section.
For scenarios of merely qualitative preferences and their relaxation
for the restricted class of ceteris paribus preferences, [16]
proposes a scheme of ordering different objects in the result set
according to the count of necessary relaxation steps from the top
or the bottom of the hierarchy or simply the relative distance to all
violated query constraints. Since we assumed symmetrical views
on ontologies, also in our relaxation problem a similar concept
will help us to understand what should be relaxed preferably and
what this means for the objects in the result set. Let us first show
the approach of simply counting the relaxation steps from the
violated constraints. We will label the services we found in each
step by the number of necessary steps to find them. But first we
need to define the necessary concept of relaxation paths.
Given tree-shaped conceptual views of a service ontology that
arranges concepts or values with respect to certain service characteristics
using a generalization semantics, a relaxation path is a
path along the edges of each conceptual view that leads from a
base concept to the respective root of the view. Usually this base
concept is specified in a service request or user query and relaxing
along the relaxation path leads to an increasing generalization of
this concept. Assuming that all services have been assigned to the
node of their first appearance along the relaxation path (i.e. they
are registered to any of the respective node's sub-trees of concepts
, but not to an earlier node of the relaxation path) we get a
chain of concepts with all services registered under the aspect of
least necessary level of generalization.
An example for a relaxation path can be easily derived from figures
2 and 4. If a user is primarily interested in the commercial
district, the appropriate nodes of the relaxation path would be
`commercial district', `city center', `city' and `location'. The
services registered in the nodes are e.g. `Tom's Grill', `Sushi and
Soul' and `Avocado Garden' for `commercial district'. The node
`city center' would also contain all services registered to its sub-concepts
, (i.e. `The Mediterraneum', `Akasaka', `Chez Panisse',
`Cesar' and `Pizza My Heart') and so on. Figure 5 shows the first
two steps of respective relaxation paths for both conceptual views
in figures 3 and 4 focusing only on the services registered in both
views. As we pointed out we will always assume tree-shape views
for the course of this paper. Please note that all the concepts easily
can be transferred to the case where sub-concepts can have
multiple parent nodes. In this case the node along the relaxation
path would consist of the intersection of the different parent concepts
, or the intersection of their registered services respectively.
This generalization has already been successfully employed in a
similar fashion for mapping queries between differing ontologies
by [18]. Distances then can simply be measured by the minimum
distance, if multiple paths for relaxation should be available.
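In code, a relaxation path and the services newly appearing at each of its nodes can be derived directly from such a parent-pointer view. A small sketch (our own illustration; the instance data follows the location example just given):

```python
def relaxation_path(base_concept, parent):
    """Chain of concepts from the base concept specified in the query up to the root."""
    path = [base_concept]
    while parent[path[-1]] is not None:
        path.append(parent[path[-1]])
    return path

def services_along_path(path, registered):
    """For every node of the path, the services first reachable at that level of generalization."""
    seen, result = set(), []
    for concept in path:
        new = [s for s in registered.get(concept, []) if s not in seen]
        seen.update(new)
        result.append((concept, new))
    return result

parent = {"commercial": "city center", "city center": "city", "city": "location", "location": None}
registered = {
    "commercial":  ["Tom's Grill", "Sushi and Soul", "Avocado Garden"],
    "city center": ["The Mediterraneum", "Akasaka", "Chez Panisse", "Cesar", "Pizza My Heart"],
}
print(relaxation_path("commercial", parent))
# ['commercial', 'city center', 'city', 'location']
print(services_along_path(relaxation_path("commercial", parent), registered))
```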
Let us now see how relaxation can be done using an unlabeled
relaxation graph (see figures 3 and 4). If we begin by either relaxing
the cuisine or the location constraint of our query we would
have to assign a quality value of 1 to both the `Avocado Garden'
and `Cesar'. Thus in terms of quality they are incomparable,
which closely resembles our missing knowledge of what kind of
relaxation the individual user would prefer. If there should be
more ways to relax constraints to encounter a service, we will
always count the minimum number of relaxation steps necessary.
Relaxing both constraints again leads to quality values of 1 for
`Avocado Garden' and `Cesar' and a value of 2 for `The Mediterraneum'
and `Chez Panisse', because they have only been seen by
relaxing both constraints and thus are probably less desirable for
the user. However, relaxing the cuisine ontology two steps (i.e. to
all American cuisines) and sticking to the commercial center constraint
might result in `Tom's Grill' turning up also with a value
of 2. This is because in the naive model the semantic difference
between `deep' relaxation and `broad' relaxation is considered the
same. To be sensitive to the differing implications of broad and
deep relaxation with respect to generality we will use our concept
of relaxation paths and show how to use labels along this path to
get to a more sophisticated relaxation paradigm.
Figure 5: Relaxation paths and assigned services (the path fusion -> californian -> american and the path commercial -> city center -> city, each with edge distances 1 and 2; the services Cesar, Avocado Garden, Tom's Grill, Chez Panisse and The Mediterraneum are assigned to the nodes of their least necessary level of generalization).
Usually the generalization throughout ontologies will become
quickly rather unspecific with decreasing distance to the root.
Hence a broad relaxation strategy (breadth first relaxation) often
is preferred to deep relaxation steps. Summing up the distances
like before, but weighing each relaxation step with the relative
distance to the original query term can implement this. Consider
our query for `fusion cuisine' restaurants in the `commercial district'
. So for example the `Avocado Garden' and `Cesar' need
each only one relaxation step with a distance of one to the original
constraint (cf. labels in figure 5), so their quality value is 1. `Chez
Panisse' and `The Mediterraneum' both need two relaxation steps
with a distance of 1 each resulting in a value of 2. In contrast
`Tom's Grill' also needs only two relaxation steps, but whereas
the first step has a distance of 1, the second step already shows a
distance of 2. Thus the final value for `Tom's Grill' is 3 (i.e.
1+2*1) and we can now effectively distinguish between deep and
broad relaxation. Depending on the nature and granularities of the
ontology or views we can of course also use higher weightings for
the deep relaxations, for instance 10^(distance-1). So the first step will be weighted by 1, the second deep step by 10, the third deep step by 100, and so on. Since the broad relaxation steps are still simply added up, this will `punish' deep relaxation and avoid too broad generalizations of constraints. If we always want to punish deep steps symmetrically until all constraints in turn are relaxed at least to the same level, a factor of (number_of_ontologies)^(distance-1) will be adequate as shown in the following lemma.
Lemma 1: Weightings to Foster Broad Ontology Relaxation
Given n soft query constraints with their respective relaxation
hierarchies. To always prefer a broad relaxation scheme, label
each object by summing up the numbers of edges relaxed to find
this object in each hierarchy and weigh every edge by n^(d-1), using
the number of soft constraints n and the relative distance d to the
original query constraint.
Proof: Since within each depth all weightings are the same, it is
obvious that within a certain depth of the hierarchy any object
seen with less relaxation steps has a smaller label than an object
that needs more steps. Thus if we e.g. have to relax two constraints
within a level this object will always be labeled with a
higher weight than an object that needed only one relaxation independently
of which constraints have been relaxed.
We still have to show that if we do a step with a deeper distance,
an object O encountered there always gets a higher label than any
object P encountered in all hierarchies only with relaxations up to
a lower distance. We will do that by showing that the minimum
label for object O is higher than the maximum label for object P.
Let us assume that in order to encounter object O we have to relax
at least one constraint to a distance of k. The minimum label for O
thus is given by relaxing only a single constraint to distance k and
not having to relax any other constraint. Hence object O's label is
given by (n^0 + n^1 + ... + n^(k-2) + n^(k-1)). The maximum possible label for object P on the other hand is given by having to relax every one of the n constraints (k-1) times, i.e. to the maximum distance smaller than k. Thus the maximum label for P is n*(n^0 + n^1 + ... + n^(k-2)) = (n^1 + ... + n^(k-2) + n^(k-1)), and thus P's label is, even when relaxing all constraints to that maximum, at least 1 smaller than the smallest possible label for O.
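Expressed in code, the weighting of Lemma 1 simply sums n^(d-1) over every relaxed edge, where d is the edge's distance from the originally queried concept. A small sketch (our own illustration; with the two soft constraints of the running example it reproduces the quality values 1, 2 and 3 discussed above):

```python
def quality_label(relaxation_depths, n):
    """Label of an object found after relaxing the i-th soft constraint by relaxation_depths[i]
    steps; each relaxed edge at distance d from the query concept contributes n**(d - 1)."""
    return sum(n ** (d - 1) for depth in relaxation_depths for d in range(1, depth + 1))

n = 2  # two soft constraints: cuisine and location
print(quality_label([1, 0], n))   # 1 - `Avocado Garden': one step in a single view
print(quality_label([1, 1], n))   # 2 - `Chez Panisse': one step in each of both views
print(quality_label([2, 0], n))   # 3 - `Tom's Grill': two steps deep in the cuisine view
```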
Thus in relaxing constraints for equally important conceptual
views an adequate algorithm would be the processing of decreasing
levels of quality, i.e. finding services with increasing weightings
. Starting with the minimum possible relaxation the algorithm
will always work over an entire sequence of services with the
same quality index and return all the discovered services of the
lowest level found, together with their quality estimation. This is
important for having to restart the algorithm at the previous point
of termination, if the services discovered so far should not have
been sufficient and the evaluation of lesser quality levels becomes
necessary. Similarly, knowing the labeling technique and the
views involved a user can also specify a maximum quality value
up to which he/she is willing to accept more general services. For
our algorithm we will assume a declarative query mechanism on
UDDI directories enhanced by all the feature characteristics described
by their respective conceptual views like presented in [4].
However, symmetrically relaxing just the same number of steps
even when assuming views of the same granularity and importance
with respect to the user query, will generally not lead to our
desired broad relaxation scheme with as little generalization as
possible. Imagine a query that specifies two soft predicates of
which one is a leaf node of a view, whereas the other predicate
specifies a direct descendant node of the root in the respective
view. If no perfect match should be given, our strategy would
result in relaxing either of the two constraints a single step. But
whereas the relaxation by a single step from our leaf node usually
leads to a slight generalization allowing a few more services for
selection, the relaxation step in our second constraint would relax
to the root node and thus offer the total number of services registered
in the entire ontology for the respective characteristic, i.e.
entirely drop the second constraint. Obviously that would not be a
sensible behavior. So we do not only have to punish deep relaxation
steps but even more severely refrain from relaxations the
closer we are to the root.
This concept will be implemented in our relaxation algorithm by
letting the longest possible relaxation path determine the weightings
for all constraints (again assuming a comparable level of
detail throughout our ontology). The weightings for edges along
the relaxation path will then be assigned in each conceptual view
in descending order starting from the root down to the concept
specified by the query. Thus the relaxation of all concepts at least
to the same level of generalization is enforced before having to
relax already more general concepts. In our example from above
the relaxation path for a leaf node concept would be assigned
weightings as given by the height of the conceptual view, whereas
the relaxation path for a concept node right below the root would
be assigned the highest possible weighting. Thus (following
lemma 1), our leaf node concept will have been relaxed to a generalization
level of the respective concept right below the root
before the second constraint is relaxed for the first time. Now we
are ready to present an algorithm for relaxation of symmetrical
constraints incorporating a breadth first paradigm and a minimum
level of generalization strategy.
Algorithm: Symmetrical Constraints Breadth First Paradigm
1.
Pose the query containing only the hard constraints against
the enhanced UDDI / DAML-S directory.
1.1.
If an empty result should be returned, terminate the algorithm
outputting the empty result set.
1.2.
Repose the query with all soft constraints included.
1.3.
If a non-empty result should be returned for the expanded
query, terminate the algorithm and output the
respective services as perfect matches with a relaxation
level of 0.
2.
Given n the number of soft constraints we have to label each
edge along the relaxation path from the concept specified by
the query to the root node in every conceptual view (cf.
lemma 1).
2.1.
Among the n views find the longest possible relaxation
path and set maxdepth as the maximum depth of all ontologies
relative to the class specified in the service request
2.2.
For every view label the relaxation path starting from
the root by n^d down to the concept specified by the query (d := (maxdepth - 1) to 0, descending)
3.
For i = 1 to Σ n^j (1 ≤ j ≤ maxdepth)
3.1.
Start with the query including all hard and soft constraints
and build statements containing any possible
relaxation with a weighting of i, i.e. relax in turn all
conceptual views by one step up to the point of reaching
the desired weighting. Due to construction of the
weighting this will result in a breadth first strategy.
3.2.
If any of the statements produced in 3.1. retrieves a
non-empty result set, collect results of all possibilities
and terminate the algorithm.
In this algorithm step 3 can efficiently be implemented using an
A*-Algorithm that successively explores the differently weighted
tree edges finding all possible combinations for each quality
weighting. However, please note that not every weighting is possible
to reach by relaxing constraints.
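The following Python sketch is our own compact reading of steps 2 and 3, not the authors' implementation: instead of an A* search it simply enumerates all relaxation combinations in order of increasing total weight, and the query against the enhanced UDDI/DAML-S directory is passed in as a plain callback. For the three conceptual views of the example below it should visit the weight levels 0 (the perfect match), 1, 3, 4, 7, ... as in the walkthrough.

```python
from itertools import product

def label_path(path_len, n, maxdepth):
    """Step 2.2: weights n^d are assigned from the root downwards, so that shorter
    relaxation paths only receive the highest weights; returned in query-to-root order."""
    weights = [n ** d for d in range(maxdepth - 1, -1, -1)]   # root edge first
    return list(reversed(weights[:path_len]))                 # query edge first

def relax_breadth_first(path_lengths, query_registry):
    """Steps 2-3: try relaxation combinations in order of increasing total weight and return
    all services of the first weight level that yields a non-empty result.
    path_lengths[i]   - length of the relaxation path of soft constraint i
    query_registry(s) - services found when constraint i is relaxed by s[i] edges
                        (a stand-in for the query against the enhanced directory)."""
    n, maxdepth = len(path_lengths), max(path_lengths)
    labels = [label_path(p, n, maxdepth) for p in path_lengths]
    costs = [[sum(lab[:k]) for k in range(p + 1)] for lab, p in zip(labels, path_lengths)]

    def total(steps):
        return sum(c[s] for c, s in zip(costs, steps))

    combos = sorted(product(*[range(p + 1) for p in path_lengths]), key=total)
    best_weight, found = None, []
    for steps in combos:
        if found and total(steps) > best_weight:
            break                                  # the best non-empty level is complete
        services = query_registry(steps)
        if services:
            best_weight, found = total(steps), found + list(services)
    return found, best_weight
```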
We will now exemplify the above algorithm through different
examples: consider the three conceptual views X, Y and Z given
in figure 6 and assume a user has posed a query requesting the
characteristics X3, Y2 and Z6 as soft constraints. Let us assume
all hard filter conditions have already been satisfied, but the basic
query for our soft constraints fails, i.e. no service with these capabilities
is registered. Step two of our algorithm will now label the
relaxation paths like shown in figure 6 (bold edges and shaded
vertexes). Starting with all services having quality values of one,
concept Z6 is generalized to Z3 and the query is reposed with
constraints X3, Y2 and Z3. If we still should have no matching
services we have to go on relaxing. A query for a value of two is
not possible, but for a value of 3 we can relax X3 to X2 and repose
the query with constraints X2, Y2 and Z3. Let us assume we
still have not found a result, the next quality value would be 4. For
this we have two possibilities and have to unite the results of the
query on X2, Y2, Z3 and the query X3, Y2, Z2. The next possible
quality value would be 7 with a query on X2, Y2, Z2. Please note
that we indeed have relaxed all constraints to same level until we
relax any constraint to the top level (e.g. in the view Y) for the
first time. The algorithm would terminate at the latest after relaxing
all views to the top level with a quality value of 34.
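As a quick cross-check of this walkthrough (our own snippet, independent of the sketch above; the cumulative per-view costs are read off the labels of Figure 6 as described), enumerating all sums of those costs reproduces exactly these quality values:

```python
from itertools import product

x = [0, 3, 3 + 9]              # X3 -> X2 -> X1
y = [0, 9]                     # Y2 -> Y1
z = [0, 1, 1 + 3, 1 + 3 + 9]   # Z6 -> Z3 -> Z2 -> Z1

reachable = sorted({a + b + c for a, b, c in product(x, y, z)})
print(reachable[:6])    # [0, 1, 3, 4, 7, 9] - 0 is the perfect match, then 1, 3, 4 and 7 as above
print(reachable[-1])    # 34 - every view relaxed up to its top level
```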
Figure 6: Conceptual views with labeled relaxation paths (three views X, Y and Z; the relaxation paths from the queried concepts X3, Y2 and Z6 towards the respective roots carry the edge weights 1, 3 and 9, assigned from the root downwards).
4.3 Quantitative Service Quality Measures
In the last section we have seen an effective scheme for the case
of symmetrical relaxations under the assumption of equal usefulness, i.e. one broad step was as useful as any other broad step of a
comparable level. But in real world applications query constraints
are not always only of a qualitative, incomparable nature. Deeper
knowledge of individual user's preferences or knowledge of
stereotypical usage can become interesting parameters in assessing
the quality of different relaxation schemes. Tuning the factor
to a certain ratio for each application (x broad steps equal 1 deep
step) will express the desired semantics in each instance. The
exact coefficient used for the discrimination of deep relaxation in
each instance, however, will typically strongly depend on the
domain, the total number of soft query constraints and the respective
granularity of the views / ontology (i.e. the semantic level of
detail) used. Views that are modeled with a very fine granularity
can be relaxed introducing a smaller degree of generalization to
the service request results than would be introduced by those
views that are modeled rather coarsely anyway. Hence, a deep
relaxation step in a very detailed ontology might be worth only
three or four broad relaxations of other user constraints, whereas a
deep step in a coarser ontology used within the same query might
add up to the worth of ten broad relaxation steps or even several
deep relaxation steps of other constraints. Likewise very flat hierarchies
with many subclasses to each node are not suited too well
for deep relaxation. So the discrimination will in each application
depend on:
- the relative semantic importance of a view with respect to the user request,
- the maximum depth of each conceptual view,
- its total number of (sub-) concepts, and
- the (average) number of instances in each concept.
The relative semantic importance of a view can usually only be
determined by directly consulting the user. But in the following
we will give an overview of techniques that will generally help to
deal with problems of relaxing views with different granularities.
As a rule of thumb we can state that the relaxation of views having
a low maximum depth and rather high numbers of services
attached to each node should be delayed as long as possible. Our
technique of starting the view graph labeling from the root node
already helps facilitating this rule. If a shallow conceptual view is
used (usually an indication for coarse modeling) together with a
more detailed view, even the edges to leaf nodes in the shallow
ontology will be assigned rather high weightings unlike the leaf
nodes in ontologies with a rather high depth. This behavior is,
however, not always the best choice. If unlike in figure 6 the respective
depths of conceptual views differ by a considerable
amount, we should not simply delay the relaxation of shallow
views, but have to insert several intermediate steps in the more
detailed views before relaxing the next step in a coarse view.
Figure 7 shows pairs of views X, Y and X', Y'. In both cases the
depth of X, X' is only two whereas the depth of Y, Y' is four.
That means that in terms of relaxation we can assume that for
some reason the more shallow ontologies X and X' are modeled
rather coarsely. On the left hand side in figure 7 we can see the
labeling scheme from our algorithm. Ontology Y would be relaxed
to Y3 or even Y2 before a single step in X would be relaxed
. On the right hand side we can see a better labeling scheme
with interleaved relaxation steps (two steps in Y' for a step in X').
Figure 7: Differently labeled relaxation paths (left: views X and Y labeled with the basic root-down weights 8, 4, 2, 1; right: the shallow view X' re-labeled with the grouped weights 12 (= 8 + 4) and 3 (= 2 + 1), while Y' keeps 8, 4, 2, 1).
Since the interleaving of relaxation steps with respect to the
maximum depth generally seems a fairer approach, we will incorporate
this behavior into our algorithm. If the maximum depth of
some conceptual views should severely differ, we will assume a
coarser level of detail and find out how many steps in the most
detailed view represent a single step in the coarser view. We then
re-label the coarse view beginning from the root by adding so
many of the appropriate sequence of weightings, as steps in the
detailed view are necessary. For instance in figure 7 we can easily
see that a single step in X (depth 2) represents two steps in Y
(depth 4) and thus we would have to re-label the first edge by the
sum of the two highest weights of Y (8+4) and its second edge by
the sum of the next two weights (2+1). Following this scheme we
can gain the more suitable relaxation weights of view X'. In terms
of our quality assessment algorithm that means that we have to
reconsider the labeling of the relaxation paths in step two and
replace the respective section by the following:
2.1. Among the n conceptual views find the longest possible
relaxation path multiplying possible steps in each view relative
to the class specified in the service request by the view's
respective factor of q, where q is given as the integer part of
the result of dividing the maximum view depth by the maximum
depth of the current view (i.e. q steps in the most detailed
view represent one step in this view). Set the maximum
value for maxdepth.
2.2. For every conceptual view label the relaxation path
starting from the root by n^d + n^(d-1) + ... + n^(d-q+1) (i.e. the sum of the first q weights in terms of the most detailed relaxation path) down to n^(q-1) + ... + n^0, with d := (maxdepth - 1).
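A minimal sketch of this re-labeling (our own illustration, assuming that maxdepth is an exact multiple of the shallow view's depth, as in Figure 7):

```python
def interleaved_labels(depth, maxdepth, n=2):
    """Edge labels for a view of the given depth, grouping q consecutive weights of the most
    detailed view (of depth maxdepth) per edge; returned from the root edge downwards."""
    q = maxdepth // depth                                     # detailed steps per coarse step
    weights = [n ** d for d in range(maxdepth - 1, -1, -1)]   # e.g. 8, 4, 2, 1 for maxdepth = 4
    return [sum(weights[i * q:(i + 1) * q]) for i in range(depth)]

print(interleaved_labels(depth=4, maxdepth=4))   # [8, 4, 2, 1] - the detailed view Y'
print(interleaved_labels(depth=2, maxdepth=4))   # [12, 3]      - the coarse view X'
```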
A second possibility to control the relaxation properties of multiple
conceptual views is the incorporation of user preferences giving
a preferred relaxation order. Incorporating such preferences
into the weights along the ontology, however, is a very difficult
problem in its own rights and therefore beyond the scope of this
paper. For the case that a simple ordering of relaxation for the
views is given by the user, we can use double-labeled relaxation
paths like for the tree patterns in [1]. The second label for each
node is e.g. the respective rank of the view in a specified relaxation
ordering or the number of the node's respective sub-concepts.
In the case that for the execution of step 3 in our algorithm more
than one query should be possible, the relaxations can then be
executed minimizing the sum of second labels and retrieval can
be terminated whenever a result occurs. Also for the case that a
prioritization of relaxations is given (cf. [13]), the respective relaxation
scheme is straightforward: The prioritized view is successively
relaxed until a first result set is retrieved. Then the second
, third, etc. views are used to break ties. However, when it
comes to integrating weights from preferences into relaxation
weights deeper research is still needed.
RELATED WORK
While in the above discussions we assumed the existence of different
conceptual views to a given ontology, the actual creation of
these views is beyond the scope of this paper. Multiple views as
abstractions of data sources are a well understood concept in classical database systems. Yet the concept of such views has only
recently been addressed in the context of the Semantic Web
through the proposal of the view definition language RVL for the
low level ontology language RDFS [14]. RVL uses a declarative
query language for the creation of virtual schemas which in turn
serve as views on existing complex ontologies. Please note that
although we merely focused on tree-shaped clippings from OWL
ontologies as simple relaxation hierarchies in our examples, the
presented concepts are general enough to be used with only the
slightest adaptations together with other types of ontologies and
more complex views, e.g. virtual RVL views.
Choosing the `right' Web service for execution has been considered
in several ways. From a legal or economical point of view especially assurance structures guaranteeing that a service performs the desired task have been addressed, like [11] or [17]. However,
when negotiating about execution guarantees or costs the semantic
content of a service has to be understood and its specific capabilities
have already to be agreed on. Taking a more user-centered
view the notion of services' reputation for subsequent selection [15] or the quality assessment for the negotiation of service level agreements [3] have been proposed. However, these approaches focus on conceptual designs, omitting algorithms for how to assess the quality in each instance. The most complete framework with respect to heterogeneous environments featuring multiple ontologies is given by [18], where the notion of information loss for
query reformulation is defined. Unlike our work presented here,
where multiple conceptual views of a service ontology occur in a
single request, this work, however, deals with the loss of information
when a query has to be translated from one into another ontology
(e.g. in order to pose it to a different data source). Thus it
is rather concerned with the problem of ontology mappings.
The area of service request relaxation over ontologies also shows
some similarities with database query relaxation frameworks like
given in [6] or [13] and especially recent work on querying semi-structured
data like in XML databases. In the case of XML the
DTD of a document defines its structure together with the (semantic
) type of data within each node. The main focus of querying in
that area is on building queries without perfect knowledge of
documents structure or the exact data it contains. Exploiting the
set of labels given by a XML document's DTD as ontology in
<A href="193.html#10">term of the documents' structure [12] uses the result sets of queries
to define the semantic equivalence of alternative query expressions
, however without relaxing concepts within queries. The
area of relaxation for tree-shaped queries not only on a structural
level (`relax to any descendant node instead of child node'), but
also on a limited semantic level (`find author of document instead
of book') is in detail addressed in [1]. Here also weightings along
the edges of trees comparable to our user preference-driven quality
assessments in section 4.3 are discussed. Our work differs
mainly in that we can rely on fine-granular concept ontologies
that are custom made by domain experts and used in central service
provisioning portals or UDDI / DAML-S directories. Not
only are we able to exploit semantics by a far larger extent than
previous work, but we also provide means to derive sensible
weightings for edges within conceptual views based on general
user preferences and a fair relaxation paradigm.
The general area of enhancement of UDDI goes back to describing
the capabilities of Web services on a more detailed level by
using ontology languages like DAML+OIL [8]. An example for
the efficient mapping of DAML+OIL capability descriptions onto
UDDI records is given in [20]. Our approach for result quality
assessment here is facilitated by the database-based approach for
UDDI enhancement featured in [4]. Using cooperative database
technology and an extended declarative query language like the
one given in [13], this framework features a good implementation
framework for our quality assessment. Queries using hard and soft
constraints can be automatically rewritten using the relaxed concepts
and posed against a database of service descriptions using
suitable relaxation ontologies. However, in terms of quality these
languages feature only a qualitative result set under the notion of
Pareto-optimality. Quantitative quality constraints that limit the
flood of incomparable results delivered by the exponential growth
of Pareto sets with the number of soft constraints in the request
are not considered. A first study of such quality measures is given
in [16] for the restricted class of ceteris paribus preferences like
discussed in section 4.2.
SUMMARY AND OUTLOOK
In this paper we presented a framework for the discovery and
selection of Web Services based on personalized quality assessments
for individual service users. Starting with a set of conceptual
views over a service ontology that express a generalization
hierarchy of concepts (conceptual views) for different services'
capabilities and characteristics, we proposed to enhance today's
UDDI / DAML-S registries by a matching component that will
not only perform a filtering of services according to user specified
terms, but also allows for cooperative matchmaking between service
descriptions and the individual user's preferences. Focusing
on the quality assessment component of such an enhancement we
described in detail how to deal with different kinds and multiple
instances of conceptual views. The views in our framework contain
the domain-specific understanding of concepts or the common
knowledge that users typically will expect when trying to
find an adequate service for execution. If no service that offers all
required capabilities can be found, relaxing along these views will
step by step generalize the services' requested features until a best
possible match can be found. Thus cooperative behavior can be
introduced for improved service provisioning.
We focused on controlling these relaxation steps implementing a
breadth first strategy control flow to delay far-reaching generalizations
for as long as possible. We also discussed the influence of
relaxation plans for various views, which may differ in their
granularity or accuracy of discrimination and the influence of
individual user preferences for relaxation orders. For the case of
views with differing level of details we gave an adequate scheme
to balance the control flow. Nevertheless, the exact instantiation
of the views and the adequate weightings chosen still will usually
differ between application areas. Anticipating stereotype interaction
patterns or experiences of past interactions, however, the
service provider usually is able to also provide some suitable default
ontologies for cooperative matchmaking within UDDI /
DAML-S registries or managed service portals. In any case the
provisioning of suitable facilities for the assessment of Web service
quality can be expected to become a central part of service
provisioning and will essentially influence the future acceptance
of Web service offers by individual clients.
In this paper we have restricted conceptual views to tree-shaped
clippings of ontologies with view elements being exclusively
related through `is-a' relationships (stating an explicit generalization
of concepts in a superclass/subclass fashion). Of course also
other relationships within ontologies might be available for relaxation
tasks in service requests; on the other hand their semantic
meaning will usually be somewhat more difficult. Since complex
ontologies commonly contain named relationships between entities
that might be used to make views more flexible and query
relaxation more meaningful, an important future work item will
be to break down this restriction on relationships of the `is-a'
type. With our algorithms' focus on relaxation paths as an abstraction
of the underlying views, our general framework for quality
assessment can be expected to be extensible also to these new
types of views in a straightforward manner independently of the
exact type of view the relaxation path was derived from. If a relaxation
of a constraint along a certain relationship is backed by
sensible relaxation semantics (i.e. is meaningful), however, has to
be checked in each individual instance.
Furthermore, our future work will focus on a tighter integration of
individual user preferences into the quality assessment process
like addressed in section 4.3. Choosing the adequate weightings
does not have an obvious semantics. The meaning of `relaxing
one constraint is two-times better than relaxing another constraint'
can only be guessed, what about three-times, etc.? The area between
quantitative quality assessments like e.g. re-weighting techniques
or relevance feedback as known from the area of IR, and
the purely qualitative approaches like Pareto optimality of solutions like given in [13] offers a vast variety of possibilities to
explore for real world applications. Also here the notion of stereotypical
usage of services and the grouping of users with similar
intentions might lead to improved service provisioning. We believe
that using and extending our framework is a vital step towards
getting a better understanding of these topics. In any case
assessing quality of service request results in a semantically sensible
way promises to pave the road to cooperative provisioning
for user-centered services.
ACKNOWLEDGMENTS
We would like to thank Achim Leubner and Anthony Tarlano for
helpful comments and suggestions. This work was partially
funded by an Emmy-Noether-Grant of the German Research
Foundation (DFG).
REFERENCES
[1]
S. Amer-Yahia, S. Cho, D. Srivastava. Tree Pattern Relaxation
. In Proc. of the Int. Conf. on Extending Database Technology
(EDBT'02), Prague, Czech Republic, 2002.
[2]
A. Ankolenkar, M. Burstein, J. Hobbs, et. al. DAML-S: Web
Service Description for the Semantic Web. In Proc. of the
Int. Semantic Web Conf. (ISWC'02), Sardinia, Italy, LNCS
2342, Springer, 2002.
[3]
W.-T. Balke, A. Badii. Assessing Web Services Quality for
Call-by-Call Outsourcing. In Proc. of the Int Workshop on
Web Services Quality (WQW'03), Rome, Italy, 2003.
[4]
W.-T. Balke, M. Wagner. Cooperative Discovery for User-centered
Web Service Provisioning. In Proceedings of the
First International Conference on Web Services (ICWS'03),
Las Vegas, USA, 2003.
[5]
W.-T. Balke, M. Wagner. Towards Personalized Selection of
Web Services. In Proceedings of the 12th International
World Wide Web Conference (WWW 2003) Alternate Track
on Web Services, Budapest, Hungary, 2003.
[6]
S. Chaudhuri. Generalization and a Framework for Query
Modification. In Proc. of the Int. Conf. on Data Engineering
(ICDE'90), Los Angeles, USA, 1990.
[7]
E. Christensen, F. Curbera, G. Meredith, S. Weerawarana.
Web Services Description Language (WSDL) 1.1.
http://www.w3.org/TR/2001/NOTE-wsdl-20010315, 2001.
[8]
D. Connolly et al. DAML+OIL Reference Description. W3C
Note, December 2001.
[9]
DAML. OWL-S: Semantic Markup for Web Services.
http://www.daml.org/services/owl-s/1.0/owl-s.html#foot29
[10]
DAML. OWL-S 1.0 Release.
http://www.daml.org/services/owl-s/1.0/
[11]
M. Jakobsson, M. Yung. On Assurance Structures for WWW
Commerce. In Proc. of Int. Conf. on Financial Cryptography
(FC'98), Springer LNCS 1465, Anguilla, British West Indies,
1998
[12]
Y. Kanza, Y. Sagiv. Flexible Queries over Semistructured
Data. In Proc. of the ACM Symp. on Principles of Database
Systems (PODS'01), Santa Barbara, USA, 2001.
[13]
W. Kießling, G. Köstler. Preference SQL - Design, Implementation
, Experiences. In Proc. of the Int. Conf. on Very
Large Databases (VLDB'02), Hong Kong, China, 2002.
[14]
A. Magkanaraki, V. Tannen, V. Christophides, D.
Plexousakis. Viewing the Semantic Web Through RVL
Lenses. In Proc. of the Int. Semantic Web Conf. (ISWC'03),
LNCS 2870, Sanibel Island, USA, 2003.
[15]
E. M. Maximilien, M. Singh. Conceptual Model of Web Service
Reputation. In SIGMOD Records 31(4), 2002.
[16]
M. McGeachie, J. Doyle. Efficient Utility Functions for Ceteris Paribus Preferences. In Proc. of Conf. on Artificial Intelligence
and Conf. on Innovative Applications of Artificial
Intelligence (AAAI/IAAI'02), Edmonton, Canada, 2002.
[17]
G. Medvinsky, C. Lai, B. Neuman. Endorsements, Licensing
, and Insurance for Distributed System Services. In Proc.
of the ACM Conf. on Computer and Communications Security
, Fairfax, USA, 1994
[18]
E. Mena, V. Kashyap, A. Illarramendi, A. Sheth. Imprecise
Answers in Distributed Environments: Estimation of Information
Loss for Multi-Ontology based Query Processing. In
International Journal of Cooperative Information Systems
(IJCIS), 9 (4), 2000.
[19]
NTT DoCoMo home page.
http://www.nttdocomo.com/home.html, 2003.
[20]
M. Paolucci, T. Kawamura, T. Payne, K. Sycara. Importing
the Semantic Web in UDDI. In Proc. of the Int. Workshop on
Web Services, e-Business and the Semantic Web (WES'02),
Toronto, Canada, 2002
[21]
UDDI. The UDDI Technical White Paper.
http://www.uddi.org.
| selection of the most useful;Web Service Definition Language;Web services;Tree-shaped clipping of ontologies;subsequent execution;Semantic Web;user profiling;The generalization throughout ontologies;ontology resembles common knowledge;Universal Description Discovery and Integration;discovery of possible services;generalization hierarchy of concepts;cooperative service discovery;personalization;preference-based service provisioning;Domain-specific understanding of concepts;Relaxing multiple ontologies |
194 | Topic Modeling in Fringe Word Prediction for AAC | Word prediction can be used for enhancing the communication ability of persons with speech and language impairments. In this work, we explore two methods of adapting a language model to the topic of conversation, and apply these methods to the prediction of fringe words. | INTRODUCTION
Alternative and Augmentative Communication (AAC) is
the field of research concerned with finding ways to help
those with speech difficulties communicate more easily and
completely. Today there are approximately 2 million people
in the United States with some form of communication
difficulty. One means to help ease communication is the
use of an electronic communication device, which may have
synthetic speech as output. However, one issue in using an
AAC device is communication rate. Whereas speaking rate
is estimated at 180 words per minute (wpm), many AAC
users' communication rates are lower than 15 wpm [3, 7,
16]. Thus one goal of developers is to find ways to increase
the rate of communication, by making AAC devices easier
to use and more intelligent.
Some researchers have attempted to speed communication
rate by providing quick access to the core vocabulary,
the relatively small set of frequently used words. Methods
for doing this include abbreviation expansion and iconic
methods such as semantic compaction [1]. In contrast, in
this work we attempt to speed access to the much larger
set of words often called fringe vocabulary. This set is of
interest because although each individual word occurs less
frequently, the set of fringe words on the whole is very significant
.
Suppose that the user wants to enter "I want a home
in the country." After typing "I want a h", they might
see something like shown below. The system has created a
prediction window
containing the five words that it thinks
the user may be trying to type. In this example, the user can
press F5 to complete the word "home" and the system will
enter the word with a space afterwards. So in this example,
the user needed 2 keystrokes to enter what would normally
take 5 keystrokes.
It is difficult to judge how much word prediction can speed
communication rate. Much of this determination is dependent
on the accuracy of the prediction method, the characteristics
of the user, such as their physical and cognitive
abilities, and the characteristics of the user interface, such
as where the prediction list is displayed and how a word in
the list is selected. Here, the prediction method is evaluated
separately from the rest of a word prediction system by simulating
what a user would type in a conversation if he/she
were taking full advantage of the prediction list. This theoretical
evaluation measures the percentage of keystrokes
that were saved by word prediction over typing out every
character.
In this paper we first describe related work and give some
background in statistical approaches to word prediction. We
present approaches to topic modeling and compare the results
of topic modeling to a baseline method. For a more
thorough account of this work, visit
http://www.cis.udel.edu/fringe/.
RELATED WORK
Several previous researchers have used n-gram models in
word prediction for AAC [4, 5, 12, 18]. For example, Lesher
et al. [12] show how keystroke savings improve with increasing training
set size and with n-gram order, moving from unigrams
to bigrams (going from 47% to 54.7%) and to trigrams (another
0.8%). These evaluations used a window size of 6.
Other researchers have integrated grammatical information
into n-gram word prediction systems. Garay-Vitoria
and Gonzalez-Abascal [10] integrated a statistical chart parser,
while Fazly and Hirst [8] and Copestake [7] used part-of-speech
(POS) tagging. These yielded improvements of 1-5%
keystroke savings.
There have been several attempts at topic modeling in
the language modeling community, particularly for speech
recognition [2, 14, 17, 6, 9, 13]. Some of the evaluations
of topic modeling have found different variants of it to be
very beneficial [2, 14, 9]. Lesher and Rinkus [13] is an attempt
at topic modeling for word prediction, but does not
use dynamic topic modeling like [9, 2] and this work.
Table 1: The keystroke savings of topic modeling is
shown compared to a bigram and trigram baseline.
METHODS
Like several of the aforementioned word prediction researchers
, we use n-gram methods for language modeling.
Our baseline word prediction methods use bigram- and trigram-based
n-gram models with backoff and Good-Turing smoothing,
the current best practice in statistical language modeling
according to Manning and Schütze [15]. Additionally,
we incorporate a special unigram model for the first word
of each sentence. In word prediction, these language models
are used to rank all the words that the user could possibly
be typing. The top W words are presented to the user,
where W is the prediction window size.
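As a concrete illustration of this ranking step, the sketch below fills a prediction window from bigram and unigram counts. It is only a simplified stand-in for the backoff model with Good-Turing smoothing described above, and the count tables in the example are hypothetical.

```python
def predict_window(prev_word, typed_prefix, unigram_counts, bigram_counts, W=5):
    """Return the top-W candidate completions for the word being typed.

    unigram_counts: dict word -> count
    bigram_counts:  dict (prev_word, word) -> count
    A real system would apply backoff with Good-Turing discounting; here we
    simply prefer bigram evidence and fall back to raw unigram counts.
    """
    candidates = [w for w in unigram_counts if w.startswith(typed_prefix)]

    def score(w):
        big = bigram_counts.get((prev_word, w), 0)
        return (1, big) if big > 0 else (0, unigram_counts[w])

    return sorted(candidates, key=score, reverse=True)[:W]


# Hypothetical counts for the running "I want a h..." example.
uni = {"home": 50, "house": 40, "hot": 30, "he": 100, "her": 60}
bi = {("a", "home"): 7, ("a", "house"): 5, ("a", "hot"): 1}
print(predict_window("a", "h", uni, bi, W=5))
```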
Statistical approaches require a collection of text to construct
a language model. Ideally, our corpus would be a large
collection of conversations involving one or more people using
an AAC system. Such a corpus is unavailable, so we
follow [13] in using the Switchboard corpus, which is a collection
of telephone conversations and their transcriptions.1
The training section contains a randomly pre-selected 2217
conversations and the testing section contains the remaining
221 conversations. We perform preprocessing to remove
some speech repairs in accordance with Hindle [11]. These
editing rules bring the Switchboard conversations closer to
what we envision an AAC user would type.
3.1
Evaluation
We compare the number of keystrokes required for a user
taking full advantage of our word prediction system to the
number of keystrokes required to enter each character of the
conversation. We use immediate prediction for our evaluations
, which allows use of the prediction list before the first
character of a word has been entered. We assume that one
keystroke is required to "speak" each turn of input and that
a space is automatically inserted after a word is selected
from the prediction list.
$$KS = \frac{keys_{\mathrm{normal}} - keys_{\mathrm{with\ prediction}}}{keys_{\mathrm{normal}}} \times 100\%$$
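A small simulation of this evaluation, under the stated assumptions (immediate prediction, one keystroke to select from the window, an automatic trailing space, and one keystroke to speak the turn), might look like the sketch below; `predict` stands for any function that returns the current prediction window, such as a closure over the models above.

```python
def keystrokes_with_prediction(words, predict, W=5):
    """Simulate a user who always selects a word from the window when offered.

    predict(prev_word, typed_prefix, W) -> list of predicted words.
    """
    keys = 0
    prev = "<s>"
    for word in words:
        typed = ""
        for ch in word:
            if word in predict(prev, typed, W):  # window checked before each key
                keys += 1                        # selection keystroke; space is free
                break
            typed += ch
            keys += 1                            # character keystroke
        else:
            keys += 1                            # trailing space typed manually
        prev = word
    return keys + 1                              # one keystroke to "speak" the turn


def keystroke_savings(keys_normal, keys_with_prediction):
    return 100.0 * (keys_normal - keys_with_prediction) / keys_normal

# Without prediction the same turn costs one keystroke per character,
# one space per word, and one keystroke to speak:
#   keys_normal = sum(len(w) + 1 for w in words) + 1
```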
Because we are interested in the prediction of fringe words,
our evaluations are measured on fringe words only. Core
words are excluded from the list of predictions. The particular
core vocabulary we chose comes from the AAC
Centers at the University of Nebraska at Lincoln, available
from http://aac.unl.edu/. We used the "Young Adult Conversation
Regular" core vocabulary list, as it is the most
similar to the type of conversations in the Switchboard corpus
.
1 The Switchboard transcriptions were available from http://www.isip.msstate.edu/projects/switchboard/
TOPIC MODELING
The goal of topic modeling is to identify the current topic
of conversation, then increase the probability of related words
and decrease the probability of unrelated words. Some words
will be unaffected by topic modeling, such as function words,
which are used similarly in all topics. It is for this reason
that we chose to improve fringe word prediction with topic
modeling: we feel that topic modeling specifically improves
fringe word prediction.
Researchers are consistent in representing a topic by creating
a collection of representative text of the topic. However,
researchers differ on the best way to organize a collection of
topics. Some researchers have created a hierarchical collection
of topics [9], while others have created a disjoint set of
topics [14, 2, 17]. We feel that the primary lure of a hierarchical
approach, the ability to generalize, can be captured
in the set approach as well, by giving varying weight to all
topics and not just the most likely topic. For this reason,
we represent topics as disjoint sets of conversations.
The current topic of conversation must be identified from
the part of the conversation that has taken place so far, and
updated periodically in the conversation. Thus, we must
devise a representation for a partial conversation for assessing
the similarity of the conversation to each topic. In representing
the conversation so far, we choose to implement
an exponentially decayed cache, like [2], using TF-IDF values
rather than raw frequencies. This follows the work of
Mahajan et al. [14] in considering the inverse document
frequency of a word to be proportional to its utility in identifying
the current topic. Because our approach is for topic
identification, we ignore words that occur in 85% or more
of the topics, with the intuition that such words are irrelevant
to selection of topic. As a step to convert our model of
the current conversation to a model of the current topic, we
compute the document similarity between the cache and the
unigram model for each topic. We chose to use the cosine
metric, following [9].
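A minimal sketch of this step is given below: an exponentially decayed cache, TF-IDF weighting with the 85% cut-off, and the cosine comparison against each topic's unigram vector. The decay rate is a placeholder, since its exact value is not given here.

```python
import math

def update_cache(cache, word, decay=0.95):
    """Decay all cached frequencies, then add the newly observed word."""
    for w in cache:
        cache[w] *= decay
    cache[word] = cache.get(word, 0.0) + 1.0

def tf_idf(cache, doc_freq, num_topics):
    """Weight cached frequencies by inverse topic frequency, dropping words
    that occur in 85% or more of the topics."""
    vec = {}
    for w, tf in cache.items():
        df = doc_freq.get(w, 0)
        if df >= 0.85 * num_topics:
            continue
        vec[w] = tf * math.log(num_topics / (1.0 + df))
    return vec

def cosine(u, v):
    dot = sum(u[w] * v.get(w, 0.0) for w in u)
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0
```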
Given that we have computed a similarity score between
each topic and the current conversation, there are two main
variations on how to construct a new language model. Mahajan
et al. [14] implemented a k-nearest solution, constructing
the topic model from the most similar k topics.
Each topic's language model was weighted equally for their
experiments. Instead, we chose to follow Florian and Yarowsky's
approach [9]. They expand the probability for a word (w)
given a history (h) as follows:
$$P(w \mid h) = \sum_{t \in \mathrm{topics}} P(t \mid h) \, P(w \mid t, h)$$

$P(w \mid t, h)$ is simply the probability of $w$ taken from the
language model constructed for topic $t$. The probability of
the topic is estimated as follows:

$$P(t \mid h) \approx \frac{S(t, h)}{\sum_{t' \in \mathrm{topics}} S(t', h)}$$
where S(t, h) is the cosine similarity of the topic to the current
part of the conversation.
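The expansion above translates directly into code. The sketch below assumes each topic's conditional model is available as a callable and uses the normalized similarities as the topic weights; it is an illustration rather than the exact implementation.

```python
def topic_mixture_prob(word, history, topic_models, similarities):
    """P(w | h) = sum_t P(t | h) * P(w | t, h), with P(t | h) taken as the
    normalized similarity S(t, h) of each topic to the current conversation.

    topic_models: dict topic -> callable(word, history) giving P(w | t, h)
    similarities: dict topic -> S(t, h)
    """
    total = sum(similarities.values())
    if total == 0:
        weights = {t: 1.0 / len(topic_models) for t in topic_models}
    else:
        weights = {t: similarities[t] / total for t in topic_models}
    return sum(weights[t] * topic_models[t](word, history) for t in topic_models)
```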
4.1
Method A
Our first method of topic modeling is most similar in
spirit to the work of Mahajan et al. [14] and Florian and
Yarowsky [9]. In training, a bigram model is computed for
each topic in Switchboard. In testing, the cache representation
of the current conversation is compared against the
unigram representation of each topic and similarity scores
are computed. The similarity scores are then used to weight
the frequencies obtained from each topic in a linear interpolation
. Then this interpolated bigram model is used to
compute the probabilities used for word prediction.
Topic modeling shows a sizable improvement over the
bigram baseline: 1.6% to 1.7%. We've included the comparison
to a bigram baseline because it is the most natural
baseline in terms of language understanding. However, a trigram
baseline is also a natural comparison when considering
that it can run with the same or fewer computational resources
than topic modeling. When compared against the trigram
baseline, the topic model gives a 0.8% to 1.5% improvement.
4.2
Method B
Our second method of topic modeling is more similar
to the work of Bellegarda [2]. Like Bellegarda, we compute
topic-dependent unigram probabilities. These topic-dependent
probabilities are multiplied with probabilities from
a trigram backoff model. Additionally, we weight the topic
component with a tuning parameter. After manual tuning
on two conversations, we found that a value of 0.15 worked well.
Method B is an improvement over a trigram baseline, but
only a minor improvement. We feel that the problem is that
a low value of the tuning parameter was necessary to avoid overriding the word
preference that is due to context, but that it also reduced
the ability of the overall model to adapt to a particular topic.
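The precise way the topic component enters the score is only summarized above, so the sketch below should be read as one plausible rendering rather than the authors' formula: the topic-dependent unigram multiplies the trigram backoff probability through a small exponent weight, so topical evidence nudges the ranking without overriding the local context. Since the scores are used only for ranking, they need not be normalized.

```python
def method_b_score(word, history, trigram_prob, topic_unigram_prob, weight=0.15):
    """Illustrative (assumed) combination for Method B: trigram probability
    scaled by the topic-dependent unigram raised to a small tuning weight."""
    return trigram_prob(word, history) * (topic_unigram_prob(word) ** weight)
```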
4.3
Comparison
Method A offers an additional 1% or more keystroke savings
over Method B for most window sizes. This is due to
the low weight of the tuning parameter for Method B. However
, as previously mentioned, the low weight was necessary.
Additionally, notice that Method A becomes comparatively
better as the window size is increased. The trigram model
component in Method B can be thought of as a stronger
source of knowledge than the interpolated bigram model of
Method A. Because of this, when the trigram history exists
in the language model, Method B's predictions are more accurate
. However, because the trigram model is sparse, it
can only contribute to the top few predictions. Thus, it has
a much greater effect on the top few window sizes.
For real world systems, however, absolute performance is
not the only factor. The computational demands of each
approach are often considered when selecting a practical solution
. The trigram baseline processed at 1,325 words per
minute (wpm). Method A processed conversations in testing
at 32 wpm and Method B processed 1,267 words per
minute. Method B uses barely more processing time than
the trigram baseline model.
CONCLUSIONS
Topic modeling can be implemented in many different
ways. We've demonstrated two such methods for topic modeling
: one for computationally limited devices and another
for computationally rich devices. Both methods show a clear
improvement over a trigram model with backoff. Before the
advent of word prediction, a user would've pressed 6.4 keys
per fringe word on average. Now, with topic modeling for
word prediction, only 2.5 keys per word are required.
ACKNOWLEDGMENTS
We would like to thank the US Department of Education
for funding this research under grant H113G040051, under
the National Institute on Disability and Rehabilitation Research
program. We would also like to thank Dr. Gregory
Lesher for correspondence regarding his work and Dr. David
Saunders for lending us a compute server.
REFERENCES
[1] B. Baker. Minspeak. Byte, pages 186-202, 1982.
[2] J. Bellegarda. Large vocabulary speech recognition with
multispan language models. IEEE Trans. On Speech and
Audio Processing
, 8(1), 2000.
[3] D. R. Beukelman and P. Mirenda. Augmentative and
alternative communication: Management of severe
communication disorders in children and adults
. P.H. Brookes
Pub. Co., 1998.
[4] L. Boggess. Two simple prediction algorithms to facilitate text
production. In Proceedings of the second conference on
Applied natural language processing
, pages 33-40, Morristown,
NJ, USA, 1988. Association for Computational Linguistics.
[5] A. Carlberger, J. Carlberger, T. Magnuson, M. S. Hunnicutt,
S. Palazuelos-Cagigas, and S. A. Navarro. Profet, a new
generation of word prediction: An evaluation study. In
Proceedings of Natural Language Processing for
Communication Aids
, 1997.
[6] S. Chen, K. Seymore, and R. Rosenfeld. Topic adaptation for
language modeling using unnormalized exponential models. In
Proc. Int'l Conf. on Acoustics, Speech and Signal Processing
,
1998.
[7] A. Copestake. Augmented and alternative nlp techniques for
augmentative and alternative communication. In Proceedings
of the ACL workshop on Natural Language Processing for
Communication Aids
, 1997.
[8] A. Fazly and G. Hirst. Testing the efficacy of part-of-speech
information in word completion. In Proceedings of the 10th
Conference of the European Chapter of the Association for
Computational Linguistics
, 2003.
[9] R. Florian and D. Yarowsky. Dynamic nonlocal language
modeling via hierarchical topic-based adaptation. In
Proceedings of ACL'99
, pages 167-174, 1999.
[10] N. Garay-Vitoria and J. González-Abascal. Intelligent
word-prediction to enhance text input rate. In Proceedings of
the second international conference on Intelligent User
Interfaces
, 1997.
[11] D. Hindle. Deterministic parsing of syntactic non-fluencies. In
Proceedings of the 21st Annual Meeting of the Association for
Computational Linguistics
, 1983.
[12] G. Lesher, B. Moulton, and J. Higgonbotham. Effects of ngram
order and training text size on word prediction. In Proceedings
of the RESNA '99 Annual Conference
, 1999.
[13] G. Lesher and G. Rinkus. Domain-specific word prediction for
augmentative communication. In Proceedings of the RESNA
'01 Annual Conference
, 2001.
[14] M. Mahajan, D. Beeferman, and X. D. Huang. Improved
topic-dependent language modeling using information retrieval
techniques. In Proceedings of the International Conference on
Acoustics, Speech, and Signal Processing
, 1999.
[15] C. Manning and H. Schütze. Foundations of Statistical
Natural Language Processing
. MIT Press, 2000.
[16] A. Newell, S. Langer, and M. Hickey. The rôle of natural
language processing in alternative and augmentative
communication. Natural Language Engineering, 4(1):1-16,
1996.
[17] K. Seymore and R. Rosenfeld. Using story topics for language
model adaptation. In Proceedings of Eurospeech '97, pages
1987-1990, Rhodes, Greece, 1997.
[18] A. L. Swiffin, J. A. Pickering, J. L. Arnott, and A. F. Newell.
Pal: An effort efficient portable communication aid and
keyboard emulator. In Proceedings of the 8th Annual
Conference on Rehabilitation Technology, pages 197-199,
1985.
| core vocabulary;identify current topic of conversation;AAC;language modeling;accuracy of prediction method;fringe vocabulary;prediction of fringe words;conversations in the Switchboard corpus;Word prediction;immediate prediction;decrease probability of unrelated words;increase probability of related words;prediction window size;communication rate;construct a new language model;topic modeling |
195 | Topic Transition Detection Using Hierarchical Hidden Markov and Semi-Markov Models | In this paper we introduce a probabilistic framework to exploit hierarchy, structure sharing and duration information for topic transition detection in videos. Our probabilistic detection framework is a combination of a shot classification step and a detection phase using hierarchical probabilistic models. We consider two models in this paper: the extended Hierarchical Hidden Markov Model (HHMM) and the Coxian Switching Hidden semi-Markov Model (S-HSMM) because they allow the natural decomposition of semantics in videos, including shared structures, to be modeled directly, and thus enabling efficient inference and reducing the sample complexity in learning. Additionally, the S-HSMM allows the duration information to be incorporated, consequently the modeling of long-term dependencies in videos is enriched through both hierarchical and duration modeling . Furthermore, the use of the Coxian distribution in the S-HSMM makes it tractable to deal with long sequences in video. Our experimentation of the proposed framework on twelve educational and training videos shows that both models outperform the baseline cases (flat HMM and HSMM) and performances reported in earlier work in topic detection . The superior performance of the S-HSMM over the HHMM verifies our belief that duration information is an important factor in video content modeling. | INTRODUCTION
The ultimate goal of the video segmentation problem is to
characterize the temporal dynamics of the video whereby it
can be segmented into coherent units, possibly at different
levels of abstraction. Seeking abstract units to move beyond
the shots has thus been an active topic of much recent research. While the problem of shot transition is largely solved
at a satisfactory level [7], the `abstract units' or scene detection
problem is much harder, partially due to the following
three challenges identified in [29]: (a) the variety in directional
styles, (b) the semantic relationship of neighbouring
scenes, and (c) the knowledge of the viewer about the world.
While the last aspect is beyond the scope of this work, the
first two clearly imply that effective modeling of high-level
semantics requires the domain knowledge (directional style)
and the modeling of long-term, multiple-scale correlations
of the video dynamics (neighboring semantic relationship).
Modeling temporal correlations over a long period is generally
a challenging problem. As we shall review in the
subsequent section, this problem is usually solved in a specific
domain setting so that the expert knowledge about the
domain can be utilised. While organization of content in
generic videos (e.g., movies) is too diverse to be fully characterized
by statistical models, the hierarchy of semantic
structure in the class of education-oriented videos is more
defined, exposing strong temporal correlation in time, and
thus making it more amenable to probabilistic modeling. In
this paper, we concentrate on this video genre and develop
an effective framework to segment these videos into topically
correlated units. This problem is an important step
to enable abstraction, summarization, and browsing of educational
content, a rich class of film genre that has an
increasingly important role in building e-services for learning
and training.
Probabilistic modeling of temporal correlations in video
data is however a difficult problem. It is complicated because
the underlying semantics naturally possess a hierarchical
decomposition with possible existence of tight structure
sharing between high-level semantics. In addition, the
typical duration for these structures usually varies for each
of its higher-level semantics. As an example, assisted narration, a
section that involves the narrator talking to the audience, is
usually used in both the introduction and in the main body
of a topic in an educational video. However while one, or
rarely two, shots of assisted narration (AN) are considered
sufficient for the introduction, the body typically requires
many AN shots. Thus it is important to exploit and fuse
hierarchical decomposition, structure sharing and duration
information in a unified framework to effectively address the
problem of topic transition detection.
The most widely used probabilistic model is the hidden
Markov model (HMM). However, in many cases, the HMM
is unsuitable for video analysis since the strong Markov assumption
makes it too restrictive to capture correlations
over long periods. This limitation is usually overcome in the
literature by the use of a series of HMMs in a hierarchic manner
. The underlying problem in these approaches still is the
manual combination of HMMs at the higher levels which results
in the excessive expense of preparing the training data
and, more importantly, the interaction across higher semantic
levels is not incorporated during model training. One
rigorous approach to overcome this limitation is the use of
the Hierarchical Hidden Markov Model (HHMM), first introduced
in [6] and later extended to handle structure sharing
in [3]. The sophisticated model in [3] allows natural hierarchical
organization of the videos, including any existing
structure sharing, to be modeled rigorously. Practically this
will result in computational savings and a reduction in sample
complexity for learning. Given its advantages, we use
this model in this paper to model educational video content
for topic transition detection.
It is natural to see that durative properties play an important
role in human perception. An excessively long lecture
would bore the students. As such, education-oriented videos
(e.g., news, documentaries, lectures, training videos, etc.)
exhibit strong duration information in their content. We
thus propose an alternative approach towards handling temporal
dependencies over long periods through the explicit
modeling of duration information captured in semi-Markov
models. In these models, a state is assumed to remain unchanged
for some duration of time before it transits to a
new state, and thus it addresses the violation of the strong
Markov assumption from having states whose duration distributions
are non-geometric.
Existing semi-Markov models commonly model duration
distributions as multinomials. Video data is however typically
very long, thus making a multinomial semi-Markov
model unsuitable for video analysis since it would result in
both excessive computation and an excessive number of parameters.
Continuous modeling of duration does exist, such
as in the use of the Gamma distribution, or more generally
the exponential family, described in [12, 16] to provide
more compact parameterization. However, there are still
two limitations applied to these models for video analysis:
(a) learning these distributions requires numerical optimization
and the time complexity still depends on the maximum
duration length, and (b) no hierarchical modeling has been
attempted. Fortunately, in [5], a Switching Hidden Semi-Markov
Model (S-HSMM) is introduced in which the duration
is modeled as a discrete M -phase Coxian distribution
. This model is particularly interesting for video analysis
since: (1) it can model hierarchical decomposition, and
(2) the Coxian duration modeling results in fast learning
and inference, the number of parameters is small and closed-form
estimation exists. Parameterizing long-term temporal
correlations existing in video is thus enriched by both
the hierarchical architecture and the duration modeling at
the bottom level of the S-HSMM.
To model video content, we argue that it is beneficial to
exploit both the hierarchical organization of the videos, their
semantically shared substructures and typical durations of
important semantics. These aspects are all addressed in
this paper in a unified and coherent probabilistic framework
. We use the HHMM and the S-HSMM and propose
a two-phase architecture for detecting topical transition in
educational videos. In the first phase, shots are classified
into meaningful labels. Using classified shot labels, the second
phase trains a hierarchical probabilistic model (HHMM
or S-HSMM) which is then used at a later stage for segmentation
and annotation. Prior knowledge about the domain,
including shared structures, is incorporated into the topological
structure during training.
Our cross-validation on a dataset including a mix of twelve
videos demonstrates promising results. The performances
from the baseline cases (HMM and HSMM) have shown
that they are too restrictive and unsuitable in our detection
scheme, proving the validity of hierarchical modeling.
The performances of the hierarchical models, including the
HHMM and S-HSMM, are shown to surpass all results reported
in earlier work in topic detection [23, 20, 4]. The
superior performance of the S-HSMM over the HHMM has
also demonstrated our belief that duration information is
indeed an important element in the segmentation problem.
Exploiting the hierarchy, structure sharing and duration
in a unified probabilistic framework, our contributions are
twofold: (1) we provide a coherent hierarchical probabilistic
framework for topic detection. Although the current report
concentrates on the educational genre, this framework can
clearly generalize to other genres such as news and documentaries
, and (2) to our knowledge we are the first to investigate
duration and hierarchical modeling for video segmentation1
in a unified framework.
The remainder of this paper is organized as follows. In
the next section, we provide related background to this work.
This is followed by a detailed section on the detection framework
including the description of the HHMM and S-HSMM.
We detail the shot classification phase in Section 4. Experimental
results are then reported in Section 5. Finally, the
conclusion follows in Section 6.
RELATED BACKGROUND
Seeking high-level semantics to move beyond the shots has
been the central theme of much recent research. Attempts
towards this problem have resulted in a fast growing body
of work, and depending on the investigating domain, the abstracting
units appear under different names such as scene,
story, episode for motion pictures; topic, subtopic, macro
segments, story units for information-oriented videos (news,
documentaries, training and educational videos), or general
term like logical story units used in [8, 32]. Unless otherwise stated,
we shall use the term `scene' in this section to mean all of the
aforementioned names.
Early attempts have targeted extracting scene-level concepts
in broadcast programs, in particular news videos (e.g., [9,
14, 26]). In these attempts, the semantic extraction problem
is usually cast as the classification problem. The authors
in [26], for example, combine a number of visual and
aural low-level features together with shot syntax presented
in news videos to group shots into different narrative structures
and label them as anchor-shot, voice-over, or interview.

1 Since topic change coincides with a shot transition, the shot
boundary provides crucial information in detecting topic
transitions; therefore the term `duration' in this work is calculated
in terms of the number of shots. This drastically
simplifies the modeling process. An alternative way of modeling
duration is to uniformly replicate a shot label based on
its length. However, doing this would require an extra modeling
of shot transition knowledge. In this work, we avoid
this complication and concentrate on duration information
based on the shot counts.

Liu et al. [14] propose a video/audio fusion approach
to segment news reports from other categories in broadcast
programs with different types of classifiers (simple threshold
method, Gaussian mixture classifier, and support vector machine
). Ide et al. [9] propose an automatic indexing scheme
for news video where shots are indexed based on the image
content and keywords into five categories: speech/report,
anchor, walking, gathering, and computer graphics. Caption
text information is then used with classified shots to
build the indices.
Segmentation of the news story is the second major theme
explored in the broadcast domain. The common underlying
approach used in these works is the use of explicit `rules'
about the structure of news programs to locate the transitions
of a news story. Commonly accepted heuristics are for
example: a news story often starts and finishes with anchor-person
shots [31]; the start of a news story is usually coupled
with music [2]; or a relative long silence period is the indication
of the boundary between two news stories [33]. More
complicated rules via temporal analysis are also exploited
such as the work of [37] which utilises detection results of
anchor-persons and captions to form a richer set of rules
(i.e., if the same text caption appears in two consecutive
anchor-person shots, then they belong to the same news
story). There is also a body of work which casts the segmentation
problem of news story in a HMM framework [10,
4]. The authors in [10], for example, propose the news segmentation
problem as problem of decoding the maximum
state sequence of a trained HMM whose transition matrix is
tailored by explicit rules about the news program. A somewhat
similar approach to the work in this paper is [4] (whose
results came first in the TRECVID2003 story segmentation
benchmark). Shots in [4] are first classified into a set of common
labels in news (e.g., anchor, 2anchor, text-scene, etc.).
These labels are then input to a HMM for the segmentation
task. They report best performances of 74.9% recall and
80.2% precision for the TRECVID dataset. The work of [4]
however remains limited due to the flat structure HMM, and
it is not clear how the set of `transition' states were chosen.
In an effort to move beyond flat structure, the authors of [4]
have raised the need for high-order statistical techniques,
which will be addressed in this paper through the HHMM
and S-HSMM.
More recent approaches towards scene extraction have
shifted to motion pictures (e.g., [30, 34, 1, 31]). Detecting
scenes in motion pictures is in general a challenging problem
and there are three main existing approaches as outlined
in [31]: temporal clustering-based, rule-based and memory-based
detection. In the clustering-based approach, shots are
grouped into scenes based on visual similarity and temporal
closeness (e.g., [8, 13]). Scene breaks in the rule-based
detection approach are determined based on the semantic
and syntactic analysis of audiovisual characteristics and in
some cases further enhanced with more rigorous grammars
from film theory (e.g., [34, 1]). The authors in [30] propose a
memory-based scene detection framework. Visual shot similarity
in these works is determined based on the consistency
in color chromaticality, and the soundtrack is partitioned
into `audio scenes'. Visual and aural data are then fused
within a framework of memory and attention span model to
find likely scene breaks or singleton events. Further related
background on scene detection can be found in many good
surveys (e.g., [30, 28, 31]).
Existing HMM-based approaches towards modeling long-term
temporal dependencies typically use pre-segmented training
data at multiple levels, and hierarchically train a pool
of HMMs, in which HMMs at the lower levels are used as
input to the HMMs at the upper levels. In principle, some
fundamental units are recognised by a sequence of HMMs,
and then likelihood values (or labels) obtained from these
HMMs are combined to form a hierarchy of HMMs2 to capture
the interactions at higher semantic levels (e.g., [11,
18]). Analysing sports videos, Kijak et al. [11] propose a
two-tiered classification of tennis videos using two layers
of HMMs. At the bottom level, four HMMs are used to
model four shot classes (`first missed serve', `rally', `replay',
and `break'). Each HMM is trained separately and subsequently
topped up by another HMM at the top level which
represents the syntax of the tennis video with three states
of the game: `sets', `games', and `points'. Parameters for
the top HMM are, however, all manually specified. In [18],
a generic two-level hierarchy of HMMs is proposed to detect
recurrent events in movies and talk shows. Their idea
is to use an ergodic HMM at the top level, in which each
state is another (non-ergodic) sub-HMM representing a type
of signal stationary properties. For the case of movies, the
top HMM has six states, and each is in turn another three-state
non-ergodic HMM. The observations are modelled as
a mixture of Gaussians. After training, the authors claim
that interesting events can be detected such as `explosion',
`male speech', and so on. While being able to overcome the
limitation of the flat HMM in modeling long-term dependencies
, approaches that use HMMs at multiple levels still
suffer from two major problems: (1) pre-segmented and annotated
data are needed at all levels for training, and (2)
in most existing work parameterization at higher levels has
to be manually specified. In many cases, preparing training
data at multiple levels is extremely tedious and at worst,
may not be possible. With respect to the second problem,
since each semantic level has to be modeled separately, the
underlying problem is that the interactions across semantic
layers are not modeled and thus do not contribute to the
learning process.
One framework that integrates the semantics across layers
is the Hierarchical Hidden Markov Model (HHMM) proposed
recently in [6]. The hierarchical HMM extends the
standard HMM in a hierarchic manner to allow each state
to be recursively generalised as another sub-HMM, and thus
enabling the ability to handle hierarchical modeling of complex
dynamic processes, in particular "the ability to infer
correlated observations over long periods in the observation
sequence via the higher levels of hierarchy" [6]. The original
motivation in [6] was to seek better modeling of different stochastic
levels and length scales presented in language (e.g.,
speech, handwriting, or text). However, the model introduced
in [6] considers only state hierarchies that have tree
structures, disallowing the sharing of substructures among
the high-level states. Recognizing this need, the authors
in [3] have extended the strict tree-form topology in the
original HHMMs of [6] and allowed it to be a general lattice
structure. The extension thus permits a state at any arbitrary
level of the HHMMs to be shared by more than one
parental state at its higher level (i.e., resulting in a compact
form of parameter typing at multiple levels).

2 Not to be confused with the Hierarchical HMMs.

This extended
form is very attractive for video content modeling since it
allows the natural organization of the video content to be
modeled not only in terms of multiple scales but also in
terms of shared substructures existing in the decomposition.
Further details on the HHMM are provided in Section 3.1.
Early application of the HHMM for video analysis is found
in [36] and later extended in [35]. In these works, the authors
use the HHMM to detect the events of `play' and
`break' in soccer videos. For inference and learning, the
HHMM is `collapsed' into a flat HMM with a very large
product state space, which can then be used in conjunction
with the standard forward/backward passes as in a normal
HMM. Four methods are compared in [36] to detect `play'
and `break': (1) supervised HMMs, in which each category
is trained with a separate HMM, (2) supervised HHMMs,
in which bottom level HMMs are learned separately and
parameters for the upper levels are manually specified, (3)
unsupervised HHMMs without model adaptation, and (4)
supervised HHMMs with model adaptation. In (3) and (4),
two-level HHMMs are used. Their results have shown a very
close match between unsupervised and supervised methods
in which the completely unsupervised method with model
adaptation performs marginally better. These figures are
75.5%, 75.0%, 75.0% and 75.7% respectively for those four
methods. While presenting a novel contribution to the feature
selection and model selection procedure, the application
of the HHMMs in this work is still limited both for learning
and for exploitation of the hierarchical structure. Flattening
a HHMM into a flat HMM as done in [36, 35] suffers from
many drawbacks as criticized in [17]: (a) it cannot provide
multi-scale interpretation, (b) it loses modularity since the
parameters for the flat HMM get constructed in a complex
manner, and (c) it may introduce more parameters, and
most importantly it does not have the ability to reuse parameters
, in other words parameters for the shared sub-models
are not `tied' during the learning, but have to be replicated
and thus lose the inherent strength of hierarchical modeling.
Being able to model shared structures, the extended HHMMs
of [3] allow us to build more compact models, which
facilitates more efficient inference and reduces the sample
complexity in learning. This model is applied in [20] and [22]
for the problem of topic transition detection and video structure
discovery respectively. The authors in [20] use a three-level
HHMM for the detection of topic transitions in educational
videos. Differing from our experiments in this paper
, the HHMM in [20] is modified to operate directly with
continuous-valued observed data via the use of Gaussian
mixture models as the emission probabilities. Each shot-based
observed vector consists of seven features extracted
from visual and audio streams. They report a 77.3% recall
rate and 70.7% precision for the detection task. In another
application, with the help of prior knowledge about educational
videos, a topology for a three-level HHMM is used
in [22] to automatically discover meaningful narrative units
in the educational genre. Their experiments have shown encouraging
results in which many meaningful structures are
hierarchically discovered such as `on-screen narration with
texts', `expressive linkage', `expressive voice-over', etc. The
work of [22] is somewhat similar to that of [18] reviewed
earlier in this section, except the model in [22] allows more
domain knowledge to be encoded and the parameters are all
learned automatically.
THE PROBABILISTIC TOPIC DETECTION FRAMEWORK
Our topic detection framework consists of two phases.
The first phase performs shot detection and low level feature
extraction and then classifies each shot into a meaningful label set.
This phase is described in Section 4. In the next phase,
we train a HHMM or S-HSMM over the alphabet space
from the training data and then use it in conjunction with
the Viterbi algorithm to perform segmentation and annotation. The
architecture of the framework is depicted in Figure-1.
Figure 1: The architecture for the topic detection framework.
The two-level HHMM and the S-HSMM (whose topology
is shown on the top of Figure-1) are special cases of
the hierarchical model with two layers. For the S-HSMM
(HHMM), the top layer is a Markov sequence of switching
variables, while the bottom layer is a sequence of concatenated
HSMMs (HMMs) whose parameters are determined
by the switching variables at the top. Thus, the dynamics
and duration parameters of the HSMM (HMM) at the bottom
layer are not time invariant, but are `switched' from
time to time, similar to the way the linear Gaussian dynamics
are switched in a switching Kalman filter. When
mapping to the topic modeling problem, the bottom layer
is used to capture `atomic' semantics such as voice-over, expressive
linkage or assisted narration. Combinations of these
atomic semantics then form higher-level semantics, each of
which is represented by a hidden state at the top layer in
our model.
3.1 The Hierarchical HMM
With the assumed knowledge of the flat HMM (e.g., see [24]),
we shall now briefly describe the HHMMs. A hierarchical
HMM is formally defined by a three-tuple $\langle \zeta, \theta, \Sigma \rangle$: a topological
structure $\zeta$ parameterized by $\theta$ and an emission alphabet
space $\Sigma$. The topology $\zeta$ specifies the model depth
$D$, the state space $S^d$ available at each level $d$, and the
parent-children relationship between two consecutive levels.
For example, the two-level topology shown on the top of
Figure-1 specifies the children set at the bottom level for
state 1 is {1, 2, 5} and for state 2 is {2, 3, 4}. Here, state 2
at the bottom level has been `shared' by both state 1 and
2 at the top level. Given $\zeta$, the parameter $\theta$ of the HHMM
is specified in the following way. For $d < D$, $p \in S^d$ and
$i, j \in S^{d+1}$ are the children of $p$: $\pi^{d,p}_i$ is the initial probability
of $i$ given $p$; $A^{d,p}_{i,j}$ is the transition probability from $i$
to $j$ given the parent $p$; and $A^{d,p}_{i,\mathrm{end}}$ is the probability that
state $i$ goes to the end-state (i.e., returns control to its parent)
given the parent is $p$. Finally, for each state $i$ at the lowest
level $D$ and an alphabet symbol $v \in \Sigma$: $B_{v|i}$ is the emission probability
of observing $v$ given the current state at the lowest
level is $i$. The whole parameter set is written compactly as
$\theta = \{\pi, A, A_{\mathrm{end}}, B\}$, where

$$\pi = \bigcup_{1 \le d < D,\; p \in S^d} \left\{ \pi^{d,p} : 1 \times M \right\}, \qquad B : |S^D| \times |\Sigma|,$$

$$A = \bigcup_{1 \le d < D,\; p \in S^d} \left\{ A^{d,p} : M \times M \right\}, \qquad A_{\mathrm{end}} = \bigcup_{1 \le d < D,\; p \in S^d} \left\{ A^{d,p}_{\mathrm{end}} : 1 \times M \right\},$$

where in each case $M$ implicitly denotes the number of children
of $p$ and $|\cdot|$ is the cardinality operator. Stochastic constraints
require $\sum_i \pi^{d,p}_i = 1$, $\sum_v B_{v|i} = 1$ and $\sum_j A^{d,p}_{i,j} + A^{d,p}_{i,\mathrm{end}} = 1$.
An intuitive way to view the set $\theta$ is to consider
the subset $\{\pi^{d,p}, A^{d,p}, A^{d,p}_{\mathrm{end}}\}$ as the parameter of the
$p$-initiated Markov chain at level $d$. This chain is terminated
when one of the children $i$ of $p$ reaches the end-state with the
probability of $A^{d,p}_{i,\mathrm{end}}$. For inference and learning, the HHMM
is represented as a dynamic Bayesian network (DBN) and
can be learned by the Asymmetric Inside-Outside algorithm
in [3] or by the forward/backward passes in [17]. Figure-3
shows on its left the DBN representation of the HHMM
with two levels, i.e., D = 2. We refer readers to [6, 17, 3]
for further information on the HHMMs.
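For readers who want to experiment with this parameterization, a lightweight container for the set θ = {π, A, A_end, B} together with a check of the stochastic constraints might look as follows. This is only a sketch; the data layout of the actual implementations in [3, 17] will differ.

```python
import numpy as np

class HHMMParams:
    """Parameters of each p-initiated chain are stored per (level d, parent p)."""

    def __init__(self):
        self.pi = {}     # (d, p) -> length-M vector over the children of p
        self.A = {}      # (d, p) -> M x M transition matrix among the children
        self.A_end = {}  # (d, p) -> length-M vector of exit probabilities
        self.B = None    # |S^D| x |Sigma| emission matrix at the lowest level

    def check(self, tol=1e-8):
        for key in self.pi:
            assert abs(self.pi[key].sum() - 1.0) < tol
            # Each row of A plus the corresponding exit probability sums to 1.
            row_sums = self.A[key].sum(axis=1) + self.A_end[key]
            assert np.allclose(row_sums, 1.0, atol=tol)
        if self.B is not None:
            assert np.allclose(self.B.sum(axis=1), 1.0, atol=tol)
```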
3.2 The Switching Hidden Semi-Markov Model
To provide an intuitive view of the S-HSMM, starting
from the description of the HHMMs from the previous section
, let us consider the case of a two-layer HHMM (D = 2)
defined as follows. The state space is divided into the set of
states at the top level $Q^* = S^1 = \{1, \ldots, |Q^*|\}$ and states
at the bottom level $Q = S^2 = \{1, \ldots, |Q|\}$. This model is
parameterized by $\theta = \{\pi^*, A^*, \pi, A, A_{\mathrm{end}}, B\}$.
At the top level, $\pi^*_p$ and $A^*_{pq}$ are respectively the initial
probability and the transition matrix of a Markov chain defined
over the state space $Q^*$. For each $p \in Q^*$, $ch(p) \subset Q$ is
used to denote the set of children of $p$. As in the case of the
extended HHMM in [3], it is possible that different parent
states may share common children, i.e., $ch(p) \cap ch(q) \neq \emptyset$ for
some $p, q \in Q^*$. A transition to $p$ in the top level Markov chain
will initiate a sub-Markov chain at the lower level over the
state space $ch(p)$ parameterized by $\{\pi^p, A^p, A^p_{\mathrm{end}}\}$, where
$\pi^p_i$ and $A^p_{ij}$ are the initial and transition probabilities as in the
normal HMM setting, and $A^p_{i,\mathrm{end}}$ is the probability that this
chain will terminate after a transition to $i$. At each time
point $t$, a discrete symbol $y_t$ is generated with a probability
of $B_{v|i}$, where $i$ is the current state at the bottom
level. In the description of this two-level HHMM, the duration
$d$ for which a bottom state $i$ remains the same clearly
has a geometric distribution parameterized by its non-self-transition
probability $(1 - A^p_{ii})$, i.e., $d \sim \mathrm{Geom}(1 - A^p_{ii})$.
In many cases, the geometric distributions are often too
restricted to model realistic data. The Switching Hidden
Semi-Markov Model (S-HSMM) proposed in [5] overcomes
this restriction and allows the duration $d$ of state $i$ at the
bottom level to follow a more general discrete distribution
$d \sim D^{p,i}_d$. More precisely, the $p$-initiated chain at the bottom
level is now a semi-Markov sequence parameterized by
$\{\pi^p_i, A^p_{ij}, D^{p,i}_d\}$ as opposed to the normal Markov chain in the
HHMM case. The authors in [5] consider two families of distributions
for modeling the duration: the multinomial and
the Coxian. However for the multinomial case, the complexity
of the learning algorithm is proportional to the maximum
duration length, thus making it unsuitable for the problem
of modeling video data which is usually very long in nature.
Apart from the disadvantage of assuming a maximum duration
, our empirical testing on the multinomial case with
the maximum length of 50 has also shown that it is about
20 times slower than its Coxian counterpart reported in this
paper, thus making it impractical in our settings. We will
therefore omit the multinomial case and will consider exclusively
the Coxian parameterization in this paper.
A discrete M-phase Coxian distribution $\mathrm{Cox}(\mu; \lambda)$, parameterized
by $\mu = \{\mu_1, \ldots, \mu_M\}$ (with $\sum_i \mu_i = 1$) and $\lambda = \{\lambda_1, \ldots, \lambda_M\}$,
is defined as the mixture $\sum_{i=1}^{M} \mu_i S_i$, where
$S_i \sim (X_i + \ldots + X_M)$, in which the $X_i$ are independent random
variables having geometric distributions $X_i \sim \mathrm{Geom}(\lambda_i)$.
This distribution is a member of the phase-type distribution
family and has the following very appealing interpretation.
Let us construct a Markov chain with $M + 1$ states
numbered sequentially with the self-transition parameter
$A_{ii} = 1 - \lambda_i$ as shown in Figure-2. The first $M$ states represent
$M$ phases, while the last is the absorbing state, which
acts like an end state.

Figure 2: The phase diagram of an M-phase Coxian.

The duration of each individual state
(phase) $i$ is $X_i \sim \mathrm{Geom}(\lambda_i)$. If we start from state $i$, the
duration of the Markov chain before the end state is reached is
$S_i = X_i + \ldots + X_M$. Thus, $\mathrm{Cox}(\mu, \lambda)$ is indeed the distribution
of the duration of this constructed Markov chain with
$\mu$ as the initial state (phase) distribution. The discrete Coxian
is much more flexible than the geometric distribution:
its probability mass function is no longer monotonically decreasing
and it can have more than one mode.
Using the Coxian distribution, the duration for the states
at the bottom level in the S-HSMM is modeled as follows.
For each $p$-initiated semi-Markov sequence, the duration of a
child state $i$ is distributed according to $D^{p,i}_d = \mathrm{Cox}(d; \mu^{p,i}, \lambda^{p,i})$.
The parameters $\mu^{p,i}$ and $\lambda^{p,i}$ are $M$-dimensional vectors,
where $M$ is a fixed number representing the number of phases
in the discrete Coxian. It is easy to verify that for $M = 1$,
the model reduces identically to a two-layer HHMM.
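To make the phase-type construction concrete, the sketch below computes the probability mass function of a discrete M-phase Coxian by stepping the absorbing Markov chain of Figure-2. It is an illustration only, not the estimation code used in [5].

```python
import numpy as np

def coxian_pmf(mu, lam, d_max):
    """P(D = d) for d = 1..d_max of a discrete Coxian Cox(mu, lam).

    mu:  initial phase distribution (length M, sums to 1)
    lam: per-phase leaving probabilities (length M)
    Phase j stays with probability 1 - lam[j] and moves to phase j + 1 with
    probability lam[j]; the last phase exits to the absorbing end state.
    """
    mu, lam = np.asarray(mu, float), np.asarray(lam, float)
    M = len(mu)
    T = np.zeros((M, M))
    for j in range(M):
        T[j, j] = 1.0 - lam[j]
        if j + 1 < M:
            T[j, j + 1] = lam[j]
    pmf = []
    v = mu.copy()
    for _ in range(d_max):
        pmf.append(v[M - 1] * lam[M - 1])  # absorption happens from the last phase
        v = v @ T                          # advance the chain one time step
    return np.array(pmf)

# Example: a 3-phase Coxian; the resulting pmf need not be monotonically decreasing.
print(coxian_pmf([0.2, 0.3, 0.5], [0.3, 0.5, 0.4], 10))
```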
3.3 Inference and Learning in the S-HSMM
For inference and learning, the S-HSMM is represented as
a dynamic Bayesian network as shown in Figure-3 and then
forward/backward passes are applied to compute the filtering
and smoothing distributions required for EM learning.
Figure 3: Two-slice DBN representation of a two-level HHMM (left) and the (Coxian) S-HSMM (right).
At each time-slice $t$, an amalgamated hidden state $S_t = \{z_t, \epsilon_t, x_t, e_t, m_t\}$
together with the observation $y_t$ is maintained.
The top level state is updated via $z_t$, and $\epsilon_t$ is a
boolean-valued variable set to 1 when the $z_t$-initiated semi-Markov
sequence ends at $t$. At the bottom level, $x_t$ is the
current child state in the $z_t$-initiated chain, $m_t$ represents
the current phase of $x_t$, and $e_t$ is a boolean-valued variable
set to 1 when $x_t$ reaches the end of its duration. The forward
and backward procedures in the general DBN are then used
to compute the filtering distribution $\Pr(S_t \mid y_{1:t})$ and two
smoothing distributions $\Pr(S_t \mid y_{1:T})$ and $\Pr(S_t, S_{t+1} \mid y_{1:T})$.
With these smoothing distributions, it is sufficient to derive
all expected sufficient statistics required during EM learning.
The overall complexity for the forward pass (and also
for the EM) is $O(|Q|^2 |Q^*|^2 M T)$. Further information can
be found in [5].
3.4 Viterbi decoding for segmentation
To compute the best state sequence, that is, to find

$$S^*_{1:T} = \operatorname*{argmax}_{S_{1:T}} \Pr(S_{1:T} \mid y_{1:T}),$$

Viterbi decoding algorithms for the HHMM and S-HSMM
are developed. These algorithms are similar to the one used
in the standard HMM outlined in [24], except we replace
the normal state in the HMM setting by our amalgamated
state $S_t$, which is $\{z_t, x_t, \epsilon_t, m_t, e_t\}$ for the S-HSMM and
$\{z_t, x_t, \epsilon_t, e_t\}$ for the HHMM (cf. Figure-3).
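The recursion itself is the textbook one. The sketch below shows a standard Viterbi pass over a generic discrete state space; in the detection framework it would be applied with the amalgamated state S_t (and the corresponding DBN transition and emission terms) in place of the plain HMM state.

```python
import numpy as np

def viterbi(pi, A, B, obs):
    """Most likely state sequence for a discrete HMM.

    pi: (K,) initial state probabilities
    A:  (K, K) transition matrix
    B:  (K, V) emission matrix
    obs: list of observation indices
    Zero probabilities become -inf in log space, which is fine for ranking.
    """
    K, T = len(pi), len(obs)
    delta = np.zeros((T, K))
    psi = np.zeros((T, K), dtype=int)
    delta[0] = np.log(pi) + np.log(B[:, obs[0]])
    for t in range(1, T):
        scores = delta[t - 1][:, None] + np.log(A)   # scores[i, j]: from i to j
        psi[t] = scores.argmax(axis=0)
        delta[t] = scores.max(axis=0) + np.log(B[:, obs[t]])
    path = [int(delta[T - 1].argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(psi[t][path[-1]]))
    return path[::-1]
```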
SHOT-BASED SEMANTIC CLASSIFICATION
In this section, we detail the first phase in the detection
framework. This includes the formulation of an alphabet
set for shot labeling, low-level feature extraction and shot
classification.
4.1 Shot label set
Existing work on the educational videos analysis (e.g., [21,
19]) has studied the nature of this genre carefully. As noted
in [21], the axiomatic distinction of the educational genre is
in its purpose of teaching and training; and as such a well-crafted
segment that moves viewers to actions or retains
a long-lasting message requires elaborative directing skills.3
Based on a narrative analysis used in the educational domain
and observed rules and conventions in the production of this
media, the authors in [21] propose a hierarchy of narrative
structures at the shot level as shown in Figure-4.
In this paper, we select the five most meaningful structures
from this hierarchy for experimentation. This set
includes: direct-narration (DN), assisted-narration (AN),
voice-over (VO), expressive-linkage (EL), and functional-linkage
(FL). We shall now briefly describe these narratives.
Direct-narration (DN) and assisted-narration (AN) are referred
to jointly as on-screen narration, which refer to the
segments with the appearance of the narrator. The purpose
of these sections is to speak to the viewers with the
voice of authority, and is commonly used to demarcate a
new topic or subtopic, to clarify a concept or to lead the
viewers through a procedure with examples. DN is a more
strict form of on-screen narration. It involves eye-to-eye
contact where the narrator speaks to the viewers directly.
An analogy from news video is the anchor-shot. AN refers
to parts of the video when a narrator appears in a more
diverse style, and the attention of the viewers is not necessarily
focused on him or her. Here, the purpose is not only
to talk to the viewers, but also to emphasize a message by
means of text captions and/or to convey an experience via
background scenes. A similar structure from news for AN
is the reporting shot. Assisted narration can be used both
in the introduction of a topic or in the main body, and thus
this structure should be shared4 by both higher semantics
`introduction' and `main body'. As we see later, this knowledge
is explicitly modeled and incorporated in the design of
the topology for the S-HSMM. An important feature is that
although the semantics of AN is shared, the typical durations
are different when it is used in the introduction or the
main body respectively. An AN section used to demarcate
a new topic usually contains only one, and sometimes two
shots, while an AN section used in the main body is typically
long, spanning a number of shots. Conditioning on the
parent (i.e., introduction or main body), the typical duration
distribution of the AN section is learned automatically
for each case by our model.
The voice-over (VO) structure is identified as sections
where the audiotrack is dominated by the voice of the narrator
, but without his or her appearance. The purpose of
these segments is to communicate with the viewers via the
narrator's voice. Additional pictorial illustration is usually
further shown in the visual channel.
Expressive linkage (EL) and Functional linkage (FL) belong
to the same broader linkage group in the hierarchy in
Figure-4. The purpose of the linkage structure is to maintain
the continuity of a story line but there is neither on-screen
nor voice-over narration involved. Functional linkage
contains transition shots encountered in switching from one
subject to the next. Usually, large superimposed text captions
are used and the voice narration is completely stopped
with possibly music played in the background.

3 We note that the two closest video genres to educational
videos are news and documentaries. In the description of what
follows on the educational genre, we can spot several similarities
across these genres.
4 In terms of parameterization, it is a form of parameter tying.

Figure 4: The hierarchy of narrative structures in educational
videos proposed in [21].

Expressive
linkage, on the other hand, is used to create `mood' for the
subject being presented. For example, in the video presenting
the fire safety topic, there is a segment in which the
narration is completely stopped and then a sequence of pictures
of the house on fire is shown. These scenes obviously
do not give any direct instruction, rather they create a sense
of `mood' that helps the video to be more appealing and interesting
.
4.2 Feature extraction and shot classification
The feature set and method for shot classification described
in [21] is employed in this paper. The feature set
is extracted from both visual and audio streams at the shot-based
level. From the image sequence, we choose to detect
the frontal faces to reflect the appearance of the narrator
using the CMU face detection algorithm [25]; and captioned
texts as one of the common means of conveying information
in educational videos using the algorithm described in [27].
In order to classify a shot into direct-narration, voice-over,
linkage, etc., further information is sought from the audio
stream. Audio features are computed as the percentage of
the following audio classes within a shot: vocal speech, music
, silence, and non-literal sound. A shot is then classified
into one of the elements of $\Sigma = \{DN, AN, VO, EL, FL\}$ using
the classification framework reported in [21]. Since we
claim no contribution at this stage, we shall refer readers
to [21] for full details on this classification scheme.
EXPERIMENTAL RESULTS
Our dataset D consists of 12 educational and training
videos containing different types of subjects and presentational
styles, and thus this constitutes a relatively noisy set
of data. We manually provide groundtruth for these videos
with topic transitions. In some cases, the groundtruth for
topic transitions comes directly from the hardcopy guidelines
supplied by the producer.
At the pre-processing stage, Webflix [15] is used to perform
shot transition detection and all detection errors are
corrected manually. Since our contribution from this paper
is at the semantic level, the latter step is to ensure an error
at the shot detection does not influence the performance of
the system at higher levels. Since educational videos mainly
contain cut and dissolve transitions, the shot detection accuracy
is found to be very high with rare cases being erroneous.
Given shot indices, each video is processed as described in
Section 4, and then each shot S is labeled as one of the
elements of $\Sigma = \{DN, AN, VO, EL, FL\}$.
5.2 Model topology and parameterization
We will use four models in this experiments: the flat HMM
and HSMM (as the baseline cases), the HHMM and the S-HSMM
. For the flat HMM and HSMM, we range the number
of states from 2 to 5 with the observation space $\Sigma$, where 2 is
intended to be the minimum number of states required (like
`intro' and `main body') and 5 is the number of alphabets
(i.e., in the relaxed way that the number of states equates to
the number of alphabets). The semi-Markov version HSMM
is further parameterized by 3-phase Coxian distributions as
the duration distributions of the states. The choice of M = 3
phases is hinted by the results reported in [5] where M = 3
has resulted in best performances.
For the HHMM and the S-HSMM, the topology shown in
the top of Figure-1 is used to construct the S-HSMM in this
experiment. This topology specifies $|Q^*| = 2$ states at the top
level where state 1 and 2 correspond to the introduction and
the main body of the topic respectively. The Markov chain
at this level is similar to the flat HMM used in [4] for news
story segmentation5 reviewed in Section 2. We incorporate
the assumed prior knowledge that a topic usually starts with
either direct-narration, assisted-narration or functional linkage
, thus state 1 has {1, 2, 5} as its children set. Similarly,
the main body can contain assisted-narration, voice-over or
expressive linkage, hence its children set is {2, 3, 4}. Here
state 2 (assisted narration) has been shared by both parent
state 1 (`intro') and 2 (`main body'). The bottom level
has 5 states corresponding to 5 shot labels. To map the
labels to the bottom states, we construct a diagonal-like B
observation matrix and fix it, i.e., we do not learn B. The
diagonal entries of B are set to 0.99 to relax the uncertainty
during the classification stage. The duration models in the
S-HSMM are Coxian distributions with M = 3 phases.
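One possible encoding of this topology and of the fixed observation matrix is sketched below. The variable names are ours, and how the residual 0.01 of each row of B is spread off the diagonal is an assumption; the text only states that the diagonal entries are fixed at 0.99.

```python
import numpy as np

# Top-level states: 1 = `introduction', 2 = `main body'.
# Bottom-level states follow the shot labels DN, AN, VO, EL, FL (1..5);
# AN (state 2) is shared by both parents.
children = {1: [1, 2, 5], 2: [2, 3, 4]}

labels = ["DN", "AN", "VO", "EL", "FL"]
K = len(labels)

# Diagonal-like emission matrix, fixed rather than learned: 0.99 on the
# diagonal, the remaining mass spread uniformly (an assumption) elsewhere.
B = np.full((K, K), 0.01 / (K - 1))
np.fill_diagonal(B, 0.99)
assert np.allclose(B.sum(axis=1), 1.0)

M_PHASES = 3  # number of Coxian phases per bottom-state duration model
```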
5.3 Detection Results
Given the dataset D, our evaluation employs a leave-one-out
strategy to ensure an objective cross-validation. We sequentially
pick out a video V and use the remainder set
{D \ V } to train the model, and then use V for testing. In
the results that follow, this method is used for all cases including
the flat HMM, the flat HSMM, hierarchical HMM,
and the S-HSMM. A topic transition is detected when the introduction
state at the top level is reached during the Viterbi
decoding. Examples of Viterbi decoding with the S-HSMM
and HHMM are shown in Figure-5.
5 They called `transition' and `internal' states instead of `introduction'
and `main body'.

To measure the performance, in addition to the well-known
recall (recall) and precision (prec) metrics, we include the
F-score (f-score) metric defined as:
$$\text{f-score} = \frac{2 \cdot \text{recall} \cdot \text{prec}}{\text{recall} + \text{prec}} = 2 \left( \frac{1}{\text{recall}} + \frac{1}{\text{prec}} \right)^{-1}$$
While the recall rate measures how well the system can recover
the true topic transitions, and high precision ensures
that it does not over-segment the video, the F-score shows
the overall performance of the system. In the ideal case
when recall=prec=100%, clearly f-score = 1, i.e., the
highest performance the system can achieve.
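For concreteness, all three measures can be computed from the true-positive (TP), false-positive (FP) and ground-truth (GT) counts; fed with the S-HSMM totals of Table 2 (TP = 131, FP = 17, GT = 155), the sketch below reproduces roughly the 84.5% recall, 88.5% precision and 0.865 F-score of Table 1.

    def scores(tp, fp, gt):
        recall = tp / gt                  # fraction of true transitions recovered
        prec = tp / (tp + fp)             # fraction of detections that are correct
        f_score = 2 * recall * prec / (recall + prec)
        return recall, prec, f_score

    print(scores(131, 17, 155))  # S-HSMM totals -> (~0.845, ~0.885, ~0.865)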
The baseline cases: flat HMM and HSMM
Since initialization is crucial during EM learning, we apply
multiple random restart points when conducting the experiments
, including the uniform initialization. Although
several restarts were used, the flat HMM is found to yield
extremely poor results in all cases. Even when we train and
test on the same dataset, the flat HMM still produces poor
detection results, proving to be unsuitable in our topical
transition detection settings.
The flat HSMM produces slightly better results than the flat HMM, but across all ten runs the performance remains very low (recall = 7.74% and prec = 48% in the best case).
The poor performance of the HMM and HSMM is of no surprise, since their forms are too strict to model a rather high-level concept such as the `topic'. Furthermore, with their flat structures, they offer no mechanism to incorporate prior domain knowledge such as that used in the topology of the S-HSMM and HHMM. This clearly shows that hierarchical models are much more suitable for video analysis than flat ones. Given the poor results in the flat-structure cases, we omit the HMM and HSMM from the discussion that follows.
Detection with the S-HSMM and HHMM
The recall rate, precision and F-score for representative runs are reported in Table 1, in which the best performances are highlighted in bold. The detection results for each individual video in the best cases are shown in Table 2. With different random restarting points, including the uniform initialization, the performance of the HHMM ranges from poor to very good (41.29% to 83.23% for recall and 80.00% to 84.47% for precision), whereas the S-HSMM consistently yields good results (83.87% to 84.52% for recall and 87.92% to 88.51% for precision).
Since nothing from the test video is exposed during training, we also report (in the second part of Table 1) the performances of the HHMM and S-HSMM under a likelihood-based `best model selection' scheme. This scheme works as follows. As in the leave-one-out strategy, let V be a video selected from D, and let N be the number of times we train the model using the dataset {D \ V } (i.e., without
V ). For the i-th run (i = 1 . . . N), let the learned model and its likelihood at convergence be denoted by lambda_i(V) and L_i(V) respectively. We then use the model lambda_{i*} to test on the unseen video V, where i* = argmax_{i=1...N} L_i(V). Simply speaking
, we sequentially `throw away' a video V , then select the
best model (i.e., highest likelihood) among all runs to test
on V . For the HHMM, the result stays the same as when
we choose the best performance based on the F-score. For
the S-HSMM, the recall stays the same, while the precision
slightly decreases from 88.51% to 87.92%. Nevertheless, the
S-HSMM is still superior to the HHMM.
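A sketch of the likelihood-based `best model selection' scheme; train_model is again a hypothetical routine that returns a learned model together with its likelihood at convergence.

    def best_model_for(held_out, dataset, train_model, n_runs=5):
        # train each restart on D \ V and keep the run with the highest likelihood
        runs = [train_model([v for v in dataset if v is not held_out], restart=i)
                for i in range(n_runs)]               # each run -> (model, likelihood)
        best_model, _ = max(runs, key=lambda r: r[1])
        return best_model                             # then test best_model on held_out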
                         recall (%)   prec (%)   f-score
Results for best performance selection
HHMM      Uniform          42.58        81.48      0.559
          Rand. 1          83.23        84.47      0.840
          Rand. 2          83.23        84.87      0.840
          Rand. 3          83.23        84.87      0.840
          Rand. 3          41.29        80.00      0.545
          Rand. 4          83.87        83.87      0.839
S-HSMM    Uniform          84.52        87.92      0.862
          Rand. 1          84.52        88.51      0.865
          Rand. 2          83.87        87.25      0.855
          Rand. 3          84.52        88.51      0.865
          Rand. 4          83.87        87.25      0.855
          Rand. 5          84.52        88.51      0.865
Results for best model selection
HHMM                       83.23        84.87      0.840
S-HSMM                     84.52        87.92      0.862
Table 1: Detection Performances for the S-HSMM and the
HHMM. Best performance for each case is highlighted in
bold (we note that best performances are attained in multiple
cases and we select one of them to highlight).
Tables 1 and 2 show that modeling with the S-HSMM results in better performance than the HHMM in both recall and precision rates. As a result, the F-score improves from 0.840 to 0.865. While the recall rate improves only slightly, the 4% improvement in precision indicates that the HHMM tends to over-segment the video more frequently than the S-HSMM. This confirms our belief that duration information is an important factor in our topic transition detection setting. The semi-Markov modeling has effectively overcome the limitation of the strict Markov assumption of {future ⊥ past | present} (i.e., the future is conditionally independent of the past given the present) in the flat HMM, allowing longer temporal dependency to be captured via the duration of the state. Nevertheless, even on the somewhat more contained dataset used in this experiment, the results from both the S-HSMM and the HHMM are better than the previous news story detection results reported in [4] (which came first in the TRECVID 2003 testbed) and the heuristic and Bayesian approaches to topic detection in [23, 21]. These results not only imply the advantages of the S-HSMM over the HHMM, but also show the contribution of the HHMM in its own right.
CONCLUSION
In this paper we explore the difficult problem of detecting
topic transitions through the use of two probabilistic models
, the HHMM and the S-HSMM. Both allow the modeling
of hierarchy and the sharing of substructures within the hierarchy
, whilst the S-HSMM additionally allows the explicit
modeling of durative properties. Coupled with the use of the
Coxian model, we show how this unified framework performs
better than the baseline cases (the flat HMM and HSMM)
and previously reported results. In particular, the use of the
S-HSMM demonstrates that the modeling of duration is a
powerful tool in the extraction of higher level semantics.
Video               TP(S-HSMM) TP(HHMM)   FP(S-HSMM) FP(HHMM)   Miss(S-HSMM) Miss(HHMM)   GT
1 - "EESafety"          10        8            1         3             3          5        13
2 - "SSFall"             4        4            1         1             2          2         6
3 - "ElectS"             6        6            2         1             2          2         8
4 - "TrainHaz"          18       20            2         2             3          1        21
5 - "EyeS"              10       10            0         1             0          0        10
6 - "FootS"             10       10            1         1             1          1        11
7 - "HKeeping"          11       11            3         3             1          1        12
8 - "Maintn"             9        8            1         3             4          5        13
9 - "HandS"              9        9            1         1             1          1        10
10 - "SBurning"         19       19            1         1             2          2        21
11 - "HeadProt"          6        5            1         3             1          2         7
12 - "WeldingS"         19       19            3         3             4          4        23
Sum                    131      129           17        23            24         26       155
Table 2: Detection results for each video in the best performance cases of the S-HSMM and the HHMM (TP: True Positive, FP: False Positive, GT: Ground-Truth).
The results demonstrate the promise of the approach and
although the results are demonstrated with the educational
and training film genre, the method can easily be applied to
other genres. We believe that the promise of the approach
lies in its unified probabilistic handling of durative properties
and shared hierarchical structure, allowing it to handle
long video sequences with inherent variability and complicated
semantics.
Acknowledgement
Hung Bui is supported by the Defense Advanced Research
Projects Agency (DARPA), through the Department of the
Interior, NBC, Acquisition Services Division, under Contract
No. NBCHD030010.
REFERENCES
[1] B. Adams, C. Dorai, and S. Venkatesh. Automated
film rhythm extraction for scene analysis. In IEEE
International Conference on Multimedia and Expo,
pages 1056-1059, Tokyo, Japan, August 2001.
[2] P. Aigrain, P. Jolly, and V. Longueville. Medium
knowledge-based macro-segmentation of video into
sequences. In M. Maybury, editor, Intelligent
Multimedia Information Retrieval, pages 159-174.
AAAI Press/MIT Press, 1998.
[3] H. H. Bui, D. Q. Phung, and S. Venkatesh.
Hierarchical hidden markov models with general state
hierarchy. In D. L. McGuinness and G. Ferguson,
editors, Proceedings of the Nineteenth National
Conference on Artificial Intelligence, pages 324-329,
San Jose, California, USA, 2004. AAAI Press / The
MIT Press.
[4] L. Chaisorn, T.-S. Chua, C.-H. Lee, and Q. Tian. A
hierarchical approach to story segmentation of large
broadcast news video corpus. In IEEE International
Conference on Multimedia and Expo, Taipei, Taiwan,
June 2004.
[5] T. V. Duong, H. H. Bui, D. Q. Phung, and
S. Venkatesh. Activity recognition and abnormality
detection with the Switching Hidden Semi-Markov
Model. In IEEE Int. Conf. on Computer Vision and
Pattern Recognition, volume 1, pages 838-845, San
Diego, 20-26 June 2005. IEEE Computer Society.
[6] S. Fine, Y. Singer, and N. Tishby. The hierarchical
hidden markov model: Analysis and applications.
Machine Learning, 32(1):41-62, 1998.
[7] A. Hanjalic. Shot-boundary detection: Unraveled and
resolved? IEEE Transaction in Circuits and Systems
for Video Technology, 12(2):90-105, 2002.
[8] A. Hanjalic, R. L. Lagendijk, and J. Biemond.
Automated high-level movie segmentation for
advanced video retrieval systems. IEEE Transactions
in Circuits and Systems for Video Technology,
9(4):580-588, 1999.
[9] I. Ide, K. Yamamoto, and H. Tanaka. Automatic video
indexing based on shot classification. In First
International Conference on Advanced Multimedia
Content Processing, pages 99-114, Osaka, Japan,
November 1998.
[10] U. Iurgel, R. Meermeier, S. Eickeler, and G. Rigoll.
New approaches to audio-visual segmentation of TV
news for automatic topic retrieval. In IEEE Int. Conf.
on Acoustics, Speech, and Signal Processing, volume 3,
pages 1397-1400, Salt Lake City, Utah, 2001.
[11] E. Kijak, L. Oisel, and P. Gros. Hierarchical structure
analysis of sport videos using HMMs. In Int. Conf. on
Image Processing, volume 2, pages II-1025-8, vol. 3,
2003.
[12] S. E. Levinson. Continuously variable duration hidden
markov models for automatic speech recognition.
Computer Speech and Language, 1(1):29-45, March
1986.
[13] T. Lin and H. J. Zhang. Automatic video scene
extraction by shot grouping. Pattern Recognition,
4:39-42, 2000.
[14] Z. Liu and Q. Huang. Detecting news reporting using
audio/visual information. In International Conference
on Image Processing, pages 24-28, Kobe, Japan,
October 1999.
[15] Mediaware-Company. Mediaware solution webflix
professional V1.5.3, 1999.
http://www.mediaware.com.au/webflix.html.
[16] C. D. Mitchell and L. H. Jamieson. Modeling duration
in a hidden markov model with the exponential
family. In Proc. of IEEE Int. Conf. on Acoustics,
Speech, and Signal Processing, pages II.331-II.334,
Minneapolis, Minnesota, April 1993.
[17] K. Murphy and M. Paskin. Linear-time inference in
hierarchical HMMs. In T. G. Dietterich, S. Becker,
and Z. Ghahramani, editors, Advances in Neural
Information Processing Systems, Cambridge, MA,
2001. MIT Press.
[18] M. R. Naphade and T. S. Huang. Discovering
recurrent events in video using unsupervised methods.
In Int. Conf. on Image Processing, volume 2, pages 13-16, Rochester, NY, USA, 2002.
[19] D. Q. Phung. Probabilistic and Film Grammar Based
Methods for Video Content Analysis. PhD thesis,
Curtin University of Technology, Australia, 2005.
[Figure 5 here: decoded state sequences (z_t, x_t, m_t, e_t) for the S-HSMM and the HHMM over the first 45 shots, shown against the detected topic transitions and the ground-truth; see the caption below.]
Figure 5: Example of Viterbi decoding for the S-HSMM and the HHMM for the first 45 shots of video `EESafety'. These
results should be read together with Figure-3 to see the semantics of the DBN structure.
[20] D. Q. Phung, H. H. Bui, and S. Venkatesh. Content structure discovery in educational videos with shared structures in the hierarchical HMMs. In Joint Int. Workshop on Syntactic and Structural Pattern Recognition, pages 1155-1163, Lisbon, Portugal, August 18-20, 2004.
[21] D. Q. Phung and S. Venkatesh. Structural unit
identification and segmentation of topical content in
educational videos. Technical report, Department of
Computing, Curtin University of Technology, 2005.
TR-May-2005.
[22] D. Q. Phung, S. Venkatesh, and H. H. Bui.
Automatically learning structural units in educational
videos using the hierarchical HMMs. In International
Conference on Image Processing, Singapore, 2004.
[23] D. Q. Phung, S. Venkatesh, and C. Dorai. High level
segmentation of instructional videos based on the
content density function. In ACM International
Conference on Multimedia, pages 295-298, Juan Les
Pins, France, 1-6 December 2002.
[24] L. R. Rabiner. A tutorial on hidden markov models
and selected applications in speech recognition. In
Procs. IEEE, volume 77, pages 257-286, February
1989.
[25] H. A. Rowley, S. Baluja, and T. Kanade. Neural
network-based face detection. IEEE Transactions on
Pattern Analysis and Machine Intelligence,
20(1):23-38, January 1998.
[26] K. Shearer, C. Dorai, and S. Venkatesh. Incorporating
domain knowledge with video and voice data analysis.
In Workshop on Multimedia Data Mining, Boston,
USA, August 2000.
[27] J.-C. Shim, C. Dorai, and R. Bolle. Automatic text
extraction from video for content-based annotation
and retrieval. In International Conference on Pattern
Recognition, volume 1, pages 618-620, Brisbane,
Australia, August 1998.
[28] C. G. Snoek and M. Worring. Multimodal video
indexing: A review of the state-of-the-art. Multimedia
Tools and Applications, 2004. In Press.
[29] H. Sundaram. Segmentation, Structure Detection and
Summarization of Multimedia Sequences. PhD thesis,
Columbia University, 2002.
[30] H. Sundaram and S.-F. Chang. Computable scenes
and structures in films. IEEE Transactions in
Multimedia, 4(4):482-491, 2002.
[31] B. T. Truong. An Investigation into Structural and
Expressive Elements in Film. PhD thesis, Curtin
University of Technology, 2004.
[32] J. Vendrig and M. Worring. Systematic evaluation of
logical story unit segmentation. IEEE Transactions on
Multimedia, 4(4):492-499, 2002.
[33] C. Wang, Y. Wang, H. Liu, and Y. He. Automatic
story segmentation of news video based on
audio-visual features and text information. In Int.
Conf. on Machine Learning and Cybernetics,
volume 5, pages 3008-3011, 2003.
[34] J. Wang, T.-S. Chua, and L. Chen. Cinematic-based
model for scene boundary detection. In The Eight
Conference on Multimedia Modeling, Amsterdam,
Netherlands, 5-7 November 2001.
[35] L. Xie and S.-F. Chang. Unsupervised mining of
statistical temporal structures in video. In
A. Rosenfield, D. Doreman, and D. Dementhons,
editors, Video Mining. Kluwer Academic Publishers,
June 2003.
[36] L. Xie, S.-F. Chang, A. Divakaran, and H. Sun.
Learning hierarchical hidden markov models for
unsupervised structure discovery from video.
Technical report, Columbia University, 2002.
[37] X. Zhu, L. Wu, X. Xue, X. Lu, and J. Fan. Automatic
scene detection in news program by integrating visual
feature and rules. In IEEE Pacific-Rim Conference on
Multimedia, pages 837842, Beijing, China, 2001.
20 | domain knowledge;Topic Transition Detection;A variety in directional styles;semantic relationship of neighborhood scenes;coxian switching hidden semi-markov model;natural hierarchical organization of videos;model educational video content;extended Hierarchical Hidden Markov Model;unified and coherent probabilistic framework;Educational Videos;shot-based semantic classification;their semantically shared substructures;topic transition detection;probabilistic framework;Coxian Switching Hidden semi-Markov Model;Coxian;Hierarchical Markov (Semi-Markov) Models;typical durations of important semantics;modeling temporal correlation;hierarchical hidden markov model |
197 | Towards Content-Based Relevance Ranking for Video Search | Most existing web video search engines index videos by file names, URLs, and surrounding texts. These types of video metadata roughly describe the whole video in an abstract level without taking the rich content, such as semantic content descriptions and speech within the video, into consideration. Therefore the relevance ranking of the video search results is not satisfactory as the details of video contents are ignored. In this paper we propose a novel relevance ranking approach for Web-based video search using both video metadata and the rich content contained in the videos. To leverage real content into ranking, the videos are segmented into shots, which are smaller and more semantic-meaningful retrievable units, and then more detailed information of video content such as semantic descriptions and speech of each shots are used to improve the retrieval and ranking performance. With video metadata and content information of shots, we developed an integrated ranking approach, which achieves improved ranking performance. We also introduce machine learning into the ranking system, and compare them with IRmodel (information retrieval model) based method. The evaluation results demonstrate the effectiveness of the proposed ranking methods. | INTRODUCTION
Multimedia search has become an active research field due to the
rapid increase of online-available content and new practical
applications. Search technology is considered the key to
navigating the Internet's growing media (video, audio and image)
collections. Google, Yahoo, Blinkx and other search companies
have provided elementary video search engines. However,
existing video search engines are all based on the text information
related to the video which can be retrieved from web pages, such
as file names, URLs, and surrounding texts. These types of textual
information can be considered as "metadata" of the video since
they only roughly describe the video. There is no doubt that text
searching is the most efficient way to retrieve information (even
when searching for videos), because it well matches the manner of
human thinking. However, using only metadata falls far short of people's expectations for video searching, because even in the best-case scenario the metadata is only a highly concentrated overview of a video, with many details lost.
In general, a video consists of many shots and sub-events with a
temporal main thread. The video should be segmented into smaller
retrievable units that are directly related to what users perceive as
meaningful. Much research has concentrated on segmenting video
streams into "shots" using low level visual features [1]. Each
segment has its own scenes and meanings. In many cases, when
users query a video, they intend to find some desired clips in the
video instead of viewing it thoroughly. However, this can seldom
be achieved by searching the surrounding text which is related to
the whole video.
Much content information can be used to search videos and
shots. In content-based video retrieval systems, video shots can be
classified into or annotated by several semantic concepts. The
most substantial works in this field are presented in the TREC
Video Retrieval Evaluation (TRECVID) community [2]. In
addition, speech is also significant information which has close
connection to video contents. Some videos are associated with transcripts/closed captions provided by the content provider. Using ASR (automatic speech recognition) to generate speech text is another practical solution.
In this paper, with the video metadata and content information
of video shots, we index and rank the videos in a way similar to
general text-based web page search engines. The IR-model, which
is widely employed in text information retrieval and web page
search, will be applied to rank the search results by examining
relevance between query and indexed information (including both
metadata and content information). To fully utilize the content
information and get a better ranking performance, we integrate the
"shot relevance" into "video relevance". That is, the ranking is
decided not only by the relevance of video (metadata of the entire
video), but also by all the relevant shots within the video.
We also apply learning based method to rank the search results
based on a set of features extracted from the corresponding query,
video metadata, and content information.
The rest of this paper is organized as follows. Section 2
introduces the IR-model based ranking, including extraction of
video metadata and content information, and a ranking method
integrating these two types of information. In section 3, a learning
based ranking approach is presented. Section 4 compares the
ranking performance evolution results, and Section 5 concludes
the paper.
IR-MODEL BASED RANKING
In the traditional text retrieval and web page search, IR
(Information Retrieval) models are usually used to evaluate the
relevance between a query and a document. BM25 [3] is one of
the frequently used evaluation methods. Given a query Q and a
document D, the relevance between Q and D can be modeled as
the summation of the relevance between each query term (word) t
in Q and D: R(Q, D) = sum over t in Q of R(t, D), where R(t, D) is the BM25 term weight computed from tf(t, D), df(t), and |D|, and k1 and b are its parameters. tf(t, D) is the term frequency, i.e., how often term t appears in document D. df(t) is the document frequency, i.e., how many documents contain term t among all documents. |D| stands for the length of document D.
The basic idea of this IR model can be explained as, if the
query term appears in document more frequently (higher tf), and
the query term is more unique that less documents contain it
(lower df), the query will be more relevant to the document.
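A sketch of this relevance computation, written in the standard Okapi BM25 form with parameters k1 and b; the authors' exact variant of the weighting may differ, so treat this only as an illustration of the tf/df/|D| trade-off described above.

    import math

    def bm25_relevance(query_terms, doc_terms, df, n_docs, avg_len, k1=1.2, b=0.75):
        # relevance of document D to query Q as a sum over query terms (Okapi BM25)
        score = 0.0
        for t in query_terms:
            tf = doc_terms.count(t)
            if tf == 0:
                continue
            idf = math.log((n_docs - df.get(t, 0) + 0.5) / (df.get(t, 0) + 0.5) + 1.0)
            norm = tf * (k1 + 1) / (tf + k1 * (1 - b + b * len(doc_terms) / avg_len))
            score += idf * norm
        return score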
2.2 Index and Rank the Video Information
The video data used in our experimental system are from MSN
Video (http://video.msn.com/), which contains 7230 videos.
2.2.1 Metadata of the video
Because the videos in our data set are made by professional content providers, there is rich meta information that describes each entire video with brief text. Each video has the following
metadata fields: headline, caption, keywords, source, video URL,
thumbnail image URL, related web page anchor text, page URL,
etc. Besides these types of textual information, some format
information of the video, such as video length, frame size, bit rate,
and creation date are also extracted. Some selected information
fields of video metadata are listed in Table 1.
Table 1. Video metadata
Field          Example Value
Headline       Discovery launches
Caption        July 26: Watch the entire launch of space shuttle D...
Source         MSNBC
Keywords       Technology, science, Space, Partner Codes ...
Video URL      http://www.msnbc.msn.com/default.cdnx/id/871313...
Link anchor    MSNBC.com's Technology and Science front
Link URL       http://www.msnbc.msn.com/id/3032118
Date           7/26/2005 4:40:48 PM
Video length   609.72 seconds
Frame size     320 x 240
Bit rate       180 Kbps
For the videos contained in general web pages, some attributes
mentioned above may not be obtained directly, but the
surrounding texts, URL, filename can be extracted as the metadata
fields of the video.
These information fields correspond to document D in Section 2.1. Different fields can be represented by different types of D (D_i in Equation 4). The overall relevance can be calculated by the weighted summation of the relevance of all fields. The weight of the fields (DW_i in Equation 4) can be determined by their importance, significance, and representativeness to the video.
R(Video, Q) = sum_i DW_i x R(D_i, Q)    (4)
In our system, four major information fields from video
metadata are selected to be indexed: headline, caption, keywords,
and source. Headline is a highly representative description of the
video content. Keywords are also good recapitulative terms. For
these two fields, higher weights are set. Caption is a more general and detailed description of the video, and Source provides higher-level, less relevant information, so these fields are assigned lower weights for ranking. Table 2 gives the weights of the fields in our experimental system.
Table 2. Weights for relevance evaluation
Field      Weight
Headline   10
Keywords   10
Caption    5
Source     1
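A sketch of Equation (4) using the field weights of Table 2; field_relevance stands in for any per-field relevance function, such as a BM25 score computed over that field's text.

    FIELD_WEIGHTS = {'headline': 10, 'keywords': 10, 'caption': 5, 'source': 1}  # Table 2

    def video_relevance(query_terms, metadata, field_relevance):
        # R(Video, Q) = sum_i DW_i * R(D_i, Q) over the indexed metadata fields
        return sum(weight * field_relevance(query_terms, metadata.get(field, ''))
                   for field, weight in FIELD_WEIGHTS.items())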
2.2.2 Content information of the video shots
There is plenty of information in the visual/audio content of the
video sequence, which can not be sufficiently presented by the
aforementioned textual video metadata. We can build a set of
models that can be applied to automatically detect a corresponding
set of concepts such that each video shot can be annotated with a
detection confidence score for each concept. Successful concept
modeling and detection approaches have been introduced in
TRECVID, relying predominantly on visual/aural analysis and
statistical machine learning methods [4]. The LSCOM-lite
Lexicon [5] designed for the TRECVID 2005 Benchmark consists
of more than 40 concepts spread across multiple concept-types
such as object, events, site etc. Though the size of the lexicon is
still far from practical application for general Web-based video
search, this semantic information is promising to enable real
content-based video search, and therefore it is applied in our
ranking system.
Besides visual content, information from the audio channel, especially the speech, is also very useful for searching videos. In
our experimental system, we use Microsoft speech recognition
engine (with a large vocabulary). This engine gives recognized
words with a start timestamp, length, and a recognition confidence
value, which are very useful for later indexing and ranking. The
speech texts are allotted and assigned to video shots according to the timestamps of the words and the video shots.
The content information is associated with individual video shots, and consists of semantic keywords (with corresponding detection
confidences), and speech words (with recognition confidences).
The confidences of words will act as weights of term frequencies
tf to calculate the relevance in Equation (2).
2.3 Integrated Ranking with Metadata and
Content Information
To combine metadata and content to rank the videos, we index the videos by metadata and the video shots by content information separately, and then integrate these two rank lists, named the video list and the shot list, to form a final ranking. The integrated ranking returns search results by video, but takes all the relevant shots within each video into consideration.
For the video list, each item is a video. Let item_iv.vid denote the video ID of the i-th item and item_iv.score denote its ranking score. For the shot list, each item is a shot from a video. Let item_is.vid, item_is.sid, and item_is.score denote the video ID to which the shot belongs, the shot ID within the video, and the ranking score of the i-th item, respectively.
The integration process is presented in Algorithm 1. The basic idea is that the ranking scores of all the relevant shots within a video are accumulated into the ranking score of that video, with corresponding weights. The relevant shots in the video are highlighted when the video is displayed as a search result.
new an integrated result list (items denoted item_iI)
for each item_iv in video list {
    new item_iI;
    item_iI.vid = item_iv.vid;
    item_iI.score = item_iv.score * Weight_v;
    for each item_is in shot list {
        if (item_iv.vid == item_is.vid) {
            item_iI.addshot(item_is.sid);
            item_iI.score += item_is.score * Weight_s;
            remove item_is from shot list
        }
    }
    remove item_iv from video list
    add item_iI to integrated list
}
add the remaining video list and shot list into the integrated list
sort the integrated list by item_iI.score
// Weight_v and Weight_s are weights for score accumulation
Algorithm 1. Generate integrated rank list.
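A possible Python rendering of Algorithm 1; the tuple layout of the two lists and the default values of Weight_v and Weight_s are assumptions made for illustration.

    def integrate(video_list, shot_list, weight_v=1.0, weight_s=1.0):
        # video_list: [(vid, score)], shot_list: [(vid, sid, score)]
        remaining_shots = list(shot_list)
        integrated = []
        for vid, vscore in video_list:
            entry = {'vid': vid, 'shots': [], 'score': vscore * weight_v}
            kept = []
            for svid, sid, sscore in remaining_shots:
                if svid == vid:
                    entry['shots'].append(sid)            # highlight these shots later
                    entry['score'] += sscore * weight_s   # accumulate shot relevance
                else:
                    kept.append((svid, sid, sscore))
            remaining_shots = kept
            integrated.append(entry)
        # shots whose video has no metadata entry become their own result items
        integrated += [{'vid': svid, 'shots': [sid], 'score': sscore * weight_s}
                       for svid, sid, sscore in remaining_shots]
        return sorted(integrated, key=lambda e: e['score'], reverse=True)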
LEARNING BASED RANKING
IR-model based ranking considers only some basic features such as
term frequency tf, document frequency df, and document length,
etc. In the learning based approach, more features are extracted
from the query, metadata, and content information. To be clear,
suppose the query contains three terms "a b c", we compute the
following features from each document field:
Ordered match: the frequency that both "a" and "b" appeared in
the indexed text, and "b" appears after "a".
Partly exact match: the frequency that "a b" or "b c" appeared in
the indexed text.
Exact match: the frequency that "a b c" appeared in the indexed
text.
Query length: number of query terms
For the content information, each word has a confidence value,
we also consider:
Weighted tf: Term frequency with confidence weighted,
High confident match: query term match with words with high
confidence.
High confident words: words with high confidence in the
indexed text.
Some non-textual, query-independent features, such as shot
length, video length, frame size, bit rate, etc, are also taken into
account.
By counting the combinations of several document fields and query terms (or parts of the query), we obtain about 50 features in total for a query and search result to form a sample. The ground truth of a sample is the relevance judgment between a query and a result, which is collected by a user labeling system introduced in the next section.
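A sketch of a few of the textual match features listed above for a three-term query "a b c"; whitespace tokenization and the exact counting rules are assumptions.

    def match_features(query, text):
        q, words = query.split(), text.split()
        a, b, c = q[0], q[1], q[2]                 # assumes a three-term query "a b c"
        ordered = sum(1 for i, w in enumerate(words) if w == b and a in words[:i])
        pairs = list(zip(words, words[1:]))
        triples = list(zip(words, words[1:], words[2:]))
        return {'ordered_match': ordered,
                'partly_exact_match': pairs.count((a, b)) + pairs.count((b, c)),
                'exact_match': triples.count((a, b, c)),
                'query_length': len(q)}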
3.2 Neural Network based Ranking
Traditionally, learning algorithms are used for classification problems. For ranking problems, one possible way is to organize the samples into pairs. A pair sample (x_1, x_2) is considered a positive sample if x_1 is ranked ahead of x_2, and vice versa. The loss function is also formulated pairwise. RankNET [6] is used in our implementation to train the ranking model and to validate the performance.
About half of the labeled data are used in training, and the
second half are used for validation.
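A minimal sketch of the pairwise formulation behind RankNET [6]: for a pair in which the first sample should rank higher, the difference of model scores is pushed through a logistic loss. The linear scorer here is only a stand-in for the trained network.

    import math

    def pairwise_loss(score_hi, score_lo):
        # RankNet-style logistic loss for a pair where the first item should rank higher
        return math.log(1.0 + math.exp(-(score_hi - score_lo)))

    def rank_by_model(samples, weights):
        # samples: list of (feature_vector, id); a linear scorer stands in for the net
        score = lambda x: sum(w * f for w, f in zip(weights, x))
        return sorted(samples, key=lambda s: score(s[0]), reverse=True)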
EVALUATIONS
To evaluate the ranking performance of our proposed methods, we
developed a user labeling tool to collect some query-result
relevance judgments.
The video data set we used in our experiment includes news
video, TV programs, movie trailers, advertisements, etc.
According to the characteristics of the content of these videos, we
selected some news related queries, such as hot events, hot place,
and hot person names, to evaluate the ranking performance. For
each query, we use the IR-model based ranking described in Section 2 to generate a result list, and randomly select some results from the list to label. Considering the labeling workload, for each labeler and each query, 9 results are selected from the list. To make the selected query-result samples uniformly distributed, 3 results are randomly selected from the first 1/3
part of the list, 3 are from the second 1/3 part, and the other 3 are
from the last 1/3 part. The order of these 9 selected results is
shuffled and then provided to users to do relevance labeling.
In the labeling tool, for a query and a result, user can see all the
information of the result, including the file format information
(frame size, bit rate, video length, etc), description (headline,
caption, keywords), video thumbnail, video (in a video player),
thumbnails of video shots, the speech text of the relevant shots.
The words matched with query terms are highlighted. See Figure
1. If there are relevant shots in the result, the thumbnails of them
are displayed with doubled size. The shot number, time
information, and the speech are also shown in the interface. Users
are asked to read the displayed information, browse the
thumbnails, and play the video and shots (a tool button is provided
to play from one shot) to give a relevance judgment from 1, 2, 3, 4,
and 5, which represent bad, fair, good, excellent, and perfect,
respectively.
Figure 1. Relevance labeling tool
In our experiment, ten users are invited to do labeling, and about
2,000 relevance judgments of query-result samples are collected.
4.2 Precision Performance of Ranking
We have conducted a comparison between the 4 approaches listed
below:
MR: Ranking only based on video metadata (Section 2.2.1).
CR: Ranking only by content information (Section 2.2.2).
RI: Integrated Ranking described in Section 2.3
RN: RankNET based ranking described in Section 3.2.
The precision in top N of the rank lists of all the labeled queries
is used to evaluate the performance of ranking method.
Precision@N = (relevant labeled results in top N) / (total labeled results in top N)    (5)
In our implementation, the judgments Perfect and Excellent are considered relevant results, while other judgments are treated as irrelevant. The Precision@N (N = 1 to 5) of the 4 ranking methods is shown in Table 3.
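A sketch of Equation (5), assuming the ranked list carries the human judgment labels and that Perfect and Excellent map to relevant as described above.

    def precision_at_n(ranked_judgments, n):
        # ranked_judgments: judgment labels in ranked order, e.g. 'perfect', 'good', ...
        top = ranked_judgments[:n]
        relevant = sum(1 for j in top if j in ('perfect', 'excellent'))
        return relevant / len(top) if top else 0.0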
From the results, we can see that:
1) Precisions of MR are very low. Only using video metadata
will result in a poor performance, since details of the video
content are ignored. The content information is more
effective to search and rank video than metadata, as
precisions of CR are higher than that of MR.
2) Precisions of RI are much higher than that of MR and CR.
By combining video metadata and content information, the
performance is significantly improved and reaches an
acceptable level, which shows that content-based relevance
ranking is a promising approach.
3) RN has good performance, even better than RI. Compared to IR-model based ranking, more features are included to learn the relevance. The result implies that the learning method can organize the information for ranking in a more effective way.
Table 3. Precision of the ranking approaches
Precision@N    1      2      3      4      5
MR           0.305  0.326  0.299  0.259  0.228
CR           0.544  0.526  0.571  0.550  0.522
RI           0.796  0.727  0.684  0.634  0.606
RN           0.805  0.746  0.763  0.717  0.669
DISCUSSIONS AND CONCLUSION
We have presented a novel content-based approach to rank video search results. In addition to the video metadata, more detailed
content information in the video is used to improve the relevance
ranking of video search results. The videos are segmented into
shots, which can carry rich content information such as semantic
concept keywords and speech. With the video metadata and
content information, we proposed an IR-model based ranking
method and a learning-based ranking method. Evaluation of the
top ranked results shows that the proposed ranking methods have
significantly improved performance comparing to the approach
use video metadata only, which is frequently used in existing web
video search engines.
In future work, more types of content information can be integrated into our ranking scheme, such as content-based quality metrics and user comments and ratings for videos shared in web communities. Moreover, how to define effective semantic concepts, i.e., a video semantic ontology, that facilitate video searching and ranking is also a challenging problem, which is another of our future works.
REFERENCES
[1] Hong-Jiang Zhang, A. Kankanhalli, and S. Smoliar,
"Automatic Partitioning of Full-motion Video," A Guided
Tour of Multimedia Systems and Applications, IEEE
Computer Society Press, 1995.
[2] http://www-nlpir.nist.gov/projects/trecvid
[3] S. E. Robertson, S. Walker, and M. Beaulieu. Okapi at
TREC7: automatic ad hoc, filtering, VLC and filtering
tracks. In Proceedings of TREC'99.
[4] M. Naphade, J.R. Smith, F. Souvannavong, "On the
Detection of Semantic Concepts at TRECVID," ACM
Multimedia, ACM Press, New York, NY, pp. 660-667, Oct.
10-16, 2004
[5] M. Naphade, L. Kennedy, J.R. Kender, S.F. Chang, J.R.
Smith, P. Over, A. Hauptmann, "LSCOM-lite: A Light Scale
Concept Ontology for Multimedia Understanding for
TRECVID 2005," IBM Research Tech. Report, RC23612
(W0505-104), May, 2005.
[6] Chris Burges, et al., "Learning to Rank using Gradient Descent", ICML 2005, Bonn, Germany, pp. 89-96, August 7-11, 2005.
630 | video index, relevance ranking;content-based relevance ranking;video retrieval;metadata;learning based ranking;neutral network based ranking;Relevance ranking;content information;content-based approach;ranking method;integrated ranking;video metadata;IR-model;segmented;Content-based ranking;machine learning model;video segmentation;IR model based ranking;Video search;video search |
198 | Towards Reasonability Properties for Access-Control Policy Languages | The growing importance of access control has led to the definition of numerous languages for specifying policies. Since these languages are based on different foundations, language users and designers would benefit from formal means to compare them. We present a set of properties that examine the behavior of policies under enlarged requests, policy growth, and policy decomposition. They therefore suggest whether policies written in these languages are easier or harder to reason about under various circumstances. We then evaluate multiple policy languages, including XACML and Lithium, using these properties. | INTRODUCTION
Access-control policies should not be write-only. Because
they govern both the containment and availability of critical
information, they must be highly amenable to analysis by
both humans and by reasoning software such as verifiers.
An access-control policy dictates a function from requests
for access to decisions about whether or not to grant access
. The competing requirements of expressive power and
computational speed makes the design of policy languages a
delicate balancing act. Contemporary policy languages have
largely followed one of two routes. Some are based on logics,
restricting first-order logic (e.g., Lithium [9]) or augmenting
Datalog (e.g., Cassandra [2]). Others are custom languages
such as XACML [12] and EPAL [13], which behave roughly
by rule-evaluation and do not depend on theorem-proving
capabilities to determine a response to a query.
The custom language approach often produces fairly limited
languages. For example, to express hierarchical role-based
access-control (RBAC) [14] in XACML requires a
fairly cumbersome encoding [1]. On the other hand, its more
direct request evaluation strategy suggests that policies written
in XACML are more transparent than policies written
in languages based on first-order logic (as we motivate in
Section 2).
How, then, do we distinguish different policy languages?
Studies of complexity and expressive power may ensure tractable
verification and the ability to capture certain policies,
but do not directly classify the ease of reasoning about policies
in a language. In this paper we take a step towards formalizing
reasonability properties that make languages more
amenable to reasoning. We then apply these properties to
actual policy languages.
Such properties are useful even
when verification is computationally tractable because they
provide a guide to where and how to edit a policy for a
desired effect.
Concretely, our properties study three main questions:
how decisions change as requests include more information,
how decisions change as policies grow, and how amenable
policies are to compositional reasoning. The last of these is
especially important for two reasons. First, organizations increasingly
have different divisions creating policy fragments
that must be combined into a whole while preserving the intent
of each division; second, to mirror these use cases, and
to scale better as policies grow in size, it becomes important
for analysis and verification tools to function modularly.
These properties codify our observations made while writing
and studying policies for non-trivial systems. (We do
not, however, presume to make broad statements about the
impact of these properties for manual reasoning.) They are
meant to be descriptive rather than prescriptive: which ones
a language should satisfy depends on the context of its use.
We do expect these properties to help both language designers
and policy authors, the former to set goals and the latter
to evaluate languages.
We first motivate the work with an example. Section 3
presents background on policy languages. Section 4 presents
the heart of our formalism. Section 5 applies this framework
to XACML, and Section 6 to logical approaches such as
Lithium. The remainder discusses related work and offers
concluding remarks.
MOTIVATING EXAMPLE
Consider the following natural-language policy (this example is adapted from Halpern and Weissman [9]):
1. If the subject is a faculty member, then permit that
subject to assign grades.
2. If the subject is a student, then do not permit that
subject to assign grades.
3. If the subject is not a faculty member, then permit
that subject to enroll in courses.
We might represent this policy as follows:
faculty(s) ⟹ Permit(s, grades, assign)      (p1)
student(s) ⟹ ¬Permit(s, grades, assign)     (p2)
¬faculty(s) ⟹ Permit(s, courses, enroll)    (p3)
Let the above formalization be p and the first line of the policy be sub-policy p1, the second p2, and the third p3.
Consider the following natural-language request:
A student requests to enroll in courses.
Assume that requests list the subject, resource, and action
by name if possible and by variable if the name is unknown,
along with any other known facts. In this representation,
the request becomes:
(s, courses, enroll) with student(s)
(
q
1
)
Should the policy grant access? At least three interpretations
of the policy are possible:
1.
p grants access due to p
3
. The request does not show
the subject being a faculty member; thus,
p
3
applies
and
p produces the decision to permit access. This
relies on the assumption that since the request does
not show the subject being faculty, that the subject is
in fact not faculty. One could drop this assumption.
2. The policy does not apply to the request. One would
reason that
p
1
and
p
2
do not apply since they are dealing
with assigning grades and not enrolling in courses.
Furthermore, one could conclude that
p
3
does not apply
since the request does not prove that the subject
is not faculty. To do so, the request would have been
(s, courses, enroll) with student(s) faculty(s)
Since the policy does not apply to the request, the
system should have and enact some default behavior.
3. By reasoning different than that used in the first interpretation
,
p could still grant the request. As in the
second interpretation, one could conclude that the request
alone fails to establish that the subject is not a
faculty member. However, if the subject were a faculty
member, then the first two lines together would
yield a contradiction:
p
1
would imply that the subject
could enroll in courses and
p
2
would imply that the
subject could not. Thus, student-faculty members do
not exist. Since the subject of the request is clearly
a student, he must not be faculty member. Thus,
p
3
applies to grant access.
In the first two interpretations the user may limit his reasoning
to each sub-policy independent of one another. However
, under the third interpretation (which, in fact, is the
one chosen by Halpern and Weissman), the user must reason
about all three sub-policies at once. Furthermore, under
Interpretation 2, the user must reason about both positive
and negative attributes, unlike under Interpretation 1.
These semantic differences drastically affect a reader's ability
to comprehend policies. For example, Interpretation 3
requires both global analysis and demands rich reasoning
power to deduce the contradiction. This paper formalizes
these differences and their burdens.
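To make the difference concrete, a small sketch that encodes the three sub-policies and evaluates request q1 under Interpretation 1 (an attribute absent from the request is treated as false) and Interpretation 2 (absence leaves the attribute unknown); the request representation and the use of 'deny' for the ¬Permit of p2 are simplifications for illustration.

    NA = 'not_applicable'

    def decide(request, implicit_absence):
        subject_attrs, resource, action = request
        # is "not faculty" established for this request?
        not_faculty = ('faculty' not in subject_attrs and
                       (implicit_absence or 'not_faculty' in subject_attrs))
        if (resource, action) == ('grades', 'assign') and 'faculty' in subject_attrs:
            return 'permit'                      # p1
        if (resource, action) == ('grades', 'assign') and 'student' in subject_attrs:
            return 'deny'                        # p2 (the "do not permit" line)
        if (resource, action) == ('courses', 'enroll') and not_faculty:
            return 'permit'                      # p3
        return NA

    q1 = ({'student'}, 'courses', 'enroll')
    print(decide(q1, implicit_absence=True))     # Interpretation 1: permit
    print(decide(q1, implicit_absence=False))    # Interpretation 2: not applicable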
BACKGROUND
Despite the differences between access-control policy languages
, we can still identify many common elements. First
we describe features common to most languages, and then
we discuss in detail two areas in which many languages differ
: the available decisions and policy combinators.
3.1
Common Features
A policy language must provide a way of describing the
different forms of access and the environment in which they
could occur. This information forms a request. Many languages
break requests into four different parts:
Subject the person or process making the request,
Resource the object, subsystem, person, or process that
the subject would like to affect (e.g., a file name or a
process id),
Action the command or change that the subject would like
to execute on the resource, and
Environment describes any other relevant information including
the time of day, location, or the previous actions
of the subject.
The first three make up the form of access requested while
the last gives the context in which this access would occur.
Each of these parts lists attributes associated with its respective
topic. In some languages, the absence or negation of an
attribute might also be explicitly listed (see Section 4.2).
Languages must also provide a set of decisions. Such a set
must include some decisions that grant access and some that
prohibit access. A policy will associate with each request a
decision (or in the case of nondeterministic policies, a policy
will relate each request with some number of decisions).
Definition 3.1. An access-control policy language is a tuple L = (P, Q, G, N, ⟦·⟧) with
P a set of policies,
Q a family of sets of requests indexed by policies,
G the set of decisions that stipulate that the system should grant access (granting decisions),
N the set of decisions that stipulate that the system should not grant access (non-granting decisions),
⟦·⟧ a function taking a policy p ∈ P to a relation between Q_p and G ∪ N,
where G ∩ N = ∅.
When clear from context, the above symbols will be referenced without explicitly relating them to L, and D will represent G ∪ N. The function ⟦·⟧ gives the meaning of policy p, and we write q ⟦p⟧ d, for p ∈ P, q ∈ Q_p, and d ∈ D, when p assigns a decision of d to the request q. If for a language Q_p = Q_p' for all p and p' in P, then we drop the subscript on Q and treat it as a set of requests common to all policies. Given L, define the partial order ≤ on D to be such that d ≤ d' if either d, d' ∈ N, or d, d' ∈ G, or d ∈ N and d' ∈ G.
3.2
Decisions
Policy languages must provide decisions to indicate a pol-icy's
intent to grant or not to grant a request. Some languages
might just provide two decisions: permit for granting
access and deny for not granting access. A policy in such a
language associates various subsets of requests with one of
these two decisions. For example, p1 explicitly identifies a subset of permitted requests and p2 gives denied requests. However, a policy might assign some requests to neither permit nor deny (e.g., q1 under Interpretation 2 of p). To err on the side of safety, the policy language should provide for such requests a default decision that does not imply a grant of access, creating a closed policy [10]. However, assigning
them the decision of deny may limit the ability to compose
policies. For example, while combining the policies of two
departments, one would like to distinguish between those
requests that each department really would like to prohibit
and those about which they do not care [3]. The decision of
not applicable serves this purpose.
With a decision of not applicable sufficing to prevent access
, some languages elect not to include statements associating
requests with deny. This leaves only statements permitting
some set of requests. The uniformity of statements
in such languages might make the policy easier to read and
compose (see Section 3.3). However, allowing for the explicate
denial of requests can quickly rule out exceptional cases
and provides a means to determine when a policy does not
grant access by desire rather than by default.
Some requests might not have a logical interpretation under
a given policy. For example, a request of
(s, grades, assign) with faculty(s) ∧ student(s)      (q2)
under Interpretation 3 of p contradicts the policy itself. A
request might even contain illogical values or require undefined
computation (such as division by zero). For generality,
a system might like to assign a decision to such inputs rather
than excluding them from the set of requests and leaving the
policy undefined on them. In such cases, a decision of error
or some refinement of it might be appropriate.
One may view the fact that an error state is reached given
a request to be a weakness in the policy. However, one may
also take it to be a statement about the world in which the
policy is to function: that no such requests may logically
exist.
Error decisions can enforce these preconditions or
assumptions that the policy has made.
3.3
Policy Combinators
The policy of an organization often consists of the composition
of policy fragments, or sub-policies, from a variety of
internal units (e.g., legal, accounting, and executive departments
of a corporation).
Thus, policy languages provide
combinators to create a single policy from these fragments.
Under
p, the request given above (q
2
) is permitted by
p
1
but denied
p
2
. The method used to combine the three sub-policies
of
p into one policy determines how to resolve this
conflict. Some languages, like the hypothetical language in
which
p is written, might have only one policy combinator
that is implicitly applied. Other languages provide multiple
combinators. If a policy has sub-policies nested inside of
sub-policies, the different layers may be resolved differently.
Some of the possible policy combinators are:
Permit Overrides If any of the sub-policies produces a
permit, return only permit. Otherwise, if any produces
a deny, return only deny. Else, return not applicable.
Deny Overrides If any of the sub-policies produces a deny,
return only deny. Otherwise, if any produces a permit,
return only permit. Else, return not applicable.
First Applicable Return the decision reached by the first
sub-policy to reach one other than not applicable.
All Seen Return a set containing the decisions reached by
all the sub-policies.
Either Permit or Deny Nondeterministically return one
of the produced decisions.
Error Return a error if the sub-policies produces both permit
and deny. Otherwise return permit if produced,
or deny if produced. Else, return not applicable.
And Conjoin the sub-policies together by logical And and
return the implied decision(s).
De Capitani di Vimercati et al. [4] list additional combinators
. The nature of the combinators available in a language
can greatly impact the clarity of policies written in it.
Notice that many of the above combinators behave the
same in the absence of the decision of deny. One might conclude
from this observation that allowing the explicit denial
of a request is an undesirable complication in a language.
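A sketch of three of the combinators above, written over the list of sub-policy decisions for a given request; decisions are represented as the strings 'permit', 'deny' and 'not_applicable'.

    def permit_overrides(decisions):
        if 'permit' in decisions: return 'permit'
        if 'deny' in decisions: return 'deny'
        return 'not_applicable'

    def deny_overrides(decisions):
        if 'deny' in decisions: return 'deny'
        if 'permit' in decisions: return 'permit'
        return 'not_applicable'

    def first_applicable(decisions):
        for d in decisions:
            if d != 'not_applicable':
                return d
        return 'not_applicable'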
To formalize the role of policy combinators, let policies be either an atomic policy or a set of sub-policies combined by some policy combinator. Let p be a policy that consists of sub-policies p_i with 1 ≤ i ≤ n. Then p = ⊕(p1, p2, . . . , pn) represents the composition of the sub-policies using the combinator ⊕.² Since sub-policies are themselves policies, one may apply ⟦·⟧ to them.³ The relationship between ⟦⊕(p1, p2, . . . , pn)⟧ and the meaning of each sub-policy ⟦p_i⟧ affects the clarity of the policy and is studied in the next section.
POLICY LANGUAGE PROPERTIES
Having formalized policy languages, we are now ready to
describe properties of them.
² We assume that the set of combinators in a given language L = (Q, P, G, N, ⟦·⟧) is clear from the structure of P and ⟦·⟧. If this is not the case for a language, one could explicitly add it to the definition of a policy language.
³ Some languages may permit contextual information from enclosing policies to affect the meaning of the sub-policies. For example, a language might have a notion of variable binding. For such a language, ⟦·⟧ might be extended to take a second argument that carries such contextual information. All the following definitions could be extended, e.g., monadically, to deal with such an extended ⟦·⟧.
4.1
Determinism and Totality
Definition 4.1. A language L = (P, Q, G, N, ⟦·⟧) is deterministic if
∀p ∈ P, ∀q ∈ Q_p, ∀d, d' ∈ D, (q ⟦p⟧ d ∧ q ⟦p⟧ d') ⟹ d = d'
For a deterministic language, we can define a function ⟨⟨·⟩⟩ which takes a policy p ∈ P and returns a function from Q_p to D, as ⟨⟨p⟩⟩(q) = the d ∈ D s.t. q ⟦p⟧ d. For a deterministic language, ⟨⟨·⟩⟩ may be given instead of ⟦·⟧ to define the language. (We only even mention nondeterministic languages due to the existence of one: XACML with obligations.)
Definition 4.2. A language L = (P, Q, G, N, ⟦·⟧) is total if
∀p ∈ P, ∀q ∈ Q_p, ∃d ∈ D s.t. q ⟦p⟧ d
The policies of total languages will always make a decision.
4.2
Safety
Under Interpretation 2, the request contained too little
information to determine which of the sub-policies of
p applied
. Interpretation 1 avoids such indecision by having requests
implicitly refute the presence of any attribute not
listed. These two interpretations produce different meanings for statements like ¬faculty(s) found in p3. Under Interpretation 1, ¬faculty(s) holds if faculty(s) is not in the request, while under the second, the request must explicitly list ¬faculty(s) for it to hold. We call the former
The explicit approach permits distinguishing between unknown
information and attributes known to be absent. The
explicit interpretation, however, incurs the cost of listing a
possibly large set of absent attributes and can lead to indecision
as shown above.
Such indecision, however, allows the system to recognize
when the policy requires more information to yield a decision
. In contrast, the implicit interpretation can grant undue
access. If, for example, a request does not list faculty(s)
simply because the system did not determine whether s was
a faculty member or not, then the system might erroneously
allow s to enroll in courses. Thus, the sub-system producing
requests must be sure to include all the relevant facts in
each request.
For large scale systems, collecting or even determining
the germane information might consume large amounts of
time. For such systems, the explicate approach might prove
better since requests may leave out information safely and
be refined until the policy yields a decision. Furthermore,
overzealous optimizations and other coding errors might result
in the system producing requests that do not contain
all the relevant facts.
Having a policy drive which information requests include
allows for the system to collect only the information really
needed to reach a decision from the policy. Under this approach
, the sub-system evaluating the policy starts with a
request that contains only the readily available information.
If this sub-system needs additional information to reach a
decision from the policy, it requests the necessary additional
information. Thus, the system does not need to know what
information the policy requires at the time of generating the
initial request. This approach may allow for more efficient
implementations.
Once a datum has been published, it cannot easily be retracted
. Therefore, preventing unwanted access is usually
preferable to granting it. As a result, such incomplete requests
should only result in a grant of access if the complete
one would have. We can formally state this safety concern:
Definition 4.3. Let ≼ be a family of partial orderings on requests of a language L = (P, Q, G, N, ⟦·⟧) indexed by the policies of L. L is safe with respect to ≼ iff
∀p ∈ P, ∀q, q' ∈ Q_p, q ≼_p q' ⟹ ⟨⟨p⟩⟩(q) ≤ ⟨⟨p⟩⟩(q').
Due to differences in the contents of requests, for each language a different family of partial orderings ≼ will interest users. The relation should be such that if q ≼_p q', then q' contains all the information contained in q and possibly more. Often one partial ordering may serve every policy. For example, consider a language in which requests are sets of non-contradictory facts and the set of decisions is {permit, deny}. Then using the subset partial ordering for ≼_p (for every policy p) will make sense since it matches the intuition of information content. If the language is safe with respect to such a defined ≼, then one may omit facts from a request without causing undue access.
Informally, in a safe language, undue access is impossible
provided that requests tell no lies; whereas, in an unsafe
language, the requests must additionally tell the whole truth.
The choice between these is a function of trust in the program
generating requests, comprehensiveness of analysis to
generate requests, efficiency, and so on. Nevertheless, the
ability to conclude, given a request that will yield access,
that all requests with more information will also yield access
, can potentially be a great boon to policy reasoning.
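A brute-force sketch of checking safety for the fact-set request language just discussed: with the subset ordering, a policy (given as a function from fact sets to decisions) is safe exactly when adding facts never turns a granting decision into a non-granting one. The small universe of facts makes exhaustive enumeration feasible here.

    from itertools import combinations

    GRANTING = {'permit'}          # G; everything else counts as non-granting (N)

    def is_safe(policy, facts):
        # policy: function from a frozenset of facts to a decision string
        requests = [frozenset(c) for r in range(len(facts) + 1)
                    for c in combinations(facts, r)]
        for q in requests:
            for q_bigger in requests:
                if q <= q_bigger:                       # q contains no more info than q_bigger
                    d, d_bigger = policy(q), policy(q_bigger)
                    if d in GRANTING and d_bigger not in GRANTING:
                        return False                    # more information withdrew access
        return True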
Some languages might choose to avoid the complications introduced by a policy testing for the absence of an attribute altogether. In some contexts, such as certificate passing systems in which a certificate may be withheld, negated attributes may not make sense. (One may argue that certificate passing systems may use negative certificates to achieve the checking of attribute absence. Whether this captures the notion of the absence of an attribute or just the presence of another related attribute is unclear. For example, one could conceivably hold both a positive and a negative certificate for an attribute.) In such a context, requests would not list negated attributes and the policy would not test for the absence of an attribute at all.
4.3
Independent Composition
Consider the third interpretation of
p. Under this interpretation
, the meaning of
p can only be determined by looking
at the interactions of the different sub-policies as a whole.
Notice that any one of these sub-policies would produce a
decision of not applicable in isolation, and yet together they
interact to produce a permit decision. The third interpretation
thus inhibits the easy use of local reasoning to reach
conclusions about the policy as a whole. This increases the
possibility of unintended results from combining sub-policies
into a policy.
The alternative, as found in the first two interpretations,
is for the sub-policies to be combined in such a way that
only the result of each in isolation matters. This property
is formalized as follows:
Definition 4.4. A policy combinator ⊕ of a language L = (P, Q, G, N, ⟦·⟧) independently composes its sub-policies iff

∀p_1, p_2, ..., p_n ∈ P, ∀i, 1 ≤ i ≤ n  ⟹  Q_{⊕(p_1, p_2, ..., p_n)} ⊆ Q_{p_i}   (1)

and there exists a function ⊙ : Q → D^n → D such that

∀p_1, p_2, ..., p_n ∈ P, ∀q ∈ Q_{⊕(p_1, p_2, ..., p_n)},
⟦⊕(p_1, p_2, ..., p_n)⟧(q) = ⊙(q)(⟦p_1⟧(q), ⟦p_2⟧(q), ..., ⟦p_n⟧(q))   (2)

If all the combinators of L independently compose, then L has the independent composition property.

⁴ One may argue that certificate-passing systems may use negative certificates to achieve the checking of attribute absence. Whether this captures the notion of the absence of an attribute or just the presence of another related attribute is unclear. For example, one could conceivably hold both a positive and a negative certificate for an attribute.
The first requirement forces a request defined for a policy
to also be defined on each of its sub-policies. This is necessary
for the second requirement to be well defined. The
second requirement ensures that one can determine the decision
of the whole policy from the request and decisions of
its sub-policies on that request; no other properties of the
sub-policies matter.
One might alternatively be tempted to define independent
composition thus:
Definition 4.5. A policy combinator ⊕ of a language L = (P, Q, G, N, ⟦·⟧) semantically composes its sub-policies iff

∃⊙ : (Q → D)^n → (Q → D), ∀p_1, p_2, ..., p_n ∈ P,
⟦⊕(p_1, p_2, ..., p_n)⟧ = ⊙(⟦p_1⟧, ⟦p_2⟧, ..., ⟦p_n⟧)   (3)

If all the combinators of L semantically compose, then L has the semantic composition property.
Semantic composition ensures that all sub-policies with the same meaning in isolation will behave the same under the combinator. A language with the semantic composition property is arguably clearer than one without it, since only the isolated meaning of a sub-policy must be known to reason about its use under the combinator.
Theorem 4.6. If a policy combinator of a policy language L has independent composition, then it has semantic composition.
Proof. To prove that ⊕ has semantic composition, the ⊙′ : (Q → D)^n → (Q → D) required for Equation 3 will be constructed from the ⊙ : Q → D^n → D known to exist since ⊕ independently composes. Let

⊙′(f_1, f_2, ..., f_n) = λq . ⊙(q)(f_1(q), f_2(q), ..., f_n(q))

Then

⊙′(⟦p_1⟧, ⟦p_2⟧, ..., ⟦p_n⟧)
  = λq . ⊙(q)(⟦p_1⟧(q), ⟦p_2⟧(q), ..., ⟦p_n⟧(q))
  = λq . ⟦⊕(p_1, p_2, ..., p_n)⟧(q)
  = ⟦⊕(p_1, p_2, ..., p_n)⟧
Theorem 4.7. The semantic composition of a policy combinator
does not imply that it independently composes.
Proof. Consider a rather odd language that has only one unary policy combinator ⊕, atomic policies that are sets of values, G = {permit}, N = {deny}, and requests that are sets of values. Let the semantics be

⟦⊕(p_1)⟧ = λq .  permit   if ⟦p_1⟧({v*}) = permit
                 deny     if ⟦p_1⟧({v*}) = deny

for some distinguished value v*, and for atomic policies p, ⟦p⟧(q) equals permit iff p ∩ q ≠ ∅ and equals deny otherwise.
The language has semantic composition: for ⊙′ such that

⊙′(f_1) =  permit   if f_1({v*}) = permit
           deny     if f_1({v*}) = deny

clearly, ⟦⊕(p_1)⟧ = ⊙′(⟦p_1⟧).
To show that the language does not have independent composition, assume that it does. Then there exists such a ⊙ : Q → D → D to satisfy Equation 2. Let p_1 = {v} and p_2 = {v, v*} for some value v such that v ≠ v*. Then,

deny = ⟦⊕({v})⟧({v})
     = ⊙({v})(⟦{v}⟧({v})) = ⊙({v})(permit)
     = ⊙({v})(⟦{v, v*}⟧({v})) = ⟦⊕({v, v*})⟧({v}) = permit

A contradiction is reached since permit ≠ deny.
Only with independent composition can a policy reader
with a specific request in mind know the decision of the
whole policy from each of the component policies. This enables
a reader to ask what-if questions like "What if Bob
requests to write the log?" and determine the answer from
recursively asking that question of the sub-policies. Such an
ability is particularly helpful to readers interested in only a
subset of the possible requests or already familiar with some
of the sub-policies.
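The following sketch (ours; the combinator, decision names, and request encoding are merely illustrative) shows why independent composition supports such what-if reasoning: a simplified Permit Overrides style combinator can be evaluated by recursively asking each sub-policy for its decision and then combining only those decisions, ignoring every other property of the sub-policies (here the combining step also ignores the request itself).

```python
def permit_overrides(decisions):
    """Combine sub-policy decisions only; nothing else about the
    sub-policies is consulted."""
    if "permit" in decisions:
        return "permit"
    if "deny" in decisions:
        return "deny"
    return "na"

def evaluate(policy, request):
    """A policy is either a leaf (a function on requests) or a tuple
    ("PO", [sub-policies]); compound nodes are evaluated recursively."""
    if callable(policy):
        return policy(request)
    tag, subs = policy
    assert tag == "PO"
    return permit_overrides([evaluate(p, request) for p in subs])

# "What if Bob requests to write the log?"
request = {"subject=bob", "action=write", "resource=log"}
sub1 = lambda q: "deny" if "subject=bob" in q else "na"
sub2 = lambda q: "permit" if "action=read" in q else "na"
print(evaluate(("PO", [sub1, sub2]), request))  # deny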
4.4
Monotonicity
As noted at the end of Section 3.3, the decision of deny complicates the policy combinators. One of the reasons for this is that, under combinators like Deny Overrides, a back-and-forth pattern can arise when considering the decision of the whole policy from the sub-policies. Consider each sub-policy in p with the request q_2. Under a reasonable interpretation, p_1 yields a decision of permit, p_2 a decision of deny, and p_3 not applicable. Thus, if the order of p was changed to p_3, p_1, p_2 and we assume a Deny Overrides policy combinator, the apparent decision would go from non-granting to granting to non-granting.
Note that Permit Overrides does not exhibit this pattern since it is impossible to go from a granting decision to a non-granting one under it. Thus, the formalization of this pattern focuses on the transition from a granting to a non-granting decision.
Definition 4.8. A policy combinator ⊕ of a language L = (P, Q, G, N, ⟦·⟧) is monotonic iff

∀p_1, ..., p_n, p′ ∈ P, ∀i, ∀q ∈ Q′,
⟦⊕(p_1, ..., p_n)⟧(q) ∈ G  ⟹  ⟦⊕(p_1, ..., p_i, p′, p_{i+1}, ..., p_n)⟧(q) ∈ G

where Q′ = Q_{⊕(p_1,...,p_n)} ∩ Q_{⊕(p_1,...,p_i,p′,p_{i+1},...,p_n)}. We say L is monotonic if every combinator is monotonic.
Adding another sub-policy to a monotonic combinator cannot
change the decision from granting to non-granting.
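As a concrete illustration (our own sketch, which checks the condition by brute force over decision vectors rather than over a real policy language), Permit Overrides satisfies this condition while Deny Overrides does not, matching the back-and-forth pattern described above.

```python
from itertools import product

GRANTING = {"permit"}

def permit_overrides(ds):
    if "permit" in ds: return "permit"
    if "deny" in ds: return "deny"
    return "na"

def deny_overrides(ds):
    if "deny" in ds: return "deny"
    if "permit" in ds: return "permit"
    return "na"

def is_monotonic(combine, decisions=("permit", "deny", "na"), n=3):
    """Monotonicity, specialized to combinators that depend only on the
    sub-policy decisions: inserting one more sub-policy decision must
    never turn a granting result into a non-granting one."""
    for ds in product(decisions, repeat=n):
        for extra in decisions:
            for i in range(n + 1):
                extended = ds[:i] + (extra,) + ds[i:]
                if combine(list(ds)) in GRANTING and \
                        combine(list(extended)) not in GRANTING:
                    return False
    return True

print(is_monotonic(permit_overrides))  # True
print(is_monotonic(deny_overrides))    # False
```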
Having motivated and established these criteria, we now
apply them to concrete access control languages.
CORE XACML
In its entirety, XACML [12] exceeds the bounds of the
definitions given in Section 3. Full XACML includes obligations
, which act as annotations on the decisions of permit
and deny. These annotations specify actions that the system
enforcing the access controls must perform before granting
access or upon prohibiting access. Thus, an XACML policy
may have effects beyond just granting or prohibiting access
that the model presented fails to address.
Handling all of XACML is beyond the scope of this paper
. For illustrative purposes, we employ a formalized subset
of XACML, which we will call Core XACML (CXACML),
which corresponds to the input of the tool Margrave [7, 8].
This subset is expressive enough to capture RBAC_0 [14].
5.1
Syntax
CXACML has two syntaxes: one for policies and one for
requests. We present the policy syntax first, with the start
non-terminal P. For syntactic brevity, we use a Lisp-like
parenthetical syntax in place of XML notation.
P ::= (Policy C T P*) | (Rule T F)
C ::= FirstApp | DenyOver | PermitOver
T ::= ((L*) (L*) (L*) (L*))
L ::= (A+)
A ::= (id val)
F ::= Permit | Deny
Those policies formed by using solely the right choice of
the production rule for P are called rules. XACML does not
consider rules to be policies. However, since the semantics
assigned to rules allows them to behave as policies, we will
consider them policies. The elements of the syntax category
T are called targets. The four parts of the target are the
requirements placed on the subject, resource, action, and
environment, respectively, for the policy to apply to a request
. The non-terminals id and val are strings representing
the attribute IDs and values.
Example 5.1. The following is a CXACML policy:
(Policy FirstApp
((()) (((name log))) (()) (()))
(Rule (((role dr)) (()) (()) (())) Deny)
(Rule ((()) (()) (()) (())) Permit))
This policy permits all requests for access to a log except
those made by doctors, which it denies. In detail, the policy
is composed of two sub-policies using the combinator First
Applicable and applies only to requests where the resource
has the name of log. The first sub-policy denies requests
where the subject has the role of dr regardless of the resource,
action, or environment. The second permits all requests.
The syntax for requests is (with start non-terminal Q):

Q ::= ((A*) (A*) (A*) (A*))
They simply list the attributes possessed by the subject,
resource, action, and environment in turn.
5.2
Semantics
In the following natural semantics, we will use the convention
that a lower-case letter represents an element of the set
or syntactic category represented by the upper-case equivalent
. For example, P is the set of all policies and p is a policy.
Let D be the set of all decisions (D = {permit, deny, na}).
The core of the semantics of CXACML compares requests to targets. We will denote this relation by q ⊳ t for request q and target t. The natural semantics of Table 1 defines ⊳. Next we define the evaluation relation, written q ⊢ p ⇓ d. Table 2 gives the result of evaluating rules. The following tables define the relation over policies. Table 3 deals with two cases where a policy does not apply to a request. Finally, we must define the policy combinators: Permit Overrides in Table 4, Deny Overrides in Table 5, and First Applicable in Table 6.
5.3
Analysis
The syntax and semantics of CXACML define L_CXACML = (P, Q, G, N, ⟦·⟧). The syntax determines P and Q, where the same set of requests is used for every policy (and thus, we treat Q as a set of requests). From the semantics, G = {permit} and N = {na, deny}. CXACML allows for explicit denials and the checking of the implicit absence of attributes.
Theorem 5.2. L_CXACML is deterministic.
Proof. Inspection of the inference rules for atomic policies
(Table 2) shows that only one of them can hold at a
time. Thus, atomic policies are deterministic.
Table 4 combined with Table 3 gives the semantics of the policy combinator Permit Overrides. The antecedents of all these inference rules are disjoint, that is, at most one of them can hold for any policy and request. Thus, Permit Overrides is deterministic. The same argument holds for Deny Overrides and First Applicable using Tables 5 and 6. Thus, all the combinators are deterministic.
Thus, one may view a CXACML policy as a function from requests to decisions, writing ⟦p⟧(q) = d in place of q ⊢ p ⇓ d. Further inspection establishes that ⟦p⟧ is a total function.
For two requests q = (s r a e) and q′ = (s′ r′ a′ e′), let q ⊑_p q′ (for every policy p) if s′ ⊳ s, r′ ⊳ r, a′ ⊳ a, and e′ ⊳ e, where ⊳ on attribute lists is defined in Table 1 (that is, every attribute of q also appears in q′).
Theorem 5.3. CXACML is not safe with respect to ⊑.
Proof. Consider the policy p shown in Example 5.1 and the requests q = (() ((name log)) () ()) and q′ = (((role dr)) ((name log)) () ()). Clearly q ⊑_p q′. Yet ⟦p⟧(q) = permit ≠ deny = ⟦p⟧(q′).
Theorem 5.4. L_CXACML has independent composition.
Proof. A combination algorithm c and target t together determine a policy combinator. For each pair of values for c and t, the needed function ⊙_c^t : Q → D^n → D exists to provide the meaning of the policy (Policy c t p p⃗) and satisfy Equation 2. For Permit Overrides (when c = PermitOver, or PO for short), the function ⊙_PO^t(q)(d · d⃗) is equal to

na       if q ⋫ t
permit   else if d = permit or ⊙_PO^t(q)(d⃗) = permit
deny     else if d = deny or ⊙_PO^t(q)(d⃗) = deny
na       otherwise

The function ⊙_DO^t(q)(d · d⃗) for Deny Overrides is equal to

na       if q ⋫ t
deny     else if d = deny or ⊙_DO^t(q)(d⃗) = deny
permit   else if d = permit or ⊙_DO^t(q)(d⃗) = permit
na       otherwise

For First Applicable, ⊙_FA^t(q)(d · d⃗) is
na               if q ⋫ t
d                else if d = permit or d = deny
⊙_FA^t(q)(d⃗)     otherwise

where ⊙_PO^t(⟨⟩) = ⊙_DO^t(⟨⟩) = ⊙_FA^t(⟨⟩) = na for the empty sequence ⟨⟩.

Table 1: The Match Relation ⊳

(a⃗_1) ⊳ (l⃗_1),  (a⃗_2) ⊳ (l⃗_2),  (a⃗_3) ⊳ (l⃗_3),  (a⃗_4) ⊳ (l⃗_4)
  ⟹  ((a⃗_1) (a⃗_2) (a⃗_3) (a⃗_4)) ⊳ ((l⃗_1) (l⃗_2) (l⃗_3) (l⃗_4))

∃i s.t. (a⃗) ⊳ l_i  ⟹  (a⃗) ⊳ (l_1 l_2 ... l_n)

∀i ∃j s.t. a′_i = a_j  ⟹  (a_1 a_2 ... a_n) ⊳ (a′_1 a′_2 ... a′_m)

Table 2: ⇓ on Rules

q ⋫ t  ⟹  q ⊢ (Rule t f) ⇓ na
q ⊳ t  ⟹  q ⊢ (Rule t Permit) ⇓ permit
q ⊳ t  ⟹  q ⊢ (Rule t Deny) ⇓ deny

Table 3: Default na Inference Rules

q ⊢ (Policy c t) ⇓ na   (no premise: a policy with no sub-policies evaluates to na)
q ⋫ t  ⟹  q ⊢ (Policy c t p⃗) ⇓ na

Table 4: Inference Rules for Permit Overrides

q ⊳ t,  ∃i s.t. q ⊢ p_i ⇓ permit
  ⟹  q ⊢ (Policy PermitOver t p_1 p_2 ... p_n) ⇓ permit
q ⊳ t,  ∃i s.t. q ⊢ p_i ⇓ deny,  ∀j ¬(q ⊢ p_j ⇓ permit)
  ⟹  q ⊢ (Policy PermitOver t p_1 p_2 ... p_n) ⇓ deny
q ⊳ t,  ∀i, q ⊢ p_i ⇓ na
  ⟹  q ⊢ (Policy PermitOver t p_1 p_2 ... p_n) ⇓ na

Table 5: Inference Rules for Deny Overrides

q ⊳ t,  ∃i s.t. q ⊢ p_i ⇓ deny
  ⟹  q ⊢ (Policy DenyOver t p_1 p_2 ... p_n) ⇓ deny
q ⊳ t,  ∃i s.t. q ⊢ p_i ⇓ permit,  ∀j ¬(q ⊢ p_j ⇓ deny)
  ⟹  q ⊢ (Policy DenyOver t p_1 p_2 ... p_n) ⇓ permit
q ⊳ t,  ∀i, q ⊢ p_i ⇓ na
  ⟹  q ⊢ (Policy DenyOver t p_1 p_2 ... p_n) ⇓ na

Table 6: Inference Rules for First Applicable

q ⊳ t,  q ⊢ p_1 ⇓ permit  ⟹  q ⊢ (Policy FirstApp t p_1 p_2 ... p_n) ⇓ permit
q ⊳ t,  q ⊢ p_1 ⇓ deny  ⟹  q ⊢ (Policy FirstApp t p_1 p_2 ... p_n) ⇓ deny
q ⊳ t,  q ⊢ p_1 ⇓ na,  q ⊢ (Policy FirstApp t p_2 ... p_n) ⇓ d
  ⟹  q ⊢ (Policy FirstApp t p_1 p_2 ... p_n) ⇓ d
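To make this semantics concrete, here is a small Python sketch of an evaluator for the Lisp-like CXACML subset. It is our own reading of Tables 1 through 6; the data encoding (tuples and lists in place of s-expressions) is ours and is simplified.

```python
def matches(req_attrs, target):
    """Table 1: each of the four request attribute lists must match the
    corresponding target position; a position matches when some listed
    conjunction has all of its attributes present in the request."""
    return all(
        any(all(a in req_part for a in conj) for conj in tgt_part)
        for req_part, tgt_part in zip(req_attrs, target))

def evaluate(policy, request):
    kind = policy[0]
    if kind == "Rule":                       # Table 2
        _, target, effect = policy
        if not matches(request, target):
            return "na"
        return "permit" if effect == "Permit" else "deny"
    _, comb, target, subs = policy           # ("Policy", comb, target, subs)
    if not matches(request, target) or not subs:
        return "na"                          # Table 3
    decisions = [evaluate(p, request) for p in subs]
    if comb == "PermitOver":                 # Table 4
        if "permit" in decisions: return "permit"
        if "deny" in decisions: return "deny"
        return "na"
    if comb == "DenyOver":                   # Table 5
        if "deny" in decisions: return "deny"
        if "permit" in decisions: return "permit"
        return "na"
    if comb == "FirstApp":                   # Table 6
        for d in decisions:
            if d != "na": return d
        return "na"
    raise ValueError(comb)

# Example 5.1: permit access to the log except for doctors.
log_policy = ("Policy", "FirstApp",
              ([[]], [[("name", "log")]], [[]], [[]]),
              [("Rule", ([[("role", "dr")]], [[]], [[]], [[]]), "Deny"),
               ("Rule", ([[]], [[]], [[]], [[]]), "Permit")])
doctor_req = ([("role", "dr")], [("name", "log")], [], [])
print(evaluate(log_policy, doctor_req))  # deny
```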
Theorem 5.5. L_CXACML is not monotonic.
Proof. Consider the policy p in Example 5.1 and the policy p′ that would be p without the first rule. Let the request q be (((role dr)) ((name log)) () ()). ⟦p′⟧(q) = permit, but ⟦p⟧(q) = deny. Thus, adding a rule to p′ results in a request going from being granted to not being granted.
ADAPTATIONS OF FIRST-ORDER LOGIC
Whereas XACML is an attempt to create a policy language from whole cloth, other languages are adaptations of first-order logic. Halpern and Weissman present several schemata for such languages [9]. Here we present and analyze the languages produced by two of their schemata, Lithium and L_5.
For ease of presentation, we first define the language schemata FOL, a more readily identifiable restriction of first-order logic. To ensure efficiency (and decidability!), the languages of L_5 and Lithium use additional context-sensitive constraints to restrict FOL. We discuss these restrictions after giving a semantics to FOL. (The semantics of L_5 and Lithium will be the same as that of FOL, restricted to subsets of the language.)
The schemata FOL is a restriction of many-sorted first-order logic. Each language of FOL corresponds to giving the logic a different vocabulary Σ (the parameters, including quantifier symbols, predicate symbols, constant symbols, and function symbols). We assume that Σ includes the sorts S for subjects, R for resources, A for actions, and the predicate symbol Permit of the sort S × R × A → {T, F}.⁵ Σ may also include sorts to represent environmental data such as the current time and location.
6.1
Syntax of
FOL
A standard policy under the vocabulary Σ is an expression with one of the following forms:

(∀^{y_1} x_1, ..., ∀^{y_m} x_m (ℓ_1 ∧ ... ∧ ℓ_n ⟹ Permit(s, r, a)))
(∀^{y_1} x_1, ..., ∀^{y_m} x_m (ℓ_1 ∧ ... ∧ ℓ_n ⟹ ¬Permit(s, r, a)))

where each x_i names a variable over the sort identified by y_i, s is a term over the sort S, r is a term over the sort R, a is a term over the sort A, and each ℓ_j is a literal over Σ that may include the variables x_1, ..., x_m.
The policies of the language FOL(Σ) are the standard policies under Σ and conjunctions of policies:

P ::= StandardPolicy | (and P+)
Example 6.1. Let the vocabulary Σ contain

1. the sorts S = {amy, bob, joe}, R = {grades, courses}, and A = {assign, enroll};
2. the predicates Permit : S × R × A → {T, F}, faculty : S → {T, F}, and student : S → {T, F}.

FOL(Σ) includes the following policy:

(and (∀^S x (faculty(x) ⟹ Permit(x, grades, assign)))
     (∀^S x (student(x) ⟹ ¬Permit(x, grades, assign)))
     (∀^S x (¬faculty(x) ⟹ ¬Permit(x, courses, enroll))))

where the quantifier superscript S identifies the sort S. As the semantics will soon show, this policy has the same meaning as policy p from Section 2 does under Interpretation 3.

⁵ Halpern and Weissman treat the Permit predicate as taking two arguments, a subject and a resource-action, instead of three.
The requests of FOL(Σ) have the form (s, r, a, e) where s ∈ S is the subject making the request; r ∈ R is the requested resource; a ∈ A is the action the subject would like to perform on the resource; and e is a conjunction of ground literals and universal formulas of the form

∀^{y_1} x_1, ..., ∀^{y_m} x_m (ℓ_1 ∧ ... ∧ ℓ_n ⟹ ℓ_{n+1})

where each x_i names a variable over the sort identified by y_i and each ℓ_i is a literal over Σ that may include the variables x_1, ..., x_m. The expression e provides information about s, r, a, and the environment.

Example 6.2. The four-tuple

(bob, courses, enroll, student(bob) ∧ faculty(amy) ∧ ¬student(amy))

is a request of FOL(Σ) where Σ is defined in Example 6.1.
6.2
Semantics of FOL
The semantics of a policy follows from interpreting it as a formula in many-sorted first-order logic. The policy combinator and becomes conjunction. The standard policies and e are interpreted as the corresponding logic formulas. A policy p defines a relation ⟦p⟧ between requests and {permit, deny} as follows:

(s, r, a, e) ⟦p⟧ permit   iff   p ∧ e ⊢ Permit(s, r, a)
(s, r, a, e) ⟦p⟧ deny     iff   p ∧ e ⊢ ¬Permit(s, r, a)

where ⊢ is interpreted as the standard "proves" relation for many-sorted first-order logic over Σ.
To define a deterministic and total version of FOL, we expand the set of decisions to D = {na, permit, deny, error} and define ⟦p⟧((s, r, a, e)) to be

error    if p ∧ e ⊢ Permit(s, r, a) and p ∧ e ⊢ ¬Permit(s, r, a)
permit   if p ∧ e ⊢ Permit(s, r, a) and p ∧ e ⊬ ¬Permit(s, r, a)
deny     if p ∧ e ⊬ Permit(s, r, a) and p ∧ e ⊢ ¬Permit(s, r, a)
na       if p ∧ e ⊬ Permit(s, r, a) and p ∧ e ⊬ ¬Permit(s, r, a)
Since a policy composed of sub-policies, each composed of
standard policies, is semantically equivalent to a policy composed
of all the standard policies without the intermediate
sub-policies, we will henceforth treat all policies as either a
standard policy or a conjunction of standard policies.
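The following Python sketch (ours) makes the four-way case split concrete for a drastically simplified fragment: environments are finite sets of ground literals, sorts are finite, and standard policies are (variables, body-literals, head) triples that we ground by enumeration, so "proves" degenerates to checking whether each body literal appears verbatim in the environment. No genuine first-order theorem proving happens here, and the encoding of Example 6.1 reflects our reading of it.

```python
from itertools import product

def ground_heads(policy, env, sorts):
    """Heads (sign, (s, r, a)) of rule instances whose bodies hold in env.
    Simplification: a body literal holds only if it appears verbatim."""
    derived = set()
    for variables, body, head in policy:          # one standard policy each
        names = list(variables)
        for values in product(*(sorts[variables[v]] for v in names)):
            sub = dict(zip(names, values))
            inst = lambda t: sub.get(t, t)
            if all((sign, pred, tuple(map(inst, args))) in env
                   for sign, pred, args in body):
                sign, args = head
                derived.add((sign, tuple(map(inst, args))))
    return derived

def decide(policy, request, sorts):
    s, r, a, env = request
    heads = ground_heads(policy, env, sorts)
    pos = (True, (s, r, a)) in heads
    neg = (False, (s, r, a)) in heads
    if pos and neg: return "error"
    if pos: return "permit"
    if neg: return "deny"
    return "na"

# Example 6.1 / 6.2, encoded in this simplified form.
sorts = {"S": ["amy", "bob", "joe"]}
policy = [
    ({"x": "S"}, [(True, "faculty", ("x",))], (True,  ("x", "grades", "assign"))),
    ({"x": "S"}, [(True, "student", ("x",))], (False, ("x", "grades", "assign"))),
    ({"x": "S"}, [(False, "faculty", ("x",))], (False, ("x", "courses", "enroll")))]
env = {(True, "student", ("bob",)), (True, "faculty", ("amy",)),
       (False, "student", ("amy",))}
print(decide(policy, ("amy", "grades", "assign", env), sorts))   # permit
print(decide(policy, ("bob", "courses", "enroll", env), sorts))  # na
```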
6.3
Analysis of FOL
The language FOL(Σ) defines the deterministic and total policy language (P, Q, G, N, ⟦·⟧). The syntax determines P and Q, where Q may be treated as a set of requests since for all policies p and p′, Q_p = Q_{p′}. The semantics requires that G = {permit} and N = {na, deny, error}. The languages of FOL have the policy combinator and. FOL allows for explicit denials and checking for the explicit absence of attributes.
Given two requests q = (s, r, a, e) and q′ = (s′, r′, a′, e′), if s ≠ s′, r ≠ r′, or a ≠ a′, we consider the two requests incomparable. If s = s′, r = r′, and a = a′, then we would like to order requests according to their information content. One might conclude that q ⊑_p q′ if e′ ⟹ e. However, suppose e′ = ⊥, where ⊥ is logical contradiction. Then e′ contains no information and yet it implies e. Similarly, if p ∧ e implies ⊥, then e contains no information with respect to p. Thus, we define ⊑_p as follows:
Let (s, r, a, e) ⊑_p (s′, r′, a′, e′) iff
1. s = s′, r = r′, and a = a′; and
2. p ∧ e′ implies p ∧ e but not ⊥, or p ∧ e implies ⊥.
Theorem 6.3. FOL(Σ) is safe with respect to ⊑ for any vocabulary Σ.
Proof. Assume FOL(Σ) is not safe. Then there must exist p ∈ P and q, q′ ∈ Q such that q ⊑_p q′, ⟦p⟧(q) ∈ G, and ⟦p⟧(q′) ∉ G. Let q = (s, r, a, e) and q′ = (s, r, a, e′). Since permit is the only granting decision, ⟦p⟧(q) = permit and thus p ∧ e ⊢ Permit(s, r, a). Since N = {na, deny, error}, ⟦p⟧(q′) must be either na, deny, or error.
Since q ⊑_p q′, two cases arise:
1. p ∧ e′ implies p ∧ e but not ⊥: Since p ∧ e′ ⊨ p ∧ e and p ∧ e ⊢ Permit(s, r, a), p ∧ e′ ⊢ Permit(s, r, a). Thus, ⟦p⟧(q′) is either permit or error. However, if ⟦p⟧(q′) = error, then p ∧ e′ implies ⊥, a contradiction. Furthermore, ⟦p⟧(q′) = permit ∉ N is also a contradiction.
2. p ∧ e implies ⊥: In this case, ⟦p⟧(q) = error ≠ permit, a contradiction.
We can thus conclude that FOL(Σ) must be safe.
Theorem 6.4. FOL(Σ) does not have independent composition for some Σ.
Proof. Consider the policy p_a:

(and (∀^S x, student(x) ⟹ Permit(x, log, read))
     (∀^S x, ¬student(x) ⟹ Permit(x, log, read)))

the policy p_b:

(and (∀^S x, student(x) ⟹ Permit(x, log, edit))
     (∀^S x, ¬student(x) ⟹ Permit(x, log, edit)))

and the request q = (bob, log, read, T). On q, p_a produces the decision of permit while its sub-policies yield na. However, p_b produces the decision of na while its sub-policies also yield na on q. Thus, the required function ⊙_and would have to satisfy

permit = ⊙_and(q)(na, na) = na

A contradiction, and hence ⊙_and cannot exist.
Theorem 6.5. FOL(Σ) is not monotonic for some Σ.
Proof. Consider the policy p_c:

(and (∀^S x, student(x) ⟹ Permit(x, log, read))
     (∀^S x, student(x) ⟹ ¬Permit(x, log, read)))

with and without the second sub-policy, and the request (bob, log, read, student(bob)). In the absence of the second sub-policy, the decision is permit, whereas p_c produces error.
6.4
Analysis of Lithium
Halpern and Weissman restrict FOL to create the language they dub Lithium.⁶ A slightly modified form follows.
Lithium relies heavily on the notion of "bipolarity". A literal ℓ of f is labeled bipolar in f relative to the equality statements in e if the following holds: there exist a literal ℓ′ of opposite polarity in f and variable substitutions σ and σ′ such that it follows from e that the arguments of ℓσ and ℓ′σ′ are equal.
Lithium also makes use of the notion of equality-safety. (p, e_0, e_1) is equality-safe if
1. e_1 ∧ p, when written in CNF (i.e., of the form c_1 ∧ ... ∧ c_n where each c_j has the form ∀^{y_1} x_1, ..., ∀^{y_m} x_m (ψ) with ψ a quantifier-free disjunction of literals), has no clause with a disjunct of the form t = t′, and
2. it is not the case that f_0 ⊢ t = t′, where f_0 is the conjunction of the equality statements in e_0,
where t and t′ are closed terms such that (1) they both appear in e_0; and (2) either t is a sub-term of t′, or both t and t′ mention function symbols.
Like FOL, Lithium is a set of languages, each with a different vocabulary. Let Li(Σ) be the instance of Lithium using the vocabulary Σ. Li(Σ) has the same set of policies as FOL(Σ). However, each policy p of Li(Σ) has a different set of requests for which it is defined (a different value for Q_p). A request (s, r, a, e_0 ∧ e_1) of FOL(Σ) is in Q_p iff:
1. e_0 is a basic environment (a conjunction of ground terms),
2. e_1 is a conjunction of universally quantified formulas,
3. (p, e_0, e_1) is equality-safe, and
4. every conjunct of e_1 ∧ p has at most one literal that is bipolar in e_1 ∧ p relative to the equality statements in e_0.
Lithium is safe since its requests are a subset of those of FOL.
Theorem 6.6. Lithium does not have independent composition for some Σ.
Proof. Consider the policies p_a and p_b and the request given with them in the proof of Theorem 6.4. The request is in Q_{p_a}. To show this, we check that the request satisfies all four of the requirements for a request to be in Q_{p_a} given above. Since e_0 ∧ e_1 = T, the first three requirements hold. The last requirement holds since student(x) and ¬student(x) are the only bipolars and they are each in a different conjunct.
By similar reasoning, the request is also in Q_{p_b}. Thus, the proof follows as before.
6.5
Analysis of L_5
In their work, Halpern and Weissman define a further restriction of FOL, which they call L_5.
Like Lithium, L_5(Σ) includes all the policies of FOL(Σ), with each policy having a different set of requests for which it is defined. For a policy p of L_5(Σ), Q_p consists of all the requests (s, r, a, e) of FOL(Σ) such that:
1. e is a basic environment,
2. equality is not used in e or p,
3. for every atomic policy p′ in p, all variables appearing in p′ appear as an argument to Permit in p′, and
4. there are no bipolars in p relative to the empty set of equality statements.
As with Lithium, L_5(Σ) is safe since it is a subset of a safe language.

⁶ The name Lithium only appears in the 2006 version of their work [9].
Halpern and Weissman have proven the following theorem (Proposition 4.2 in their updated document [9]):
Theorem 6.7. Let p be a compound policy and (s, r, a, e) be a request of L_5. Then e ∧ p ⊢ Permit(s, r, a) iff there is a sub-policy p′ of p such that e ∧ p′ ⊢ Permit(s, r, a).
Using the same approach as given in their proof, one can generalize this result to statements of the form e ∧ p ⊢ ¬Permit(s, r, a) as well.
Theorem 6.8. L_5(Σ) has independent composition for all Σ.
Proof. Allowing p_i to range over the sub-policies of p, the above result yields:

⟦p⟧(q) =  error    if ∃i, j, ⟦p_i⟧(q) = permit and ⟦p_j⟧(q) = deny
          permit   else if ∃i, ⟦p_i⟧(q) = permit
          deny     else if ∃i, ⟦p_i⟧(q) = deny
          na       otherwise

From this, it is easy to construct an appropriate value for ⊙_and(q)(d_1, d_2, ..., d_n):

error    if ∃i, j, d_i = permit and d_j = deny
permit   else if ∃i, d_i = permit
deny     else if ∃i, d_i = deny
na       otherwise

Notice that ⊙_and does not use the value of q: it merely composes the results from its sub-policies.
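A direct transcription of this combining function into Python (ours, with decisions encoded as strings) makes it plain that it is a pure function of the sub-policy decisions:

```python
def combine_and(decisions):
    """Combining function from the proof of Theorem 6.8: it depends
    only on the sub-policy decisions d1, ..., dn, never on the request."""
    has_permit = "permit" in decisions
    has_deny = "deny" in decisions
    if has_permit and has_deny:
        return "error"
    if has_permit:
        return "permit"
    if has_deny:
        return "deny"
    return "na"
```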
A policy author concerned solely with expressive power would select Lithium over L_5. However, the choice becomes more complicated when concerned about the ability to reason about policies, because only L_5 features independent composition. We hope that elucidating this trade-off with a combination of proof and illustrative examples, as we have done above, will help authors choose better between the policy languages they use, even when the languages are within the same family.
RELATED WORK
De Capitani di Vimercati et al. discuss explicit denial
and how it introduces the need for policy combinators that
reduce the clarity of the language [4]. The authors list various
policy combinators that are possible, many of which are
more complex than those we present. The paper includes
discussion of a few policy languages, including XACML and
a language grounded in first-order logic. The paper does
not, however, attempt to systemically compare them.
The work of Mark Evered and Serge Bögeholz concerns
the quality of a policy language [5].
After conducting a
case study of the access-control requirements of a health
168
information system, they proposed a list of criteria for policy
languages.
They state that languages should be concise,
clear, aspect-oriented (i.e., separate from the application
code), fundamental (i.e., integrated with the middleware,
not an ad hoc addition), positive (i.e., lists what is allowed,
not what is prohibited), supportive of needs-to-know, and
efficient. Although they compare four languages based on
these criteria, they do not formalize the criteria.
Some authors have considered formal treatments of programming
language expressiveness [6, 11]. Felleisen's is the
closest in spirit to ours. His framework examines the ability
to translate the features of one language in the other with
only local transformations. That work does not, however,
directly address reasoning.
DISCUSSION
This paper presents our analysis framework and its findings
. Some differences between languages lie in the realms
of decision sets, policy combinators, and checking for the
absence of attributes, but these are clear from the language
definitions. Our framework highlights the following more
subtle, semantic differences:
Independence: Core XACML and L_5
feature independent
composition of polices into compound policies, and
thus allow for reasoning about a policy by reasoning
about the sub-policies separately.
Lithium, in contrast
, does not exhibit this property, and therefore potentially
requires reasoning about a policy all at once.
Safety: L_5
and Lithium provide safety for the most natural
definition of the "contains more information" ordering.
Core XACML, in contrast, does not, which implies
that information missing from a Core XACML request
could result in unintended access being granted.
These differences are not orthogonal. Clearly, the combinators
selected determine whether the language will have
independent composition.
Furthermore, implicit checking
of attributes will result in the loss of safety.
These properties may guide policy language designers.
For example, suppose a designer wishes to create a safe
variant of XACML. One way to achieve this would be to
eliminate rules that deny access and thus the decision of
deny. (We provide further details in an extended version of
this work [15].)
As noted in Section 5, the comparison framework must be
generalized to treat languages with more exotic constructs
like obligations. More importantly, we need to perform user
studies to determine whether, and how well, our properties
correlate with policy comprehension by humans. Lastly, this
framework should be coupled with one for measuring the
expressive power of a policy language before fair judgment
may be passed on languages.
ACKNOWLEDGMENTS
We thank Joe Halpern and Vicky Weissman for useful conversations
and for sharing their ongoing work. We also thank
Konstantin Beznosov, Kathi Fisler, and Steve Reiss. This
work was partially supported by NSF grant CPA-0429492 to
Brown University and by the Army Research Office through
grant number DAAD19-02-1-0389 ("Perpetually Available
and Secure Information Systems") to Carnegie Mellon Uni-versity's
CyLab.
REFERENCES
[1] A. Anderson. Core and hierarchical role based access
control (RBAC) profile of XACML, version 2.0.
Technical report, OASIS, Sept. 2004.
[2] M. Y. Becker and P. Sewell. Cassandra: Flexible trust
management, applied to electronic health records. In
IEEE Computer Security Foundations Workshop,
pages 139–154, 2004.
[3] E. Bertino, P. Samarati, and S. Jajodia.
Authorizations in relational database management
systems. In ACM Conference on Computer and
Communications Security, pages 130–139, 1993.
[4] S. De Capitani di Vimercati, P. Samarati, and
S. Jajodia. Policies, models, and languages for access
control. In Databases in Networked Information
Systems: 4th International Workshop, volume 3433 of
Lecture Notes in Computer Science. Springer-Verlag,
Mar. 2005.
[5] M. Evered and S. Bögeholz. A case study in access
control requirements for a health information system.
In Workshop on Australasian Information Security,
Data Mining and Web Intelligence, and Software
Internationalisation, pages 53–61, 2004.
[6] M. Felleisen. On the expressive power of programming
languages. Science of Computer Programming,
17:35–75, 1991.
[7] K. Fisler, S. Krishnamurthi, L. A. Meyerovich, and
M. C. Tschantz. Verification and change-impact
analysis of access-control policies. In International
Conference on Software Engineering, pages 196–205,
May 2005.
[8] M. M. Greenberg, C. Marks, L. A. Meyerovich, and
M. C. Tschantz. The soundness and completeness of
Margrave with respect to a subset of XACML.
Technical Report CS-05-05, Department of Computer
Science, Brown University, Apr. 2005.
[9] J. Halpern and V. Weissman. Using first-order logic to
reason about policies. In IEEE Computer Security
Foundations Workshop, pages 187–201, 2003. Updated
2006 version available at http://www.citebase.org/
cgi-bin/citations?id=oai:arXiv.org:cs/0601034.
[10] S. Jajodia, P. Samarati, V. S. Subrahmanian, and
E. Bertino. A unified framework for enforcing multiple
access control policies. In ACM SIGMOD
International Conference on Management of Data,
pages 474–485, 1997.
[11] J. C. Mitchell. On abstraction and the expressive
power of programming languages. Science of
Computer Programming, 21(2):141–163, 1993.
[12] OASIS. eXtensible Access Control Markup Language
(XACML) version 2.0. OASIS Standard, Feb. 2006.
[13] C. Powers and M. Schunter. Enterprise privacy
authorization language (EPAL 1.2). W3C Member
Submission, Nov. 2003.
[14] R. S. Sandhu, E. J. Coyne, H. L. Feinstein, and C. E.
Youman. Role-based access control models. IEEE
Computer, 29(2):38–47, 1996.
[15] M. C. Tschantz and S. Krishnamurthi. Towards
reasonability properties for access-control policy
languages with extended XACML analysis. Technical
Report CS-06-04, Department of Computer Science,
Brown University, Apr. 2006.
| common features;Access control;lithium;modularity;reasonability property;policy decomposition;properties;access control;policy combinator;XACML;comtemporary policy;access-control policy;policy language property;first order logic;xacml;multiple policy language;policy language;policy;security;formalize;policy languague |
199 | Tracking Dynamics of Topic Trends Using a Finite Mixture Model | In a wide range of business areas dealing with text data streams, including CRM, knowledge management, and Web monitoring services, it is an important issue to discover topic trends and analyze their dynamics in real-time.Specifically we consider the following three tasks in topic trend analysis: 1)Topic Structure Identification; identifying what kinds of main topics exist and how important they are, 2)Topic Emergence Detection; detecting the emergence of a new topic and recognizing how it grows, 3)Topic Characterization ; identifying the characteristics for each of main topics. For real topic analysis systems, we may require that these three tasks be performed in an on-line fashion rather than in a retrospective way, and be dealt with in a single framework. This paper proposes a new topic analysis framework which satisfies this requirement from a unifying viewpoint that a topic structure is modeled using a finite mixture model and that any change of a topic trend is tracked by learning the finite mixture model dynamically.In this framework we propose the usage of a time-stamp based discounting learning algorithm in order to realize real-time topic structure identification .This enables tracking the topic structure adaptively by forgetting out-of-date statistics.Further we apply the theory of dynamic model selection to detecting changes of main components in the finite mixture model in order to realize topic emergence detection.We demonstrate the effectiveness of our framework using real data collected at a help desk to show that we are able to track dynamics of topic trends in a timely fashion. | INTRODUCTION
In a wide range of business areas dealing with text streams, including CRM, knowledge management, and Web monitoring services, it is an important issue to discover topic trends and analyze their dynamics in real-time. For example, it is desired in the CRM area to grasp a new trend of topics in customers' claims every day and to track a new topic as soon as it emerges. A topic is here defined as a seminal event or activity. Specifically we consider the following three tasks in topic analysis:
1) Topic Structure Identification; learning a topic structure
in a text stream, in other words, identifying what kinds
of main topics exist and how important they are.
2) Topic Emergence Detection; detecting the emergence of
a new topic and recognizing how rapidly it grows, similarly,
detecting the disappearance of an existing topic.
3) Topic Characterization; identifying the characteristics for
each of main topics.
For real topic analysis systems, we may require that these
three tasks be performed in an on-line fashion rather than in
a retrospective way, and be dealt with in a single framework.
The main purpose of this paper is to propose a new topic
analysis framework that satisfies the requirement as above,
and to demonstrate its effectiveness through its experimental
evaluations for real data sets.
Our framework is designed from a unifying viewpoint that
a topic structure in a text stream is modeled using a finite
mixture model (a model of the form of a weighted average
of a number of probabilistic models) and that any change
of a topic trend is tracked by learning the finite mixture
model dynamically.Here each topic corresponds to a single
mixture component in the model.
All of the tasks 1)-3) are formalized in terms of a finite
mixture model as follows: As for the task 1), the topic structure
is identified by statistical parameters of a finite mixture
model.They are learned using our original time-stamp
based discounting learning algorithm, which incrementally
and adaptively estimates statistical parameters of the model
by gradually forgetting out-of-date statistics, making use of
time-stamps of data.This makes the learning procedure
adaptive to changes of the nature of text streams.
As for the task 2), any change of a topic structure is recognized by tracking the change of main components in a
mixture model.We apply the theory of dynamic model selection
[7] to detecting changes of the optimal number of
main components and their organization in the finite mixture
model.We may recognize that a new topic has emerged
if a new mixture component is detected in the model and remains
for a while.Unlike conventional approaches to statistical
model selection under the stationary environment, dynamic
model selection is performed under the non-stationary
one in which the optimal model may change over time.Further
note that we deal with a complicated situation where
the dimension of input data, i.e., the number of features of
a text vector, may increase as time goes by.
As for the task 3), we classify every text into the cluster
for which the posterior probability is largest, and then we
characterize each topic using feature terms characterizing
texts classified into its corresponding cluster.These feature
terms are extracted as those of highest information gain,
which are computed in real-time.
We demonstrate the validity of the topic trend analysis
framework, by showing experimental results on its applications
to real domains.Specifically we emphasize that it is
really effective for discovering trends in questions at a help
desk.
1.2
Related Work
The technologies similar to 1)-3) have extensively been explored in the area of topic detection and tracking (TDT) (see [1]). Actually 1) and 2) are closely related to the subproblems in TDT called topic tracking and new event detection,
respectively.Here topic tracking is to classify texts into one
of topics specified by a user, while new event detection, formerly
called first story detection, is to identify texts that
discuss a topic that has not already been reported in earlier
texts.The latter problem is also related to work on topic-conditioned
novelty detection by Yang et.al.[16]. In most of
related TDT works, however, topic tracking or new event
detection is conducted without identifying main topics or
a topic structure, hence the tasks 1)-3) cannot be unified
within a conventional TDT framework.Further topic timeline
analysis has not been addressed in it.
Swan and Allen [12] addressed the issue of how to automatically overview timelines of a set of news stories. They used the χ²-method to identify, at each time, a burst of feature terms that appear more frequently than at other times.
Similar issues are addressed in the visualization community
[3].However, all of the methods proposed there are
not designed to perform in an on-line fashion.
Kleinberg [4] proposed a formal model of "bursts of ac-tivity"
using an infinite-state automaton.This is closely
related to topic emergence detection in our framework.A
burst has a somewhat different meaning from a topic in
the sense that the former is a series of texts including a
specific feature, while the latter is a cluster of categorized
texts.Hence topic structure identification and characterization
cannot be dealt with in his model.Further note that
Kleinberg's model is not designed for real-time analysis but
for retrospective one.
Related to our statistical modeling of a topic structure,
Liu et.al. [2] and Li and Yamanishi [6] also proposed methods
for topic analysis using a finite mixture model.Specifically
, Liu et.al.
considered the problem of selecting the
optimal number of mixture components in the context of
text clustering.In their approach a single model is selected
as an optimal model under the assumption that the optimal
model does not change over time.Meanwhile, in our
approach, a sequence of optimal models is selected dynamically
under the assumption that the optimal model may
change over time.
Related to topic emergence detection, Matsunaga and Yamanishi
[7] proposed a basic method of dynamic model selection
, by which one can dynamically track the change of
number of components in the mixture model.However, any
of all of these technologies cannot straightforwardly be applied
to real-time topic analysis in which the dimension of
data may increase as time goes by.
Related to topic structure identification, an on-line discounting
learning algorithm for estimating parameters in a
finite mixture model has been proposed by Yamanishi et.
al.[14]. The main difference between our algorithm and
theirs is that the former makes use of time-stamps in order
to make the topic structure affected by a timeline of topics
while the latter considers only the time-order of data
ignoring their time-stamps.
The rest of this paper is organized as follows: Section 2
describes a basic model of topic structure.Section 3 gives
a method for topic structure identification.Section 4 gives
a method for topic emergence detection.Section 5 gives a
method for topic characterization.Section 6 gives experimental
results.Section 7 gives concluding remarks.
MODEL
We employ a probabilistic model called a finite mixture model for the representation of topic generation in a text stream. Let W = {w_1, ..., w_d} be the complete vocabulary set of the document corpus after the stop-word removal and word stemming operations. For a given document x, let tf(w_i) be the term frequency of word w_i in x. Let idf(w_i) be the idf value of w_i, i.e., idf(w_i) = log(N/df(w_i)), where N is the total number of texts for reference and df(w_i) is the frequency of texts in which w_i appears. Let tf-idf(w_i) be the tf-idf value of w_i in x, i.e., tf-idf(w_i) = tf(w_i) × log(N/df(w_i)). We may represent a text x in the form:

x = (tf(w_1), ..., tf(w_d))

or

x = (tf-idf(w_1), ..., tf-idf(w_d)).

We may use either type of representation form.
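As a small illustration (ours; the function and variable names are not from the paper), the tf-idf representation over a fixed vocabulary can be computed as follows.

```python
import math
from collections import Counter

def tfidf_vector(text_tokens, vocabulary, df, n_texts):
    """Represent a text as (tf-idf(w_1), ..., tf-idf(w_d)) over the
    vocabulary, with idf(w) = log(N / df(w))."""
    tf = Counter(text_tokens)
    return [tf[w] * math.log(n_texts / df[w]) if df.get(w) else 0.0
            for w in vocabulary]
```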
Let K be a given positive integer representing the number of different topics. We suppose that a text has only one topic and that a text having the i-th topic is distributed according to a probability distribution with density p_i(x|θ_i) (i = 1, 2, ..., K), where θ_i is a real-valued parameter vector. We suppose here that x is distributed according to a finite mixture distribution (see, e.g., [8]) with K components given by

p(x|θ : K) = Σ_{i=1}^{K} π_i p_i(x|θ_i),   (1)

where π_i > 0 (i = 1, ..., K) and Σ_{i=1}^{K} π_i = 1. We set θ = (π_1, ..., π_{K−1}, θ_1, ..., θ_K). Here π_i denotes the degree to which the i-th topic is likely to appear in the text stream. Note that each component in the mixture defines a single cluster in the sense of soft clustering.
Throughout this paper we suppose that each p_i(x|θ_i) takes the form of a Gaussian density.
[Figure 1: Topic Trend Analysis System — a text data stream feeds discounting learning of finite mixture models 1 through K, followed by dynamic model selection, topic emergence detection, and topic characterization, producing a timeline of topics.]
Letting d be the dimension of each datum,

p_i(x|θ_i) = N(x|μ_i, Σ_i)
           = (1 / ((2π)^{d/2} |Σ_i|^{1/2})) exp( −(1/2)(x − μ_i)^T Σ_i^{−1} (x − μ_i) ),   (2)

where μ_i is a d-dimensional real-valued vector, Σ_i is a d × d matrix, and we set θ_i = (μ_i, Σ_i). In this case (1) is a so-called Gaussian mixture. Note that the Gaussian density may be replaced with any other form of probability distribution, such as a multinomial distribution.
In terms of a finite mixture model, a topic structure is identified by A) the number of components K (how many topics exist), B) the weight vector (π_1, ..., π_K) indicating how likely each topic is to appear, and C) the parameter values θ_i (i = 1, ..., K) indicating how each topic is distributed. A topic structure in a text stream must be learned in an on-line fashion. Topic emergence detection is conducted by tracking the change of main components in the mixture model. Topic characterization is conducted by classifying each text into the component for which the posterior is largest and then extracting feature terms characterizing the classified texts. Topic drift may be detected by tracking changes of the parameter value θ_i for each topic i. These tasks are described in detail in the sections to follow.
The overall flow of the tasks is illustrated in Figure 1. A text is sequentially input to the system. We prepare a number of finite mixture models, for each of which we learn statistical parameters using the time-stamp based learning algorithm to perform topic identification. These tasks are performed in parallel. On the basis of the input data and learned models, we conduct dynamic model selection for choosing the optimal finite mixture model. We then compare the new optimal model with the last one to conduct topic emergence detection. Finally, for each component of the optimal model, we conduct topic characterization.
TOPIC STRUCTURE IDENTIFICATION WITH DISCOUNTING LEARNING
In this section we propose an algorithm for learning a topic structure, which we call a time-stamp based discounting topic learning algorithm.
The algorithm is basically designed as a variant of the incremental EM algorithm for learning a finite mixture model (see, e.g., Neal and Hinton [9]). It is distinguished from existing algorithms by the following three main features:
1) Adaptive to the change of the topic structure. The parameters are updated by forgetting out-of-date statistics as time goes on. This is realized by putting a larger weight on the statistics for more recent data.
2) Making use of time stamps for texts. Not only the time order of texts but also their time stamps are utilized to make the topic structure depend on the timeline. For example, for two text data x_{t_1}, x_{t_2} (t_1 < t_2), if the length t_2 − t_1 is larger, the topic structure learned at time t_2 will be less affected by that at time t_1.
3) Normalizing data of different dimensions. We consider the on-line situation where the dimension of a datum may increase as time goes by. This situation actually occurs because new words may be added to the list of words every time a new text is input. Hence it is necessary to normalize data of different dimensions.
We suppose that text data x_1, x_2, ... are given in this order, and each has a time-stamp indicating when it appeared. Here is a description of the algorithm, in which λ is a discounting parameter, γ_i denotes the posterior density of the i-th component, and m is introduced for the calculation of weights for old statistics.
Time-stamp Based Discounting Learning Algorithm
Initialization:
Set initial values of π_i^{(0)}, μ_i^{(0)}, Σ_i^{(0)}, m^{(0)} (i = 1, ..., k). Let α > 0 and 0 < λ < 1 be given.
Iteration:
For t = 1, 2, ... do the following procedure.
Let the t-th data be x_t and its time stamp be t_new. Let the time stamp of the (t − 1)-th data be t_old.
For i = 1, ..., k, update the parameters according to the following rules:

p(i|x_t)   := π_i^{(t−1)} p_i(x_t | μ_i^{(t−1)}, Σ_i^{(t−1)}) / Σ_{l=1}^{k} π_l^{(t−1)} p_l(x_t | μ_l^{(t−1)}, Σ_l^{(t−1)})
γ_i^{(t)}  := WA(p(i|x_t), 1/k | 1, α)
π_i^{(t)}  := WA(π_i^{(t−1)}, γ_i^{(t)} | m^{(t−1)}, λ^{−(t_new − t_old)})
μ_i^{(t)}  := WA(μ_i^{(t−1)}, x_t | π_i^{(t)} m^{(t−1)}, λ^{−(t_new − t_old)} γ_i^{(t)})
M_i^{(t)}  := WA(M_i^{(t−1)}, x_t x_t^T | π_i^{(t)} m^{(t−1)}, λ^{−(t_new − t_old)} γ_i^{(t)})
Σ_i^{(t)}  := M_i^{(t)} − μ_i^{(t)} μ_i^{(t)T}
m^{(t)}    := λ^{(t_new − t_old)} m^{(t−1)} + 1,

where M_i denotes the second-order moment statistic and WA denotes the operation such that

WA(X, Y | A, B) = (A / (A + B)) X + (B / (A + B)) Y.

Generally, we set the initial values π_i^{(0)} = 1/K and m^{(0)} = 0, assign a small value to Σ_i^{(0)}, and set the μ_i^{(0)} to the first x_t's, which differ from each other. This algorithm updates π_i, μ_i, and Σ_i as weighted averages of the latest parameter value and the new statistics. The weight ratio is m^{(t−1)} : λ^{−(t_new − t_old)} for π_i, and π_i^{(t)} m^{(t−1)} : λ^{−(t_new − t_old)} γ_i^{(t)} for μ_i and Σ_i, respectively.
Note that Yamanishi et al.'s sequentially discounting learning algorithm [14] can be thought of as a special case of this algorithm in which the time interval t_{l+1} − t_l is independent of l. In that case, if we further let λ = 1, the algorithm becomes an ordinary incremental EM algorithm.
In our implementation, we assumed that Σ_i is a diagonal matrix for the sake of computational complexity. The scalability issue of dealing with a general matrix Σ_i remains for future study.
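The following NumPy sketch (ours) implements one iteration of a time-stamp based discounting update for a diagonal-covariance Gaussian mixture in the spirit of the rules above. The variable names, the diagonal-covariance simplification, and the exact weighting of the historical statistics follow our reading and are not a verbatim reimplementation.

```python
import numpy as np

def discounting_update(x, t_new, t_old, state, alpha=1.0, lam=0.99):
    """One step of a time-stamp based discounting EM-style update.
    `state` holds pi (k,), mu (k, d), var (k, d), the second moment
    M (k, d), and the effective sample count m."""
    pi, mu, var, M, m = (state[key] for key in ("pi", "mu", "var", "M", "m"))
    decay = lam ** (t_new - t_old)                # forget old statistics

    # posterior responsibilities p(i | x_t) under the current model
    log_like = -0.5 * (np.sum(np.log(2 * np.pi * var), axis=1)
                       + np.sum((x - mu) ** 2 / var, axis=1))
    post = pi * np.exp(log_like - log_like.max())
    post /= post.sum()

    k = len(pi)
    gamma = (post + alpha / k) / (1.0 + alpha)    # smooth toward uniform

    def wa(old, new, w_old, w_new):               # the WA(. , . | . , .) operation
        return (w_old * old + w_new * new) / (w_old + w_new)

    pi = wa(pi, gamma, m * decay, 1.0)
    w_hist = (pi * m * decay)[:, None]            # discounted per-component mass
    w_new = gamma[:, None]                        # responsibility of the new text
    mu = wa(mu, x, w_hist, w_new)
    M = wa(M, x * x, w_hist, w_new)
    var = np.maximum(M - mu ** 2, 1e-6)           # Sigma_i = M_i - mu_i mu_i^T (diagonal)
    m = m * decay + 1.0
    return {"pi": pi, "mu": mu, "var": var, "M": M, "m": m}, gamma
```

Setting lam = 1 and using unit time gaps reduces this to an ordinary incremental EM step, matching the remark above.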
TOPIC EMERGENCE DETECTION WITH DYNAMIC MODEL SELECTION
In this section we are concerned with the issue of topic emergence detection, i.e., tracking the emergence of a new topic. We reduce this issue to that of selecting the optimal components in the mixture model dynamically. We call this statistical issue dynamic model selection (see also [7]).
The key idea of dynamic model selection is to first learn a finite mixture model with a relatively large number of components, and then to select main components dynamically from among them on the basis of Rissanen's predictive stochastic complexity [10].
The procedure of dynamic model selection is described as follows:
Initialization:
Let K_max (the maximum number of mixture components) and W (window size) be given positive integers. Set initial values of π_i^{(0)} and θ_i^{(0)} = (μ_i^{(0)}, Σ_i^{(0)}) (i = 1, ..., K_max).
Iteration:
For t = 1, 2, ..., do the following procedure 1 to 4:
1. Model Class Construction:
Let G_i^t be the window average of the posterior probability, i.e., G_i^t = (γ_i^{t−W} + ... + γ_i^t)/W. For k = 1, ..., K_max, do the following procedure: Let i_1, ..., i_k be the indices of the k highest scores such that G_{i_1}^{(t−1)} ≥ ... ≥ G_{i_k}^{(t−1)}. Construct the following mixture model with k components: For s = t − W, ..., t,

p^{(t−1)}(x | i_1, ..., i_k) = Σ_{j=1}^{k−1} π_{i_j}^{(t−1)} p_{i_j}(x | θ_{i_j}^{(t−1)}) + (1 − Σ_{j=1}^{k−1} π_{i_j}^{(t−1)}) U,

where U is a uniform distribution over the domain.
2. Predictive Stochastic Complexity Calculation:
When the t-th input data x_t with dimension d_t is given, compute

S^{(t)}(k) = Σ_{s=t−W}^{t} ( −log p^{(s)}(x_s | i_1, ..., i_k) ) / d_s.   (3)

3. Model Selection:
Select k*_t minimizing S^{(t)}(k). Let p_{i_j}(x | θ_{i_j}^{(t−1)}) (j = 1, ..., k*_t) be the main components at time t, which we write as {C_1^{(t)}, ..., C_{k*_t}^{(t)}}.
4. Estimation of Parameters:
Learn a finite mixture model with K_max components using the time-stamp based discounting learning algorithm. Let the estimated parameter be (π_1^{(t)}, ..., π_{K_max}^{(t)}, θ_1^{(t)}, ..., θ_{K_max}^{(t)}).
Note that S^{(t)}(k) can be thought of as a variant of Rissanen's predictive stochastic complexity [10], normalized by the dimension of each datum, which can be interpreted as the total code length required for sequentially encoding the data stream x_{t−W}, ..., x_t into a binary string.
Once the main components C_1^{(t)}, ..., C_{k*_t}^{(t)} are obtained, we compare them with C_1^{(t−1)}, ..., C_{k*_{t−1}}^{(t−1)} to check the emergence of a new topic or the disappearance of an existing topic in the following way. If a new component is selected at some point and remains for a longer time than a specified threshold, we may determine that a new topic has emerged. Specifically, if the optimal number k*_t of components becomes larger than k*_{t−1}, we can recognize that a new topic has emerged. Similarly, if an existing component is not selected at some time and does not appear any longer, then we may determine that the topic has disappeared.
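A schematic Python version of the selection step (ours) is given below; the per-component interface (a `weight` attribute and a `density(x)` method), the window bookkeeping, and the per-text dimensions are assumed to be supplied by the learning algorithm of Section 3.

```python
import numpy as np

def select_num_topics(window_texts, dims, components, G, k_max, uniform_density):
    """For each k, keep the k components with the highest window-averaged
    posteriors G[i], score the window by the normalized predictive code
    length S(k), and return the k (and component indices) minimizing it."""
    order = np.argsort(G)[::-1]                       # highest G first
    best_k, best_score, best_idx = None, np.inf, None
    for k in range(1, k_max + 1):
        idx = order[:k]
        score = 0.0
        for x, d in zip(window_texts, dims):
            # mixture of the k-1 strongest components plus a uniform term
            p = sum(components[i].weight * components[i].density(x)
                    for i in idx[:-1])
            p += max(1.0 - sum(components[i].weight for i in idx[:-1]),
                     0.0) * uniform_density
            score += -np.log2(max(p, 1e-300)) / d     # code length per dimension
        if score < best_score:
            best_k, best_score, best_idx = k, score, idx
    return best_k, list(best_idx)
```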
TOPIC CHARACTERIZATION WITH INFORMATION GAIN
Once the optimal finite mixture model is obtained, we are concerned with the issue of how to characterize each topic. We address this issue by extracting terms characterizing each topic and by observing the growth or decay of each topic component. Details are shown below.
A) Extracting terms characterizing each topic. We attempt to characterize each topic by extracting characteristic words for it. We perform this task by computing the information gain of possible words.
In the time-stamp based discounting topic learning algorithm, the posterior probability distribution over the set of clusters is estimated every time a text datum is input. According to that posterior distribution, an input text is categorized into the component for which the posterior probability is largest. This clustering task can be performed in an on-line fashion.
After observing the t-th datum x_t, for i = 1, ..., k, let S_t(i) be the set of texts in x^t = x_1, ..., x_t classified into the i-th component, and let t_i be the size of S_t(i). Let S_t = ∪_{i=1}^{k} S_t(i).
Below we show the method for computing the information gain of a term w for each topic component. For any term w, let S(w) be the set of vectors in S_t such that the frequency of w is larger than a given threshold, and let m_w be the size of S(w). Let S(w̄) be the set of vectors in S_t such that the frequency of w is not larger than the threshold, and let m_{w̄} be the size of S(w̄).
For a specified topic component, say the i-th component, let m_w^+ be the number of vectors in S(w) that are also included in S_t(i). Let m_{w̄}^+ be the number of vectors in S(w̄) that are also included in S_t(i).
Then we define the information gain of w for the i-th topic component as follows:

IG(w|i) = I(t, t_i) − ( I(m_w, m_w^+) + I(m_{w̄}, m_{w̄}^+) ),

where I(x, y) is an information measure such as stochastic complexity [10] or extended stochastic complexity [13][5]. The stochastic complexity [10] is given as follows:

I(x, y) = x H(y/x) + (1/2) log(x/2),

where H(x) = −x log x − (1 − x) log(1 − x) is the binary entropy function, and log's base is 2. A special case of extended stochastic complexity is given as follows [13][5]:

I(x, y) = min{y, x − y} + c √(x log x),

where c is a constant.
We select a specified number of terms w with the largest information gains. We can think of them as the set of terms characterizing the i-th topic component.
The statistics m_w, m_w^+, m_{w̄}, m_{w̄}^+ needed for computing the information gain can be calculated in an on-line fashion. Hence the topic characterization task is conducted in real time.
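For instance, the stochastic-complexity based information gain can be computed from the four counts as follows (our sketch; the toy counts at the end are invented for illustration, and the counts are assumed to be maintained incrementally as described above).

```python
import math

def sc(x, y):
    """Stochastic complexity I(x, y) = x*H(y/x) + 0.5*log2(x/2)."""
    if x <= 0:
        return 0.0
    r = y / x
    h = 0.0 if r <= 0.0 or r >= 1.0 else \
        -r * math.log2(r) - (1 - r) * math.log2(1 - r)
    return x * h + 0.5 * math.log2(x / 2)

def info_gain(t, t_i, m_w, m_w_pos, m_nw, m_nw_pos):
    """IG(w | i) = I(t, t_i) - ( I(m_w, m_w^+) + I(m_nw, m_nw^+) )."""
    return sc(t, t_i) - (sc(m_w, m_w_pos) + sc(m_nw, m_nw_pos))

# toy counts: 100 texts, 30 in topic i; 20 contain w, 15 of them in topic i
print(info_gain(100, 30, 20, 15, 80, 15))
```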
B) Observing the growth or decay of each cluster. Let G_i^{(t)} be the window average of the posterior probability of the i-th topic component, that is, G_i^{(t)} = (γ_i^{(t−W)} + ... + γ_i^{(t)})/W. G_i^{(t)} increases when texts corresponding to the i-th topic are input, and decreases otherwise. We can see how rapidly a topic grows by observing G_i^{(t)} as t goes by.
EXPERIMENTAL RESULTS
We conducted an experiment on real data: contact data from a help desk for an internal e-mail service. An example of the data records is presented in Table 1. Each record has fields for the contact date/time, question/request, answered date/time, answer, and so on. The number of records is 1202. The dates of the first and last contacts are Feb 21, 2004 and May 20, respectively.
We input the contact dates as the time-stamps and the questions/requests as the text data to our system. We set K_max to 50 and λ to 0.99. Our system ran on an NEC Express5800 with a 1 GHz Pentium III and a 1 GB memory. The system was implemented in C, and the OS was Windows 2000 Server. Processing the 1202 records of data took about five minutes.

[Figure 2: Number of main topics — the number of selected components k*_t plotted against the date.]

Figure 2 shows the number of components k*_t selected by our system as main topics. The number increases at the
beginning of March, and has a peak in the middle of April. Since a fiscal year begins in April in Japan, we can suppose that the number of topics at the help desk increases around the first day of April.
Let us look into a few of the components, because we do not have enough space for all of them. Here, we observe Components 27 and 42 in detail. Figure 3 shows the window averages G_27 and G_42 of the posterior probabilities and the periods during which the components are selected as main topics.

[Figure 3: G_i and the periods during which Components 27 and 42 are main topics.]

G_27 increases in the beginning of April and has its first peak at April 12. Then it repeats increase and decrease until the middle of May. The corresponding component is selected as main from the first week of April, and remains main until the middle of May (with short discontinuances). G_42 is positive during April, and the corresponding topic is also main during April. The lines of the G_i's indicate how important the corresponding topics are at each time. Moreover, we can observe from the figure how the emerged topics grow and disappear. The topic corresponding to Component 42 emerges at the beginning of April, grows for two weeks, is attenuated, and then drops out of the main topics at the end of April.
Table 1: Examples of help desk data records

Contact date/time  | Question/Request                         | Answered date/time | Answer
Feb 26 2004 14:05  | I forgot my password. How can I ...      | Feb 26 2004 14:48  | You can get a new ...
Feb 26 2004 14:08  | Until what time is an account for a ...  | Feb 26 2004 14:25  | It is valid for 14 days after retirement.
Feb 26 2004 14:09  | Is it possible to forward mails from ... | Feb 26 2004 15:09  | Yes. You can set up by ....
...                | ...                                      | ...                | ...
....
for Component 27.Texts classified into this component are
questions like "Is it possible to use Service XXX after I am
transfered to YYY?".That kind of questions may increase
around the beginning of a fiscal year."Service ZZZ" and
"failure" were extracted as chracteristic words for Component
42.Actually, Service ZZZ failed in the beginning of
April, then, the topic consists of related complaints and
questions.
In this way we can recognize the emergence, growth, and
decay of each topic from the system.Through this example
it has turned out that our framework for topic trend analysis
are very effective for tracking dynamics of topic trends in
contact data at a help desk.
CONCLUSION AND FUTURE STUDY
In this paper we have proposed a framework for tracking the dynamics of topic trends using a finite mixture model. In this framework the three main tasks, namely topic structure identification, topic emergence detection, and topic characterization, are unified within a single framework. Topic structure identification has been realized by our unique time-stamp based learning algorithm, which tracks topic structures adaptively by forgetting out-of-date statistics. Topic emergence detection has been realized on the basis of the theory of dynamic model selection, which detects changes in the optimal number of components in the finite mixture model to check whether a new topic has appeared or not. Topic characterization has been realized by on-line text clustering and feature extraction based on information gain. Through experiments using real data collected at a help desk, it has been demonstrated that our framework works well in the sense that the dynamics of topic trends can be tracked in a timely fashion.
The following issues remain open for future study:
Context-based topic trend analysis: In this paper we have proposed an approach to word-based topic trend analysis. However, we need to further analyze contexts, i.e., relations among words, in order to analyze the semantics of topics more deeply.
Multi-topic analysis: We supposed that one text comes from a single mixture component corresponding to a single topic. How to deal with texts having multiple topics is left for future study.
| finite mixture model;CRM;time-stamp based discounting learning algorithm;topic structure identification;topic characterization;topic detection and tracking;time-stamp based learning algorithm;Topic Structure Identification;topic emergence detection;text mining;Topic Emergence Detection;tracking dynamics;dynamic model selection;Data Mining;information gain;topic trends;Topic Characterization;text data streams;model selection;topic trend;topic analysis |
2 | A Case Study on How to Manage the Theft of Information | This paper shows the importance that management plays in the protection of information and in the planning to handle a security breach when a theft of information happens. Recent thefts of information that have hit major companies have caused concern. These thefts were caused by companies' inability to determine risks associated with the protection of their data and these companies lack of planning to properly manage a security breach when it occurs. It is becoming necessary, if not mandatory, for organizations to perform ongoing risk analysis to protect their systems. Organizations need to realize that the theft of information is a management issue as well as a technology one, and that these recent security breaches were mainly caused by business decisions by management and not a lack of technology. | INTRODUCTION
After counter-terrorism and counter-intelligence, cyber crime is
the third highest priority for the U.S. Federal Bureau of Investigation [4]. With
the rise of the theft of information and the lure of big profits for
this stolen information, it is necessary for information systems to
have the ability to protect this valuable asset. It is estimated that a
credit card number unsupported by any other documentation is
worth $10, and a credit history report retails for $60 [2]. Recent
breaches of information systems that have lead to thefts of
information have shown that management practices and not
technology was part of the issue and in some cases the primary
cause of the theft of the information. With each of these thefts,
there is a third party committing a crime, but in each case, risk
analysis could have been used to avoid or to help mitigate the
theft. It is becoming a necessity that companies examine their
business practices and company policies to avoid risks associated
with information stealing. The solution to information stealing
does not reside in technology alone but also requires an
understanding by management of the business and the risks
associated with it. This paper examines the theft of information
from different companies in order to explain the shortcomings of management practices that led to the thefts.
CASE STUDIES
In May of 2005, Citigroup lost computer tapes that were being
sent to the credit bureau via UPS that included Social Security
numbers and payment history information for 3.9 million
customers. After this event, this New York based company has
decided that it will start sending its data to the credit bureau
electronically using encryption [8].
Citigroup should have learned a lesson from Time Warner who
lost a shipment of backup tapes that contained personal
information of 600,000 employees that was being sent to an
offsite data storage company in March of 2005 [9]. But the
question remains, why was Citigroup sending sensitive
information unsecured? Why did they not encrypt the data in the
first place, and why did they realize that these tapes could get lost
or stolen as evident to what happened with Time Warner? The
answer is because they did not correctly identify the risk.
Citigroup believed that UPS was a secure method for sending this
information and that the data would be difficult to retrieve off the
tapes because of the hardware needed to read the tapes. Citigroup
needed to evaluate this risk of properly protecting confidential
information while in transmission. Now, Citigroup has the issue
of dealing with the negative publicity associated with this event, and the loss of any potential customers/revenue because of
it. This issue would have been avoided had Citigroup properly identified this risk and taken the steps to protect this
information. If the tapes were lost and the data was encrypted,
then this story would have never happened.
2.2 Case II: ChoicePoint
ChoicePoint has made more than 50 acquisitions since 1997 to
make it one of the largest collections of personal data in the
United States. Choicepoint sells data "to clients doing
background checks on job and loan applicants and conducting
criminal investigations" [10]. On February 16, 2005,
ChoicePoint went public to tell 145,000 people that identity
thieves may have gained access to their personal information
including their Social Security numbers and credit reports.
"Authorities believe it was the work of a group of people who
used IDs stolen from legitimate business people to set up phony
businesses that contracted with ChoicePoint for ID checks,
Bernknopf (ChoicePoint's spokesperson) said" [5].
With ChoicePoint's security incident, there was no firewall
hacked, or an IDS fooled. This was a deceptive scheme that took
advantage of security holes in the business process.
ChoicePoint's CISO, Rich Baich, stated "The mislabeling of this
event as a hack is killing ChoicePoint. It's such a negative
impression that suggests we failed to provide adequate protection.
Fraud happens everyday. Hacks don't" [10]. ChoicePoint
seemed to push that they were the victims of fraud, and not at
fault. The bottom line is that confidential information was stolen,
and the individuals who had their information stolen do not care if
it was a hacker or if the company was a victim of fraud. ChoicePoint failed to identify the holes in its business process that allowed this event to happen. Had someone hacked into their system, it would have led to the same result: the theft of information. ChoicePoint needs to recognize that identifying
risks with their business process is just as important as securing
their information system from an external hacker.
2.3 Case III: Egghead.com
Egghead Software was a company that opened in 1984 to sell
computer hardware and software that grew to have more than 205
stores worldwide. Then in 1998 the company moved its business
to the internet as Egghead.com.
In December of 2000, Egghead.com stated that "a hacker has
breached its computer system and may have gained access to its
customer database" [6]. Jerry Kaplan, Egghead.com's co-chairman
, stated that there was "no evidence" to support that the
database with the credit card numbers for its customer was stolen
but, he also could not give confirmation that they were not stolen.
"Egghead's inability to determine how many of it's customers
credit cards had been compromised may mean that the company
does not have a real-time auditing system in place, said Paul
Robertson, senior developer for security service firm TruSecure
Corp. `If you don't know how many credit-card numbers you lost,
you are giving a quick, blanket, worst-case answer--and then
finding out what happened afterwards,' he said." [1]. The way
that Egghead.com handled its security incident showed that they
did not have a good plan to manage the theft of information, and
it appeared as if they made the plan to handle this situation as it
happened. This lack of planning and risk analysis by
management caused Egghead.com's business to suffer
tremendously. Shortly after this event, Egghead.com went into bankruptcy, and on November 26, 2001, Amazon.com acquired Egghead.com's assets in Bankruptcy Court [6].
It appears that Egghead.com's inability to determine with certainty the extent of the information stolen caused more damage to the company's reputation than the actual event itself.
If Egghead.com had a well developed incident response plan in
place to handle this security breach and a way to handle the media
that followed, Egghead.com may have been able to weather the
storm and stay in business. But all customer confidence was lost
and Egghead.com was not able to recover.
2.4 Case IV: New Jersey Crime Ring
Bank employees for Wachovia Corporation, Bank of America
Corporation, Commerce Bancorp Inc., and PNC Bank stole
information on 676,000 customer accounts that are all New Jersey
residents. It is considered the largest banking security breach in
history by the U.S. Department of the Treasury. "The suspects
pulled up the account data while working inside their banks, then
printed out screen captures of the information or wrote it out by
hand, Lomia (a New Jersey Police Detective) said. The data was
then provided to a company called DRL Associates Inc., which
had been set up as a front for the operation. DRL advertised itself
as a deadbeat-locator service and as a collection agency, but was
not properly licensed for those activities by the state, police said"
[13].
With this security breach, there was no technology involved. No
hackers breached the information system. This was completely
an inside job. The question becomes of how this could have been
prevented? The answer is that in some cases the theft of
information can not be prevented. The only the thing that
management can do is be prepared for when it does happen.
Because of incidents like this, it is becoming a duty of
management to have an incident response plan in place long
before a security breach happens. From a risk analysis viewpoint,
an incident like this is difficult to detect and almost impossible to
stop before it happens. But when it does happen and the criminals
are caught, it becomes a necessity to punish the ones responsible
to the full extent of the law to deter others from following suit.
2.5 Case V: LexisNexis
LexisNexis is a provider of legal and business data. In March of
2005, LexisNexis announced that the information on 32,000
people was stolen. These breaches occurred at one of the
subsidiary companies, Seisint Inc. Seisint Inc. was the provider of data to the Multistate Anti-Terrorism
Information Exchange (MATRIX) system. "LexisNexis, which
acquired Seisint of Boca Raton, Florida, in September for $775
million, expressed regret over the incident and said that it is
notifying the individuals whose information may have been
accessed and will provide them with credit-monitoring services"
[12]. In this incident, hackers stole username and passwords of
legitimate users to access the confidential information. In a
statement, "Kurt Sanford, president and CEO of LexisNexis
Corporate and Federal Markets, said that the company will
improve the user ID and password administration procedures that
its customers use and will devote more resources to protecting
user's privacy and reinforcing the importance of privacy" [12].
This security breach is very similar to the incident that happened
at ChoicePoint who is one of LexisNexis's competitors.
There are several policies that should have been implemented that
could have reduced the risk of this security breach. Since
LexisNexis gives third parties access to its confidential information, there is a need to educate these organizations on certain practices to protect the data. Where was this education,
and was there a lack of education due to the possible effect that it
could have on business? Also, what was the password policy for
its customers? LexisNexis has not elaborated on the details of the
security breach, but considering the statement of the CEO of
LexisNexis after the incident, there clearly seems to have been a failure to detect the risk associated with their customers' password policy, a risk that could result in a theft of information. LexisNexis's inability to properly assess this risk caused the security breach. Through education and a secure password administration policy, this event could have been avoided.
RESULTS AND DISCUSSION
When analyzing these case studies, an important thing to ponder
is that for every security breach reported, how many go
unreported? These security breaches could have been avoided
with proper risk assessment and risk analysis, or at least the
probability of a security breach could have been reduced greatly.
For all security breaches, the prevention or at least the reduction
of the probability of the security breach begins and ends with
decisions that management makes.
In an organization, when a security breach occurs it causes a
company to re-evaluate their policies that guide their information
security. With this rash of security incidents that have recently
taken place, companies do not need to wait until a security breach
happens to evaluate their security policies and analyze their risks.
Companies need to have an ongoing risk analysis that is
continually developed and re-developed. They need policies that
are ever changing to meet new threats and new security
weaknesses from a both business practices and technology
viewpoints. Looking at the incidents that happened at
ChoicePoint, LexisNexis, and Citigroup, these companies have
technological solutions to protect their data from being stolen, but
they failed at weighing equal importance the security of the data
from a business issue perspective. This showed in their inability
to properly evaluate the risk in the business practices. In several
of the cases, the theft of information occurred because of the
business practices of the company, and technology was not even
involved.
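As a purely illustrative aside on what ongoing risk analysis can mean in practice, the sketch below ranks hypothetical threats by annualized loss expectancy (ALE = single loss expectancy x annualized rate of occurrence). The threats, dollar figures, and frequencies are invented for illustration and are not taken from the cases above.

    # Minimal risk-register sketch; every entry is hypothetical.
    risks = [
        {"threat": "backup tapes lost in transit (unencrypted)", "sle": 5_000_000, "aro": 0.2},
        {"threat": "fraudulent business account opened",         "sle": 2_000_000, "aro": 0.5},
        {"threat": "insider prints customer records",            "sle": 3_000_000, "aro": 0.1},
    ]

    for r in risks:
        r["ale"] = r["sle"] * r["aro"]   # annualized loss expectancy

    # Rank risks so controls (e.g., encrypting tapes, vetting customers)
    # can be targeted where the expected loss is highest.
    for r in sorted(risks, key=lambda x: x["ale"], reverse=True):
        print(f'{r["threat"]}: ALE = ${r["ale"]:,.0f}')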
Also, companies need to learn from the mistakes of others
because history will repeat itself if the lesson is not learned.
There is an age-old saying that a wise person learns from their
mistakes, but an even wiser person learns from the mistakes of
others. Citigroup needed this advice. With Citigroup's loss of
their backup tapes, they should have learned from the mistake that
Time Warner made just months earlier, but they did not. Security
policies and practices need the flexibility to change, and
management has a responsibility to make these changes when
new threats or new weakness surface so that they can protect their
data.
Companies and organizations need to realize the importance of
making information security a business issue as well as a
technological one. With the issue that happened with
Egghead.com, they did have security systems in place to protect
their data from being stolen, "but it lacked the kind of coordinated
organizational response necessary to convince customers and
shareholders that their sensitive data were actually secure."
Egghead.com lost 25% of its stock value when their customer
data was stolen [7]. Egghead.com was not ready for the media
storm that followed the security breach which ultimately caused
their collapse. By making information security a business issue,
as well as a technological one, companies can add strategic,
operational, and organizational defenses to protect their data.
CONCLUSIONS
As more identity thefts occur, companies that make their money
from storing this information are going to become liable. " `The
ChoicePoint scandal has been a wake up call for how vulnerable
consumers are to identity theft because of the lack of security
standards for the largely unregulated information broker
industry,' said Gail Hillebrand, Senior Attorney for Consumers
Union's West Coast Office. `This bill will ensure that information
brokers are held accountable for enforcing tough security
practices to prevent thieves from gaining access to sensitive
consumer data. And it gives consumers important new rights to
examine the information maintained about them and to correct
any errors they may find' [3].
Companies need to find the importance of protecting their data
from both technology and business practices weaknesses.
Companies view the protection of their data from a technology
issue, but fail to realize the importance that management plays in
protecting their systems with the creation of policies and
understanding the risks that face their information systems.
From a consumer standpoint, if a company is making a profit from someone's personal information and fails to protect this data, should it not provide some sort of reparation? Companies own
and manage consumer information, and individuals have little
power over their information that is controlled by these
organizations. As identity theft continues and companies fail at
protecting their data, legislation will be passed that will force
companies to comply with regulatory standards that may force companies to provide this reparation to individuals who have their identity stolen.
Today, there are only laws to protect data in certain industries.
This includes the Health Insurance Portability and Accountability
Act for healthcare and the Gramm-Leach-Bliley Act for financial
services. With consumer groups voicing their opinions regarding
the theft of information from companies, the US Congress and state legislatures are preparing to pass broader data
privacy protection to protect consumers [11].
There are steps that companies and organizations need to take to
protect themselves from the theft of information. First,
companies need to be prepared when a security breach occurs
because a risk to an asset is never zero percent. Organizations
need to establish policies and risk assessments that protect their
data from both technology risks and business practices well
before a security breach occurs. This is achieved by companies
having the organizational structure that allows management to
fully understand the business processes and technology that
expose their information systems to threats. Also, companies
need the ability to change and adapt to new threats that oppose
their information. It is not possible to prevent all security
breaches that lead to a theft of information, but companies will
need to have policies and practices in place to better protect the
data. Companies will need not only to weigh technology risks to their information, but also to understand the business issues that expose their information to theft. It no longer matters how the information was stolen, whether it was a hacker or a social engineer that committed the crime; companies need to protect their information from all threats and minimize their risks from all aspects.
REFERENCES
[1] Charny, Ben and Lemos, Robert. December 22, 2000.
Egghead Scrambles to Gauge Damage. Retrieved
06/19/2005 from
http://seclists.org/lists/isn/2000/Dec/0134.html
[2] Crawford, Michael. June 16, 2005. Criminals Grasp the
Metrics of Information Value. Retrieved 06/20/2005 from
http://www.computerworld.com.au/index.php?id=550545875
&eid=-255
[3] ConsumersUnion.org. Consumers Union applauds Nelson
(FL) bill to extend federal oversight to information brokers
like ChoicePoint. Retrieved 06/28/2005 from
http://www.consumersunion.org/pub/core_financial_services
/002027.html
[4] Easen, Nick. April 21, 2004. Cyber Crime is Right Under
Your Nose. Retrieved 06/25/2005 from
http://www.cnn.com/2004/BUSINESS/04/20/go.cyber.securi
ty/index.html
[5] Gross, Grant. February 23, 2005. ChoicePoint's Error Sparks
Talk of ID Theft Law. Retrieved 06/22/2005 from
http://pcworld.com/news/article/0,aid,119790,00.asp
[6] Liu, Bob. December 3, 2001. Egghead.com Becomes Amazon.com Property. Retrieved 06/22/2005 from
http://www.internetnews.com/ec-news/article.php/932871
[7] McKinsey & Company, Inc. June 6, 2002. Managing
Information Security. Retrieved 06/22/2005 from
http://news.com.com/2009-1017-933185.html
[8] McMillian, Robert. June 7, 2005. Citigroup to Encrypt Data
Sent to Credit Bureaus. Retrieved 06/20/2005 from
http://www.computerworld.com/hardwaretopics/hardware/st
ory/0,10801,102315,00.html
[9] Mearian, Lucas. May 2, 2005. Time Warner Says Data of
600,000 Workers Lost. Retrieved 06/21/2005 from
http://www.computerworld.com/databasetopics/data/story/0,
10801,101500,00.html
[10] Mimoso, Michael. April 2005. Damage Control. Retrieved
06/21/2005 from
http://informationsecurity.techtarget.com/magItem/1,291266,
sid42_gci1073914,00.html
[11] Rasmussen, Michael. March 3, 2005. ChoicePoint Security
Breach Will Lead to Increased Regulation. Retrieved
06/25/2005 from
http://www.csoonline.com/analyst/report3416.html
[12] Robert, Paul. March 9, 2005. Hackers Grab LexisNexis Info
on 32,000 People. Retrieved 06/24/2005 from
http://www.pcworld.com/resource/article/0,aid,119953,pg,1,
RSS,RSS,00.asp
[13] Weiss, Todd. May 20, 2005. Scope of Bank Data Theft
Grows to 676,000 Customers. Retrieved 06/24/2005 from
http://www.computerworld.com/securitytopics/security/cybe
rcrime/story/0,10801,101903,00.html
| security breach;risk analysis;Information Security;business practises and policy;information system;cases of information theft;privacy;management issue;Information Security Management;theft of information;human factor;data protection procedure;Security Management;information security;cyber crime;confidential information;incident response plan;encryption;data protection;personal information |
20 | A Survey of Collaborative Information Seeking Practices of Academic Researchers | Information seeking and management practices are an integral aspect of people's daily work. However, we still have little understanding of collaboration in the information seeking process. Through a survey of collaborative information seeking practices of academic researchers, we found that researchers reported that (1) the lack of expertise is the primary reason that they collaborate when seeking information; (2) traditional methods, including face-to-face, phone, and email are the preferred communication mediums for collaboration; and (3) collaborative information seeking activities are usually successful and more useful than individually seeking information. These results begin to highlight the important role that collaborative information seeking plays in daily work. | INTRODUCTION
Information seeking and management practices are an
integral aspect of people's daily work. In organizational
work, information is vital for making decisions and
coordinating activities. Therefore, organizations have
developed a wide variety of processes and technologies to
support their workers' information seeking activities. Much
of this support has been for the individual information
seeker; in most organizations, information seeking has been
traditionally viewed as an individual activity [1, 2].
Yet, collaboration is becoming an increasingly important
component of work in organizations. Multidisciplinary
teams are a common feature of modern organizations [3,
4]. To successfully accomplish their work, team members
must collaborate with each other efficiently and effectively.
One important aspect of the team's work is seeking
information [5]. Yet, we have little understanding of
collaborative information seeking practices [6, 7].
Therefore, to help team members work together effectively
and to design information systems that support their work,
we must understand the collaborative information seeking
practices of team members.
To examine collaborative information seeking (CIS)
practices, we conducted a survey of academic researchers
in a small technology-focused research university.
Researchers have traditionally collaborated with each other
on research projects because of the often cross-disciplinary
nature of the work. This collaboration has increased in
recent years as information and communication
technologies have improved. Although the survey asked a
variety of questions, in this paper, we focus on three
particular areas of interest:
What triggers are most likely to lead to CIS activities?
When engaging in CIS, what media or channel of
communication is most likely used to collaborate?
How successful are these CIS activities?
In a previous study, we identified three triggers that cause
team members to collaborate when seeking information.
These triggers are (1) lack of expertise, (2) complex information need, and (3) information not easily accessible
[8]. In this study, we were interested in identifying which
of these triggers researchers reported to be the most
important reason for them to collaborate when seeking
information. We also wanted to identify what were the
primary mechanisms of collaboration (e.g., e-mail, face-to-face
, etc.). We were also interested in determining the
degree to which researchers found collaborative
information seeking to be successful, particularly in
comparison to individual information seeking.
COLLABORATIVE INFORMATION SEEKING
Although there is limited research on collaborative
information seeking, researchers are beginning to explore this phenomenon in various domains [9].
In a study of two design teams, Poltrock et al [10] found
that each team had different communication and
information seeking practices. Interestingly, they did not
examine an individual's role in the information seeking
process but rather how team members actively worked
together to identify information needs. They argue that an
understanding of collaborative information retrieval will
allow for the informed design of technologies meant to
support such work, and will also allow teams to work more
effectively with these sources of information. In a study of
information behavior in a hierarchical work environment
a military command and control environment
Sonnenwald and Pierce [11] described information seeking
as a dynamic activity in which "individuals must work
together to seek, synthesize and disseminate information."
They examined how team members maintained awareness
of each other's information activities and how this
awareness influenced their information sharing with each
other. Finally, in a study of collaborative information
seeking in the medical domain, Reddy and Dourish [12]
argue that work rhythms play a role in healthcare
providers' collaborative information seeking practices.
Although a few studies have examined collaborative
information seeking in small group settings through
ethnographic field studies, there have been, to the best of
our knowledge, no studies that have used surveys to gather
data on CIS from a larger population sample.
METHODS
Seventy researchers at a small technology-focused research university participated in this study. The majority
were faculty researchers and a small percentage were
graduate research assistants. Most participants were from
science and technology disciplines.
3.2 Materials and Procedures
One-hundred and fifty potential participants were emailed a
request to participate, which included an email link to an
online survey. The response rate was 47%.
The survey included the following items:
1. What causes you to work together when looking for
information?
(a) The information needed is complex.
(b) The information needed requires a different
expertise.
(c) The information is not immediately accessible.
2. What medium are you most likely to use when
collaborating with your teammates to look for
information?
(a) Electronic forum; (b) Email; (c) Face-to-face;
(d) Fax; (e) Instant message; (f) Telephone; (f) Web
conferencing
3. When collaborating with teammates to look for
information, we usually find the information for which
the team is searching.
4. Participating in collaborative information seeking is
easier than individual information seeking.
5. Participating in collaborative information seeking leads
to more relevant information being found than when
individually seeking information.
6. Participating in collaborative information seeking leads
to information being found more quickly than when
individually seeking information.
Participants responded to each phrase under item 1 and to items 3-6 on a scale ranging from 1 (strongly disagree) to 10 (strongly agree), and to item 2 on a scale ranging from 1 (not at all likely) to 10 (very likely). The survey also
included free-text opportunities for the respondents to
provide more information about their answers, if they
chose to do so.
RESULTS & DISCUSSION
In order to determine the triggers that are most likely to lead to collaborative information seeking, the responses to questionnaire items 1a, 1b, and 1c were considered. A one-way within-subjects analysis of variance (ANOVA) was computed with trigger serving as the independent variable with three levels (complexity, expertise, and accessibility) and rating as the dependent variable. This ANOVA was statistically significant, F(2, 132) = 16.878, p < .001. Bonferroni's post hoc tests indicated that
expertise was rated significantly higher (M = 8.17) than
both complexity (M = 6.80) and accessibility (M = 6.73),
while complexity did not significantly differ from
accessibility.
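For readers who want to see the computation spelled out, the following is a minimal Python sketch of the one-way within-subjects ANOVA described above. The ratings are synthetic (67 respondents, consistent with the reported degrees of freedom), and the sums-of-squares partitioning follows the standard repeated-measures formulation rather than any particular statistics package.

    import numpy as np

    def rm_anova(X):
        # One-way within-subjects ANOVA. X: n_subjects x k_conditions ratings.
        n, k = X.shape
        grand = X.mean()
        ss_cond = n * ((X.mean(axis=0) - grand) ** 2).sum()
        ss_subj = k * ((X.mean(axis=1) - grand) ** 2).sum()
        ss_err = ((X - grand) ** 2).sum() - ss_cond - ss_subj
        df_cond, df_err = k - 1, (k - 1) * (n - 1)
        F = (ss_cond / df_cond) / (ss_err / df_err)
        return F, df_cond, df_err

    # synthetic 1-10 ratings for the three triggers
    # (complexity, expertise, accessibility)
    rng = np.random.default_rng(1)
    ratings = np.clip(rng.normal([6.8, 8.2, 6.7], 1.5, size=(67, 3)), 1, 10)
    print(rm_anova(ratings))   # F statistic with df (2, 132)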
The findings indicate that academic researchers will most
often collaborate because they find the information requires
a different expertise than their own. Many academic
research projects are multidisciplinary in nature and require
particular knowledge that a researcher may not have. As
one researcher stated, "The basic reason is that frequently a
wide range of expertise is needed and no one person can
possibly have all the skills needed to be successful."
Although the complexity of the information need and
accessibility of information could lead to collaboration, they
are not viewed as strongly as expertise. In regards to
information accessibility, one researcher points out,
"information is usually accessible; however, someone else
will likely understand it better." For this researcher the
difficulty was not in accessing the information but rather in
understanding its relevance which may require different
expertise.
During the CIS process, different researchers bring their own
particular expertise and perspective to the team. When a
researcher seeks information outside her domain of expertise,
she will often turn to another researcher for help. These
different areas of expertise play an important role in the collaborative information seeking activities of the research
team.
4.2 Communication Mediums for CIS
Activities
In order to examine the relationship between communication
mediums, and to reduce the number of variables for
subsequent analysis, a principal component factor analyses
with a Varimax rotation was computed on the responses to
questionnaire item 2. A four-factor solution was selected because all eigenvalues were above 1, and a logical grouping of sources emerged. We labeled the first factor
"traditional", and it included: email, face-to-face, and
telephone. We labeled the second factor "web" and it
included: instant messenger, web conferencing, and web
sites. The third and fourth factor each included one item,
"electronic forum" and "fax", and were, thus labeled
accordingly. Factor scores were created by using the mean of
all the items that loaded on a given factor, and these factor
scores were used in subsequent analyses.
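A sketch of the factor extraction step follows: components of the media correlation matrix with eigenvalues above 1 are retained and varimax-rotated, mirroring the analysis described. The ratings matrix and the hand-written Kaiser varimax routine are stand-ins rather than the package actually used.

    import numpy as np

    def varimax(loadings, tol=1e-6, max_iter=100):
        # Kaiser varimax rotation of a p x k loading matrix.
        p, k = loadings.shape
        R = np.eye(k)
        d = 0.0
        for _ in range(max_iter):
            L = loadings @ R
            u, s, vt = np.linalg.svd(
                loadings.T @ (L ** 3 - L @ np.diag((L ** 2).sum(axis=0)) / p))
            R = u @ vt
            d_new = s.sum()
            if d_new < d * (1 + tol):
                break
            d = d_new
        return loadings @ R

    def pca_factors(ratings):
        # ratings: respondents x media matrix (e.g., the seven media items).
        corr = np.corrcoef(ratings, rowvar=False)
        eigvals, eigvecs = np.linalg.eigh(corr)
        order = np.argsort(eigvals)[::-1]
        eigvals, eigvecs = eigvals[order], eigvecs[:, order]
        keep = eigvals > 1.0                      # retain eigenvalues above 1
        loadings = eigvecs[:, keep] * np.sqrt(eigvals[keep])
        return varimax(loadings)                  # rotated loadings

    # usage: rotated = pca_factors(ratings_matrix)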
In order to identify the media that are most likely used for collaborative information seeking, a one-way within-subjects analysis of variance (ANOVA) was computed with medium factor scores serving as the independent variable with four levels (traditional, web, electronic forum, and fax) and rating as the dependent variable. This ANOVA was statistically significant, F(3, 195) = 84.709, p < .001. Bonferroni's
post hoc tests indicated that traditional media (M= 8.10)
significantly outscored all other types; both web media
(M=3.64) and electronic forum (M=4.58) significantly
outscored fax (M=2.70) but did not significantly differ one
from the other.
Researchers preferred traditional media for their
communication. Within this category, we included e-mail.
Although e-mail may not seem to fit in the same category as
face-to-face and telephone, it has become such a ubiquitous
communication medium that respondents viewed it as being
similar to face-to-face and the telephone. Furthermore, email
has been in existence much longer than other types of
electronic mediums such as web conferences. People are
more comfortable and experienced with email and personal
conversations, whether these conversations are in person or
on the phone. The other media were not as strongly
embraced. For instance, we had anticipated that web-based media such as web conferencing and instant messaging would have a higher rating than they did. One possible
explanation is "newness" of the technology. For instance,
instant messenger tools are still relatively new and have not
permeated to all groups and ages. Furthermore, some of the
web-based media take time to set-up. Web conferences and
web sites require time and effort unlike picking up the phone
to talk to someone. Interestingly, although not included as a
medium to rate, some participants added campus mail and
"snail mail" as a medium for communication.
Whether collaborators are physically co-located or
geographically dispersed, communication is an essential
component of collaborative information seeking. The
researchers orient towards the mediums that are familiar to
them.
4.3 Success of Collaborative Information
Seeking Activities
In order to address the question of whether collaborations are successful when engaging in CIS, a dichotomous variable was created for each success item (3-6), whereby a rating of 0 to 5 was considered "disagree" and a rating of 6 to 10 was considered "agree". We initially used a 10-point scale in
order to be consistent with the rest of the survey questions.
We then made the decision to reduce the scale to a
dichotomous variable in order to evaluate this question with
a test of statistical significance. Using this dichotomous
variable, a chi-square analysis was performed on the
frequencies for each success item. The results of these
analyses as well as the mean rating for each item, with mean
representing degree of agreement from 1 to 10 (10
representing "strongly agree"), is displayed in Table 1.
Table 1. Means and Chi-Square for Success Factors
Success Factor | Mean | Agree | Disagree | Chi-Square
Usually find info | 8.0152 | 64 | 2 | X2 = 58.242, p < .001
Easier than individual info seeking | 7.1061 | 50 | 16 | X2 = 17.515, p < .001
Find more relevant info than individual info seeking | 7.3788 | 55 | 11 | X2 = 29.333, p < .001
Quicker than individual info seeking | 6.9394 | 48 | 10 | X2 = 24.897, p < .001
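The dichotomization and chi-square tests can be reproduced directly from the agreement counts in Table 1; the short sketch below uses a goodness-of-fit test against equal expected frequencies, which yields the reported statistics (scipy is assumed to be available).

    from scipy.stats import chisquare

    # Ratings of 6-10 were counted as "agree"; counts below are from Table 1.
    success_items = {
        "usually find info":                (64, 2),
        "easier than individual seeking":   (50, 16),
        "more relevant than individual":    (55, 11),
        "quicker than individual seeking":  (48, 10),
    }

    for item, (agree, disagree) in success_items.items():
        chi2, p = chisquare([agree, disagree])   # equal expected frequencies
        print(f"{item}: chi2 = {chi2:.3f}, p = {p:.3g}")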
Success is often subjective and difficult to define,
particularly with ill-defined tasks such as information
seeking. Therefore, we asked four questions related to
success to gain a better understanding of this important
area. Most researchers agreed that when collaborating with colleagues to look for information, they usually found the needed information. They also thought that collaboratively seeking information was easier and led to more relevant information than individually seeking information.
Collaborative information seeking allows researchers to
rely on other colleagues for help and guidance; therefore,
allowing them to focus on their own area of expertise. This
could be one possible reason why researchers strongly
believe that CIS allows them to quickly find more relevant
information when compared to individual information
seeking. At the same time, one researcher provided a note
of caution stating that the success "depends on your team
of seekers." As in many collaborative activities, the success
depends on how well the team of information seekers can
work together when looking for information.
CONCLUSIONS
Collaborative information seeking is an important aspect of
the work done by teams. The findings presented here raise
issues that are important to consider when conceptualizing
collaborative information seeking and how to best support
this activity.
One important issue is how to support information seeking
in geographically dispersed teams. Physically co-located team members can have face-to-face interaction. However,
for "virtual" teams technical support becomes even more
important because they do not have the advantages of the
face-to-face interaction. This technical support could
include features that allow individuals to exchange ideas,
or share searches while collaboratively searching for
information [9].
For the next stages of this study, we plan on conducting a
field study of academic research teams to better understand
the actual interaction of team members during the
collaborative information seeking process.
ACKNOWLEDGMENTS
We would like to thank the anonymous participants who
answered the survey. This research was supported in part
by Missouri Research Board grant 1734.
REFERENCES
1. Ellis, D. (1989). A behavioral model for information
retrieval system design. Journal of Information Science, 15:
p. 237-247.
2. Ellis, D. and M. Haugan. (1997) Modeling the Information
Seeking Patterns of Engineers and Research Scientists in an
Industrial Environment. The Journal of Documentation.
53(4): p. 384-403.
3. Hackman, R. ed. (1990) Groups that Work (and Those That
Don't): Creating Conditions for Effective Teamwork. Jossey-Bass
Publications: San Francisco.
4. Mankin, D., S. Cohen, and T. Bikson. (1996). Teams and
Technology. Boston, MA: Harvard Business School Press.
5. Bruce, H., et al. (2002). A comparison of the collaborative
information retrieval (CIR) behaviors of two design teams. in
Information Seeking In Context: The Fourth International
Conference on Information Needs, Seeking and Use in
Different Contexts. Lisbon, Portugal.
6. Sonnenwald, D.H. and L.A. Lievrouw. (1996). Collaboration
during the Design Process: A Case Study of Communication,
Information Behavior, and Project Performance. in Proc Int
Conf on Research in Information Needs, Seeking, and Use in
Different Contexts. Tampere, Finland: London: Taylor
Graham.
7. Haythornthwaite, C., B. Wellman, and M. Mantei. (1995).
Work Relationships and Media Use: A Social Network
Analysis. Group Decision and Negotiation. 4(3): p. 193-211.
8. Reddy, M. (In submission) Collaborative Information
Seeking: Supporting the work of multi-disciplinary patient
care teams. Journal of American Medical Informatics
Association (JAMIA).
9. Twidale, M. and D.M. Nichols. (1998). Designing Interfaces
to Support Collaboration in Information Retrieval.
Interacting with Computers. 10(2): p. 177-193.
10. Poltrock, S., et al. (2003). Information Seeking and Sharing
in Design Teams. in Proceedings of the 2003 International
ACM SIGGROUP Conference on Supporting Group Work.
11. Sonnenwald, D.H. and L.G. Pierce. (2000). Information
behavior in dynamic group work contexts: interwoven
situational awareness, dense social networks and contested
collaboration in command and control. Information Processing and Management. 36: p. 461-479.
12. Reddy, M. and P. Dourish. (2002). A Finger on the Pulse:
Temporal Rhythms and Information Seeking in Medical
Care. In Proc. of ACM Conf. on Computer Supported
Cooperative Work (CSCW'02). New Orleans, LA: New
York: ACM. p. 344-353.
200 | Transactional Agent Model for Fault-Tolerant Object Systems | A transactional agent is a mobile agent which manipulates objects in multiple computers by autonomously finding a way to visit the computers. The transactional agent commits only if its commitment condition like atomicity is satisfied in presence of faults of computers. On leaving a computer , an agent creates a surrogate agent which holds objects manipulated. A surrogate can recreate a new incarnation of the agent if the agent itself is faulty. If a destination computer is faulty, the transactional agent finds another operational computer to visit. After visiting computers, a transactional agent makes a destination on commitment according to its commitment condition. We discuss design and implementation of the transactional agent which is tolerant of computer faults. | INTRODUCTION
A transaction manipulates multiple objects distributed in
computers through methods. Objects are encapsulations of
data and methods for manipulating the data. A transaction
is modeled to be a sequence of methods which satisfies
the ACID (atomicity, consistency, isolation, and durability) properties [8, 9]. Huge numbers and various types
of peer computers are interconnected in peer-to-peer (P2P)
networks [3]. Personal computers easily become faulty, not only by crashing but also due to hackers and intrusions. A mobile agent
can autonomously escape from faulty computers by moving
to another operational computer. Mobile agents [5, 19] are
programs which move to remote computers and then locally
manipulate objects on the computers.
An ACID transaction initiates a subtransaction on each
database server, which is realized in mobile agents [16, 9,
13]. In this paper, a transactional agent is a mobile agent
which autonomously decides in which order the agent visits
computers in presence of computer faults, and locally manipulates
objects in a current computer with not only atomicity
but also other types of commitment conditions like at-least-one
condition [6]. After manipulating all or some objects in
computers, an agent makes a decision on commit or abort.
For example, an agent atomically commits only if all objects
in the computers are successfully manipulated [4]. An
agent commits if objects in at least one of the computers are
successfully manipulated. In addition, an agent negotiates
with another agent which would like to manipulate a same
object in a conflicting manner. Through the negotiation,
each agent autonomously makes a decision on whether the
agent holds or releases the objects [6, 14].
If an agent leaves a computer, objects locked by the agent
are automatically released. Hence, once leaving a computer,
an agent cannot abort. An agent creates a surrogate agent
on leaving a computer. A surrogate agent still holds locks
on objects in a computer on behalf of the agent after the
agent leaves.
A transactional agent autonomously finds another destination
computer if a destination computer is faulty.
An
agent and surrogate are faulty if the current computer is
faulty. Some surrogate of the agent which exists on another
computer recreates a new incarnation of the agent. Similarly, if a surrogate is faulty, another surrogate detects
the fault and takes a way to recover from the fault. For
example, if an agent takes an at-least-one commitment condition, a fault of the surrogate can be neglected as long as at least one surrogate is operational.
In section 2, we present a system model. In section 3,
we discuss transactional agents. In section 4, we discuss
fault-tolerant mechanism. In sections 5 and 6, we discuss
implementation and evaluation of transactional agents.
SYSTEM MODEL
A system is composed of computers interconnected in reliable
networks. Each computer is equipped with a class
base (CB) where classes are stored and an object base (OB)
which is a collection of persistent objects. A class is composed
of attributes and methods. An object is an instantiation of a class which is an encapsulation of data and methods. If the result obtained by performing a pair of methods op1 and op2 on an object depends on the computation order, op1 and op2 conflict with one another. For example, a pair of methods increment and reset conflict on a counter object. On the other hand, increment and decrement do not conflict, i.e. they are compatible.
A transaction is modeled to be a sequence of methods which satisfies the ACID properties [4]. Especially, a transaction can commit only if all the objects are successfully manipulated. If a method op1 from a transaction T1 is performed before a method op2 from another transaction T2 which conflicts with op1, every method op3 from T1 has to be performed before every method op4 from T2 conflicting with the method op3. This is the serializability property [2, 4]. Locking protocols [2, 4, 7] are used to realize the serializability of transactions. Here, a transaction locks an object before manipulating the object.
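A minimal sketch of the conflict relation and the resulting lock check, using the counter example above; the conflict table and the LockedObject class are illustrative only and are not the paper's implementation.

    # Methods conflict when the result depends on the execution order.
    # For the counter object, increment and reset conflict; increment and
    # decrement are compatible.
    CONFLICTS = {("increment", "reset"), ("reset", "increment")}

    def conflict(op1, op2):
        return (op1, op2) in CONFLICTS

    class LockedObject:
        # Grant a method invocation only if it does not conflict with
        # methods already locked by other transactions.
        def __init__(self):
            self.held = []                       # (transaction, method) pairs

        def try_lock(self, tx, method):
            for other_tx, other_m in self.held:
                if other_tx != tx and conflict(method, other_m):
                    return False                 # requester must wait or abort
            self.held.append((tx, method))
            return True

    counter = LockedObject()
    print(counter.try_lock("T1", "increment"))   # True
    print(counter.try_lock("T2", "decrement"))   # True  (compatible)
    print(counter.try_lock("T3", "reset"))       # False (conflicts with increment)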
A mobile agent is a program which moves around computers
and locally manipulates objects in each computer [5,
18, 19]. A mobile agent is composed of classes. A home
computer home(c) of a class c is a computer where the class
c is stored. For example, each class c is identified by a pair
of the IP address of a home computer home(c) and a local path to the directory where the class c is stored. A home computer home(A) of a mobile agent A is a home computer of the class of the agent A.
TRANSACTIONAL AGENTS
A transactional agent is a mobile agent which satisfies the
following properties:
1. autonomously decides on which computer to visit.
2. manipulates objects on multiple computers.
3. commits only if some commitment condition of the
agent is satisfied, otherwise aborts.
For simplicity, a term agent means a transactional agent
in this paper. Target objects are objects to be manipulated
by an agent.
Target computers have the target objects. An agent A is composed of routing RC(A), commitment CC(A), and manipulation agents MC(A, D1), ..., MC(A, Dn), where Di stands for a target computer of the agent A. Here, let Dom(A) be a set of target computers D1, ..., Dn of an agent A. First, an agent A on a current computer has to move to a computer in Dom(A). A computer Dj to which an agent A on Di moves is a destination computer.
An agent A has to autonomously make a decision on which
computer to visit. In the routing agent RC(A), a destination
computer is selected. Then, the agent A moves to the
destination computer. Here, an agent first finds a candidate
set of possible destination computers. Then, the agent selects
one target computer in the candidate computers and
moves to the computer.
Secondly, a transactional agent A manipulates objects in a current computer D. The agent A initiates a manipulation agent MC(A, D) for manipulating objects in the current computer D from the home computer. If an object base is realized in a relational database system [11], objects are manipulated by issuing SQL commands in MC(A, D).
Lastly, a transactional agent makes a decision on whether
the agent can commit or abort after visiting target computers
. A traditional transaction [2] atomically commits only if
objects in all the target computers are successfully manipulated
. In this paper, we consider other types of commitment
conditions [6]. For example, in the at-least-one commitment,
a transaction can commit only if objects in at least one target
computer are successfully manipulated.
3.2 Routing agent
A transactional agent A locally manipulates objects in a computer Di through the manipulation agent MC(A, Di) and then outputs intermediate objects OUT(A, Di). In the meanwhile, the agent A visits another computer Dj. Here, objects in Dj are manipulated through the manipulation agent MC(A, Dj) by using the intermediate objects In(A, Dj) (= OUT(A, Di)). Thus, the manipulation classes are related by an input-output relation. Here, Di -x-> Dj shows that the manipulation agent MC(A, Di) outputs an intermediate object x which is used by MC(A, Dj). If Di -x-> Dj, the agent A has to visit Di before Dj and the intermediate object x has to be delivered to Dj. The input-output relation is shown in an input-output graph as shown in Figure 1.
Figure 1: Input-output graph (computer nodes D1-D5 and intermediate object nodes x, y, z, w)
There are computer nodes and object nodes. Directed edges Di -> x and x -> Di show that the manipulation agent MC(A, Di) outputs and inputs an object x, respectively. In Figure 1, the agent A outputs an intermediate object w in D1. The agent A uses w in D3, D4, and D5. This means the agent A is required to visit D3, D4, and D5 after D1.
From the input-output graph, a transactional agent A decides in which order the agent visits the computers. A directed acyclic graph (DAG) Map(A) named a map is created from the input-output graph [Figure 2]. Here, a node D shows a computer D with a manipulation agent MC(A, D). A directed edge D1 -> D2 shows that a computer D2 is required to be manipulated after D1. D1 => D2 if and only if (iff) D1 -> D2 or D1 -> D3 => D2 for some computer D3. D1 and D2 are independent (D1 || D2) if neither D1 => D2 nor D2 => D1. Here, a transactional agent A can visit the computers D1 and D2 in any order and can even visit the computers D1 and D2 in parallel. Figure 2 shows an example of a map Map(A) obtained from the input-output graph of Figure 1. Here, an agent A is required to visit a computer D3 after D1, D4 after D2 and D3, and D5 after D4. On the other hand, an agent A can visit D1 and D2 in any order, even in parallel.
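The derivation of the map from the input-output relation can be sketched as follows. The w edges from D1 to D3, D4, and D5 follow the text; the remaining object-to-computer assignments are assumptions chosen only so that the example reproduces the precedence described for Figure 2.

    from collections import defaultdict

    # producer computer -> objects it outputs; object -> computers that use it
    produces = {"D1": ["w"], "D2": ["x"], "D3": ["y"], "D4": ["z"]}
    consumes = {"w": ["D3", "D4", "D5"], "x": ["D4"], "y": ["D4"], "z": ["D5"]}

    def build_map(produces, consumes):
        # Map(A): Di -> set of computers that must be visited after Di.
        succ = defaultdict(set)
        for di, objs in produces.items():
            for obj in objs:
                for dj in consumes.get(obj, []):
                    succ[di].add(dj)
        return succ

    def independent(mp, a, b):
        # Di || Dj: neither precedes the other, directly or transitively.
        def reaches(x, y, seen=()):
            return y in mp.get(x, set()) or any(
                reaches(z, y, seen + (x,)) for z in mp.get(x, set()) if z not in seen)
        return not reaches(a, b) and not reaches(b, a)

    mp = build_map(produces, consumes)
    print(dict(mp))                       # precedence edges of the map
    print(independent(mp, "D1", "D2"))    # True: may be visited in any order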
In Figure 1, the intermediate object w has to be delivered to D3, D4, and D5. There are the following ways to bring an intermediate object x obtained in Di to Dj:
1. A transactional agent A carries the intermediate object x to Dj.
2. x is transferred from Di to Dj before A arrives at Dj.
3. x is transferred from Di to Dj after A arrives at Dj.
A routing agent RC(A) of a transactional agent A with a map Map(A) moves around the computers [Figure 3].
Figure 2: Map (nodes D1, ..., D5)
First,
a collection I of computers which do not have any incoming edge is found in Map(A). For example, I = {D1, D2} in Figure 2. One computer Di is selected in I so as to satisfy some condition, e.g. the Di nearest to the current computer is selected. For example, an agent takes a computer D1 in Figure 2. The agent A moves to Di. Here, a manipulation agent MC(A, Di) is loaded to Di from the home computer. After manipulating objects in Di, Di is removed from Map(A). Another destination Dj is selected and A moves to Dj.
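The destination-selection loop just described can be sketched as below; the map is a precedence structure like the one above, and the "nearest computer" criterion is replaced by a placeholder cost function, which is an assumption.

    def route(map_edges, computers, cost):
        # Repeatedly pick a computer with no incoming edge in Map(A),
        # visit it, and remove it from the map.
        remaining = set(computers)
        edges = {d: set(s) & remaining for d, s in map_edges.items()}
        order = []
        while remaining:
            has_pred = {dj for succs in edges.values() for dj in succs}
            candidates = remaining - has_pred        # the collection I
            dest = min(candidates, key=cost)         # e.g. the nearest computer
            order.append(dest)                       # MC(A, dest) would run here
            remaining.discard(dest)
            edges.pop(dest, None)
            edges = {d: s - {dest} for d, s in edges.items()}
        return order

    map_edges = {"D1": {"D3", "D4", "D5"}, "D2": {"D4"}, "D3": {"D4"}, "D4": {"D5"}}
    print(route(map_edges, ["D1", "D2", "D3", "D4", "D5"], cost=lambda d: d))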
Initially, a routing agent RC(A) of the agent A is loaded and started on a computer. The computer is a base computer base(A) of the agent A. An agent A leaves the base computer for a computer Di. Here, Di is a current computer current(A) of A. If the agent A invokes a method t of a class c on Di, the class c is searched for as follows:
1. The cache of the current computer Di is first searched for the class c. If c is found in the cache, the method t in the cache is invoked.
2. If not, the class base CBi of Di is locally searched. If found, the class c in CBi is taken to invoke t.
3. Otherwise, the class c is transferred from the home computer home(c) into Di.
A history H(A) shows a sequence of computers which an agent A has visited.
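The three-step class search can be sketched as a simple lookup chain; the cache, class base, and home-computer loader below are stand-ins, not an actual mobile-agent API.

    def resolve_class(name, cache, class_base, load_from_home):
        # Search order: (1) cache of the current computer, (2) the local
        # class base CBi, (3) transfer from the home computer home(c).
        if name in cache:                    # step 1: cache hit
            return cache[name]
        if name in class_base:               # step 2: local class base
            return class_base[name]
        cls = load_from_home(name)           # step 3: load from home(c)
        cache[name] = cls                    # keep it cached for later agents
        return cls

    # hypothetical usage
    cache, class_base = {}, {"Counter": "<Counter bytecode>"}
    loader = lambda n: f"<{n} bytecode loaded from home({n})>"
    print(resolve_class("Counter", cache, class_base, loader))
    print(resolve_class("Account", cache, class_base, loader))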
Figure 3: Mobile agent (routing agent, manipulation agent, and map over computers D1 and D2 with class bases CB)
3.3 Manipulation agent
A manipulation agent is composed of not only application-specific
classes but also library classes like JDBC [17] and
JAVA classes [18]. Each computer is assumed to support
a platform to perform a mobile agent on an object base
(OB). A platform includes cache and class base (CB). The
routing, manipulation, and commitment agents of a transactional
agent A are stored in the class base (CB) of the
home computer home(A). If an agent A invokes a method t of a class c in a computer Di, the class c is loaded from the home computer home(c) to the cache in Di. Then, the method t of the class c is performed in Di. If a method u of another class d is invoked in the method t, the class d is loaded from the home computer home(d) as well as the class c. Meanwhile, if another agent B invokes a method t of the class c in Di, the class c in the cache is used to invoke the method t without loading the class c. Thus, if classes are cached in a computer Di, methods in the classes are locally invoked in Di without any communication. Otherwise, it takes a longer time to invoke methods since classes with the methods are transferred from the home computers over the network. Here, the class c is loaded, i.e. cached, to Di. The method t of the class c is performed on Di. If another agent B comes to Di after A has left Di, B can make use of the class c in the cache.
3.4 Commitment agent
If a transactional agent A finishes manipulating objects
in each computer, the following commitment condition is
checked by the commitment agent CC(A):
1. Atomic commitment : an agent is successfully performed
on all the computers in the domain Dom(A), i.e. all-or
-nothing principle used in the traditional two-phase
commitment protocol [4, 15].
2. Majority commitment : an agent is successfully performed
on more than half of the computers in Dom(A).
3. At-least-one commitment : an agent is successfully performed
on at least one computer in Dom(A).
4.
n
r
commitment : an agent is successfully performed on
more than r out of n computers (r
n) in Dom(A).
5. Application specific commitment : condition specified
by application is satisfied.
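These conditions reduce to a predicate over the set of computers on which the agent succeeded. The sketch below is illustrative only; it assumes the domain Dom(A) and the set of successfully manipulated computers are already known, and the application-specific case is passed in as a predicate.

```python
# Sketch of the commitment-condition check performed by CC(A) (illustrative).
def commitment_satisfied(condition, succeeded, domain, r=None, predicate=None):
    ok = len(set(succeeded) & set(domain))
    n = len(domain)
    if condition == "atomic":        # all-or-nothing
        return ok == n
    if condition == "majority":      # more than half of Dom(A)
        return ok > n / 2
    if condition == "at-least-one":
        return ok >= 1
    if condition == "r-out-of-n":    # at least r of the n computers
        return ok >= r
    if condition == "application":   # application-specific predicate
        return predicate(succeeded, domain)
    raise ValueError(condition)
```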
3.5 Resolution of conflict
Suppose an agent A moves to a computer Dj from another computer Di. The agent A cannot be performed on Dj if there is an agent or surrogate B conflicting with A. Here, the agent A can take one of the following ways:
1. Wait: The agent A in the computer Di waits until the agent A can land at the computer Dj.
2. Escape: The agent A finds another computer Dk which has objects to be manipulated before Dj.
3. Negotiate: The agent A negotiates with the agent B in Dj. After the negotiation, B releases the objects or aborts.
4. Abort: The agent A aborts.
Deadlock among agents may occur. If the timer expires, the agent A takes the following way:
1. The agent A retreats to a computer Dj in the history H(A). All surrogates preceding Dj are aborted.
2. Then, the surrogate agent Aj on Dj recreates a new incarnation of the agent A. The agent A finds another destination computer Dh.
The surrogate Aj to which the agent A retreats plays the role of a checkpoint [12].
Suppose a surrogate agent B holds an object in a computer Dj. An agent A would like to manipulate the object but conflicts with B in Dj. The surrogate B makes the following decision:
1. Atomic commitment: The agent A waits until the surrogate B finishes.
2. At-least-one commitment: If the surrogate B knows that at least one sibling surrogate of B is committable, B releases the object and aborts after informing the other sibling surrogates of this abort.
3. Majority commitment: If the surrogate B knows that more than half of the sibling surrogates are committable, B releases the object and aborts after informing the other surrogates.
4. r-out-of-n commitment: If the surrogate B knows that at least r sibling surrogate agents are committable, the surrogate B releases the object and aborts.
FAULT-TOLERANT AGENT
We assume that computers may stop by fault and that networks are reliable. A transactional agent is faulty only if the current computer of the agent is faulty. Suppose an agent A finishes manipulating objects on a computer Di. The agent A selects one computer Dj from the map Map(A, Di). The agent A detects by a timeout mechanism that Dj is faulty. The agent A then tries to find another destination computer Dk [Figure 4]. If found, A moves to Dk as presented before. If A cannot find another destination computer in Map(A, Di), the agent A goes back to the preceding computer Dk [Figure 5]. Di is removed from Map(A, Dk). Then, the agent in Dk tries to find another destination computer in Map(A, Dk).

Figure 4: Forward recovery.

Figure 5: Backward recovery.
An agent A leaves its surrogate agent Ai on a computer Di. The surrogate Ai holds objects even after the agent A leaves Di. An agent A and a surrogate agent Ai stop if their current computers are faulty. First, suppose an agent A stops on the current computer Dj, and suppose that the agent A came from Di to Dj. The surrogate Ai on Di detects that the agent A has stopped on Dj. Here, Ai takes one of the following actions:
1. Find a succeeding surrogate Ak of Ai and skip Aj.
2. Recreate a new incarnation of the agent A.
If the commitment condition is not atomic, the surrogate Ai takes the first action, i.e. skips the fault of Aj. For the atomic condition, Ai recreates a new incarnation of the agent A. The agent A takes another destination computer Dk in Map(A, Di). If found, the agent A moves to Dk. Otherwise, A waits until the computer Di is recovered, or goes back to the preceding computer from Dj.
A surrogate Ai on a computer Di may be faulty as well. A preceding surrogate Aj on Dj detects the fault of Ai. Suppose a surrogate agent Ai of A exists on Di. Ai+1 and Ai-1 denote the succeeding and preceding surrogate agents of Ai, respectively [Figure 6]. Ai periodically sends an enquiry message AYL (are you alive) to Ai+1 and Ai-1 to check whether Ai+1 and Ai-1 are alive. On receipt of the AYL message, a surrogate sends back a response message IAL (I am alive). Thus, a faulty surrogate is detected by the succeeding and preceding surrogates with a timeout mechanism.
If Ai detects the stop of Ai+1, Ai does the following:
1. A new incarnation of the agent A is recreated on Di.
2. From the map Map(A, Di), a new destination D different from Di-1 is selected.
3. If such a destination is found, the agent A moves to D. Otherwise, Ai informs Ai-1 of abort and then aborts. Ai-1 then performs the procedure from step 1.

Figure 6: Fault detection (AYL and IAL messages exchanged among Ai-1, Ai, and Ai+1).
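The AYL/IAL exchange is essentially a heartbeat with a timeout. The following sketch mimics that behaviour between neighbouring surrogates; the send and receive callables are placeholders for Aglets messaging and are not the authors' API.

```python
# Sketch of AYL/IAL fault detection between neighbouring surrogates (illustrative).
import time

def probe(neighbor, send, receive, timeout=2.0):
    """Send AYL to a neighbouring surrogate and wait for IAL within the timeout."""
    send(neighbor, "AYL")
    start = time.time()
    while time.time() - start < timeout:
        msg = receive(neighbor, block=False)
        if msg == "IAL":
            return True            # neighbour is alive
        time.sleep(0.1)
    return False                   # no IAL received: neighbour is considered faulty

def monitor(prev, nxt, send, receive, on_fault):
    for neighbor in (prev, nxt):   # check A_{i-1} and A_{i+1}
        if neighbor is not None and not probe(neighbor, send, receive):
            on_fault(neighbor)     # e.g. recreate an incarnation or propagate abort
```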
If the surrogate Ai detects the stop of the preceding surrogate Ai-1, or receives an abort message for Ai-1, Ai informs the succeeding surrogate Ai+1 of abort. On receipt of the abort message from Ai, Ai+1 forwards the abort message to Ai+2 and then aborts. Thus, abort messages are eventually forwarded up to the agent A. In Figure 7, suppose A2 stops. The pair of surrogates A1 and A3 detect the stop of A2. A1 creates a new incarnation A' of the agent A. The obsolete incarnation A is still moving to D6. The succeeding surrogate A3 of A2 sends an abort message to A4. If the abort message catches up with the agent A, A can be aborted. Otherwise, the obsolete incarnation A cannot be stopped. Thus, there might exist multiple incarnations of an agent.

Figure 7: Incarnations of an agent (surrogates A0-A4 on D1-D5, the obsolete agent A moving to D6, and the new incarnation A').
On receipt of an AYL message from the preceding surrogate Ai-1, Ai sends back an IAL message with the address information of the surrogates that Ai knows, so that this information is propagated back to the preceding surrogates. If the surrogate Ai finds Ai-1 to be faulty, Ai sends an abort message not only to Ai+1 but also to the surrogate whose address Ai knows and which is nearest to the current computer of A. By this method, an abort message can more easily catch up with the agent, so that the agent can be aborted.
IMPLEMENTATION
We discuss how to implement transactional agents in Aglets. A transactional agent A is composed of routing, manipulation, and commitment subagents. A routing agent RC(A) with a map Map(A) is transferred from one computer to another. When an agent A, i.e. the routing agent RC(A), arrives at a computer Di, a manipulation agent MC(A, Di) is created by loading the manipulation class.
An object base (OB) is realized in a relational database system, Oracle [11]. A transactional agent manipulates table objects by issuing SQL commands, i.e. select, insert, delete, and update, in the current computer Di. The computation of each agent A on a computer Di is realized as a local transaction on the database system. If the agent A leaves Di, the transaction for A commits or aborts; that is, the objects manipulated by A are released. However, even if the agent A leaves Di, the objects manipulated by A are required to still be held, because A may abort after leaving Di. If the objects are released, the agent is unrecoverable. Therefore, a surrogate agent is created on Di. The surrogate agent is composed of a manipulation agent MC(A, Di) and an object agent OBAi. OBAi behaves as follows:
1. On arrival at a computer Di, the routing agent RC(A) of an agent A initiates a manipulation agent MC(A, Di) and an object agent OBAi on Di, i.e. the MC(A, Di) and OBA classes are loaded. OBAi initiates a transaction on the object base OBi.
2. If MC(A, Di) issues a method for manipulating objects, OBAi issues SQL commands to the database system in Di.
3. If the agent A finishes, A leaves Di. However, OBAi is still operational and keeps holding the objects in Di.
4. OBAi commits or aborts if the agent A sends a commit or abort request to the surrogate Ai, respectively.
An object agent OBAi stays on a computer Di while holding objects even if the agent A leaves Di. OBAi is a local transaction on the object base OBi. On completion of the agent A, OBAi and MC(A, Di) are terminated.

Figure 8: Object agent (OBA): RC(A) and MC(A, Di) interact with OBAi, which issues SQL commands to the object base OBi and supports the XA interface.
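The role of OBAi - opening a local transaction, issuing SQL on behalf of MC(A, Di), and keeping the objects held until the agent's global decision arrives - can be sketched with Python's DB-API; this is an assumption made for illustration only, since the paper's implementation uses JDBC and Oracle.

```python
# Sketch of an object agent OBA_i wrapping a local database transaction (illustrative;
# `connect` stands for any DB-API connection factory, not the authors' JDBC/Oracle code).
class ObjectAgent:
    def __init__(self, connect):
        self.conn = connect()          # a local transaction is started on OB_i
        self.cur = self.conn.cursor()

    def manipulate(self, sql, params=()):
        # Called by MC(A, Di); locks acquired here are held after the agent leaves Di.
        self.cur.execute(sql, params)
        return self.cur.fetchall() if self.cur.description else None

    def commit(self):                  # on a commit request from the agent A
        self.conn.commit()
        self.conn.close()

    def abort(self):                   # on an abort request from the agent A
        self.conn.rollback()
        self.conn.close()
```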
An OBA class can be loaded to a computer with any type of database system. If a transactional agent comes to Di from another home computer, an OBA class is loaded to Di from that home computer. Thus, OBA instances accumulate in the cache. In order to resolve this problem, an OBA class is loaded as follows:
1. If the OBA class is not cached in the current computer, the OBA class is loaded from home(OBA).
2. If the OBA class cannot be loaded from home(OBA), the OBA class in the home computer of the agent is loaded to the computer.
The routing agent RC(A) leaves a computer Di once the manipulation agent MC(A, Di) finishes manipulating objects. MC(A, Di) recreates a new incarnation of the routing agent RC(A) if the agent A stops due to a computer fault.
A transactional agent A can commit if all or some of the surrogates commit, depending on the commitment condition. For example, under the atomic condition a transactional agent commits only if all the surrogate agents are successfully performed. Communication among an agent and its surrogate agents is realized by using the XA interface [20], which supports the two-phase commitment protocol [15] [Figure 8]. Each surrogate agent issues a prepare request to its computer on receipt of a prepare message from A. If prepare is successfully performed, the surrogate agent sends a prepared message to A; here, the surrogate agent is committable. Otherwise, the surrogate agent aborts after sending aborted to A. The agent A receives responses from the surrogate agents after sending prepare to the surrogates. On receipt of the responses from the surrogate agents, the agent A makes a decision on commit or abort based on the commitment condition. For example, if the atomic condition holds, A sends commit only if prepared is received from every surrogate. The agent A sends abort to all committable agents if an aborted message is received from at least one surrogate. On receipt of abort, a committable surrogate aborts. In the at-least-one commitment condition, A sends commit to all committable surrogates if prepared is received from at least one surrogate.
Next, we discuss how to support robustness against faults of computers. Suppose a surrogate agent Ai of a transactional agent A stops after sending prepared; here, Ai is committable. On recovery of the committable surrogate Ai, Ai unilaterally commits if the at-least-one commitment condition is used. In the atomic condition, Ai asks the other surrogates whether they have committed. Suppose Ai is abortable, i.e. it became faulty before sending prepared. On recovery, Ai unilaterally aborts.
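The agent's decision phase - collecting prepared/aborted responses from the surrogates and applying the commitment condition - can be summarized as follows. This sketches only the decision logic, not the XA interface itself.

```python
# Sketch of the agent's commit/abort decision from surrogate votes (illustrative).
def decide(votes, condition, r=None):
    """votes: dict surrogate -> 'prepared' or 'aborted'."""
    prepared = [s for s, v in votes.items() if v == "prepared"]
    n = len(votes)
    if condition == "atomic":
        ok = len(prepared) == n
    elif condition == "majority":
        ok = len(prepared) > n / 2
    elif condition == "at-least-one":
        ok = len(prepared) >= 1
    elif condition == "r-out-of-n":
        ok = len(prepared) >= r
    else:
        raise ValueError(condition)
    # commit is sent only to committable (prepared) surrogates; the others have aborted.
    return ("commit", prepared) if ok else ("abort", prepared)
```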
EVALUATION

Figure 9: Evaluation model (a client creates the transactional agent by loading classes from the home computer; the routing agent moves across the database servers D1, D2, and D3, where manipulation agents M1, M2, and M3 and object agents manipulate the object bases and return the results).
We evaluate the transactional agent implemented in Aglets. In the evaluation, there are three server computers D1, D2, and D3. A transactional agent is created in a computer C by loading classes from the home computer h. The servers D1, D2, and D3 are realized on personal computers (Pentium 3) with Oracle database systems, which are interconnected by a 1 Gbps Ethernet.
First, a transactional agent A is initiated in the base computer C. The agent A decides in which order D1, D2, and D3 are to be visited. Here, the agent A visits D1, D2, and D3 in this order, as shown in Figure 9. On arrival of the agent A at Di, the manipulation agent MC(A, Di) and the object agent OBAi are loaded to Di [Figure 9].
We consider the following types of transactional agents:
A. The manipulation agent MC(A, D1) derives an intermediate object I from the object base. The object bases in D2 and D3 are updated by using the object I, i.e. the objects in I are added to the object bases.
B. MC(A, D1) and MC(A, D2) derive intermediate objects I1 and I2, respectively. Then, the object base in D3 is manipulated by using I1 and I2.
There are three ways to deliver the intermediate objects derived in one computer to another computer:
1. The transactional agent A carries the intermediate objects from Di to a destination computer Dj.
2. After the agent A arrives at a computer Dj, the agent A requests Di to send the intermediate objects.
3. The agent A transfers the intermediate object I to a computer Dj before leaving Di.
The total response time of a transactional agent is measured against the number of intermediate objects, i.e. the number of tuples derived in the computers. Figures 10 and 11 show the response time for transactional agents of types A and B, respectively. The second and third ways of delivering intermediate objects to destination computers give shorter response times than the first way.

Figure 10: Response time (type A).

Figure 11: Response time (type B).
CONCLUDING REMARKS
The authors discussed a transactional agent model to manipulate objects in multiple computers with various types of commitment constraints in the presence of computer faults. A transactional agent autonomously finds a destination computer, moves to the computer, and then locally manipulates objects. We discussed how to implement transactional agents in Aglets and Oracle. We evaluated the transactional agent in terms of response time.
REFERENCES
[1] American National Standards Institute. The Database Language SQL, 1986.
[2] P. A. Bernstein, V. Hadzilacos, and N. Goodman. Concurrency Control and Recovery in Database Systems. Addison-Wesley, 1987.
[3] L. Gong. JXTA: A Network Programming Environment. IEEE Internet Computing, pages 88-95, 2001.
[4] J. Gray and A. Reuter. Transaction Processing: Concepts and Techniques. Morgan Kaufmann, 1993.
[5] IBM Corporation. Aglets Software Development Kit Home. http://www.trl.ibm.com/aglets/.
[6] T. Komiya, T. Enokido, and M. Takizawa. Mobile agent model for transaction processing on distributed objects. Information Sciences, 154:23-38, 2003.
[7] F. H. Korth. Locking primitives in a database system. Journal of ACM, 30(1):55-79, 1989.
[8] N. A. Lynch, M. Merritt, A. F. W. Weihl, and R. R. Yager. Atomic Transactions. Morgan Kaufmann, 1994.
[9] K. Nagi. Transactional Agents: Towards a Robust Multi-Agent System. Springer-Verlag, 2001.
[10] A. Omicini, F. Zambonelli, M. Klusch, and R. Tolksdorf. Coordination of Internet Agents. Springer-Verlag, 2001.
[11] Oracle Corporation. Oracle8i Concepts Vol. 1 Release 8.1.5, 1999.
[12] R. S. Pamula and P. K. Srimani. Checkpointing strategies for database systems. Proc. of the 15th Annual Conf. on Computer Science, IEEE Computer Society, pages 88-97, 1987.
[13] S. Pleisch. State of the Art of Mobile Agent Computing - Security, Fault Tolerance, and Transaction Support. IBM Corporation, 1999.
[14] M. Shiraishi, T. Enokido, and M. Takizawa. Fault-tolerant mobile agent in distributed objects systems. Proc. of the 9th IEEE International Workshop on Future Trends of Distributed Computing Systems (FTDCS 2003), pages 145-151, 2003.
[15] D. Skeen. Nonblocking commitment protocols. Proc. of ACM SIGMOD, pages 133-147, 1982.
[16] A. D. Stefano, L. L. Bello, and C. Santoro. A distributed heterogeneous database system based on mobile agents. Proc. of the 7th Workshop on Enabling Technologies (WETICE'98), IEEE Computer Society, pages 223-229, 1998.
[17] Sun Microsystems Inc. JDBC Data Access API. http://java.sun.com/products/jdbc/.
[18] Sun Microsystems Inc. The Source for Java (TM) Technology. http://java.sun.com/.
[19] J. E. White. Telescript Technology: The Foundation for the Electronic Marketplace. General Magic Inc., 1994.
[20] X/Open Company Ltd. X/Open CAE Specification - Distributed Transaction Processing: The XA Specification, 1991.
| fault-tolerant agent;transactional agent;Transaction;ACID transaction;surrogate agent;Mobile agent;Fault-Tolerant;fault-tolerant;computer fault;mobile agent;transaction processing |
201 | Translating Unknown Cross-Lingual Queries in Digital Libraries Using a Web-based Approach | Users' cross-lingual queries to a digital library system might be short and not included in a common translation dictionary (unknown terms). In this paper, we investigate the feasibility of exploiting the Web as the corpus source to translate unknown query terms for cross-language information retrieval (CLIR) in digital libraries. We propose a Web-based term translation approach to determine effective translations for unknown query terms by mining bilingual search-result pages obtained from a real Web search engine. This approach can enhance the construction of a domain-specific bilingual lexicon and benefit CLIR services in a digital library that only has monolingual document collections. Very promising results have been obtained in generating effective translation equivalents for many unknown terms, including proper nouns, technical terms and Web query terms. | INTRODUCTION
With the development of digital library technologies, large
amounts of library content and cultural heritage material are being
digitized all over the world. As digital library systems become
commonly constructed and digitized content becomes widely
accessible on the Web, digital libraries that cross language and
regional boundaries will be in increasingly high demand globally.
Unfortunately, most of existing digital library systems only
provide monolingual content and search support in certain target
languages. To facilitate a cross-language information retrieval
(CLIR) service in digital library systems, it is important to
develop a powerful query translation engine. This must be able to
automatically translate users' queries from multiple source
languages to the target languages that the systems accept.
Conventional approaches to CLIR incorporate parallel texts [16]
as the corpus. These texts contain bilingual sentences, from which
word or phrase translations can be extracted with appropriate
sentence alignment methods [7]. The basic assumption of such an
approach is that queries may be long so query expansion methods
can be used to enrich query terms not covered in parallel texts.
However, this approach presents some fundamental difficulties for
digital libraries that wish to support practical CLIR services. First,
since most existing digital libraries contain only monolingual text
collections, there is no bilingual corpus for cross-lingual training.
Second, real queries are often short, diverse and dynamic so that
only a subset of translations can be extracted through the corpora
in limited domains. How to efficiently construct a domain-specific
translation dictionary for each text collection has become a major
challenge for practical CLIR services in digital libraries. In this
paper, we propose a Web-based approach to deal with this
problem. We intend to exploit the Web as the corpus to find
effective translations automatically for query terms not included
in a dictionary (unknown terms). Besides, to speedup online
translation process of unknown terms, we extract possible key
terms from the document set in digital libraries and try to obtain
their translations in advance.
For some language pairs, such as Chinese and English, as well as
Japanese and English, the Web offers rich texts in a mixture of
languages. Many of them contain bilingual translations of proper
nouns, such as company names and personal names. We want to
realize if this positive characteristic makes it possible to
automatically extract bilingual translations of a large number of
query terms. Real search engines, such as Google
and AltaVista
,
allow us to search English terms for pages in a certain language,
e.g. Chinese or Japanese. This has motivated us to develop the
proposed approach for mining bilingual search-result pages,
which are normally returned in a long, ordered list of snippets of
summaries to help users locate interesting documents. The
proposed approach uses the bilingual search-result pages of
unknown queries as the corpus for extracting translations by
utilizing the following useful techniques: (1) Term extraction
methods that extract translation candidates with correct lexical
boundaries. (2) Term translation methods that determine correct
translations based on co-occurrence and context similarity
analysis.
Several preliminary experiments have been conducted to test the
performance of the proposed approach. For example, very
promising translation accuracy has been obtained in generating
effective translation equivalents for many unknown terms,
including proper nouns, technical terms and Web query terms.
Also, it has been shown that the approach can enhance bilingual
lexicon construction in a very efficient manner and thereby benefit
CLIR services in digital libraries that only have monolingual
document collections. In Section 2 of this paper, we examine the
possibility of using search-result pages for term translation. The
technical details of the proposed approach, including the term
extraction and term translation methods, are presented with some
experiments in Sections 3 and 4 respectively. An application of
the proposed approach to bilingual lexicon construction is
described in Section 5. Finally, in Section 6 we list our
conclusions.
OBSERVATIONS AND THE PROPOSED APPROACH
A large number of Web pages contain a mixture of multiple
languages. For example, Chinese pages on the Web consist of rich
texts in a mixture of Chinese (main language) and English
(auxiliary language), many of which contain translations of proper
nouns and foreign terms. In fact, in the Chinese writing style, the
first time a foreign term appears in the text, we might also write
its original word, e.g., "" (Yahoo). In our research, we are
seeking to determine if the percentage of correct translations for
real queries is high enough in the top search-result pages. If this is
the case, search-result-based methods can be useful in alleviating
the difficulty of term translation. According to our observations,
many query terms are very likely to appear simultaneously with
their translations in search-result pages. Figure 1 illustrates the
search-result page of the English query "National Palace
Museum", which was submitted to Google to search Chinese
pages. Many relevant results were obtained, including both the
query itself and its Chinese aliases, such as ""
(National Palace Museum), "" (an abbreviation of National
Palace Museum) and "" (Palace Museum), which
might not be covered in general-purpose translation dictionaries.
Figure 1. An illustration showing translation equivalents, such
as National Palace Museum/"" (""),
which co-occur in search results returned from Google.
Although search-result pages might contain translations, the
difficulties in developing a high-performance search-result-based
term translation approach still remain. For example, it is not
straightforward to extract translation candidates with correct
lexical boundaries and minimum noisy terms from a text. It is also
challenging to find correct translations for each unknown term
within an acceptable number of search-result pages and an
acceptable amount of network access time. To deal with these
problems, the proposed approach contains three major modules:
search-result collection, term extraction and term translation, as
shown in Figure 2 (a). In the search-result collection module, a
given source query (unknown term) is submitted to a real-world
search engine to collect top search-result pages. In the term
extraction module, translation candidates are extracted from the
collected search-result pages using the term extraction method.
Finally, the term translation module is used to determine the most
promising translations based on the similarity estimation between
source queries and target translations.
In fact there are two scenarios to which the proposed approach can
be applied. Except online translation of unknown queries, another
application is offline translation of key terms as in Figure 2 (b).
To reduce unnecessary online translation processes, the proposed
approach can be used to augment the bilingual lexicon via
translating key terms extracted from the document set in a digital
library. These extracted key terms are likely to be similar to terms
that users may use in real user queries. The proposed approach
can be applied to those unknown key terms to obtain their
translations with an offline batch process (the extracted
translations might be edited by indexers). Furthermore, the
constructed bilingual lexicon can be incrementally updated with
the input of unknown queries from users and the performing of
online translation processes. To facilitate the above scenarios the
proposed term extraction and term translation techniques are
required, which will be further described in the following sections.
Figure 2. (a) An abstract diagram showing the concept of the
proposed approach for translating an unknown query. (b)
Two application scenarios of the proposed Web-based term
translation approach: online translation of unknown queries
and offline translation of key terms extracted from the
document set.
TERM EXTRACTION
The first challenge of the proposed approach is: how to efficiently
and effectively extract translation candidates for an unknown
source term from a set of search-result pages. Other challenging
issues include: whether all possible translations can be extracted
and whether their lexical boundaries can be correctly segmented.
Conventionally, there are two types of term extraction methods
that can be employed. The first is the language-dependent
linguistics-based method that relies on lexical analysis, word
segmentation and syntactic analysis to extract named entities from
documents. The second type is the language-independent
statistics-based method that extracts significant lexical patterns
without length limitation, such as the local maxima method [19]
and the PAT-tree-based method [3]. Considering the diverse
applications in digital library and Web environments, we have
adopted the second approach. Our proposed term extraction
method, i.e., the PAT-tree-based local maxima method, is a hybrid
of the local maxima method [19] and the PAT-tree-based method
[3], which has been found more efficient and effective. First, we
construct a PAT tree data structure for the corpus, in this case, a
set of search-result pages retrieved using the source term as query.
(The same term extraction method will be applied to extract key
terms from digital libraries in Section 5 where the corpus is the
documents in digital libraries). By utilizing the PAT tree, we can
efficiently calculate the association measurement of every
character or word n-gram in the corpus and apply the local
maxima algorithm to extract the terms. The association
measurement is determined not only by the symmetric conditional
probability [19] but also by the context independency ratio [3] of
the n-gram. We detail the proposed method in the following
subsections.
3.1 Association Measurement
The proposed association measurement, called SCPCD, combines the symmetric conditional probability (SCP) [19] with the concept of context dependency (CD) [3]. SCP estimates the association of an n-gram based on the correlation between its composed sub-n-grams, and is defined as follows:

$$SCP(w_1 \ldots w_n) = \frac{p(w_1 \ldots w_n)^2}{\frac{1}{n-1}\sum_{i=1}^{n-1} p(w_1 \ldots w_i)\,p(w_{i+1} \ldots w_n)} = \frac{freq(w_1 \ldots w_n)^2}{\frac{1}{n-1}\sum_{i=1}^{n-1} freq(w_1 \ldots w_i)\,freq(w_{i+1} \ldots w_n)} \quad (1)$$

where w_1...w_n is the n-gram to be estimated, p(w_1...w_n) is the probability of occurrence of the n-gram w_1...w_n, and freq(w_1...w_n) is the frequency of the n-gram.
To a certain degree, SCP can measure the cohesion holding the words together within a word n-gram, but it cannot determine the lexical boundaries of the n-gram. An n-gram with complete lexical boundaries implies that it tends to have free association with other n-grams appearing in the same context. Therefore, to further ensure that an n-gram has complete lexical boundaries, the concept of context dependency is introduced. Moreover, we consolidate this concept with SCP to form one association measurement. In order to achieve this goal, a refined measure, the context independency ratio - a ratio value between 0 and 1 - is extended from [3]. It is defined as follows:

$$CD(w_1 \ldots w_n) = \frac{LC(w_1 \ldots w_n) \times RC(w_1 \ldots w_n)}{freq(w_1 \ldots w_n)^2} \quad (2)$$

where LC(w_1...w_n) is the number of unique left adjacent words (in Western languages) or characters (in Oriental languages) for the n-gram in the corpus, or is equal to the frequency of the n-gram if there is no left adjacent word/character. Similarly, RC(w_1...w_n) is the number of unique right adjacent words/characters for the n-gram, or is equal to the frequency of the n-gram if there is no right adjacent word/character. Using this ratio we are able to judge whether the appearance of an n-gram is dependent on a certain string containing it. For example, if w_1...w_n is always a substring of a string x w_1...w_n y in the corpus, then CD(w_1...w_n) is close to 0.
Combining formulae (1) and (2), the proposed association measure SCPCD is as follows:

$$SCPCD(w_1 \ldots w_n) = SCP(w_1 \ldots w_n) \times CD(w_1 \ldots w_n) = \frac{LC(w_1 \ldots w_n) \times RC(w_1 \ldots w_n)}{\frac{1}{n-1}\sum_{i=1}^{n-1} freq(w_1 \ldots w_i)\,freq(w_{i+1} \ldots w_n)} \quad (3)$$

Note that the difference between the formulae of SCPCD and SCP lies in their numerator terms. With SCP, n-grams with low frequency tend to be discarded, which is prevented in the case of SCPCD. The proposed new measure identifies a highly cohesive term based on the frequencies of its substrings and the number of its unique left and right adjacent words/characters.
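The SCPCD score can be computed directly from n-gram statistics. The sketch below assumes that the frequency and left/right-context counts have already been collected (for example from a PAT tree or a plain dictionary of n-grams); it illustrates formula (3) and is not the authors' PAT-tree code.

```python
# Sketch of the SCPCD association measure of formula (3) (illustrative).
def scpcd(ngram, freq, left_ctx, right_ctx):
    """ngram: tuple of words/characters; freq: dict ngram -> count;
    left_ctx/right_ctx: dict ngram -> number of unique adjacent words/characters."""
    n = len(ngram)
    f = freq.get(ngram, 0)
    if n < 2 or f == 0:
        return 0.0
    lc = left_ctx.get(ngram, f) or f      # fall back to freq when there is no adjacent context
    rc = right_ctx.get(ngram, f) or f
    # Average of freq(w1..wi) * freq(w_{i+1}..wn) over all binary splits of the n-gram.
    avg = sum(freq.get(ngram[:i], 0) * freq.get(ngram[i:], 0)
              for i in range(1, n)) / (n - 1)
    return (lc * rc) / avg if avg else 0.0
```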
3.2 Local Maxima Algorithm
The local maxima algorithm, called LocalMaxs in [18], is based on the idea that each n-gram has a kind of cohesion that holds the words together within the n-gram. It is a heuristic algorithm that is combined with the previous association measurements to extract n-grams, which are supposed to be key terms, from the text. Different n-grams usually have different cohesion values. Given that:
An antecedent (in size) of the n-gram w_1 w_2...w_n, ant(w_1...w_n), is a sub-n-gram of the n-gram w_1...w_n having size n-1, i.e., the (n-1)-gram w_1...w_{n-1} or w_2...w_n.
A successor (in size) of the n-gram w_1 w_2...w_n, succ(w_1...w_n), is an (n+1)-gram N such that the n-gram w_1...w_n is an ant(N), i.e., succ(w_1...w_n) contains the n-gram w_1...w_n and an additional word before (on the left) or after (on the right) it.
The local maxima algorithm extracts each term whose cohesion, i.e. association measure, is a local maximum: that is, each term whose association measure is greater than or equal to the association measures of its antecedents and greater than the association measures of its successors.
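Given an association score for every n-gram, the local-maxima selection itself is a simple filter: keep an n-gram if its score is no smaller than those of its two antecedents and strictly larger than those of all its successors. A sketch, assuming a precomputed score table:

```python
# Sketch of the LocalMaxs selection over precomputed association scores (illustrative).
def local_maxima(score):
    """score: dict mapping n-gram tuples (length >= 2) to association values."""
    selected = []
    for g, s in score.items():
        ants = [g[:-1], g[1:]]                                   # antecedents (size n-1)
        succs = [h for h in score if len(h) == len(g) + 1 and
                 (h[:-1] == g or h[1:] == g)]                    # successors (size n+1)
        if all(s >= score.get(a, 0.0) for a in ants if len(a) >= 2) and \
           all(s > score[h] for h in succs):
            selected.append(g)
    return selected
```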
3.3 The PAT-Tree-Based Local Maxima Algorithm
Despite the usefulness of the local maxima algorithm, without a suitable data structure the time complexity of the algorithm is high. The main time complexity problems occur in two areas: one is calculating the context independency ratio (CD) for each unique n-gram in the corpus, and the other is finding the successors of an n-gram. The two problems can be treated as one, i.e. finding the successors of an n-gram. An intuitive way to do this is to find all (n+1)-grams and then compare the n-gram with them sequentially to see whether they are its successors. As this is time-consuming, we instead use the PAT tree, a more efficient data structure. It was developed by Gonnet [8] from Morrison's PATRICIA algorithm (Practical Algorithm to Retrieve Information Coded in Alphanumeric) [15] for indexing a continuous data stream and locating every possible position of a prefix in the stream. The PAT tree structure is conceptually equivalent to a compressed digital search tree, but smaller. The superior feature of this structure mostly results from its use of semi-infinite strings [14] to store the substream values in the nodes of the tree. This also makes it easier and more efficient to find the successors of an n-gram. More details on the PAT tree can be found in [3].
By utilizing the constructed PAT tree as the corpus index, we can efficiently retrieve all n-grams from the corpus, obtain their frequencies and context dependency values, and then calculate the association measure, SCPCD, for all of them.
3.4 Experiments on Term Extraction
To determine the effectiveness of the proposed association measure SCPCD and the efficiency of the PAT-tree data structure, we conducted several experiments on Web search-result pages using the proposed PAT-tree-based local maxima algorithm.
First, to test whether SCPCD can perform better than SCP and CD, we randomly selected 50 real queries in English from a Chinese search engine called Openfind (http://www.openfind.com/). We then submitted each of them
to Google to search Chinese result pages. Most of these query
terms such as proper nouns and technical terms were not covered
in the common translation dictionary. After using the term
extraction method, the top 30 extracted
Chinese translation
candidates were examined and the extraction accuracy of each
candidate to the source query was manually determined. We
applied this test mainly to determine whether the SCPCD
measurement can extract more relevant translation candidates and
segment them with correct lexical boundaries. A translation
candidate was taken as correctly extracted only if it was correctly
segmented and contained meanings relevant to the source term. A
relevant translation candidate was not necessarily a correct
translation. The whole relevant set was determined by examining
the terms extracted by all of the test methods, e.g., CD, SCP, and
SCPCD. Table 1 clearly shows that the method based on the
SCPCD measurement achieves the best performance.
Table 1. The obtained extraction accuracy, including precision, recall, and average recall-precision of auto-extracted translation candidates, using different methods.
Association Measure | Precision | Recall | Avg. R-P
CD | 68.1% | 5.9% | 37.0%
SCP | 62.6% | 63.3% | 63.0%
SCPCD | 79.3% | 78.2% | 78.7%
In order to determine the efficiency of the PAT-tree data structure,
we compared the speed performance of the local maxima method
and the PAT-tree-based local maxima method. As Table 2 shows,
the PAT-tree data structure is more efficient in term extraction.
Although the PAT-tree construction phase took a little more time in a small corpus, in a real-world case for a large corpus - where 1,367 and 5,357 scientific documents were tested (refer to Section 5.2 for the details) - the PAT-tree-based local maxima method performed much better than the local maxima method.
Table 2. The obtained average speed performance of different term extraction methods.
Term Extraction Method | Time for Preprocessing | Time for Extraction
LocalMaxs (Web queries) | 0.87 s | 0.99 s
PATtree+LocalMaxs (Web queries) | 2.30 s | 0.61 s
LocalMaxs (1,367 docs) | 63.47 s | 4,851.67 s
PATtree+LocalMaxs (1,367 docs) | 840.90 s | 71.24 s
LocalMaxs (5,357 docs) | 47,247.55 s | 350,495.65 s
PATtree+LocalMaxs (5,357 docs) | 11,086.67 s | 759.32 s
TERM TRANSLATION
In the term translation module, we utilize the co-occurrence
relation and the context information between source queries and
target translations to estimate their semantic similarity and
determine the most promising translations. Several similarity
estimation methods were investigated based on co-occurrence
analysis. These included mutual information, DICE coefficient,
and statistical tests including the chi-square test and the
log-likelihood ratio test [17, 20], where the chi-square test and the
context vector analysis achieved the best performance. These will
be introduced below.
4.1 The Chi-Square Test
The chi-square test (χ²) was adopted as the major method of co-occurrence analysis in our study. One major reason is that the required parameters for the chi-square test can be effectively computed using the search-result pages, which alleviates the data sparseness problem. It also makes good use of all co-occurrence relations between the source and target terms, especially the information that they do not co-occur. For a source term s and a target term t, the conventional chi-square test can be transformed into the similarity measure defined below [6]:

$$S_{\chi^2}(s,t) = \frac{N \times (a \times d - b \times c)^2}{(a+b)\,(a+c)\,(b+d)\,(c+d)} \quad (4)$$

where
a: the number of pages containing both terms s and t;
b: the number of pages containing term s but not t;
c: the number of pages containing term t but not s;
d: the number of pages containing neither term s nor t;
N: the total number of pages, i.e., N = a+b+c+d.
Since most search engines accept Boolean queries and can report the number of pages matched, the required parameters for the chi-square test can be obtained by submitting Boolean queries such as `s AND t', `~s AND t', and `s AND ~t' to search engines and utilizing the returned page counts. On the other hand, it is easy to get the number N using some search engines (e.g., Google), which indicate the total number of their collected Web pages. The number d may not be directly available from the search engine, but it can be calculated using the formula N = a+b+c+d, i.e., d = N-a-b-c.
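The chi-square similarity of formula (4) needs only the four page counts a, b, c, and d. A sketch follows, where page_count is a hypothetical wrapper around a real search engine's hit counts, not an existing API:

```python
# Sketch of the chi-square similarity from search-engine page counts (illustrative;
# `page_count` is a hypothetical wrapper around a search engine's reported hit counts).
def chi_square_similarity(s, t, page_count, total_pages):
    a = page_count(f'{s} AND {t}')          # pages containing both s and t
    b = page_count(f'{s} AND NOT {t}')      # pages with s but not t
    c = page_count(f'{t} AND NOT {s}')      # pages with t but not s
    d = total_pages - a - b - c             # pages with neither, via N = a+b+c+d
    num = total_pages * (a * d - b * c) ** 2
    den = (a + b) * (a + c) * (b + d) * (c + d)
    return num / den if den else 0.0
```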
4.2
Context Vector Analysis
Co-occurrence analysis is applicable to higher frequency terms
since they are more likely to appear with their translation
candidates. On the other hand, lower frequency terms have little
chance of appearing with candidates on the same pages. The
context vector method (CV) is therefore adopted to deal with this
problem. As translation equivalents may share similar terms, for
each query term, we take the co-occurring feature terms as the
feature vector. The similarity between query terms and
translation candidates can be computed based on their feature
vectors. Thus, lower frequency query terms still have a chance
to extract correct translations.
The context vector-based method has been used to extract
translations from comparable corpora, such as the use of Fung et
al.'s seed word [5]. In our method, real users' popular query terms
are used as the feature set, which should help to avoid many
inappropriate feature terms. Like Fung et al.'s vector space model,
we also use the TF-IDF weighting scheme to estimate the
significance of context features. This is defined as follows:
$$w_{t_i} = \frac{f(t_i, d)}{\max_j f(t_j, d)} \times \log\left(\frac{N}{n}\right) \quad (5)$$

where f(t_i, d) is the frequency of term t_i in search-result page d, N is the total number of Web pages in the collection of the search engines, and n is the number of pages containing t_i. Given the context vectors of a source query term and each target translation candidate, their similarity is estimated with the cosine measure as follows:

$$S_{cv}(s,t) = \frac{\sum_{i=1}^{m} w_{s_i} \times w_{t_i}}{\sqrt{\sum_{i=1}^{m} w_{s_i}^2} \times \sqrt{\sum_{i=1}^{m} w_{t_i}^2}} \quad (6)$$
It is not difficult to construct context vectors for source query
terms and their translation candidates. For a source query term, we
can use a fixed number of the top search results to extract
translation candidates. The co-occurring feature terms of each
query can also be extracted, and their weights calculated, which
together form the context vector of the query. The same procedure
is used to construct a context vector for each translation candidate.
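Building and comparing the context vectors amounts to TF-IDF weighting (formula 5) followed by a cosine measure (formula 6). A compact sketch, assuming the feature terms and their frequencies in the search-result pages are already available:

```python
# Sketch of context-vector construction (TF-IDF) and cosine similarity (illustrative).
import math

def context_vector(term_freqs, doc_freqs, total_pages):
    """term_freqs: feature term -> frequency in the query's search-result pages;
    doc_freqs: feature term -> number of pages containing it (for the IDF part)."""
    max_f = max(term_freqs.values())
    return {t: (f / max_f) * math.log(total_pages / max(doc_freqs.get(t, 1), 1))
            for t, f in term_freqs.items()}

def cosine(u, v):
    dot = sum(u[t] * v[t] for t in u if t in v)
    norm = math.sqrt(sum(x * x for x in u.values())) * \
           math.sqrt(sum(x * x for x in v.values()))
    return dot / norm if norm else 0.0
```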
4.3
The Combined Method
Benefiting from real-world search engines, the search-result-based
method using the chi-square test can reduce the work of corpus
collection, but has difficulty in dealing with low-frequency query
terms. Although context vector analysis can deal with difficulties
encountered by the chi-square test, it is not difficult to see that the
feature selection issue needs to be carefully handled. Intuitively, a
more complete solution is to integrate the above two methods.
Considering the various ranges of similarity values in the two
methods, we use a linear combination weighting scheme to
compute the similarity measure as follows:
$$S_{all}(s,t) = \sum_{m} \alpha_m \times \frac{1}{R_m(s,t)} \quad (7)$$

where α_m is an assigned weight for each similarity measure S_m, and R_m(s,t) - which represents the similarity ranking of each target candidate t with respect to the source term s - is assigned to be from 1 to k (the number of candidates) in decreasing order of the similarity measure S_m(s,t).
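The combination step only needs each method's ranking of the candidates. A sketch of formula (7) follows; the weights are assumptions to be tuned rather than values from the paper:

```python
# Sketch of the rank-based linear combination of formula (7) (illustrative weights).
def combined_score(candidate, rankings, weights):
    """rankings: method name -> dict candidate -> rank (1 = most similar);
    weights: method name -> weight alpha_m (assumed, e.g. {'cv': 0.5, 'chi2': 0.5})."""
    return sum(weights[m] / rankings[m][candidate] for m in rankings)

def best_translations(candidates, rankings, weights, k=5):
    return sorted(candidates,
                  key=lambda c: combined_score(c, rankings, weights),
                  reverse=True)[:k]
```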
4.4
Experiments on Term Translation
4.4.1
The Test Bed
To determine the effectiveness of the proposed approach, we
conducted several experiments to extract translation pairs for
Chinese and English terms in different domains.
Web Queries:
We collected query terms and the logs from two
real-world Chinese search engines in Taiwan, i.e., Dreamer and
GAIS. The Dreamer log contained 228,566 unique query terms for
a period of over 3 months in 1998, while the GAIS log contained
114,182 unique query terms for a period of two weeks in 1999.
We prepared two different test query sets based on these logs. The
first, called the popular-query set, contained a set of 430 frequent
Chinese queries in the logs. These queries were obtained from the
Chinese translations of 1,230 English terms out of the most
popular 9,709 query terms (with frequencies above 10 in both
logs), which co-occurred with their English counterparts in the
logs. The popular-query set was further divided into two types:
type Dic (the terms covered in the dictionary), consisting of about
36% (156/430) of the test queries and type OOV (out of
vocabulary; the terms not in the dictionary), consisting of about
64% (274/430) of the test queries.
The second set, called the random-query set, contained 200
Chinese query terms, which were randomly selected from the top
20,000 queries in the Dreamer log, where 165 (about 82.5%) were
not included in general-purpose translation dictionaries.
Proper Names and Technical Terms:
To further investigate
the translation effectiveness for proper names and technical terms,
we prepared two other query sets containing 50 scientists' names
and 50 disease names in English. These were randomly selected
from the 256 scientists (Science/People) and 664 diseases
(Health/Diseases and Conditions) in the Yahoo! Directory. It
should be noted that 76% (38/50) of the scientists' names and
72% (36/50) of the disease names were not included in the
general-purpose translation dictionary, which contained 202,974
entries collected from the Internet.
To evaluate the search-result-based methods, we obtained
search-result pages of the source query terms by submitting them
to real-world Chinese search engines, such as Google Chinese and
Openfind. Basically, we used only the first 100 retrieved results
(snippets) to extract translation candidates. The context vector of
each source query and the required parameters (page counts) for
the chi-square test were also extracted from the retrieved
search-result pages.
To evaluate the performance of translation extraction, we used the
average top-n inclusion rate as a metric. For a set of test queries,
the top-n inclusion rate was defined as the percentage of queries
whose translations could be found in the first n extracted
translations. Also, we wished to know if the coverage rate of
translations, i.e. the percentage of queries whose translations
could be found in the whole extracted candidate set, was high
enough in the top search-result pages for real queries.
4.4.2 Performance
Web Queries
We carried out experiments to determine the performance of the proposed approach by extracting translations for the popular-query set. Tables 3 and 4 show the results in terms of top 1-5 inclusion rates and coverage rates for Chinese and English queries, respectively. In these tables, "CV", "χ²" and "Combined" represent the context-vector analysis, the chi-square test, and the combined method, respectively. In addition, "Dic", "OOV" and "All" represent the terms covered in a dictionary, the terms not in a dictionary, and the total test query set, respectively. The coverage rates we obtained were promising, which shows that the Web contains rich mixed texts in both languages. The performance of the English query set was not as good as that of the Chinese query set. The reason for this was that the English queries suffered from more noise in Chinese translation candidates, since the search-result pages in the Chinese Web generally contain much more Chinese than English content. We also conducted an experiment for random queries. As Table 5 shows, the coverage rates were encouraging.
Proper Names, Technical Terms and Common Terms
To further determine the effectiveness of the proposed approach in dealing with the translation of proper names and technical terms, we conducted an experiment on the test sets of scientists' names and medical terms using the combined method. As the results in Table 6 show, the top-1 inclusion rates for the scientists' and disease names were 40% and 44%, respectively. Some examples of the extracted correct translations are shown in Table 7. Although the achieved performance for real queries looked promising, we wished to know whether the approach was equally effective for common terms. We randomly selected 100 common nouns and 100 common verbs from a general-purpose Chinese dictionary. Table 8 shows the results obtained using the combined method.
Table 3. Coverage and inclusion rates for popular Chinese queries using different methods.
Method | Query Type | Top-1 | Top-3 | Top-5 | Coverage
CV | Dic | 56.4% | 70.5% | 74.4% | 80.1%
CV | OOV | 56.2% | 66.1% | 69.3% | 85.0%
CV | All | 56.3% | 67.7% | 71.2% | 83.3%
χ² | Dic | 40.4% | 61.5% | 67.9% | 80.1%
χ² | OOV | 54.7% | 65.0% | 68.2% | 85.0%
χ² | All | 49.5% | 63.7% | 68.1% | 83.3%
Combined | Dic | 57.7% | 71.2% | 75.0% | 80.1%
Combined | OOV | 56.6% | 67.9% | 70.9% | 85.0%
Combined | All | 57.2% | 68.6% | 72.8% | 83.3%

Table 4. Coverage and inclusion rates for popular English queries using different methods.
Method | Top-1 | Top-3 | Top-5 | Coverage
CV | 50.9% | 60.1% | 60.8% | 80.9%
χ² | 44.6% | 56.1% | 59.2% | 80.9%
Combined | 51.8% | 60.7% | 62.2% | 80.9%

Table 5. Coverage and inclusion rates for random queries using the different methods.
Method | Top-1 | Top-3 | Top-5 | Coverage
CV | 25.5% | 45.5% | 50.5% | 60.5%
χ² | 26.0% | 44.5% | 50.5% | 60.5%
Combined | 29.5% | 49.5% | 56.5% | 60.5%

Table 6. Inclusion rates for proper names and technical terms using the combined method.
Query Type | Top-1 | Top-3 | Top-5
Scientist Name | 40.0% | 52.0% | 60.0%
Disease Name | 44.0% | 60.0% | 70.0%
It is easy to see that the proposed approach is less reliable in extracting translations of such common terms. One possible reason is that the usages of common terms are diverse on the Web and the retrieved search results are not highly relevant. It is fortunate that many of these common words can be found in general-purpose translation dictionaries.
Table 8. Top 1, 3, 5 inclusion rates obtained using the combined method for extracting translations of common nouns and verbs.
Query Type | Top-1 | Top-3 | Top-5
100 Common Nouns | 23.0% | 33.0% | 43.0%
100 Common Verbs | 6.0% | 8.0% | 10.0%
BILINGUAL LEXICON CONSTRUCTION
To enhance CLIR services in a digital library that only has
monolingual document collections, the proposed approach can be
used to construct a domain-specific bilingual lexicon. We take the
document set in digital libraries into consideration. The document
set in the target language is first analyzed and possible key terms
that are representative of the document set are extracted, using the
proposed term extraction method. These extracted key terms are
likely to be similar to terms that users may use in real user queries,
since they are relatively more significant than other terms in the
documents. The proposed term translation method can then be
applied to those key terms not included in common translation
dictionaries to obtain the translation of key terms in the source
language. Therefore, a bilingual lexicon can then be constructed
where the mappings between key terms and relevant terms in the
source and target languages are maintained.
As we have already indicated, the constructed bilingual lexicon can benefit CLIR services. For a given source query, the similarity with candidate source relevant terms can be calculated using the context vector method presented in Section 4, and the top-ranked relevant terms can be extracted using the constructed bilingual lexicon. After the corresponding translations of the relevant terms are obtained, relevant documents in the target language can be retrieved using these translations. The source query can then be expanded with the relevant translations, and conventional CLIR methods can be used to retrieve documents in the target language.
5.2 An Application
We tested the STICNET Database (http://sticnet.stic.gov.tw/), which is a government-supported, Web-accessible digital library system providing a
search service for scientific documents collected in Taiwan. The
system contained documents in either English or Chinese, but no
cross-language search was provided. To test the performance of
bilingual lexicon construction, we selected 1,367 Information
Engineering documents and 5,357 Medical documents
respectively from the STICNET Database for the period 1983 to
1997 as the test bed. Using the PAT-tree-based term extraction
method, key terms were automatically extracted from each
document collection and their relevant translations were extracted
by the proposed term translation approach.
In the collection of Information Engineering documents, 1,330
key terms (with a threshold of 2 to 6-gram character strings, a
term frequency>10, and an association value>0.1) were
automatically extracted. Meanwhile, 5,708 key terms (with a
threshold of 2 to 6-gram character strings and a term
frequency>40) were automatically extracted from the Medical
document collection. Among the 1,330 auto-extracted key terms
from the Information Engineering documents, 32% were not
included in KUH Chinese Dictionary
5
(unknown terms) - one of
the largest Chinese dictionaries with 158,239 term entries - where
75% of these unknown terms were found useful. In the case of
Medical documents, 71% of the 5,708 auto-extracted key terms
were not included in KUH Chinese Dictionary where 36.6% of
these unknown terms were found useful. Table 9 shows the
accuracy of the extracted translations for these useful unknown
terms. The promising result shows the potential of the proposed
approach to assist bilingual lexicon construction.
Table 7. Some examples of the test English proper names and technical terms, and their extracted Chinese translations (the extracted equivalents are in Traditional Chinese).
Query Type | English Query
Scientist Name | Galilei, Galileo (Astronomer); Crick, Francis (Biologist); Kepler, Johannes (Mathematician); Dalton, John (Physicist); Feynman, Richard (Physicist)
Disease Name | Hypoplastic Left Heart Syndrome; Legionnaires' Disease; Shingles; Stockholm Syndrome; Sudden Infant Death Syndrome (SIDS)
Table 9. The top-n inclusion rates of translations for auto-extracted useful unknown terms.
Query Type | Top-1 | Top-3 | Top-5
Auto-extracted useful terms in Information Engineering | 33.3% | 37.5% | 50.0%
Auto-extracted useful terms in Medicine | 34.6% | 46.2% | 50.0%
RELATED WORK
Many effective retrieval models have been developed for CLIR.
For example, the Latent Semantic Indexing (LSI) method [4] has
been utilized to model inter-term relationships, instead of exact
term matching. Other methods include the cross-lingual relevance
model [11], which integrates popular techniques of
disambiguation and query expansion. However, translation of
queries not covered in a bilingual dictionary remains one of the
major challenges in practical CLIR services [9].
To deal with the translation of out-of-dictionary terms,
conventional research on machine translation has generally used
statistical techniques to automatically extract translations from
domain-specific, sentence-aligned parallel bilingual corpora [20].
However, a large parallel corpus is difficult to obtain. Some work
has been done on term translation extraction from comparable
texts, such as bilingual newspapers [5], which are easier to obtain.
Using a non-parallel corpus is more difficult than a parallel one,
due to the lack of alignment correspondence for sentence pairs.
On the other hand, research on digital libraries has made the same
endeavor. Larson et al. [10] proposed a method for translingual
vocabulary mapping using multilingual subject headings of book
titles in online library catalogs - a kind of parallel corpus.
However, book titles are still limited in coverage, compared to the
rich resources on the Web.
A new potential research direction is to perform query translation
directly, through mining the Web's multilingual and wide-range
resources [16]. Web mining is a new research area that focuses on
finding useful information from large amounts of semi-structured
hypertexts and unstructured texts [1]. Chen et al. [2] proposed a
dictionary-based approach in which the search results returned
from Yahoo China search engine were utilized to extract
translations for terms not covered in the dictionary. In their work
only an English term appearing (maybe in parenthesis)
immediately or closely after a Chinese term was considered a
possible translation. In our previous research, we proposed an
approach for extracting translations of Web queries through the
mining of anchor texts and link structures and obtained very
promising results [12, 13]. Previous experiments showed that the
anchor-text-based approach can achieve a good precision rate for
popular queries. Its major drawback is the very high cost of the
hardware and software required to collect sufficient anchor texts
from Web pages. Collecting anchor texts requires a powerful Web spider and incurs costs in network bandwidth and storage. Because of the practical needs of digital libraries, search-result pages, which are easier to obtain, are therefore investigated in this paper.
CONCLUSION
In this paper, we have introduced a Web-based approach for
dealing with the translation of unknown query terms for
cross-language information retrieval in digital libraries. With the
proposed term extraction and translation methods, it is feasible to
translate unknown terms and construct a bilingual lexicon for key
terms extracted from documents in a digital library. With the help
of such bilingual lexicons, it would be convenient for users to
formulate cross-lingual queries. The simplicity of the approach
not only makes it very suitable for digital library systems, but
would also facilitate the implementation of CLIR services.
REFERENCES
[1]
Chakrabarti, S. Mining the Web: Analysis of Hypertext and
Semi Structured Data, Morgan Kaufmann, 2002.
[2]
Chen, A., Jiang, H., and Gey, F. Combining Multiple Sources
for Short Query Translation in Chinese-English
Cross-Language Information Retrieval. In Proceedings of the
5th International Workshop on Information Retrieval with
Asian Languages (IRAL 2000), 2000, 17-23.
[3]
Chien, L.F. PAT-Tree-based Keyword Extraction for
Chinese Information Retrieval. In Proceedings of the 20th Annual International ACM Conference on Research and
Development in Information Retrieval (SIGIR 1997), 1997,
50-58.
[4]
Dumais, S. T., Landauer, T. K., and Littman, M. L.
Automatic Cross-Linguistic Information Retrieval Using
Latent Semantic Indexing. In Proceedings of ACM-SIGIR
Workshop on Cross-Linguistic Information Retrieval (SIGIR
1996), 1996, 16-24.
[5]
Fung, P. and Yee, L. Y. An IR Approach for Translating
New Words from Nonparallel, Comparable Texts. In
Proceedings of the 36th Annual Conference of the
Association for Computational Linguistics (ACL 1998), 1998,
414-420.
[6]
Gale, W. A. and Church, K. W. Identifying Word
Correspondences in Parallel Texts. In Proceedings of DARPA
Speech and Natural Language Workshop, 1991, 152-157.
[7]
Gale, W.A. and Church, K.W. A Program for Aligning
Sentences in Bilingual Corpora. Computational Linguistics,
19, 1 (1993), 75-102.
[8]
Gonnet, G.H., Baeza-yates, R.A. and Snider, T. New Indices
for Text: Pat Trees and Pat Arrays. Information Retrieval
Data Structures & Algorithms, Prentice Hall, 1992, 66-82.
[9]
Kwok, K. L. NTCIR-2 Chinese, Cross Language Retrieval
Experiments Using PIRCS. In Proceedings of NTCIR
workshop meeting, 2001, 111-118.
[10]
Larson, R. R., Gey, F., and Chen, A. Harvesting Translingual
Vocabulary Mappings for Multilingual Digital Libraries. In
Proceedings of ACM/IEEE Joint Conference on Digital
Libraries (JCDL 2002), 2002, 185-190.
[11]
Lavrenko, V., Choquette, M., and Croft, W. B. Cross-Lingual
Relevance Models. In Proceedings of ACM Conference on
Research and Development in Information Retrieval (SIGIR
2002), 2002, 175-182.
[12]
Lu, W. H., Chien, L. F., and Lee, H. J. Translation of Web
Queries using Anchor Text Mining. ACM Transactions on
Asian Language Information Processing, 1 (2002), 159-172.
[13]
Lu, W. H., Chien, L. F., and Lee, H. J. Anchor Text Mining
for Translation of Web Queries: A Transitive Translation
Approach. ACM Transactions on Information Systems, 22
(2004), 128.
[14]
Manber, U. and Baeza-yates, R. An Algorithm for String
Matching with a Sequence of Don't Cares. Information
Processing Letters, 37 (1991), 133-136.
[15]
Morrison, D. PATRICIA: Practical Algorithm to Retrieve
Information Coded in Alphanumeric. JACM, 1968, 514-534.
[16] Nie, J. Y., Isabelle, P., Simard, M., and Durand, R. Cross-Language Information Retrieval Based on Parallel Texts and Automatic Mining of Parallel Texts from the Web. In Proceedings of ACM Conference on Research and Development in Information Retrieval (SIGIR 1999), 1999, 74-81.
[17] Rapp, R. Automatic Identification of Word Translations from Unrelated English and German Corpora. In Proceedings of the 37th Annual Conference of the Association for Computational Linguistics (ACL 1999), 1999, 519-526.
[18] Silva, J. F., Dias, G., Guillore, S., and Lopes, G. P. Using LocalMaxs Algorithm for the Extraction of Contiguous and Non-contiguous Multiword Lexical Units. Lecture Notes in Artificial Intelligence, 1695, Springer-Verlag, 1999, 113-132.
[19] Silva, J. F. and Lopes, G. P. A Local Maxima Method and a Fair Dispersion Normalization for Extracting Multiword Units. In Proceedings of the 6th Meeting on the Mathematics of Language, 1999, 369-381.
[20] Smadja, F., McKeown, K., and Hatzivassiloglou, V. Translating Collocations for Bilingual Lexicons: A Statistical Approach. Computational Linguistics, 22, 1 (1996), 1-38.
| Information Search and Retrieval;Web Mining;Term Translation;translation dictionary;Context Vector Analysis;Unknown Cross-Lingual Queries;Web-based term translation approach;Cross-Language Information Retrieval;BILINGUAL LEXICON CONSTRUCTION;Digital Library;PAT-Tree Based Local Maxima Algorithm;CLIR services;Term Extraction;Digital Libraries |
202 | TypeCase: A Design Pattern for Type-Indexed Functions | A type-indexed function is a function that is defined for each member of some family of types. Haskell's type class mechanism provides collections of open type-indexed functions, in which the indexing family can be extended by defining a new type class instance but the collection of functions is fixed. The purpose of this paper is to present TypeCase: a design pattern that allows the definition of closed type-indexed functions, in which the index family is fixed but the collection of functions is extensible. It is inspired by Cheney and Hinze's work on lightweight approaches to generic programming. We generalise their techniques as a design pattern . Furthermore, we show that type-indexed functions with type-indexed types, and consequently generic functions with generic types, can also be encoded in a lightweight manner, thereby overcoming one of the main limitations of the lightweight approaches. | Introduction
A type-indexed function is a function that is defined for each member
of a family of types. One of the most popular mechanisms
implementing this notion is the Haskell [31] type class system. A
type class consists of a collection of related type-indexed functions;
the family of index types is the set of instances of the type class.
Type classes provide just one possible interpretation of the notion
of type-indexed functions. In particular, they assume an open-world
perspective: the family of index types is extensible, by defining a
new type class instance for that type, but the collection of type-indexed
functions is fixed in the type class interface so needs to
be known in advance. For some applications -- particularly when
providing a framework for generic programming -- the family of
index types is fixed (albeit large) and the collection of type-indexed
functions is not known in advance, so a closed-world perspective
would make more sense.
The original concept of a design pattern has its origins in Christopher
Alexander's work in architecture, but it has been picked up
with enthusiasm by the object-oriented programming community.
The idea of design patterns is to capture, abstract and record beneficial
recurring patterns in software design. Sometimes those patterns
can be captured formally, as programming language constructs
or software library fragments. Often, however, the appropriate
abstraction cannot be directly stated, either because of a lack
of expressiveness in the language, or because there is inherent ambiguity
in the pattern -- Alexander describes a pattern as a solution
`you can use [. . . ] a million times over, without ever doing it the
same way twice' [1]. In this case, one must resort to an informal
description. Even if the abstraction itself can be captured formally,
one might argue that a complete description of the pattern includes
necessarily informal information: a name, motivation, examples,
consequences, implementation trade-offs, and so on.
In this paper, we present a technique that allows the definition of
closed type-indexed functions, as opposed to the open type-indexed
functions provided by type classes; we do so in the format of a
design pattern. Our inspiration comes from previous research on
lightweight approaches to generic programming (LAGP). In particular
, Hinze's two papers "A Lightweight Implementation of Generics
and Dynamics" [4] (LIGD, with James Cheney) and "Generics
for the Masses" [19] (GM) provide our motivation and basis.
Those two papers focus on the particular context of generic
programming, and provide a number of techniques that can be used
to encode first-class generic functions in Haskell. However, those
techniques have a wider applicability, not addressed by Hinze. We
propose a generalisation of the technique, and demonstrate its use
in a variety of applications. Our specific contributions are:
Generalisation of the lightweight approaches. We provide templates
for designing closed type-indexed functions, abstracting
away from generic programming. The techniques in LIGD and
GM are instances of these templates.
A design pattern for type-indexed functions. We document this
generalisation as a design pattern.
Type-indexed functions with type-indexed types. We show that
with our more general interpretation of the design pattern, type-indexed
functions with type-indexed types are also instances of
the design pattern. As a consequence, generic functions with
generic types can also be encoded in a lightweight manner.
Thus, we remove one of the main limitations of the lightweight
approaches.
Other applications. We present two other interesting applications
of the pattern: PolyP in Haskell 98, and a very flexible printf
function.
The remainder of this paper is structured as follows. In Section 2
we review the lightweight approaches to generic programming. In
Section 3 we abstract the essence of the technique as a design pattern
. Section 4 presents two other small applications of the design
pattern, and Section 5 uses it to model type-indexed functions with
type-indexed types. Section 6 concludes.
Lightweight generic programming
We start by summarising the earlier work on lightweight approaches
to generic programming underlying our generalisation.
2.1
"A Lightweight Implementation of Generics and
Dynamics"
Cheney and Hinze [4] show how to do a kind of generic programming
, using only the standard Hindley-Milner type system extended
with existential types. The index family consists of hierarchical
sums and products of integers and characters. This family is enough
to represent a large subset of Haskell 98 datatypes (including mutually
recursive and nested datatypes).
data Sum a b  = Inl a | Inr b
data Prod a b = Prod a b
data Unit     = Unit
This style of generic programming requires a representation of
types as values in order to support typecase analysis. The key idea
of the LIGD paper is to use a parametrised type as the type representation
, ensuring that the type parameter reflects the type being
represented. Some Haskell implementations have recently been extended
with generalised algebraic datatypes (GADTs) [32], which
can be used for this purpose; but LIGD predates that extension, and
depends only on existential quantification.
data Rep t
  = RUnit (t <-> Unit)
  | RInt  (t <-> Int)
  | RChar (t <-> Char)
  | forall a b. RSum  (Rep a) (Rep b) (t <-> Sum a b)
  | forall a b. RProd (Rep a) (Rep b) (t <-> Prod a b)

data a <-> b = EP{from :: a -> b, to :: b -> a}
(Note that the universal quantifications are in contravariant positions,
so act existentially.)
The intention is that the equivalence type a <-> b represents embedding/projection
pairs witnessing an isomorphism between types a and b, thereby
enforcing a correspondence between types t and Rep t. Of course,
within Haskell, it is not possible to automatically verify the
isomorphisms (from . to = id and to . from = id), so
these laws should be externally checked. Furthermore, we follow
the convention of ignoring the `ugly fact' of bottom values destroying
the `beautiful theory' of many such isomorphisms [8].
A common case is with the trivial embedding/projections.
self :: a <-> a
self = EP{from = id, to = id}
Using self, we can provide a set of smart constructors for the Rep
type, yielding representations of types by themselves.
rUnit :: Rep Unit
rUnit = RUnit self
rInt :: Rep Int
rInt = RInt self
rChar :: Rep Char
rChar = RChar self
rSum :: Rep a -> Rep b -> Rep (Sum a b)
rSum ra rb = RSum ra rb self
rProd :: Rep a -> Rep b -> Rep (Prod a b)
rProd ra rb = RProd ra rb self
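For instance, a representation for pairs of an Int and a Char can be built
directly from these constructors (rIntChar is an illustrative name, not from
the paper):
rIntChar :: Rep (Prod Int Char)
rIntChar = rProd rInt rChar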
Using these smart constructors, we can build representations for
recursive datatypes, by making explicit the structure isomorphism
of the datatype. For instance, the isomorphism defining lists is
[a] ≅ 1 + a × [a], and so the corresponding type representation is
as follows.
rList :: forall a. Rep a -> Rep [a]
rList ra = RSum rUnit (rProd ra (rList ra)) (EP from to)
  where from [ ]                = Inl Unit
        from (x : xs)           = Inr (Prod x xs)
        to (Inl Unit)           = [ ]
        to (Inr (Prod x xs))    = x : xs
Note that the representation of a recursive datatype is an infinite
value; but, because of laziness, this poses no problem.
Having constructed representation values for arbitrary types, the
final step is to define generic functions. Using the representation
as a basis for structural case analysis, it is possible to simulate a
typecase [16]. For example, here is a definition of generic equality:
eq :: forall t. Rep t -> t -> t -> Bool
eq (RInt ep)  t1 t2 = from ep t1 == from ep t2
eq (RChar ep) t1 t2 = from ep t1 == from ep t2
eq (RUnit ep) _  _  = True
eq (RSum ra rb ep) t1 t2 =
  case (from ep t1, from ep t2) of
    (Inl x, Inl y) -> eq ra x y
    (Inr x, Inr y) -> eq rb x y
    _              -> False
eq (RProd ra rb ep) t1 t2 =
  case (from ep t1, from ep t2) of
    (Prod x y, Prod x' y') -> eq ra x x' && eq rb y y'
Using Haskell type classes, it is possible to make the use of generic
functions even more convenient: the class TypeRep can be used to
build values of type Rep t implicitly.
class TypeRep t where
  rep :: Rep t
instance TypeRep Unit where
  rep = rUnit
instance TypeRep Int where
  rep = rInt
instance TypeRep Char where
  rep = rChar
instance (TypeRep a, TypeRep b) => TypeRep (Sum a b) where
  rep = rSum rep rep
instance (TypeRep a, TypeRep b) => TypeRep (Prod a b) where
  rep = rProd rep rep
instance TypeRep a => TypeRep [a] where
  rep = rList rep
For example, we can now express generic equality with an implicit
rather than explicit dependence on the representation.
ceq :: forall t. TypeRep t => t -> t -> Bool
ceq t1 t2 = eq rep t1 t2
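Two small usage sketches of eq and ceq, with the expected results in
comments:
eq (rProd rChar rInt) (Prod 'a' 1) (Prod 'a' 2)   -- False
ceq "abc" "abc"                                   -- True (String is [Char], via the TypeRep [a] instance)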
2.2
"Generics for the Masses"
Hinze's later GM approach [19] has a very similar flavour to LIGD;
however, somewhat surprisingly, Hinze shows how to do generic
programming strictly within Haskell 98, which does not support
rank-n types or even existential types. Nevertheless, there is a close
relationship between type classes and polymorphic records (for
example, one possible translation of type classes into System F uses
polymorphic records), and these require something like existential
types for their encoding. Thus, type class instances can be seen
as implicitly-passed records. Hinze uses this observation to deliver
two implementations of generics.
2.2.1
Generic functions on types
The first implementation of generics in GM ("GM1", from now
on) can be seen as a direct descendent of LIGD. Instead of using a
datatype with an existential quantification, Hinze uses a type class
Generic.
class Generic g where
  unit     :: g Unit
  sum      :: (TypeRep a, TypeRep b) => g (Sum a b)
  prod     :: (TypeRep a, TypeRep b) => g (Prod a b)
  datatype :: TypeRep a => (b <-> a) -> g b
  char     :: g Char
  int      :: g Int
The parameter g of the type class represents the generic function,
and each of the member functions of the type class encodes the
behaviour of that generic function for one structural case. Generic
functions over user-defined types can also be defined using the
datatype type case. In this case, the isomorphism between the
datatype and its structural representation must be provided.
The type class TypeRep is used to select the appropriate behaviour
of the generic function, based on the type structure of its argument
. The role of this type class is somewhat analogous to the
synonymous one in Section 2.1. One contrast with LIGD is that
TypeRep for GM1 is not optional, because the type representations
are always implicitly passed.
class TypeRep a where
  typeRep :: Generic g => g a
instance TypeRep Unit where
  typeRep = unit
instance (TypeRep a, TypeRep b) => TypeRep (Sum a b) where
  typeRep = sum
instance (TypeRep a, TypeRep b) => TypeRep (Prod a b) where
  typeRep = prod
instance TypeRep Char where
  typeRep = char
instance TypeRep Int where
  typeRep = int
For GM, the type class TypeRep directly selects the appropriate
behaviour for a particular structural case from the generic function.
In contrast, for LIGD, the corresponding type class TypeRep builds
a value as a type representation for a particular structural case,
and this representation is then used by a generic function to select
the appropriate behaviour. The effect is the same, but GM is more
direct.
A new generic function is defined via an instance of Generic,
providing an implementation for each structural case. For instance,
the generic function gSize that counts all the elements of type Int
and Char in some structure could be encoded as follows.
newtype GSize a = GSize{appGSize :: a -> Int}
instance Generic GSize where
  unit         = GSize (\_ -> 0)
  sum          = GSize (\t -> case t of
                         Inl x -> gSize x
                         Inr y -> gSize y)
  prod         = GSize (\t -> case t of
                         Prod x y -> gSize x + gSize y)
  datatype iso = GSize (\t -> gSize (from iso t))
  char         = GSize (\_ -> 1)
  int          = GSize (\_ -> 1)
gSize :: TypeRep a => a -> Int
gSize = appGSize typeRep
A record of type GSize a contains a single function appGSize of
type a -> Int, which can be used to compute the number of elements
in some structure of type a. The function gSize, which is the actual
generic function, simply extracts the sole appGSize field from a
record of the appropriate type, built automatically by typeRep.
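For instance, a hedged usage sketch relying only on the TypeRep instances
shown above:
gSize (Prod 'a' (1 :: Int))   -- evaluates to 2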
2.2.2
Generic functions on type constructors
The second implementation of generics in GM ("GM2") permits
parametrisation by type constructors rather than by types. For example
, whereas the generic function gSize of the previous section
has type a
Int for all first-order types a in the type class TypeRep,
in this section we show a generic function gSize with type f a
Int
for all type constructors f in the constructor class FunctorRep.
Lifting in this fashion introduces the possibility of ambiguity:
a type g
(f a) may be considered a type constructor g applied
to a type f a, or the composition of constructors g and f applied
to type a. Therefore we must explicitly pass type representations,
increasing flexibility but decreasing brevity. This is reflected in the
analogous type class Generic, where the implicitly-passed TypeRep
contexts are now changed to explicitly-passed functions.
class Generic g where
  unit     :: g Unit
  sum      :: g a -> g b -> g (Sum a b)
  prod     :: g a -> g b -> g (Prod a b)
  datatype :: (b <-> a) -> g a -> g b
  char     :: g Char
  int      :: g Int
However, this modification of the type class restricts expressivity,
since the only generic function we can call is the one being defined,
recursively. Consequently, generic functions that perform calls to
other generic functions (as when defining generic membership in
terms of generic equality) become harder to define.
With the new Generic class it is also possible to build the
values for type representations automatically, using another type
class TypeRep. Just as with LIGD, this class now becomes optional.
Alternatively, we can use a type class FunctorRep to capture the
notion of unary type constructor or functor.
class FunctorRep f where
  functorRep :: Generic g => g a -> g (f a)
We have to define similar classes for each arity of type constructor.
Generic functions are defined in a very similar fashion to GM1.
For instance, the type Count a below represents a generic function
that counts zero for each occurrence of a value of type Int or Char
in some structure of type a.
newtype Count a = Count{applyCount :: a -> Int}
instance Generic Count where
  unit           = Count (\_ -> 0)
  sum a b        = Count (\x -> case x of
                           Inl l -> applyCount a l
                           Inr r -> applyCount b r)
  prod a b       = Count (\(Prod x y) ->
                           applyCount a x + applyCount b y)
  datatype iso a = Count (\x -> applyCount a (from iso x))
  char           = Count (\_ -> 0)
  int            = Count (\_ -> 0)
While this function by itself approximates const 0, it is the basis
for other more useful functions that really count the number of elements
in some structure in some way, by overriding the behaviour
of the basic generic function for occurrences of the type parameter:
gSize :: FunctorRep f => f a -> Int
gSize = applyCount (functorRep (Count (\_ -> 1)))
The payback of using FunctorRep is that we can define the
behaviour of the generic function for its parameters. For instance,
we could sum all the integers in some integer-parametrised datatype
by using the identity function to define the behaviour of the generic
function for the type parameter.
gSum :: FunctorRep f => f Int -> Int
gSum = applyCount (functorRep (Count id))
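The GM paper also provides representations for user-defined types such as
lists; that code is not reproduced in this excerpt, but the following sketch
(rList2 is a hypothetical name, and Prelude's sum must be hidden so that the
class method is in scope) indicates how a FunctorRep instance for lists could
be obtained. With it, gSize "abc" would evaluate to 3 and gSum [1, 2, 3] to 6.
-- a minimal sketch, assuming the Generic and FunctorRep classes above
rList2 :: Generic g => g a -> g [a]
rList2 ra = datatype listEP (sum unit (prod ra (rList2 ra)))
  where listEP = EP fromL toL
        fromL [ ]             = Inl Unit
        fromL (x : xs)        = Inr (Prod x xs)
        toL (Inl Unit)        = [ ]
        toL (Inr (Prod x xs)) = x : xs

instance FunctorRep [] where
  functorRep = rList2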
Closed type-indexed functions
In LIGD and GM, we are shown three methods for implementing
closed type-indexed functions. Those three variations give us different
expressive power, and impose different constraints on the
type system. A choice of implementation techniques, together with
technical trade-offs making no one method superior in all circumstances
, is characteristic of design patterns.
In this section, we introduce the TypeCase design pattern,
capturing the different techniques for implementing closed type-indexed
functions.
The TypeCase design pattern
Intent:
Allowing the definition of closed type-indexed functions.
Motivation:
The typecase design pattern captures a closed-world
view of ad-hoc polymorphism. In Haskell, the type class system
is a mechanism that supports ad-hoc polymorphism, but from an
open-world point of view: type classes can be extended with cases for
new datatypes, at the cost of a non-extensible set of functions.
Under the closed-world assumption, there is a fixed set of type-structural
cases but arbitrarily many type-indexed functions ranging
over those cases. An example where the closed-world perspective
works better than the open-world one is generic programming, in
which we take a structural perspective on types as opposed to the
more traditional nominal one. Using just a few operations on types,
it is possible to represent the whole family of structural definitions
of interest. For instance, here is a possible definition for a generic
function that counts all the elements of some structure t:
gsize⟨t :: *⟩ :: t -> Int
gsize⟨Unit⟩            = 0
gsize⟨Int⟩             = 1
gsize⟨Sum⟩ (Inl x)     = gsize x
gsize⟨Sum⟩ (Inr y)     = gsize y
gsize⟨Prod⟩ (Prod x y) = gsize x + gsize y
With an open-world perspective, we can present a fixed number
of type-indexed definitions that range over those few cases; but
we cannot easily introduce new definitions. This is clearly not
appropriate for generic programming. In fact, what we expect from
a generic programming facility is the ability to introduce a new
generic definition without affecting the surrounding context. This
is precisely what the closed-world perspective provides us.
Applicability:
Use this pattern:
to encode collections of definitions that are indexed by some
fixed family of types, while allowing new definitions to be added
to the collection without affecting modularity;
when a definition is variadic, that is, it has a variable number of
arguments (see Section 4.2 for an example);
to try to avoid type-class trickery, such as multiple-parameter
type classes, functional dependencies, overlapping instances or
even duplicate instances (just consider a direct encoding of the
examples presented in the paper into type classes [30]);
to capture some shape invariants, like the ones captured by
some nested types or phantom types [29, 18].
Structure:
See Figure 1.
Participants:
Structural Cases: a set of datatypes which represent the possible
structural cases for the type-indexed function;
Typecase: representing the structure of a type-indexed function;
Dispatcher: a type class, containing a single function, that is
responsible for dispatching a value of one of the structural cases
into the corresponding branch of the typecase, based on the type
of the value;
Type-indexed function: defining the type-indexed function using
an instance of the typecase.
Collaborations:
The typecase uses the structural cases in order to create a
corresponding number of cases that can be used to define the
type-indexed function.
The dispatcher uses the structural cases in order to create
a corresponding number of instances that will forward some
value of that family of structural cases into the corresponding
case in the typecase component.
The type-indexed function (TIF) uses an instance of the typecase
in order to implement the desired functionality for the type-indexed
function.
Implementation:
Typically, a typecase component is created using
the structural cases. There are three main variations for the implementation
of a typecase: two of them are based on type classes
and the other one on a smart datatype. A smart datatype is a parametrised
type where the type parameters are dependent on the constructors
. The idea of a smart datatype can be represented in various
forms: existential datatypes with an equivalence type (à la LIGD),
GADTs, phantom types, among others.
The goal of this design pattern is to simulate a closed type-indexed
function. In general, a type-indexed function f has the
following structure.
f⟨t :: κ | d1 ... dk :: κ⟩ :: σ
f⟨t1 a1 ... ai⟩ = \x11 ... x1n -> e1
...
f⟨tm z1 ... zj⟩ = \xm1 ... xmn -> em
The type signature tells us that f has one type parameter t and
optional type parameters d1 ... dk with the same structure and kind
as t. The type σ of the TIF may depend on t and d1 ... dk.
We should note that this is not the same as having a TIF with
multiple type arguments. There is no problem, in principle, in having
multiple-parameter type arguments, but it would lead to an explosion
in the number of typecases. This would be a generalisation
of this design pattern. For simplicity, we will only consider type
parameters with the same structure. The usefulness of this simpler
case is reflected in applications such as generic map where the input
and output structures of the generic map function are the same.
The body of f contains (at least) m branches, providing the
behaviour of the TIF for each member of the family of types t
(that is, t1 a1 ... ai, ..., tm z1 ... zj). This family of types corresponds
to the structural cases participant of the design pattern.
For each branch of the definition, we bind possible variables
x11 ... x1n, ..., xm1 ... xmn and define each typecase of f with
e1, ..., em.
We now discuss the three main variations of the design pattern.
1. Smart datatypes: This variation is inspired by the LIGD approach
. Hindley-Milner typing extended with existential datatypes
(supported in most Haskell compilers) is enough to encode
it. However, with extensions such as GADTs (supported
by GHC 6.4) the encoding becomes much more direct. Unfortunately,
neither of those extensions conforms to Haskell 98. We
will present this version of the design pattern using a GADT
syntax for simplicity.
Using the structural cases given by t1 a1 ... ai, ..., tm z1 ... zj,
we can derive the typecase and dispatcher seen in Figure 1.
Since there are m structural cases in a standard instance of the
design pattern, one would create m constructors c_t1, ..., c_tm and
also m instances for Rep. TIFs can now be defined using those
components, by creating some function f that takes a first argument
of type Rep and returns a value of type σ.
[Figure 1. The structure of the TypeCase design pattern. The figure tabulates, for the smart-datatype and the implicit/explicit-representation variations, the code templates for the Typecase, Dispatcher and Type-indexed function participants.]
The dispatcher component is optional in this variation. The
TIFs created with this variation are fully closed to extension;
no customisation is possible. This means that if we want to add
extra functionality we need to modify the smart datatype (and
the dispatcher if we have one). However, TIFs that call other
TIFs are trivial to achieve; there is no need for tupling.
2. Implicit representations: The implicit representation version
of the design pattern is inspired by GM1. Perhaps surprisingly,
some implementations of this instance require only Haskell 98.
However, if we need to have structurally-dependent variables,
then we also require multiple-parameter type classes.
Proceeding in a similar fashion to the smart datatype
approach, we use the structural cases to derive the typecase and
dispatcher seen in Figure 1. Again, because we have m structural
cases, we create m functions case_t1, ..., case_tm and m instances
of Rep.
The dispatcher is not an optional component: it always
needs to be defined in this variation. As with the smart datatype
variation, TIFs defined in this way are fully closed to extension,
and calls to other TIFs are trivial.
3. Explicit representations: The explicit representation variation
of the design pattern is inspired by GM2. Like the implicit
approach, Haskell 98 is enough to handle the simpler forms
(one type parameter). However, if we discard the optional dispatcher
, then Haskell 98 can handle all forms.
Using the structural cases to derive the typecase and dispatcher
seen in Figure 1, we would obtain a very similar structure
to the implicit representation version. The most noticeable
difference is that, with the explicit representation, the definition
of rep needs to provide the corresponding case function with
the representations for each of its type parameters. The second
difference is that the part corresponding to the representations
of the type parameters reflects the fact that we are providing
explicit representations: in this instance it corresponds
to explicit arguments of the function, while with the implicit
representation it corresponds to (implicitly passed) type class
constraints.
Variations of this instance of the design pattern can also be
found in the literature [10, 37], as described in Section 4.2. TIFs
defined in this fashion are not fully closed to extension: it is possible
to override default behaviour. However, the extra flexibility
comes at a cost: recursive calls to other TIFs are not possible.
One common solution for this problem is to tuple together into
a record the mutually-dependent functions. Another possibility
would be to have a notion of dependencies: if a TIF f requires
calls to another TIF g, then the record that defines f has a field
that is an instance of g. Although this work is quite tedious, Lh
[26] shows how a type system can lighten the burden.
An associated problem for TIFs in this setting is the issue
of composability. If two TIFs are defined using different instances
(this is, they are not tupled together), then we cannot, in
a straightforward manner, use the same representation to compose
them. To illustrate the problem, consider:
newtype F v1 ... vn = F{f :: ...}
newtype G v1 ... vn = G{g :: ...}
instance Generic F where
...
instance Generic G where
...
Now let us suppose that we define a type-indexed abstraction
(that is, a function that uses one or more TIFs and is not defined
over the structure of types):
h rep = ... f rep ... g rep ...
The interpretation of this definition as a type-indexed function
could be thought of as: h⟨a⟩ = ... f⟨a⟩ ... g⟨a⟩ .... While this
is a perfectly reasonable interpretation, in practice f requires
inconsistent types F v1 ... vn and G v1 ... vn for rep: F and
G are two different type constructors, so in a Hindley-Milner
type system, unification obviously fails. However, F and G
do have something in common. In particular, they are both
instances of Generic. So, in Haskell extended with higher-order
polymorphism, we can capture this relation with a rank-2
type, thus providing a possible solution for the problem of
composability.
h :: (forall g. Generic g => g v1 ... vn) -> ...
h rep = ... f rep ... g rep ...
We should note that even though we have presented three main
variations of the design pattern, the concept of a design pattern is,
by itself, quite informal and thus prone to different interpretations.
For instance, as we will see later, applications of the pattern (such
as GM) can have more type cases than there are datatype variants,
because some cases overlap. It is important to note that, depending
on the context of a problem, a design pattern can be adapted to
better fit that problem.
Applications
We present two applications of the design pattern. In Section 4.1,
still within the context of generic programming, we show how
one can build a library inspired by PolyP [21, 22] but working in
Haskell 98. In Section 4.2, we present a very flexible version of a
C-style printf function.
4.1
Light PolyP
It probably comes as no surprise to the reader that the technique
introduced in GM and LIGD can be applied to other generic programming
approaches as well. PolyP was one of the first attempts to
produce a generic programming language. It is a simpler language
than Generic Haskell, working in a much more restricted family
of datatypes, namely one-parameter regular types. But this restriction
allows stronger properties to be stated: its simplicity and strong
theoretical background make it an appropriate language for teaching
both the theory [3] and practice of generic programming. Our
proposal Light PolyP encourages this, because no external PolyP
compiler is required (although one might still be desirable, for a
more convenient syntax).
Norell [30] shows how to use the Haskell type class system (extended
with multiple-parameter type classes and functional dependencies
) to obtain first-class PolyP generic functions in Haskell. In
this section, we will present a "lighter" version of PolyP, requiring
only Haskell 98 (without extensions such as multiple-parameter
type classes and functional dependencies) but with the same expressive
power.
Instead of using sums of products like LAGP or Generic
Haskell, PolyP uses lifted pattern functors as structural cases. The
pattern functors Empty, Plus and Prod have counterparts in LAGP.
The pattern functors Rec and Par correspond respectively to the recursive
argument and the parameter of the unary regular datatype.
The pattern functor Const t for some type t represents the constant
functor, and Comp handles the composition of functors required
for regular types.
data Empty p r       = Empty
data Plus g h p r    = Inl (g p r) | Inr (h p r)
data Prod g h p r    = Prod (g p r) (h p r)
newtype Par p r      = Par{unPar :: p}
newtype Rec p r      = Rec{unRec :: r}
newtype Comp d h p r = Comp{unComp :: d (h p r)}
newtype Const t p r  = Const{unConst :: t}
The equivalence type is used to establish the isomorphism
between a regular datatype and its top-level structure. The embedding/projection
functions are traditionally called inn and out.
data Iso a b = Iso{inn :: a -> b, out :: b -> a}

listIso = Iso inL outL
  where
    inL (Inl Empty)                   = [ ]
    inL (Inr (Prod (Par x) (Rec xs))) = x : xs
    outL [ ]                          = Inl Empty
    outL (x : xs)                     = Inr (Prod (Par x) (Rec xs))
In PolyP no generic customisation is allowed, thus we can use
an implicit representation version of the design pattern and consequently
, it is possible for one generic function to use other generic
functions in its definition. The typecase component corresponds to:
class Generic f where
  empty    :: f Empty
  plus     :: (Rep g, Rep h) => f (Plus g h)
  prod     :: (Rep g, Rep h) => f (Prod g h)
  par      :: f Par
  rec      :: f Rec
  comp     :: (Functor d, Rep h) => f (Comp d h)
  constant :: f (Const t)
The dispatcher simply selects the corresponding case based on
the type of the argument of the generic function g.
class Rep g where
  rep :: Generic f => f g
instance Rep Empty where
  rep = empty
instance (Rep g, Rep h) => Rep (Plus g h) where
  rep = plus
instance (Rep g, Rep h) => Rep (Prod g h) where
  rep = prod
instance Rep Par where
  rep = par
instance Rep Rec where
  rep = rec
instance (Functor d, Rep h) => Rep (Comp d h) where
  rep = comp
instance Rep (Const t) where
  rep = constant
Like GM, defining a generic function is a matter of declaring
a record with a single field, a function of the appropriate type. As
an example, we could define fmap2, the map operation for binary
functors, as follows.
newtype FMap2 a b c d f = FMap2{
  appFMap2 :: (a -> c) -> (b -> d) -> f a b -> f c d}
instance Generic (FMap2 a b c d) where
  empty    = FMap2 (\_ _ _ -> Empty)
  plus     = FMap2 (\f g t -> case t of
                     Inl x -> Inl (fmap2 f g x)
                     Inr y -> Inr (fmap2 f g y))
  prod     = FMap2 (\f g t -> case t of
                     Prod x y -> Prod (fmap2 f g x) (fmap2 f g y))
  par      = FMap2 (\f g (Par t) -> Par (f t))
  rec      = FMap2 (\f g (Rec t) -> Rec (g t))
  comp     = FMap2 (\f g (Comp t) -> Comp (fmap (fmap2 f g) t))
  constant = FMap2 (\_ _ (Const t) -> Const t)
fmap2 :: Rep f => (a -> c) -> (b -> d) -> f a b -> f c d
fmap2 = appFMap2 rep
With fmap2 it is now possible to define several widely-applicable
recursion operators [28, 14] using PolyP. For example, the catamorphism
operator could be defined as:
cata iso f = f . fmap2 id (cata iso f) . out iso
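As a usage sketch of cata with the listIso given earlier (sumList and
plusAlg are illustrative names only):
sumList :: Num a => [a] -> a
sumList = cata listIso plusAlg
  where plusAlg (Inl Empty)                  = 0
        plusAlg (Inr (Prod (Par x) (Rec n))) = x + n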
Note that one must give explicitly the isomorphism that converts
between the datatype and its representation. This contrasts
with the original PolyP approach, in which that translation is inferred
. This is the common trade-off of brevity for flexibility; being
forced to state the isomorphism allows the programmer to choose a
different one, giving something analogous to Wadler's ideas about
views [34]. We might say that this style of generic programming is
isomorphism-parametrised instead of datatype-parametrised.
In the original PolyP, the polytypic construct provides a convenient
syntax for encoding generic functions. Furthermore, combinators
for pointfree programming may be provided, making generic
definitions even more compact. These combinators are just normal
Haskell functions, and so there is no problem in implementing them
in pure Haskell; but to keep the example short, we have stuck with
pointwise definitions.
The advantages of this translation when compared with the one
proposed in [30] are that it requires only Haskell 98, and that the
types of the generic functions are much closer to what one would
expect. In Norell's translation, the type class constraints posed
some problems because both the two-parameter class FunctorOf
and the classes for the generic functions propagated throughout
the code. With the Light PolyP approach, only instances of Rep
propagate, leading usually to just one type class constraint.
4.2
Printf
The C-style printf function, which takes a variable number of
parameters, has always been a challenge for programmers using
strongly and statically typed languages. The problem with printf is
that, in its true essence, it requires dependent types. This happens
because the value of the format string determines the type of the
function. However, it has been shown by Danvy [10] that by changing
the representation of the control string it is possible to encode
printf in any language supporting a standard Hindley-Milner type
system.
4.2.1
A solution using explicit representations
In this section, we will demonstrate that Danvy's solution is another
instance of the TypeCase design pattern, using an explicit representation
. Furthermore, we will show a new use of the printf function
by making use of the fact that we can (in some cases) infer the
format string.
Danvy's original solution had the following combinators:
lit :: String -> (String -> a) -> String -> a
lit x k s = k (s ++ x)
eol :: (String -> a) -> String -> a
eol k s = k (s ++ "\n")
int :: (String -> a) -> String -> Int -> a
int k s x = k (s ++ show x)
str :: (String -> a) -> String -> String -> a
str k s x = k (s ++ x)
eod :: String -> String
eod = id
If we capture all the occurrences of the form String -> t with a
newtype Printf, and modify the definitions in order to reflect this
newtype, we obtain the following code.
newtype Printf t = Printf{printfApp :: String -> t}
lit :: String -> Printf a -> Printf a
lit x k = Printf (\s -> printfApp k (s ++ x))
eol :: Printf a -> Printf a
eol k = Printf (\s -> printfApp k (s ++ "\n"))
int :: Printf a -> Printf (Int -> a)
int k = Printf (\s x -> printfApp k (s ++ show x))
str :: Printf a -> Printf (String -> a)
str k = Printf (\s x -> printfApp k (s ++ x))
eod :: Printf String
eod = Printf id
Taking one step further, we can now abstract over Printf and
create a type class that replaces it with some functor f .
class Format f where
  lit :: String -> f r -> f r
  eol :: f r -> f r
  int :: f r -> f (Int -> r)
  str :: f r -> f (String -> r)
  eod :: f String
With this last transformation, we can start seeing an instance of the
TypeCase design pattern. The structural cases participant consists
of functions of the form Int -> r or String -> r, or a String -- lit
and eol are overlapping cases. The class Format constitutes the
typecase participant. Because the dispatcher is optional in explicit
versions of the design pattern, there is no obligation to define it.
Now, using the newtype Printf , we can define an instance of Format
that implements the functionality of printf .
instance Format Printf where
  lit x k = Printf (\s -> printfApp k (s ++ x))
  eol k   = Printf (\s -> printfApp k (s ++ "\n"))
  int k   = Printf (\s x -> printfApp k (s ++ show x))
  str k   = Printf (\s x -> printfApp k (s ++ x))
  eod     = Printf id
The final touch is provided by the definition of printf in terms of
printfApp. The printf function is expected to receive the formatting
argument of type Printf t as its first parameter. The parameter t
defines the type of printf , which can involve a variable number of
arguments. Analysing the type of printfApp, we see that the first
parameter is the formatting argument, the resulting type is the type
that we expect for printf , and there is a second argument which is a
String. Now, what does that String represent? Danvy's solution uses
a continuation-passing style and the second argument of printfApp
corresponds to the value fed to the initial continuation. Thus using
the string "" for that argument does the trick.
printf :: Printf t -> t
printf p = printfApp p ""
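As a quick usage sketch of printf with the combinators above (greeting is an
illustrative name; the expected result is shown in the comment):
greeting :: String -> Int -> String
greeting = printf (lit "Hello " $ str $ lit ", you are " $ int $ lit " years old" $ eod)
-- greeting "Ada" 36  ==  "Hello Ada, you are 36 years old"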
We have shown, informally, that Danvy's solution is indeed an
instance of the TypeCase design pattern. However, some questions
might be asked at this point. Do we really need to create a class in
order to implement printf ? What other instances of the class would
we be able to provide? In fact there are not many other uses for the
type class; printf seems to be the only natural instance. Perhaps we
could consider scanf , another C function that uses the same format
string; but the derived type for scanf would be different, and so
it is not possible to reuse the same type class. Another possibility
would be considering other versions of printf , such as one for the
IO monad. However, if we think that printf is really the only useful
instance of the type class, why not get rid of the type class altogether?
A design pattern is a flexible design, and depending on the context
of the problem, it can be adapted to fit the problem. If a type-indexed
function is used at just one type index, it is reasonable to
simplify the pattern and eliminate the type class. The result would
be the specialised solution using the newtype Printf t presented before
. We could go even further and argue that Danvy's original solution
is already an instance of the design pattern, corresponding to
one further simplification of the design pattern, namely getting rid
of the newtype.
4.2.2
An alternative solution using smart datatypes
In the previous section, we have argued that Danvy's version of
printf is an instance of the TypeCase design pattern. However,
Danvy's solution and explanation for printf is not, perhaps, very
intuitive to understand. In this section, we take a different perspective
and will look at the formatting parameter of printf as a special
kind of list. This perspective corresponds to an instance of the
design pattern using a smart datatype. The datatype (the typecase
participant) encodes a list, which has an empty case that corresponds
to the combinator eod, and a number of recursive cases that
correspond to lit, eol, int and str.
data Printf t where
  Lit :: String -> Printf t -> Printf t
  Eol :: Printf t -> Printf t
  Int :: Printf t -> Printf (Int -> t)
  Str :: Printf t -> Printf (String -> t)
  Eod :: Printf String
Informally speaking, we have reused the types from the newtype
solution and lifted the functions to constructors. However, using
a datatype instead of a number of functions makes it easier to
view the format parameter of printf as a list. For instance, the Lit
constructor takes the literal string that we wish to print and also the
list corresponding to the rest of the format parameter of printf .
The printfApp from the previous section would, in this setting,
correspond to a dependently-typed function (in the sense that the
types of its branches are determined by the constructors used to
perform pattern matching).
printfApp :: Printf t -> String -> t
printfApp (Lit x k) s = printfApp k (s ++ x)
printfApp (Eol k)   s = printfApp k (s ++ "\n")
printfApp (Int k)   s = \x -> printfApp k (s ++ show x)
printfApp (Str k)   s = \x -> printfApp k (s ++ x)
printfApp Eod       s = s
The final step is to define printf . Little effort is required; we just
need to copy the definition of printf from the previous section. The
only apparent difference between the two versions is that, where
the first version uses functions like lit and int, this version uses
constructors like Lit and Int. However, despite the similarity of the
two solutions, their expressive power is not the same. The smart
datatype solution in this section is fully closed to extension. That
is, in order to add another case in the formatting list, such as a
constructor Chr that handles characters, we would need to modify
the GADT itself. On the other hand, the solution in the previous
section using the explicit version of the design pattern allows some
form of extensibility. Adding a new case for printf that handles
characters corresponds to adding a new function, which could even
be in a different module.
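A small usage sketch of this GADT version, with printf defined exactly as in
the previous section:
printf (Lit "(" (Int (Lit ")" Eod))) 7   -- evaluates to "(7)"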
4.2.3
Making use of a dispatcher
The two solutions that we presented did not make any use of a
dispatcher. In this section we will show how the dispatcher can
be useful. The version of the dispatcher presented here is for the
explicit representation solution in Section 4.2.1, but could be easily
adapted to the smart datatype solution in Section 4.2.2.
Suppose that we want to define a function that prints a pair of
integers. Equipped with printf , we could try to encode that with
either one of the following two functions.
printPair x y = printf fmt "(" x ", " y ")"
  where fmt = str $ int $ str $ int $ str $ eod
printPair2 x y = printf fmt x y
  where fmt = lit "(" $ int $ lit ", " $ int $ lit ")" $ eod
The function printPair tackles the problem using a printf that takes
a format argument expecting five arguments: three strings and two
integers. The function printPair2, on the other hand, makes use of
the fact that the string arguments are constants, and uses lit instead.
Thus, in this case, printf takes the format argument and two integer
arguments. Although relatively compact, the format argument is not
as convenient to use as it would be in C, where one would write
something like "(%d, %d)".
The role of the dispatcher is to infer automatically the corresponding
type representation for some type t. In the case of printf ,
it is not possible to infer all possible representations. Consider, for
instance, the end-of-line case eol :: f r -> f r, which takes an existing
format with some type r, adds a newline and returns a format of the
same type. Clearly, there is no way to deduce that there is an occurrence
of eol based on the type alone. Similarly, the lit case has no
effect on the type. Nevertheless, the other, more type-informative,
cases of printf can be inferred.
class Rep t where
  rep :: Format f => f t
instance Rep String where
  rep = eod
instance Rep r => Rep (Int -> r) where
  rep = int rep
instance Rep r => Rep (String -> r) where
  rep = str rep
We should note that these instance declarations are outside the
scope of Haskell 98 -- types are used where type variables should
occur. However, this is a quite mild extension, and is supported by
most Haskell compilers.
Making use of the fact that now we can infer some cases of the
string format, we could define:
printPair :: Int -> Int -> String
printPair x y = printf rep "(" x ", " y ")"
printTrio :: Int -> Int -> Int -> String
printTrio x y z = printf rep "(" x ", " y ", " z ")"
The function printPair does the same as before. However, with
this new definition, the format directive is automatically inferred.
The function printTrio is doing the same as printPair, except that
it does it for triples. We should emphasise that the occurrences of
printf in those two functions use different numbers of arguments.
We should also mention that, in some situations, we will need to
provide explicit types, otherwise the type checker would not be able
to infer the correct instances of the type class Rep.
This use of printf seems to be practical, and for this simple
version of it we might even argue that everything that we could
do with a manually-provided parameter could be done with an
automatically-inferred one. We simply do not need lit and eol,
because those can be simulated using str (with, of course, extra
String arguments). Nevertheless, if we decided to go for a more
powerful version of printf , this might not be the case. Consider, for
instance, the formatting directive "%2d". In this case the number 2
is specifying the minimum width of the string that represents that
number. If we wanted to allow this kind of behaviour, we could
add an extra parameter of type Int to the int case. However, the
problem now is to choose a value for that parameter when we
automatically build the format directive. In this case we need to
use some default value (for instance 1). However we are no longer
able, for all possible cases, to simulate the functionality of printf
with manual format strings using only automatically-built ones.
Type-indexed types
Until now we have been discussing type-indexed functions, that
is, families of functions indexed by types. We turn now to type-indexed
types, that is, families of types indexed by types. In the
context of generic programming, we call these generic types. Generic
functions with generic types are functions that have different
result types for each structural case.
In this section, we will show how to implement type-indexed
types as another variation of the TypeCase design pattern. We do
this by translating a standard example of Generic Haskell [20],
namely generic tries [17], into our approach.
5.1
Encoding type-indexed types
Section 3 presents templates for encoding type-indexed functions.
In this section, we show how to translate a type-indexed type into
an instance of the TypeCase design pattern.
In general, a type-indexed type has the form
Ψ⟨t :: κ⟩ :: ν
Ψ⟨t1 a1 ... ai⟩ = Λ d11 ... d1n . τ1
...
Ψ⟨tm z1 ... zj⟩ = Λ dm1 ... dmn . τm
where Ψ is the type-level function that defines the type-indexed
type; t is the family of types (or type constructors)
(t1 a1 ... ai), ..., (tm z1 ... zj) of kind κ that corresponds to the structural cases
of the design pattern; and, finally, ν is the kind of Ψ⟨t :: κ⟩. For
each type that is a member of that family, we have a corresponding
branch for Ψ. The type-level lambda abstraction on the right side of
each branch is optional, and corresponds to possible parametrically
polymorphic variables d1 ... dn that the type-indexed type might
depend on. Finally, τ1 ... τm corresponds to the family of types (or
type constructors) that defines the type-indexed type.
5.1.1
Type class translation
We can now derive an instance of the TypeCase design pattern
to capture type-indexed functions with type-indexed types. The
typecase participant, for instances of the design pattern using either
implicit or explicit representations, could be defined as follows.
class Ψ g where
  case_t1 :: ρ(a1 ... ai) => g (t1 a1 ... ai) d11 ... d1n τ1
  ...
  case_tm :: ρ(z1 ... zj) => g (tm z1 ... zj) dm1 ... dmn τm
We reuse the name Ψ for the name of the type class that encodes
the typecase component. The parameter g is a type constructor
whose kind must be written out literally (if we were to use the
kind ν instead of its literal occurrence, we would obtain the wrong
kind). There are m functions case_t1, ..., case_tm that correspond
to the typecases for each type (t1 a1 ... ai), ..., (tm z1 ... zj). Each
case of the typecase function is defined by providing the type
constructor g with the corresponding types. Finally,
ρ(a1 ... ai), ..., ρ(z1 ... zj) corresponds to the representations for
the types (a1 ... ai), ..., (z1 ... zj).
The only difference between explicit and implicit versions of the
design pattern for the typecase component is that in the explicit version
the occurrences of ρ are expanded into explicitly-passed representations
of the form g a ..., whereas with the implicit representations
those occurrences are replaced by type class constraints
of the form Rep a ....
The dispatcher can also be derived; but to do so requires extensions
to Haskell 98 -- specifically, multiple-parameter type classes
with functional dependencies. The problem is that, even in its
simplest form, a type-indexed type requires at least two type arguments
: the first one corresponding to the index type, and the
second one that is the resulting type-indexed type for that index,
and thus depending on the index. This problem is not too serious if
we use the explicit representations variant of the pattern, since the
dispatcher is optional, but using implicit representations forces us
outside Haskell 98.
class RepΨ t d1 ... dn ψ | t d1 ... dn -> ψ where
  rep :: Ψ g => g t d1 ... dn ψ
instance ρ(a1 ... ai) => RepΨ (t1 a1 ... ai) d11 ... d1n τ1 where
  rep = case_t1 {rep^i}
...
instance ρ(z1 ... zj) => RepΨ (tm z1 ... zj) dm1 ... dmn τm where
  rep = case_tm {rep^j}
The type class RepΨ has at least two type arguments: t and ψ. If
there are parametric types that ψ depends on, then the type class
also needs to account for those types (d1 ... dn). The class contains
just one member function, rep, used to build representations for
ψ. The function rep has a type class constraint ensuring that g
is an instance of Ψ. There are, at least, m instances of RepΨ, and
those instances define rep with the corresponding case_t function.
If we are implementing an implicit version of the design pattern,
then the definition of rep is complete; otherwise, for an explicit
version, we need to apply case_t to a number i of rep functions
(where i is the number of type parameters of t). The constraints
ρ(a1 ... ai), ..., ρ(z1 ... zj) in these instances are very similar to the
constraints in the typecase component, and in fact for implicit
representations they coincide: they correspond to
representations for the types a1 ... ai, ..., z1 ... zj.
5.1.2
Smart datatype translations
Encoding type-indexed functions with smart datatypes proceeds
in a similar fashion to the encoding with type classes. We will
demonstrate how to do this translation using a GADT syntax (as
found in the new GHC 6.4 Haskell compiler).
A type-indexed type generates a smart datatype of the following
form.
data Ψ t d1 ... dn ψ where
  c_t1 :: ρ(a1 ... ai) => Ψ (t1 a1 ... ai) d11 ... d1n τ1
  ...
  c_tm :: ρ(z1 ... zj) => Ψ (tm z1 ... zj) dm1 ... dmn τm
Instead of being parametrised by a "function" (like the type class
approach), a smart datatype is parametrised by all the types on
which it depends. Another difference from the type class approach
is that the functions that represent each case are now replaced
by constructors c_t1, ..., c_tm that can just be pattern matched (in a
dependent manner) by functions defined over those datatypes. A
final difference is that ρ(a1 ... ai), ..., ρ(z1 ... zj) need to reflect the fact
that we are now using a smart datatype.
The changes to RepΨ are minimal; the only change to the type
class version is that in the definition of rep we now use the constructors
c_t1, ..., c_tm instead of the functions case_t1, ..., case_tm.
class RepΨ t d1 ... dn ψ | t d1 ... dn -> ψ where
  rep :: Ψ t d1 ... dn ψ
instance ρ(a1 ... ai) => RepΨ (t1 a1 ... ai) d11 ... d1n τ1 where
  rep = c_t1 rep^i
...
instance ρ(z1 ... zj) => RepΨ (tm z1 ... zj) dm1 ... dmn τm where
  rep = c_tm rep^j
5.2
Tries
Tries or digital search trees are a traditional example of a generic
type. Tries make use of the structure of search keys in order
to organise information, which can then be efficiently queried. In
this section we will show how to implement generic tries using a
variation of the LAGP type representations. For a more theoretical
presentation of tries, see [20, 17]; the implementation of tries
presented here follows closely the implementations found in those
papers.
In [20], the generic type for tries is given as follows.
FMap⟨t :: *⟩ :: * -> *
FMap⟨Unit⟩ v       = Maybe v
FMap⟨Int⟩ v        = MapInt v
FMap⟨Plus t1 t2⟩ v = OptPair (FMap⟨t1⟩ v) (FMap⟨t2⟩ v)
FMap⟨Prod t1 t2⟩ v = FMap⟨t1⟩ (FMap⟨t2⟩ v)
It is clear that the type-indexed function FMap takes a type
parameter t :: * and another type of kind * and returns another type
of kind *. Only the shape of parameter t is analysed; the other
parameter v needs to be used in the definition because the resulting
type is parametrically polymorphic in relation to v.
We encode this characterisation of FMap as follows.
class FMap g where
  unit :: g Unit v Maybe
  plus :: g a v c -> g b v d -> g (Plus a b) v (PlusCase c d)
  prod :: g a (d v) c -> g b v d -> g (Prod a b) v (ProdCase c d)
  data :: g a v c -> Iso b a -> Iso (d v) (c v) -> g b v d
  int  :: g Int v MapInt
This class forms the typecase participant of an explicit representation
variant of the TypeCase pattern. The class FMap is a variation
of the Generic class from Section 2.2.2. The functor g ::
* -> * -> (* -> *) -> * takes the necessary information to rebuild the type-indexed
type. The three parameters of the functor correspond, respectively,
to the type parameter t, the second parameter and the
resulting type of FMap. (The kind of the resulting type is now
* -> *. We could have used kind * as in FMap, but we believe this
version is slightly more readable.) The function unit just reflects the
change of the functor g and adds the information for the parametric
type v and the functor Maybe that is used to define the trie for the
Unit case. The cases for plus and prod have explicit arguments that
correspond to the recursive calls of the function; and the functors
PlusCase c d and ProdCase c d correspond to the respective cases
of the type-indexed type. The data function handles user-defined
datatypes, having a recursive case and two isomorphisms: the first
between the structural cases and a second between the tries corresponding
to those cases. Finally, we could also define some extra
base cases to handle primitive types such as Int and Char.
The auxiliary definitions for the newtypes PlusCase a b v and
ProdCase a b v are defined as follows.
data OptPair a b = Null | Pair a b

newtype PlusCase a b v = PlusCase{unPlus :: OptPair (a v) (b v)}

newtype ProdCase a b v = ProdCase{unProd :: a (b v)}
The introduction of OptPair a b is for efficiency reasons [20].
In order to use a user-defined type (or a built-in type that does
not have a special case for it), we need to do much the same work
as for GM2 in Section 2.2.2. As an example, we show what to do
for Haskell's built-in lists.
list :: FMap g => g a (FList c v) c -> g [a] v (FList c)
list ra = data (plus unit (prod ra (list ra))) listEP (Iso unFList FList)

listEP :: Iso [a] (Plus Unit (Prod a [a]))
listEP = Iso fromList toList
  where
    fromList [ ]              = Inl Unit
    fromList (x : xs)         = Inr (Prod x xs)
    toList (Inl Unit)         = [ ]
    toList (Inr (Prod x xs))  = x : xs

newtype FList c v = FList{unFList :: (PlusCase Maybe (ProdCase c (FList c))) v}
The function list defines the encoding for the representation of lists.
Because lists are a parametrised datatype with one type parameter,
list is a function that takes one argument; this argument corresponds
to the representation of the list type argument, and list returns the
representation for lists. The definition is nearly the same as the
equivalent for GM, but it takes an extra isomorphism describing
the mapping between the structural representation of a list trie and
a newtype FList c v that is introduced to represent the resulting list
trie. The function listEP is just the isomorphism [a] = 1 + a × [a].
This means that listEP can be shared with other versions of generics
that use the same structural cases. However, list and FList c v still
have to be introduced for each type-indexed datatype. Nevertheless, that is boilerplate code, and, with compiler support, it should be possible to avoid writing it.
Having set up the main components of the design pattern, we
can now move on to define our first function over tries. The function
empty creates a new empty trie and can be defined as follows.
newtype EmptyTrie a v t = EmptyTrie{empty :: t v}

instance FMap EmptyTrie where
  unit             = EmptyTrie Nothing
  int              = EmptyTrie (MapInt [ ])
  plus ra rb       = EmptyTrie (PlusCase Null)
  prod ra rb       = EmptyTrie (ProdCase (empty ra))
  data ra iso iso2 = EmptyTrie (to iso2 (empty ra))
This function is very simple but, nonetheless, it has a type-indexed type: the unit case returns Nothing; the int case returns a value of a user-defined type for integer tries; the cases for prod and plus return, respectively, values of the previously defined ProdCase and PlusCase types; finally, the data case returns a value of the newtype used to represent the trie of some user-defined datatype.
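As a small sanity check (our own unfolding, assuming the definitions above are in scope), the empty trie for keys of type Prod Unit Int unfolds as

  empty (prod unit int)
    = ProdCase (empty unit)   -- by the prod case of EmptyTrie
    = ProdCase Nothing        -- by the unit case

so the empty pair-trie simply stores the empty trie of its first component.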
Another function that we will probably want to have in a library
for tries is the lookUp function which, given a key, returns the
corresponding value stored in the trie.
newtype LUp a v t = LUp{lookUp :: a -> t v -> Maybe v}

instance FMap LUp where
  unit       = LUp (\_ fm -> fm)
  int        = LUp (\i fm -> lookUpInt i fm)
  plus ra rb = LUp (\t fm ->
                 case (unPlus fm) of
                   Null           -> Nothing
                   (Pair fma fmb) -> case t of
                     (Inl l) -> lookUp ra l fma
                     (Inr r) -> lookUp rb r fmb)
  prod ra rb = LUp (\t (ProdCase fma) ->
                 case t of
                   (Prod x y) -> (lookUp ra x ⋄ lookUp rb y) fma)
  data ra iso iso2 =
    LUp (\t r -> lookUp ra (from iso t) (from iso2 r))
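A concrete reading of the ⋄ operator used in the prod case (our own rendering, assuming it denotes Kleisli composition for a monad such as Maybe) is

  (⋄) :: Monad m => (a -> m b) -> (b -> m c) -> (a -> m c)
  (f ⋄ g) x = f x >>= g

so that (lookUp ra x ⋄ lookUp rb y) fma first looks up x in the outer trie and, if that succeeds, looks up y in the inner trie that is returned.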
(The operator ⋄ represents monadic composition.) The functions
empty and lookUp have definitions whose only generic calls are to themselves. However, that is not the case for all generic
functions. One such function is the generic function that creates a
trie containing a single element; a possible definition makes use of
the generic function empty. We discussed in Section 3 that, using
an explicit version of the design pattern, there are some issues with
generic functions calling generic functions other than themselves.
One solution for this problem is using tupling. Just as one does
with a type class, we would choose a fixed set of functions and
group them together in a record. For instance, in the case of tries,
we could have the following.
data Tries a v t = Tries{
  empty   :: t v,
  isempty :: t v -> Bool,
  single  :: a -> v -> t v,
  lookup  :: a -> t v -> Maybe v,
  insert  :: (v -> v -> v) -> a -> v -> t v -> t v,
  merge   :: (v -> v -> v) -> t v -> t v -> t v,
  delete  :: a -> t v -> t v}
With our definition we could, for any function in the record, make
mutual generic calls.
Whilst we could have used a multiple-parameter type class
with functional dependencies in order to implement this library of
functions over tries, there would be one important disadvantage in
doing so (apart from the fact that we need to leave Haskell 98):
we can only have functions on types of kind *. With type classes, contexts are implicitly passed, and there is no way to redefine those implicit behaviours. In other words, type classes have the same limitation as implicit representations as a version of the TypeCase design pattern, in that they can only work on types. On the other hand, because we use external representations, with this implementation we can define generic functions over type constructors.
Tupling is not the only option to solve the problem of generic
function calls. Another possibility is to have the notion of dependencies
: instead of tupling all functions together, we can, for each
generic function that we need to use, include one instance of that
function. Here is a possible definition of single using this strategy.
data Single a v t = Single{
  emptyT :: EmptyTrie a v t,
  single :: a -> v -> t v}

instance FMap Single where
  unit       = Single unit (\_ v -> Just v)
  int        = Single int (\i v -> MapInt [(i, v)])
  plus ra rb = Single (plus (emptyT ra) (emptyT rb))
                 (\i v -> case i of
                    Inl l -> PlusCase (Pair (single ra l v)
                                            (empty (emptyT rb)))
                    Inr r -> PlusCase (Pair (empty (emptyT ra))
                                            (single rb r v)))
  prod ra rb = Single (prod (emptyT ra) (emptyT rb))
                 (\i v -> case i of
                    Prod x y -> ProdCase (single ra x (single rb y v)))
  data ra iso iso2 = Single (data (emptyT ra) iso iso2)
                      (\i v -> to iso2 (single ra (from iso i) v))
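For instance (our own unfolding, assuming the definitions above), the singleton trie for the key Inr 3 with value v at type Plus Unit Int is

  single (plus unit int) (Inr 3) v
    = PlusCase (Pair (empty (emptyT unit)) (single int 3 v))
    = PlusCase (Pair Nothing (MapInt [(3, v)]))

where the empty Unit component is obtained through the emptyT dependency.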
The idea of dependencies is motivated by Dependency-Style Generic
Haskell [26, 27]. In this version of Generic Haskell, the type
system reflects the uses of generic functions in the definitions by
keeping track of constraints that identify such uses. With this definition
, we have to manually introduce those dependencies by adding
extra fields to the record that keep track of all the functions on
which the definition depends. That change is also reflected in the
instance that defines the generic function, where we need to provide
values for the extra fields; the values for those fields just reconstruct
the dependent functions with their values for those fields.
Discussion and conclusions
The goal of design patterns is not to come up with a miraculous
solution for a problem. Instead, design patterns capture good techniques
that appear in the literature or in practice, in a variety of
contexts, and document them to make them easier to identify and
implement. In this paper we have generalised the technique found
in LIGD and GM to a design pattern, and presented a number of
applications of the pattern. Furthermore, we have identified other
occurrences of the design pattern in the literature.
6.1 Related work
The technique used by Danvy [10] and generalised by Yang [37]
allows us to encode type-indexed values in a Hindley-Milner type
system. This encoding is directly related to the explicit representation
version of the TypeCase pattern. This technique influenced
many other works, ranging from type-directed partial evaluation
[37, 9, 12], through embedded interpreters [2], to a generalisation
of families of functions like zipWith [13] -- these are all possible
applications of the TypeCase design pattern. Our paper revises that
technique and shows how slightly richer type systems can be used
to improve it. In particular, the use of a dispatcher makes it possible to automatically build the values encoding types. Moreover,
the issue of composability (identified by Yang), while still a problem
, can benefit from stronger type systems: the use of rank-two
types combined with type classes provides a good solution.
The work on extensional polymorphism [11] presents an approach
that allows functions to implicitly bind the types of their
arguments in a modified version of ML. Furthermore, using a
typecase construct it is possible to support generic programming.
Harper and Morrisett's work on intensional type analysis [16]
presents an intermediate language where run-time type analysis is
permitted, using typecase and Typecase constructs to define type-indexed
functions and type-indexed types, respectively. However,
approaches based on run-time type analysis have important drawbacks
; for instance, they cannot support abstract datatypes, and
they do not respect the parametricity theorem [35, 33]. Subsequent
approaches to intensional type analysis by Crary and others [7, 6]
use a type-erasure semantics that does not suffer from those problems
. Still, those approaches were limited to first-order type analysis
. More recently, Weirich [36] proposed a version of intensional
type analysis covering higher-order types with a type-erasure semantics
. Furthermore, she presented an implementation in Haskell
(augmented with rank-two types). This work inspired Hinze's implementation
of GM, which shows, in essence, how to avoid rank-two
types by using Haskell's class system. Our work makes use
of those results and explains how to simulate typecase constructs.
Furthermore, we show that the limitation of GM that generic functions
with generic types cannot be defined can be lifted with our
more general interpretation.
Generic programming (or perhaps datatype-generic programming
[15]) is about defining functions and types that depend on
the structure of other types. One of the first attempts to produce
a generic programming language was PolyP [21]. This language
allowed the definition of generic functions over regular datatypes
with one type parameter. In Section 4.1 we show that, using our
design pattern, it is possible to define PolyP-like generic functions
just using Haskell 98. A previous attempt [30] to define first-class
PolyP functions in Haskell required extensions to the language.
The Generic Haskell [26, 5] project is more ambitious than PolyP,
and aims at defining generic functions for nearly all types definable in Haskell 98. Furthermore, Generic Haskell features generic
types and generic function customisation (which were not present
in PolyP). Dependency-Style Generic Haskell [26, 27] introduces
a rather complex type system that keeps track of dependencies on
generic function calls. The need for this sophisticated type system
is a consequence of a model for generic programming that allows
generic function customisation. The approach presented in [24] is
another kind of lightweight approach to generic programming, relying
on a run-time type-safe cast operator. With that operator it is
possible to define a number of traversals that allow a very interesting
model of generic programming based on nominal typing. Our
design pattern can be used to encode many of the generic definitions
that these generic programming techniques allow. However,
it can be less practical than approaches providing a special-purpose
compiler. Nevertheless, the advantage of our technique is that we
do not need to commit in advance to a model of generic programming
: we have the freedom to choose our own model of generic
programming.
Design patterns in the object-oriented programming community
have been given a great deal of attention. Whilst amongst the
functional programming community there has been some work
on -- or, at least, involving the concept of -- design patterns
[24, 23, 25], the concept is still much less popular than in the object-oriented
community. Moreover, most of this work presents patterns
that are really more like algorithmic patterns rather than design
patterns. Perhaps the reason why this happens is that functional
languages are very expressive, and often natural features of those
languages, such as laziness or higher-order functions, can be used
to remove the need for complex designs. Nevertheless, we believe
that our design pattern is more related to the OO concept of a design
pattern with type classes/datatypes taking the role of OO interfaces
and class instances taking the role of OO concrete classes. One
difficulty we found in this work is that, unlike OO design patterns, which are documented using informal notations such as UML, we do not have a notation to "talk" about the design of Haskell programs. The notation that we used is quite ad hoc and can be difficult to read.
6.2 Future work
We mentioned that this design pattern seems to be very similar to
OO design patterns. It would be interesting to explore the applicability
of this design pattern in an OO environment.
Design patterns are useful to overcome the lack of certain features
in programming languages. In our case, we overcome the
lack of a typecase construct. The work on intensional type analysis
investigates the possibility of languages supporting typecase constructs
directly in the language. Combining these results in order to
extend Haskell with a more natural support for typecase programming
is something we would like to try in the future.
Problems that use multiple instances of the design pattern are
not composable. For instance, in a generic programming context,
we could have a class Generic that allowed us to define generic
functions with one type parameter; and we could also have a class
FMap for working with tries. Although those classes are structured in a similar way, they require two distinct representations of types, one for each of the classes; we hope to address this impracticality.
Acknowledgements
We would like to thank Ralf Hinze for the discussion that inspired
this paper. Stefan Holdermans, the anonymous referees and the
members of the Algebra of Programming group at Oxford and
the EPSRC-funded Datatype-Generic Programming project made
a number of helpful suggestions.
References
[1] C. Alexander. A Pattern Language. Oxford University Press, 1977.
[2] N. Benton. Embedded interpreters. Microsoft Research, Cambridge, Jan. 2005.
[3] R. Bird and O. de Moor. Algebra of Programming. International Series in Computer Science. Prentice Hall, 1997.
[4] J. Cheney and R. Hinze. A lightweight implementation of generics and dynamics. In Haskell Workshop, pages 90-104, 2002.
[5] D. Clarke and A. Löh. Generic Haskell, specifically. In Generic Programming, pages 21-47. Kluwer, B.V., 2003.
[6] K. Crary and S. Weirich. Flexible type analysis. In International Conference on Functional Programming, pages 233-248, 1999.
[7] K. Crary, S. Weirich, and J. G. Morrisett. Intensional polymorphism in type-erasure semantics. In International Conference on Functional Programming, pages 301-312, 1998.
[8] N. A. Danielsson and P. Jansson. Chasing bottoms: A case study in program verification in the presence of partial and infinite values. In D. Kozen, editor, LNCS 3125: Mathematics of Program Construction, pages 85-109. Springer-Verlag, 2004.
[9] O. Danvy. Type-directed partial evaluation. In Principles of Programming Languages, 1996.
[10] O. Danvy. Functional unparsing. Journal of Functional Programming, 8(6):621-625, 1998.
[11] C. Dubois, F. Rouaix, and P. Weis. Extensional polymorphism. In Principles of Programming Languages, pages 118-129, 1995.
[12] P. Dybjer and A. Filinski. Normalization and partial evaluation. In LNCS 2395: Applied Semantics, pages 137-192. Springer, 2002.
[13] D. Fridlender and M. Indrika. Do we need dependent types? Journal of Functional Programming, 10(4):409-415, 2000.
[14] J. Gibbons. Calculating functional programs. In Algebraic and Coalgebraic Methods in the Mathematics of Program Construction, pages 149-202, 2000.
[15] J. Gibbons. Patterns in datatype-generic programming. In Declarative Programming in the Context of Object-Oriented Languages, 2003.
[16] R. Harper and G. Morrisett. Compiling polymorphism using intensional type analysis. In Principles of Programming Languages, pages 130-141, San Francisco, California, 1995.
[17] R. Hinze. Generalizing generalized tries. Journal of Functional Programming, 10(4):327-351, 2000.
[18] R. Hinze. Fun with phantom types. In J. Gibbons and O. de Moor, editors, The Fun of Programming, pages 245-262. Palgrave, 2003.
[19] R. Hinze. Generics for the masses. In International Conference on Functional Programming, pages 236-243. ACM Press, 2004.
[20] R. Hinze, J. Jeuring, and A. Löh. Type-indexed data types. Science of Computer Programming, 51(1-2):117-151, 2004.
[21] P. Jansson. Functional Polytypic Programming. PhD thesis, Chalmers University of Technology, May 2000.
[22] J. Jeuring and P. Jansson. Polytypic programming. In J. Launchbury, E. Meijer, and T. Sheard, editors, LNCS 1129: Advanced Functional Programming, pages 68-114. Springer-Verlag, 1996.
[23] T. Kühne. A Functional Pattern System for Object-Oriented Design. Verlag Dr. Kovac, ISBN 3-86064-770-9, Hamburg, Germany, 1999.
[24] R. Lämmel and S. Peyton Jones. Scrap your boilerplate: a practical design pattern for generic programming. In Types in Language Design and Implementation, 2003.
[25] R. Lämmel and J. Visser. Design patterns for functional strategic programming. In Workshop on Rule-Based Programming, 2002.
[26] A. Löh. Exploring Generic Haskell. PhD thesis, Utrecht University, 2004.
[27] A. Löh, D. Clarke, and J. Jeuring. Dependency-style Generic Haskell. In International Conference on Functional Programming, pages 141-152, 2003.
[28] E. Meijer, M. Fokkinga, and R. Paterson. Functional programming with bananas, lenses, envelopes and barbed wire. In LNCS 523: Functional Programming Languages and Computer Architecture, pages 124-144. Springer-Verlag, 1991.
[29] D. Menendez. Fixed-length vectors in Haskell. http://www.haskell.org/pipermail/haskell/2005-May/015815.html.
[30] U. Norell and P. Jansson. Polytypic programming in Haskell. In Implementing Functional Languages, 2003.
[31] S. Peyton Jones, editor. Haskell 98 Language and Libraries: The Revised Report. Cambridge University Press, 2003.
[32] S. Peyton Jones, G. Washburn, and S. Weirich. Wobbly types: Type inference for generalised algebraic data types. Microsoft Research, Cambridge, 2004.
[33] J. C. Reynolds. Types, abstraction and parametric polymorphism. In Information Processing 83, pages 513-523. Elsevier, 1983.
[34] P. Wadler. Views: a way for pattern matching to cohabit with data abstraction. In Principles of Programming Languages, pages 307-313. ACM Press, 1987.
[35] P. Wadler. Theorems for free! In Functional Programming and Computer Architecture, 1989.
[36] S. Weirich. Higher-order intensional type analysis in type-erasure semantics. http://www.cis.upenn.edu/~sweirich/papers/erasure/erasure-paper-july03.pdf, 2003.
[37] Z. Yang. Encoding types in ML-like languages. In International Conference on Functional Programming, pages 289-300, 1998.
109 | Generic programming;type-indexed functions;type classes |
203 | UML-Based Service Robot Software Development: A Case Study | The research field of Intelligent Service Robots, which has become more and more popular over the last years, covers a wide range of applications from climbing machines for cleaning large storefronts to robotic assistance for disabled or elderly people. When developing service robot software, it is a challenging problem to design the robot architecture by carefully considering user needs and requirements, implement robot application components based on the architecture, and integrate these components in a systematic and comprehensive way for maintainability and reusability. Furthermore, it becomes more difficult to communicate among development teams and with others when many engineers from different teams participate in developing the service robot. To solve these problems, we applied the COMET design method, which uses the industry-standard UML notation, to developing the software of an intelligent service robot for the elderly, called T-Rot, under development at Center for Intelligent Robotics (CIR). In this paper, we discuss our experiences with the project in which we successfully addressed these problems and developed the autonomous navigation system of the robot with the COMET/UML method. | INTRODUCTION
Robots have been used in several new applications. In recent
years, both academic and commercial research has been focusing
on the development of a new generation of robots in the emerging
field of service robots. Service robots are individually designed to
perform tasks in a specific environment for working with or
assisting humans and must be able to perform services semi- or
fully automatically [1]. Examples of service robots are those used
for inspection, maintenance, housekeeping, office automation and
aiding senior citizens or physically challenged individuals [2]. A
number of commercialized service robots have recently been
introduced such as vacuum cleaning robots, home security robots,
robots for lawn mowing, entertainment robots, and guide robots
[3, 4].
In this context, Public Service Robot (PSR) systems have been
developed for indoor service tasks at Korea Institute of Science
and Technology (KIST) [5, 6]. The PSR is an intelligent service
robot, which has various capabilities such as navigation,
manipulation, etc. Up to now, three versions of the PSR systems,
that is, PSR-1, PSR-2, and a guide robot Jinny have been built.
The worldwide aging population and health care costs of aged
people are rapidly growing and are set to become a major problem
in the coming decades. This phenomenon could lead to a huge
market for service robots assisting with the care and support of
the disabled and elderly in the future [8]. As a result, a new
project is under development at Center for Intelligent Robotics
(CIR) at KIST, i.e. the intelligent service robot for the elderly,
called T-Rot.
In our service robot applications, it is essential to not only
consider and develop a well-defined robot software architecture,
but also to develop and integrate robot application components in
a systematic and comprehensive manner. There are several
reasons for this:
First, service robots interact closely with humans in a wide
range of situations for providing services through robot
application components such as vision recognition, speech
recognition, navigation, etc. Thus, a well-defined robot
control architecture is required for coherently and
systematically combining these services into an integrated
system.
Second, in robot systems, there are many-to-many relations
among software components as well as hardware
components. For instance, a local map module requires
range data from a laser scanner, ultrasonic sensors, and
infrared sensors, as well as prior geometrical descriptions of
the environment. On the other hand, the laser scanner should
provide its data to a path planner, localizer, and a local map
building module. These relationships, as well as interactions
among software or hardware modules, must be carefully
analyzed and systematically managed from an early stage of
development in order to understand the big picture.
Third, the functional performance of each software and
hardware module becomes highly dependent on the
architecture as the number of robot platforms increases [6], new services are added, and existing services are removed or updated to address changes in user needs.
Fourth, previously developed software modules like maps,
localization, and path planners can be directly reused for
new tasks or services by service robot developers. Thus, a
robot architecture, as well as systematic processes or methods, is required to support the implementation of the system and to ensure modularity and reusability.
As a consequence, in the previous work [5,6], the Tripodal
schematic control architecture was proposed to tackle the
problems. Many related research activities have been done.
However, it is still a challenging problem to develop the robot
architecture by carefully taking into account user needs and
requirements, implement robot application components based on
the architecture, and integrate these components in a systematic
and comprehensive way. The reason is that the developers of
service robots generally tend to be immersed in technology
specific components, e.g. vision recognizer, localizer and path
planner, at an early stage of product development without
carefully considering architecture to integrate those components
for various services [9]. Moreover, engineers and developers are
often grouped into separate teams in accordance with the specific
technologies (e.g., speech processing, vision processing), which
makes integration of these components more difficult [7, 9]. In
a project like T-Rot in particular, many engineers and developers (more than 150) from different organizations and teams participate in the implementation of the service robot. Each separate team tends to address specific technologies such as object recognition, manipulation, and navigation. Engineers who come
from different teams are concerned with different characteristics
of the system. Thus, a common medium is required to create
mutual understanding, form consensus, and communicate with
each other for successfully constructing the service robot. Without
such a medium or language, it is difficult to sufficiently
understand the service robot system and interact between teams to
integrate components for services.
Within the domain of software engineering, many approaches
have been suggested for a systematic and complete system
analysis and design, and for the capture of specifications. The
object-oriented paradigm [10,11] is a widely-accepted approach
to not only cover the external and declarative view of a system,
but also at the same time bridge seamlessly with the internal
implementation view of a system [13]. Object-oriented concepts
are crucial in software analysis and design because they focus on
fundamental issues of adaptation and evolution [14]. Therefore,
compared with the traditional structured software development
methods, object-oriented methods are a more modular approach
for analysis, design, and implementation of complex software
systems, which leads to more self-contained and hence modifiable
and maintainable systems. More recently, the Unified Modeling
Language (UML) [15,16] has captured industry-wide attention for
its role as a general-purpose language for modeling software
systems, especially for describing object-oriented models. The
UML notation is useful to specify the requirements, document the
structure, decompose into objects, and define relationships
between objects in a software system. Certain notations in the
UML have particular importance for modeling embedded systems
[17,18], like robot systems. By adopting the UML notation,
development teams thus can communicate among themselves and
with others using a defined standard [14,17,18]. More importantly,
it is essential for the UML notation to be used with a systematic
object-oriented analysis and design method in order to be
effectively applied [14].
As a result, our aim is to develop the intelligent service robot
based on the systematic software engineering method, especially
for real-time, embedded and distributed systems with UML. To
do so, we applied the COMET method, which is a UML-based
method for the development of concurrent applications,
specifically distributed and real-time applications [14]. By using
the COMET method, it is possible to reconcile specific
engineering techniques with the industry-standard UML and
furthermore to fit such techniques into a fully defined
development process towards developing the service robot
systems.
In this paper, we describe our experience of applying the COMET/UML method to developing the intelligent service robot for the elderly, called T-Rot, developed at CIR. In particular, we focused
on designing an autonomous navigation system for the service
robot, which is one of the most challenging issues for the
development of service robots.
Section 2 describes the hardware configuration and services of the
T-Rot, and discusses the related work. Section 3 illustrates how to
apply the COMET method into designing and developing the
autonomous navigation system for the service robot, and
discusses the results of experiments. The lessons learned from the
project are summarized in section 4, and section 5 concludes the
paper with some words on further work.
BACKGROUND ON T-Rot
Fig. 1. KIST service robots
At KIST, intelligent service robots have been developed in large-scale
indoor environments since 1998. So far, PSR-1 and PSR-2, which perform delivery, patrol, and floor cleaning jobs, and a guide robot, Jinny, which provides services such as exhibition guidance and route guidance at a museum, have been built [5,6] (see
Fig. 1). The service robot T-Rot is the next model of the PSR
system under development for assisting aged persons.
Development of T-Rot, in which our role is developing and integrating the robot software, started in 2003, led mainly by CIR, with more than 10 groups consisting of more than 150 researchers and engineers from academia and industry. The project is based on the needs and requirements of elderly people, identified through studies and analysis of the commercial health-care market, with the goal of providing useful services to them. Thus, the aim of this project is to develop the intelligent service robot for the elderly by cooperating with and integrating the results of different research groups. The project is divided into three stages and will continue until 2013; we are now in the first stage, developing the service robot incrementally to provide various services.
2.2 Hardware of T-Rot
The initial version of T-Rot, as shown in Fig. 2, has three single-board computers (SBCs), each with a mobile Pentium 4 (2.2 GHz) and 1 GB of SDRAM. In terms of software environment, Red Hat Linux 9.0 and RTAI (Real-Time Application Interface) [12] are used as the operating system. Fig. 3 shows the hardware configuration as a whole. As mentioned earlier, development of T-Rot is conducted incrementally for various services, and thus the platform will later be extended with manipulators and robot hands. In our project, we developed the robot software based on the initial version of the platform. The details of the hardware platform are described in Table 1.
Fig. 2. T-Rot robot hardware platform
Fig. 3. T-Rot robot hardware platform configuration
Table 1. T-Rot hardware platform devices

SBC: Intel Mobile Pentium 4 (2.2 GHz); 1 GB SDRAM; 30 GB hard disk
Voice: 16 microphones for speaker localization; 1 microphone for speech recognition; 1 speaker for speech generation
Vision: 2 stereo vision cameras for recognizing users and objects (1288 H x 1032 V maximum resolution and 7 Hz frame rates); pan/tilt unit for controlling the vision part
Sensor: 2 laser scanners (front and back); 2 IR scanners (front and back); 12 ultrasonic sensors; 1 gyroscope sensor for measuring balance
Actuator: 2 actuators for the two drive wheels (right and left); 2 free wheels (the support wheels); 2 servo motors (100 W); 2 encoders (2048 ppr); 2 bumpers
Interface: 1 TFT LCD & touch screen (10.4", 1024x768, 26000 colors); KVM (keyboard/mouse); wireless LAN for communications
2.3 Robot Services
Some of the primary services under development, which the initial version of T-Rot provides for the elderly, are described below.
Voice-based Information Services: The robot T-Rot can
recognize voice commands from a user (i.e., an aged person)
via the microphones with which the robot is equipped and can synthesize
voices for services. While a user is watching TV, the user
can ask some questions about the specific TV program or
request a task to open an Internet homepage by speaking the
TV program name.
Sound Localization and Voice Recognition: A user can call the robot by its predefined name, letting the robot recognize the call and determine the direction in which to move toward the user. This service analyzes audio data from the 3 microphones on the shoulder for sound localization and from the 16-microphone array on the head for speech recognition, in order to recognize the command from the user.
Autonomous navigation: A user can command the robot to
move to a specific position in the map to perform some task.
For instance, the robot can navigate to its destination in the
home environment via its sensors, which include laser
scanners and ultrasonic sensors. The robot plans a path to
the specified position, executes this plan, and modifies it as
necessary for avoiding unexpected obstacles. While the
robot is moving, it constantly checks sensor data from its
sensors every 200 ms.
An errand service: The robot can carry objects that a user (i.e., an aged person) commonly uses, such as a plate, books, a cane, a cup of tea, or beverages, according to the user's instructions. For instance, the user can order the robot to bring a cup of tea or a beverage by speaking the name of the drink.
Of these T-Rot services, our emphasis was on the autonomous
navigation service, which is one of the most challenging issues
and is essential in developing service robots, particularly mobile
service robots to assist elderly people. It includes hardware
integration for various sensors and actuators, and the development
of crucial navigation algorithms like maps, path planners, and
localizers as well as software integration of software modules like
a path planner, a localizer, and a map building module.
2.4 Control Architecture of PSR
Up to now, there have been many related research activities to
develop efficient and well-defined control architectures and
system integration strategies for constructing service robots. A
recent trend is that many control architectures are converging to a
similar structure based on a hybrid approach that integrates
reactive control and deliberation [6]. At KIST, for developing
service robots, that is PSR-1, PSR-2, and Jinny in the previous
work [5,6], the Tripodal schematic control architecture was
proposed as the solution to the problem.
One important point of Tripodal schematic design is to integrate
robot systems by using a layered functionality diagram. The
layered functionality diagram is a conceptual diagram of three
layers for arrangement of various hardware and software modules
and functions. It also shows the connectivity and the information
flow between components. Those layers are composed of
deliberate, sequencing, and reactive layers based on the hybrid
approach. The purposes of the deliberate layer are to interface
with a user and to execute a planning process. The sequencing
layer is classified into two groups, that is, the controlling part that
executes the process by managing the components in the reactive
layer and the information part that extracts highly advanced
information from sensor data. The reactive layer controls the real-time
command and hardware-related modules for sensors and
actuators. The detailed description of the whole control architecture
of the PSR is introduced in [5].
However, as described earlier, in order to effectively apply this
approach and the UML notation to developing service robots, it is
essential to use a systematic software engineering process or
methods like object-oriented analysis and design methods,
especially for real-time and embedded systems. We believe that
only a systematic and comprehensive software development
process and method will be able to resolve the issues discussed
before and will be vital for success in developing service robots.
2.5 The COMET method
COMET [14] is a method for designing real-time and distributed
applications, which integrates object-oriented and concurrent
processing concepts and uses the UML notation [15,16]. The
COMET object- oriented software life cycle model is a highly
iterative software development process based around the use case
concept. Therefore, in this project, the COMET method with
UML was used to develop a system for autonomous navigation by
the intelligent service robot, T-Rot. The method separates
requirements activities, analysis activities and design activities,
and these activities are briefly described below. The details are
described in section 3 with the case study.
Requirements modeling - A use case model is developed in
which the functional requirements of the system are defined
in terms of actors and use cases.
Analysis modeling - Static and dynamic models of the
system are developed. The static model defines the
structural relationships among problem domain classes. A
dynamic model is then developed in which the use cases
from the requirements model are refined to show the objects
that participate in each use case and how they interact with
each other.
Design modeling - The software architecture of the system
is designed, in which the analysis model is mapped to an
operational environment. For distributed applications, a
component based development approach is taken, in which
each subsystem is designed as a distributed self-contained
component.
APPLYING THE COMET/UML METHOD TO T-ROT
In this section, we explain how to develop robot software for the
autonomous navigation system with the COMET/UML method.
In our project, the UML notation conforms to UML 1.3 and the
Rational Rose tool is used.
3.1 Requirements Modeling
Capturing the functional requirements of the system is the first
phase in software development, which defines what the system
should do or provide for the user. In our approach, developers can capture the functional requirements or services by using the use case model in terms of use cases and actors (see Fig. 4). To identify and define the requirements of the system more clearly, the system has to be considered as a black box. In a service robot, the actors are usually a human user as well as external I/O devices and an external timer.
Fig. 4. Use case diagram for Navigation (actors: Commander and Clock; the Navigation use case is extended by the Obstacle Avoidance use case)
Table 2 shows the specification for the Navigation use case. In our navigation system, we identified a Commander and a Clock as actors. While the robot is moving, if it recognizes obstacles, it should avoid them in order to continue to the destination. Even when humans or objects suddenly appear, the robot must be able to stop to avoid crashing into them. However, in order to do this, the robot has to check for obstacles using sensor data more often (e.g., every 50 ms) than the normal navigation behavior does (e.g., every 200 ms). As a result, the Obstacle Avoidance use case extends the Navigation use case. While the Navigation use case is executing, if obstacles are recognized, the Obstacle Avoidance use case is triggered to perform an emergency stop of the robot. If the obstacles disappear, the robot moves again toward the destination.
Table 2. Navigation use case

Summary: The Commander enters a destination and the robot system moves to the destination.
Actor: Commander
Precondition: The robot system has the grid map and the current position is known.
Description:
1. The use case begins when the commander enters a destination.
2. The system calculates an optimal path to the destination.
3. The system commands the wheel actuator to start moving to the destination.
4. The wheel actuator notifies the system that it has started moving.
5. The system periodically reads sensor data and calculates the current position.
6. The system determines that it arrives at the destination and commands the wheel actuator to stop.
7. The wheel actuator notifies the system that it has stopped moving, and the use case is finished.
Alternative: 6.1. If the system doesn't arrive at the destination, it keeps moving.
Postcondition: The robot system is at the destination and waiting for the next destination.
3.2 Analysis Modeling
3.2.1 Static Modeling
The objective of static modeling is to understand the interface
between the system and the external environment and to describe
the static structure of the system under development by
developing a system context class diagram. It is specifically
important for real-time and embedded systems like robot systems
[14]. The system context class diagram can be determined by
static modeling of the external classes that connect to the system.
Fig. 5. Robot Navigation System context class diagram (external classes: Commander via CommandLine <<external user>>, Sensor <<external input device>>, WheelActuator <<external output device>>, and Clock <<external timer>>)
The system context class diagram of the Robot Navigation System
is shown in Fig. 5, which illustrates the external classes to which
the system has to interface. In our navigation system, a
commander enters a destination via a command line, to which the
robot should move. The system uses sensor data via various
sensors such as laser scanners, IR scanners, ultrasonic sensors, etc
and it controls the wheels of the robot via the wheel actuator.
Therefore, the external classes correspond to the users (i.e., a
Commander who interacts with the system via a Command Line),
and I/O devices (i.e., a Sensor and Wheel Actuator). A Clock actor
needs an external timer class called Clock to provide timer events
to the system. This external timer class is needed to periodically
check sensor data via those sensors for avoiding obstacles (i.e.,
doing the emergency stop) while the robot is moving.
Next, to structure the Robot Navigation System into objects,
object structuring needs to be considered in preparation for
dynamic modeling. The objective of the object structuring is to
decompose the problem into objects within the system. We
identified the internal objects according to the object structuring
criteria in COMET (see Fig. 6). In our system, interface objects,
i.e. a Command Line Interface, Sensor Interface and Wheel
Actuator Interface are identified by identifying the external
classes that interface to the system, i.e. the Command Line,
Sensor, and Wheel Actuator, respectively. There are four entity
objects identified, that is, a Current Position, Destination,
Navigation Path and Navigation Map, which are usually long-living
object that stores information. In addition to those objects,
there is a need for control objects, which provide the overall
coordination for objects in a use case and may be coordinator,
state-dependent control, or timer objects. The Navigation System
has a state-dependent control object called Navigation Control
that controls the wheel actuator and sensors. The states of the
Navigation Control object are shown on a Navigation Control
statechart (this will be discussed in the dynamic modeling). There
are two timer objects, i.e. a Navigation Timer and an Obstacle
Avoidance Timer. The Obstacle Avoidance Timer is activated by a
timer event from an external timer to periodically check whether there is any obstacle around the robot. On the other hand, the
Navigation Timer is started by the Navigation Control object and
generates a timer event for navigation. Also, a Localizer
algorithm object and Path Planner algorithm object are identified,
which encapsulate an algorithm used in the problem domain,
namely the autonomous navigation.
Fig. 6. Object structuring class diagram for the Navigation System (interface, entity, control, timer, and algorithm objects within the Robot Navigation System)
3.2.2 Dynamic Modeling
Dynamic modeling emphasizes the dynamic behavior of the
system and plays an important role for distributed, concurrent and
real-time system analysis. The dynamic model defines the object
interactions that correspond to each use case and thus is based on
the use cases and the objects identified during object structuring.
In our case, collaboration diagrams are developed to show the
sequence of object interactions for each use case. Additionally, if
the collaboration involves the state-dependent object, which
executes a statechart, the event sequence is shown on a statechart.
Fig. 7. Collaboration diagram for the Navigation use case (objects arranged in the deliberate, sequencing, and reactive layers, with message sequences 1, 2, and 3 and their nested numbering)
In the navigation system, the Localizer encapsulates the algorithm that calculates the current position based on the sensor data; the role of the Localizer is thus to update the current position of the service robot. The Path Planner object provides a method for calculating a path to the destination based on both the sensor information and the current position calculated by the Localizer. The Navigation Timer is an internal timer that is controlled by the Navigation Control. After the destination is entered by the external user, the Navigation Control starts the Navigation Timer; the timer then generates a timer event periodically (i.e., every 200 ms) until the Navigation Control stops it.
The Navigation use case starts with the commander entering the destination into the navigation system. The message sequence number starts at 1, the first external event initiated by the actor, and the subsequent numbering runs in sequence from 1.1 to 1.18, as shown in Fig. 7. The next message sequence, activated by the Navigation Timer, is numbered 2, followed by the events 2.1, 2.2, and so forth. The following message sequences are illustrated in the collaboration diagram (see Fig. 7).
The collaboration diagram for the Obstacle Avoidance use case is
shown in Fig. 8. When activated by the Obstacle Avoidance
Timer every 50 ms, the Sensor Interface object reads sensor data
via various sensors (Events 4.1, 5.1, 6.1). If an obstacle is
recognized, the Obstacle Avoidance Timer sends the emergency
stop message to the Wheel Actuator Interface (Event 4.5).
Afterwards, the timer also sends a suspend event to the
Navigation Control. If the obstacle disappears, the timer sends a
restart event to the Navigation Control for the robot to move
again.
Fig. 8. Collaboration diagram for the Obstacle Avoidance use case (message sequences 4, 5, and 6 among the Obstacle Avoidance Timer, Sensor Interface, Wheel Actuator Interface, and Navigation Control)
With COMET, the software architecture can be based on a
software architectural style (pattern) such as client/server or
layers of abstraction. In our project, the layered strategy of the
Tripodal schematic design described in section 2 is applied for
design and modeling of the robot system, which provides a
conceptual diagram of three layers (i.e., deliberate, sequencing,
and reactive layers) for arrangement of various hardware and
software modules and functions. Therefore, in the collaboration
diagrams (see Fig. 7 and 8), the Command Line Interface is
located in the deliberate layer and the Sensor Interface, Wheel
Actuator Interface, and Obstacle Avoidance Timer are in the
reactive layer. The others are positioned in the sequencing layer.
In our navigation system, after drawing the collaboration
diagrams for the Navigation and Obstacle Avoidance use cases
which include the Navigation Control state-dependent object, we
develop a Navigation Control statechart, which is executed by the
Navigation Control object. The statechart needs to be considered
in connection with the collaboration diagram. Specifically, it is
required to take into account the messages that are received and
sent by the control object, which executes the statechart [14]. An
input event (e.g., 1.1: destination entered) into the Navigation
Control object on the collaboration diagram should be consistent
with the same event shown on the statechart. The output event, which causes an action or enables or disables an activity, like 1.2: Read Map (which causes an action), on the statechart must be consistent with the output event depicted on the collaboration diagram.
Because the statechart modeling involves two state-dependent use
cases in the navigation system, it is also required to consolidate
the two partial statecharts to create a complete statechart. The
complete statechart for both the Navigation and Obstacle
Avoidance use cases is shown in Fig. 9.
Idle
Starting
Planning a Path
Checking
Destination
Stopping
3.16: Stopped / 3.17: Stop Timer
Reading Sensors
Localizing
Moving
1.17, 5.9: Started / 1.18, 5.10: Start Timer
1.13: Planned Path[ Start ] / 1.14: Start
2.18: Planned Path[ Move ] / 2.19: Move
Reading
Map
2.4, 3.4: Sensor Data / 2.5, 3.5: Read Map
1.1: Destination Entered / 1.2 : Read Map, 1.2a: Store Destination
1.3, 2.6, 3.6: Map / 1.4, 2.7, 3.7: Read Current Position
Updating
Map
2.10, 3.10:Current Position[ Move ] / 2.11, 3.11: Check Destination
1.7: Current Position[ Start ] / 1.8: Update Map
1.9, 2.14: Updated Map / 1.10, 2.15: Read a Path
2.12: No / 2.13: Update Map
3.12 : Yes / 3.13: Stop
Suspending
2, 3: After( Elapsed Time ) / 2.1, 3.1: Read Sensors
4.9: Suspend / 4.10: Stop Timer
5.5: Restart / 5.6: Start
6.1: Time Expired
Fig. 9. Statechart for Navigation Control
3.3 Design Modeling
3.3.1 Software Architecture
In this phase, all collaboration diagrams developed for use cases
in the analysis model are merged into the consolidated
collaboration diagram.
The consolidated collaboration diagram is
thus intended to be a complete description of all objects and their
interactions.
The consolidation of the two collaboration diagrams respectively
supporting the two use cases is shown in Fig. 10. Some objects
and message interactions appear on more than one collaboration
diagram. For instance, the Navigation Control, Navigation Timer,
Sensor Interface and Wheel Actuator Interface objects participate
in both the Navigation and Obstacle Avoidance use cases. For
those objects, their message interactions are only shown once in
the consolidated collaboration diagram.
3.3.2 Architectural Design of Distributed Real-time Systems
The robot system is a distributed embedded system that executes on distributed nodes using communication methods such as TCP/IP, CAN (Controller Area Network), and wired/wireless LAN. With COMET, a distributed real-time system is structured into
distributed subsystems. Tasks in different subsystems may
communicate with each other via several types of message
communication, such as asynchronous, synchronous with reply,
synchronous without reply, and client/server communications, etc.
Hence, we should define distributed nodes and their messages to
each node.
The overall distributed software architecture for the robot
navigation system is depicted in Fig. 11. In the robot system,
objects that are part of the navigation are located in the robot
navigation system. The robot navigation system communicates
with the external I/O devices via synchronous message without
reply communication and with the external timer via
asynchronous message communication.
Fig. 10. Consolidated collaboration diagram for the Navigation System (all objects and message interactions from the Navigation and Obstacle Avoidance use cases, arranged in the deliberate, sequencing, and reactive layers)
Fig. 11. Distributed software architecture for the Navigation System (the Robot Navigation System exchanges synchronous messages without reply with CommandLine, Sensor, and WheelActuator, and receives asynchronous timer events from Clock)
3.3.3 Task Structuring
During the task structuring phase, a task architecture can be
developed in which the system is structured into concurrent tasks,
and the task interfaces and interconnections are defined. A task is
an active object and has its own thread of control. In this sense,
the term "object" will be used to refer to a passive object in this
paper. In COMET, task structuring criteria are provided to help in
mapping an object-oriented analysis model of the system to a
concurrent tasking architecture. At the end of this phase, a task
behavior specification (TBS) is developed.
The task architecture for the Navigation System is shown in Fig.
12. In order to determine the tasks in the system, it is necessary to
understand how the objects in the application interact with each
other based on the collaboration diagrams. In the collaboration
diagram of Fig. 7, the Localizer object reads sensor data and the
map from the Current Position object, calculates a new current
position, and sends the current position to the Current Position
object for updating it. Thus, the Localizer object is structured as
an asynchronous algorithm task called Localizer. There are two
asynchronous algorithms, i.e. Localizer and Path Planner, which
are internal asynchronous tasks. There are four passive entity
objects, i.e. Destination, Current Position, Navigation Map, and
Navigation Path, which do not need a separate thread of control
and are further all categorized as data abstraction objects. The
Sensor and Wheel Actuator are a passive input device and a
passive output device, respectively, because they do not generate
an interrupt on completion of the input or output operation.
Fig. 12. Task architecture for Navigation System
The Navigation Control is a state-dependent control object that
executes the Navigation Control statechart and is structured as a
control task because it needs to have a separate thread of control.
The Navigation Control object can be combined with the
Command Line Interface, Navigation Timer, Sensor Interface, and
Wheel Actuator Interface objects into one task, Navigation
Controller, based on the control clustering task structuring
criterion because it is not possible for them to execute
concurrently (see the middle of Fig. 12). The Obstacle Avoidance
Timer object is structured as a periodic task, activated periodically
to read sensor data. It can be grouped with the Sensor Interface
and Wheel Actuator Interface into one sequentially clustered task,
Obstacle Avoidance Controller based on sequential clustering
since those are carried out in a sequential order. The design of
those composite tasks, the Navigation Controller and Obstacle
Avoidance Controller are considered in the next section (i.e.,
detailed software design).
After developing the task architecture, a task behavior is
described for specifying the characteristics of each task based on
COMET. During the task structuring, the TBS focuses on the task
inputs and outputs. One part of the TBS, i.e. the task's event
sequencing logic is defined in the detailed software design phase.
3.3.4
Detailed Software Design
The internals of composite tasks which have passive objects
nested inside them are designed, detailed task synchronization
issues are addressed, and each task's internal event sequencing
logic is defined in this phase. Before this is done, the information
hiding classes (from which the passive objects are instantiated)
are designed. In particular, the operations of each class and the
design of the class interfaces are determined and specified in a
class interface specification (because of space limitation, the
detailed TBS and the class interface specification have not been
included).
Let us consider the internal design of the Navigation Controller,
which is a composite task designed as a control clustering task, to
show the nested information hiding objects (see Fig. 13). The
information hiding objects are the Navigation Control state-dependent
control object, the Sensor Interface and Wheel
Actuator Interface objects, the Navigation Timer object and the
user interface object, the Command Line Interface. In addition,
the Navigation Controller contains one coordinator object called
Navigation Coordinator, which receives incoming messages and
coordinates the execution of the other objects. That is, the
Navigation Coordinator extracts the event from the request and
calls Navigation Control.processEvent (in event, out action) (see
Fig. 13). The Navigation Control returns the action to be performed, such as store, check, or start, according to the state transition table. Afterwards, the Navigation Coordinator initiates
the action.
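To make this coordinator pattern concrete, the following minimal Python sketch (not the project's actual code, which targets the robot's real-time environment) shows a Navigation Coordinator that extracts an event from an incoming request, asks a Navigation Control statechart for the corresponding action via processEvent, and then initiates that action. The state-transition entries and the Destination stub are illustrative assumptions.

class NavigationControl:
    """Encapsulates the Navigation Control statechart."""
    def __init__(self):
        self.state = "Idle"
        # (current state, event) -> (action, next state); illustrative entries only
        self.table = {
            ("Idle", "Destination Entered"): ("store", "Storing Destination"),
            ("Storing Destination", "Destination Stored"): ("check", "Checking Destination"),
        }

    def process_event(self, event):
        action, self.state = self.table.get((self.state, event), (None, self.state))
        return action


class Destination:
    """Data abstraction object holding the entered destination."""
    def store(self, destination):
        print("Destination stored:", destination)


class NavigationCoordinator:
    """Receives incoming messages, consults Navigation Control for the action
    to perform, and then initiates that action on the nested objects."""
    def __init__(self):
        self.control = NavigationControl()
        self.destination = Destination()

    def start_robot(self, destination):
        # Extract the event from the request and ask the statechart what to do.
        action = self.control.process_event("Destination Entered")
        if action == "store":
            self.destination.store(destination)


NavigationCoordinator().start_robot((3.0, 4.0))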
Fig. 13. Detailed software design for Navigation Controller
In our system, communication between tasks such as the
Navigation Controller, Localizer, and Path Planner is through
data abstraction classes like the Current Position and Navigation
Path. As a result, connector objects [14] are not used for the
message communication interface between tasks.
Fig. 14. The task event diagram for Navigation Controller
Lastly, the task's event sequencing logic is specified, which describes how the task responds to each of its message or event inputs. However, instead of the informal pseudocode used in COMET, task event diagrams were developed for the tasks in this project using UML sequence diagrams to improve understanding and readability, which turned out to be very useful when implementing the tasks. Fig. 14 illustrates the task event diagram for the Navigation Controller.
LESSONS LEARNED
This section summarizes the lessons learned from the project
where we successfully applied the object-oriented method with
UML to developing the service robot.
4.1
UML for Service Robot Domain
Through the case study, we found that the UML standard was
very useful as a notation for specifying the requirements,
documenting the structure, decomposing into objects, and
defining relationships between objects especially in a service
robot system. Certain diagrams and notations were particularly important for analyzing, designing, and modeling service robot systems, as described below.
Use case diagrams: With the use case model, services or
functions (i.e., functional requirements), which a service
robot performs or provides, can be defined in terms of actors
who are users of the robot system and use cases. Each use
case defines the behavior of some aspect of the robot system
without revealing its internal structure.
Class diagrams: The class diagram notation is used to depict
the static model, which focuses on the static structure of the
robot system. The class diagram shows the classes of objects
in the system, their internal structure including attributes,
their operations, and their relationships to other classes
(such as associations and generalization/inheritance).
Collaboration diagrams: This diagram shows how objects
that participate in each use case interact with each other by
sending and receiving messages in the dynamic model. It
defines a specific way to use objects in the robot system by
showing the possible interactions between them, especially
to satisfy the needs described in the use case, namely
provide the services. Compared to a sequence diagram, the collaboration diagram is particularly useful for synthesizing the individual collaborations into the software architecture of the system, as discussed in Section 3.3.
Sequence diagrams: This diagram shows object interactions arranged in time sequence and, in particular, can be used to describe the task event sequencing logic, i.e., how the task responds to each of its message or event inputs.
In COMET, the event sequencing logic is usually described
informally in Pseudo code. We found that the sequence
diagram can help the engineers describe the task event
sequencing logic and implement the tasks by showing the
order in which messages are passed between tasks and
objects.
State chart diagrams: The service robot system is highly
state-dependent like real-time embedded systems. This
diagram describes how state-dependent aspects of the
system are defined by a finite state machine and can help in designing and developing highly state-dependent systems. It is also possible for this diagram to model object behavior over several use cases together with the collaboration diagrams.
In addition, by using the UML notation as a defined standard,
different research groups and development teams can
communicate among themselves and with others to develop and
integrate specific components for providing various services.
4.2
Importance of Systematic Process/Method
for Service Robot Domain
In order to effectively apply the UML notation and the robot
control architecture like the Tripodal schematic control
architecture to developing service robots, it is essential to use
them with a systematic software engineering process or method,
like an object-oriented analysis and design method, especially for
real-time and embedded systems. It is not possible to resolve the
issues in integrating and developing the service robots discussed
before without systematic and comprehensive software
development methods, particularly for service robots.
In our case study, we applied COMET/UML method to
developing the service robot. The COMET object-oriented
software life cycle model is a highly iterative software
development process based around the use case concept. In the
requirements model, the service or functions (i.e., the function
requirements) of the robot system are defined in terms of actors
and use cases. In the analysis model, the use case is refined to
describe the objects that participate in the use case, and their
interactions. In the design model, the robot software architecture
is developed, emphasizing issues of distribution, concurrency, and
information hiding. This project showed that this was a viable
approach because applying the COMET method with UML led to
developing an effective service robot architecture by carefully
taking into account user needs and requirements, implementing
technical components based on the architecture, and integrating
these components in a systematic and comprehensive fashion.
4.3
Customizing the COMET Method for
Service Robot Domain
Service robots like PSR-1, PSR-2, and Jinny have been built at
KIST based on the Tripodal schematic control architecture. The
Tripodal schematic design focused on developing an efficient and well-defined control architecture and a system integration strategy
for constructing service robots. T-Rot is the next model of the
PSR system under development for assisting aged persons. One of
our aims is to develop the intelligent service robot for the elderly
by cooperating and integrating the results of different research
groups in accordance with the Tripodal schematic control
architecture that has already been implemented on the PSR and
successfully tested. Thus, the layered strategy of the Tripodal
schematic design has been applied for design and modeling of the
T-Rot. In the collaboration diagrams of the analysis modeling,
and the consolidated collaboration diagram and the task
architecture of the design modeling, the Command Line Interface
is located in the deliberate layer for interfacing with a user, while
the Sensor Interface, Wheel Actuator Interface, and Obstacle
Avoidance Timer are in the reactive layer for controlling and
managing the components in the reactive layer. The Navigation
Control, Navigation Timer, Destination, Current Position,
Navigation Path, Navigation Map, Localizer, and Path Planner are
positioned in the sequencing layer for controlling the robot
motion by executing relatively simple computations in real-time.
As a result, the Tripodal schematic control architecture was
helpful in arranging various hardware and software modules and
functions.
Additionally, as stated in section 4.1, in COMET, the event
sequencing logic is usually described informally in Pseudo code.
We found that the sequence diagram can help the engineers
describe the task event sequencing logic and implement the tasks
by showing the order in which messages are passed between tasks
and objects. Hence, instead of using informal Pseudo code, task
event diagrams were developed for tasks by using the UML
sequence diagrams to improve understanding and readability. It
turned out that these task event diagrams are very useful when
implementing these tasks.
4.4
Human Communication
Human communication to understand and develop what is desired
of the service robot is likely to be more difficult than expected. In
our case study, most engineers who are involved in the project
come from the mechanical or robotics engineering field. The
different research groups and teams tend to focus on their own
technology and components and thus it is not easy to realize how
much knowledge they have and how much information will need
to be made explicit and communicated to integrate those
components for the service robot. Several things can be done to
improve the situation. One is for engineers from different teams,
especially software engineers and mechanical engineers to work
together for analyzing, designing, and developing the robot
system during the project. It is very important that all engineers
and developers from different groups and teams interact directly.
Also, in order to develop a common ground for understanding the
domain, technology, process and method, a common medium or
language such as UML is critical. In addition to the standard
notation like UML, guidelines about what notation to use, when
to use it, and how to use the notation comprehensively and
systematically are required. This is why the method like COMET
is needed. Domain knowledge and experiences in each area will
make it much easier to communicate what is desired: for example, the service robot domain, autonomous robot navigation, and vision processing for software engineers; and object-oriented concepts, software development processes, and UML for mechanical engineers. If there is relatively little domain knowledge and experience, a one-day or half-day technical workshop is helpful. This has proved useful in a variety of settings during the development of the robot system, such as building up background knowledge of the domain and technology.
4.5
Necessity of Multi-Aspect Integration
Method for Service Robot Domain
A service robot should be able to perform several tasks
autonomously to provide various services for human beings in a
dynamic and partially unknown environment by applying both
technology and knowledge. In order to be able to achieve
complex tasks, perform new tasks, and integrate data learned from experience for the robot services, it is important to consider not only the robot's behavior but also the robot's other characteristics such as learning, planning, decision-making, and knowledge representation. It is necessary to allow existing robot behaviors to be used in new ways, to plan for accomplishing more complex tasks, to reuse the knowledge of one task in other tasks, and to complete tasks more efficiently by learning various action sequences.
In the case study, we focused on designing and modeling the
robot's behavioral aspect, which is related to the sequencing and
reactive layers in the Tripodal layered design, by applying the
COMET/UML method. However, it is clear that planning and
learning abilities have to also be considered when designing and
developing a service robot, which correspond to the deliberate
layer that is responsible for interfacing with a user and executing
the planning process. As a consequence, a task manager, which is
located in the deliberate layer, has been in charge of these robotic
abilities in the project. Because the planning process is knowledge
based and not reactive, a different analysis and design approach is
needed for the task manager. Hence, we are convinced that
methods to model the robot's learning, planning and decision
making aspects as well as to incorporate, use and maintain task
knowledge are necessary. Furthermore, it is essential to integrate
these methods with the COMET method into a multi-aspect
integration method for developing service robot software.
CONCLUSIONS AND FURTHER WORK
Service robots have been suggested for a growing number of
applications. A service robot is a complex system as it includes
various technical components (i.e., hardware and software) to be
integrated correctly and many different research groups to
develop the components. As a result, it is not only essential to
develop complex algorithms or technical components, but also to
integrate them adequately and correctly to provide the various
robot services.
In the paper, we have presented our case study where we
developed the autonomous navigation system for the intelligent
service robot for the elderly, T-Rot. The object-oriented method
for real-time embedded systems, COMET has been applied to the
service robot T-Rot with the industry standard UML. It makes it
possible to reconcile specific engineering techniques like robot
technologies with the UML notation and furthermore to fit such
techniques into a fully defined development process towards
developing the service robot system. In this way, we contribute to
developing service robot software with UML in a systematic
manner.
The service robot T-Rot is still under development (at this point, we are at the first of three stages). Thus, the current focus of our work is to extend the applications to include vision processing, speech processing, and manipulation for providing various robot services. Also, we are working on designing the knowledge-based task manager to improve the robot's abilities.
ACKNOWLEDGMENTS
This research (paper) was performed for the Intelligent Robotics
Development Program, one of the 21st Century Frontier R&D
Programs funded by the Ministry of Commerce, Industry and
Energy of Korea.
REFERENCES
[1] K. Kawamura and M. Iskarous, "Trends in service robots for the disabled and the elderly," Proc. of the 1994 IEEE/RSJ Int. Conf. on Intelligent Robots and Systems, Vol. 3 (1994) 1674.
[2] R. D. Schraft, "Mechatronics and robotics for service applications," IEEE Robotics and Automation Magazine, no. 4, pp. 31-35, Dec. 1994.
[3] T. Rofer, A. Lankenau, and R. Moratz, "Service Robotics - Applications and Safety Issues in an Emerging Market," Workshop W20 Proc., ECAI 2000, Berlin, 2000.
[4] B. You et al., "Development of a Home Service Robot 'ISSAC'," Proc. of the IEEE/RSJ Int. Conf. on Intelligent Robots and Systems, Las Vegas, Nevada, 2003, pp. 2630-2635.
[5] G. Kim, W. Chung, M. Kim, and C. Lee, "Tripodal Schematic Design of the Control Architecture for the Service Robot PSR," Proc. of the IEEE Conf. on Robotics and Automation, Taipei, Taiwan, pp. 2792-2797, 2003.
[6] G. Kim, W. Chung, M. Kim, and C. Lee, "Implementation of Multi-Functional Service Robots Using Tripodal Schematic Control Architecture," Proc. of the IEEE Conf. on Robotics and Automation, New Orleans, LA, USA, 2004.
[7] A. C. Dominguez-Brito, D. Hernandez-Sosa, J. Isern-Gonzalez, and J. Cabrera-Games, "Integrating robotics software," IEEE International Conference on Robotics and Automation, 2004.
[8] Q. Meng and M. H. Lee, "Learning and Control in Assistive Robotics for the Elderly," Proc. of the 2004 IEEE Conf. on Robotics, Automation and Mechatronics, Singapore, Dec. 2004, pp. 71-76.
[9] M. Kim, J. Lee, K. Kang, Y. Hong, and S. Bang, "Re-engineering Software Architecture of Home Service Robots: A Case Study," Proc. of the 27th Int. Conf. on Software Engineering (ICSE 2005), St. Louis, USA, May 2005, pp. 505-513.
[10] G. Booch, Object-Oriented Analysis and Design with Applications, 2nd ed., Redwood City, CA: Benjamin Cummings, 1994.
[11] I. Jacobson, Object-Oriented Software Engineering, Addison Wesley, 1992.
[12] Real-Time Application Interface, 2004. Available at: http://www.rtai.org
[13] G. de Jong, "A UML-Based Design Methodology for Real-Time and Embedded Systems," DATE 2002, March 2002.
[14] H. Gomaa, Designing Concurrent, Distributed, and Real-Time Applications with UML, Addison-Wesley, 2000.
[15] OMG Unified Modeling Language, Version 1.5, March 2003. Available at: http://www.uml.org
[16] M. Fowler and K. Scott, UML Distilled, 2nd Edition, Addison Wesley, 2000.
[17] G. Martin, L. Lavagno, and J. Louis-Guerin, "Embedded UML: a merger of real-time UML and codesign," CODES 2001, Copenhagen, April 2001, pp. 23-28.
[18] G. Martin, "UML for Embedded Systems Specification and Design: Motivation and Overview," DATE 2002, March 2002.
| Software engineering;object-oriented analysis and design methods;service robot development;UML |
204 | Unified Utility Maximization Framework for Resource Selection | This paper presents a unified utility framework for resource selection of distributed text information retrieval. This new framework shows an efficient and effective way to infer the probabilities of relevance of all the documents across the text databases. With the estimated relevance information, resource selection can be made by explicitly optimizing the goals of different applications. Specifically, when used for database recommendation, the selection is optimized for the goal of high-recall (include as many relevant documents as possible in the selected databases); when used for distributed document retrieval, the selection targets the high-precision goal (high precision in the final merged list of documents). This new model provides a more solid framework for distributed information retrieval. Empirical studies show that it is at least as effective as other state-of-the-art algorithms. | INTRODUCTION
Conventional search engines such as Google or AltaVista use an ad-hoc information retrieval solution that assumes all the searchable documents can be copied into a single centralized
database for the purpose of indexing. Distributed information
retrieval, also known as federated search [1,4,7,11,14,22] is
different from ad-hoc information retrieval as it addresses the
cases when documents cannot be acquired and stored in a single
database. For example, "Hidden Web" contents (also called
"invisible" or "deep" Web contents) are information on the Web
that cannot be accessed by the conventional search engines.
Hidden web contents have been estimated to be 2-50 [19] times
larger than the contents that can be searched by conventional
search engines. Therefore, it is very important to search this type
of valuable information.
The architecture of distributed search solution is highly
influenced by different environmental characteristics. In a small
local area network such as small company environments, the
information providers may cooperate to provide corpus statistics
or use the same type of search engines. Early distributed
information retrieval research focused on this type of
cooperative environments [1,8]. On the other side, in a wide
area network such as very large corporate environments or on
the Web there are many types of search engines and it is difficult
to assume that all the information providers can cooperate as
they are required. Even if they are willing to cooperate in these
environments, it may be hard to enforce a single solution for all
the information providers or to detect whether information
sources provide the correct information as they are required.
Many applications fall into the latter type of uncooperative
environments such as the Mind project [16] which integrates
non-cooperating digital libraries or the QProber system [9]
which supports browsing and searching of uncooperative hidden
Web databases. In this paper, we focus mainly on uncooperative
environments that contain multiple types of independent search
engines.
There are three important sub-problems in distributed
information retrieval. First, information about the contents of
each individual database must be acquired (resource
representation) [1,8,21]. Second, given a query, a set of
resources must be selected to do the search (resource selection)
[5,7,21]. Third, the results retrieved from all the selected
resources have to be merged into a single final list before it can
be presented to the end user (retrieval and results merging)
[1,5,20,22].
Many types of solutions exist for distributed information retrieval. A database recommendation system, such as Invisible-web.net, contains only the resource representation and resource selection components. This solution is useful when the
users want to browse the selected databases by themselves
instead of asking the system to retrieve relevant documents
automatically. Distributed document retrieval is a more
sophisticated task. It selects relevant information sources for
users' queries as the database recommendation system does.
Furthermore, users' queries are forwarded to the corresponding
selected databases and the returned individual ranked lists are
merged into a single list to present to the users.
The goal of a database recommendation system is to select a
small set of resources that contain as many relevant documents
as possible, which we call a high-recall goal. On the other side,
the effectiveness of distributed document retrieval is often
measured by the Precision of the final merged document result
list, which we call a high-precision goal. Prior research
indicated that these two goals are related but not identical [4,21].
However, most previous solutions simply use effective resource
selection algorithm of database recommendation system for
distributed document retrieval system or solve the inconsistency
with heuristic methods [1,4,21].
This paper presents a unified utility maximization framework to
integrate the resource selection problem of both database
recommendation and distributed document retrieval together by
treating them as different optimization goals.
First, a centralized sample database is built by randomly
sampling a small amount of documents from each database with
query-based sampling [1]; database size statistics are also
estimated [21]. A logistic transformation model is learned offline with a small number of training queries to map the
centralized document scores in the centralized sample database
to the corresponding probabilities of relevance.
Second, after a new query is submitted, the query can be used to
search the centralized sample database which produces a score
for each sampled document. The probability of relevance for
each document in the centralized sample database can be
estimated by applying the logistic model to each document's
score. Then, the probabilities of relevance of all the (mostly
unseen) documents among the available databases can be
estimated using the probabilities of relevance of the documents
in the centralized sample database and the database size
estimates.
For the task of resource selection for a database
recommendation system, the databases can be ranked by the
expected number of relevant documents to meet the high-recall
goal. For resource selection for a distributed document retrieval
system, databases containing a small number of documents with
large probabilities of relevance are favored over databases
containing many documents with small probabilities of
relevance. This selection criterion meets the high-precision goal
of distributed document retrieval application. Furthermore, the
Semi-supervised learning (SSL) [20,22] algorithm is applied to
merge the returned documents into a final ranked list.
The unified utility framework makes very few assumptions and
works in uncooperative environments. Two key features make it
a more solid model for distributed information retrieval: i) It
formalizes the resource selection problems of different
applications as various utility functions, and optimizes the utility
functions to achieve the optimal results accordingly; and ii) It
shows an effective and efficient way to estimate the probabilities
of relevance of all documents across databases. Specifically, the
framework builds logistic models on the centralized sample
database to transform centralized retrieval scores to the
corresponding probabilities of relevance and uses the centralized
sample database as the bridge between individual databases and
the logistic model. The human effort (relevance judgment)
required to train the single centralized logistic model does not
scale with the number of databases. This is a large advantage
over previous research, which required the amount of human
effort to be linear with the number of databases [7,15].
The unified utility framework is not only more theoretically
solid but also very effective. Empirical studies show the new
model to be at least as accurate as the state-of-the-art algorithms
in a variety of configurations.
The next section discusses related work. Section 3 describes the
new unified utility maximization model. Section 4 explains our
experimental methodology. Sections 5 and 6 present our
experimental results for resource selection and document
retrieval. Section 7 concludes.
PRIOR RESEARCH
There has been considerable research on all the sub-problems of
distributed information retrieval. We survey the most related
work in this section.
The first problem of distributed information retrieval is resource
representation. The STARTS protocol is one solution for
acquiring resource descriptions in cooperative environments [8].
However, in uncooperative environments, even if the databases are willing to share their information, it is not easy to judge whether the information they provide is accurate or not. Furthermore, it
is not easy to coordinate the databases to provide resource
representations that are compatible with each other. Thus, in
uncooperative environments, one common choice is query-based
sampling, which randomly generates and sends queries to
individual search engines and retrieves some documents to build
the descriptions. As the sampled documents are selected by
random queries, query-based sampling is not easily fooled by any adversarial spammer that is interested in attracting more traffic.
Experiments have shown that rather accurate resource
descriptions can be built by sending about 80 queries and
downloading about 300 documents [1].
Many resource selection algorithms such as gGlOSS/vGlOSS
[8] and CORI [1] have been proposed in the last decade. The
CORI algorithm represents each database by its terms, the
document frequencies and a small number of corpus statistics
(details in [1]). As prior research on different datasets has shown
the CORI algorithm to be the most stable and effective of the
three algorithms [1,17,18], we use it as a baseline algorithm in
this work. The relevant document distribution estimation
(ReDDE [21]) resource selection algorithm is a recent algorithm
that tries to estimate the distribution of relevant documents
across the available databases and ranks the databases
accordingly. Although the ReDDE algorithm has been shown to
be effective, it relies on heuristic constants that are set
empirically [21].
The last step of the document retrieval sub-problem is results
merging, which is the process of transforming database-specific
document scores into comparable database-independent document scores. The semi-supervised learning (SSL) [20,22]
result merging algorithm uses the documents acquired by query-based
sampling as training data and linear regression to learn the
database-specific, query-specific merging models. These linear
models are used to convert the database-specific document
scores into the approximated centralized document scores. The
SSL algorithm has been shown to be effective [22]. It serves as
an important component of our unified utility maximization
framework (Section 3).
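As an illustration of the SSL idea (a simplified Python sketch, not the algorithm of [20,22]), the following fragment fits a per-database linear model from (database-specific score, centralized score) pairs of overlap documents and uses it to approximate centralized scores for the remaining documents; all scores are made-up examples.

def fit_linear(pairs):
    # Ordinary least squares for y = a * x + b over (x, y) pairs.
    n = len(pairs)
    sx = sum(x for x, _ in pairs)
    sy = sum(y for _, y in pairs)
    sxx = sum(x * x for x, _ in pairs)
    sxy = sum(x * y for x, y in pairs)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return a, b

# (database-specific score, centralized score) pairs for overlap documents
overlap = [(12.0, 0.81), (10.5, 0.66), (9.0, 0.55), (7.5, 0.40)]
a, b = fit_linear(overlap)

# Convert scores of documents seen only in this database's result list.
db_only_scores = {"doc42": 11.2, "doc77": 8.1}
approx_centralized = {doc: a * s + b for doc, s in db_only_scores.items()}
print(approx_centralized)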
In order to achieve accurate document retrieval results, many
previous methods simply use resource selection algorithms that are effective for database recommendation systems. But as
pointed out above, a good resource selection algorithm
optimized for high-recall may not work well for document
retrieval, which targets the high-precision goal. This type of
inconsistency has been observed in previous research [4,21].
The research in [21] tried to solve the problem with a heuristic
method.
The research most similar to what we propose here is the
decision-theoretic framework (DTF) [7,15]. This framework
computes a selection that minimizes the overall costs (e.g.,
retrieval quality, time) of document retrieval system and several
methods [15] have been proposed to estimate the retrieval
quality. However, two points distinguish our research from the
DTF model. First, the DTF is a framework designed specifically
for document retrieval, but our new model integrates two
distinct applications with different requirements (database
recommendation and distributed document retrieval) into the
same unified framework. Second, the DTF builds a model for
each database to calculate the probabilities of relevance. This
requires human relevance judgments for the results retrieved
from each database. In contrast, our approach only builds one
logistic model for the centralized sample database. The
centralized sample database can serve as a bridge to connect the
individual databases with the centralized logistic model, thus the
probabilities of relevance of documents in different databases
can be estimated. This strategy can save large amount of human
judgment effort and is a big advantage of the unified utility
maximization framework over the DTF especially when there
are a large number of databases.
UNIFIED UTILITY MAXIMIZATION FRAMEWORK
The Unified Utility Maximization (UUM) framework is based
on estimating the probabilities of relevance of the (mostly
unseen) documents available in the distributed search
environment. In this section we describe how the probabilities of
relevance are estimated and how they are used by the Unified
Utility Maximization model. We also describe how the model
can be optimized for the high-recall goal of a database
recommendation system and the high-precision goal of a
distributed document retrieval system.
3.1 Estimating Probabilities of Relevance
As pointed out above, the purpose of resource selection is high-recall
and the purpose of document retrieval is high-precision. In
order to meet these diverse goals, the key issue is to estimate the
probabilities of relevance of the documents in various databases.
This is a difficult problem because we can only observe a
sample of the contents of each database using query-based
sampling. Our strategy is to make full use of all the available
information to calculate the probability estimates.
3.1.1 Learning Probabilities of Relevance
In the resource description step, the centralized sample database
is built by query-based sampling and the database sizes are
estimated using the sample-resample method [21]. At the same
time, an effective retrieval algorithm (Inquery [2]) is applied on
the centralized sample database with a small number (e.g., 50)
of training queries. For each training query, the CORI resource
selection algorithm [1] is applied to select some number
(e.g., 10) of databases and retrieve 50 document ids from each
database. The SSL results merging algorithm [20,22] is used to
merge the results. Then, we can download the top 50 documents
in the final merged list and calculate their corresponding
centralized scores using Inquery and the corpus statistics of the
centralized sample database. The centralized scores are further
normalized (divided by the maximum centralized score for each
query), as this method has been suggested to improve estimation
accuracy in previous research [15]. Human judgment is acquired
for those documents and a logistic model is built to transform
the normalized centralized document scores to probabilities of
relevance as follows:
R(d) = P(rel \,|\, d) = \frac{\exp(a_c + b_c \bar{S}_c(d))}{1 + \exp(a_c + b_c \bar{S}_c(d))}    (1)

where \bar{S}_c(d) is the normalized centralized document score, and a_c and b_c are the two parameters of the logistic model. These two parameters are estimated by maximizing the probabilities of relevance of the training queries. The logistic model provides the tool to calculate probabilities of relevance from centralized document scores.
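A minimal Python sketch of applying Equation 1, assuming the two parameters have already been fitted on the judged training documents; the parameter values and scores below are illustrative only.

import math

def prob_relevance(norm_score, a_c, b_c):
    # Equation (1): logistic mapping from a normalized centralized score
    # to a probability of relevance.
    z = a_c + b_c * norm_score
    return math.exp(z) / (1.0 + math.exp(z))

a_c, b_c = -2.0, 4.0            # illustrative fitted parameter values
scores = [14.2, 9.7, 6.1]       # centralized scores for one query
max_score = max(scores)         # normalization by the query's maximum score
for s in scores:
    print(s, round(prob_relevance(s / max_score, a_c, b_c), 3))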
3.1.2 Estimating Centralized Document Scores
When the user submits a new query, the centralized document
scores of the documents in the centralized sample database are
calculated. However, in order to calculate the probabilities of
relevance, we need to estimate centralized document scores for
all documents across the databases instead of only the sampled
documents. This goal is accomplished using: the centralized
scores of the documents in the centralized sample database, and
the database size statistics.
We define the database scale factor for the i-th database as the ratio of the estimated database size and the number of documents sampled from this database as follows:

SF_{db_i} = \frac{\hat{N}_{db_i}}{N_{db_i\_samp}}    (2)

where \hat{N}_{db_i} is the estimated database size and N_{db_i\_samp} is the number of documents from the i-th database in the centralized
sample database. The intuition behind the database scale factor
is that, for a database whose scale factor is 50, if one document
from this database in the centralized sample database has a
centralized document score of 0.5, we may guess that there are
about 50 documents in that database which have scores of about
0.5. Actually, we can apply a finer non-parametric linear
interpolation method to estimate the centralized document score
curve for each database. Formally, we rank all the sampled documents from the i-th database by their centralized document scores to get the sampled centralized document score list \{S_c(ds_{i1}), S_c(ds_{i2}), S_c(ds_{i3}), ...\} for the i-th database. We assume that if we could calculate the centralized document scores for all the documents in this database and get the complete centralized document score list, the top document in the sampled list would have rank SF_{db_i}/2, the second document in the sampled list would have rank SF_{db_i} \cdot 3/2, and so on. Therefore, the data points of the sampled documents in the complete list are: \{(SF_{db_i}/2, S_c(ds_{i1})), (SF_{db_i} \cdot 3/2, S_c(ds_{i2})), (SF_{db_i} \cdot 5/2, S_c(ds_{i3})), ...\}. Piecewise linear interpolation is applied to estimate the centralized document score curve, as illustrated in Figure 1. The complete centralized document score list can be estimated by calculating the values at different ranks on the centralized document score curve: \hat{S}_c(d_{ij}), j \in [1, \hat{N}_{db_i}].
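The following Python sketch illustrates this construction for one database: the sampled scores are placed at ranks SF_{db_i}/2, SF_{db_i}*3/2, ..., and piecewise linear interpolation (with a flat extension beyond the first and last sample points, a simplification of the procedure described here) yields an estimated score for every rank. All numbers are illustrative.

def estimate_score_curve(sampled_scores, scale_factor, db_size):
    # sampled_scores must be sorted in descending order of centralized score.
    xs = [scale_factor * (k + 0.5) for k in range(len(sampled_scores))]
    ys = list(sampled_scores)

    def interp(rank):
        if rank <= xs[0]:
            return ys[0]        # flat extension before the first sample point
        if rank >= xs[-1]:
            return ys[-1]       # flat extension after the last sample point
        for (x0, y0), (x1, y1) in zip(zip(xs, ys), zip(xs[1:], ys[1:])):
            if x0 <= rank <= x1:
                return y0 + (y1 - y0) * (rank - x0) / (x1 - x0)

    return [interp(r) for r in range(1, db_size + 1)]

sampled = [0.92, 0.75, 0.60, 0.41]    # centralized scores of sampled documents
curve = estimate_score_curve(sampled, scale_factor=50, db_size=200)
print(curve[:3], curve[-3:])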
It can be seen from Figure 1 that more sample data points
produce more accurate estimates of the centralized document
score curves. However, for databases with large database scale
ratios, this kind of linear interpolation may be rather inaccurate,
especially for the top ranked documents (e.g., ranks in [1, SF_{db_i}/2]).
Therefore, an alternative solution is proposed to estimate the
centralized document scores of the top ranked documents for
databases with large scale ratios (e.g., larger than 100).
Specifically, a logistic model is built for each of these databases.
The logistic model is used to estimate the centralized document
score of the top 1 document in the corresponding database by
using the two sampled documents from that database with
highest centralized scores.
\hat{S}_c(d_{i1}) = \frac{\exp(\alpha_{i0} + \alpha_{i1} S_c(ds_{i1}) + \alpha_{i2} S_c(ds_{i2}))}{1 + \exp(\alpha_{i0} + \alpha_{i1} S_c(ds_{i1}) + \alpha_{i2} S_c(ds_{i2}))}    (3)

\alpha_{i0}, \alpha_{i1} and \alpha_{i2} are the parameters of the logistic model. For
each training query, the top retrieved document of each database
is downloaded and the corresponding centralized document
score is calculated. Together with the scores of the top two
sampled documents, these parameters can be estimated.
After the centralized score of the top document is estimated, an exponential function is fitted for the top part ([1, SF_{db_i}/2]) of the centralized document score curve as:

\hat{S}_c(d_{ij}) = \exp(\beta_{i0} + \beta_{i1} \cdot j), \quad j \in [1, SF_{db_i}/2]    (4)

\beta_{i0} = \log(\hat{S}_c(d_{i1})) - \beta_{i1}    (5)

\beta_{i1} = \frac{\log(S_c(ds_{i1})) - \log(\hat{S}_c(d_{i1}))}{SF_{db_i}/2 - 1}    (6)

The two parameters \beta_{i0} and \beta_{i1} are fitted to make sure the exponential function passes through the two points (1, \hat{S}_c(d_{i1})) and (SF_{db_i}/2, S_c(ds_{i1})). The exponential function is only used to adjust the top part of the centralized document score curve; the lower part of the curve is still fitted with the linear interpolation method described above. The adjustment of the top ranked documents by fitting an exponential function has been shown empirically to produce more accurate results.
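A small Python sketch of this adjustment, assuming the top document's centralized score has already been estimated with Equation 3; the input values are illustrative.

import math

def fit_top_exponential(s_hat_top, s_first_sampled, scale_factor):
    half = scale_factor / 2.0
    beta1 = (math.log(s_first_sampled) - math.log(s_hat_top)) / (half - 1.0)   # Eq. (6)
    beta0 = math.log(s_hat_top) - beta1                                        # Eq. (5)
    return beta0, beta1

# s_hat_top: estimated centralized score of the database's top document (Eq. 3)
# s_first_sampled: centralized score of the highest-scoring sampled document
beta0, beta1 = fit_top_exponential(s_hat_top=0.98, s_first_sampled=0.92, scale_factor=120)
for rank in (1, 10, 60):
    print(rank, round(math.exp(beta0 + beta1 * rank), 3))                      # Eq. (4)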
From the centralized document score curves, we can estimate
the complete centralized document score lists accordingly for all
the available databases. After the estimated centralized
document scores are normalized, the complete lists of
probabilities of relevance can be constructed out of the complete
centralized document score lists by Equation 1. Formally, for the i-th database, the complete list of probabilities of relevance is: \hat{R}(d_{ij}), j \in [1, \hat{N}_{db_i}].
3.2 The Unified Utility Maximization Model
In this section, we formally define the new unified utility
maximization model, which optimizes the resource selection
problems for the two goals of high-recall (database recommendation) and high-precision (distributed document
retrieval) in the same framework.
In the task of database recommendation, the system needs to
decide how to rank databases. In the task of document retrieval,
the system not only needs to select the databases but also needs
to decide how many documents to retrieve from each selected
database. We generalize the database recommendation selection
process, which implicitly recommends all documents in every
selected database, as a special case of the selection decision for
the document retrieval task. Formally, we denote d_i as the number of documents we would like to retrieve from the i-th database and \vec{d} = \{d_1, d_2, ...\} as a selection action for all the databases.
The database selection decision is made based on the complete
lists of probabilities of relevance for all the databases. The
complete lists of probabilities of relevance are inferred from all
the available information, specifically R_s, which stands for the resource descriptions acquired by query-based sampling and the database size estimates acquired by sample-resample, and S_c, which stands for the centralized document scores of the documents in the centralized sample database.
If the method of estimating centralized document scores and
probabilities of relevance in Section 3.1 is acceptable, then the
most probable complete lists of probabilities of relevance can be
derived and we denote them as \theta^* = \{(\hat{R}(d_{1j}), j \in [1, \hat{N}_{db_1}]), (\hat{R}(d_{2j}), j \in [1, \hat{N}_{db_2}]), ...\}. A random vector \theta denotes an arbitrary set of complete lists of probabilities of relevance, and P(\theta \,|\, R_s, S_c) is the probability of generating this set of lists.
Figure 1. Linear interpolation construction of the complete centralized document score list (database scale factor is 50).

Finally, to each selection action \vec{d} and a set of complete lists of probabilities of relevance \theta, we associate a utility function U(\vec{d}, \theta), which indicates the benefit of making the selection \vec{d} when the true complete lists of probabilities of relevance are \theta.
Therefore, the selection decision defined by the Bayesian framework is:

\vec{d}^* = \arg\max_{\vec{d}} \int U(\vec{d}, \theta) \, P(\theta \,|\, R_s, S_c) \, d\theta    (7)

One common approach to simplify the computation in the Bayesian framework is to only calculate the utility function at the most probable parameter values instead of calculating the whole expectation. In other words, we only need to calculate U(\vec{d}, \theta^*), and Equation 7 is simplified as follows:

\vec{d}^* = \arg\max_{\vec{d}} U(\vec{d}, \theta^*)    (8)
This equation serves as the basic model for both the database
recommendation system and the document retrieval system.
3.3 Resource Selection for High-Recall
High-recall is the goal of the resource selection algorithm in
federated search tasks such as database recommendation. The
goal is to select a small set of resources (e.g., less than N_{sdb} databases) that contain as many relevant documents as possible, which can be formally defined as:

U(\vec{d}, \theta^*) = \sum_i I(d_i) \sum_{j=1}^{\hat{N}_{db_i}} \hat{R}(d_{ij})    (9)

I(d_i) is the indicator function, which is 1 when the i-th database is selected and 0 otherwise. Plugging this equation into the basic model in Equation 8 and adding the constraint on the number of selected databases, we obtain:

\vec{d}^* = \arg\max_{\vec{d}} \sum_i I(d_i) \sum_{j=1}^{\hat{N}_{db_i}} \hat{R}(d_{ij})
\quad \text{Subject to:} \; \sum_i I(d_i) = N_{sdb}    (10)
The solution of this optimization problem is very simple. We
can calculate the expected number of relevant documents for
each database as follows:
\hat{N}_{Rd_i} = \sum_{j=1}^{\hat{N}_{db_i}} \hat{R}(d_{ij})    (11)

The N_{sdb} databases with the largest expected number of relevant
documents can be selected to meet the high-recall goal. We call
this the UUM/HR algorithm (Unified Utility Maximization for
High-Recall).
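A minimal Python sketch of the UUM/HR selection: each database is scored by its expected number of relevant documents (Equation 11) and the top N_sdb databases are returned. The estimated relevance lists are illustrative.

def uum_hr_select(relevance_lists, n_sdb):
    # relevance_lists: {database: [R_hat(d_i1), R_hat(d_i2), ...]}
    expected = {db: sum(probs) for db, probs in relevance_lists.items()}   # Eq. (11)
    return sorted(expected, key=expected.get, reverse=True)[:n_sdb]

lists = {
    "db1": [0.9, 0.5, 0.2, 0.1, 0.05],   # few documents, some highly relevant
    "db2": [0.4] * 10,
    "db3": [0.05] * 100,                 # many documents, low probabilities
}
print(uum_hr_select(lists, n_sdb=2))     # large databases win under high-recall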
3.4 Resource Selection for High-Precision
High-Precision is the goal of resource selection algorithm in
federated search tasks such as distributed document retrieval. It
is measured by the Precision at the top part of the final merged
document list. This high-precision criterion is realized by the
following utility function, which measures the Precision of
retrieved documents from the selected databases.
U(\vec{d}, \theta^*) = \sum_i I(d_i) \sum_{j=1}^{d_i} \hat{R}(d_{ij})    (12)
Note that the key difference between Equation 12 and Equation
9 is that Equation 9 sums up the probabilities of relevance of all
the documents in a database, while Equation 12 only considers a
much smaller part of the ranking. Specifically, we can calculate
the optimal selection decision by:
\vec{d}^* = \arg\max_{\vec{d}} \sum_i I(d_i) \sum_{j=1}^{d_i} \hat{R}(d_{ij})    (13)
Different kinds of constraints caused by different characteristics
of the document retrieval tasks can be associated with the above
optimization problem. The most common one is to select a fixed
number (N_{sdb}) of databases and retrieve a fixed number (N_{rdoc}) of documents from each selected database, formally defined as:

\vec{d}^* = \arg\max_{\vec{d}} \sum_i I(d_i) \sum_{j=1}^{d_i} \hat{R}(d_{ij})
\quad \text{Subject to:} \; \sum_i I(d_i) = N_{sdb}, \quad d_i = N_{rdoc} \;\; \text{if} \; d_i \neq 0    (14)
This optimization problem can be solved easily by calculating
the number of expected relevant documents in the top part of each database's complete list of probabilities of relevance:

\hat{N}_{Top\_Rd_i} = \sum_{j=1}^{N_{rdoc}} \hat{R}(d_{ij})    (15)
Then the databases can be ranked by these values and selected.
We call this the UUM/HP-FL algorithm (Unified Utility
Maximization for High-Precision with Fixed Length document
rankings from each selected database).
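A corresponding Python sketch of UUM/HP-FL, which scores each database by the expected number of relevant documents in only its top N_rdoc positions (Equation 15); with the same illustrative lists as above, a small database with a few high-probability documents can now outrank a large database with many low-probability ones.

def uum_hp_fl_select(relevance_lists, n_sdb, n_rdoc):
    top_expected = {db: sum(probs[:n_rdoc])                      # Eq. (15)
                    for db, probs in relevance_lists.items()}
    return sorted(top_expected, key=top_expected.get, reverse=True)[:n_sdb]

lists = {
    "db1": [0.9, 0.5, 0.2, 0.1, 0.05],
    "db2": [0.4] * 10,
    "db3": [0.05] * 100,
}
print(uum_hp_fl_select(lists, n_sdb=2, n_rdoc=3))   # now db1 is preferred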
A more complex situation is to vary the number of retrieved
documents from each selected database. More specifically, we
allow different selected databases to return different numbers of
documents. For simplification, the result list lengths are required
to be multiples of a baseline number 10. (This value can also be
varied, but for simplification it is set to 10 in this paper.) This
restriction is set to simulate the behavior of commercial search
engines on the Web. (Search engines such as Google and
AltaVista return only 10 or 20 document ids for every result
page.) This procedure saves the computation time of calculating
optimal database selection by allowing the step of dynamic
programming to be 10 instead of 1 (more detail is discussed later). For further simplification, we restrict the selection to at most 100 documents from each database (d_i <= 100). Then, the selection optimization problem is formalized as follows:

\vec{d}^* = \arg\max_{\vec{d}} \sum_i I(d_i) \sum_{j=1}^{d_i} \hat{R}(d_{ij})
\quad \text{Subject to:} \; \sum_i I(d_i) = N_{sdb}, \quad \sum_i d_i = N_{Total\_rdoc}, \quad d_i = k \cdot 10, \; k \in [0, 1, 2, ..., 10]    (16)
N_{Total\_rdoc} is the total number of documents to be retrieved.
Unfortunately, there is no simple solution for this optimization
problem as there are for Equations 10 and 14. However, a dynamic programming algorithm can be applied to calculate the optimal solution. The basic steps of this dynamic programming method are described in Figure 2. As this algorithm allows retrieving result lists of varying lengths from each selected database, it is called the UUM/HP-VL algorithm.
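The following Python sketch implements a dynamic program in the spirit of Figure 2, under the stated simplifications (allocations in multiples of 10, at most 100 documents per database); best[y][z] is the maximum expected number of relevant documents using exactly y*10 documents spread over exactly z selected databases, and alloc[y][z] records the chosen allocation. The relevance lists are illustrative.

def uum_hp_vl_select(relevance_lists, n_sdb, n_total_rdoc):
    y_max = n_total_rdoc // 10
    neg_inf = float("-inf")
    best = [[neg_inf] * (n_sdb + 1) for _ in range(y_max + 1)]
    alloc = [[{} for _ in range(n_sdb + 1)] for _ in range(y_max + 1)]
    best[0][0] = 0.0

    for db, probs in relevance_lists.items():
        # prefix[k]: expected relevant documents in the top k*10 positions
        prefix = [sum(probs[:10 * k]) for k in range(11)]
        for y in range(y_max, 0, -1):          # descending budgets so that each
            for z in range(n_sdb, 0, -1):      # database is assigned at most once
                for k in range(1, min(y, 10) + 1):
                    if best[y - k][z - 1] == neg_inf:
                        continue
                    cand = best[y - k][z - 1] + prefix[k]
                    if cand > best[y][z]:
                        best[y][z] = cand
                        chosen = dict(alloc[y - k][z - 1])
                        chosen[db] = 10 * k
                        alloc[y][z] = chosen
    return best[y_max][n_sdb], alloc[y_max][n_sdb]

lists = {"db1": [0.9, 0.5, 0.2] + [0.05] * 97,
         "db2": [0.4] * 100,
         "db3": [0.05] * 100}
print(uum_hp_vl_select(lists, n_sdb=2, n_total_rdoc=50))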
After the selection decisions are made, the selected databases are
searched and the corresponding document ids are retrieved from
each database. The final step of document retrieval is to merge
the returned results into a single ranked list with the semi-supervised
learning algorithm. It was pointed out before that the
SSL algorithm maps the database-specific scores into the
centralized document scores and builds the final ranked list
accordingly, which is consistent with all our selection
procedures where documents with higher probabilities of
relevance (thus higher centralized document scores) are selected.
EXPERIMENTAL METHODOLOGY
It is desirable to evaluate distributed information retrieval
algorithms with testbeds that closely simulate the real world
applications.
The TREC Web collections WT2g or WT10g [4,13] provide a
way to partition documents by different Web servers. In this
way, a large number (O(1000)) of databases with rather diverse
contents could be created, which may make this testbed a good
candidate to simulate the operational environments such as open
domain hidden Web. However, two weaknesses of this testbed are: i) each database contains only a small number of documents (259 documents on average for WT2g) [4]; and ii) the contents of
WT2g or WT10g are arbitrarily crawled from the Web. It is not
likely for a hidden Web database to provide personal homepages
or web pages indicating that the pages are under construction
and there is no useful information at all. These types of web
pages are contained in the WT2g/WT10g datasets. Therefore,
the noisy Web data is not similar to the high-quality hidden Web database contents, which are usually organized by
domain experts.
Another choice is the TREC news/government data [1,15,17,
18,21]. TREC news/government data is concentrated on
relatively narrow topics. Compared with TREC Web data: i) The
news/government documents are much more similar to the
contents provided by a topic-oriented database than an arbitrary
web page; and ii) A database in this testbed is larger than that of TREC Web data. On average, a database contains thousands of
documents, which is more realistic than a database of TREC
Web data with about 250 documents. As the contents and sizes
of the databases in the TREC news/government testbed are more
similar with that of a topic-oriented database, it is a good
candidate to simulate the distributed information retrieval
environments of large organizations (companies) or domain-specific
hidden Web sites, such as West that provides access to
legal, financial and news text databases [3]. As most current
distributed information retrieval systems are developed for the
environments of large organizations (companies) or domain-specific
hidden Web other than open domain hidden Web,
TREC news/government testbed was chosen in this work.
The Trec123-100col-bysource testbed is one of the most widely used TREC news/government testbeds [1,15,17,21]. It was chosen in this
work. Three testbeds in [21] with skewed database size
distributions and different types of relevant document
distributions were also used to give more thorough simulation
for real environments.
Trec123-100col-bysource:
100 databases were created from
TREC CDs 1, 2 and 3. They were organized by source and
publication date [1]. The sizes of the databases are not skewed.
Details are in Table 1.
Three testbeds built in [21] were based on the trec123-100col-bysource
testbed. Each testbed contains many "small" databases
and two large databases created by merging about 10-20 small
databases together.
Input: Complete lists of probabilities of relevance for all the |DB| databases.
Output: Optimal selection solution for Equation 16.
i) Create the three-dimensional array Sel(1..|DB|, 1..N_{Total\_rdoc}/10, 1..N_{sdb}). Each Sel(x, y, z) is associated with a selection decision \vec{d}_{xyz}, which represents the best selection decision under the condition that only databases number 1 to x are considered for selection, a total of y*10 documents will be retrieved, and z databases are selected out of the x candidates. Sel(x, y, z) is the corresponding utility value of choosing the best selection.
ii) Initialize Sel(1, 1..N_{Total\_rdoc}/10, 1..N_{sdb}) with only the estimated relevance information of the 1st database.
iii) Iterate the current database candidate i from 2 to |DB|. For each entry Sel(i, y, z), find k such that:
k^* = \arg\max_k \left( Sel(i-1, y-k, z-1) + \sum_{j=1}^{10k} \hat{R}(d_{ij}) \right), \quad \text{subject to:} \; 1 \le k \le \min(y, 10)
If Sel(i-1, y-k^*, z-1) + \sum_{j=1}^{10k^*} \hat{R}(d_{ij}) > Sel(i-1, y, z), this means that we should retrieve k^* \cdot 10 documents from the i-th database; otherwise we should not select this database and the previous best solution Sel(i-1, y, z) should be kept. Then set the value of \vec{d}_{iyz} and Sel(i, y, z) accordingly.
iv) The best selection solution is given by \vec{d}_{|DB|, N_{Total\_rdoc}/10, N_{sdb}} and the corresponding utility value is Sel(|DB|, N_{Total\_rdoc}/10, N_{sdb}).
Figure 2. The dynamic programming optimization procedure for Equation 16.
Table 1: Testbed statistics.
Testbed    Size (GB)    Number of documents (Min / Avg / Max)    Size (MB) (Min / Avg / Max)
Trec123    3.2          752 / 10782 / 39713                      28 / 32 / 42

Table 2: Query set statistics.
Name       TREC Topic Set    TREC Topic Field    Average Length (Words)
Trec123    51-150            Title               3.1
Trec123-2ldb-60col ("representative"):
The databases in the
trec123-100col-bysource testbed were sorted in alphabetical order.
Two large databases were created by merging 20 small
databases with the round-robin method. Thus, the two large
databases have more relevant documents due to their large sizes,
even though the densities of relevant documents are roughly the
same as the small databases.
Trec123-AP-WSJ-60col ("relevant"):
The 24 Associated Press
collections and the 16 Wall Street Journal collections in the
trec123-100col-bysource testbed were collapsed into two large
databases APall and WSJall. The other 60 collections were left
unchanged. The APall and WSJall databases have higher
densities of documents relevant to TREC queries than the small
databases. Thus, the two large databases have many more
relevant documents than the small databases.
Trec123-FR-DOE-81col ("nonrelevant"):
The 13 Federal
Register collections and the 6 Department of Energy collections
in the trec123-100col-bysource testbed were collapsed into two
large databases FRall and DOEall. The other 80 collections were
left unchanged. The FRall and DOEall databases have lower
densities of documents relevant to TREC queries than the small
databases, even though they are much larger.
100 queries were created from the title fields of TREC topics
51-150. The queries 101-150 were used as training queries and
the queries 51-100 were used as test queries (details in Table 2).
4.2 Search Engines
In the uncooperative distributed information retrieval environments of large organizations (companies) or domain-specific
hidden Web, different databases may use different types
of search engine. To simulate the multiple type-engine
environment, three different types of search engines were used
in the experiments: INQUERY [2], a unigram statistical
language model with linear smoothing [12,20] and a TFIDF
retrieval algorithm with "ltc" weight [12,20]. All these
algorithms were implemented with the Lemur toolkit [12].
These three kinds of search engines were assigned to the
databases among the four testbeds in a round-robin manner.
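For readers unfamiliar with the second engine type, the following sketch illustrates query-likelihood scoring with linear (Jelinek-Mercer) smoothing; the smoothing weight of 0.5 and the tiny floor for unseen terms are arbitrary illustrative choices, not parameters reported for these experiments.

import math
from collections import Counter

def lm_linear_score(query_terms, doc_terms, coll_terms, lam=0.5):
    # log P(Q|D) with linear smoothing: lam * P(q|D) + (1 - lam) * P(q|C)
    d_tf, c_tf = Counter(doc_terms), Counter(coll_terms)
    d_len, c_len = len(doc_terms), len(coll_terms)
    score = 0.0
    for q in query_terms:
        p_d = d_tf[q] / d_len if d_len else 0.0
        p_c = c_tf[q] / c_len if c_len else 0.0
        p = lam * p_d + (1 - lam) * p_c
        score += math.log(p if p > 0 else 1e-12)   # floor for terms unseen in both
    return score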
RESULTS: RESOURCE SELECTION OF DATABASE RECOMMENDATION
All four testbeds described in Section 4 were used in the
experiments to evaluate the resource selection effectiveness of
the database recommendation system.
The resource descriptions were created using query-based
sampling. About 80 queries were sent to each database to
download 300 unique documents. The database size statistics
were estimated by the sample-resample method [21]. Fifty
queries (101-150) were used as training queries to build the
relevant logistic model and to fit the exponential functions of the
centralized document score curves for large ratio databases
(details in Section 3.1). Another 50 queries (51-100) were used
as test data.
Resource selection algorithms of database recommendation
systems are typically compared using the recall metric R_n [1,17,18,21]. Let B denote a baseline ranking, which is often the RBR (relevance based ranking), and E a ranking provided by a resource selection algorithm. Let B_i and E_i denote the number of relevant documents in the i-th ranked database of B or E. Then R_k is defined as follows:
R_k = ( sum_{i=1..k} E_i ) / ( sum_{i=1..k} B_i )    (17)
Usually the goal is to search only a few databases, so our figures
only show results for selecting up to 20 databases.
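As a small worked illustration of Equation 17, the sketch below computes R_k from two lists holding the number of relevant documents in each ranked database of the baseline and of the evaluated ranking; the numbers are invented for the example and do not come from the testbeds.

def recall_at_k(E, B, k):
    # R_k: relevant documents accumulated by ranking E over those of baseline B, top k databases
    return sum(E[:k]) / sum(B[:k])

B = [120, 90, 60, 30, 10]   # relevance based ranking (baseline)
E = [90, 100, 40, 20, 10]   # ranking produced by a resource selection algorithm
print([round(recall_at_k(E, B, k), 2) for k in range(1, 6)])   # [0.75, 0.9, 0.85, 0.83, 0.84]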
The experiments summarized in Figure 3 compared the
effectiveness of the three resource selection algorithms, namely
the CORI, ReDDE and UUM/HR. The UUM/HR algorithm is
described in Section 3.3. It can be seen from Figure 3 that the
ReDDE and UUM/HR algorithms are more effective (on the
representative, relevant and nonrelevant testbeds) or as good as
(on the Trec123-100Col testbed) the CORI resource selection
algorithm. The UUM/HR algorithm is more effective than the
ReDDE algorithm on the representative and relevant testbeds
and is about the same as the ReDDE algorithm on the Trec123-100Col
and the nonrelevant testbeds. This suggests that the
UUM/HR algorithm is more robust than the ReDDE algorithm.
It can be noted that when selecting only a few databases on the
Trec123-100Col or the nonrelevant testbeds, the ReDDE
algorithm has a small advantage over the UUM/HR algorithm.
We attribute this to two causes: i) The ReDDE algorithm was
tuned on the Trec123-100Col testbed; and ii) Although the
difference is small, this may suggest that our logistic model of
estimating probabilities of relevance is not accurate enough.
More training data or a more sophisticated model may help to
solve this minor puzzle.
Figure 3. Resource selection experiments on the four testbeds (Trec123-100Col, representative, relevant and nonrelevant testbeds; x-axis: number of collections selected).
RESULTS: DOCUMENT RETRIEVAL EFFECTIVENESS
For document retrieval, the selected databases are searched and
the returned results are merged into a single final list. In all of
the experiments discussed in this section the results retrieved
from individual databases were combined by the semi-supervised
learning results merging algorithm. This version of
the SSL algorithm [22] is allowed to download a small number
of returned document texts "on the fly" to create additional
training data in the process of learning the linear models which
map database-specific document scores into estimated
centralized document scores. It has been shown to be very
effective in environments where only short result-lists are
retrieved from each selected database [22]. This is a common
scenario in operational environments and was the case for our
experiments.
Document retrieval effectiveness was measured by Precision at
the top part of the final document list. The experiments in this
section were conducted to study the document retrieval
effectiveness of five selection algorithms, namely the CORI,
ReDDE, UUM/HR, UUM/HP-FL and UUM/HP-VL algorithms.
The last three algorithms were proposed in Section 3. The first four algorithms selected 3 or 5 databases, and 50 documents were retrieved from each selected database. The UUM/HP-VL algorithm also selected 3 or 5 databases, but it was allowed to
adjust the number of documents to retrieve from each selected
database; the number retrieved was constrained to be from 10 to
100, and a multiple of 10.
The Trec123-100Col and representative testbeds were selected
for document retrieval as they represent two extreme cases of
resource selection effectiveness; in one case the CORI algorithm
is as good as the other algorithms and in the other case it is quite
Table 5. Precision on the representative testbed when 3 databases were selected. (The first baseline is CORI; the second baseline for UUM/HP methods is UUM/HR.)
Precision at Doc Rank   CORI     ReDDE             UUM/HR            UUM/HP-FL                 UUM/HP-VL
5 docs                  0.3720   0.4080 (+9.7%)    0.4640 (+24.7%)   0.4600 (+23.7%)(-0.9%)    0.5000 (+34.4%)(+7.8%)
10 docs                 0.3400   0.4060 (+19.4%)   0.4600 (+35.3%)   0.4540 (+33.5%)(-1.3%)    0.4640 (+36.5%)(+0.9%)
15 docs                 0.3120   0.3880 (+24.4%)   0.4320 (+38.5%)   0.4240 (+35.9%)(-1.9%)    0.4413 (+41.4%)(+2.2%)
20 docs                 0.3000   0.3750 (+25.0%)   0.4080 (+36.0%)   0.4040 (+34.7%)(-1.0%)    0.4240 (+41.3%)(+4.0%)
30 docs                 0.2533   0.3440 (+35.8%)   0.3847 (+51.9%)   0.3747 (+47.9%)(-2.6%)    0.3887 (+53.5%)(+1.0%)
Table 6. Precision on the representative testbed when 5 databases were selected. (The first baseline is CORI; the second baseline for UUM/HP methods is UUM/HR.)
Precision at Doc Rank   CORI     ReDDE             UUM/HR            UUM/HP-FL                 UUM/HP-VL
5 docs                  0.3960   0.4080 (+3.0%)    0.4560 (+15.2%)   0.4280 (+8.1%)(-6.1%)     0.4520 (+14.1%)(-0.9%)
10 docs                 0.3880   0.4060 (+4.6%)    0.4280 (+10.3%)   0.4460 (+15.0%)(+4.2%)    0.4560 (+17.5%)(+6.5%)
15 docs                 0.3533   0.3987 (+12.9%)   0.4227 (+19.6%)   0.4440 (+25.7%)(+5.0%)    0.4453 (+26.0%)(+5.4%)
20 docs                 0.3330   0.3960 (+18.9%)   0.4140 (+24.3%)   0.4290 (+28.8%)(+3.6%)    0.4350 (+30.6%)(+5.1%)
30 docs                 0.2967   0.3740 (+26.1%)   0.4013 (+35.3%)   0.3987 (+34.4%)(-0.7%)    0.4060 (+36.8%)(+1.2%)
Table 3. Precision on the trec123-100col-bysource testbed when 3 databases were selected. (The first baseline is CORI; the second baseline for UUM/HP methods is UUM/HR.)
Precision at Doc Rank   CORI     ReDDE            UUM/HR           UUM/HP-FL                  UUM/HP-VL
5 docs                  0.3640   0.3480 (-4.4%)   0.3960 (+8.8%)   0.4680 (+28.6%)(+18.1%)    0.4640 (+27.5%)(+17.2%)
10 docs                 0.3360   0.3200 (-4.8%)   0.3520 (+4.8%)   0.4240 (+26.2%)(+20.5%)    0.4220 (+25.6%)(+19.9%)
15 docs                 0.3253   0.3187 (-2.0%)   0.3347 (+2.9%)   0.3973 (+22.2%)(+15.7%)    0.3920 (+20.5%)(+17.1%)
20 docs                 0.3140   0.2980 (-5.1%)   0.3270 (+4.1%)   0.3720 (+18.5%)(+13.8%)    0.3700 (+17.8%)(+13.2%)
30 docs                 0.2780   0.2660 (-4.3%)   0.2973 (+6.9%)   0.3413 (+22.8%)(+14.8%)    0.3400 (+22.3%)(+14.4%)
Table 4. Precision on the trec123-100col-bysource testbed when 5 databases were selected. (The first baseline is CORI; the second baseline for UUM/HP methods is UUM/HR.)
Precision at Doc Rank   CORI     ReDDE            UUM/HR           UUM/HP-FL                  UUM/HP-VL
5 docs                  0.4000   0.3920 (-2.0%)   0.4280 (+7.0%)   0.4680 (+17.0%)(+9.4%)     0.4600 (+15.0%)(+7.5%)
10 docs                 0.3800   0.3760 (-1.1%)   0.3800 (+0.0%)   0.4180 (+10.0%)(+10.0%)    0.4320 (+13.7%)(+13.7%)
15 docs                 0.3560   0.3560 (+0.0%)   0.3720 (+4.5%)   0.3920 (+10.1%)(+5.4%)     0.4080 (+14.6%)(+9.7%)
20 docs                 0.3430   0.3390 (-1.2%)   0.3550 (+3.5%)   0.3710 (+8.2%)(+4.5%)      0.3830 (+11.7%)(+7.9%)
30 docs                 0.3240   0.3140 (-3.1%)   0.3313 (+2.3%)   0.3500 (+8.0%)(+5.6%)      0.3487 (+7.6%)(+5.3%)
a lot worse than the other algorithms. Tables 3 and 4 show the
results on the Trec123-100Col testbed, and Tables 5 and 6 show
the results on the representative testbed.
On the Trec123-100Col testbed, the document retrieval
effectiveness of the CORI selection algorithm is roughly the
same or a little bit better than the ReDDE algorithm but both of
them are worse than the other three algorithms (Tables 3 and 4).
The UUM/HR algorithm has a small advantage over the CORI
and ReDDE algorithms. One main difference between the
UUM/HR algorithm and the ReDDE algorithm was pointed out
before: The UUM/HR uses training data and linear interpolation
to estimate the centralized document score curves, while the
ReDDE algorithm [21] uses a heuristic method, assumes the
centralized document score curves are step functions and makes
no distinction among the top part of the curves. This difference
makes UUM/HR better than the ReDDE algorithm at
distinguishing documents with high probabilities of relevance
from low probabilities of relevance. Therefore, the UUM/HR
reflects the high-precision retrieval goal better than the ReDDE
algorithm and thus is more effective for document retrieval.
The UUM/HR algorithm does not explicitly optimize the
selection decision with respect to the high-precision goal as the
UUM/HP-FL and UUM/HP-VL algorithms are designed to do.
It can be seen that on this testbed, the UUM/HP-FL and
UUM/HP-VL algorithms are much more effective than all the
other algorithms. This indicates that their power comes from
explicitly optimizing the high-precision goal of document
retrieval in Equations 14 and 16.
On the representative testbed, CORI is much less effective than
other algorithms for distributed document retrieval (Tables 5 and
6). The document retrieval results of the ReDDE algorithm are
better than that of the CORI algorithm but still worse than the
results of the UUM/HR algorithm. On this testbed the three
UUM algorithms are about equally effective. Detailed analysis
shows that the overlap of the selected databases between the
UUM/HR, UUM/HP-FL and UUM/HP-VL algorithms is much
larger than the experiments on the Trec123-100Col testbed,
since all of them tend to select the two large databases. This
explains why they are about equally effective for document
retrieval.
In real operational environments, databases may return no
document scores and report only ranked lists of results. As the
unified utility maximization model only utilizes retrieval scores
of sampled documents with a centralized retrieval algorithm to
calculate the probabilities of relevance, it makes database
selection decisions without referring to the document scores
from individual databases and can be easily generalized to this
case of ranked lists without document scores. The only adjustment is that the SSL algorithm merges ranked lists without document scores by assigning the documents pseudo-document scores normalized by their ranks (in a ranked list of 50 documents, the first one has a score of 1, the second has a score of 0.98, etc.), which has been studied in [22]. The experiment results on
trec123-100Col-bysource testbed with 3 selected databases are
shown in Table 7. The experiment setting was the same as
before except that the document scores were eliminated
intentionally and the selected databases only return ranked lists
of document ids. It can be seen from the results that the
UUM/HP-FL and UUM/HP-VL work well with databases
returning no document scores and are still more effective than
other alternatives. Other experiments with databases that return no document scores are not reported here, but they show similar results and confirm the effectiveness of the UUM/HP-FL and UUM/HP-VL algorithms.
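As a concrete illustration of the rank-based pseudo-scores mentioned above, the sketch below uses a linear mapping consistent with the "1, 0.98, ..." example for a 50-document list; whether the SSL implementation in [22] uses exactly this step size for other list lengths is an assumption here.

def pseudo_scores(ranked_doc_ids):
    # Assign pseudo-document scores to a ranked list returned without scores.
    # For a 50-document list this yields 1.00, 0.98, 0.96, ...
    n = len(ranked_doc_ids)
    return {doc: 1.0 - rank / n for rank, doc in enumerate(ranked_doc_ids)}

ranked = ["d17", "d02", "d33", "d05"]     # ranked list of document ids from one selected database
print(pseudo_scores(ranked))              # {'d17': 1.0, 'd02': 0.75, 'd33': 0.5, 'd05': 0.25}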
The above experiments suggest that it is very important to
optimize the high-precision goal explicitly in document
retrieval. The new algorithms based on this principle achieve results that are better than, or at least as good as, those of the prior state-of-the-art algorithms in several environments.
CONCLUSION
Distributed information retrieval solves the problem of finding
information that is scattered among many text databases on local
area networks and the Internet. Most previous research uses an effective resource selection algorithm of a database recommendation system for the distributed document retrieval application. We argue that the high-recall resource selection
goal of database recommendation and high-precision goal of
document retrieval are related but not identical. This kind of
inconsistency has also been observed in previous work, but the
prior solutions either used heuristic methods or assumed
cooperation by individual databases (e.g., all the databases used
the same kind of search engines), which is frequently not true in
the uncooperative environment.
In this work we propose a unified utility maximization model to
integrate the resource selection of database recommendation and
document retrieval tasks into a single unified framework. In this
framework, the selection decisions are obtained by optimizing
different objective functions. As far as we know, this is the first
work that tries to view and theoretically model the distributed
information retrieval task in an integrated manner.
The new framework continues a recent research trend studying
the use of query-based sampling and a centralized sample
database. A single logistic model was trained on the centralized
Table 7.
Precision on the trec123-100col-bysource testbed when 3 databases were selected (The first baseline is CORI; the second
baseline for UUM/HP methods is UUM/HR.) (Search engines do not return document scores)
Precision at
Doc Rank
CORI
ReDDE
UUM/HR
UUM/HP-FL
UUM/HP-VL
5 docs
0.3520
0.3240 (-8.0%)
0.3680 (+4.6%)
0.4520 (+28.4%)(+22.8%)
0.4520 (+28.4%)(+22.8)
10 docs
0.3320
0.3140 (-5.4%)
0.3340 (+0.6%)
0.4120 (+24.1%)(+23.4%)
0.4020 (+21.1%)(+20.4%)
15 docs
0.3227
0.2987 (-7.4%)
0.3280 (+1.6%)
0.3920 (+21.5%)(+19.5%)
0.3733 (+15.7%)(+13.8%)
20 docs
0.3030
0.2860 (-5.6%)
0.3130 (+3.3%)
0.3670 (+21.2%)(+17.3%)
0.3590 (+18.5%)(+14.7%)
30 docs
0.2727
0.2640 (-3.2%)
0.2900 (+6.3%)
0.3273 (+20.0%)(+12.9%)
0.3273 (+20.0%)(+12.9%)
40
sample database to estimate the probabilities of relevance of
documents by their centralized retrieval scores, while the
centralized sample database serves as a bridge to connect the
individual databases with the centralized logistic model.
Therefore, the probabilities of relevance for all the documents
across the databases can be estimated with a very small amount of
human relevance judgment, which is much more efficient than
previous methods that build a separate model for each database.
This framework is not only more theoretically solid but also
very effective. One algorithm for resource selection (UUM/HR)
and two algorithms for document retrieval (UUM/HP-FL and
UUM/HP-VL) are derived from this framework. Empirical
studies have been conducted on testbeds to simulate the
distributed search solutions of large organizations (companies)
or domain-specific hidden Web. Furthermore, the UUM/HP-FL
and UUM/HP-VL resource selection algorithms are extended
with a variant of SSL results merging algorithm to address the
distributed document retrieval task when selected databases do
not return document scores. Experiments have shown that these
algorithms achieve results that are at least as good as the prior
state-of-the-art, and sometimes considerably better. Detailed
analysis indicates that the advantage of these algorithms comes
from explicitly optimizing the goals of the specific tasks.
The unified utility maximization framework is open for different
extensions. When cost is associated with searching the online
databases, the utility framework can be adjusted to automatically
estimate the best number of databases to search so that a large
amount of relevant documents can be retrieved with relatively
small costs. Another extension of the framework is to consider
the retrieval effectiveness of the online databases, which is an
important issue in the operational environments. All of these are
the directions of future research.
ACKNOWLEDGEMENT
This research was supported by NSF grants EIA-9983253 and
IIS-0118767.
Any
opinions,
findings,
conclusions,
or
recommendations expressed in this paper are the authors', and
do not necessarily reflect those of the sponsor.
REFERENCES
[1]
J. Callan. (2000). Distributed information retrieval. In W.B.
Croft, editor, Advances in Information Retrieval. Kluwer
Academic Publishers. (pp. 127-150).
[2]
J. Callan, W.B. Croft, and J. Broglio. (1995). TREC and
TIPSTER experiments with INQUERY. Information
Processing and Management, 31(3). (pp. 327-343).
[3]
J. G. Conrad, X. S. Guo, P. Jackson and M. Meziou. (2002). Database selection using actual physical and acquired logical collection resources in a massive domain-specific operational environment. In Proceedings of the 28th International Conference on Very Large Databases (VLDB).
[4]
N. Craswell. (2000). Methods for distributed information
retrieval. Ph.D. thesis, The Australian National University.
[5]
N. Craswell, D. Hawking, and P. Thistlewaite. (1999).
Merging results from isolated search engines. In
Proceedings of 10th Australasian Database Conference.
[6]
D. D'Souza, J. Thom, and J. Zobel. (2000). A comparison
of techniques for selecting text collections. In Proceedings
of the 11th Australasian Database Conference.
[7]
N. Fuhr. (1999). A Decision-Theoretic approach to
database selection in networked IR. ACM Transactions on
Information Systems, 17(3). (pp. 229-249).
[8]
L. Gravano, C. Chang, H. Garcia-Molina, and A. Paepcke.
(1997). STARTS: Stanford proposal for internet meta-searching
. In Proceedings of the 20th ACM-SIGMOD
International Conference on Management of Data.
[9]
L. Gravano, P. Ipeirotis and M. Sahami. (2003). QProber:
A System for Automatic Classification of Hidden-Web
Databases. ACM Transactions on Information Systems,
21(1).
[10]
P. Ipeirotis and L. Gravano. (2002). Distributed search over
the hidden web: Hierarchical database sampling and
selection. In Proceedings of the 28th International
Conference on Very Large Databases (VLDB).
[11]
InvisibleWeb.com. http://www.invisibleweb.com
[12]
The lemur toolkit. http://www.cs.cmu.edu/~lemur
[13]
J. Lu and J. Callan. (2003). Content-based information
retrieval in peer-to-peer networks. In Proceedings of the
12th International Conference on Information and
Knowledge Management.
[14]
W. Meng, C.T. Yu and K.L. Liu. (2002) Building efficient
and effective metasearch engines. ACM Comput. Surv.
34(1).
[15]
H. Nottelmann and N. Fuhr. (2003). Evaluating different
methods of estimating retrieval quality for resource
selection. In Proceedings of the 25th Annual International
ACM SIGIR Conference on Research and Development in
Information Retrieval.
[16]
H. Nottelmann and N. Fuhr. (2003). The MIND
architecture for heterogeneous multimedia federated digital
libraries. ACM SIGIR 2003 Workshop on Distributed
Information Retrieval.
[17]
A.L. Powell, J.C. French, J. Callan, M. Connell, and C.L.
Viles. (2000). The impact of database selection on
distributed searching. In Proceedings of the 23rd Annual
International ACM SIGIR Conference on Research and
Development in Information Retrieval.
[18]
A.L. Powell and J.C. French. (2003). Comparing the
performance of database selection algorithms. ACM
Transactions on Information Systems, 21(4). (pp. 412-456).
[19]
C. Sherman (2001). Search for the invisible web. Guardian
Unlimited.
[20]
L. Si and J. Callan. (2002). Using sampled data and
regression to merge search engine results. In Proceedings
of the 25th Annual International ACM SIGIR Conference
on Research and Development in Information Retrieval.
[21]
L. Si and J. Callan. (2003). Relevant document distribution
estimation method for resource selection. In Proceedings of
the 26th Annual International ACM SIGIR Conference on
Research and Development in Information Retrieval.
[22]
L. Si and J. Callan. (2003). A Semi-Supervised learning
method to merge search engine results. ACM Transactions
on Information Systems, 21(4). (pp. 457-491).
| resource selection;distributed information retrieval
205 | Unwanted Traffic in 3G Networks | The presence of "unwanted" (or background) traffic in the Internet is a well-known fact. In principle any network that has been engineered without taking its presence into account might experience troubles during periods of massive exposure to unwanted traffic, e.g. during large-scale infections. A concrete example was provided by the spreading of Code-Red-II in 2001, which caused several routers crashes worldwide. Similar events might take place in 3G networks as well, with further potential complications arising from their high functional complexity and the scarcity of radio resources. For example, under certain hypothetical network configuration settings unwanted traffic, and specifically scanning traffic from infected Mobile Stations, can cause large-scale wastage of logical resources, and in extreme cases even starvation. Unwanted traffic is present nowdays also in GPRS/UMTS, mainly due to the widespread use of 3G connect cards for laptops. We urge the research community and network operators to consider the issue of 3G robustness to unwanted traffic as a prominent research area. | INTRODUCTION
Public wide-area wireless networks are now migrating to
third-generation systems (3G), designed to support packet-switched
data services and Internet access. Several UMTS
networks became operational since 2003 while early GPRS
deployments date back to 2000. Since then, the growing
popularity of 3G terminals and services has extended the
coverage of Internet wireless access to the geographic area,
and 3G networks are becoming key components of the global
Internet. In a recent CCR contribution Keshav [17] foresees
that cell phones will become the dominant component of future
Internet population, while Kleinrock expects this role
to be played by "small pervasive devices ubiquitously embedded
in the physical world" (quoted from [14, p. 112]).
Both scenarios imply that the main access mode in the future Internet will be wide-area wireless. Currently deployed 3G networks, along with their future evolutions, are in pole position, compared to competing technologies (e.g. WIMAX), to provide such access connectivity on a large scale.
Generally speaking, the 3G network being essentially a
mixture of two paradigms, namely mobile cellular and IP, it
is exposed to the security and reliability issues affecting each
component, plus the new risks emerging from their combination
. The 3G environment inherits from the cellular
paradigm a number of features like terminal personalization
and geolocalization that make privacy and information security
particularly critical. When coupled with the IP world,
markedly the "openess" of its applications and accessibility,
the concerns of privacy and security from the user perspective
become even more critical than in legacy 2G networks.
Because of that - and of some "lessons learned" from past
mistakes in 2G security [5] - privacy and information security
aspects have received a thorough treatment in the 3G specifications
(see [7] for an exhaustive overview). Nevertheless,
the specific topic of 3G network security in relation to the robustness
and availability of the network infrastructure itself
has not received adequate attention by the research community
to date. The problem can be condensed in the following
question: What is the level of robustness of a 3G network
against deliberate attacks or other unanticipated stimuli?
The problem of network security involves issues related to
network resilience and stability, and can not be addressed
without a deep understanding of the detailed structure and
organization of the real network. Considering the relatively recent deployment of 3G, and the very limited access that
research groups have to these networks, it should be no surprise
that the work in this area has been sporadic. Some
exploits against 3G network are known and documented in
industry reports (e.g. [15] [2]), while the fact that a limited
amount of malicious traffic can cause large-scale troubles to
a wireless cellular network has been "unveiled" in the recent
paper [18] with reference to a 2G network supporting open
SMS service. But at this stage what is still missing is an
exhaustive and systematic recognition of the potential risks,
threats and problems to 3G network security, from which a
research agenda can be drawn.
We provide here a novel contribution towards this goal
by introducing an issue that has passed unrecognized so far:
the impact onto 3G networks of unwanted traffic, and specifically
large-scale worm infections. Remarkably, all the cited
previous works consider deliberate DoS attack against the
network. Instead here we focus on a slightly more subtle
issue, namely the (side-)effects onto the network of (unwanted
) traffic, whose intended target is typically not the
network but rather its terminals. Our work was inspired
by the consequences of the Code-Red-II infection onto the
routers of the wired Internet, reported in [3] and [4].
We claim that under certain conditions and for certain
network configuration scenarios large-scale worm infections
can cause appreciable degradation and risks for network performance and availability. We urge the research community
and network operators to consider the issue of 3G
robustness to unwanted traffic as a prominent research area.
The goal of this contribution is to trigger interest and at the
same time move the first pioneering steps in such direction.
The following discussion is based on empirical observations
from an operational GPRS/UMTS network collected
during an ongoing research project in traffic monitoring and
modeling in 3G, the DARWIN project [1], carried out in collaboration
with mobilkom austria AG&CoKG (the leading
mobile operator in Austria, EU) and Kapsch CarrierCom
(provider of equipment and network engineering services).
OVERVIEW OF 3G NETWORKS
Network structure.
A 3G network includes two main sections
: a Packet-Switched Core Network (CN), which is based
on IP, and one or more Radio Access Network (RAN). Along
with the UMTS RAN (UTRAN) based on W-CDMA, several
operators maintain a parallel GPRS RAN evolved from
the legacy GSM radio. This structure is sketched in Figure
1. It is also possible to connect additional separate RANs
to the same CN, typically WLAN [13] and perhaps in the
future also WIMAX. Each RAN can evolve independently
from the CN: for example in several networks GPRS has
been upgraded to EDGE [10, p. 152], while UMTS upgrade
towards HSDPA [8, p. 351] is ongoing. Each RAN is connected
to the legacy 2G Circuit-Switched Core-Network (not
shown in Figure 1) for traditional services like voice calls,
and to the Packet-Switched Core-Network (CN for short)
for data services. The CN embeds several elements: SGSN,
GGSN, and a number of information servers. Some of the
latter are shared with the Circuit-Switched Core-Network
of the legacy 2G system, e.g. the HLR/AuC. (Notably, the close coupling between the circuit-switched (GSM) and packet-switched (GPRS/UMTS) sections is a source of concern, since in principle troubles originated in the latter might cause impairments or side-effects to the former as well.) The SGSNs
perform functions such as access control, location management
, paging, route management [10]. The GGSN is the
logical gateway between the CN and external packet networks
(Internet and private networks), is endowed with a
full IP-stack and handles the IP-level connectivity with the
MS. The SGSN and GGSN of the same operator communicate
through the Gn interface. The CNs of different operators are interconnected through the Gp interface for support
of roaming. The Gn protocol stack [10, p. 94] shows that a
lower UDP/IP layer is used to carry the user data packets
across Gn, with an intermediate encapsulation into a 3G-specific
protocol (GPRS Tunnelling Protocol, GTP). In fact,
the Gn interface is basically a wide-area IP network interconnecting
the different SGSN/GGSN sites, and as such it
embeds routers, IP subnets etc. Besides that, the CN is rich
in IP-based elements, including servers supporting control
and management functions (e.g. DNS, DHCP, RADIUS, see
[10]) and application elements (e.g. WAP gateway, proxies,
internal servers). The latter are always located behind the
GGSN, on the Gi side (ref. Figure 1) as they operate directly
on the data-plane. Note also that packet filtering and other
restriction policies can be located on separate dedicated elements
(NAT, IDS, firewalls) at the network boundaries (Gi,
Gp) and/or directly configured into the GGSNs.
3G terminals.
The population of 3G terminals is highly
heterogeneous and includes very different types of device:
hand-held phones and PDA, connect-card pluggable into
laptops, blackberry, etc. Additionally, a broad range of automatic
devices with no human interaction is emerging, taking
advantage of the ubiquity of the GPRS/UMTS coverage
(e.g. sensors, alarms, presence indicators, remote cameras).
Presently the most numerous 3G terminals are hand-held
phones. They span a broad range of technological platforms,
a major point of difference (for the moment) from the wired
Internet that is essentially a monoculture. The last aspect
is critical when considering malware infections: such a "bi-ological
variety" intrinsically limits the potential infection
scope, which in turn reduces somehow the very appeal for
programming new pieces of malware. As a result, large-scale
infections of cellular phones have not yet been observed, even though
a growing number of exploits and pieces of malicious
code targeting GPRS/UMTS phones have already appeared
in the wild (e.g. Cabir, Mosquito, Comwarrior).
3G datacards for laptop.
Many 3G datacards for laptop
were sold starting in 2004, often coupled with flat-rate offers.
Most of these laptops are equipped with Microsoft Windows
- note that for some datacards drivers are not available for
other operating systems. This introduced into the 3G environment
a sub-population of homogeneous terminals, i.e.
Windows laptops, that are intrinsically exposed to all kinds
of exploits and infections that are found in the wired Internet
. In case of active infection (e.g. a scanning worm) they
introduce into the 3G network the same "unwanted" traffic
patterns (e.g. probe SYN packets) that are found in wired
LANs and in the Internet.
PROBLEM STATEMENT
Unwanted traffic.
The term "unwanted traffic" has been
used in [16] to refer cumulatively to those traffic components
originated directly or indirectly by malicious or anyway "non
productive" activities. It includes backscatter traffic asso-ciated
to remote DoS attacks, scanning probes, spam, exploit
attempts etc. Unwanted traffic might have a negative
impact onto the underlying network, and in extreme cases
drive the network or at least some of its elements to crash.
A bright example was provided by the spreading of Code-Red
-II in 2001 [3]. Once installed on a victim host, the
worm started to scan for new potential victims by sending
a high rate of probing TCP SYN packets to random
addresses. This caused troubles to the packet forwarding
modules of several edge routers all over the Internet, some
of which eventually crashed [4]. In simple words, the problem
is that route caching mechanisms were designed (and
optmized) to operate under "normal" (i.e. expected) traffic
conditions, where most of the packets are directed to a
relativelly small subset of popular subnets. In such nominal
condition, route caching can be very effective. But during
the infection probing SYN packet were massively generated
and sent to randomly chosen IP addresses, thus driving the
cache access mechanisms to explode. In other words, the
worm infection built-up a traffic aggregate macroscopically
different from the "normal" pattern, and the network proved
to be not robust enough to sustain such different conditions.
The lesson to be learned is that in terms of the characteristics
of the macroscopic traffic aggregate (entropy of the
destination IP address distribution, packet size, etc.) large
infections or other unwanted traffic components can expose
the network to a different "operating point" from what the
network was engineered and optimized for, with potentially dramatic effects (in this regard, this is another example of lack of robustness to unanticipated types of events in HOT systems [11]).
Potential impact on 3G.
In principle, the 3G network is
exposed to the same type of incidents, and perhaps even
more so given the higher functional complexity inherited from the
wireless cellular paradigm. The 3G network is ultimately an
IP network, but with important peculiarities. First, the underlying
transport stratum, specifically the 3G-specific lower
protocols in the RAN, are endowed with very high functional
complexity and signaling interactions - mainly for the sake
of mobility management and efficient resource management.
Second, the population of internal "hosts" is extremely large
(from thousands to millions of MSs) and highly dynamic (activity
periods can be as short as few seconds). The potential
impact of large-scale infections and unwanted traffic in such
a system is an intriguing point for research, that has not yet
been addressed by the research community. The existence of
the problem has been conjectured in a previous work [9, p.
447-448]. In the absence of past empirical events, it is not possible to claim that 3G networks are exposed to serious damage from large infections. On the other hand, without a systematic risk assessment it is not possible either to provide a priori guarantees about their robustness. Empirical evidence of
the very existence of unwanted traffic in a real 3G network
has been reported in [6] along with initial but technically-detailed
speculations on the potential impact that the observed
traffic would have under certain hypothetical conditions
and configuration setting. The actual impact, if any,
depends on a combination of factors related to the network
configuration and equipment features. In the following we
illustrate the problem by discussing a few examplary forms
of impact that might take place in a real network.
Stateful elements.
The presence of massive amounts of
TCP SYN packets might cause troubles to those stateful
elements designed to reserve resources for each TCP connection (e.g. application layer proxies, servers, NATs). Note
that some stateful operations might be enabled also on the
GGSNs. In these cases the GGSN logic should be robust to
high rates of SYN packets coming from the MSs.
Large volumes of SYN packets might be originated by deliberate
DoS/DDoS or from large-scale infections of scanning
worms. In both cases, the source(s) can be hosts in the Internet
(exogenous traffic) or other MS in the RAN (endogenous
traffic). In general, exogenous traffic can be blocked at
the external firewall as for any other private network. The
first element to inspect the IP packets sent by the MSs is
the GGSN. The latter generally embeds full router capabilities
, therefore it can be configured with the same stateless
/ stateful firewalling policies and/or throttling mechanisms
(see e.g. [12]) to filter endogenous uplink traffic. For an
improved robustness against residual unblocked SYNs, all
stateful elements should be designed to resist massive SYN
storms rather than just rely on external filtering elements.
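As an illustration of the kind of per-MS throttling mentioned above (loosely inspired by the virus-throttle idea of [12]), the following sketch limits the rate at which a mobile station may open connections towards previously uncontacted destinations; the thresholds and data structures are hypothetical and not taken from any GGSN or vendor implementation.

import time
from collections import defaultdict, deque

class SynThrottle:
    # Allow at most max_new SYNs towards new destinations per MS within a sliding window (seconds).
    def __init__(self, max_new=10, window=60.0):
        self.max_new, self.window = max_new, window
        self.recent = defaultdict(deque)     # MS id -> timestamps of SYNs to new destinations
        self.seen = defaultdict(set)         # MS id -> destinations already contacted

    def allow(self, ms_id, dst_ip, now=None):
        now = time.time() if now is None else now
        if dst_ip in self.seen[ms_id]:
            return True                      # repeated destination: not scanning-like behaviour
        q = self.recent[ms_id]
        while q and now - q[0] > self.window:
            q.popleft()
        if len(q) >= self.max_new:
            return False                     # too many new destinations: drop or delay the SYN
        q.append(now)
        self.seen[ms_id].add(dst_ip)
        return True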
Wastage of logical resources.
The UMTS radio bearer
channels (called Dedicated Channel, DCH) are assigned dynamically
to active MSs. The assignment policy is implemented
in the RNC and is generally based on a combination
of timeouts from the last data packet and thresholds on
the recent sending / receving rates. The exact algorithm is
vendor-dependent, with parameters configurable by the operator
. Let us consider here the simplest case of a purely
timeout-based DCH assignment policy: the DCH is assigned
to the MS at the time of the first packet (sent or received),
and is released after T_DCH seconds from the last packet, T_DCH being the holding timeout for DCH. Note that when the MS does not have an assigned DCH, packets are exchanged
on the common channels FACH or RACH (see [8,
Ch. 7]). Note also that each channel switch operation involves
a signaling procedure at the radio interface, contributing
to the total transfer delay for the arriving packet. The
value of T_DCH must be tuned carefully. Too short values cause a high frequency of channel switch cycles, and consequently
(i) a higher consumption of signaling resources on
the radio link and (ii) longer packet delays and hence worse
user experience. On the other hand, too long values will
lead to wastage of logical resources, i.e. DCHs, whose available number is limited in each cell. Therefore, the optimal value of T_DCH must be chosen according to the distribution
of idle-period duration for "typical users".
Given such a framework, consider what happens when a number of infected terminals are scanning the local address space. Each active MS (not necessarily infected) will be visited by scanning probes at an average rate of R_v pkt/sec. The exact value of R_v depends on several factors like the number of scanning MSs, scanning rate, etc. (see [6] for more details), and the corresponding probe interarrival time can typically be in the order of a few seconds or below. In case the average probe interarrival time is smaller than the DCH holding timer, i.e. (R_v)^-1 < T_DCH, the
incoming unwanted traffic will keep the DCH channel assigned
to the target MSs indefinitely, until the user switches
off the terminal or explicitly closes the PDP-context (the "PDP-context" is the logical connection to the 3G network, conceptually similar to a wired modem dial-up). Note
that the volume in byte count of such incoming background
traffic is extremely low and would pass unnoticed by the
user. No assumption is made about the vulnerability of the
target MS to the specific exploit, the only condition being
that it is reachable by probing packets, i.e. it has an active
PDP-context. Such always-on "spurious" DCH waste
resources on the radio interface. Notably, wastage is limited
to the logical resources, i.e. DCH, since the physical
bandwidth is left largely unused as only sporadic and small
packets (probe SYNs) are transmitted over the air. Such
phenomenon might lead to logical congestion of some radio
cells as soon as the number of active MSs in the cell reaches
the number of available DCHs.
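A back-of-the-envelope sketch of the condition discussed above is shown below; the scanning rate, number of infected MSs, address space size and T_DCH value are purely illustrative assumptions.

def probe_interarrival(n_scanners, scan_rate, addr_space):
    # Average time (s) between probes hitting one given active MS, assuming uniform random
    # scanning of addr_space addresses by n_scanners infected MSs at scan_rate probes/s each.
    r_v = n_scanners * scan_rate / addr_space
    return 1.0 / r_v

t_dch = 5.0                                               # DCH holding timeout in seconds
tau = probe_interarrival(n_scanners=200, scan_rate=100.0, addr_space=2**16)
print(round(tau, 1), "s between probes:", "DCH kept alive" if tau < t_dch else "DCH released")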
Signaling overhead.
One key assumption in the above scenario is that the average interarrival time of background packets, (R_v)^-1, is smaller than the DCH holding timeout, i.e. (R_v)^-1 < T_DCH. Other problems arise in case the interarrival time is higher than but close to T_DCH, i.e. (R_v)^-1 = T_DCH + epsilon for a small epsilon, particularly in the case of a low T_DCH. In this case, a DCH reassignment immediately follows each DCH release, at a rate close to 1/T_DCH, thus wasting
signaling bandwidth in the radio section. Again, the more
"victims" are present in the same cell the higher the impact.
CONCLUSIONS
We warn that unwanted (or "background") traffic can
have an impact onto the functionally-complex 3G network,
at least under certain conditions of network configuration
and setting. Real measurements [6] provide evidence of the
presence of such traffic inside a real GPRS/UMTS network.
We have speculated on its potential impact under hypothetical
network conditions (e.g. MS-to-MS communication enabled
, no firewalling set in the GGSNs). The extent to which
such conditions are effectively found in a real network is unknown
, as mobile operators do not disclose details about the
deployment and configuration of their networks. Since the
actual impact, if any, depends pointedly on a combination
of factors related to the network configuration and equipment
features, in many cases the relevant countermeasures
and fixes are obvious or anyway simple to implement once
that the potential risk has been identified. Often preventive
actions are as simple as a careful and informed network engineering
and equipment configuration. For instance, stateful
firewalling at the GGSN prevents probe packets to reach
the target MS thus avoding DCH channels to be "spuri-ously"
kept alive by background traffic. Alternatively, a
more sophisticated DCH assignment strategy (e.g. based on
thresholds on the packet rate) would alleviate the problem.
However, such features might never be activated without an
explicit recognition of the problem of unwanted traffic and
its consequences. In summary, the very first problem is to
recognize and assess the potential risks, which might be hidden
in the intricate web of interactions and dependencies
embedded within the functionally-complex 3G network.
The potential risks due to the presence of unwanted traffic
must be taken into account in the design of the network setting
, so as to avoid the emergence of hazardous conditions.
A coherent process of risk assessment should be considered
as a natural component of the network engineering process.
In turn, risk recognition must be based on a thorough understanding
of the specific traffic environment, which is continuously evolving following the emergence of new services, new
types of terminals, new forms of infections, new attacks, etc.
Automatic or semi-automatic methods can be implemented
to detect drifts in the macroscopic composition of the traffic,
including the rise of new components of unwanted traffic,
borrowing concepts and tools from the recent achievements
in the field of anomaly detection in the Internet. The prerequisite
for all that is a continuous (always-on) process of
large-scale traffic monitoring and analysis from inside the
network, i.e. on the internal interfaces like Gn.
REFERENCES
[1] DARWIN home page: http://userver.ftw.at/ricciato/darwin.
[2] A. Bavosa. Attacks and Counter Measures in 2.5G
and 3G Cellular IP Networks. Juniper White Paper,
June 2004. Online at www.juniper.net/solutions/literature/white papers/200074.pdf.
[3] C.C. Zou, W. Gong, D. Towsley. Code Red Worm
Propagation Modeling and Analysis. 9th ACM Conf.
on Computer and Comm. Security (CCS'02), 2002.
[4] Cisco. Dealing with mallocfail and High CPU
Utilization Resulting From the "Code Red" Worm.
www.cisco.com/warp/public/117/ts codred worm.pdf.
[5] E. Barkan, E. Biham, N. Keller. Instant Ciphertext-Only
Cryptanalysis of GSM Encrypted Communications
. Crypto 2003, Santa Barbara, CA, August 2003.
[6] F. Ricciato, P. Svoboda, E. Hasenleithner, W.
Fleischer. On the Impact of Unwanted Traffic onto a
3G Network. Technical Report FTW-TR-2006-006,
February 2006. Available online from [1].
[7] G. M. Koien. An Introduction ro Access Security in
UMTS. IEEE Wireless Communications, 11(1), 2004.
[8] H. Holma, A. Toskala. WCDMA for UMTS. Wiley.
[9] H. Yang, F. Ricciato, S. Lu, L. Zhang. Securing a
Wireless World. Proceedings of the IEEE, 94(2), 2006.
[10] J. Bannister, P. Mather, S. Coope. Convergence
Technologies for 3G Networks. Wiley, 2004.
[11] J. M. Carlson, J. Doyle. HOT: Robustness and design
in complex systems. Phys. Rev. Let., 84(11), 2000.
[12] J. Twycross, M. M. Williamson. Implementing and
testing a virus throttle. Tech. Report HPL-2003-103,
May 2003. Online www.hpl.hp.com/techreports/2003.
[13] K. Ahmavaara, H. Haverinen, R. Pichna. Interworking
Architecture Between 3GPP and WLAN systems.
IEEE Communications Magazine, November 2003.
[14] L. Kleinrock. The Internet: History and Future. Lectio
Magistralis at Politecnico di Torino, October 2005.
Online at www.tlc.polito.it/nordio/seminars.
[15] O. Whitehouse. GPRS Wireless Security: Not Ready
For Prime Time. Research report by @stake, June 2002.
Online at www.atstake.com/research/reports.
[16] R. Pang et al. Characteristics of Internet Background
Radiation. IMC'04, Taormina, Italy, October 2004.
[17] S. Keshav. Why Cell Phones Will Dominate the
Future Internet. Computer Communication Review,
35(2), April 2005.
[18] W. Enck, P. Traynor, P. McDaniel, T. La Porta.
Exploiting Open Functionality in SMS Capable
Cellular Networks. 12th ACM Conf. on Computer and
Comm. Security (CCS'05), November 2005.
| Unwanted traffic;Cellular networks;3G
206 | Use of Contextualized Attention Metadata for Ranking and Recommending Learning Objects | The tools used to search and find Learning Objects in different systems do not provide a meaningful and scalable way to rank or recommend learning material. This work propose and detail the use of Contextual Attention Metadata, gathered from the different tools used in the lifecycle of the Learning Object, to create ranking and recommending metrics to improve the user experience. Four types of metrics are detailed: Link Analysis Ranking, Similarity Recommendation, Personalized Ranking and Contextual Recommendation. While designed for Learning Objects, it is shown that these metrics could also be applied to rank and recommend other types of reusable components like software libraries. | INTRODUCTION
One of the main reasons to capture and analyze the information
about the interaction between a user and a tool is to improve the
user experience. For example, a online library system could
record the subject of the books that a client has bought before in
order to recommend him new books about a similar subject the
next time he/she logs in, saving him/her the hassle to search for
them [1]. A news web site could record the topic of the news
articles that a user normally read in order to filter out news that do
not interest such user [2]. A collaborative browser could use the
information recollected from the browsing patterns of a given
community to improve the rank of different pages on the searches
of an individual user, member of that community [3]. The
generic name of Attention Metadata[4] has been applied to
describe the information about these interactions.
When the stored information does not only contain the reference to the user and the action performed, but also registers when the action took place, through which tool the action was performed, what else the user was doing at the same time, what the profile of the user performing the action is, to what community he/she belongs, etc., it leads to an improved and more
useful form of record, called Contextualized Attention Metadata
[5] (CAM). AttentionXML [6] and its extensions [5] are an effort
to standardize the way in which CAM is stored. This
standardization will lead to the opportunity to share attention
records between different applications. For example, a second
generation of an Attention-Sharing online library could know
which news topics the user is interested in and it could
recommend him/her books related to those topics.
The authors believe that one group of applications that could
greatly benefit from CAM information is the search and find of
Learning Objects. These applications have suffered from an
under-par performance compared to similar applications in other
fields [7] [8]. The main reason for this is the lack of a meaningful
and scalable way to rank or recommend the objects to the users.
Currently, two main methods are used to rank (not even
recommend) Learning Objects: Manual Rating or Metadata
Content Rating. In the first approach, Manual Rating, each
Learning Objects should be rated by a group of experts and/or the
user community. For each search, the returned objects are ranked
based on their average rate. While this is bound to provide
meaningful ordering, it does not scale. For example MERLOT
use this approach, but only 10% of the total content of the
database has ever be rated [9]. The other approach, use only the
information contained in the metadata record to perform ranking
based on the similarity with the query terms. The most common
method used for this is the TFIDF metric [10], which measures in a Vector Space the distance between the query vector and the vector composed from the text contained in the metadata record. Given that TFIDF was designed to work over full text documents
and that metadata records contain very few textual descriptions
[11], normally the ordering is not meaningful for the user. SILO
(Search and Indexing Learning Objects) tools from the
ARIADNE [12] repository use this approach. CAM could be
used to generate a third approach, one in which the human
attention (meaningful) is processed to construct an automated
(scalable) rating and recommending procedure.
The following sections of this work describe in detail what
information should be stored in the CAM record of Learning
Object Applications (Section 2) and the mechanisms by which
such information could be used to generate rating and
recommending metrics (Section 3). It is also discussed how these
mechanisms and metrics could be applied to related contexts
(Section 4) and which research questions need to be addressed in
further work (Section 5). The work concludes with an overview of related research (Section 6).
CAM FOR LEARNING OBJECTS APPLICATIONS
Users interact with a Learning Object through the object's whole
lifecycle. CAM recorders capture and timestamp all those
interactions in order to provide the information needed to
calculate useful metrics to be used in a next generation breed of
learning object management tools. According to the
AttentionXML extension proposed by Najjar et al at [5], these
interactions are stored inside an Action record. This work will
briefly list the different Actions that should be recorded through
the Learning Object lifecycle. The lifecycle phases are taken
from the enumeration done by Collins and Strijker in [13]. Also,
it is suggested which applications should generate the attention
records.
Creation: In this phase the author creates or assembles the
learning object in its digital form using some sort of authoring
tool. The Creating Action should be captured and it must include
the identity of the created object, its author(s), the authoring tool
used and the list of component-objects [14] reused through the
creation process. This record should be created by the authoring
tool, for example Microsoft Power Point.
Labeling: At this stage the author, an indexer or even an
automated system could add a metadata record that describes the
Learning Object. The Labeling Action must include information
that identify the object, the labeler, the origin of the metadata
(Automated, Semi-Automated, Manual), the metadata format
used, the level of confidence of the information (how sure the
autor is that metadata values are correct) and a unique identifier
for the metadata record. Normally this record should be also
created by the authoring tool at the end of the creation of the
objects, but could also be created by metadata editors as [15] or
automated metadata generators as [16].
Offering: At this stage the author or indexer inserts the object in
a repository or other system that allow the object to be shared
with others. The Inserting Action must include the following
information: Object Unique Identifier, Inserter, Tool Used and
Learning Object Unique Identifier inside the sharing tool. This
record should be created by the sharing tool, being it a Learning
Object Repository or a Peer to Peer sharing application.
Selecting: In this stage the user searches, finds and selects Learning
Objects in the Sharing System. Several Actions should be
captured during this phase. A Searching Action when a query is
performed to find relevant objects. It must include information
that describe the query performed and the objects returned. A
Recommending Action when the system suggests relevant objects
without the user performing a query. It must contain a list of the object(s) recommended, the user action that triggered
the recommendation and the tool used to perform the
recommendation. A Browsing Action when the user reviews the
metadata or description of an object. It must store information
that identifies the metadata record browsed and the time expend
in the review. Finally, A Selecting Action when the user chooses
an object by downloading it or accepting the recommendation. It
must contain information that identifies the selected object. All
this actions should also contain information about the user that
performs the action. These records should be generated by the
sharing or recommending tool.
Using: This stage comprises all the actions that the final user
(instructor or learner) performs with the learning object during its
normal utilization in a learning environment. There are several
actions to be registered. A Publicating Action when the instructor
inserts the object into a Course belonging to some kind of
Learning Management System. It must contain information that
identifies the published object and the context (course, lesson)
where it was published. A Sequencing Action when one or more
objects are included in an instructional design or sequenced
package. It must contain information about the identification (in
an ordered form) of the integrated objects. A Viewing Action when the object is read or viewed by learners. It must contain information
about the time spent reviewing the material. An Annotating
Action when the instructor or the learner add a comment or rate
the learning object. It must include information about the
comment or the rate given and the identifier of the object. All
these actions should also store information about the user that
performs them. Different tools should be in charge of the
generation of the attention records, a LMS for the Publishing
Action, a Learning Activity Management System [17] or SCORM
[18] Packager for the Sequencing Action, a Web browser or
document reader for the Viewing Action and a Rating or Review
system for the Annotating Action.
Table 1. Proposed CAM information to be stored for Learning Object Applications.
Lifecycle    Action          Main Information                       Source
Creation     Creating        author, components                     Authoring tool, Components
Labeling     Labeling        metadata format, origin, confidence    Authoring tool or Metadata generator
Offering     Inserting       inserter                               LOR or Sharing app.
Selecting    Searching       query, results                         LOR's search tool
Selecting    Recommending    objects recommended                    Recommender
Selecting    Browsing        time                                   LOR or Recommender
Selecting    Selecting       object identifier                      LOR or Recommender
Using        Publicating     LMS context                            LMS
Using        Sequencing      list of sequenced objects              ID tool or Packager
Using        Viewing         time, tool used                        Browser or Reading app.
Using        Annotating      rate or review                         LMS
Retaining    Retaining       decision to keep or delete             LMS
Retaining: In this phase, the instructor checks for the validity of
the learning object and decides if it is still useful or if it should be
replaced / updated. The Retaining Action should contain
information that identifies the object and the decision taken (keep,
update, delete). This attention record will normally be generated
by the LMS where the object has been published.
A summary of the Actions that CAM should record is presented
in Table 1. Some of these CAM Actions (Creating, Inserting,
Selecting, Viewing) are already produced and stored in different
tools [5]. The others are easy to implement in existing tools
taking into account that most of them (LMS, Metadata Generators, etc.) already produce a log with the user's interactions. In the next section, metrics to exploit these Action records to improve tools to
search and find Learning Objects are proposed.
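To make the proposal more concrete, a minimal sketch of how such an Action record could be represented is given below; the field names are illustrative and do not reproduce the exact AttentionXML schema or its extensions in [5].

from dataclasses import dataclass, field
from datetime import datetime
from typing import Dict

@dataclass
class CamAction:
    # One Contextualized Attention Metadata action record (illustrative structure).
    action_type: str                    # e.g. "Selecting", "Publicating", "Annotating"
    object_id: str                      # Learning Object identifier
    user_id: str                        # user that performed the action
    tool: str                           # tool that generated the record (authoring tool, LOR, LMS, ...)
    timestamp: datetime = field(default_factory=datetime.utcnow)
    details: Dict[str, str] = field(default_factory=dict)   # action-specific fields

# Example: an instructor publishes an object into a course of an LMS
publish = CamAction("Publicating", "LO-4711", "instructor-07", "ExampleLMS",
                    details={"context": "course-CS101/lesson-3"})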
RANKING AND RECOMMENDING METRICS USING CAM
Several ranking and recommending metrics will be proposed.
These metrics will use only two sources of information to be
calculated: the first one is the Learning Object Metadata (LOM)
[19] record that describes each Learning Object; the second one is
the CAM Actions described in the previous section.
3.1 Link Analysis Based Ranking
One of the most famous and successful ranking algorithms at the
present is PageRank [20]. PageRank use the information
contained in the network of links between web pages to calculate
the relative "importance" of a page. It could be summarized as: a
page is important if it is linked by a high number of pages. Also,
the importance increases if the pages linking to it have also a high
importance rank. Unfortunately, this algorithm could not be
applied directly to Learning Objects. While LOM records have a
linking field, it is rarely populated [11]. Also, LOM linking
reflect just a semantic relationship; it does not imply a "vote" for
that object as it is assumed for Web pages.
As an alternative to the explicit linking structure that the web
possesses, CAM allows us to create an implicit linking between
Learning Objects and other entities related to them: Authors,
Users, Courses, Learners, etc. For example: Creating Actions can
be converted into a link between an author and an object,
Selecting Actions can be converted into a link between a user and
an object, Publicating Actions can be converted into a link
between a course and an object and also between a user and the
same object. Viewing Actions can be converter into a link
between a learner and an object. As result of this conversion of
CAM to links between different entities, a K-partite graph is
created (a graph with different partitions, where there are not links
between nodes of the same partition). In this graph each type of
entity (Learning Object, User, Course, and Learner) is considered
a partition Figure 1 present diagram of an example of such a
graph.
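As a rough illustration, the sketch below (plain Python, assuming a simplified three-field record format rather than the full CAM/AttentionXML schema) shows how action records could be turned into the typed edge lists of such a K-partite graph.

```python
# A minimal sketch: hypothetical CAM records (action, acting entity, object)
# are grouped into one edge list per action type; every edge links a node of a
# non-object partition (author, user, course, learner) to an object node.
from collections import defaultdict

cam_records = [
    ("Creating",    "author:A1",  "object:O1"),
    ("Selecting",   "user:U1",    "object:O1"),
    ("Selecting",   "user:U2",    "object:O1"),
    ("Publicating", "course:C1",  "object:O1"),
    ("Viewing",     "learner:L1", "object:O1"),
]

def build_k_partite_graph(records):
    edges = defaultdict(list)
    for action, entity, obj in records:
        edges[action].append((entity, obj))
    return edges

graph = build_k_partite_graph(cam_records)
print(graph["Selecting"])   # [('user:U1', 'object:O1'), ('user:U2', 'object:O1')]
```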
Once CAM information is represented as a graph, it is easy to use
basic graph algorithms to calculate ranking metrics. Following
are some metrics that could be developed this way:
Popularity Rank (PR): Using the information contained in
the Selecting Action (already converted into a 2-partite graph),
it is easy to obtain the number of times that an object has
been downloaded: just count the number of
incident links that each Learning Object node receives from nodes
in the User Partition. This metric is just a basic way to put
the most downloaded objects first in the result list.
PR(object) = inDegree(object)
Figure 1. K-Partite Graph representation of CAM
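A minimal sketch of this count, assuming the Selecting sub-graph is available as a list of (user, object) edges:

```python
# Popularity Rank: count the incident links each object receives from users.
from collections import Counter

select_edges = [("U1", "O1"), ("U2", "O1"), ("U2", "O2"), ("U5", "O2"), ("U5", "O3")]

def popularity_rank(edges):
    """PR(object) = inDegree(object) in the User-Object 2-partite graph."""
    return Counter(obj for _user, obj in edges)

print(popularity_rank(select_edges))  # Counter({'O1': 2, 'O2': 2, 'O3': 1})
```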
Author-Corrected Popularity Rank (ACP): Combining
the Creating and Selecting Actions, it can be calculated
how popular an object is based on its number of downloads
and the popularity of its author. The first step is to create a
3-partite graph with User, Object and Author partitions.
Then the Popularity Rank (PR) is calculated for all the
objects. Next, the Author Popularity (AP) is calculated by
adding the PR of all the Learning Object nodes that are
linked to the author node. Finally, the AP is multiplied by a
weighting factor and added to the also weighted PR. This
metric enables new objects (that do not yet have any downloads)
from a frequently downloaded author to appear higher in the list.
AP(author) = Σ_i PR(object_i), for every object_i linked to the author
ACP(object) = α · PR(object) + β · AP(author)
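A small sketch of ACP; the weighting factors alpha and beta below are illustrative placeholders, not values given in the paper.

```python
# Author-Corrected Popularity Rank on toy edge lists.
from collections import Counter

select_edges = [("U1", "O1"), ("U2", "O1"), ("U3", "O2")]   # User -> Object
author_edges = [("A1", "O1"), ("A1", "O2"), ("A1", "O3")]   # Author -> Object

def acp(select_edges, author_edges, alpha=1.0, beta=0.5):
    pr = Counter(obj for _u, obj in select_edges)           # PR(object)
    ap = Counter()                                          # AP(author)
    author_of = {}
    for author, obj in author_edges:
        ap[author] += pr[obj]
        author_of[obj] = author
    return {obj: alpha * pr[obj] + beta * ap[author_of[obj]] for obj in author_of}

print(acp(select_edges, author_edges))
# O3 has no downloads yet but still gets a score through its author's popularity.
```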
Weighted Popularity (WP): Selecting, Publicating and
Retaining Actions can be combined to generate a 2-partite
graph between Users and Learning Objects. The links of this
graph are weighted: a link made from the
Retaining information (inDegree_R) has more weight
than a link made from the Publication information
(inDegree_P), and, in the same way, Publication links weigh
more than Selection links (inDegree_S). The rationale behind
this metric is that different actions express different levels of
"preference" for an object. If the instructor has used the
object and is happy enough with it to keep it for the next
semester, that is a stronger vote of support than just using it for
the first time or merely downloading it. That difference in
importance is represented in the weight given to each kind of
link.
WP(object) = α · inDegree_R(object) + β · inDegree_P(object) + γ · inDegree_S(object), with α > β > γ
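A sketch of WP under assumed weights; the concrete values of α, β and γ are not given in the paper and are chosen here only to satisfy α > β > γ.

```python
# Weighted Popularity: one User-Object graph whose links are weighted by the
# action that created them.
from collections import defaultdict

WEIGHTS = {"Retaining": 3.0, "Publicating": 2.0, "Selecting": 1.0}  # alpha > beta > gamma

cam_links = [
    ("Selecting",   "U1", "O1"),
    ("Publicating", "U1", "O1"),
    ("Retaining",   "U1", "O1"),
    ("Selecting",   "U2", "O2"),
]

def weighted_popularity(links):
    wp = defaultdict(float)
    for action, _user, obj in links:
        wp[obj] += WEIGHTS[action]
    return dict(wp)

print(weighted_popularity(cam_links))   # {'O1': 6.0, 'O2': 1.0}
```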
Rate of Reuse Rank (RRR): Using the Selecting Action (or
also the Publicating and Retaining Actions, as in the previous
metric), the number of times that an object has been
downloaded during a given period of time P (last week,
month, year) can be calculated. The 2-partite graph (User
and Object partitions) is constructed taking into account
only the Actions that occurred in the given period of time. For
example, if the last week is selected as P, this rank
measures how often the object has been downloaded
(inserted or retained) in the last 7 days. This value can be
normalized by the age of the object, obtained from the
related Creation Action. This metric helps to rank higher
objects that have been reused frequently and are relatively
new.
RRR(object) = inDegree(object) / age(object), counting only links created inside period P
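A sketch of RRR, assuming creation dates are available from the Creating Actions and download dates from the Selecting Actions; all dates below are invented.

```python
# Rate of Reuse Rank: count only Selecting links inside period P and normalize
# by the object's age in days.
from collections import Counter
from datetime import date, timedelta

today = date(2006, 9, 1)                      # illustrative "now"
period = timedelta(days=7)                    # P = last week
created = {"O1": date(2006, 8, 1), "O2": date(2005, 9, 1)}
selects = [("U1", "O1", date(2006, 8, 30)), ("U2", "O1", date(2006, 8, 31)),
           ("U3", "O2", date(2006, 8, 30)), ("U4", "O2", date(2006, 6, 1))]

def rate_of_reuse(selects, created, today, period):
    recent = Counter(obj for _u, obj, when in selects if today - when <= period)
    return {obj: recent[obj] / max((today - created[obj]).days, 1) for obj in created}

print(rate_of_reuse(selects, created, today, period))
# The young, recently downloaded O1 scores higher than the old O2.
```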
Manual Rank (MR): Using the information stored in
the Annotation Action, the number of times that an object
has been positively (or negatively) rated or reviewed can
be used to calculate a metric. A 2-partite graph (User
and Object partitions) is created. The procedure weights
a link as 1 if it corresponds to a positive rate or review and -1 if it
corresponds to a negative one. The actual value of the rate is only
used to decide whether it is a positive or negative "vote",
because different users and systems use different grading
scales. Reviews can only be considered if their
positive or negative orientation is included in the
Annotation Action or can be automatically inferred from
the text.
MR(object) = inDegree_Positive(object) - inDegree_Negative(object)
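A sketch of MR, assuming each rating has already been reduced to a numeric value; the threshold used to call a rating positive is an assumption for the example.

```python
# Manual Rank: collapse each rating to a +1/-1 vote and sum the votes per object.
from collections import defaultdict

ratings = [("U1", "O1", 5), ("U2", "O1", 4), ("U3", "O1", 1), ("U4", "O2", 2)]

def manual_rank(ratings, positive_threshold=3):
    mr = defaultdict(int)
    for _user, obj, rate in ratings:
        mr[obj] += 1 if rate >= positive_threshold else -1
    return dict(mr)

print(manual_rank(ratings))   # {'O1': 1, 'O2': -1}
```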
These metrics can be calculated off-line because they are not user
or query specific. They estimate an average importance or
relevance of the learning objects based on the aggregation of
attention information. These metrics, and others that may be
developed later, could be integrated into a final ranking
metric. This Compound Popularity metric (CP) can be calculated
as the weighted sum of the values of the individual metrics. For
example, Google integrates more than 100 different simple
metrics in order to produce its results [21].
CP = α · PR + β · ACP + γ · WP + δ · RRR + ε · MR
The weighting coefficients (α, β, etc.) should be estimated (a non-trivial
procedure) to provide an optimal result ordering. Methods
to make these estimates are described in [22] and [23]. Also,
manual rates should be used carefully because the Annotation
Information is optional and may not exist for all the objects
involved in the calculation.
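A sketch of the compound metric; the coefficient values below are placeholders, since their estimation is left open by the paper.

```python
# Compound Popularity: weighted sum of the individual metric scores per object.
def compound_popularity(scores, weights):
    """scores: {metric_name: {object: value}}, weights: {metric_name: coefficient}."""
    objects = {obj for metric in scores.values() for obj in metric}
    return {obj: sum(w * scores[m].get(obj, 0.0) for m, w in weights.items())
            for obj in objects}

scores  = {"PR": {"O1": 2, "O2": 1}, "WP": {"O1": 6.0, "O2": 1.0}, "MR": {"O1": 1}}
weights = {"PR": 0.4, "WP": 0.4, "MR": 0.2}
print(compound_popularity(scores, weights))  # {'O1': 3.4, 'O2': 0.8}
```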
3.2 Similarity Metrics for Recommendation
One property of a 2-partite graph is that it can be folded over one
of its partitions, generating a normal graph with just one entity type
and links between its nodes. For example, if we have a 2-partite
graph of Users that have downloaded Learning Objects, we can fold it
over the Learning Object partition and we end up with a
graph where the Users are linked to each other. Each link means
that those two users have downloaded the same object at least once.
This new graph can be used to calculate similarity between the
users based on their download patterns. Figure 2 shows a
representation of the folding result. The first part of the figure
represents a 2-partite graph with the User and Object partitions.
The graph shows, for example, that User 5 has downloaded
Object 2 and Object 3 and User 1 has only downloaded Object 1.
The second part of the figure illustrates the folded version of the
graph. In this new graph, two users have a link between them if
they linked to the same object in the unfolded graph. The more
objects the users have in common, the thicker the line. For
example, User 1 and User 4 are linked because they both have
downloaded Object 1. User 2 and User 5 have a stronger link
because they both have downloaded Object 2 and Object 3. This
technique is similar to the one applied in scientometrics to obtain
relations between different authors based on the papers they have
co-authored [24].
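A sketch of the folding operation on a toy download graph; it reproduces the relations described for Figure 2.

```python
# Fold a 2-partite User-Object graph over the Object partition: two users become
# linked with a weight equal to the number of objects they downloaded in common
# (the "thickness" of the line in Figure 2).
from collections import defaultdict
from itertools import combinations

downloads = [("U1", "O1"), ("U4", "O1"), ("U2", "O2"), ("U5", "O2"),
             ("U2", "O3"), ("U5", "O3")]

def fold_over_objects(edges):
    users_of = defaultdict(set)
    for user, obj in edges:
        users_of[obj].add(user)
    folded = defaultdict(int)
    for users in users_of.values():
        for u, v in combinations(sorted(users), 2):
            folded[(u, v)] += 1          # shared objects between u and v
    return dict(folded)

print(fold_over_objects(downloads))
# {('U1', 'U4'): 1, ('U2', 'U5'): 2} -- U2 and U5 share the stronger link
```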
Figure 2. Unfolded and Folded 2-Partite Graph (left: 2-partite graph of Users and Objects; right: folded normal graph of Users)
We present several similarity metrics that can be calculated using
the information contained in CAM Actions detailed in the
previous section.
Object Similarity based on Number of Downloads:
Create a 2-partite graph with the information of the Select
Actions (when a User downloads a Learning Object), and fold it
over the User Partition. A link between two Objects in the
final graph means that those objects have been downloaded
by the same user. The strength of the similarity is the number of
users that have downloaded both objects.
Object Similarity based on Re-Use: Create a 2-partite
graph with the information from the Publish Actions (when a
Learning Object is inserted into a Course), and fold it over the
Course Partition. A link between two Objects in the final
graph means that the two objects have been inserted in the same
course. The strength of the similarity is the number of courses
that include both objects.
User Similarity based on Downloads: Create a 2-partite
graph with the information from the Select Actions, and fold it
over the Object Partition. A link between two Users means
that they have downloaded the same object. The strength of
the similarity is the number of objects that the users have in
common.
Author Similarity based on Re-Use of Components: The
Creation Action information can be used to identify re-use
of learning object components. For example, several authors
could use the same picture or diagram inside their
presentations. As the Creation Action stores information
about which existing components have been reused (see
Section 2), a 2-partite graph between Authors and
Components can be created and then folded over the
Components partition. The new graph represents
relationships between different authors: the more components
two authors have used in common, the stronger their
similarity.
The similarity metrics obtained from these graphs could then be
applied in recommendation tools. For example, if a user finds an
object useful, links to similar objects could be provided (similar
to what Amazon does with books [1]). Also, the similarity
between users can be exploited to recommend Learning Objects
to a user based on what other users in the same
community have recently downloaded (similar to collaborative
browsing applications [3]). To automatically extract the
communities from the graph, an algorithm like EdgeBetweenness
[25] can be applied. The same procedure could be applied to the
Author Similarity graph: the communities of authorship can be
automatically extracted from the graph, and an author can then be
recommended components that have been created by other
authors in the same authorship community.
Besides recommendation systems for Learning Objects, these
similarity metrics can be treated as distance metrics. A
distance metric can be used inside clustering algorithms to
automatically find groups of similar objects. These clusters could
be used to improve the presentation of search results, much as
Vivisimo [26] does for Web pages.
3.3 Personalized Ranking
To be able to personalize the search result order for a given user,
the application should have a representation of that user in a
profile. While this profile could be created explicitly by the user,
CAM information can help the application learn it from the
user's interaction with the tool. For example, the information
stored in the Select, Publish and Retain Actions of a user can help
determine which objects he or she is interested in, and rank higher
objects that are similar to those.
This work proposes the creation of a fuzzy profile that can
account for the evolving, non-fixed behavior of an
instructor downloading learning objects. Instead of having a crisp
preference for one type of object, this profile provides
different grades of likeness for several characteristics of the
learning object. The profile is constructed from several Fields.
The Fields could be a subset of the fields considered in the LOM
standard, especially the ones that use a vocabulary or represent a
classification. Each Field contains two or more fuzzy sets that
represent the values that the Field can take (from the vocabulary
or the classification values). A user can "prefer", in different
degrees, one or more of the values of a Field. The preference of the
user for each one of the values is calculated based on the number
of objects that the user has downloaded before that contained that
value in the corresponding LOM field. This fuzzy profile has
been derived from research done to produce automatic TV
recordings for PVRs [27] like TiVo.
The fuzzy profile can easily be operationalized to provide a
personalized rank for Learning Objects. First, each field is given
a weighting value (that expresses how important that field is).
That value could be assigned by an expert or calculated
automatically from the entropy of the distribution of the field values for
that user. For example, if a user downloads objects from a wide
variety of topics, the weight of topic as a ranking
measurement is low. Conversely, if the user only downloads objects
in one language, the weight of that field should be high. Second,
each LOM record from the result list is converted to a similar
representation, using the same fields and a preference value of 1.0
for the value found in the metadata. Finally, the object
representation is combined with the profile in order to obtain how
well the object fits the preferences of the user. This operation is
described in the following equation:
PersonalRank(object, user) = Σ_i Σ_j w_i · Field_i(user).value_j · Field_i(object).value_j
where w_i is the weighting value of Field_i.
For example, let us consider a user that has downloaded 20 objects,
16 with topic Computer Science and 4 with topic Physics. Of
those 20, 12 are in English, 4 are in French and 4 are in Spanish.
A fuzzy profile that represents that user could be expressed as:
U1 = {(0.8/ComputerScience + 0.2/Physics),
(0.6/English + 0.2/Spanish + 0.2/French)}
The field weighting terms are 0.9 for Topic and 0.6 for
Language. Let us now consider 2 objects, also represented as
fuzzy sets:
O1 = {(1.0/ComputerScience), (1.0/Spanish)}
O2 = {(1.0/Physics), (1.0/English)}
The calculated rank for both objects is:
O1 = 0.9*0.8 + 0.6*0.2 = 0.84
O2 = 0.9*0.2 + 0.6*0.6 = 0.54
O1 will be ranked higher than O2 as it is more similar to the user
profile.
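A sketch of this computation that reproduces the worked example above; the dictionary-based profile representation is an assumption, not the paper's data model.

```python
# PersonalRank over fuzzy profiles (field weights 0.9 for Topic, 0.6 for Language).
user_profile = {
    "Topic":    {"ComputerScience": 0.8, "Physics": 0.2},
    "Language": {"English": 0.6, "Spanish": 0.2, "French": 0.2},
}
field_weights = {"Topic": 0.9, "Language": 0.6}

def personal_rank(obj, profile, weights):
    """Sum over fields i and values j of w_i * Field_i(user).v_j * Field_i(object).v_j."""
    return sum(weights[f] * pref * obj.get(f, {}).get(value, 0.0)
               for f, prefs in profile.items()
               for value, pref in prefs.items())

o1 = {"Topic": {"ComputerScience": 1.0}, "Language": {"Spanish": 1.0}}
o2 = {"Topic": {"Physics": 1.0}, "Language": {"English": 1.0}}
print(round(personal_rank(o1, user_profile, field_weights), 2))   # 0.84
print(round(personal_rank(o2, user_profile, field_weights), 2))   # 0.54
```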
The personalized calculation could be combined with the
popularity ranking described before to create a better ranking
algorithm, in the same way that Google Personalized Search mixes the
standard popularity measure with information from the user
profile to order the results.
3.4 Contextual Recommending
If CAM is considered not only as a source of historic data,
but also as a continuous stream of contextualized attention
information, we can use very recent CAM (on the order of seconds
or minutes) to generate recommendations based on what the user
is focusing his or her attention on at the moment. For example, if the
user has inserted an object into a Course in a Learning
Management System (LMS), the LMS will generate a CAM
record with contextual information about which object was
inserted and in which lesson of the course. The recommending
system could use that information to present the user with objects
similar to the one inserted, or with objects that have been used in similar
courses, based on the topic of the course or on similarity metrics
like the ones explained in Section 3.2.
The recommending system could also present objects that suit the
application that the user is using at a given moment, based on the
information about the object (LOM record). For example, if the
user is working with the Microsoft PowerPoint authoring tool,
presentations, slides, small texts, images and diagrams will be
recommended. If he or she is working with a SCORM Packager,
complete learning objects will be presented instead.
Contextual recommending techniques have been tried before in
several fields [28] [29]. Blinkx [30] is an example of this kind of
application: it recommends web pages, videos and news based
on the present content of the user's screen. A similar
application could be developed in an LMS, for example, where the
system could recommend to the instructor materials to add to each
lesson, or could recommend to the learner materials similar or
complementary to the ones that the instructor has added
to the course.
APPLICATION IN OTHER CONTEXTS
While the CAM based metrics proposed in this work were
designed for Learning Objects, they could easily be extended or
adapted to work for other kinds of reusable components for which
CAM can be collected. For example, given the exponential
growth of open source software libraries that can be reused inside
software projects, programmers are sometimes overwhelmed by
the amount of available choices. It makes sense to develop some
kind of ranking or recommending system that could help
developers select the right tools.
To construct the ranking application we can use the same methods
proposed for learning objects. The k-partite graph used to
calculate the popularity metric could be constructed using the
metadata information about the library (who is the author of a
software library) and contextual attention information about how
and when programmers interact with the library (which
programmers have downloaded it, in which software projects it
has been used). Most of this information could be obtained from
open source project repositories like SourceForge [31]. The
rationale behind the ranking would be: a library that has been
downloaded more often or at a higher rate is more useful; a library
produced by authors of highly useful libraries could also be
useful; a library re-used in many projects is probably highly
useful. These metrics parallel the ones described for learning
objects.
Recommending systems for software libraries could also be
constructed in a way similar to the ones proposed for learning
objects. For example, we can fold the Libraries-Programmers 2-partite
graph over the Libraries Partition, creating a graph that
relates Programmers to each other based on the Libraries that they
have downloaded or used. Communities could be extracted from the
resulting graph and used to recommend to a
programmer new libraries that other members of his or her
community have used in their projects.
The precaution to take when applying these metrics to other
domains concerns the semantics of the relations that are created in the
graphs. For example, if two learning objects are used in the same
course, those two learning objects must have something in
common (the same topic, for example), while if two libraries are used
inside the same project, that does not mean that the libraries are
related (you could use a database access library and a graphical
interface library inside the same project).
Other contexts where CAM information could be exploited to
rank and recommend elements with a strategy similar to the one
presented in this work are music mixes (component songs or
loops) and news aggregators.
FURTHER WORK
This work is just an introduction to how CAM information can
be used to rank and recommend Learning Objects. Several issues
should be solved before a large-scale application that uses the
proposed metrics can be built:
Collection and Integration of the different CAM sources:
While several applications that generate CAM exist today,
there is no established multi-application CAM repository
that could be used to collect and integrate attention
information.
Combination of different ranking strategies: When
different ranking strategies are combined, weighting
coefficients must be applied. The calculation of those
coefficients is not trivial and should be made using extensive
user feedback.
Critical mass vs. Closed Community: To be useful, the
metrics should be calculated over a significant amount of
CAM data. But if we integrate data from different
communities to obtain a larger amount of CAM (for
example, attention from different LORs), there will probably
not be common objects, users or courses that could be
used to generate relations between the communities.
RELATED WORK
Broisin et al. [32] propose a framework to capture usage
information about Learning Objects from different Learning
Management Systems and Repositories in order to analyze the
usage patterns of the users through a Management Application.
The approach of this paper goes a step further, using the attention
information to calculate metrics that can be used to improve
existing tools. Broisin's work also uses a simplified form of
attention (basically usage information) in a non-standard format,
limiting the possible use of the information by other systems,
because existing applications would have to be reprogrammed to produce
that format. This work proposes the use of an extension of the
AttentionXML standard in order to capture CAM from a
variety of systems that already produce it.
In a related area, digital libraries, Nicholson [33] proposes the
fusion of bibliometric analysis with user-related data mining to
generate a new field of study, bibliomining. His proposal can
be compared with the one presented in this work: using the
information about the book and the usage information generated
by the interaction of the users with the digital library system to
improve the user experience. While Nicholson mentions several
ways in which the attention metadata could be used, he does not
detail any specific metric to improve digital library systems.
CONCLUSIONS
The current immaturity of the tools to search and find Learning
Objects could be overcome if CAM information is stored throughout
the lifecycle of the Learning Object and used to compute metrics
for ranking and recommendation. These metrics provide
a meaningful and automated way in which Learning Objects can
be ranked. This work presented detailed methods to calculate
various metrics and proposed several uses for those metrics. The
proposed calculations could also be applied to rank and
recommend other reusable components for which CAM can
be gathered, as was shown for the case of open source software
libraries.
While the metrics are easy to calculate, and some initial data is
already available, more research is needed to be able to assemble a
large-scale system that can gather the amount of
CAM necessary to render the calculations meaningful.
REFERENCES
[1] Linden, G.; Smith, B. and York, J. Amazon.com
Recommendations: Item-to-Item Collaborative Filtering.
IEEE Internet Computing, 7, 1 (2003), 76-80.
[2] Shepherd, M.; Watters, C. and Marath, A. Adaptive User
Modeling for Filtering Electronic News. In Proceedings of
the 35th Annual Hawaii International Conference on System
Sciences, 2002. HICSS. (2002), 1180- 1188.
[3] James, S. Outfoxed Collaborative Browsing,
http://www.getoutfoxed.com. Retrieved on May, 2006.
[4] Najjar, J., Meire, M. and Duval, E. Attention Metadata
Management: Tracking the use of Learning Objects through
Attention.XML. In Proceedings of World Conference on
Educational Multimedia, Hypermedia and
Telecommunications. (2005). 1157-1161.
[5] Najjar, J., Wolpers, M. and Duval, E. Attention Metadata:
Collection and Management. WWW2006 Workshop on
Logging Traces of Web Activity, Edinburgh, Scotland,
(2006).
[6] AttentionXML, AttentionXML specifications,
http://developers.technorati.com/wiki/attentionxml.
Retrieved on June, 2006
[7] Duval, E. and Hodgins, W., A LOM research agenda. In
Proceedings of WWW2003: Twelfth International World
Wide Web Conference, (2003), 659-667.
[8] Ochoa, X. Learning Object Repositories are Useful, but are
they Usable? In Proceedings of IADIS International
Conference Applied Computing. (2005), 138-144
[9] Duval, E. LearnRank: the Real Quality Measure for
Learning Materials. Policy and Innovation in Education -
Quality Criteria, (2005)
[10] Aizawa, A. An information-theoretic perspective of tfidf
measures. Information Processing and Management, 39,
(2003), 45-65.
[11] ISO/IEC JTC1 SC36. International LOM Survey: Report.
http://mdlet.jtc1sc36.org/doc/SC36_WG4_N0109.pdf
(2004).
[12] Ariadne Foundation. Ariadne Foundation.
http://www.ariadne-eu.org (2005).
[13] Collis, B. and Strijker, A. Technology and Human Issues in
Reusing Learning Objects. Journal of Interactive Media in
Education, 4, (2004).
[14] Verbert, K. Jovanovic, J. Gasevic, D. and Duval, E.
Repurposing Learning Object Components. OTM 2005
Workshop on Ontologies, Semantics and E-Learning, (2005).
[15] IEEE-LOM Editor,
http://www-i5.informatik.rwth-aachen.de/i5new/staff/chatti/LOMEditor/index.html.
Retrieved June 2006.
[16] Cardinels, K., Meire, M., and Duval, E. Automating
metadata generation: the simple indexing interface. In
Proceedings of the 14th WWW conference, (2005), 548-556
[17] Dalziel, J. Implementing Learning Design: The Learning
Activity Management System (LAMS), ASCILITE (2003)
[18] ADL, SCORM Standard, http://www.adlnet.gov/index.cfm,
Retrieved March, 2006
[19] IEEE. IEEE Standard for Learning Object Metadata.
http://ltsc.ieee.org/doc/wg12/ (2002).
[20] Page, L., Brin, S., Motwani, R. and Winograd, T. The
PageRank Citation Ranking: Bringing order to the Web.
Technical Report, Computer Science Department, Stanford
University (1998)
[21] Google Technology, http://www.google.com/technology/.
Retrieved, August 2006.
[22] Radlinski, F. and Joachims, T. Query Chains: Learning to
Rank from Implicit Feedback, Proceedings of the ACM
Conference on Knowledge Discovery and Data Mining.
(2005).
[23] Fan, W., Gordon, M. and Pathak, P. A generic ranking
function discovery framework by genetic programming for
information retrieval. Information Processing and
Management, 40 (2004), 587-602.
[24] Nascimento, M., Sander, J. and Pound, J. Analysis of
SIGMOD's co-authorship graph. ACM SIGMOD Record,
32, 3. (2003). 8-10
[25] Girvan, M. and Newman, M. Community structure in social
and biological networks. Proc. Natl. Acad. Sci. 11. (2002).
[26] Vivisimo Clustering Engine. http://www.vivisimo.com.
Retrieved August 2006.
[27] Pigeau, A., Raschia, G., Gelgon, M., Mouaddib, N. and
Saint-Paul, R. A fuzzy linguistic summarization technique
for TV recommender systems. The 12th IEEE International
Conference on Fuzzy Systems, 2003. FUZZ '03. 1 (2003)
743-748.
[28] Google AdSense, https://www.google.com/adsense/.
Retrieved August 2006.
[29] Fan, W., Gordon, M. and Pathak, P. Incorporating contextual
information in recommender systems using a
multidimensional approach. Information Processing and
Management. 40, 4. (2004). 587-602.
[30] Blinkx Contextual Search. http://www.blinkx.com.
Retrieved August 2006.
[31] Sourceforge, Open Software Repository.
http://www.sourceforge.net. Retrieved August 2006.
[32] Broisin, J., Vidal, P. and Sibilla, M. A Management
Framework Based On A Model Driven Approach For
Tracking User Activities In A Web-Based Learning
Environment. EDMEDIA, (2006) 896-903
[33] Nicholson, S. The basis for bibliomining: Frameworks for
bringing together usage-based data mining and bibliometrics
through data warehousing in digital library services.
Information Processing and Management, 42, 3 (2006), 785-804.
| Learning Objects;Ranking;Attention Metadata;Recommending |
207 | Use of Relative Code Churn Measures to Predict System Defect Density | Software systems evolve over time due to changes in requirements, optimization of code, fixes for security and reliability bugs etc. Code churn, which measures the changes made to a component over a period of time, quantifies the extent of this change. We present a technique for early prediction of system defect density using a set of relative code churn measures that relate the amount of churn to other variables such as component size and the temporal extent of churn. Using statistical regression models, we show that while absolute measures of code churn are poor predictors of defect density, our set of relative measures of code churn is highly predictive of defect density. A case study performed on Windows Server 2003 indicates the validity of the relative code churn measures as early indicators of system defect density. Furthermore, our code churn metric suite is able to discriminate between fault and not fault-prone binaries with an accuracy of 89.0 percent. | INTRODUCTION
A "reliability chasm" often separates the quality of a software
product observed in its pre-release testing in a software
development shop and its post-release use in the field. That is,
true field reliability, as measured by the number of failures found
by customers over a period of time, cannot be measured before a
product has been completed and delivered to a customer. Because
true reliability information is available late in the process,
corrective actions tend to be expensive [3]. Clearly, software
organizations can benefit in many ways from an early warning
system concerning potential post-release defects in their product
to guide corrective actions to the quality of the software.
We use code churn to predict the defect density in software
systems. Code churn is a measure of the amount of code change
taking place within a software unit over time. It is easily extracted
from a system's change history, as recorded automatically by a
version control system. Most version control systems use a file
comparison utility (such as diff) to automatically estimate how
many lines were added, deleted and changed by a programmer to
create a new version of a file from an old version. These
differences are the basis of churn measures.
We create and validate a set of relative code churn measures as
early indicators of system defect density. Relative churn measures
are normalized values of the various measures obtained during the
churn process. Some of the normalization parameters are total
lines of code, file churn, file count etc. Munson et al. [17] use a
similar relative approach towards establishing a baseline while
studying code churn. Studies have shown that absolute measures
like LOC are poor predictors of pre- and post release faults [7] in
industrial software systems. In general, process measures based on
change history have been found to be better indicators of fault rates
than product metrics of code [9]. In an evolving system it is highly
beneficial to use a relative approach to quantify the change in a
system. As we show, these relative measures can be devised to
cross check each other so that the metrics do not provide
conflicting information.
Our basic hypothesis is that code that changes many times pre-release
will likely have more post-release defects than code that
changes less over the same period of time. More precisely, we
address the hypotheses shown in Table 1.
Our experiments on Windows Server 2003 (W2k3) support these
four hypotheses with high statistical significance. We analyzed the
code churn between the release of W2k3 and the release of the
W2k3 Service Pack 1 (W2k3-SP1) to predict the defect density in
W2k3-SP1. The relative code churn measures are statistically
better predictors of defect density than the absolute measures.
They are also indicative of increases in system defect density
and can accurately predict the system defect density with a high
degree of sensitivity. Our metric suite is able to discriminate
between fault and not fault-prone binaries in W2k3-SP1 with an
accuracy of 89.0 percent.
Table 1. Research Hypotheses
H1 | Increase in relative code churn measures is accompanied by an increase in system defect density
H2 | Using relative values of code churn predictors is better than using direct (absolute) values to explain the system defect density
H3 | Relative code churn measures can be used as efficient predictors of system defect density
H4 | Relative code churn measures can be used to discriminate between fault and not fault-prone binaries
The organization of this paper is as follows. Section 2 describes
the related work. Section 3 explains data collection and section 4
the relative code churn measures. Section 5 presents the case
study and the observed results. Section 6 discusses our
conclusions and future work.
RELATED WORK
Prior analyses on predicting defect density used code churn
measures as part of a larger set of metrics. Code churn measures
have not been studied in isolation as predictors of software defect
density. The background work presented below is from studies
that involved industrial software systems. The source code base of
W2k3 is two orders of magnitude larger than the largest example
considered below.
Munson et al. [17] observe that as a system is developed, the
relative complexity of each program module that has been altered
(or churned) also will change. The rate of change in relative
complexity serves as a good index of the rate of fault injection.
They studied a 300 KLOC (thousand lines of code) embedded real
time system with 3700 modules programmed in C. Code churn
metrics were found to be among the most highly correlated with
problem reports [17].
Khoshgoftaar et al.[13] define debug churn as the number of lines
of code added or changed for bug fixes. Their objective was to
identify modules where debug code churn exceeds a threshold, in
order to classify the modules as fault-prone. They studied two
consecutive releases of a large legacy system for
telecommunications. The system contained over 38,000
procedures in 171 modules. Discriminant analysis identified fault-prone
modules based on 16 static software product metrics. Their
model when used on the second release showed a type I and II
misclassification rate of 21.7%, 19.1% respectively and an overall
misclassification rate of 21.0%.
Ohlsson et al. [19] identify fault-prone modules by analyzing
legacy software through successive releases. They use a total of
28 measures, twelve of which are based on size and change
measures. These measures were used to identify 25 percent of the
most fault-prone components successfully.
Karunanithi [12] uses a neural network approach for software
reliability growth modeling in the presence of continuous code
churn, which he shows improves over the traditional time-domain
based models. Similarly Khoshgoftaar et al. [15] use code churn
as a measure of software quality in a program of 225,000 lines of
assembly language. Using eight complexity measures, including
code churn, they found neural networks and multiple regression to
be an efficient predictor of software quality, as measured by gross
change in the code. They suggest that using neural networks may
not work in all environments and the results obtained are
environment specific. Neural networks can be used for improving
software maintenance [15].
Ostrand et al. [20] use information of file status such as new,
changed, unchanged files along with other explanatory variables
such as lines of code, age, prior faults etc. as predictors in a
negative binomial regression equation to predict the number of
faults in a multiple release software system. The predictions made
using binomial regression model were of a high accuracy for
faults found in both early and later stages of development. [20]
Closely related to our study is the work performed by Graves et al.
[9] on predicting fault incidences using software change history.
Several statistical models were built based on a weighted time
damp model using the sum of contributions from all changes to a
module in its history. The most successful model computes the
fault potential by summing contributions from changes to the
module, where large and/or recent changes contribute the most to
fault potential [9]. This is similar to our approach of using relative
measures to predict fault potential.
Drawing general conclusions from empirical studies in software
engineering is difficult because any process depends to a large
degree on a potentially large number of relevant context variables.
For this reason, we cannot assume a priori that the results of a
study generalize beyond the specific environment in which it was
conducted [2]. Researchers become more confident in a theory
when similar findings emerge in different contexts [2]. Towards
this end we intend that our case study contributes towards
strengthening the existing empirical body of knowledge in this
field [7, 9, 13, 15, 17, 19, 20].
DATA COLLECTION
The baseline used for measuring the code churn and other
measures described below is Windows Server 2003 (W2k3). We
measured churn between this baseline and Windows Server 2003
Service Pack 1 (W2k3-SP1). We sometimes refer to W2k3-SP1 as
the "new version" of the code. Service packs are a means by
which product updates are distributed
1
. Service packs contain
updates for system reliability, program compatibility, security, etc.
that are conveniently bundled for easy downloading.
The size of the code base analyzed is 44.97 million LOC (44,970
KLOC). This consisted of 2465 binaries which were compiled
from 96,189 files. Some files contribute to more than one binary.
As defects for W2k3-SP1 are reported at the binary level, we
relate churn to defects at the level of binaries.
The absolute measures and methods of data collection are
described below:
Total LOC is the number of non-commented
executable lines in the files comprising the new version
of a binary. Internal Microsoft tools were used to
compute this measure.
Churned LOC is the sum of the added and changed
lines of code between a baseline version and a new
version of the files comprising a binary.
Deleted LOC is the number of lines of code deleted
between the baseline version and the new version of a
binary. The churned LOC and the deleted LOC are
computed by the version control systems using a file
comparison utility like diff.
File count is the number of files compiled to create a
binary.
Weeks of churn is the cumulative time that a file was
opened for editing from the version control system.
Churn count is the number of changes made to the files
comprising a binary between the two versions (W2k3
and W2k3-SP1).
Files churned is the number of files within the binary
that churned.
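As a rough illustration of how a file comparison utility yields such counts, the sketch below uses Python's difflib on two toy file versions; note that a diff represents a changed line as a paired deletion and addition, so this is a simplification of what the internal tools and the version control system actually compute.

```python
# Count added/changed and deleted lines between two versions of a file.
import difflib

def churn_counts(old_lines, new_lines):
    """Return (added_or_changed, deleted) line counts between two versions."""
    added = deleted = 0
    for line in difflib.unified_diff(old_lines, new_lines, lineterm=""):
        if line.startswith("+") and not line.startswith("+++"):
            added += 1
        elif line.startswith("-") and not line.startswith("---"):
            deleted += 1
    return added, deleted

old = ["int a = 1;", "int b = 2;", "return a + b;"]
new = ["int a = 1;", "int b = 3;", "int c = 4;", "return a + b + c;"]
print(churn_counts(old, new))   # (3, 2)
```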
RELATIVE CODE CHURN MEASURES
In this section we describe our relative code churn measures. The
churn measures are denoted by the elements M1-M8. The
elements and their relationship to defect density are explained
below (these relationships are verified in section 5.1):
M1: Churned LOC / Total LOC. We expect the larger
the proportion of churned (added + changed) code to the
LOC of the new binary, the larger the magnitude of the
defect density for that binary will be.
M2: Deleted LOC / Total LOC. We expect the larger
the proportion of deleted code to the LOC of the new
binary, the larger the magnitude of the defect density for
that binary will be.
M3: Files churned / File count. We expect that the greater
the proportion of files in a binary that get churned, the
greater the probability of these files introducing defects.
For example, suppose binaries A and B contain twenty files
each. If binary A has five churned files and binary B has
two churned files, we expect binary A to have a higher
defect density.
M4: Churn count / Files churned. Suppose binaries A
and B have twenty files each and also have five churned
files each. If the five files in binary A are churned
twenty times and the five files in binary B are churned
ten times, then we expect binary A to have a higher
defect density. M4 acts as a cross check on M3.
M5: Weeks of churn / File count. M5 is used to
account for the temporal extent of churn. A higher value
of M5 indicates that it took a longer time to fix a smaller
number of files. This may indicate that the binary
contains complex files that may be hard to modify
correctly. Thus, we expect that an increase in M5 would
be accompanied by an increase in the defect density of
the related binary.
M6: Lines worked on / Weeks of churn: The measure
"Lines worked on" is the sum of the churned LOC and
the deleted LOC. M6 measures the extent of code churn
over time in order to cross check on M5. Weeks of
churn does not necessarily indicate the amount of churn.
M6 reflects our expectation that the more lines are
worked on, the longer the weeks of churn should be. A
high value of M6 cross checks on M5 and should
predict a higher defect density.
M7: Churned LOC / Deleted LOC. M7 is used in order
to quantify new development. All churn is not due to
bug fixes. In feature development the lines churned is
much greater than the lines deleted, so a high value of
M7 indicates new feature development. M7 acts as a
cross check on M1 and M2, neither of which accurately
predicts new feature development.
M8: Lines worked on / Churn count: We expect that
the larger a change (lines worked on) relative to the
number of changes (churn count), the greater the defect
density will be. M8 acts as a cross check on M3 and
M4, as well as M5 and M6. With respect to M3 and M4,
M8 measures the amount of actual change that took
place, checking that files are not getting churned
repeatedly for small fixes. M8
also cross checks on M5 and M6 to account for the fact
that the higher the value of M8 (more lines per churn),
the higher should be the time (M5) and the lines worked on per
week (M6). If this is not so, then a large amount of
churn might have been performed in a small amount of
time, which can cause an increased defect density.
Figure 1 illustrates the cross check relationships of these relative
code churn measures. As discussed above M1, M2 and M7 cross
check on each other and M8 cross checks on the set of M3, M4
and M5, M6. All these measures triangulate on their respective
dependent measures with the goal of providing the best possible
estimate of defect density with a minimum inflation in the
estimation.
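A sketch that derives M1-M8 from the absolute measures of Section 3 for a single binary; the input numbers are invented for illustration.

```python
# Relative code churn measures computed from the absolute measures of one binary.
def relative_churn_measures(b):
    lines_worked_on = b["churned_loc"] + b["deleted_loc"]
    return {
        "M1": b["churned_loc"] / b["total_loc"],
        "M2": b["deleted_loc"] / b["total_loc"],
        "M3": b["files_churned"] / b["file_count"],
        "M4": b["churn_count"] / b["files_churned"],
        "M5": b["weeks_of_churn"] / b["file_count"],
        "M6": lines_worked_on / b["weeks_of_churn"],
        "M7": b["churned_loc"] / b["deleted_loc"],
        "M8": lines_worked_on / b["churn_count"],
    }

binary = {"total_loc": 20000, "churned_loc": 1200, "deleted_loc": 300,
          "file_count": 40, "files_churned": 10, "churn_count": 25,
          "weeks_of_churn": 8}
print(relative_churn_measures(binary))
```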
CASE STUDY
We now describe the case study performed at Microsoft. Section
5.1 presents the correlation analysis between the relative code
churn measures and system defect density. Section 5.2 details the
model building activities and Section 5.3 the predictive ability of
the models. Section 5.4 discusses the discriminative power of the
relative code churn measures and Section 5.5 the limitations of the
study.
Figure 1. Relative Churn Measure Cross Check Relationships
Table 2. Cross Correlations. All correlations are significant at the 0.01 (99%) level (2-tailed).
             | M1    | M2    | M3    | M4    | M5    | M6    | M7    | M8    | Defects/KLOC
M1           | 1.000 | .834  | .795  | .413  | .707  | .651  | .466  | .588  | .883
M2           |       | 1.000 | .645  | .553  | .747  | .446  | .219  | .492  | .798
M3           |       |       | 1.000 | .186  | .749  | .434  | .445  | .269  | .868
M4           |       |       |       | 1.000 | .531  | .429  | .210  | .631  | .288
M5           |       |       |       |       | 1.000 | .263  | .201  | .390  | .729
M6           |       |       |       |       |       | 1.000 | .701  | .843  | .374
M7           |       |       |       |       |       |       | 1.000 | .507  | .288
M8           |       |       |       |       |       |       |       | 1.000 | .262
Defects/KLOC |       |       |       |       |       |       |       |       | 1.000
As mentioned before, the system defect density for W2k3-SP1
was collected at the level of binaries. That is, for each binary we
have a count of the number of defects assigned to that binary.
Throughout the rest of the paper we assume a statistical
significance at 99% confidence (level of significance α = 0.01).
5.1 Correlation Analysis
Our goal is to verify that with an increase in the code churn
measures (M1-M8) there is a statistically significant increase in
the defects/KLOC. Table 2 shows the Spearman rank correlation
(ρ) among the defects/KLOC and the relative code churn
measures. Spearman rank correlation is a commonly-used robust
correlation technique [8] because it can be applied even when the
association between elements is non-linear.
Table 2 shows that there exists a statistically significant (at 99%
confidence) positive relationship between the measures and the
defects/KLOC (shown in bold). Thus, with an increase in the
relative churn measures there is a corresponding positive increase
in the defects/KLOC. This is indicated by the statistically
significant positive Spearman rank correlation coefficient ρ. From
the above observations we conclude that an increase in relative
code churn measures is accompanied by an increase in system
defect density (H1).
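A sketch of this correlation test with SciPy; the toy arrays stand in for the per-binary values of one measure and of defects/KLOC.

```python
# Spearman rank correlation between a relative churn measure and defects/KLOC.
from scipy.stats import spearmanr

m1           = [0.01, 0.05, 0.20, 0.08, 0.40, 0.02]
defects_kloc = [0.1,  0.4,  1.5,  0.6,  2.9,  0.2]

rho, p_value = spearmanr(m1, defects_kloc)
print(f"rho = {rho:.3f}, p = {p_value:.4f}")   # a significant positive rho supports H1
```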
In order to illustrate the cross checks better consider the measures
M1, M2 and M7 in Figure 2 with their Spearman rank correlation
coefficients from Table 2.
Figure 2: Cross Correlation Relationships
The Spearman correlation coefficient of 0.834 between M1 and
M2 indicates that there is a very strong correlation between the
two measures. But this might not be the case when there is a
higher proportion of churned code compared to deleted code (as
measured by M7 for new feature development). Since this cannot
be measured by M1 or M2, M7 acts as a cross check on them. The
correlation between M1 and M7 (0.466) indicates that when there is a
new feature addition there is a corresponding increase in the
churned code. For M2 and M7 this correlation is not as strong (but
is statistically significant) because there were relatively fewer new
feature additions compared to other changes in the W2k3-SP1
source base.
5.2 Model Fitting
We now compare predictive models built using absolute measures
against those built using the relative churn measures. For the
absolute model, defects/KLOC is the dependent variable and the
predictors are the absolute measures described in Section 3. For
the relative model, defects/KLOC is the dependent variable and
the predictors are the relative measures described in Section 4.
R² is a measure of the variance in the dependent variable that is
accounted for by the model built using the predictors [4]. R² is a
measure of the fit for the given data set (it cannot be interpreted
as the ability of the model to make future predictions). The
adjusted R² measure can also be used to evaluate how well a
model will fit a given data set [5]. Adjusted R² accounts for any
bias in the R² measure by taking into account the degrees of
freedom of the predictor variables and the sample population. The
adjusted R² tends towards the R² measure for large
population samples.
The multiple regression model fit for absolute measures using all
the predictors has an R² value of 0.052 (F=16.922, p<0.0005).
(The F-ratio is used to test the hypothesis that all regression
coefficients are zero.) This is a poor fit of the data, and irrespective
of other transformations (e.g. log) we cannot get a marked
improvement in R². The adjusted R² value for the absolute
measures is 0.049. Throughout the rest of this paper we present the
adjusted R² values in addition to the R² measures in order to
eliminate any bias in model building. With respect to the large
sample size (2465 binaries), however, the adjusted R² and R² values show
only minor variation, not sufficient to drop the R² value
and employ the adjusted R² value.
There are different ways in which regression models [16] can be
built. Three common regression methods [16] are forward,
backward and step-wise regression. In forward regression, one
adds a single predictor at a time to the model based on the strength
of its correlation with the dependent variable. The effect of adding
each predictor is evaluated based on the results of an F-ratio test
[16]. Variables that do not significantly add to the success of the
model are excluded. In backward regression, a model is built
using all the predictors. The weakest predictor variable is removed
and the strength of the overall built model is assessed similar to
the forward regression procedure. If this significantly weakens the
model then the predictor is put back (and otherwise removed).
Step-wise regression [16] is the more robust technique of these
methods. The initial model consists of the predictor having the
single largest correlation with the dependent variable.
Subsequently, new predictors are selected for addition into the
model based on their partial correlation with the predictors already
in the model. With each new set of predictors, the model is
evaluated and predictors that do not significantly contribute
towards statistical significance in terms of the F-ratio are removed
so that, in the end, the best set of predictors explaining the
maximum possible variance is left.
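A simplified forward-selection sketch in the spirit of this procedure, using statsmodels on synthetic data; selecting predictors by adjusted R² instead of the partial F-test is a simplification made to keep the example short.

```python
# Greedy forward selection: add the predictor that most improves adjusted R-squared.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 200
X = rng.random((n, 4))                                   # toy stand-ins for churn measures
y = 3 * X[:, 0] + 2 * X[:, 1] + rng.normal(0, 0.1, n)    # toy defects/KLOC

def forward_stepwise(X, y):
    selected, remaining = [], list(range(X.shape[1]))
    best_adj_r2 = -np.inf
    while remaining:
        scores = []
        for j in remaining:
            cols = selected + [j]
            fit = sm.OLS(y, sm.add_constant(X[:, cols])).fit()
            scores.append((fit.rsquared_adj, j))
        adj_r2, j = max(scores)
        if adj_r2 <= best_adj_r2:
            break
        best_adj_r2, selected = adj_r2, selected + [j]
        remaining.remove(j)
    return selected, best_adj_r2

print(forward_stepwise(X, y))   # e.g. ([0, 1], ~0.99)
```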
A step-wise regression analysis using the absolute set of
predictors does not lead to any significant change in the R² values
(R² = 0.051, adjusted R² = 0.050). Only the LOC and the number of
times a file is churned are kept as predictors. This further confirms
that using the absolute measures is not an appropriate
method for assessing the system defect density.
Several empirical studies use Principal Component Analysis
(PCA) [10] to build regression models [6]. In PCA a smaller
number of uncorrelated linear combinations of metrics, which
account for as much sample variance as possible, are selected for
use in regression. PCA is not a possible solution when using the
absolute measures because the correlation matrix is not positive
definite. We still use the two principal components generated to
build a multiple regression equation. The multiple regression
equation constructed has an even lower value of R² = 0.026
(F=33.279, p<0.0005).
Based on the three results discussed above (multiple regression
using all the predictors, step-wise regression and PCA) we
conclude that the absolute measures are not good predictors of
system defect density.
As outlined in Section 4 we calculate the relative code churn
measures (M1-M8) and build regression models using all the
measures, step-wise regression and PCA. Table 3 shows the R²
value of the regression equation built using all the measures. We
also present the adjusted R² value and the root MSE (Mean
Squared Error).
Table 3. Regression Fit Using All Measures
Model        | R²   | Adjusted R² | Root MSE
All Measures | .811 | .811        | 1.301215
Table 4 shows how the R² value changes in step-wise regression
for all the models built during that process. In the step-wise
regression model the measure M7 is dropped. The best R² value in
Table 4 (without M7) is the same as that of Table 3 (.811) but
there is a change in the third decimal place of the standard error of
the estimate. M7 probably was dropped because there were
relatively fewer new feature additions compared to other changes
in the W2k3-SP1 source base. The adjusted R² values are also
shown but are not significantly different from the R² values due to
the large sample size used to build the models.
Table 4. Step-wise Regression Models
Model | R-Square | Adjusted R-Square | Root MSE
(a)   | .592     | .592              | 1.908727
(b)   | .685     | .685              | 1.677762
(c)   | .769     | .769              | 1.437246
(d)   | .802     | .801              | 1.331717
(e)   | .808     | .807              | 1.312777
(f)   | .810     | .809              | 1.305817
(g)   | .811     | .811              | 1.300985
a Predictors: (Constant), M2
b Predictors: (Constant), M2, M3
c Predictors: (Constant), M2, M3, M8
d Predictors: (Constant), M2, M3, M8, M1
e Predictors: (Constant), M2, M3, M8, M1, M6
f Predictors: (Constant), M2, M3, M8, M1, M6, M5
g Predictors: (Constant), M2, M3, M8, M1, M6, M5, M4
The PCA of the eight relative code churn measures yields three
principal components. PCA can account for the multicollinearity
among the measures, which can lead to inflated variance in the
estimation of the defect density.
But for PCA to be applicable the KMO (Kaiser-Meyer-Olkin)
measure [11] of sampling adequacy should be greater than 0.6 [4].
The KMO measure of sampling adequacy is a test of the amount
of variance within the data that can be explained by the measures.
The KMO measure of the eight relative code churn measures is
0.594, which indicates that PCA might not be an appropriate
method to apply.
We still perform the analysis to investigate and present those
results as well on a comparative basis. The results for all three
models are summarized in Table 5.
Table 5. Relative Measures Model Fits
Model                | R²    | Adjusted R² | F-Test sig.
All measures         | 0.811 | 0.811       | 1318.44 (p<0.0005)
Step-wise regression | 0.811 | 0.811       | 1507.31 (p<0.0005)
PCA                  | 0.749 | 0.748       | 2450.89 (p<0.0005)
From the above results we can see that using relative values of
code churn predictors is better than using absolute values to
explain the system defect density (H2).
Figure 3: Actual vs. Estimated System Defect Density
5.3 Defect Density Prediction
We use the technique of data splitting [18] to measure the ability
of the relative code churn measures to predict system defect
density. The data splitting technique was employed to get an
independent assessment of how well the defect density can be
estimated from a population sample. We randomly select two
thirds of the binaries (1645) to build the prediction model and use
the remaining one third (820) to verify the prediction accuracy.
We constructed models using all the measures, step-wise
regression and PCA (for purpose of completeness). Table 6 shows
the results for these models.
Table 6. Regression Data Fit
Model                             | R²    | Adjusted R² | F-Test sig.
All measures                      | 0.821 | 0.820       | 938.304 (p<0.0005)
Step-wise regression (M7 dropped) | 0.821 | 0.820       | 1072.975 (p<0.0005)
PCA                               | 0.762 | 0.761       | 1749.113 (p<0.0005)
Using the fitted regression equation we estimate the system defect
density for the remaining 820 binaries. Figure 3 shows the
estimated and actual defect density using the regression equation
constructed using all the measures (sorted by estimated defect
density). The estimated defect density is shown by the thicker
continuous line. From the graph we can see that the estimated
defect density is similar to the actual defect density. The axes on
the graphs are removed in order to protect proprietary data.
To quantify the sensitivity of prediction, we run a correlation
analysis between the estimated and actual values. A high positive
correlation coefficient indicates that with an increase in the actual
defect density there is a corresponding positive increase in the
estimated defect density. We perform Pearson and Spearman
correlations to indicate their sensitivity. The Pearson correlation
indicates a linear relationship. The Spearman correlation is a more
robust correlation technique.
Table 7 shows that the correlations are all positive and statistically
significant. The magnitude of the correlations indicates the
sensitivity of the predictions (the stronger the correlations the
more sensitive are the predictions). The models built using all the
measures and the step-wise method have the same sensitivity and
are better than the model built using PCA.
Table 7. Correlation Results
Model                | Pearson (sig.)   | Spearman (sig.)
All measures         | 0.889 (p<0.0005) | 0.929 (p<0.0005)
Step-wise regression | 0.889 (p<0.0005) | 0.929 (p<0.0005)
PCA                  | 0.849 (p<0.0005) | 0.826 (p<0.0005)
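A sketch of the data-splitting check on synthetic data, using scikit-learn and SciPy; the 1645/820 split sizes mirror the study, everything else is invented for illustration.

```python
# Fit on two thirds of the binaries, predict defects/KLOC for the held-out third,
# and measure sensitivity with Pearson and Spearman correlations.
import numpy as np
from scipy.stats import pearsonr, spearmanr
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 2465
X = rng.random((n, 8))                                   # toy M1..M8 values
y = X @ np.array([3, 2, 4, 0.5, 1, 0.5, 0.2, 1]) + rng.normal(0, 0.2, n)

X_fit, X_hold, y_fit, y_hold = train_test_split(X, y, test_size=820, random_state=7)
model = LinearRegression().fit(X_fit, y_fit)
y_est = model.predict(X_hold)

print("Pearson :", pearsonr(y_hold, y_est)[0])
print("Spearman:", spearmanr(y_hold, y_est)[0])
```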
Analyses that are based on a single dataset and use the same data
both to estimate the model and to assess its performance can lead
to unreasonably negatively biased estimates of sampling variability.
In order to address this we repeat the random sampling with 3
different random samples to verify whether the above results are
repeatable. For each sample the model is fit using 1645 randomly
selected binaries. Table 8 shows the fit of the various models built
for each sample.
Table 8. Random Splits Data Fit
Model                          | R²    | Adjusted R² | F-Test (Sig.)
Random 1: All                  | 0.836 | 0.835       | 1045.07 (p<0.0005)
Random 1: Stepwise (drop none) | 0.836 | 0.835       | 1045.07 (p<0.0005)
Random 1: PCA                  | 0.757 | 0.756       | 1701.98 (p<0.0005)
Random 2: All                  | 0.822 | 0.821       | 941.86 (p<0.0005)
Random 2: Stepwise (drop M4)   | 0.821 | 0.820       | 1074.05 (p<0.0005)
Random 2: PCA                  | 0.765 | 0.764       | 1776.87 (p<0.0005)
Random 3: All                  | 0.799 | 0.798       | 813.12 (p<0.0005)
Random 3: Stepwise (drop M7)   | 0.799 | 0.798       | 927.54 (p<0.0005)
Random 3: PCA                  | 0.737 | 0.736       | 1529.25 (p<0.0005)
Using each of the above predictive models we calculate the
estimated defect density for the remaining 820 binaries. Table 9
shows the correlation between the estimated and the actual defect
density.
Table 9. Correlation Between Actual and Estimated Defects/KLOC
Model              | Pearson Correlation (sig.) | Spearman Correlation (sig.)
Random 1: All      | 0.873 (p<0.0005)           | 0.931 (p<0.0005)
Random 1: Stepwise | 0.873 (p<0.0005)           | 0.931 (p<0.0005)
Random 1: PCA      | 0.858 (p<0.0005)           | 0.836 (p<0.0005)
Random 2: All      | 0.878 (p<0.0005)           | 0.917 (p<0.0005)
Random 2: Stepwise | 0.876 (p<0.0005)           | 0.906 (p<0.0005)
Random 2: PCA      | 0.847 (p<0.0005)           | 0.825 (p<0.0005)
Random 3: All      | 0.899 (p<0.0005)           | 0.892 (p<0.0005)
Random 3: Stepwise | 0.901 (p<0.0005)           | 0.893 (p<0.0005)
Random 3: PCA      | 0.880 (p<0.0005)           | 0.818 (p<0.0005)
Based on the consistently positive and statistically significant
correlations in Table 9, indicating the sensitivity of the predictions,
we can say that relative code churn measures can be used
as efficient predictors of system defect density (H3).
Our results demonstrate that it is effective to use all eight measures
rather than dropping any of them from the predictive equation.
Each of these measures cross checks on the others, and any
abnormal behavior in one of the measures (e.g. a file
getting churned too many times) would be immediately
highlighted.
By interchanging the measures in a model equation we can get
estimated values for all the relative measures independently. For
example, in order to determine the maximum allowable code
churn with respect to the file size (i.e. M1) for a particular
software model, we fix the maximum allowable system defect
density. We can then build a regression model with M2-M8 and
defect density as predictors and M1 as the dependent variable.
5.4 Discriminant Analysis
Discriminant analysis is a statistical technique used to categorize
programs into groups based on the metric values. It has been used
as a tool for the detection of fault-prone programs [13, 14, 18].
The ANSI-IEEE Std. [1] defines a fault as an accidental condition
that causes a functional unit to fail to perform its required
function. We use discriminant analysis to identify binaries as
fault-prone or not fault-prone. To classify if a binary is fault-prone
or not we use the system defect density in a normal
confidence interval calculation as shown in equation 1.
LB = x̄ − z_{α/2} · (standard deviation of defect density) / √n    (1)
where
LB is the lower bound on system defect density;
x̄ is the mean of defect density;
z_{α/2} is the upper α/2 quantile of the standard normal distribution;
n is the number of observations.
We conservatively classify all binaries that have a defect density
less than LB as not fault-prone and the remaining as fault-prone.
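A sketch of this classification rule, implementing equation 1 directly (the data layout and function name are assumptions):

    import numpy as np
    from scipy.stats import norm

    def classify_fault_prone(defect_density, alpha=0.05):
        dd = np.asarray(defect_density)
        n = len(dd)
        z = norm.ppf(1 - alpha / 2)                       # upper alpha/2 quantile
        lb = dd.mean() - z * dd.std(ddof=1) / np.sqrt(n)  # lower bound, equation (1)
        # Binaries below LB are conservatively labelled not fault-prone.
        return np.where(dd < lb, "not fault-prone", "fault-prone")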
Table 10 shows the eigenvalue and overall classification ability of using the eight measures and the three principal components. The eigenvalue is a measure of the discriminative ability of the discriminant function: the higher the eigenvalue, the better the discriminative ability. For all measures, the function correctly classifies nearly nine out of every ten binaries.
Table 10. Overall Discriminant Function Fit
Model          Eigenvalue   Classification ability
All Measures   1.025        2188/2465 (88.8%)
PCA            0.624        2195/2465 (89.0%)
As before, we split the data set into 1645 programs to build the
discriminant function and the remaining 820 binaries to verify the
classification ability of the discriminant function. We perform this
analysis using all the measures and the principal components. The
results of this fit and classification are shown below in table 11.
Table 11. Discriminant Analysis
Model          Eigenvalue (fit)   Classification ability (fit, 1645 binaries)   Classification ability (test, 820 binaries)
All Measures   1.063              1464/1645 (90.0%)                             735/820 (89.6%)
PCA            0.601              1461/1645 (88.8%)                             739/820 (90.1%)
Table 11 shows that the relative code churn measures have effective discriminant ability (comparable to prior studies done on industrial software [13]). We conclude that relative code churn measures can be used to discriminate between fault-prone and not fault-prone binaries (H4).
5.5 Limitations of Study
Internal validity. Internal validity issues arise when there are
errors in measurement. This is negated to an extent by the fact that
the entire data collection process is automated via the version
control systems. However, the version control systems only record data upon developer check-out or check-in of files. If a developer made many overlapping edits to a file in a single check-out/check-in period then a certain amount of churn will not be visible. A developer also might have a file checked out for a very long period of time during which few changes were made, inflating the "weeks of churn" measure.
These concerns are alleviated to some extent by the cross check
among the measures to identify abnormal values for any of the
measures and the huge size and diversity of our dataset.
In our case study we provide evidence for using all the relative
churn measures rather than a subset of values or principal
components. This is case study specific and should be refined
based on further results.
External validity. External validity issues may arise from the fact
that all the data is from one software system (albeit one with many
different components) and that the software is very large (some 44
million lines of code) as other software systems used for a similar
analysis might not be of comparable size.
CONCLUSIONS AND FUTURE WORK
We have shown how relative code churn metrics are excellent
predictors of defect density in a large industrial software system.
Our case study provides strong support for the following
conclusions:
Increase in relative code churn measures is accompanied
by an increase in system defect density;
Using relative values of code churn predictors is better
than using absolute values to explain the system defect
density;
Relative code churn measures can be used as efficient
predictors of system defect density; and
Relative code churn measures can be used to
discriminate between fault and not fault-prone binaries.
We plan to validate our approach on other products developed
inside Microsoft like SQL Server and Office. We also plan to
develop standards for all the measures to provide guidance to the
developers on the maximum allowable change. We also plan to
investigate how testing can more effectively be directed towards
churned code.
ACKNOWLEDGEMENTS
We would like to express our appreciation to Brendan Murphy of
Microsoft Research for providing the Windows Server 2003 SP1
data set. We would like to thank Madan Musuvathi of Microsoft
Research, for critical feedback on the relative churn measures. We
would like to thank Jim Larus of Microsoft Research, Laurie
Williams, Jason Osborne of North Carolina State University for
reviewing initial drafts of this paper and the anonymous referees
for their thoughtful comments on an earlier draft of this paper.
REFERENCES
[1] ANSI/IEEE, "IEEE Standard Glossary of Software
Engineering Terminology, Standard 729," 1983.
[2] Basili, V., Shull, F., Lanubile, F., "Building Knowledge through Families of Experiments," IEEE Transactions on Software Engineering, Vol. 25, No. 4, 1999.
[3] Boehm, B. W., Software Engineering Economics.
Englewood Cliffs, NJ: Prentice-Hall, Inc., 1981.
[4]
Brace, N., Kemp, R., Snelgar, R., SPSS for Psychologists:
Palgrave Macmillan, 2003.
[5]
Brito e Abreu, F., Melo, W., "Evaluating the Impact of
Object-Oriented Design on Software Quality," Proceedings
of Third International Software Metrics Symposium, 1996,
pp. 90-99.
[6]
Denaro, G., Pezze, M., "An Empirical Evaluation of Fault-Proneness
Models," Proceedings of International
Conference on Software Engineering, 2002, pp. 241 - 251.
[7]
Fenton, N. E., Ohlsson, N., "Quantitative analysis of faults
and failures in a complex software system," IEEE
Transactions on Software Engineering, Vol. 26, No. 8, pp.
797-814, 2000.
[8]
Fenton, N. E., Pfleeger, S.L., Software Metrics. Boston,
MA: International Thompson Publishing, 1997.
[9]
Graves, T. L., Karr, A.F., Marron, J.S., Siy, H., "Predicting
Fault Incidence Using Software Change History," IEEE
Transactions on Software Engineering, Vol. 26, No. 7, pp.
653-661, 2000.
[10] Jackson, E. J., A User's Guide to Principal Components:
John Wiley & Sons, Inc., 1991.
[11] Kaiser, H. F., "An Index of Factorial Simplicity,"
Psychometrika, Vol. 39, pp. 31-36, 1974.
[12] Karunanithi, N., "A Neural Network approach for Software
Reliability Growth Modeling in the Presence of Code
Churn," Proceedings of International Symposium on
Software Reliability Engineering, 1993, pp. 310-317.
[13] Khoshgoftaar, T. M., Allen, E.B., Goel, N., Nandi, A.,
McMullan, J., "Detection of Software Modules with high
Debug Code Churn in a very large Legacy System,"
Proceedings of International Symposium on Software
Reliability Engineering, 1996, pp. 364-371.
[14] Khoshgoftaar, T. M., Allen, E.B., Kalaichelvan, K.S.,
Goel, N., Hudepohl, J.P., Mayrand, J., "Detection of fault-prone
program modules in a very large telecommunications
system," Proceedings of International Symposium Software
Reliability Engineering, 1995, pp. 24-33.
[15] Khoshgoftaar, T. M., Szabo, R.M., "Improving Code
Churn Predictions During the System Test and
Maintenance Phases," Proceedings of IEEE International
Conference on Software Maintainence, 1994, pp. 58-67.
[16] Kleinbaum, D. G., Kupper, L.L., Muller, K.E., Applied
Regression Analysis and Other Multivariable Methods.
Boston: PWS-KENT Publishing Company, 1987.
[17] Munson, J. C., Elbaum, S., "Code Churn: A Measure for
Estimating the Impact of Code Change," Proceedings of
IEEE International Conference on Software Maintenance,
1998, pp. 24-31.
[18] Munson, J. C., Khoshgoftaar, T.M., "The Detection of
Fault-Prone Programs," IEEE Transactions on Software
Engineering, Vol. 18, No. 5, pp. 423-433, 1992.
[19] Ohlsson, M. C., von Mayrhauser, A., McGuire, B., Wohlin,
C., "Code Decay Analysis of Legacy Software through
Successive Releases," Proceedings of IEEE Aerospace
Conference, 1999, pp. 69-81.
[20] Ostrand, T. J., Weyuker, E.J, Bell, R.M., "Where the Bugs
Are," Proceedings of the 2004 ACM SIGSOFT
International Symposium on Software Testing and
Analysis (ISSTA), 2004, pp. 86-96.
| principal component analysis;Relative code churn;defect density;fault-proneness;multiple regression
208 | Using Case-Based Reasoning in Traffic Pattern Recognition for Best Resource Management in 3G Networks | With the underlying W-CDMA technique in 3G networks, resource management is a very significant issue as it can directly influence the system capacity and also lead to system QoS. However, the resource can be dynamically managed in order to maintain the QoS according to the SLA. In this paper, CBR is used as part of an intelligent-based agent management system. It uses information from previously managed situations to maintain the QoS in order to meet the SLA. The results illustrate the performance of an agent in traffic pattern recognition in order to identify the specific type of problem and finally propose the right solution. | INTRODUCTION
The third generation (3G) cellular system has been developed to
satisfy increasing customer demands for higher bit-rate access in
order to provide wireless Internet access anytime and anywhere. In
addition, 3G networks will integrate different type of services like
voice, data, and video.
With W-CDMA, all users share the same spectrum and use codes
to identify themselves. Hence the whole bandwidth can be reused
in every cell. The system is considered a soft capacity system as all
users simultaneously transmit so increasing the interference seen by
others. The system capacity is, therefore, limited by the total
interference that occurs from other users (in the case of the network
being uplink-capacity limited) or other base stations (in the case of
the network being downlink-capacity limited) and the background
noise. The benefit of this technique is therefore providing the
flexible, higher bandwidth services, and maintaining the best
system capacity. On the other hand, it leads to more complexity in
resource management.
Previous work [1] introduced the use of intelligent agents in
managing the resources to meet the service level agreement (SLA)
when congestion occurs. It shows that by using intelligent agents
together with the assignment and admission scheme, the system
environment can be monitored and the policy that is suitable for
that particular situation will be selected and applied to the system.
Also the quality of service (QoS) for each particular class of
customer can be monitored and controlled according to the SLA. In
[2], Case-Based Reasoning (CBR) is introduced as a means of
giving the agent more "intelligence". The aim of using CBR is so
that the problem can be automatically solved by referring to a
similar traffic pattern that the system has seen before and kept in
the case library. The end solution from the previous case can then
be applied immediately to give a fast and efficient response. In this
paper, a wider range of traffic situations will be illustrated, which
will also show the benefit of using CBR in order to identify
different traffic patterns and to propose the best solution. In
addition, the results show how the system can flexibly give different priority patterns to customers according to the system requirements.
The paper is organised as follows. The agent system and
architecture for the multi-agent system are described in section 2.
In section 3, the implementation of CBR in SLA-based control by
the agent will be introduced. The assignment and admission
scheme is presented in section 4 and section 5 covers the simulation
model. Traffic pattern recognition and numerical results are
illustrated and discussed in section 6. Lastly, the conclusions of the
paper are in section 7.
AGENT SYSTEM AND ARCHITECTURE
Critical in a radio network is the allocation of bandwidth to radio
cells in order to avoid local congestion or degradation of the QoS
and it is generally the capacity of the wireless link to the user that
limits the overall system capacity, rather than any back-haul part of
the network.
In [3], an agent approach for a distributed resource management
system is introduced. The main reason for using intelligent agents
is to give greater autonomy to the base stations; this gives an
increase in flexibility to deal with new situations in traffic load and
to decrease the information load (the messaging resulting from
taking, or determining control actions) on the network.
In the past, mobile network operators have generally restricted the
customer to only one service provider. With the influence of the
Internet, more widespread choice of service providers (SPs) will be
available to 3G users. By using an agent, it would be possible to
allow selection of SP by offering on price, QoS, or value added
service.
In this work, each agent uses three layers taking action and
decisions on different timescales: reactive, local planning and cooperative
planning.
As an individual connection must have the decision made in real-time
, the reactive layer is designed for a very fast response. More
complex functions have been implemented at the planning layers.
Generally the local planning layer is concerned with long-term
actions within its own instance, whereas the co-operative layer is
concerned with long-term actions between peer agents, or with
other types of agent. The reactive layer is, therefore very simple,
implementing policies being passed down by the higher layer. This
is discussed in more detail later in the paper.
CBR IN SLA-BASED CONTROL
CBR is an Artificial intelligence (AI) approach that can allow the
agent to learn from past successes. It is a method that finds the
solution to the new problem by analysing previously solved
problems, called cases, or adapting old solutions to meet new
demands.
Figure 1 Case-based reasoning process model
(Based on the CBR cycle in [4])
Figure 1 shows the process model of the case-based reasoning. The
process of CBR starts when there is a new problem or new case
happening. The first step is case retrieval, which uses the
characterizing indexes of the event to find the best-match solved
case(s) from the case library. The solution from the retrieved
case(s) will be reused.
However, the solution might need to be modified to fit the new
situation as the new situation will rarely match the old one exactly:
this step is called "revising". Once the new solution is proposed,
the next step is to test it with the real environment. The result is
either success or failure. If the solution fails, a monitoring process
will analyse the failure, repair the working solution, and test again.
If the solution succeeds, this new solution will be indexed and
retained in the case library to use for future problem solving.
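A compact sketch of the retrieve-reuse-revise-retain loop described above; the case representation, the similarity measure and the evaluation call are placeholders, not the system's actual implementation:

    def cbr_cycle(new_problem, case_library, similarity, adapt, evaluate):
        # Retrieve: best-matching solved case from the library.
        best_case = max(case_library, key=lambda c: similarity(new_problem, c))
        # Reuse / revise: adapt the old solution to the new situation.
        solution = adapt(best_case["solution"], new_problem)
        # Test the proposed solution against the (simulated) environment.
        if evaluate(solution, new_problem):
            # Retain: index and store the confirmed solution for future use.
            case_library.append({"problem": new_problem, "solution": solution})
            return solution
        return None  # a failed solution is analysed and repaired elsewhere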
The work shown in [5] gives an example of using CBR in network
traffic control by using it to control traffic flow in the standard
public switched telephone network of the Ile de France. In another
work in [6], CBR is used to correct the error estimation of the
required bandwidth computed by conventional connection
admission control schemes.
In the work described in this paper, CBR is used to recognise traffic
patterns as congestion occurs in a 3G network and to define the
policies to respond to that congestion in the reactive layer of the
resource agent. Congestion here means the situation where the system cannot maintain the QoS required by the SLA (this is explained in more detail in section 5).
3.2 Resource agent
In this work the resource agent is the focus of attention as it is an
important agent in managing the resource within the network. The
architecture of the resource agent is illustrated in figure 2.
Figure 2 Resource agent internal architecture
The reactive layer is designed to be fast, performing the same
function that would be in a conventional RNC (Radio Network
Controller), assigning the connection a Node B, and performing
CAC (Connection Admission Control) but it does this according to
policies assigned by the planning layer.
The connection request (containing information about the service
provider, QoS, type of connection) is first considered for
assignment to a Node B using an algorithm or set of rules passed
down from the planning layer. As a result, the system performance
can be monitored at all times. Any congestion occurring can be
detected and reported to the planning layer, which will then find the
best solution using the CBR approach in order to maintain the SLA.
ASSIGNMENT AND ADMISSION SCHEME
Assignment and admission control together determine which base
station will have power control over a mobile, which means that
base station must have available bandwidth to support the new call,
and also must make sure that none of the existing connections will
be dropped.
A great deal of work has been done in this area. In [7], a
comparison is made between a transmitted power-based call
admission control (TPCAC) that protects the ongoing calls and a
received power-based call admission control (RPCAC) that blocks
new calls when the total received power at a base station exceeds a
threshold. The result shows that the RPCAC scheme is found to
offer significant performance benefits. In [8], the number-based
CAC and interference-based CAC are compared. SIR-based CAC
(signal-to-interference based CAC) has been proposed in [9], the
benefit being an improvement in system performance at traffic hot-spots.
In this paper, a combination of the ideal scheme and SIR-based CAC has been chosen with uplink capacity limitation (which means the signal-to-interference ratio of the received signal from mobile to base station is calculated). As for the ideal scheme, the system has to make sure none of the existing connections will be dropped when accepting a new connection request. Hence, two perfect power control loops are run to verify that the new request can really be accepted; otherwise it would be blocked or put into the buffer.
The admission process is as follows:
- With the new connection request, the new mobile's
transmitted power is estimated in order to get the target SIR.
(the open-loop power control in section 5.4.1)
- If the estimated transmitted power is in the accepted range, it
means the new mobile can make a connection. Otherwise, it
will be blocked or held in the buffer.
- Set up the new connection and perform the first perfect power
control loop. With this, the new transmitted power that is
supposed to give each connection the target SIR can be
determined.
- The second perfect power control loop is performed to achieve
the actual SIR for each connection as a result of accepting a
new connection request.
- If any existing connection would be dropped (by having SIR
less than the threshold), the new connection is still rejected
otherwise it is accepted and the connection can be made.
The rejected connection request will be put into a queue until the
next calculation or new call arrival and it will be blocked at the
expiry of a timer: setting the timer to zero means that a request is
immediately accepted or rejected. Furthermore, the base station
serving the mobile can be reassigned at anytime during the
connection if the current base station cannot provide the required
link quality.
SIMULATION MODEL
The simulation model has been implemented in MatLab. The
system used for the results in this paper consists of 9 hexagonal
cells (25 cells have been used for other work but the large model
suffers from an excessively long run time) and each cell has its own
base station with an omni-directional antenna placed at the centre
of the cell. A number of mobiles have been generated randomly
according to the input traffic. When considering different classes of
user, it is quite common to use three classes: bronze, silver, and
gold. In the results described here, 50% of the users are bronze,
30% are silver and 20% are gold. It is assumed that the gold
customers will pay the highest service charge followed by silver
and bronze customers, so that the gold customer is paying for the
best service and more flexibility than the others.
5.1 Radio Propagation Model
In cellular systems, radio propagation is crucially influenced by the
path loss according to the distance, log-normal shadowing, and
multipath fading. The relationship between the transmitted power
and received power can be expressed as [9].
P(r) = P0 · r^(−η) · 10^(ζ/10)    (1)
where P(r) is the received power, P0 is the transmitted power, r is the distance from the base station to the mobile, ζ in decibels has a normal distribution with zero mean and standard deviation σ (typical value of 8 dB), and η represents the path-loss exponent (typical values of η in a cellular environment are 2.7-4.0).
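A sketch of equation 1 in code, using the symbols above; the default exponent is an assumption within the stated 2.7-4.0 range:

    import numpy as np

    def received_power(p0, r, eta=3.5, sigma_db=8.0, rng=None):
        rng = rng or np.random.default_rng()
        zeta_db = rng.normal(0.0, sigma_db)             # log-normal shadowing in dB
        return p0 * r ** (-eta) * 10 ** (zeta_db / 10.0)  # equation (1)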
5.2 Traffic Model
The model consists of two traffic types, voice and video. The
model has been simplified from the three-type traffic model that
also included data traffic, which was used in [2]. The reason in
simplifying the traffic model is because modelling data traffic to
the level of packet results in unrealistically long simulation times.
5.2.1 Voice traffic
Voice traffic is considered to be real-time traffic. The common
model for a single voice source is illustrated by the ON-OFF
process. It consists of two stages, active (ON) and silent (OFF)
stage, with a transition rate
from ON to OFF and from OFF to
ON stage.
Figure 3 illustrates the ON-OFF model. The silent period is
assumed to be the period that cannot be used to transmit data
message or voice call.
Figure 3 Traffic model for voice call (ON-OFF model)
To simplify the simulation, the approach of [9] is used, with an activity factor of 0.45. The transmission rate for voice traffic is assumed to be 8 kbit/s and the mean holding time is 180 seconds.
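A sketch of such an ON-OFF source, with exponential sojourn times in each state; the mean durations are parameters to be supplied, not values from the paper:

    import numpy as np

    def on_off_source(duration_s, mean_on_s, mean_off_s, rng=None):
        rng = rng or np.random.default_rng()
        t, active, periods = 0.0, False, []
        while t < duration_s:
            mean = mean_on_s if active else mean_off_s
            stay = rng.exponential(mean)                 # exponential sojourn time
            periods.append((t, min(t + stay, duration_s), active))
            t, active = t + stay, not active
        return periods   # list of (start, end, is_active) intervals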
5.2.2 Video traffic
Video traffic is also considered as real-time traffic. The common
model for video source is illustrated by the discrete-state
continuous time Markov process illustrated in figure 4.
The bit rate of video traffic is quantized into finite discrete levels (0, A, 2A, ..., MA). Transitions between levels occur with exponential transition rates that may depend on the current level [11].
Figure 4 Video source model (Discrete-state continuous time
Markov process)
The transition rates α and β are obtained by:
α = 3.9 / (1 + 5.04458 · (N/M))    (2)
β = 3.9    (3)
where N is the number of aggregated video sources (typical assumption 1) and M is the number of quantization levels (typical assumption 8).
Implementing this video traffic model in the simulation causes simulation times to be very long. Many authors therefore simplify this by using an activity factor [9][12]. Here, an activity factor of 1 has been assumed for the real-time video source, as used in [9]. The transmission rate for video traffic is assumed to be 64, 144, or 384 kbit/s and the mean holding time is 300 seconds.
5.3 Receiver Model
For the uplink-capacity-limited case, the SIR of each transmission is calculated at the base station and can be expressed as follows (based on [13]):
SIR = (W/R) · Pr / (I_intra + I_inter + N_thermal)    (4)
where (W/R) is the processing gain, Pr is the received signal strength, I_intra is the sum of the received signal powers of other transmissions within the same cell, I_inter is the sum of the received signal powers from the other cells, and N_thermal is the thermal noise power.
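Equation 4 translates directly into code; this sketch assumes all inputs are given in linear (not dB) units:

    def uplink_sir(pr, i_intra, i_inter, n_thermal, chip_rate, bit_rate):
        processing_gain = chip_rate / bit_rate           # W / R
        return processing_gain * pr / (i_intra + i_inter + n_thermal)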
5.4 Power Control Model
Power control is the crucial part in the system since it is necessary
to minimise the interference in the system by minimising the level
of transmitted power to the optimum level, which means just
enough to maintain the link quality. Power control in UMTS
consists of three main functions: (i) open-loop power control, (ii)
inner-loop power control, and (iii) outer-loop power control [14].
As the simulation focuses on the uplink-limited capacity, power
control for the uplink is applied. In this work, the first two types are
applied in
the simulation since they have the major effect on the
simulation result. Without outer-loop power control, the target SIR
has to be fixed; here, it is assumed to be 6 dB and the threshold is 4
dB. [10] The power control step is assumed to be 1 dB at each
power control cycle. [15][16]
5.4.1 Open-Loop Power Control
Open-loop power control is applied when new connection requests
arrive in the system as initial step of the admission process (section
4). The total interference at the base station is calculated as it is the
parameter that User Equipment (UE) needs to use in the estimation
of its initial transmit power. According to the parameter and the
target SIR, UE estimates the transmit power and uses it as an initial
transmit power.
5.4.2 Inner-Loop Power Control
This is done periodically to allow the transmitted power of each
connection to be kept as low as possible, yet maintain the target
SIR. Firstly, the base station calculates the received SIR from the
UE. If the SIR is less than the target SIR, the TPC (Transmit Power
Control) command "up" is sent to the UE which increases the
transmitted power by one step. If the SIR is more than the target
SIR+1, the TPC command "down" is sent to the UE which
decreases the transmitted power by one step. Otherwise, the UE
maintains the same transmitted power. After the power control
cycle has been performed, the new SIR for each mobile can be
calculated. Any mobile that has an SIR less than the threshold will
not be dropped immediately; instead the system will try to
reallocate that mobile to another base station nearby that still has
available bandwidth and can provide the link quality. If it is
possible, the mobile will be handed over to the next base station,
otherwise the mobile will be dropped. The transmitted power has a
maximum of 21dBm; if the calculated transmit power is more than
the maximum power, the maximum transmitted power will be
applied.
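A sketch of one inner-loop iteration for a single connection, following the up/down rule above with the 1 dB step, 6 dB target and 21 dBm cap stated in the text (the function itself is our illustration, not the simulator's code):

    def inner_loop_step(tx_power_dbm, measured_sir_db, target_sir_db=6.0,
                        step_db=1.0, max_power_dbm=21.0):
        if measured_sir_db < target_sir_db:              # TPC command "up"
            tx_power_dbm += step_db
        elif measured_sir_db > target_sir_db + 1.0:      # TPC command "down"
            tx_power_dbm -= step_db
        return min(tx_power_dbm, max_power_dbm)          # cap at the maximum power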
In this simulation, the inner-loop power control is performed every
10ms. Experiments have been done by varying the power control
time step and the results show that the blocking rate becomes
erratic as the timing gets too high. From the experiment, the time
step of 10ms has been chosen as it is the highest value that gives
consistent results. From the simulation point of view, it is preferable to use the largest time step as this reduces the length of the simulation.
The power control is essentially that used in 3G without including
outer-loop power control.
5.5 Verification and Validation
One of the most important aspects in developing the simulation
model is its credibility; therefore the validation and verification of
any simulation model are essential. The simulation model was
validated by comparing the result with the relevant result from [8]:
this is discussed in [1].
5.6 CBR Model
According to [17], there are several proposed schemes of
organizational structure and retrieval algorithms for CBR. In this
work, the hierarchical memory with parallel search is used as it
provides an efficient retrieval that is less time consuming, as the
matching and retrieving happen in one step, which also gives less
complexity.
The monitoring process of the system is performed every 10 seconds.
This means the monitoring parameters will be collected for 10
seconds and sent to the local planning layer of an agent where the
CBR model is located as shown in figure 1. The parameters will
then be compared with the SLA requirements and any deviation
from the SLA can be reported. The CBR model will then be used
to find the best solution for the situation. Based on the process model
in figure 1, a solution will be proposed, or where the best matched
case cannot be found or the evaluating process fails, a calculation
might be used instead in order to find the solution according to
certain rules.
As the parallel search has been chosen for the CBR model, the
whole library will be searched for each characterizing index in one
step. If the new case is to be retained in the library, the library
indexes have to be re-sorted according to the priority of the
characterizing index of the new case.
5.7 Monitoring and case matching process
As explained above, the monitoring process is done every 10
seconds. The call blocking, call dropping and the accumulative
value of blocking rate are calculated and by comparing them with
the SLA requirements, the error can be detected. If the error being
reported is significant, the CBR model will be called.
There are currently seven characterising indexes used to describe a case. They are obtained by mapping each actual monitoring factor into the range where its value belongs, so the characterising indexes take the form of small integer numbers. The seven monitoring factors are as follows:
Total throughput for the whole system
Offered traffic for the whole system
Offered traffic for silver customer for the whole system
Offered traffic for gold customer for the whole system
Cell identity where offered traffic exceeds limit
Accumulative blocking rate for silver class
Accumulative blocking rate for gold class
If there is not an exact match, there needs to be a way to identify whether the closest match is acceptable for the situation. In the current model, only the best match is chosen. In future work, the acceptable level for each case will be determined by the distance to the seven-dimensional coordinates defining the individual point of the case. If it is within a tolerable range, the case will be used.
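A sketch of this best-match retrieval over the seven integer indexes, including the tolerance check envisaged above (the case structure and the tolerance value are assumptions):

    import numpy as np

    def retrieve_case(new_indexes, case_library, tolerance=None):
        # Each case holds a 7-element tuple of characterising indexes and a solution.
        distances = [np.linalg.norm(np.subtract(new_indexes, c["indexes"]))
                     for c in case_library]
        best = int(np.argmin(distances))
        if tolerance is not None and distances[best] > tolerance:
            return None                      # fall back to a rule-based calculation
        return case_library[best]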
TRAFFIC PATTERN RECOGNITION AND NUMERICAL RESULTS
The previous work [1] was done to support the basic idea that the
reactive layer of the agent system can be controlled by the planning
layer in order to ensure system compliance with the SLA. Here, the SLA assumptions are the maximum acceptable levels of call blocking rate:
- Maximum acceptable call blocking rate for gold : 0.03
- Maximum acceptable call blocking rate for silver : 0.05
The gold customer pays the highest rate for the least elasticity of
service level. These rates can either be instantaneous values or
measured over a period of time; naturally the numerical limits
would be different.
In [1], the random overload situation has been tested with the traffic
load being increased after the system reached stability. The call
dropping rate is acceptable before high traffic load was applied, but
after changing the load, the call dropping rate increases, then
slowly declines to about the same level as before, because of the
implementation of ideal assignment and admission control. On the
other hand, the call blocking rate increases as a greater number of
mobiles attempt to get into the system and the system tries not to
drop any existing connection, so more will be blocked. The call
buffering time for all classes of customer and all types of service
has been set to zero to give immediate accept or reject decisions.
Figure 5 shows the comparison between the result from the conventional system that does not change the policy (dashed lines) and the result with policy changes (solid lines). Without the SLA-based control, the call blocking rate for all customer classes rises as the traffic load increases.
For the SLA-based control, the implementation here uses a buffer mechanism so that a call request that cannot be served immediately is held for a short time in case resources become available. The
buffering time is configurable. From the solid lines, at the point when the call blocking rate for gold customers reaches the maximum level, policy 2 is applied, which allocates a short
buffering time to call requests from gold and silver customers, with
that for bronze customers still being set to zero. The result shows
that the call blocking rate of gold and silver customers stabilises,
but does not go below the limit set for gold. After waiting for a
short period (here 2 minutes) to ensure the trend is stable, a further
change in policy is applied; this gives longer buffering time for
gold customers and slightly longer buffering time for silver
customers, so increasing still further the probability of gold and
silver customers (especially gold) being accepted at the expense of
bronze.
Figure 5 Comparison between the result from the conventional
system and SLA based control system
[Plot: rate of call blocking over total active calls versus simulation time (ms) for the gold, silver and bronze classes, with and without SLA control, showing the gold and silver maximum levels and the points where policies 1, 2 and 3 are applied as the mean interarrival time changes from 100 ms to 25 ms.]
A simple case library has been generated partly from this previous
work and the knowledge from the work under current study. By
using the simulation model mentioned before, a few traffic patterns
can be implemented to test the system performance.
The two main environments being tested here are the random
overload situation and the hot spot situation. As the system detects
the congestion, the CBR model is called to analyse the situation and
the simulation results will be divided into two sections.
6.1 Random overload case
For this case, the simulation repeats the previous work explained before, adding the CBR model and using the simulation model illustrated in section 5 (in the previous work [1] a less detailed simulation model was used).
Figure 6 shows the simulation results of the call blocking across the
simulation time as the traffic load increases in a conventional
system that does not change policy. The call buffering time for all
classes of customers and all types of services has been set to zero to
give immediate accept or reject decisions.
Figure 6 The simulation result from conventional system for
the random overload situation
Figure 7, 8 and 9 show the effect of using the CBR approach to
identify the current traffic pattern and manage the reactive layer
policies accordingly.
It might be thought that these results are simply the normal result of
applying priorities, but the technique is more powerful. In many
SLAs, it is not short-term violations that are important: an SLA
might specify for instance that the blocking rate must not exceed a
certain value during a day or a month.
The new policy has been applied to the reactive layer as soon as the
system recognises congestion, in this example using the
accumulative error rate over a period of 10s.
The implementation here again uses a buffer mechanism to give a short buffering time to call requests that cannot be served immediately, especially for the higher-priority customers. The
buffering time is configurable. It can be seen from the result that
CBR keeps the call blocking rate for gold and/or silver customers
within the SLA bounds, according to the congestion pattern.
In figure 7, the traffic reaches overload when the accumulative call blocking rate for gold exceeds the limit; at that point silver is still within an acceptable range. In this case the chosen policy gives the highest
buffering time to gold and lower value for silver with that for
bronze still at zero.
Figure 7 Simulation results showing the effect of SLA-based
control by CBR approach for the first random overload case
It can be seen that the system detects the overload situation at the
point where the traffic load increases and generates the appropriate
policy. As the new policy gives priority to gold, the call blocking
for gold customer is maintained within an acceptable range at the
expense of both silver and bronze.
Figure 8 shows the result from the second case, where the traffic is
overloaded with the accumulative call blocking rate for gold and
silver exceeding the maximum value. In this case, both silver and
gold QoS need to be handled. By giving highest buffering time to
silver and slightly lower for gold, the blocking for both can be kept
within the range. As the buffer in this implementation uses the
priority arrangement, gold customers are always at the top of the
queue, so, in order to also give priority to silver customers, their
buffering time has to be higher.
Figure 8 Simulation results showing the effect of SLA-based
control by CBR approach for the second random
overload case
[Plots for Figures 6-8: call blocking rate versus simulation time (s) for the gold, silver and bronze classes, with the gold and silver maximum levels and the change from normal to overload traffic marked.]
In Figure 9 the situation is that the long-term value for gold
customers has been met, but that for silver is at the limit. When
congestion occurs then, silver customers have to be given priority
in order that their long-term blocking is not exceeded, but gold
customers can be allowed to have worse service since there is still
"slack" in their SLA.
Figure 9 Simulation results showing the effect of SLA-based
control by CBR approach for the
last random overload case
The SLA monitoring here is looking at the long-term blocking; it has detected that silver needs priority and has applied that priority. These results show the flexibility of the control system, which assigns different policies to different scenarios. They also show that the highest-priority class can itself be sacrificed in order to keep within the long-term SLA values of another customer class.
In fact any SLA that can be evaluated numerically can be used as
the basis for controlling the policy: the system is that flexible.
6.2 Hot spot case
With hot spots, the monitoring process is able to identify the congestion from the individual blocking and dropping parameters of each cell. The CBR model will then match the pattern with the cases in the library. The proposed mechanism can be seen in figure 10.
The bronze and silver users near the boundary will be transferred to the neighbouring cells that have normal traffic, effectively controlling the cell size in a more comprehensive manner than the simple cell breathing that results from power control. By doing this, some of the capacity will be released in the hot spot cell in order to maintain the users nearer to the centre and the high-priority users.
Figure 10 Hot spot situation and the proposed solution
In the initial work for this hot spot case, the transferring process or
handover will be done every 10s, which is the frequency of the
monitoring process. Examples of results from the initial work in this case are shown in figures 11 and 12.
In figure 11, after the traffic load has increased in the hot spot cell, the call blocking rate for the hot spot cell rises, while the other cells still have a low blocking rate as their traffic remains at a normal level.
Figure 11 Result from the conventional system for the hot
spot situation
[Plots for Figures 9 and 11: call blocking rate versus simulation time (s) for the gold, silver and bronze classes, with the gold and silver maximum levels and the change from normal to overload traffic marked; Figure 10 shows the hot spot cell surrounded by normal-traffic cells.]
Figure 12 shows the result of using the CBR model which
instructs the system to perform the handover for bronze and silver
users near the boundary to the neighbouring cells every 10s. The
blocking rate for the hot spot cell still increases after the traffic
load has increased but by comparing with the result in figure 11,
the call blocking rate is lower. Further work is being done on
evaluating more complex scenarios.
Figure 12 Result from SLA based control system with CBR
model for the hot spot situation
CONCLUSIONS
This paper has introduced the concept of combining CBR with an
intelligent agent layered architecture to manage SLAs in W-CDMA
networks. The simulation results show that the CBR
system has been able to detect congestion occurring and then
apply the appropriate policy to manage the behaviour of the CAC
to block those customers who, at that time, are perceived as less
important to the operator.
The scenarios illustrated are fairly simple but further work is
evaluating the approach over a much more complex range of
situations.
REFERENCES
[1]
Chantaraskul, S. and Cuthbert, L.G. SLA Control for
Congestion Management in 3G Networks, in Proceeding of
the IASTED International Conference on Wireless and
Optical Communications (WOC2003), Banff, Alberta,
Canada, 2003, pp. 447-452.
[2]
Chantaraskul, S. and Cuthbert, L.G. Introducing Case-Based
Reasoning in SLA Control for Congestion Management in
3G Networks, in Proceeding of the
IEEE Wireless
Communications and Networking Conference 2004 (IEEE
WCNC2004), Atlanta, Georgia, USA, 2004.
[3]
Cuthbert, L.G., Ryan, D., Tokarchuk, L., Bigham, J.,
Bodanese, E. Using intelligent agents to manage resource in
3G Networks, Journal of IBTE, vol. 2 part 4, Oct.-Dec. 2001,
pp. 1-6.
[4]
Aamodt, A. and Plaza, E., Case-Based Reasoning: Foundational Issues, Methodological Variations, and System Approaches,
AI Communications, The European Journal of Artificial
Intelligence, vol. 7:1, pp. 39-59. 1994.
[5]
Caulier, P. and Houriez, B. A Case-Based Reasoning
Approach in Network Traffic Control, in Proceeding of the
IEEE International Conference on Systems, Man and
Cybernetics, 1995. Intelligent Systems for the 21st Century,
Volume 2, 1995, pp. 1430-1435.
[6]
Hassanein, H., Al-Monayyes, A. and Al-Zubi,M. Improving
Call Admission Control in ATM Networks Using Case-Based
Reasoning, in Proceeding of the IEEE International
Conference on Performance, Computing, and
Communications, 2001, pp. 120-127.
[7]
Huang, C. Y. and Yates, R. D. Call Admission in Power
Controlled CDMA Systems, in Proceeding of the IEEE
Vehicular Technology Conference, 1996, pp.227-231.
[8]
Capone, A., Redana S. Call Admission Control Techniques
for UMTS, in Proceeding of the IEEE Vehicular Technology
Conference, 2001, pp.925-929.
[9]
Liu, Z. and Zarki, M. E. SIR Based Call Admission Control
for DS-CDMA Cellular System, IEEE Journal on Selected
Areas in Communications, vol. 12, issue 4, May 1994, pp.
638-644.
[10]
Kuri, J. and Mermelstein, P. Call Admission on the Uplink of
a CDMA System based on Total Received Power, in
Proceeding of the IEEE International Conference on
Communications, vol. 3, 1999, pp. 1431-1436.
[11]
So, J.W. and Cho, D.H. Access Control of Data in Integrated
Voice/Data/Video CDMA Systems, in Proceeding of the VTC
Spring 2002, IEEE 55
th
vol. 3, 2002, p.1512-1516.
[12]
Angelou, E.S., Koutsokeras, N.Th, Kanatas, A.G. and
Constantinou, Ph. SIR-Based Uplink Terrestrial Call
Admission Control Scheme with Handoff for Mixed Traffic
W-CDMA Networks , in Proceeding of the 4
th
International
Workshop on Mobile and Wireless Communications Network,
2002, pp. 83-87, 2002.
[13]
Radio Frequency (RF) system scenarios, 3GPP TR 25.942. Available: http://www.3gpp.org
[14]
Laiho, J., Wacker, A. and Novosad, T. Radio Network
Planning and Optimisation for UMTS, John Wiley & Sons,
Ltd., 2002.
[15]
Baker, M.P.J., Moulsley, T.J. Power Control in UMTS
Release '99, 3G Mobile Communication Technologies, 2000.
First International Conference on (IEE Conf. Publ.
No. 471), 27-29 March 2000, pp. 36-40.
[16]
Thong, W.S., Bigham, J. Hierarchical Management of CDMA Network Resources, in Proceedings of the Third International Conference on 3G Mobile Communication Technologies, 2002 (Conf. Publ. No. 489), 8-10 May 2002, pp. 216-220.
[17]
Kolodner, J. Case-Based Reasoning, Morgan Kaufmann
Publishers, Inc., 1993, pp. 289-320.
| Service Level Agreement;Intelligent agent and Case-based reasoning;3G Resource Management |
209 | Using Roles and Business Objects to Model and Understand Business Processes | Business process modeling focuses on describing how activities interact with other business objects while sustaining the organization's strategy. Business objects are object-oriented representations of organizational concepts, such as resources and actors, which collaborate with one another in order to achieve business goals. These objects exhibit different behavior according to each specific collaboration context. This means the perception of a business object depends on its collaborations with other objects. Business process modeling techniques do not clearly separate the multiple collaborative aspects of a business object from its internal aspects, making it difficult to understand objects which are used in different contexts, thus hindering reuse. To cope with such issues, this paper proposes using role modeling as a separation of concerns mechanism to increase the understandability and reusability of business process models. The approach divides a business process model into a business object model and a role model. The business object model deals with specifying the structure and intrinsic behavior of business objects while the role model specifies its collaborative aspects. | INTRODUCTION
Representing and keeping the alignment between the multiple
elements of an organization is fundamental to understand how it
operates and how it can be adapted to cope with a changing business
environment [5]. This requires understanding how business
activities interact and are aligned with other organizational elements
while supporting the operation of the business.
In the past years, significant work, particularly in the area of
business process modeling has been proposed, ranging from general
modeling concepts to business automation languages [10, 16,
17, 18]. Business process modeling can be used for multiple purposes
, such as facilitating human understanding and communication
[29], supporting process improvement and re-engineering
through business process analysis and simulation [8, 17] and
automating the execution of business processes [1, 22].
A business process model captures the relationships that are
meaningful to the business between different organizational concepts
, such as activities, the resources used by activities and the
human or automated actors who perform these activities. Identifying
the properties and relationships of these concepts is fundamental
to help understanding and evolving the business since it facilitates
the communication between stakeholders, business specialists
and support system specialists.
We model business concepts as classes of business objects in a
consistent object-oriented glossary of business concepts from
where objects can be composed, specialized and reused.
However, fully characterizing the type of a business object, its
properties and relationships is not straightforward. This results
from a business object generally being used in different contexts
and relating to several other business objects in the organization.
For example, a business object modeling a
Product
may be
brought into play in several processes, such as
Manufacturing
,
Logistics
and
Selling
. In each of these contexts, it relates with different
activities and resources, displaying different and possibly overlapping
properties and behavior that are context-dependent. This
means the object acts as a multi-dimensional concept.
If business objects are modeled as one-dimensional concepts,
i.e. without their properties and behavior being described as dependent
on the context, then the objects will not have explicit information
on how to guide the design of a business support system that
is able to cope with evolution. For example, if the
Manufacturing
process changes, there may be changes to the
Product
object.
However, if the
Product
object does not explicitly represent the
aspects related to its manufacture, then there will be no information
on the properties requiring modifications.
This paper focuses on describing how to break up the universe of process modeling and its business objects into different aspects or areas of concern, each of which can then be handled independently and later composed to synthesize a complete model. To do
so, we propose defining two complementary conceptual models, a
role model and a business object model. The role model describes
business object collaborations and the properties of business objects
that are concerned with each role, being each role a type on
its own. The business object model describes the structure and the
properties of a business object that are independent of a specific
context. The relationships between business objects are specified
by roles the objects play while collaborating. We argue that using
roles and business objects to model business processes improves
the understandability of the individual business objects and of the
process model. It also improves model reengineering since it promotes
reuse and makes explicit the dependencies between the
model elements.
The remainder of this paper is structured as follows: next section
reviews some of the research on business process modeling. Section
3 reviews role modeling, describes how roles can be identified
and defines the concepts of business objects and role. Section 4
presents how the business object and the role model can describe a
business process, followed by an example of application in section
5. Finally, section 6 sets out the conclusions and future work.
MODELING BUSINESS PROCESSES
The Workflow Reference Model [31] defines a business process
as a set of one or more connected activities, which collectively
realize a business objective or policy goal, normally within the
context of an organizational structure defining functional responsibilities
and relationships. This definition extends the definition
proposed by Davenport and Short [7] stating that a business process
is a set of logically related tasks performed to achieve a defined
business outcome. Most approaches to business process
modeling concentrate on some sort of process map or diagram,
which shows how activities are scheduled in the course of a business
process. Indeed, there is little disagreement about the key
elements of process diagrams. There are usually ways to represent
decision points and to express various activity coordination patterns
, such as sequential flow, branching and parallel execution.
Some techniques introduce swim-lanes to indicate the responsibilities
of participants, such as departments or individuals. This allows
representing the activities performed by actors in the context of a
process.
Two representative coordination-oriented business process modeling
techniques that make use of actors, activities and swim-lanes
are Role Interaction Networks [24] and Role Activity Diagrams
[21]. Role Activity Diagrams provide the means to identify roles
and interactions. Roles organize a process' activities into sets of
operations associated with a given participant in the process. Interactions
show the dependencies between those participants. While
this approach improves the understandability of a process model
since it depicts what a participant does in a process, it falls short of explaining the behavior of the business objects in a specific context of interaction. Additionally, roles are defined as groups of activities and not as types, so they cannot be explicitly composed or specialized.
Business process modeling is not limited to process diagrams.
The focus of this paper is not on process diagrams but on describing
the roles that are used to specify the responsibilities of business
objects. A business object is the model of a concept in the business
universe of discourse. It plays roles in a business process by means
of participating in different activities. Business objects participate
in different business processes in different contexts, thus playing
multiple roles. It is important to note that process diagrams do not
fully describe the business object structure and relationships, and
do not emphasize why activities are performed or roles are enacted
. Besides, they only identify actor roles, i.e. the roles of the
performer of an activity. This means, for example, that the properties
of a resource that is used by multiple activities are not separated
according to its usage context. The next section introduces
the fundamental concepts behind role theory and role modeling.
ROLE MODELING
In the late 1920s, role theory started to generated interest among
social scientists from many backgrounds, such as psychology and
sociology. Its central concern has been with patterns of human
conduct, context and social structure as well as with individual
response. The motivation for roles is to allow particular viewpoints
regarding the factors presumed to be influential in governing behavior
. It lies on a theatrical analogy of actors playing parts or
roles in a play. As Biddle and Thomas [4] have stated: "When actors
portray a character in a play, their performance is determined
by the script, the director's instructions, the performances of fellow
actors, and reactions of the audience as well as by the acting
talents of the players. Apart from differences between actors in the
interpretation of their parts, the performance of each actor is pro-grammed
by all of these external factors; consequently, there are
significant similarities in the performances of actors taking the
same part, no matter who the actual actors are."
There are many complementary definitions for the concept of
role but still there is no consensus on the properties to represent it.
In the late 1970s, sociological role theorists defined a role as "a
comprehensive pattern for behavior and attitude" [26] or as "be-havioral
repertoire characteristic of a person or a position" [3].
Nonetheless, the concept of role is used in computer science and
software engineering as a modeling technique that deals with separation
of concerns, i.e. the separation of the behavioral repertoire
characteristics of some concept. It is used in methodologies such
as RM-ODP [14] and in several object-oriented frameworks [10,
12, 14, 15, 25].
3.1 Business Objects and Roles
Modeling is an abstraction technique that consists of identifying
concepts of interest in some universe of discourse and representing
its essential features for a specific purpose in a model. In business
modeling, the universe of discourse corresponds to what is perceived of an organization as being reality by business domain experts.
Ontologies typically distinguish entities (nouns) from activities
(verbs). Entities are things that exist in the business, either concrete
(e.g. a person) or abstract (e.g. an organization). Activities
are things that happen in the business. Activities make use of the
business entities. We model both of these concepts as business
objects. A business object is then the super type of all objects that
represent business concepts with a well-defined boundary and
identity. It encapsulates the definition, attributes, behavior and
relationships to other business objects [20]. The state of a business
object is characterized by the values of its attributes. The behavior
is given by the actions that the business object is capable of performing
to fulfill its purpose, including changing its intrinsic attributes
and collaborating with other business objects.
Business objects have intrinsic and extrinsic features. Intrinsic
features describe the object in isolation, while extrinsic features
arise from the relationships or collaborations with other business
objects. For example, a Person has intrinsic features such as Age and
Sex, and extrinsic features such as Job Position and Salary, which
derive from a transitory relationship between the Person and some
Organization or Company. Intrinsic features may change over time
(e.g. Age) but always characterize the object. However, extrinsic
features may become inappropriate (e.g. the Job Position property
is not relevant when characterizing an unemployed person).
One way to separate the intrinsic features from the extrinsic features
of an object is by means of roles [4, 15, 23]. Roles, as a modeling
construct, aim at separating the concerns that arise from
business object collaborations. We define a role as the observable
behavior of a business object, defined in a specific collaboration
context. Thus, a role represents the extrinsic features of a business
object when it collaborates with other business objects.
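As an illustration only, the separation just described can be sketched
in an ordinary object-oriented language. The following minimal Java
sketch keeps intrinsic features (Age, Sex) inside the business object
and moves extrinsic ones (Job Position, Salary) into a role object
created for a collaboration context; all class and method names here
are assumptions of the sketch, not part of the proposed approach.

import java.util.ArrayList;
import java.util.List;

class Organization {
    final String name;
    Organization(String name) { this.name = name; }
}

// Extrinsic features live in a role object tied to a collaboration context.
class JobRole {
    final Organization employer;   // the context the role depends on
    final String position;
    final double salary;
    JobRole(Organization employer, String position, double salary) {
        this.employer = employer;
        this.position = position;
        this.salary = salary;
    }
}

// Intrinsic features characterize the business object in isolation.
class Person {
    final String name;
    final String sex;
    int age;
    private final List<JobRole> roles = new ArrayList<>(); // transitory, may be empty

    Person(String name, String sex, int age) {
        this.name = name; this.sex = sex; this.age = age;
    }
    void take(JobRole role) { roles.add(role); }    // person becomes employed
    void drop(JobRole role) { roles.remove(role); } // role no longer applies
    List<JobRole> currentRoles() { return roles; }
}

public class RoleSplitSketch {
    public static void main(String[] args) {
        Person p = new Person("Ana", "F", 34);
        JobRole job = new JobRole(new Organization("Acme"), "Analyst", 42_000);
        p.take(job);   // extrinsic features become observable
        p.drop(job);   // p is still the same Person; only the role is gone
        System.out.println(p.currentRoles().isEmpty()); // prints: true
    }
}

Dropping the role leaves the intrinsic state of the object untouched,
which is exactly the property the next subsection uses to distinguish
roles from entities.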
3.2 Identifying Roles
To distinguish roles from entities, Guarino et al. proposed two
criteria [11]. A role is a type that (1) is founded and (2) lacks semantic
rigidity. Something is founded if it is defined in terms of
relationships with other things in a given context. For instance, the
concept of
Reader
is founded since for a
Person
to be a
Reader
there must be something being read. Conversely, a
Person
is not
founded for the reason that its intrinsic properties are defined on
their own regardless of the collaborations with other things.
Something is semantically rigid if its identity directly depends
on being kind of some class. A
Book
is semantically rigid since its
identity is still that of a
Book
regardless someone is reading it or
not. In contrast,
Reader
is not rigid because an entity filling the
role of
Reader
retains its identity outside the context of that role.
For example, a
Person
is a
Reader
while reading a
Book
, but when
it stops reading it, it is still a
Person
.
Therefore, roles are founded, semantically non-rigid types while
entities are non-founded, semantically rigid types.
ROLE-BASED PROCESS MODELING
The proposed approach deals with decomposing the business
process modeling universe into two complementary models, the
business object model and the role model, and later binding these
two models into an integrated specification of the business process
. The business object model deals with the structure and intrinsic
properties of business objects. Here, a process is modeled as a
network of business objects. However, business objects relate to
other business objects in specific contexts and are often used in
more than one context, where they may play different roles. So,
the roles for a business object only need to be included in its definition
when the object acts in the collaboration contexts described
by the roles. It is also impossible to forecast all of the possible
roles of a business object. Thus, adding superfluous roles to the
object impairs several design quality attributes such as understandability
, maintainability and reusability. To deal with such a concern
, roles and business objects should be dealt with separately and
later bound together.
The concept of role allows a system to be decomposed into a set
of business objects capable of clearly separating core parts and
collaboration-dependent parts and then to abstract and compose
such objects. Consequently, a set of roles helps business objects to
be defined in a more reusable and extensible way. Roles may also be
reused as an independent unit encapsulating specific collaborations
. Roles are organized into role models, which deal with specifying
the network of related roles required for a collaboration to
happen.
We propose defining and representing both of these models using
the Unified Modeling Language [19] since its graphical syntax and
semantics are well known by software specialists and, to a lesser
extent, by business specialists. However, the standard UML
does not have explicit constructs to represent the required business
domain concepts. We make use of the UML extensibility package
to define such concepts. The extensibility package specifies how
UML model elements can be extended and customized with new
graphical representations and new semantics by specifying stereotypes
, tagged values and constraints. A coherent set of such extensions
defined for a specific purpose makes up a UML profile [2,
19]. The next subsections describe how the business object models
and role models are represented.
4.1 The Business Object Model
The business object model specifies the structure and intrinsic
properties of business objects. Business objects are coordinated
towards the achievement of goals that describe why actions occur.
A business process describes how objects are coordinated.
Figure 1. Classes in the business object model profile.
Figure 1 is a class diagram describing the UML stereotypes
(classes in white) that are used in the business object model. A
Business Object is a UML Class and it is specialized as a noun or
verb by means of the Entity and Activity class stereotypes.
Business object models are represented as UML class diagrams
and the intrinsic behavior of their objects is represented using
UML's behavioral diagrams. Note that collaborations between business
objects are not represented in this model but in the role model.
The stereotypes within the business object model can be summarized
as follows:
Business Object: an abstraction of a concept of interest in the
organization. It is a UML Class.
Activity: a specialization of Business Object. It is a verb describing
how a piece of work is performed. Activities are performed by Actors,
and operate over Business Objects, especially those acting as
Resources.
Entity: a specialization of Business Object. It is a noun describing
a concrete or abstract business concept.
Resource: a specialization of Entity, which is the input or output of
an Activity. It represents things such as materials or information.
Actor: a specialization of Entity. It is someone (a human actor) or
something (an automated actor, such as an information system or a
production machine) that can perform the actions required by an
Activity.
Goal: a specialization of Entity that represents a measurable state
that the organization intends to achieve. Goals are achieved by
Business Objects, especially Activities.
A business process is composed of Activities that use input
Resources, such as materials or information, to produce output
Resources. Nevertheless, the input of an Activity may be any other
Business Object or a composition of Business Objects. For instance,
changing or reengineering a business process is in itself a process.
This process takes as input a business object model (i.e. a network
of relationships between business objects) and produces a modified
model. Therefore, the composed business object model is being
used as a resource in this context.
Activities are performed to achieve specific business Goals.
Analyzing Goals and their relationships with the Activities produces
an alignment measure between the processes and the organization's
operational strategy. The Activities of a business process are not
autonomous in the sense that they require one or more Actors or
Business Support Systems to perform them. Actors represent people,
systems (mechanical or computer based) or a combination of both.
At a large scale, business processes are aggregated into value
chains (which are also business processes) that produce a measurable
value that is visible to external customers.
Figure 2. Example of activity composition and specialization.
Business objects are classes conforming to a type. They can be
specialized and composed just like ordinary objects. Figure 2
shows an example of a class diagram depicting composition and
specialization. Each chevron icon represents an activity or process
as previously defined. The Sell Product activity is composed of a
set of sub-activities such as Identify Customers and Handle Order.
These activities can be further decomposed into actions that are
more refined. The activity Sell Product is specialized as Sell by
Mail Order and Sell Online. Note that composition and specialization
do not imply any collaboration constraints between the activities.
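For readers more comfortable with code than with UML profiles, the
stereotype hierarchy of Figure 1 can be approximated by a plain Java
class hierarchy. The sketch below is illustrative only; the actual
profile is defined through UML stereotypes, and the constructor
shapes chosen here are assumptions of the sketch.

import java.util.List;
import java.util.Set;

abstract class BusinessObject {          // <<Business Object>>: a UML Class
    final String name;
    BusinessObject(String name) { this.name = name; }
}

abstract class Entity extends BusinessObject {   // <<Entity>>: a noun
    Entity(String name) { super(name); }
}

class Resource extends Entity {          // input or output of an Activity
    Resource(String name) { super(name); }
}

class Actor extends Entity {             // human or automated performer
    Actor(String name) { super(name); }
}

class Goal extends Entity {              // measurable state to be achieved
    Goal(String name) { super(name); }
}

class Activity extends BusinessObject {  // <<Activity>>: a verb
    final Set<Actor> performers;
    final List<Resource> inputs;
    final List<Resource> outputs;
    final Set<Goal> achieves;
    Activity(String name, Set<Actor> performers, List<Resource> inputs,
             List<Resource> outputs, Set<Goal> achieves) {
        super(name);
        this.performers = performers;
        this.inputs = inputs;
        this.outputs = outputs;
        this.achieves = achieves;
    }
}

public class ProfileSketch {
    public static void main(String[] args) {
        Activity sell = new Activity("Sell Product",
                Set.of(new Actor("Sales Clerk")),
                List.of(new Resource("Order")),
                List.of(new Resource("Invoice")),
                Set.of(new Goal("Increase revenue")));
        System.out.println(sell.name + " uses " + sell.inputs.size() + " input resource(s)");
    }
}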
4.2 The Role Model
Roles are a separation of concerns mechanism that allows business
objects to be observed from different perspectives. Role models
identify roles as types and describe the network of roles required
for a specific collaboration to happen.
As a player of a collaboration, a role defines the set of extrinsic
properties and behavior necessary to realize its participating collaborations
.
Figure 3. Representation of a role model package (left). Pair of related roles (right).
Role models are represented as UML packages with two compartments
(v. Figure 3, left). The bottom compartment of the role
model is a standard UML activity or interaction diagram describing
how the roles are orchestrated. The top compartment of the
package depicts the roles within the role model. Roles are represented
by rounded rectangles, connected by a navigable collaboration
relationship between the roles. The representation of a role
always shows its name. Optionally, it also depicts in parenthesis
the name of the role model to where the role belongs so that its
scope is clearly defined (v. Figure 3, right).
Figure 4 shows an example of three role collaborations contained
in two role models. The Tutorship role model defines a collaboration
pattern between two roles, Tutor and Student. In turn, the Course
role model defines two pairs of collaborations: Participant / Taken
Course and Lecturer / Given Course.
Figure 4. Example of role collaborations.
Roles are modeled as classes and represented in class diagrams.
Methods and attributes concerning the specific collaboration context
can be specified in the class diagram. Roles can also be constrained.
A constraint asserts conditions between the roles in a role model.
It can be expressed informally or formally (e.g. in plain text or
OCL). An example of a constraint is disallowing two roles to be
played simultaneously by the same player, such as forbidding an
object from playing the role of Tutor and that of Student
simultaneously and in the same context.
Figure 5. Example of role specialization.
Figure 5 is a class diagram that shows how the Teacher role is
specialized as Tutor and Lecturer. Role specialization means that if
a business object is able to play a child role, then it is also able
to play the super role. We have not yet found the need to define
abstract roles, i.e., a role that may only have its non-abstract
specializations instantiated.
4.3 Binding Roles to Objects
Roles are bound to business objects pertaining to a given business
object model. The binding is accomplished via the play relationship
stereotype, which links a business object to a role. It means the
business object is able to exhibit or play the behavior specified by
the target role.
Figure 6. Binding roles to business objects.
Figure 6 shows a class diagram where the pairs of roles Tutor /
Student, Lecturer / Given Course and Participant / Taken Course,
defined earlier in Figure 4 and Figure 5, are bound to two different
business objects, Person and Course. The binding between objects
and roles is depicted as a strong arrow. The light arrow represents
the relationships between roles. The model also defines a constraint
in the Tutorship role model. It asserts that the instances actually
playing the Tutor and Student roles must be distinct. In this
example, it means the Person acting as a Tutor and the Person acting
as Student must be different objects, as expected.
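A rough Java rendering of this binding and of the distinctness
constraint follows. It is a sketch under our own naming assumptions
(Role, Tutorship, bind); in the approach itself both the play
relationship and the constraint are expressed in UML and OCL, not in
code.

import java.util.HashMap;
import java.util.Map;

interface Role {}                       // marker for role types
final class Tutor implements Role {}    // roles of the Tutorship role model
final class Student implements Role {}

record Person(String name) {}           // business object able to play both roles

class Tutorship {
    // which business object currently plays each role of this role model
    private final Map<Class<? extends Role>, Person> players = new HashMap<>();

    void bind(Class<? extends Role> role, Person player) {
        players.put(role, player);
        // constraint from Figure 6: Tutor and Student must be distinct players
        Person tutor = players.get(Tutor.class);
        Person student = players.get(Student.class);
        if (tutor != null && tutor.equals(student)) {
            throw new IllegalStateException("Tutor and Student must be distinct");
        }
    }
}

public class BindingDemo {
    public static void main(String[] args) {
        Tutorship tutorship = new Tutorship();
        tutorship.bind(Tutor.class, new Person("Alice"));
        tutorship.bind(Student.class, new Person("Bob"));   // ok: distinct players
        // tutorship.bind(Student.class, new Person("Alice")); // would violate it
    }
}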
EXAMPLE
Figure 7 shows two base role models, Supply and Pay, and a composed
role model, Purchase. Each role is a class and has methods and
attributes concerning the specific collaboration context (e.g. the
Supplier role in the Supply role model has the inquire and order
methods). The Purchase role model describes the collaborations
between a Client and a Supplier, while the Pay role model specifies
Payer and Payee.
Figure 7. Supply, Pay and Purchase role models. Purchase is composed of the role model Supply and role model Pay.
The Purchase role model is a composition of the Supply and Pay role
models. A purchase results from supplying a product and paying for
it. Figure 8 shows the binding of the roles within the Purchase role
model to a set of business objects. In the first case, a Retailer
acts as the Client and Payer to a Producer, who is a Supplier and a
Payee to the Retailer. However, the Retailer also acts as a Supplier
(and a Payee) to a customer.
Figure 8. Binding roles to business objects.
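As a final illustration for this example, a subset of the bindings of
Figure 8 can be written down as plain data. The sketch below uses an
ad-hoc Binding record (an assumption of the sketch, covering only the
Retailer, Producer and Customer objects) to show that the same
business object, Retailer, plays different roles of the Purchase role
model depending on the partner it collaborates with.

import java.util.List;

record Binding(String businessObject, String role, String roleModel) {}

public class PurchaseBindings {
    public static void main(String[] args) {
        List<Binding> bindings = List.of(
            // Retailer buying from the Producer
            new Binding("Retailer", "Client",   "Purchase"),
            new Binding("Retailer", "Payer",    "Purchase"),
            new Binding("Producer", "Supplier", "Purchase"),
            new Binding("Producer", "Payee",    "Purchase"),
            // Retailer selling to the Customer
            new Binding("Retailer", "Supplier", "Purchase"),
            new Binding("Retailer", "Payee",    "Purchase"),
            new Binding("Customer", "Client",   "Purchase"),
            new Binding("Customer", "Payer",    "Purchase"));

        // The observable behavior of "Retailer" depends on the active binding.
        bindings.stream()
                .filter(b -> b.businessObject().equals("Retailer"))
                .forEach(b -> System.out.println(b.role() + " in " + b.roleModel()));
    }
}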
CONCLUSIONS
This paper has presented the fundamental concepts towards a
conceptual object-oriented framework for role-based business
process modeling. It relies on defining two distinct models. The
business object model focus on describing the components of a
business process (activities, goals, resources and actors) as business
objects. This model depicts the type of each business object,
its intrinsic behavior and properties but does not address the representation
of the object's features that are related to its collaborations
with other objects.
The role model depicts the collaborative behavior between roles
and the constraints that regulate them. Roles are bound to business
objects in a specific business object model, thus defining their
usage context. This model describes roles as types on their own
that can be specialized and aggregated. Role reuse is possible
whenever the semantics of the interaction pattern is the same, regardless
of the interaction context.
The proposed approach separates the specification of the intrinsic
features of a business object from its extrinsic features, meaning
that the properties and behavior that arise from the collaborations
with other objects are separated from the properties concerning
the object. This separation increases the understandability of the
business process since each different aspect of
the business object may be discussed, analyzed and dealt with
separately.
Additionally, roles contribute to keeping the alignment between
the multiple organizational levels where a business process is
defined. When a business object specified at business level is
mapped to a component at business process support systems level,
roles provide information on how to design the component so that
changes to other levels can be traced and managed. Since the collaborative
aspects of a business object are specified outside the
object as roles, changes to a business process only interfere with
the roles which derive from the corresponding activities, leaving
the intrinsic properties of the object and its remaining roles
unchanged. This means that only the implementation of the concerned
roles needs modifications. The same reasoning applies the
other way around. When the implementation of a specific role or
business object is changed due to technical modifications or to the
evolution of the software, these changes can be traced up to the
processes and goals depending on it.
The value of using role modeling increases with the need to make
explicit the patterns of interaction between business objects. This
is the case for processes whose business objects relate to several
other business objects. In this case, understanding and
reengineering such a process is often difficult due to the number of
dependencies between objects, which are not separated or organized
according to the interaction context. This also makes it difficult
to abstract common behavior patterns so that the business process
elements may be reused in other contexts.
We are currently extending this framework to enhance the representation
of the interaction between business objects and the
corresponding business support systems. The goal is to analyze the
gap between the existing human skills and information system
services of an organization and the requirements imposed by the
as-is and to-be business models so that the alignment between
these two levels may be improved.
REFERENCES
1.
W.Aalst, K.Hee, Workflow Management, MIT Press, 2002.
2.
S.Alhir, Unified Modeling Language Extension Mechanisms,
Distributed Computing, 1998.
3.
C. Bachman, M. Daya. The role concept in data models, Proceedings
of the 3rd International Conference on VLDB, 1977.
4.
B. Biddle, E. Thomas, Role Theory, Concepts and Research,
Kluwer Publishers, 1979.
5.
Y. Chan, Why Haven't We Mastered Alignment?: The
Importance of the Informal Organization Structure, MISQ
Executive, Vol.1, No.2, 2002.
6.
B. Curtis, M. Kelner, J. Over, Process Modeling, Communications of
the ACM, Vol. 35, No. 9, 1992.
7.
T. Davenport, J. Short, The New Industrial Engineering:
Information Technology and Business Process Redesign.
Sloan Management Review, 1990.
8.
H. Eertink, W. Janssen, P. Luttighuis, W. Teeuw, C. Vissers, A
Business Process Design Language, World Congress on
Formal Methods, Springer, 1999, pp. 76-95.
9.
H. Eriksson, M. Penker, Business Modeling with UML, OMG
Press, 2001.
10. G. Gottlob, M. Schrefl, B. Röck, Extending Object-Oriented
Systems with Roles, ACM Transactions on Information Systems,
Vol. 14, 1996, pp. 268-296.
11. N. Guarino, M. Carrara, and P. Giaretta. An ontology of meta-level
categories. Proceedings of the Fourth International
Conference on Knowledge Representation and Reasoning,
pages 270-280. Morgan Kaufmann, 1994.
12. T. Halpin, Augmenting UML with Fact-orientation, 34th
Hawaii International Conference on System Sciences, IEEE
Press, Hawaii, USA, 2001.
13. ISO, ISO/IEC 10746 ODP Reference Model, International
Standards Organization, 1995.
14. E. Kendall, Agent Roles and Role Models, New Abstractions
for Multiagent System Analysis and Design, International
Workshop on Intelligent Agents, 1998.
15. B. Kristiansen, Object-Oriented Modeling with Roles, 1st
Conference on Object Information Systems, 1996.
16. M. Madhavji, The Process Cycle, Software Engineering
Journal, Vol. 6, No. 5, 1991.
17. C. McGowan, L. Bohmer, Model-based business process
improvement, 15th International Conference on Software
Engineering, IEEE Computer Society Press, 1993.
18. D. Miers, Business Process Engineering, C-T Colin, Kogan
Page, London, 1996.
19. OMG, Unified Modeling Language Specification, Version 1.5,
formal/03-03-01, 2003.
20. OMG, Business Object Management Special Interest Group
(BOMSIG) Glossary of Terms, 1995.
21. M. Ould, Business Processes, Modeling and Analysis for
Reengineering and Improvement, John Wiley & Sons, 1995.
22. A. Scheer, ARIS Business Process Modeling, 2nd edition,
Springer, 1999.
23. T. Reenskaug et al., Working With Objects: The OOram
Software Engineering Method. Manning Publications Co.,
1996.
24. B. Singh, G. Rein. Role Interaction Nets (RINs): A Process
Description Formalism, MCC, 1992
25. D. Taylor, Business Engineering with Object Technology,
John Wiley & Sons, 1995.
26. R. Turner, Strategy for Developing an Integrated Role Theory.
Humboldt Journal of Sociology and Religion 7: 123-139,
1979.
27. M. Uschold, M. King, S. Moralee, Y. Zorgios, The Enterprise
Ontology, The Knowledge Engineering Review, Vol. 13,
1998.
28. E. Verharen, A Language-Action Perspective on the Design of
Cooperative Information Agents, CIP-Gegevens Koninklijke
Bibliotheek, 1997.
29. T. Walford, Business Process Implementation for IT
Professionals and Managers, Artech House, MA, 1999.
30. E. Yourdon, Modern Structured Analysis, Prentice-Hall,
Englewood Cliffs, NJ, 1989.
31. Workflow Management Coalition, The Workflow Reference
Model, 1995
| Business Object;Business Process Modeling;Role Modeling;Organizational Engineering |
21 | A Taxonomy of Ambient Information Systems: Four Patterns of Design | Researchers have explored the design of ambient information systems across a wide range of physical and screen-based media. This work has yielded rich examples of design approaches to the problem of presenting information about a user's world in a way that is not distracting, but is aesthetically pleasing, and tangible to varying degrees. Despite these successes, accumulating theoretical and craft knowledge has been stymied by the lack of a unified vocabulary to describe these systems and a consequent lack of a framework for understanding their design attributes. We argue that this area would significantly benefit from consensus about the design space of ambient information systems and the design attributes that define and distinguish existing approaches. We present a definition of ambient information systems and a taxonomy across four design dimensions: Information Capacity, Notification Level, Representational Fidelity, and Aesthetic Emphasis. Our analysis has uncovered four patterns of system design and points to unexplored regions of the design space, which may motivate future work in the field. | INTRODUCTION
From the very first formulation of Ubiquitous Computing, the
idea of a calmer and more environmentally integrated way of
displaying information has held intuitive appeal. Weiser called this
"calm computing" [35] and described the area through an elegant
example: a small, tangible representation of information in the
world, a dangling string that would wiggle based on network
traffic. When information can be conveyed via calm changes in
the environment, users are more able to focus on their primary
work tasks while staying aware of non-critical information that
affects them. Research in this sub-domain goes by various names
including "ambient displays", "peripheral displays", and
"notification systems". The breadth of the systems in these broad
categories is quite large. We seek to disentangle the terminology
used to describe and categorize the wide array of systems in order
to provide a common language for discussing research therein.
An ambient display can represent many types of data, from
stock prices, to weather forecasts, to the presence or absence of
colleagues. Maintaining awareness of co-located and distant work
and social groups has been a long-term research thread in the area
of Computer Supported Cooperative Work (CSCW) [5, 8]. The
Tangible Media Group at the MIT Media Lab, directed by Ishii,
also helped shape the field of ambient computation. They coined
the term "tangible media," citing inspiration from Weiser's vision
[35] and from Pederson and Sokoler's AROMA system [29] and
developed AmbientROOM [17] and Ambient Fixtures [6, 18].
These systems use ambient displays to make people aware of both
group activity and other information such as network traffic.
Recent work in Ambient Intelligence has brought techniques from
Artificial Intelligence to ambient systems, spearheaded by the
Disappearing Computer initiative of the European Union [31].
This research thrust seeks to imbue ambient systems with
contextual knowledge about the environment. The Roomware
project has resulted in smart architectural spaces that support
information conveyance (and group collaboration) [33].
Researchers have developed systems that use a multitude of
everyday objects to display information. Examples include lights
of various sorts [2, 17], sounds [25], shadows [8], artificial flowers
[18], mobiles [24], and office-décor water fountains [12, 16].
Further research has sought to use framed photographs [26] and
larger artistic pictures to represent information from the world in
an art-like manner [14, 30, 32]. There are also peripheral display
"modes" of a user's main desktop, including screensavers like
What's Happening [36], information bars and menus such as those
leveraged in Sideshow and Irwin [6,
22], and alternate panes, like
Apple's Dashboard [3]. As one can see, the design space is large.
All these systems provide a rich history of system design
principles, approaches, and decisions, but accumulating theoretical
and craft knowledge has been stymied by the lack of a unified
vocabulary to define and describe these systems. In this paper we
propose a set of design choices that developers of ambient
information systems must confront to build successful and
compelling systems. First we set out a definition of an ambient
information system that is a synthesis of the varied definitions
given in published research. We hone the intuitive set of
characteristics that distinguish ambient systems from other
ubiquitous computing research systems. Next, we propose a set of
design dimensions for ambient information systems. The four
dimensions of system design elucidate the main decisions one
confronts when designing an effective ambient system. Finally, we
explore the clusters across dimensions to uncover four coherent
combinations of system designs, which work as design patterns for
the field. The results also identify new ways of combining the
design attributes to explore new possibilities for ambient
information systems.
AMBIENT INFORMATION SYSTEMS
Many different terms have been used to describe the types of
systems we discuss in this paper. Three of the most commonly
used terms are "ambient display," "peripheral display," and
"notification system." But how does one differentiate these
terms? Based on general understandings, we claim that:
all ambient displays are peripheral displays,
some notification systems are peripheral displays
(some notification systems are not peripheral but are
instead the object of focused work and attention)
The words of researchers themselves likely best explain their
conceptions of the systems that they have built. Below, we present
germane definitional quotes.
Ishii et al: "[In Ambient Displays] information is moved off
the screen into the physical environment, manifesting itself as
subtle changes in form, movement, sound, color, smell,
temperature, or light. Ambient displays are well suited as a
means to keep users aware of people or general states of large
systems, like network traffic and weather." [17]
Matthews et al: Peripheral displays, then, are displays that
show information that a person is aware of, but not focused on.
[24]
Matthews et al: "Ambient displays might be defined as those
that are "minimally attended" (e.g. just salient enough for
conscious perception) while alerting displays are "maximally
divided" (e.g. slightly less salient than focal tasks). [24]
Stasko et al: Ambient displays typically communicate just one,
or perhaps a few at the most, pieces of information and the
aesthetics and visual appeal of the display is often paramount.
Peripheral displays refer to systems that are out of a person's
primary focus of attention and may communicate one or more
pieces of information." [32]
Mankoff et al: "Ambient displays are abstract and aesthetic
peripheral displays portraying non-critical information on the
periphery of a user's attention... They generally support
monitoring of non-critical information." "Ambient displays
have the ambitious goal of presenting information without
distracting or burdening the user." [20]
Rounding and Greenberg: "The [notification collage] is
designed to present info[rmation] as lightweight and peripheral
objects. It does not demand the full attention of its users: rather
it can be attended to in passing, where people collaborate should
the need or desire arise." [14]
McCrickard et al: "Often implemented as ubiquitous systems or
within a small portion of the traditional desktop, notification
systems typically deliver information of interest in a parallel,
multitasking approach, extraneous or supplemental to a user's
attention priority." [21]
McCrickard et al: Notification systems are defined as
interfaces that are typically used in a divided-attention,
multitasking situation, attempting to deliver current, valued
information through a variety of platforms and modes in an
efficient and effective manner [21].
The easiest way to explain the differences between systems is
to look at the design motivations that informed them. Ambient
displays are those that have pointed aesthetic goals and present a
very small number of information elements. These systems are a
proper subset of peripheral displays, which can appear either in the
environment or on secondary or even primary computer displays.
Notification systems' design motivation results from divided
attention situations. As such, they can be equal to a primary work
task in their attentional needs or be secondary. When notification
systems are designed to be secondary to a primary task, the
systems are appropriately defined as peripheral.
In this paper, we propose the term ambient information system
as the unit of study and define the behavioral characteristics of
such systems as follows:
Display information that is important but not critical.
Can move from the periphery to the focus of attention and
back again.
Focus on the tangible; representations in the environment.
Provide subtle changes to reflect updates in information
(should not be distracting).
Are aesthetically pleasing and environmentally appropriate.
PREVIOUS TAXONOMIES
A small number of research papers that describe ambient
information systems also include extended discussions of the
design dimensions that motivate and contextualize their work. The
authors provide dimensions to compare and contrast their systems
to others in order to explain their design rationales.
Matthews et al use the dimensions notification level,
transition, and abstraction to characterize systems in this space
[24]. They developed the Peripheral Display Toolkit [23] that
helps people to develop ambient information displays more easily.
Their concept of notification level means the relative importance
of a particular data stream. Transitions are the programmatic
changes to the display, based on the data. Transitions include
fading, scrolling, or animation effects. They define abstraction as
the mapping that takes a piece of numerical or ordinal data and
turns it into something that the ambient display can use, something
"more easily interpreted with less [user] attention."
Matthews et al segregate notification level into five levels:
Ignore, Change Blind, Make Aware, Interrupt, and Demand
Attention. The gradations run from low, a system ignoring the
change in the data, to high, a system demanding attention in a way
that must also be explicitly dismissed. They propose categories of
transition: interrupt, make aware, and change blind. Finally, they
bifurcate abstraction into feature abstraction or degradation.
McCrickard et al introduce a different set of three dimensions
to classify notification systems: interruption, reaction, and
comprehension [21]. Interruption is defined psychologically,
similar to Matthews' notion, "as an event prompting transition and
reallocation of attention focus from a [primary] task to the
notification." Reaction is defined as the rapid response to a given
stimulus, while comprehension is the long-term notion of
remembering and sense-making.
McCrickard et al then plot the design space as a 3-tuple of
interruption, reaction, and comprehension (IRC). Each dimension is
assigned a rating of high (1) or low (0), creating models like 0-1-0.
They label these models with meaningful names like "Ambient
Media, 0-0-1" "Indicator, 0-1-0" and "Critical Activity Monitor,
1-1-1." Eight models serve as the corners of a design space. The
resulting space, it should be noted, is larger than the design space
of ambient information systems as we discuss in this paper
because it contains games, secondary displays, and critical activity
monitors (which by our definition, are notification systems that are
not also peripheral systems). McCrickard also classifies a set of 14
extant systems in the design space on the three dimensions.
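McCrickard et al.'s eight corner models can be written compactly as
3-tuples. The short sketch below is our own Java encoding, not taken
from their work; it simply shows how an interruption-reaction-
comprehension profile maps to the named models.

record IRC(int interruption, int reaction, int comprehension) {
    String label() {
        if (equals(new IRC(0, 0, 1))) return "Ambient Media";
        if (equals(new IRC(0, 1, 0))) return "Indicator";
        if (equals(new IRC(1, 1, 1))) return "Critical Activity Monitor";
        return "another corner of the design space";
    }
}

public class IrcSketch {
    public static void main(String[] args) {
        System.out.println(new IRC(0, 0, 1).label()); // prints: Ambient Media
    }
}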
Both of these taxonomies deal thoroughly with interruption
and detail some of the criteria for categorizing systems along this
design dimension. We extend this analysis to other dimensions of
data representation, flexibility, and aesthetics. This more holistic
view points out design trade-offs between aesthetic emphasis and
flexibility, and between a system's information display style
and display capacity.
Mankoff et al proposed a set of heuristics for evaluating
ambient systems [20], which may also assist system builders. The
heuristics attempt to give guidance for the formative evaluation of
ambient systems, but they also can be viewed as high-level design
guidelines, such as "The display should be designed to give `just
enough' information. Too much information cramps the display,
and too little makes the display less useful."
DESIGN DIMENSIONS OF AMBIENT SYSTEMS
Designers of ambient information systems make decisions
about how much information to display, what specific aspects to
depict, and how exactly to display it, transparently or abstractly,
on a monitor or via a decorative sculpture. We present four design
dimensions that capture the space of ambient information systems.
The dimensions can be thought of as design choices or design
questions that system builders must answer. The dimensions are:
information capacity
notification level
representational fidelity
aesthetic emphasis
We rank 19 research systems and three consumer ambient
information systems on each of the four axes. Each axis is divided
into 5 bands, from low to high. We place systems into groups
based on information from published conference and journal
proceedings, including images and videos of systems in use if
available. The 19 systems we chose are not intended to be an
exhaustive list of all ambient information systems in the research
literature. The 19 systems are representative of the breadth of the
field and we feel that attempting an exhaustive list, while
amplifying completeness, would not significantly alter the design
dimensions.
Research systems that we analyzed include: Bus Mobile [24],
Dangling String [35], Digital Family Portrait [26], InfoCanvas
[33], Informative Art [30], Information Percolator [16], Irwin [22],
Kandinsky [11], Kimura [19], Lumitouch [5], Notification Collage
[14], Scope [34], Sideshow [7], Table Fountain [12], Water Lamp
[8], and What's Happening [36]. We include three consumer
systems that fit our definition of ambient information systems,
Ambient Devices Ambient Orb [2], the My Yahoo! web portal
[27] and Apple's Dashboard [3].
Figure 1 shows the four dimensions for our analysis, and
each of the 19 systems placed into a group along each. Thin
colored lines trace the rankings of systems on each axis, similar to
a parallel coordinates plot. Each axis has values that range from
low to high through five grades. The dimensions of notification
level and representational fidelity have more descriptive axis
labels that will be explained in detail below.
4.1 Information Capacity
Ambient information systems are created to convey
information to users--information that typically is important to a
user's sense of wellbeing and general awareness, but not critical to
their work or personal life. Information capacity represents the
number of discrete information sources that a system can
represent. Some systems are capable of displaying a single piece
of data such as the current price of a stock index. Others can
display the value of 20 (or more) different information elements
on one screen. We rank systems from "Low" to "High" on this
design dimension.
Information elements are discrete information "nuggets". For
example, if a system monitors campus shuttle buses, each bus is a
single nugget. If the system can represent both the time to a
location and a direction of travel, then there are two nuggets of
information for each bus that is monitored.
Information capacity makes visible the design trade-off
between space and time. A designer can increase the information
capacity of a display by increasing the space for information to be
presented or by creating a display that transitions through a set of
views over time. If a system is designed with multiple views or
uses scrolling, we rank it in the top tier, since the number of pieces
of information that it could display is arbitrarily large.
A further caveat about information capacity is necessary.
Some of the analyzed systems such as InfoCanvas, Sideshow, and
Dashboard are user-configured and user-customizable. This means
that these and other systems could potentially be made to display
hundreds of elements. Instead of attempting to calculate a
theoretical maximum throughput for the display in these cases, we
use the system designer's naturalistic portrayal in their published
work to determine the "everyday maximum." Each of these
systems is also in the top tier of information capacity.
The design dimension of information capacity has a barbell
distribution. Five of the 19 systems display a single information
element and are ranked "Low". Conversely, there are eight
systems that display from ten to 20 information elements, with
some systems having the potential to display more and these are
ranked "High." Only a few systems take a middle-ground
approach, attempting to display a small number (from two to ten)
of information elements.
The systems with low ratings on the attribute of information
conveyance are those that are physical displays. Fountains,
glowing lights, and office-decoration sculptures afford designers
only so much flexibility for changes.
Figure 1: Parallel Coordinate plot of 19 existing ambient information systems across four design dimensions. Colored lines trace
each system's ranking along the design dimensions. Different colors are used to denote groups of systems which are similar as
explained more fully in Section 5.
Since the number of changes possible is small, the total number
of information nuggets that can be represented is
correspondingly small. The systems with high information
conveyance are those that are presented on LCD screens. The
systems that run at full screen (instead of as a small section of a
focused main monitor) are ranked the highest.
4.2 Notification Level
Notification level is the degree to which system alerts are
meant to interrupt a user. Notification level is a design attribute
that is present in the two taxonomies of ambient and peripheral
information systems we reviewed earlier. Matthews et al
subdivides notification level into five categories: ignore, change
blind, make aware, interrupt, and demand attention. For our
analysis we adopt those categories but replace the lowest level
of system alert function, ignore (a degenerate case) with user
poll. Systems such as Apple Dashboard and My Yahoo! do not
always appear in a user's environment and must be explicitly
called to the fore.
Notification level can be thought of as the "ambience" of
the systems in question. Some systems in the ambient space are
quiet, and afford opportunistic glances to the information, while
others provide more strident alerts by blinking, flashing,
beeping, or even opening dialog windows. Systems that provide
unobtrusive change blind or make aware notifications to the user
are at the core of the ambient information system design space.
Systems that interrupt users with alarms or that demand
attention (by launching system dialog windows) are not subtle,
so are further from the core concept of ambient information
systems, though, as Matthews et al argues, the smooth transition
from more subtle to more jarring is an interesting design
direction for ambient system designers.
Notification level is the designer-intended level of alert.
We do not take pains to distinguish between systems that are
proven to be "change blind" through user experimentation
versus those that merely claim change blindness. We remain
agnostic here about the techniques used for ensuring subtlety
including slow animation, scrolling, and fading (these
implementation details are at a lower level of design rationale).
Once the decision has been made to produce a system with
change blind transitions, the designer must then produce system
transitions that meet the goal in the specifics of the system. Our
analysis focuses on the high level decision on the part of the
designer or design team.
The distribution of systems here shows a good fit to our
definition of ambient information systems. It is apparent that
most ambient information systems adhere to the central notion
of subtle visual or representational changes. The vast majority of
ambient information systems fall into the change blind and make
aware transition categories (somewhat low and medium). Few
systems are designed to interrupt users or demand attention.
Two that do however are Scope and Sideshow. Note that most
systems that are physical displays do not have make-aware or
interruption-level alerts, much less demand attention alerts. The
Bus Mobile does enable make-aware transitions, when, for
example, the last bus of the day approaches.
4.3 Representational Fidelity
Representational fidelity describes a system's display
components and how the data from the world is encoded into
patterns, pictures, words, or sounds. Some systems reproduce
the information being monitored in a very direct way, while
others are much more abstract in their representation. Matthews
et al's taxonomy characterizes this design choice as abstraction,
but only distinguishes two sub-types, feature degradation and
feature abstraction. We consider this design dimension to be rich
and complex, so we will try to tease apart the many different
types of abstraction that appear in ambient information systems.
Representational fidelity can be described in the language
of Semiotics, the branch of Philosophy that deals with signs, sign
systems (such as natural languages) and their meanings. As such
it has an accepted vocabulary for the elements of a symbolic
representation. Semiotics can help analyze the way that
particular signifiers--words, pictures, sounds, and other
things--stand for the things they represent.
A semiotic sign is made up of three parts [28]. The object
is called the signified; it is the physical thing or idea that the
sign stands for. The signifier is the representation of the object,
which could be a word, a picture, or a sound. The sense is the
understanding that an observer gets from seeing or experiencing
either the signified or its signifier. The signifier and the signified
need not have any direct relationship. However, both the
signified and the signifier create the same sense in the head of an
observer; seeing a log aflame and seeing the word "fire" create
the same meaning for a person.
Ambient information systems, in the vocabulary of
semiotics, contain one or more signs. Each sign has its object,
information in the world, and its representation, the lights,
pictures, or sounds used to signify that information. Many
ambient information systems contain multiple signs--each
picture element standing for a different piece of information.
The theory of Semiotics also helps to explain the notion
that some signs are transparent, easily understood, while others
are metaphorical and still others are abstract. Signs can be
symbolic, iconic, or indexical. Symbolic signs are those that are
completely arbitrary. For example languages are arbitrary, for
the word "bachelor" has no more natural relation to an
unmarried man than does the word "foobar." Symbolic signs
are those signs for which a code, or rule-following convention,
is required to understand. Language characters and numbers are
all symbolic, as are abstract visual representations (the color red
standing for "danger"). Iconic signs are those signs that have an
intermediate degree of transparency to the signified object.
Iconic signs include metaphors as well as doodles, drawings,
and caricatures. Icons represent their objects by having some
similarity or resemblance to the object or to essential aspects
of the object. Indexical signs are those that are directly
connected to the signified. Examples include measuring
instruments, maps, and photographs.
We have subdivided the three main categories of
representational fidelity to distinguish between ambient
information systems. We propose five groups, ranked from
indexical (high) to symbolic (low):
INDEXICAL: measuring instruments, maps,
photographs
ICONIC: drawings, doodles, caricatures
ICONIC: Metaphors
SYMBOLIC: language symbols (letters and numbers)
SYMBOLIC: abstract symbols
Some ambient information systems have displays that do
not afford representational flexibility, because of the constraints
of the display. For example, the LiveWire system and the
Ambient Orb cannot represent language symbols, nor can they
convey indexical forms like photographs. However, some
flexibility is present. The systems might map information in an
arbitrary way, remaining fully abstract (representing stock
increases with the color green and losses with the color red), or
it could map information more metaphorically, as would be the
case if LiveWire were connected to information from a
seismograph or ocean tides. As one can see, the question
concerning representational flexibility requires one to consider
both the display and the information that is displayed.
The InfoCanvas is a very flexible system when considering
representational fidelity. The InfoCanvas uses all five types of
representational fidelity. It uses abstract symbols, such as the
color red standing for traffic being stopped, metaphors, like a
cartoon drawing of a cloud representing cloudy conditions, and
also photographs and words of news stories, which are fully
indexical. We show this ability for a system to straddle multiple
representational forms by duplicating the system in each
category and noting them with an asterisk (see Figure 1).
Systems which are designed to represent information at multiple
levels of fidelity are: Apple's Dashboard, InfoCanvas,
Informative Art, Notification Collage, Sideshow, and What's
Happening. In these cases, we draw the parallel coordinate plot
to the top-most tier of representational fidelity for each system.
The majority of systems however, only afford a single level
of representational fidelity. Many of the sculptural displays only
afford symbolic, that is abstract, representations, while a smaller
number afford text and photographic representations.
4.4 Aesthetic Emphasis
The final dimension concerns the relative importance of the
aesthetics of the display. Some system designers seek to build
displays and artifacts with sculptural or artistic conventions. For
these systems, being visually pleasing is a primary objective.
Others however place relatively little focus on aesthetics and
typically focus more on information communication ability.
Since aesthetic judgment is at its core a subjective phenomenon,
we do not judge systems on their relative artistic merits. Instead
we attempt to rank ambient information systems by our
perception of the importance given to aesthetics. There is often a
tradeoff made between communication capacity, representational
fidelity, and aesthetics, a relationship that we explore in this
section.
Ambient information systems are intended to be visible;
positioned on a shelf, hung on the wall, or placed as a small
sculpture on a desk, the systems are seen not just by a user, but
also by co-workers, colleagues, or family members. There are a
multitude of approaches when it comes to building aesthetically
pleasing devices. One approach is to build systems that mirror
existing artworks by a particular artist, as is the case in
Kandinsky and Informative Art. A second approach is to design
a display that is representative of a particular style or art
movement. InfoCanvas, through its use of themes, allows the
display to take on characteristics of Asian water-color paintings,
for example.
We rank systems on the design dimension of aesthetic
emphasis as low, somewhat low, medium, somewhat high and
high. Note again that we are not assessing the degree to which
the systems are successful as art. We are providing a subjective
measure of how much the system designers focused on
aesthetics and how much they emphasized aesthetic
considerations in their research and design decisions.
Most systems that we analyzed had medium or somewhat
high degrees of aesthetic emphasis (12 of 19). The decision of
designers to strive for visually pleasing displays is most clear in
the cases where the display is intended to leverage the work of
existing artists. The physical ambient information displays are
often sculptural in their design decisions. They attempt to set
themselves off from the rest of the environment, often on
pedestals or stands. Their capability to display much information
(information capacity) is often limited by their design clarity and
austerity. We consider this design trade-off in the next section.
Systems that we ranked at the middle of the spectrum of
aesthetic emphasis are those which are not intended by their
designers to be art worthy of contemplation as art objects. But
they are explicitly intended to be viewed as calm pleasing
objects and displays. Apple's Dashboard widgets have a clean
design sense about them, as does Kimura, What's Happening
and the Information Percolator. The systems that are ranked low
on aesthetic emphasis are Scope, Sideshow, Bus Mobile, Elvin,
and My Yahoo!. These systems put information conveyance at a
higher priority than being aesthetically pleasing. They are still
calm and environmentally appropriate, but their designers did
not emphasize their aesthetic qualities. Cleary, some systems
that are early-stage prototypes like Bus Mobile, may not have
the aesthetic polish of more finished systems.
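Taken together, the four dimensions can be treated as a small
machine-readable profile, which is convenient when comparing systems
or positioning a new design. The Java sketch below encodes the
five-band scales used in Figure 1; the enum names and the example
profile for a hypothetical screen-based consolidator are illustrative
choices of ours, not rankings taken verbatim from the analysis.

enum Band { LOW, SOMEWHAT_LOW, MEDIUM, SOMEWHAT_HIGH, HIGH }

enum NotificationLevel { USER_POLL, CHANGE_BLIND, MAKE_AWARE, INTERRUPT, DEMAND_ATTENTION }

enum Fidelity { SYMBOLIC_ABSTRACT, SYMBOLIC_LANGUAGE, ICONIC_METAPHOR, ICONIC_DRAWING, INDEXICAL }

record SystemProfile(String name,
                     Band informationCapacity,
                     NotificationLevel notificationLevel,
                     Fidelity representationalFidelity,
                     Band aestheticEmphasis) {}

public class TaxonomySketch {
    public static void main(String[] args) {
        SystemProfile example = new SystemProfile(
                "hypothetical screen-based consolidator",
                Band.HIGH,                    // many information nuggets
                NotificationLevel.MAKE_AWARE, // subtle alerts on change
                Fidelity.ICONIC_METAPHOR,     // metaphorical mapping
                Band.SOMEWHAT_HIGH);          // design-conscious, not art-first
        System.out.println(example);
    }
}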
FOUR DESIGN PATTERNS
In this section, we introduce four design patterns for
ambient information systems, after Alexander's pattern language
for architectural studies [1]. The design patterns illustrate four
coherent combinations of the four design dimensions previously
presented. We have already pointed out trends and clusters that
are present in each particular design dimension. However, there
are fruitful conclusions for system designers as we consider the
interaction between the design dimensions to form design
patterns.
Considering the clusters of systems in each dimension and
the correspondences that are visible in the parallel coordinate
plot, we find four main archetypes in existing ambient
information system design: Symbolic Sculptural Display,
Multiple-Information Consolidators, Information Monitor
Display, and High Throughput Textual Display. Figure 2
shows the pattern of each archetype across the dimensions.
Figure 2: a-d System design archetypes shown in the context
of the design space. Heavy boxes indicate core design
decisions, while light boxes show alternate choices.
Symbolic Sculptural Displays are ambient information systems
that display very few pieces of information, usually a single
element. They represent information in an abstract sculptural
way with light, water, or moving objects. They are intended to
be decorative objects for a home or office setting and as such are
highly aesthetic in their design (see Figure 2a). This design
pattern is a core of ambient system design, and accounts for seven
of our analyzed systems: Ambient Orb, Dangling String, Digital
Family Portrait, Information Percolator, Lumitouch, Table
Fountain, and Water Lamp. The Digital Family Portrait
combines multiple information sources and so truly represents
more information than the other members of this type.
Multiple Information Consolidators are ambient systems that
display many individual pieces of information in a consolidated
manner. They are typically screen-based in order to convey
much information and make users aware of changes to that
information (usually by blinking the visual representation of a
certain element). They are reasonably aesthetically motivated,
but all clearly demonstrate the trade-off between aesthetics and
customization and information capacity (see Figure 2b). Systems
which illustrate this design pattern are: Kandinsky, Kimura,
InfoCanvas, Notification Collage, and What's Happening.
Kandinsky departs from the other systems in that it is explicitly
modeled on the fine art of Kandinsky, and as such is highly
stylized and design-focused. It does so at the expense of
flexibility, since it can only display photographs in its slots.
Information Monitor Displays are displays that are a
peripheral part of a user's computer desktop. As such, they
afford different interactions and design choices. They display
multiple sources of information, and do so usually by visual
metaphors. They are capable of notifying users in multiple ways
about changes in the source data, including subtle awareness,
interrupting, and even demanding user attention when necessary
(i.e., requiring the user to switch focus to dismiss a notification).
The systems achieve aesthetics, but their primary purpose is not
good looks (see Figure 2c). Examples of this design archetype
include: Scope, and Sideshow.
High Throughput Textual Display systems are those that use
text and very simple graphics (icons) to denote information.
They are capable of representing voluminous information, but
do not draw attention with interruption-level notifications. These
systems are not primarily as concerned with aesthetics as they
are with information conveyance (see Figure 2d). These systems
are simple but efficient for certain types of tasks. Examples of
this design archetype are: Elvin, and My Yahoo!.
The four design archetypes cover nearly all of the analyzed
systems, but do not cleanly categorize three systems. Apple's
Dashboard system is most similar to a Multiple Information
Consolidator. It fails to be a pure example of this archetype
because of its inability to alert users to changes in information:
it requires users to poll the system by calling up the transparent
pane via a hot key. The Bus Mobile is an early stage prototype,
and as such is not concerned with aesthetics to a large degree.
With a higher degree of aesthetic emphasis, it might be closer to
an Information Monitor Display (albeit a physical instead of
screen-based system). Informative Art is quite unlike the four
design archetypes. Informative Art has high aesthetic emphasis,
but low information capacity (e.g. weather forecast information for
five or six cities). It is metaphorical and abstract in its information
mapping fidelity.
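To summarize the archetypes in the same spirit as the IRC tuples
discussed earlier, the following sketch lists each pattern with an
approximate band per dimension. The band labels paraphrase the
descriptions above and are our shorthand, not exact values read off
Figure 2.

import java.util.Map;

public class PatternSummary {
    public static void main(String[] args) {
        Map<String, Map<String, String>> archetypes = Map.of(
            "Symbolic Sculptural Display", Map.of(
                "capacity", "low", "notification", "change blind",
                "fidelity", "symbolic/abstract", "aesthetics", "high"),
            "Multiple Information Consolidator", Map.of(
                "capacity", "high", "notification", "make aware",
                "fidelity", "iconic to indexical", "aesthetics", "medium-high"),
            "Information Monitor Display", Map.of(
                "capacity", "medium-high", "notification", "make aware to demand attention",
                "fidelity", "iconic (metaphor)", "aesthetics", "medium"),
            "High Throughput Textual Display", Map.of(
                "capacity", "high", "notification", "change blind",
                "fidelity", "symbolic (text)", "aesthetics", "low"));
        archetypes.forEach((name, dims) -> System.out.println(name + ": " + dims));
    }
}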
EXTENDING THE PATTERNS
The four patterns for system design can help designers to
make appropriate choices as they develop new ambient
information systems. The design patterns can be used as models
so a designer can decide to build "an information monitor
display for a home health awareness application", or "a set of
symbolic sculptural displays for work-group collaboration".
Further, the designer may depart from the pattern, by building
up a system's range of possible notification levels, or by
choosing to trade aesthetics for increased information capacity.
However, our analysis also points at what has not yet been
explored. The four design patterns show four coherent
combinations, but they are not the only possibilities for building
useful ambient systems. Combined with longer-term trends in
the fields of Ambient Intelligence and Ubiquitous Computing,
new archetypes for system design are emerging. We note
possibilities here, which change both the dimensions and the
four design patterns.
We do not expect the information capacity for ambient
systems to increase dramatically. Though scrolling or time-divided
ambient systems (What's Happening, Elvin) can already
display data elements numbering in the hundreds, simultaneous
visual displays are usually limited to 25 or 30 elements by
readability and user learnability. Ambient information systems
will not turn into information visualization systems showing
thousands of data points. However, contextual sets of
information may be useful for ambient systems in specialized
environments. Systems which display contextual sets of
information like that of the Bus Mobile (all of the buses on a
college campus) or Scope (email and calendar data) would
increase the number of systems in the middle portion of this
design dimension.
We also expect to see changes to the design dimension of
representational flexibility. Designers have begun to explore the
affordances of abstract and symbolic mappings between
information sources and their representations. We see this
continuing, with new systems focusing on personally relevant
symbolic representations, and utilizing metaphors from the
natural and built worlds. Another shift that we foresee is
designers creating systems where multiple information sources
and aspects interact to affect a single part of the representation.
This is apparent already in Digital Family Portrait where the size
of the butterflies represents "activity," even though activity is
not the reading from a single sensor, but is instead a reading
from multiple sensors in a home. Informative Art also has
aspects of this approach, changing both the color and
dimensions of squares based on two different aspects of weather.
As regards aesthetic emphasis, we foresee a more radical
change. We predict further exploration of the space of truly
artistically motivated ambient information systems. These
generative artworks use information from the world to drive
their behavior and ask (and answer) art questions as well as
technology questions. Though most of these works are outside
the academy (they are shown in galleries instead of computer
science conferences), Bohlen and Mateas' Office Plant #1 [4] is a
sculpture that characterizes the mood of a user's email stream
and conveys it via transformations of a robotic plant. These
systems are going to create a new design space above the top tier
that we depict in this work.
CONCLUSIONS
In this work we synthesize a definition that distinguishes
research in ambient information systems from that of
notification systems and peripheral displays. We propose four
design dimensions, rank systems to show clusters, and uncover
four design patterns on which system developers may model
their system designs. Future work will expand the four
dimensions to include aspects of the social interaction and
impact that systems have on the behavior of individuals and
groups.
In this work we point toward open areas in the design
space, and we point to new design directions that may fill these
gaps. Future work may also turn this taxonomy into an
evaluation framework for ambient information systems.
REFERENCES
1. Alexander, C. A Pattern Language: Towns, Buildings, Construction. Oxford University Press, 1977.
2. Ambient Orb. http://www.ambientdevices.com/
3. Apple Mac OS X Dashboard. http://www.apple.com/macosx/features/dashboard/index.htm
4. Bohlen, M., and Mateas, M. Office Plant #1. Leonardo 31:5, pp. 345-349.
5. Chang, A., Resner, B., Koerner, B., Wang, X., and Ishii, H. Lumitouch: An emotional communication device. Extended Abstracts of CHI 2001, pp. 371-372.
6. Cadiz, J., Fussell, S., Kraut, R., Lerch, J., and Scherlis, W. The Awareness Monitor: A Coordination Tool for Asynchronous, Distributed Work Teams. Unpublished manuscript. Demonstrated at CSCW 1998.
7. Cadiz, J., Venolia, G., Janke, G., and Gupta, A. Designing and deploying an information awareness interface. Proceedings of CSCW 2002, pp. 314-323.
8. Dahley, A., Wisneski, C., and Ishii, H. Water lamp and pinwheels: Ambient projection of digital information into architectural space. CHI Conference Summary 1998, pp. 269-270.
9. Espinosa, A., Cadiz, J., Rico-Gutierrez, L., Kraut, R., Sherlis, W., and Lautenbacher, G. Coming to the Wrong Decision Quickly: Why Awareness Tools Must be Matched with Appropriate Tasks. Proceedings of CHI 2000, pp. 392-399.
10. Fitzpatrick, G., Kaplan, S., Arnold, D., Phelps, T., and Segall, B. Augmenting the Workaday World with Elvin. Proceedings of ECSCW 1999, pp. 431-450.
11. Fogarty, J., Forlizzi, J., and Hudson, S. Aesthetic Information Collages: Generating Decorative Displays that Contain Information. Proceedings of UIST 2001, pp. 141-150.
12. Gellersen, H.-W., Schmidt, A., and Beigl, M. Ambient Media for Peripheral Information Display. Personal Technologies 3, 4: 199-208. 1999.
13. Greenberg, S., and Fitchett, C. Phidgets: Easy development of physical interfaces through physical widgets. Proceedings of UIST 2001, pp. 209-218.
14. Greenberg, S., and Rounding, M. The Notification Collage: Posting Information to Public and Personal Displays. Proceedings of CHI 2001, pp. 515-521.
15. De Guzman, E., Yau, M., Park, A., and Gagliano, A. Exploring the Design and Use of Peripheral Displays of Awareness Information. Extended Abstracts of CHI 2004, pp. 1247-1250.
16. Heiner, J. M., Hudson, S., and Kenichiro, T. The Information Percolator: Ambient information display in a decorative object. Proceedings of UIST 1999, pp. 141-148.
17. Ishii, H., Wisneski, C., Brave, S., Dahley, A., Gorbet, M., Ullmer, B., and Yarin, P. AmbientROOM: Integrating Ambient Media with Architectural Space. Summary of CHI 1998, pp. 173-174.
18. Ishii, H., Ren, S., and Frei, P. Pinwheels: visualizing information flow in an architectural space. Extended Abstracts of CHI 2001, pp. 111-112.
19. MacIntyre, B., Mynatt, E., Voida, S., Hansen, K., Tullio, J., and Corso, G. Support For Multitasking and Background Awareness Using Interactive Peripheral Displays. Proceedings of UIST 2001, pp. 41-50.
20. Mankoff, J., Dey, A., Heish, G., Kientz, J., Lederer, S., and Ames, M. Heuristic evaluation of ambient displays. Proceedings of CHI 2003, pp. 169-176.
21. McCrickard, D. S., Chewar, C., Somervell, J., and Ndiwalana, A. A Model for Notification Systems Evaluation--Assessing User Goals for Multitasking Activity. ACM Transactions on CHI 10, 4: 312-338. 2002.
22. McCrickard, D.S., Catrambone, R., and Stasko, J. Evaluating animation in the periphery as a mechanism for maintaining awareness. Proceedings of INTERACT 2001, pp. 148-156.
23. Matthews, T., Dey, A., Mankoff, J., Carter, S., and Rattenbury, T. A Toolkit for Managing User Attention in Peripheral Displays. Proceedings of UIST 2004, pp. 247-256.
24. Matthews, T., Rattenbury, T., Carter, S., Dey, A., and Mankoff, J. A Peripheral Display Toolkit. Tech Report IRB-TR-03-018. Intel Research Berkeley. 2002.
25. Mynatt, E.D., Back, M., Want, R., and Ellis, J.B. Designing audio aura. Proceedings of CHI 1998, pp. 566-573.
26. Mynatt, E.D., Rowan, J., Jacobs, A., and Craighill, S. Digital Family Portraits: Supporting Peace of Mind for Extended Family Members. Proceedings of CHI 2001, pp. 333-340.
27. My Yahoo!. http://my.yahoo.com/index.html
28. Ogden, C., and Richards, I. The Meaning of Meaning. Routledge & Kegan. London, England. 1923.
29. Pederson, E. R., and Sokoler, T. AROMA: Abstract Representation of Presence Supporting Mutual Awareness. Proceedings of CHI 1997, pp. 51-58.
30. Redstrom, J., Skog, T., and Hallnas, L. Informative Art: Using Amplified Artworks as Information Displays. Proceedings of DARE 2000, pp. 103-114.
31. Russel, D., Streitz, N., and Winograd, T. Building Disappearing Computers. Communications of the ACM 48(3): 42-48. 2005.
32. Stasko, J., Miller, T., Pousman, Z., Plaue, C., and Ullah, O. Personalized Peripheral Information Awareness through Information Art. Proceedings of UbiComp 2004, pp. 18-35.
33. Streitz, N., Tandler, P., Muller-Tomfelde, C., and Konomi, S. Roomware: Towards the Next Generation of Human-Computer Interaction based on an Integrated Design of Real and Virtual Worlds. In: J. Carroll (Ed.): Human-Computer Interaction in the New Millennium, Addison-Wesley, pp. 553-578. 2001.
34. Van Dantzich, M., Robbins, D., Horvitz, E., and Czerwinski, M. Scope: Providing Awareness of Multiple Notifications at a Glance. Proceedings of AVI 2002, pp. 157-166.
35. Weiser, M., and Brown, J.S. Designing Calm Technology. PowerGrid Journal, 1:1, 1996.
36. Zhao, A., and Stasko, J. What's Happening?: Promoting Community Awareness through Opportunistic, Peripheral Interfaces. Proceedings of AVI 2002, pp. 69-74.
| Peripheral Display;four main design patterns;calm computing;symbolic Sculptural display;high throughput textual display;Notification System;information monitor display;ambient information system;Taxonomy;framework to understand design attributes;user interface;notification systems and peripheral displays;Design Guidelines;multiple-information consolidators;Ambient Display;definition and characteristics of ambient systems;Ubiquitous Computing |
210 | Using Web Helper Agent Profiles in Query Generation | ABSTRACT Personalized information agents can help overcome some of the limitations of communal Web information sources such as portals and search engines. Two important components of these agents are: user profiles and information filtering or gathering services. Ideally, these components can be separated so that a single user profile can be leveraged for a variety of information services. Toward that end, we are building an information agent called SurfAgent; in previous studies, we have developed and tested methods for automatically learning a user profile [20]. In this paper, we evaluate alternative methods for recommending new documents to a user by generating queries from the user profile and submitting them to a popular search engine. Our study focuses on three questions: How do different algorithms for query generation perform relative to each other? Is positive relevance feedback adequate to support the task? Can a user profile be learned independent of the service? We found that three algorithms appear to excel and that using only positive feedback does degrade the results somewhat. We conclude with the results of a pilot user study for assessing interaction of the profile and the query generation mechanisms. | INTRODUCTION
The Web has become an indispensable source of information
for many people. Based on surveys of the most popular
Web sites [14], users deal with the overwhelming amount and
constantly updating nature of the information by routinely
visiting hub sites (e.g., Netscape, Yahoo, CNN) and making
copious use of search engines (e.g., AltaVista, Excite, Magellan
). Users have derived tremendous leverage from shared
information resources such as those just mentioned. Hubs or
portals provide communally useful information about perennial
(e.g., financial management, child rearing) and timely
(e.g., September 11 events, stock quote) topics. Search engines
satisfy specific, spontaneous information needs.
As our appetite for information increases, so does its availability
on the Web. Studies (e.g., [21, 12]) have identified
limitations with these tools for satisfying users' needs; for
example, users appear to lack the motivation to learn how to
formulate complex queries or to view long lists of potential
matches. Meta-search engines, such as Meta-Crawler [18],
SavvySearch [6], and NECI [11], propose to overcome the
Web coverage problem by combining the indexing power of
multiple stand-alone search engines. However, because they
leverage the capabilities of many search engines, they tend
to generalize the search task: limiting the access to
search-engine-specific advanced search capabilities and, perhaps,
introducing even more noise into the return results.
One promising approach to compensating for the limitations
is to personalize the tools. Pretschner and Gauch
divide personalization into two types: personalized access
to resources and filtering/ranking [15].
For example, My
Yahoo (http://my.yahoo.com) provides personalized access
by allowing users to design their own Yahoo page with pertinent
information;many search and meta-search engines
support some customization (e.g., types of search, return
amount, and search engine selection). "Softbot"s or information
agents augment searching and information gathering
(filtering/ranking). Personalized information agents, such
as Letizia [13], WebWatcher [1, 10], and WebMate [5], can
provide a range of services from automatically retrieving
Web pages to assisting in formulating queries.
These agents generally comply with the architecture presented
in Figure 1. The agent intercedes between the user
and their Web access, monitoring the user's activities to
construct a model of user requests (the profile) to be used
for specific information services (e.g., modifying requests
and/or filtering results). In our research, we adopt the principle
that user overhead needs to be minimized:
- The profile should be learned by asking the user to simply click
  on a feedback button positioned on the bottom of each page to
  indicate interest.
- Learning should track changes in user interests.
- The profile should support multiple information services.
In previous papers, we have assessed some alternative approaches
to learning user profiles [20, 19]. In this paper,
we examine alternative approaches to one of the services:
query generation to find new documents (i.e., automatically
retrieving Web pages that match a user's interests by submitting
queries to a Web search engine). In particular, we
are interested in answering the following questions:
1. How do different algorithms for query generation perform
relative to each other? For our case, query generation
involves constructing queries from a user profile
that are submitted to a search engine for the purpose
of harvesting documents.
2. Is positive relevance feedback adequate to support the
task?
To minimize user overhead, we have solicited
only positive relevance feedback. Obviously, this provides
relatively weak knowledge, requiring the profiling
mechanism to self-organize the categories of interest
and to trade-off precision.
3. Can a user profile be learned independent of the service
? If so, then user overhead can be minimized and
multiple services provided as part of the same agent.
This paper describes a framework for evaluating alternative
algorithms for information gathering agents and a study that
was designed to address the three questions above. In summary
, we found: Three algorithms perform best in terms of
extracting sufficient numbers of documents from the search
engine and in terms of the relevance of the links returned.
We did find evidence that soliciting only positive feedback
hampers query generation; however, it is not clear that the
degradation in performance is worth the cost of obtaining
the negative feedback. As often happens, the study raised
some issues that are still to be resolved (particularly about
the evaluation criteria and the interaction of profiling and
query generation); we conclude with a pilot study in which
we investigate how to resolve these issues.
SURFAGENT
SurfAgent [19] is a personalized Web information agent,
which follows the basic architecture in Figure 1. It is designed
as a testbed for expediting plug-and-play and evaluation
of alternative algorithms, front-ends, and service tasks.
Its two primary components are the user profile and the
module which generates requests for document streams. Monitoring
should be simple and unobtrusive. Filtering depends
on the representation and construction of the user profile,
forcing a relatively tight coupling of those two components.
This section provides an overview of its user profiling and
document stream generation.
2.1
Building User Profiles and Filtering
The user profile maintained by a Web helper agent is a
model of what documents the user finds relevant. Like most
other personal Web helper agents, SurfAgent uses TF-IDF
vectors [17] as the basis of its user profile representation.
One such vector is used to represent each of the several different
topics of interest associated with each user.
Over time, topic descriptions are learned from user-supplied
examples of relevant documents, which are added to the existing
TF-IDF vectors in what is known as relevance feedback
[16]. Associated with each vector is a dissemination threshold
, which is used when new documents are filtered by the
agent: if the similarity between the new document's vector
and a vector in the profile exceeds the associated dissemination
threshold, the document is considered relevant and
shown to the user. We found that learning the appropriate
dissemination threshold was critical to filtering performance
and that one could be learned with relatively little feedback
(i.e., 10 relevance judgments) [19].
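To make the filtering step concrete, the following sketch compares a new document's term-weight vector against a topic vector and disseminates the document only when the cosine similarity exceeds the dissemination threshold. The toy weights and the threshold value are illustrative assumptions, not SurfAgent's actual data structures.

import math
from collections import Counter

def cosine_similarity(vec_a, vec_b):
    # Sparse dot product over shared terms, normalized by vector lengths.
    dot = sum(w * vec_b.get(t, 0.0) for t, w in vec_a.items())
    norm_a = math.sqrt(sum(w * w for w in vec_a.values()))
    norm_b = math.sqrt(sum(w * w for w in vec_b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

def filter_document(doc_vector, topic_vector, dissemination_threshold):
    # Show the document to the user only if it is similar enough to the topic.
    return cosine_similarity(doc_vector, topic_vector) >= dissemination_threshold

# Toy example with hypothetical term weights.
topic = Counter({"airport": 0.9, "security": 0.8, "screening": 0.4})
doc = Counter({"airport": 0.7, "security": 0.5, "flight": 0.3})
print(filter_document(doc, topic, dissemination_threshold=0.35))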
TF-IDF vectors and their associated dissemination thresholds
are known in the Information Retrieval (IR) literature
as filtering queries. This type of query is distinguished from
a typical retrieval query (used with search engines or at a
library) by a few characteristics. Filtering queries tend to
be used repeatedly over a long period of time, during which
they can be improved and maintained through learning and
relevance feedback, whereas retrieval queries are typically
used only once. Also, filtering queries typically contain lots
of terms with complex weighting schemes, whereas retrieval
queries tend to be a boolean combination of relatively few
terms, with no weighting at all.
Each filtering query in SurfAgent's user profile corresponds
to a distinct topic of interest. User profiles are learned in one
of two ways. First, relevant documents provided as training
by the user can be explicitly associated with the topic
of interest. Alternatively, to minimize overhead to the user,
incremental clustering can be used by the agent to automatically
classify relevant examples into internally learned topics
[20, 5]. In the latter situation, the user only needs to prompt
the agent when a relevant document has been encountered,
without having to associate it with a particular topic of interest
. To minimize user disruption, we request only positive
examples. We augmented existing IR clustering techniques
to accommodate Web needs (i.e., avoid storing the documents
themselves, require minimal user overhead and be
associated with a user). In our earlier study, we found that
a tuned version of the Doubling algorithm [4] achieved high
recall, without a great sacrifice on precision.
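A minimal sketch of positive-feedback-only incremental clustering in the spirit described above: each user-flagged relevant document is merged into its most similar topic vector, or starts a new topic when nothing is close enough. The merge rule and the new-topic threshold are simplifying assumptions and do not reproduce the tuned Doubling algorithm of [4].

import math

def _cos(a, b):
    # Cosine similarity between two sparse term-weight vectors (dicts).
    dot = sum(w * b.get(t, 0.0) for t, w in a.items())
    na = math.sqrt(sum(w * w for w in a.values()))
    nb = math.sqrt(sum(w * w for w in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def add_positive_example(doc_vector, topics, new_topic_threshold=0.2):
    """Assign a user-flagged relevant document to the closest topic cluster,
    or start a new cluster when nothing is similar enough."""
    best_idx, best_sim = None, 0.0
    for i, topic_vector in enumerate(topics):
        sim = _cos(doc_vector, topic_vector)
        if sim > best_sim:
            best_idx, best_sim = i, sim
    if best_idx is None or best_sim < new_topic_threshold:
        topics.append(dict(doc_vector))          # new topic of interest
    else:
        merged = topics[best_idx]                # relevance feedback: add vectors
        for term, weight in doc_vector.items():
            merged[term] = merged.get(term, 0.0) + weight
    return topics

# Example: two positive examples fall into one topic, a third starts another.
topics = []
topics = add_positive_example({"genome": 1.0, "dna": 0.8}, topics)
topics = add_positive_example({"dna": 0.9, "sequencing": 0.5}, topics)
topics = add_positive_example({"airport": 1.0, "security": 0.7}, topics)
print(len(topics))  # expected: 2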
2.2
Incoming Document Streams
Personal information agents use a wide range of techniques
to generate incoming streams.
Letizia pre-fetches
Web pages by exploring the links in the Web page currently
being viewed. Similarly, WebWatcher analyzes text
in and around links in the current page to predict relevant
links worth following. Fab builds a list of likely to be relevant
documents through best-first search; documents that
pass the filtering phase are then included in a list of recommended
links. Finally, WebMate filters articles found on a
list of well-known news sources in order to compile a personal
newspaper for its user.
Figure 2: Incoming document streams generated by querying a
search engine (user profile, extract query, search engine, filter
documents).
Our goals are to maximize the quality of the incoming
document stream generated for SurfAgent, while at the same
time minimizing effort. For this purpose, a promising technique
appears to be the construction of queries that are suitable
for a large-scale search engine such as Google [3]. Well-formulated
queries have the potential to off-load significant
portions of the filtering task to the search engine, which
should provide the agent with a document stream consisting
of more relevant documents.
In this paper, we explore several methods of generating
search engine queries from the user profile of a personal
Web helper agent. We wish to find both the method and
the query length that would optimize the relevance of the
documents returned by the search engine with respect to the
user profile. This process is illustrated in Figure 2.
TECHNIQUES FOR QUERY GENERATION
Filtering queries are not directly suitable for being submitted
to a search engine. They are complex models representing
a possibly large collection of documents;they contain a
large number of terms with associated weights, which would
overly restrict the range of documents a search engine might
return.
Query generation techniques have evolved from the more
general query refinement mechanism of relevance feedback
[16]. For instance, in [9, 7] search engine queries are extended
with features extracted from positive and negative
examples in order to bias them toward a more relevant sub-topic.
Several other researchers have been concerned with
extracting only a few highly representative terms from the
representation of a large document cluster. For WebACE
[2], the authors propose a mechanism for generating queries
from a cluster of documents based on the average word count
(term frequency, TF) of the terms contained in the documents
of the cluster, and the document frequency (DF),
i.e., the number of documents in the cluster that contain
a specified term. A set of k terms with the highest TF, and another
set of k terms with the highest DF are selected
from the cluster. The intersection of these two sets was submitted
to Yahoo search as a query, yielding a total of 2280
results, which were later refined to 372 by augmenting the
query with terms resulting from the difference between the
TF and DF sets.
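The TF/DF intersection idea behind the WebACE query generation can be sketched as follows; the tokenization and the value of k are assumptions for illustration, not the WebACE implementation.

from collections import Counter

def tf_df_query(cluster_docs, k=5):
    """Generate a query as the intersection of the top-k TF terms and
    the top-k DF terms of a cluster of (already tokenized) documents."""
    tf = Counter()
    df = Counter()
    for tokens in cluster_docs:
        tf.update(tokens)
        df.update(set(tokens))          # each doc counts a term at most once
    top_tf = {t for t, _ in tf.most_common(k)}
    top_df = {t for t, _ in df.most_common(k)}
    return top_tf & top_df

docs = [["airport", "security", "screening", "airport"],
        ["security", "measures", "airport"],
        ["baggage", "screening", "security"]]
print(tf_df_query(docs, k=3))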
CorpusBuilder [8] uses automatic query generation to collect
documents in a specified language (Slovenian) from the
Web, with the purpose of building a corpus in that language.
The point is to preserve computation power by avoiding a
brute-force crawl of the Web where an expensive classifier
would have to be run on each encountered document. By
generating queries from the already existing corpus, the authors
hope to significantly increase the likelihood that the
resulting documents would already be in Slovenian, thus
speeding up document collection. Several methods for generating
queries are used interchangeably:
- uniform: select n terms from the relevant documents with equal
  probability;
- term-frequency: select the n most frequent terms from the relevant
  documents;
- probabilistic term-frequency: select n terms from the relevant
  documents with probability proportional to their term frequency;
- odds-ratio: select the n terms with the highest odds-ratio scores,
  as given by the formula
  OR_t = \log_2 \frac{P(t \mid relevant)\,(1 - P(t \mid nonrelevant))}{P(t \mid nonrelevant)\,(1 - P(t \mid relevant))}
  where t is the term, and P(t|relevant) and P(t|nonrelevant) are the
  probabilities of t appearing in a relevant and a non-relevant
  document, respectively;
- probabilistic odds-ratio: select n terms with probability
  proportional to their odds-ratio score.
The authors report best results with
n = 4 and the simple
odds-ratio method. However, this method is not necessarily
applicable to our task because identifying relevance with
respect to a query cluster is somewhat more subtle than
determining whether a returned document is in a particular
language such as Slovenian.
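For concreteness, a small sketch of odds-ratio term selection following the formula above; the probability smoothing (eps) is an added assumption to avoid division by zero and is not part of the original description.

import math

def odds_ratio_query(relevant_docs, nonrelevant_docs, n=4, eps=1e-6):
    """Select the n terms with the highest odds-ratio score.
    P(t|relevant) is estimated as the fraction of relevant documents containing t."""
    vocab = {t for d in relevant_docs + nonrelevant_docs for t in d}
    def prob(term, docs):
        hits = sum(1 for d in docs if term in d)
        return min(max(hits / len(docs), eps), 1 - eps)   # smoothed estimate
    scores = {}
    for t in vocab:
        p_rel = prob(t, relevant_docs)
        p_non = prob(t, nonrelevant_docs)
        scores[t] = math.log2((p_rel * (1 - p_non)) / (p_non * (1 - p_rel)))
    return sorted(scores, key=scores.get, reverse=True)[:n]

# Toy documents represented as sets of terms.
rel = [{"gene", "therapy", "dna"}, {"gene", "research"}, {"dna", "sequencing"}]
non = [{"football", "score"}, {"gene", "football"}]
print(odds_ratio_query(rel, non, n=2))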
OUR STUDY OF QUERY GENERATION
The purpose of this study is to examine the role of query
generation technique for a Web information agent: what
techniques work well, how much user overhead is warranted
and how query generation interacts with profiling. These
three factors correspond to the three questions articulated
in the Introduction.
4.1
Experiment Design
The basic protocol for our study was as follows:
1. construct user profiles,
2. generate queries from those profiles using each of the
query generation methods,
3. submit queries of different lengths to Google,
4. evaluate the results.
This led to a factorial experiment in which the independent
variables were query generation method and query length
and the dependent variables were two evaluation metrics:
return count and relevance.
814
4.1.1
Constructing the profiles
To expedite experimental control and data collection, we
constructed two user profiles from TREC disk #5 (see footnote 1). TREC
data consist of a large number of articles (over 250,000), of
which a large portion have been judged as either relevant
or non-relevant with respect to a set of 450 different topics.
Our first topic is on airport security measures, and was constructed
based on 46 relevant documents which appeared in
the Los Angeles Times over the course of two years (1989-1990
);this topic will be referred to as LATIMES. The second
topic is on genetic research, and was constructed based on
55 relevant documents from the Foreign Broadcasting Information
Service appeared in 1996;this topic will be referred
to as FBIS. One topic was picked randomly from each of the
two document collection on the TREC disk.
We used synthetically generated topics in order to test the
hypothetical scenario in which negative feedback is available.
By default, SurfAgent does not collect negative feedback in
the form of documents which are non-relevant to a given
topic. Thus, we are interested in how much performance
might be sacrificed by restricting feedback to only positive
examples. The number of positive documents used in the
construction of each topic (46 and 55, respectively) is realistic
compared to what a human user would have been capable
of providing while building her profile in real life.
4.1.2
Generating Queries
We implemented several methods, including both methods
which use such negative examples (e.g., odds-ratio) against
methods which do not (e.g., Boley's method [2] and term
frequency based methods).
In addition to the methods mentioned in Section 3, we
add two methods: deterministic extraction of highest weight
terms for SurfAgent's TF-IDF profile vectors and probabilistic
weighted extraction from the TF-IDF profile vectors.
The complete list of methods used is given below:
- Uniform (Unif): baseline case, select n terms with uniform
  probability;
- Boley: select the intersection of the k top-ranking terms according
  to term frequency in one set, and document frequency in the other;
- TF: select the n top-ranking terms according to term frequency;
- Probabilistic TF (P-TF): select n terms with probability
  proportional to their term frequency;
- OR: select the top-ranking n terms according to their odds-ratio
  score;
- Probabilistic OR (P-OR): select n terms with probability
  proportional to their odds-ratio score;
- TFIDF: select the n terms with the highest TF-IDF weight from the
  profile vector;
- Probabilistic TF-IDF (P-TFIDF): select n terms with probability
  proportional to their TF-IDF weights.
Footnote 1: Text REtrieval Conference: TREC benchmark disks are
publicly available and can be ordered from the conference
homepage at http://trec.nist.gov
The probabilistic versions were included because injection of
small amounts of randomness has been found to be helpful
in other search problems.
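The difference between the deterministic and probabilistic variants can be illustrated for the TF-IDF-based pair: TFIDF takes the n heaviest terms of the profile vector, while P-TFIDF samples n distinct terms with probability proportional to their weights. The profile weights below are hypothetical.

import random

def tfidf_query(profile, n):
    """Deterministic: the n terms with the highest TF-IDF weight."""
    return sorted(profile, key=profile.get, reverse=True)[:n]

def p_tfidf_query(profile, n, rng=random):
    """Probabilistic: sample n distinct terms, weighted by TF-IDF."""
    remaining = dict(profile)
    query = []
    for _ in range(min(n, len(remaining))):
        terms, weights = zip(*remaining.items())
        pick = rng.choices(terms, weights=weights, k=1)[0]
        query.append(pick)
        del remaining[pick]
    return query

profile = {"airport": 0.9, "security": 0.8, "screening": 0.4,
           "faa": 0.3, "baggage": 0.2}
print(tfidf_query(profile, 3))
print(p_tfidf_query(profile, 3))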
Code from SurfAgent was modified to collect the data
required by all query generation methods employed. For
each topic, we collected the following data:
- average term frequencies for terms in relevant documents;
- document frequencies for terms in relevant documents;
- TF-IDF vector built from relevant documents;
- odds-ratio scores for terms in relevant documents (odds-ratio
  scores are based on both relevant and non-relevant documents
  related to the topic).
From these data, we generated queries of seven lengths (two
to eight terms) for each of the eight methods.
For the
four probabilistic methods, we generated 10 queries of each
length, which means their reported results will be averages
over those 10 queries.
For Boley's method, we repeatedly computed the intersection
of the sets consisting of the top k ranking terms w.r.t. TF and DF,
while incrementing k. We retained all distinct queries of length
between 2 and 8. For FBIS, no value of k generated a query of
length 6. Similarly, for LATIMES, there was no value of k resulting
in a query of length 7.
4.1.3
Submit the Queries
The previous step resulted in 614 queries (307 for each
topic). We submitted these queries to the Google search engine
and collected back the first page of results. By default,
a page can contain up to 10 responses.
4.1.4
Collect the results
The results of the queries (the URLs returned) were parsed
out of the page returned by Google, and their corresponding
documents were retrieved from the Web and stored locally.
We discarded (and decremented our hit count accordingly)
all dead links and all hits that were in a format other than
ASCII (see footnote 2) or PDF: a total of 312 out of 2917 hits were
discarded. The PDF documents were converted into ASCII
using the pdftotext utility.
4.2
Results
For each valid hit, we computed the similarity between
the document's TF-IDF vector and the TF-IDF vector of
the appropriate topic, which is a measure of the document's
relevance. For each combination of query generation method
and query length, we recorded the number of hits received,
the relevance of the best hit, and the average relevance over
all hits received. For the probabilistic methods, these measurements
represent average values obtained over the ten
repetitions for each combination of parameters. The results
are summarized in Table 1 for the FBIS topic, and Table 2
for the LATIMES topic. The three rows corresponding to
each method indicate average relevance (top), maximum relevance
, and number of returned hits (bottom).
All methods return at least seven documents with query
lengths of 2, but most taper off in the number returned
Footnote 2: ASCII includes all variants and versions of HTML, XML,
etc.
                         query length
method      metric    2      3      4      5      6      7      8
Unif        avg:    .022   .046   .059   .018    -      -     .011
            max:    .051   .077   .101   .019    -      -     .011
            cnt:     7.7    4.3    3.2    0.4     0      0     0.1
P-TF        avg:    .026   .054   .044   .053   .069   .091   .082
            max:    .059   .096   .079   .102   .120   .192   .138
            cnt:     8.9    7.7    7.9    5.2    6.5    7.2    6.3
P-OR        avg:    .039   .047    -     .019    -      -      -
            max:    .099   .090    -     .019    -      -      -
            cnt:     9.0    3.2     0     0.1     0      0      0
P-TFIDF     avg:    .045   .058   .088   .069   .035   .034   .030
            max:    .100   .110   .178   .097   .054   .035   .055
            cnt:     9.1    6.1    8.4    2.4    2.7    0.6    1.4
Boley       avg:    .053   .077   .090   .081    -     .111   .088
            max:    .120   .112   .136   .168    -     .239   .239
            cnt:      9      9     10      7      0      8      9
TF          avg:    .036   .031   .048   .082   .081   .087   .083
            max:    .065   .059   .129   .134   .103   .130   .135
            cnt:     10      9     10      9      9     10      9
OR          avg:    .123   .186   .102    -      -      -      -
            max:    .155   .361   .190    -      -      -      -
            cnt:      9      8      2      0      0      0      0
TFIDF       avg:    .100   .144   .160   .176   .214   .278   .242
            max:    .377   .377   .377   .279   .399   .404   .242
            cnt:      9     10     10      7     10      4      1
Table 1: Average relevance (avg), maximum relevance (max), and count
of returned hits (cnt) for the FBIS topic on genetic technology;
"-" marks query lengths with no usable hits.
                         query length
method      metric    2      3      4      5      6      7      8
Unif        avg:    .012   .012   .012   .013   .004    -      -
            max:    .024   .024   .028   .019   .006    -      -
            cnt:     8.0    5.3    3.9    1.0    0.6     0      0
P-TF        avg:    .017   .026   .025   .028   .032   .024   .010
            max:    .042   .073   .062   .061   .046   .042   .011
            cnt:     9.1    9.5    6.0    6.5    2.0    4.0    0.7
P-OR        avg:    .017   .018   .016   .011    -     .007    -
            max:    .052   .039   .029   .013    -     .007    -
            cnt:     8.2    8.3    4.0    0.9     0     0.1     0
P-TFIDF     avg:    .026   .036   .064   .063   .059   .020   .010
            max:    .058   .103   .125   .156   .106   .036   .014
            cnt:     9.2    8.1    8.1    5.7    5.3    1.3    0.2
Boley       avg:    .040   .098   .135   .193   .199    -     .167
            max:    .086   .199   .299   .343   .359    -     .299
            cnt:      8      9      8      8      8      0      7
TF          avg:    .107   .058   .030   .048   .041   .069    -
            max:    .222   .093   .051   .075   .069   .069    -
            cnt:      7     10     10      7      6      1      0
OR          avg:    .048   .036   .348    -      -      -      -
            max:    .122   .096   .402    -      -      -      -
            cnt:      9      9      2      0      0      0      0
TFIDF       avg:    .115   .144   .155   .171   .144   .153   .143
            max:    .331   .331   .357   .299   .276   .349   .349
            cnt:      9      7      8      8      9      9      9
Table 2: Average relevance (avg), maximum relevance (max), and count
of returned hits (cnt) for the LATIMES topic on airport security;
"-" marks query lengths with no usable hits.
with longer query lengths. For the deterministic methods,
the relevance increases as the query length increases (until 7
or 8), but the relevance for the probabilistic methods tends
to plateau early.
Figure 3: Box plot of relevance by method for FBIS topic at query
length 2
Figure 4: Box plot of relevance by method for FBIS topic at query
length 3
All methods consistently outperform the baseline uniform term
selection. Probabilistic methods are outperformed by the
non-probabilistic ones, which is consistent with the observations
in [8]. The best results for the FBIS topic were
obtained using TFIDF at query length 7: a total of 4 hits
were returned, with an average relevance of .278 and a maximum
relevance of .404. The best results for the LATIMES topic were
obtained using OR at query length 4: two hits were returned, with
an average relevance of .348 and a maximum relevance of .402.
Query lengths 2 and 3 were the only ones where all methods
lead to non-empty returns for both topics.
To test
whether the differences between the methods were significant
, we ran an analysis of variance (ANOVA) study on
each topic at query lengths 2 and 3, with the query generation
method as the independent variable, and relevance as
the dependent. The effects were significant in each case: for
FBIS, we obtained
F = 14.007, p < 0.001 at query length 2,
and
F = 8.692, p < 0.001 at query length 3; for LATIMES,
we obtained
F = 24.027, p < 0.001 at query length 2, and
F = 20.277, p < 0.001 at query length 3.
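A one-way ANOVA of this form can be reproduced with standard statistical libraries; the sketch below uses SciPy with made-up relevance samples, where each group holds the relevance scores of the hits returned by one query generation method.

from scipy import stats

# Hypothetical relevance scores of returned hits, grouped by generation method.
relevance_by_method = {
    "TFIDF": [0.10, 0.38, 0.12, 0.09, 0.21],
    "OR":    [0.12, 0.16, 0.11, 0.15],
    "Unif":  [0.02, 0.05, 0.03, 0.01, 0.04],
}

f_stat, p_value = stats.f_oneway(*relevance_by_method.values())
print(f"F = {f_stat:.3f}, p = {p_value:.4f}")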
Figure 5: Box plot of relevance by method for LATIMES topic at
query length 2
Figure 6: Box plot of relevance by method for LATIMES topic at
query length 3
Box plots of relevance by method for query lengths 2 and 3 are
given in Figures 3 and 4 for FBIS, and Figures 5 and 6 for
LATIMES. Note that medians rather than means are
marked on the graph. These plots illustrate several points
obscured by the previous table. First, while TFIDF returns
the best match and second best median in all cases, overall
better results are obtained using OR for FBIS, and TF and
Boley for LATIMES. Second, each method can show high
variance in the results, although TFIDF tends generally to
higher variance. Additionally, the results for query length 3
have higher variance than those for query length 2. Finally,
the distributions are often skewed. For example, Boley consistently
has a higher median than mean.
Because relevance for all returned documents is measured
against the TF-IDF vector of the corresponding topic, the
experiments are slightly biased in favor of the TFIDF query
generation method.
Our experiments cannot prove that
TFIDF query generation is sufficient, but its good performance
coupled with the inconsistent performance of OR suggests
that we do not incur a significant loss by leaving out
negative feedback. Collecting other information based on
positive feedback in addition to TF-IDF topic vectors may
be required with SurfAgent: e.g., straight TF vectors and
topic-specific document frequency information would allow
us to use TF and Boley query generation in addition to the
TFIDF method. As the results show, sometimes TF and Boley
perform better than OR and TFIDF.
                         query length
method      metric     2      3      4      5      6      7      8
TFIDF       nrel:       8      3      4      4      4      -      -
            cnt:       10     10     10     10      9      0      0
P-TFIDF     nrel:     1.0    1.8    0.7    0.4    0.0    0.5     -
            cnt:      9.5    8.1    7.7    4.1    1.3    1.3    0.0
Table 3: Number of relevant documents (nrel) and count of returned
hits (cnt) for the user-generated topic on stress relief; "-" marks
query lengths with no hits.
Both Boley and TFIDF consistently result in many links
being returned, even for long query lengths. Many hits are
desirable when the agent will be pruning out previously visited
sites and filtering the returned links according to a filtering
threshold.
The computation time required for query generation by
each of the studied methods is negligible when compared to
the network-related delays incurred while downloading the
actual documents.
PILOT USER STUDY
To gain an understanding of the performance of TFIDF
query generation without the bias present in our experiments
with synthetically generated topics, we have also performed
a pilot study with a user-generated topic, containing
34 documents on stress relief. Since SurfAgent only collects
TF-IDF information at this time, query generation was limited
to the TFIDF and P-TFIDF methods. We followed the
same protocol as with the synthetically generated topics:
query lengths were between 2 and 8, and results were averaged
over ten trials for the probabilistic version P-TFIDF.
A total of 343 distinct URLs were returned by Google. We
shuffled these URLs and asked the user to examine each one
of them, and mark the ones she found to be relevant to her
topic. 56 documents out of the total of 343 were found relevant
. Table 3 presents the number of relevant documents
and the number of hits returned for each parameter combination.
This pilot study supports the hypothesis that TFIDF based
queries will generate an adequate incoming stream: queries
of length up to six returned at least nine hits from Google.
Unlike the previous study, the shorter queries yielded lower
relevance, which could be due to the way the user was judging
relevance or to the nature of the topic.
As a followup, we will be designing a user study that includes
the three apparently best methods (TF, TFIDF, and
Boley). We will focus on four issues: Does method performance
vary among users and topics (as is suggested by
our current study)?
Should profile construction incorporate
more information? Does relevance assessment change
as profiles become more mature? Can the best query length
be determined a priori?
CONCLUSIONS
We studied several methods for generating queries from a
user profile to answer three questions related to the design
of Web information agents. First, how do different query
generation algorithms perform relative to each other? In
fact, we observed significantly different performance among
the eight methods tested. Overall, Boley, TFIDF and to a
lesser extent TF provided a good number of hits and relatively
high relevance.
Second, is positive relevance feedback adequate to support
the task? We found that leaving out negative training
examples does not incur a significant performance loss.
Odds-Ratio was found to excel on one topic, but its competitive
advantage does not appear to be worth the additional
overhead expected from the user. TFIDF and Boley, requiring
only positive relevance feedback, generated queries that
resulted in relevant hits.
Third, can user profiles be learned independently of the
service? The results from TFIDF and the pilot experiment
do suggest it. However, the pilot study also suggests that either
user relevance judgments may be a bit lower (harsher)
than the automated method or that the profile may not
adequately reflect the users' interests. In fact, the good performance
of Boley and TF indicates that in some cases it
might be worthwhile to collect more than TF-IDF information
from the user-supplied positive training examples. This
last question will be examined in more detail in the future.
Our study confirmed that additional user burden in the
form of negative feedback appears unwarranted to support
document generation and that queries generated based on
automatically learned profiles can guide harvesting of new
documents of interest. This last result is excellent news for
the development of agents that leverage a single learned profile
to personalize a multitude of web information services.
ACKNOWLEDGMENTS
This research was supported in part by National Science
Foundation Career Award IRI-9624058. The United States
Government is authorized to reproduce and distribute reprints
for governmental purposes notwithstanding any copyright
notation herein.
REFERENCES
[1] R. Armstrong, D. Freitag, T. Joachims, and
T. Mitchell. WebWatcher: A Learning Apprentice for
the World Wide Web. In Proceedings of the AAAI
Spring Symposium on Information Gathering from
Heterogeneous, Distributed Resources, Stanford, CA,
1995.
[2] D. Boley, M. Gini, R. Gross, E. Han,
K. Hastings, G. Karypis, V. Kumar, M. Bamshad,
and J. Moore. Document Categorization and Query
Generation on the World Wide Web Using WebAce.
AI Review, 13(5-6):365-391, 1999.
[3] S. Brin and L. Page. The Anatomy of a Large-scale
Hypertextual Web Search Engine. Computer Networks
and ISDN Systems, pages 107-117, 1998.
[4] M. Charikar, C. Chekuri, T. Feder, and R. Motwani.
Incremental Clustering and Dynamic Information
Retrieval. Proceedings of the 29th ACM Symposium on
Theory of Computing, 1997.
[5] L. Chen and Katia Sycara. WebMate: A Personal
Agent for Browsing and Searching. In Proceedings of
the Second International Conference on Autonomous
Agents, Minneapolis, MN, 1998.
[6] D. Dreilinger and A.E. Howe. Experiences with
selecting search engines using meta-search. ACM
Transactions on Information Systems, 15(3):195-222,
1997.
[7] G.W. Flake, E.J. Glover, S. Lawrence, and C.L. Giles.
Extracting Query Modifications from Nonlinear
SVMs. In Proceedings of the Eleventh International
World Wide Web Conference (WWW 2002),
Honolulu, HI, U.S.A., 2002.
[8] R. Ghani, R. Jones, and D. Mladenic. On-line learning
for query generation: Finding documents matching a
minority concept on the web. In Proc. of the First
Asia-Pacific Conference on Web Intelligence, 2001.
[9] E.J. Glover, G.W. Flake, S. Lawrence,
W.P. Birmingham, A. Kruger, C.L. Giles, and
D. Pennock. Improving Category Specific Web Search
by Learning Query Modifications. In Proceedings of
the IEEE Symposium on Applications and the Internet
(SAINT 2001), San Diego, CA, U.S.A., 2001.
[10] T. Joachims, D. Freitag, and T. Mitchell.
WebWatcher: A Tour Guide for the World Wide Web.
In Proc. of the 15th International Joint Conference on
Artificial Intelligence, Nagoya, Japan, 1997.
[11] S. Lawrence and C.L. Giles. Context and page
analysis for improved web search. IEEE Internet
Computing, 2(4):38-46, 1998.
[12] S. Lawrence and C.L. Giles. Searching the world wide
web. Science, 280:98-100, April 3, 1998.
[13] H. Lieberman. Letizia: An agent that assists web
browsing. In Proceedings of the 14th International
Joint Conference on Artificial Intelligence (IJCAI-95),
Montreal, Canada, 1995.
[14] Inc. Netmation. 100 most popular web sites.
http://netmation.com/list100.htm.
[15] A. Pretschner and S. Gauch. Personalization on the
web. Technical Report ITTC-FY2000-TR-13591-01,
Dept. of Electrical Engineering and Computer Science,
University of Kansas, December 1999.
[16] J.J. Rocchio. Relevance feedback in information
retrieval. In G. Salton, editor, The SMART Retrieval
System: Experiments in Automatic Document
Processing. Prentice-Hall, 1971.
[17] G. Salton. Automatic Text Processing: The
Transformation, Analysis, and Retrieval of
Information by Computer. Addison-Wesley, 1988.
[18] E. Selberg and O. Etzioni. The metacrawler
architecture for resource aggregation on the web.
IEEE Expert, 12(1):8-14, 1997.
[19] G.L. Somlo and A.E. Howe. Adaptive lightweight text
filtering. In Proceedings of the 2001 Conference on
Intelligent Data Analysis (IDA '01), Lisbon, Portugal,
September 2001.
[20] Gabriel L. Somlo and Adele E. Howe. Incremental
clustering for profile maintenance in information
gathering web agents. In Proceedings of the 2001
International Conference on Autonomous Agents
(AGENTS'01), Montreal, Canada, May 2001.
[21] A. Spink, J. Bateman, and B.J. Jansen. Searching
heterogeneous collections on the web: Behavior of
excite users. Information Research: An Electronic
Journal, 5(2), 1998.
http://www.shef.ac.uk/~is/publications/infers
| query generation;information agents;user modeling |
211 | Very Low Complexity MPEG-2 to H.264 Transcoding Using Machine Learning | This paper presents a novel macroblock mode decision algorithm for inter-frame prediction based on machine learning techniques to be used as part of a very low complexity MPEG-2 to H.264 video transcoder. Since coding mode decisions take up the most resources in video transcoding, a fast macro block (MB) mode estimation would lead to reduced complexity. The proposed approach is based on the hypothesis that MB coding mode decisions in H.264 video have a correlation with the distribution of the motion compensated residual in MPEG-2 video. We use machine learning tools to exploit the correlation and derive decision trees to classify the incoming MPEG-2 MBs into one of the 11 coding modes in H.264. The proposed approach reduces the H.264 MB mode computation process into a decision tree lookup with very low complexity. The proposed transcoder is compared with a reference transcoder comprised of a MPEG-2 decoder and an H.264 encoder. Our results show that the proposed transcoder reduces the H.264 encoding time by over 95% with negligible loss in quality and bitrate. | INTRODUCTION
During the past few years, technological developments, such as
novel video coding algorithms, lower memory costs, and faster
processors, are facilitating the design and development of highly
efficient video encoding standards. Among the recent works in
this area, the H.264 video encoding standard, also known as
MPEG-4 AVC occupies a central place [1].
The H.264 standard is highly efficient by offering perceptually
equivalent video quality at about 1/3 to 1/2 of the bitrates offered
by the MPEG-2 format. However, these gains come with a
significant increase in encoding and decoding complexity [2].
Though H.264 is highly efficient compared to MPEG-2, the wide
and deep penetration of MPEG-2 creates a need for co-existence
of these technologies and hence creates an important need for
MPEG-2 to H.264 transcoding technologies. However, given the
significant differences between both encoding algorithms, the
transcoding process of such systems is much more complex
compared to the other heterogeneous video transcoding processes
[3-6]. The main elements that require to be addressed in the
design of an efficient heterogeneous MPEG-2 to H.264 transcoder
are [7]: the inter-frame prediction, the transform coding and the
intra-frame prediction. Each of these elements needs to be
examined, and various research efforts are underway. In this
paper, we focus our attention on a part of the inter-frame
prediction: the macroblock mode decision, one of the most
stringent tasks involved in the transcoding process.
A video transcoder is comprised of a decoding stage followed by
an encoding stage. The decoding stage of a transcoder can
perform full decoding to the pixel level or partial decoding to the
coefficient level. Partial decoding is used in compressed domain
transcoding where the transform coefficients in the input format
are directly transcoded to the output format. This transformation
is straightforward when the input and output formats of the
transcoder use the same transform (e.g., MPEG-2 to MPEG-4
transcoding) [5]. When these transforms differ substantially, the
compressed domain transcoding becomes computationally
expensive. The utility of this compressed domain transcoding is
limited to intra MB transcoding. For predicted MBs, the
transcoding in compressed domain becomes prohibitively
expensive. The substantial differences in MPEG-2 and H.264
make even intra transcoding in the compressed domain relatively
expensive [8]; pixel domain transcoding is shown to produce
better results [9].
Pixel domain transcoders have a full decoding stage followed by a
reduced complexity encoding stage. The complexity reduction is
achieved by reusing the information gathered from the decoding
stage. It is assumed that the input video is encoded with
reasonable RD optimization. The MPEG-2 to H.264 complexity
reduction techniques reported in the literature fall into two
categories: 1) MB mode mapping in H.264 based on the MB
modes of the incoming video [10] and 2) Selective evaluation of
MB modes in H.264 based on heuristics [11]. Because of the large
number of inter and intra MB coding modes supported by H.264,
there is no one-to-one mapping between MPEG-2 and H.264 MB
modes. A direct mapping leads to either a sub-optimal decision if
the mapped mode is the final MB mode or an increase on
complexity if additional evaluations have to be made to improve
the mode decision. Selective evaluation is based on the
observation that certain MB modes are less likely to occur for a
class of videos and bitrates. If the selective evaluation is
aggressive in limiting the number of allowed modes, the
performance is sub-optimal. Conversely, increasing the
number of allowed modes increases the complexity.
We have developed an innovative approach that is not limited by
the inefficiencies of mode mapping or selective evaluation
approaches. The proposed approach is based on the hypothesis
that MB coding mode decisions in H.264 video have a correlation
with the distribution of the motion compensated residual in
MPEG-2 video. Exploiting this correlation together with the MB
coding modes of MPEG-2 could lead to a very low complexity
transcoder. Figure 1 shows a plot of the mean and variance of the
MPEG-2 MB residual in the input video and the H.264 MB
coding mode of the corresponding MB in the transcoded video.
As the coding mode changes, the shift in the mean and variance of
the corresponding MB can be clearly seen. This correlation can be
effectively exploited using machine learning approaches. Thus,
the H.264 MB mode computation problem is posed as a data
classification problem where the MPEG-2 MB coding mode and
residual have to be classified into one of the several H.264 coding
modes. The proposed transcoder is developed based on these
principles and reduces the H.264 MB mode computation process
into a decision tree lookup with very low complexity.
Figure 1. Relationship between MPEG-2 MB residual and
H.264 MB coding mode.
The rest of the paper is organized as follows. Section 2 reviews
the principles of operation of the prediction of inter-coded
macroblocks in p-slices used by the H.264 encoding standard.
Section 3 introduces our macroblock mode decision algorithm for
inter-frame prediction based on machine learning techniques,
specifically designed for MPEG-2 to H.264 transcoders. In
Section 4, we carry out a performance evaluation of the proposed
algorithm in terms of its computational complexity and rate-distortion
results. We compare the performance of our proposal to
the reference transcoder with the encoding stage using the H.264
reference implementation. Finally, Section 5 draws our
conclusions and outlines our future research plans.
MACROBLOCK MODE DECISION AND MOTION ESTIMATION IN H.264
In the H.264 standard, the macroblock decision mode and motion
estimation are the most computationally expensive processes.
H.264 uses block-based motion compensation, the same principle
adopted by every major coding standard since H.261. Important
differences from earlier standards include the support for a range
of block sizes (down to 4x4) and fine sub-pixel motion vectors
(1/4 pixel in the luma component). H.264 supports motion
compensation block sizes ranging from 16x16 to 4x4 luminance
samples with many options between the two. The luminance
component of each macroblock (16x16 samples) may be split up
in 4 ways: 16x16, 16x8, 8x16 or 8x8. Each of the sub-divided
regions is a macroblock partition. If the 8x8 mode is chosen, each
of the four 8x8 macroblock partitions within the macroblock may
be further split in 4 ways: 8x8, 8x4, 4x8 or 4x4 (known as sub-macroblock
partitions). These partitions and sub-partitions give
rise to a large number of possible combinations within each
macroblock (see Figure 2). This method of partitioning
macroblocks into motion compensated sub-blocks of varying size
is known as tree structured motion compensation.
Figure 2. Macroblock partitions, sub-macroblock partitions
and partition scans.
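The tree-structured partitioning can be summarized with a small enumeration of the allowed luma block sizes; the helper below simply counts how many motion vectors a given split implies and is only illustrative, not H.264 syntax.

# Luma partition sizes (width x height in samples) allowed for an inter MB.
MB_PARTITIONS = {"16x16": (16, 16), "16x8": (16, 8), "8x16": (8, 16), "8x8": (8, 8)}
# If 8x8 is chosen, each 8x8 partition may be split further into sub-partitions.
SUB_PARTITIONS = {"8x8": (8, 8), "8x4": (8, 4), "4x8": (4, 8), "4x4": (4, 4)}

def motion_vectors_per_mb(partition, sub_partition=None):
    """Number of motion vectors needed for one macroblock under a given split."""
    w, h = MB_PARTITIONS[partition]
    count = (16 // w) * (16 // h)
    if partition == "8x8" and sub_partition is not None:
        sw, sh = SUB_PARTITIONS[sub_partition]
        count *= (8 // sw) * (8 // sh)
    return count

print(motion_vectors_per_mb("16x8"))        # 2 motion vectors
print(motion_vectors_per_mb("8x8", "4x4"))  # 16 motion vectors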
A separate motion vector (previously calculated in the motion
estimation module) is required for each partition or sub-partition.
Each motion vector must be coded and transmitted; in addition,
the choice of partition(s) must be encoded in the compressed
bitstream. Choosing a large partition size (e.g. 16x16, 16x8, 8x16)
means that a small number of bits are required to signal the choice
of motion vector(s) and the type of partition; however, the motion
compensated residual may contain a significant amount of energy
in areas with high detail. Choosing a small partition size (e.g. 8x4,
4x4, etc.) may give a lower-energy residual after motion
compensation but requires a larger number of bits to signal the
motion vectors and choice of partition(s). The choice of partition
size therefore has a significant impact on compression
performance. In general, a large partition size is appropriate for
homogeneous areas of the frame and a small partition size may be
beneficial for areas with high detail.
The resolution of each chroma component in a macroblock (Cr
and Cb) is half that of the luminance (luma) component. Each
chroma block is partitioned in the same way as the luma
component, except that the partition sizes have exactly half the
horizontal and vertical resolution (an 8x16 partition in luma
corresponds to a 4x8 partition in chroma; an 8x4 partition in luma
corresponds to 4x2 in chroma; and so on). The horizontal and
vertical components of each motion vector (one per partition) are
halved when applied to the chroma blocks.
Each partition in an inter-coded macroblock is predicted from an
area of the same size in a reference picture. The offset between
the two areas (the motion vector) has quarter-pixel resolution (for the
luma component). If the video source sampling is 4:2:0, 1/8 pixel
samples are required in the chroma components (corresponding to
quarter-pixel samples in the luma). The luma and chroma samples at
sub-pixel positions do not exist in the reference picture and so it is
necessary to create them using interpolation from nearby image
samples. Sub-pixel motion compensation can provide
significantly better compression performance than integer-pixel
compensation, at the expense of increased complexity. Quarter-pixel
accuracy outperforms half-pixel accuracy.
Encoding a motion vector for each partition can take a significant
number of bits, especially if small partition sizes are chosen.
Motion vectors for neighboring partitions are often highly
correlated and so each motion vector is predicted from vectors of
nearby, previously coded partitions. The method of forming the
prediction MVp depends on the motion compensation partition
size and on the availability of nearby vectors.
In H.264, the macroblock mode decision is the most
computationally expensive process. Mode decision is a process
such that for each possible block-size a cost is evaluated. The
encoder selects the coding-modes for the macroblock, including
the best macroblock partition (sub-macroblock partition) and
mode of prediction for each macroblock partition, such that the
cost is optimized. In the JM reference code (version 10.2) [12],
the motion estimation and the mode decision are executed
together. This implies that for each macroblock partition (sub-macroblock
partition) within the MB, motion estimation is done
first and the resulting cost is used for the mode decision.
In the H.264, two methods have been defined to evaluate the cost
for MB mode decision: RD-cost and SAE-cost. In the following,
we describe these two methods.
2.1 The RD-Cost
The Rate-Distortion (RD) optimization method is based on a
Lagrange multiplier [13] [14]. The H.264 standard can make use
of this optimization method to choose the best macroblock mode
decision. Different from evaluating the cost of coding a
macroblock on a pixel by pixel basis (SAE cost), the RD-cost
consists of making the selection based on a Lagrange function. In
this way, the H.264 standard selects the macroblock mode
exhibiting the minimum Lagrange cost. This implies that for each
existing macroblock partition (sub-partition) within the MB, bitrate
and distortion are calculated by actually encoding and
decoding the video. Therefore, the encoder can achieve the best
Rate-Distortion performance results, at the expense of calculation
complexity.
For evaluating the RD-cost, the standard has to obtain the
encoding rate, R, and the distortion, D, of each macroblock
partition (sub-macroblock partition). The former is obtained by
first computing the difference between the original macroblock
and its predictor. Thereafter, a 4x4 Hadamard Transform (HT)
has to be applied followed by a quantization process. The
distortion, D, is obtained by performing an inverse quantization
process followed by its inverse HT and then comparing the
original macroblock to the reconstructed one. The H.264 standard
then chooses the decision mode having the minimum cost, J. The
cost is evaluated using the Lagrange function J = D + λ × R, where
λ is the Lagrange multiplier. Figure 3 depicts the overall process.
One of the main drawbacks of this method is its excessive
computational cost. On the other hand, the encoder can achieve the
best Rate-Distortion performance results. However, for many
applications, the use of the Lagrange multiplier may be
prohibitive. This is the case when developing a transcoding
architecture aimed to work in real-time.
Figure 3. RD-cost method in the H.264 encoder (HT and quantization,
inverse quantization and inverse HT, rate computation, distortion
computation, and cost J = D + λ × R).
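In essence, the RD-based mode decision evaluates J = D + λ × R for every candidate mode and keeps the minimum. The sketch below assumes the caller supplies per-mode distortion and rate estimates; the expensive encode/decode loop of Figure 3 is abstracted away, and the numbers are hypothetical.

def rd_mode_decision(candidates, lagrange_multiplier):
    """candidates: iterable of (mode, distortion, rate) tuples.
    Returns the mode minimizing the Lagrangian cost J = D + lambda * R."""
    best_mode, best_cost = None, float("inf")
    for mode, distortion, rate in candidates:
        cost = distortion + lagrange_multiplier * rate
        if cost < best_cost:
            best_mode, best_cost = mode, cost
    return best_mode, best_cost

# Hypothetical distortion/rate figures for a few inter modes of one MB.
modes = [("16x16", 1200.0, 48), ("16x8", 1050.0, 70), ("8x8", 900.0, 120)]
print(rd_mode_decision(modes, lagrange_multiplier=4.0))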
2.2 The SAE-Cost
In this method, the H.264 encoder selects the best macroblock
mode by using the Sum of Absolute Errors (SAE). This implies
that for each existing macroblock partition (sub-partition) within
the MB, a predictor within the pixel-domain is created from the
motion estimation of the current partition and the SAE cost is evaluated. For each MB and for each color component (Y, Cr, Cb), one prediction mode has to be obtained; the best mode is the one exhibiting the minimum SAE cost. The main advantage of this method is its low computational cost, but the Rate-Distortion performance is sub-optimal.
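For reference, the SAE computation itself is trivial; a sketch with our own naming that picks the prediction mode with minimum SAE could look like this:

import numpy as np

def sae(original, predicted):
    """Sum of Absolute Errors between a block and its prediction."""
    return int(np.sum(np.abs(original.astype(np.int64) - predicted.astype(np.int64))))

def best_mode_by_sae(original, predictors):
    """predictors: dict mapping a candidate mode name to its predicted block."""
    return min(predictors, key=lambda mode: sae(original, predictors[mode]))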
2.3 The Fast Motion Estimation Option
Motion estimation is one of the most important tools in the H.264 encoder for exploiting the high temporal redundancy between successive frames to improve video coding efficiency. It is also the most time-consuming part of the H.264 encoder (since it is also used for mode decision). Motion estimation is generally conducted in two steps: first, integer-pel motion estimation; second, fractional-pel motion estimation around the position obtained by the integer-pel search.
Fast Motion Estimation (FME) algorithms are an active research topic. Fast integer-pel motion estimation has attracted most of the attention because traditional fractional-pel motion estimation accounts for only a small proportion of the overall motion estimation load. Fast motion estimation algorithms such as EPZS [15], UMHexagonS [16], and SEA [17] have been proposed to reduce the number of search points in motion estimation.
The UMHexagonS algorithm proposed by Tsinghua University
was adopted by the H.264/MPEG-4 Part 10 (AVC) reference
software implementation [12]. This algorithm uses the hybrid and
hierarchical motion search strategies. It includes four steps with
different kinds of search pattern: 1) Predictor selection and
prediction mode reordering; 2) Unsymmetrical-cross search; 3)
Uneven multi-hexagon-grid search; 4) Extended hexagon-based
search. With the second and third steps, the motion estimation accuracy can be nearly as high as that of full search while the computational load is greatly reduced. The unsymmetrical-cross search uses the prediction vector as the search center and extends in the horizontal and vertical directions. The uneven multi-hexagon-grid search includes two sub-steps: first, a full search is carried out around the search center, and then a 16-HP multi-hexagon-grid search strategy is applied. The extended hexagon-based search is used as a center-biased refinement, including hexagon search and diamond search in a small range.
In the H.264 reference software, the Fast Motion Estimation
(FME) algorithm (based on the UMHexagonS algorithm) can be
employed for the motion estimation in addition to the original
Full Search (FS) algorithm.
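As an illustration of the final centre-biased stage only (not of the full UMHexagonS algorithm), a small-diamond refinement around a search centre can be sketched as follows, where cost(x, y) is assumed to return the matching cost of candidate motion vector (x, y):

def small_diamond_refine(cost, center, max_iters=32):
    """Repeatedly move to the cheapest of the four small-diamond neighbours
    until the current centre is itself the minimum (or max_iters is reached)."""
    cx, cy = center
    best = cost(cx, cy)
    for _ in range(max_iters):
        neighbours = [(cx + 1, cy), (cx - 1, cy), (cx, cy + 1), (cx, cy - 1)]
        costs = [(cost(x, y), (x, y)) for x, y in neighbours]
        c_min, (nx, ny) = min(costs)
        if c_min >= best:
            break                      # the centre is a local minimum
        best, cx, cy = c_min, nx, ny
    return (cx, cy), best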
MACHINE LEARNING
Machine learning refers to the study of algorithms and systems
that "learn" or acquire knowledge from experiences. Deductive
machine learning deduces new rules/knowledge from existing
rules and inductive machine learning uses the analysis of data sets
for creating a set of rules for making decisions. In machine learning, these rules can be used to build a decision tree from a set of experiments or examples, called the training data set. This set of data must have the following properties [18]:
1. Each attribute or variable can take nominal or numerical
values, but the number of attributes cannot vary from one example to another. That is, all the samples in the
training data set used for training the model must have
the same number of variables.
2. The set of categories that the examples can be assigned
to must a priori be known to enable supervised learning.
3. The set of categories must be finite and must be
different from one another.
4. Since inductive learning consists of generalizing from examples, a sufficiently large number of examples is assumed to be available.
Machine learning uses statistics with different kinds of algorithms
to solve a problem by studying and analyzing the data. Machine
learning has been used in an extensive range of applications
including search engines, medical diagnosis, stock market
analysis, classifying DNA sequences, speech and handwriting
recognition, object recognition in computer vision, game playing
and robot motion, etc.
In this paper, we describe the process of using machine learning
to build a decision tree for very low complexity transcoding. The
decision tree will be used to determine the coding mode of an MB
in P frames of the output H.264 video, based on the information
gathered during the MPEG-2 decoding stage. Figure 4 depicts the
process for building the decision trees to be used in the MPEG-2
to H.264 transcoding process. The incoming MPEG-2 video is
decoded and during the decoding stage, the MB coding mode, the
coded block pattern (CBPC), and the mean and variance of the
residual information for this MB (calculated for its 4x4 sub-blocks
resulting in 16 means and 16 variances for each MB) are
saved. The decoded MPEG-2 video is then encoded using the
standard H.264 encoder. The coding mode of the corresponding
MBs in H.264 is also saved. Based on the MPEG-2 data and the
corresponding H.264 coding mode decision for each MB, a
machine learning algorithm is used to create decision trees that
classify an MB into one of the 11 H.264 MB coding modes.
Figure 4. Process for building decision trees for MPEG-2 to
H.264 transcoding.
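As an illustration, the 16 means and 16 variances could be gathered as follows for one 16x16 residual macroblock (a sketch with our own naming; in the actual transcoder this bookkeeping happens inside the MPEG-2 decoding stage):

import numpy as np

def residual_features(residual_mb):
    """residual_mb: 16x16 array of MPEG-2 residual samples for one macroblock.
    Returns the 16 means and 16 variances of its 4x4 sub-blocks, in raster order."""
    means, variances = [], []
    for by in range(0, 16, 4):
        for bx in range(0, 16, 4):
            sub = residual_mb[by:by + 4, bx:bx + 4]
            means.append(float(np.mean(sub)))
            variances.append(float(np.var(sub)))
    return means, variances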
3.1 Creating the Training Files
A decision tree is made by mapping the observations about a set
of data to a tree made of arcs and nodes. The nodes are the
variables and the arcs the possible values for that variable. The
tree can have more than one level; in that case, the nodes (leafs of
the tree) represent the decisions based on the values of the
different variables that drive the decision from the root to the leaf.
These types of trees are used in the machine learning processes
for discovering the relationships in a set of data. The tree leafs are
the classifications and the branches are the features that lead to a
specific classification. A tree decision is a classifier based on a set
of attributes allowing us to determine the category of an input
data sample.
The decision tree for the transcoder was made using the WEKA
data mining tool [18]. The files that are used for the WEKA data
mining program are known as Attribute-Relation File Format
(ARFF) files. An ARFF file is written in ASCII text and shows
the relationship between a set of attributes. Basically, this file has
two different sections:1) the header which contains the name of
the relation, the attributes that are used, and their types; and 2) the
section containing the data.
The training sets were made using MPEG-2 sequences encoded at
higher than the typical broadcast encoding rates for the same
quality, since the B frames are not used. The H.264 decisions in
the training set were obtained from encoding the MPEG-2
decoded sequence with a quantization parameter of 25 and RD
optimization enabled. After extensive experimentation, we found
that sequences that contain regions varying from homogenous to
high-detail serve as good training sets. Good sample sequences
could be Flower and Football. The goal is to develop a single,
generalized, decision tree that can be used for transcoding any
MPEG-2 video.
Figure 5 shows the decision trees built using the process depicted
in Figure 4. As shown in Figure 4, the Decision Tree for the
proposed transcoder is a hierarchical decision tree with three
different WEKA trees: 1) classifier for Intra, Skip, Inter 16x16,
and Inter 8x8, 2) classifier to classify inter 16x16 into one of
16x16, 16x8, and 8x16 MBs and 3) classifier to classify inter 8x8
into one of 8x8, 8x4, 4x8, or 4x4. This paper focuses on the inter MB mode computation; the further classification and processing of intra MBs is not discussed here.
For creating the first WEKA tree (Figure 5 node 1), the first
training data set uses the mean and variance of each one of the
sixteen 4x4 residual sub-blocks, the MB mode in MPEG-2 (skip,
intra, and three non-intra modes, labeled as 0, 1, 2, 4 and 8 in the
code shown below), the coded block pattern (CBPC) in MPEG-2,
and the corresponding H.264 MB coding mode decision for that
MB as determined by the standard reference software. The header
section of the ARFF files has the attribute declaration depicted
herein:
@RELATION mean-variance_4x4
@ATTRIBUTE mean0 Numeric
@ATTRIBUTE variance0 Numeric
@ATTRIBUTE mean1 Numeric
@ATTRIBUTE variance1 Numeric
...
@ATTRIBUTE mean15 Numeric
@ATTRIBUTE variance15 Numeric
@ATTRIBUTE mode_mpeg2 {0,1,2,4,8}
@ATTRIBUTE CBPC0 {0,1}
...
@ATTRIBUTE CBPC6 {0,1}
@ATTRIBUTE class {0,1,8,9}
The dependent variable, namely class in the example, is
the variable that we are trying to understand, classify, or
generalize. The other attributes are the variables that determine
the classification. The ARFF data section has the instance lines,
which are the samples used to train our model. Each macroblock
sample is represented on a single line. In this case the variable
class can take four values (skip, 16x16, 8x8 or Intra labeled as 0,
1, 8 and 9 in the code).
The second training data set, used for creating the second WEKA
tree (Figure 5 node 2), was made using the samples (MBs) that
were encoded as 16x16 MBs in the H.264 reference encoder. It
uses the mean and variances of each one of the sixteen 4x4
residual sub-blocks, the MB mode in MPEG-2 (in this case only
the three non-intra modes), the coded block pattern (CBPC) in
MPEG-2, and the corresponding H.264 MB coding sub-mode
decision in the 16x16 mode, as determined by the standard
reference software: 16x16, 16x8 or 8x16. This tree determines the
final coding mode of the MBs classified as inter 16x16 by the first
tree.
The third and last training data set, was used to create the third
WEKA tree (Figure 5 node 3) and was made using the samples
(MBs) that were encoded as inter 8x8 MBs in the H.264 reference
encoder. It uses four means and four variances of 4x4 residual
sub-blocks, the MB mode in MPEG-2 (the three non-intra
modes), the coded block pattern (CBPC) in MPEG-2, and the
corresponding H.264 MB sub-partition decision in the 8x8 mode,
as determined by the standard reference software: 8x8, 8x4, 4x8
or 4x4. Since this decision is made separately for each 8x8 sub-block
, only the four means and four variances of 4x4 residual sub-blocks
are used in each sample for training the model.
Based on these training files, the J48 algorithm implemented in
the WEKA data mining tool was used to create the three decision
trees. The J48 algorithm is an implementation of the C4.5
algorithm proposed by Ross Quinlan [19]: the algorithm widely
used as a reference for building decision trees.
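The paper relies on WEKA's J48 (C4.5). As a rough, non-equivalent illustration of the same workflow only, a CART-style tree from scikit-learn could be trained on analogous feature vectors; the array names are ours, with X standing for the ARFF instances (16 means, 16 variances, MPEG-2 mode, CBPC bits) and y for their class labels.

from sklearn.tree import DecisionTreeClassifier

def train_mode_classifier(X, y):
    """Fit a decision tree that maps MPEG-2 residual features to H.264 modes."""
    clf = DecisionTreeClassifier(criterion="entropy", min_samples_leaf=50)
    clf.fit(X, y)
    return clf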
The decision tree that is proposed to solve the inter-prediction
problem, is a model of the data that encodes the distribution of the
class label in terms of the attributes. The final goal of this
decision tree is to help find a simple structure to show the
possible dependences between the attributes and the class.
3.2 The Decision Tree
This sub-section discusses the proposed macroblock mode
decision algorithm aiming to accelerate the inter-frame prediction.
This goal is achieved by making use of the MPEG-2 MB coding
modes, the coded block pattern (CBPC), and the mean and
variance of the residual information for this MB calculated for its
4x4 sub-blocks. MPEG-2 uses 16x16 motion compensation (MC)
and does not temporally decorrelate an image fully. The MC
residual can thus be exploited to understand the temporal
correlation of variable block sizes in H.264. The open source
WEKA data mining tool is used to discover a pattern of the mean,
variance, MPEG-2 coding modes, and the coded block pattern in
MPEG-2 (CBPC) for H.264 coding mode decisions. Figure 5
shows the decision tree used in the proposed transcoder.
The decision tree consists of three WEKA decision trees, shown
in Figure 5 with grey balls. The first WEKA tree is used to check
for the skip, Intra, 8x8 and 16x16 MBs modes. If an MB is 8x8 or
16x16, a second and a third decision tree is used for selecting the
final coding mode of the MB. The WEKA tool determined the
mean and variance thresholds for each of the three WEKA trees in
the decision tree. Due to space constraints we cannot show all the
rules being evaluated in the WEKA decision nodes. The process
described herein should be sufficient for interested readers to develop the decision trees and repeat these experiments. The
decision tree works as follows:
Node 1. The inputs for this node are all the MPEG-2 coded MBs.
In this node a decision tree generated with WEKA is used to decide how the MB should be coded in H.264. This tree examines whether the MB has a very high or a medium residual. The output of this node is a first-level decision mode that should be used for coding the MB: skip, Intra, 8x8 or 16x16. The intra decision process is not discussed in this paper. In the other cases, the algorithm has to make a second-level decision based on the first decision. For example, the following rules were given by
WEKA:
If the MPEG-2 MB was "MC not coded" (non-zero MV present, none of the 8x8 blocks has coded coefficients), then
the MB will be coded as 16x16 in H.264. Again, a second
decision level will be made to select the best choice in this
case (see node 2).
If the MPEG-2 MB was coded in intra mode, the MB will be
coded as intra or inter 8x8 mode in H.264. In some cases the
algorithm will propose Intra, and the algorithm will end, and
in other cases the algorithm will propose 8x8 mode, so a
second level decision will be done (see node 3).
If the MPEG-2 MB was coded in skip mode, then the H.264
decision mode should be skip. The decision will be made in
node 4.
Figure 5. The Decision Tree.
Node 2. The inputs for this node are the 16x16 MBs classified by
node 1. In this node we again use a decision tree generated with WEKA, this time to decide how the MB should be coded in H.264 (16x16, 16x8 or 8x16). This tree examines whether there are continuous 16x8 or 8x16 sub-blocks that might result in a better prediction.
The output of this node is the 16x16 sub-mode decision mode that
should be used for coding the MB: 16x16, 16x8 or 8x16. When
the node decision is 16x8 or 8x16 the coding mode is finalized. In
the other case, the evaluation continues in node 4, where the final
decision will be made.
Node 3. The inputs for this node are the MBs classified by the
node 1 as 8x8. This node evaluates only the H.264 8x8 modes
using the third WEKA tree and selects the best option: 8x8, 8x4,
4x8 or 4x4. As explained in the previous section, this tree is run 4
times, once for each of the four sub-macroblocks in the MB. This
tree is different from the others because this one only uses four
means and four variances to make the decision.
Node 4. The inputs for this node are skip-mode MBs in the
MPEG-2 bitstream classified by the node 1, or the 16x16 MBs
classified by the node 2. This node evaluates only the H.264
16x16 mode (without the sub-modes 16x8 or 8x16). Then, the
node selects the best option, skip or inter 16x16.
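Putting the four nodes together, the run-time mode decision amounts to a small hierarchical dispatch. The sketch below uses hypothetical classifier objects (tree1 to tree4 stand for the trained node classifiers, and the feature arguments for the ARFF-style vectors described earlier); it mirrors the node logic above rather than any particular library API.

def decide_mb_mode(mb_features, sub8x8_features, tree1, tree2, tree3, tree4):
    """mb_features: feature vector of the whole MB; sub8x8_features: list of
    four feature vectors, one per 8x8 sub-block (4 means + 4 variances each)."""
    first = tree1.predict(mb_features)            # Node 1: skip, intra, 16x16 or 8x8
    if first == "intra":
        return "intra"
    if first == "8x8":
        # Node 3 is run once per 8x8 sub-block
        return [tree3.predict(f) for f in sub8x8_features]
    if first == "16x16":
        second = tree2.predict(mb_features)       # Node 2: 16x16, 16x8 or 8x16
        if second in ("16x8", "8x16"):
            return second
    # remaining cases (skip, or 16x16 confirmed by Node 2) go to Node 4
    return tree4.predict(mb_features)             # Node 4: skip vs. inter 16x16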
Since the MB mode decision, and hence the thresholds, depend on
the quantization parameter (QP) used in the H.264 encoding
stage, the mean and variance threshold will have to be different at
each QP. The two solutions here are: 1) develop the decision trees
for each QP and use the appropriate decision tree depending on
the QP selected and 2) develop a single decision tree and adjust
the mean and variance threshold used by the trees based on the
QP. The first option is complex as we have to develop and switch
between 52 different decision trees resulting in 156 WEKA trees
in a transcoder. Since the QP used by H.264 is designed to change
the quantization step size and the relationship between the QPs is
well defined, this relationship can be used to adjust the mean and
variance thresholds. The proposed transcoder uses a single
decision tree developed for a mid-QP of 25 and then adjusted for
other QPs. Since the quantization step size in H.264 doubles when
QP increases by 6, the thresholds are adjusted by 2.5% for a
change in QP of 1. For QP values higher than 25, the thresholds
are decreased and for QP values lower than 25 thresholds are
proportionally increased.
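The adjustment described above can be expressed directly; a minimal sketch with our own naming:

def adjust_threshold(threshold_at_25, qp, step=0.025):
    """Scale a mean/variance threshold trained at QP 25 to another QP:
    2.5% per unit of QP, lowered for QP > 25 and raised for QP < 25."""
    return threshold_at_25 * (1.0 - step * (qp - 25))

# e.g. a threshold of 100 at QP 25 becomes 87.5 at QP 30 and 112.5 at QP 20.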
Figure 6 shows an example of the results obtained by applying
our proposed algorithm. Figure 6a illustrates the residual for the
MPEG-2 encoded Tempete sequence. Figures 6b and 6c show the
mean and variance of the residual. Figures 6e and 6f show the differences between the inter mode selection made by the H.264 standard (with the RD-optimized option enabled) and the proposed algorithm, with a QP value of 10. From these
figures, it is clear that our algorithm obtains very similar results to
those obtained using the full estimation of the H.264 standard.
Figure 6. Macroblock partitions generated by the proposed algorithm for the first P-frame of the Tempete sequence (CIF, QP = 10): (a) MPEG-2 residual (+128); (b) mean of the MPEG-2 residual (+128); (c) variance of the MPEG-2 residual; (d) legend of macroblock and sub-macroblock types (Inter 16x16, Skip, Intra, Inter 8x16, Inter 16x8, Inter 8x8 macroblocks; Inter 8x8, 8x4, 4x8, 4x4 sub-macroblocks); (e) inter modes selected by H.264 (RD opt); (f) inter modes selected by our proposal.
PERFORMANCE EVALUATION
The proposed low complexity MB coding mode decision
algorithm is implemented in the H.264/AVC reference software,
version JM 10.2 [12]. Figure 7 shows the overall operation of the
proposed transcoder. The MPEG-2 video is decoded and the
information required by the decision trees is gathered in this
stage. The additional computation here is the computation of the
mean and variance of the 4x4 sub-blocks of the residual MBs. The
MB coding mode decision determined by the decision trees is
used in the low complexity H.264 encoding stage. This is an H.264 reference encoder with the MB mode decision replaced by the simple mode assignment from the decision tree. The H.264 video encoder takes as input the decoded MPEG-2 video (pixel data)
and the MB mode decision from the decision tree and encodes the
H.264 video. The MPEG-2 motion vectors are not used and the
encoder performs the motion estimation just for the final MB
mode determined by the decision tree.
Figure 7. Proposed transcoder (MPEG-2 video in, H.264 video out).
The performance of the proposed very low complexity transcoder
is compared with a reference transcoder comprised of a full
MPEG-2 decoder followed by a full H.264 encoder. We compare
the performance of our proposal to the full H.264 encoder when
the RD-cost (with and without FME option enabled) and the SAE-cost
(with and without FME option enabled) are used. The metrics
used to evaluate the performance are the reduction in the
computational cost and rate distortion function. The time results
reported are for the H.264 encoding component as the MPEG-2
decoding cost is the same for both the proposed and reference
encoders.
We have conducted an extensive set of experiments with videos
representing a wide range of motion, texture, and color.
Experiments were conducted to evaluate the performance of the
proposed algorithm when transcoding videos at commonly used
resolutions: CCIR-601, CIF, and QCIF. The input to the
transcoder is a high quality MPEG-2 video. Since the proposed
transcoder addresses transcoding P frames in MPEG-2 to H.264 P
frames, MPEG-2 bitstreams were created without B frames. Since
the B frames, which are much smaller than P frames, are not used
in the input video, the video has to be encoded at higher than the
typical encoding rates for equivalent broadcast quality. Table 1
shows the bitrates used for the input MPEG-2 video. The
experiments have shown that the proposed approach performs
extremely well across all bitrates and resolutions.
Table 1. Bitrates for the input sequences
Format                 Bitrate
CCIR-601 (720x480)     5 Mbps
CIF (352x288)          1.15 Mbps
QCIF (176x144)         0.768 Mbps
The sequences have been encoded with H.264 using the QP
factors ranging from 5 up to 45 in steps of 5. This corresponds to
the H.264 QP range used in most practical applications. The size
of the GOP is 12 frames; the first frame of every GOP was encoded as an I-frame, and the rest of the frames of the GOP were encoded as P-frames. The rate control was disabled for all the
simulations. The ProfileIDC was set to High for all the
simulations, with the FRExt options enabled. The simulations
were run on a P4 HT at 3.0 GHz Intel machine with 512 MB
RAM. The results are reported for six different sequences: two for
each of the three resolutions shown in Table 1.
Figure 8. RD results for RD-cost without FME option: PSNR [dB] versus bit rate [kbit/s] for (a) CCIR-601 sequences Ayersroc and Martin (720x480), (b) CIF sequences Paris and Tempete (352x288), and (c) QCIF sequences Foreman and News (176x144), all 200 frames at 25 Hz, comparing H.264 (RD opt) with the proposed transcoder (RD opt).
Figure 8 shows the RD results for the reference and proposed
transcoder with RD optimization enabled and fast motion
estimation (FME) disabled. Figure 9 shows the RD results for the
reference and proposed transcoder with RD optimization enabled
and fast motion estimation (FME) enabled. As seen from the
figures, the PSNR obtained with the proposed transcoder deviates
slightly from the results obtained when applying the considerably more complex reference transcoder. Compared with the reference
transcoder, the proposed transcoder has a PSNR drop of at most
0.3 dB for a given bitrate and bitrate increase of at most 5% for a
given PSNR. This negligible drop in RD performance is more
then offset by the reduction in computational complexity. Tables
2 and 3 show the average encoding time per frame given in
milliseconds. As shown in Table 2 and Table 3, the transcoding
time reduces by more than 80% with RD optimization, and more
than 90% with FME enabled for both the reference and proposed
transcoders.
Figure 9. RD results for RD-cost with FME option: PSNR [dB] versus bit rate [kbit/s] for (a) CCIR-601 sequences Ayersroc and Martin, (b) CIF sequences Paris and Tempete, and (c) QCIF sequences Foreman and News, all 200 frames at 25 Hz, comparing H.264 (RD opt, Fast ME) with the proposed transcoder (RD opt, Fast ME).
Figure 10 shows the RD results for the reference and proposed
transcoder with SAE-Cost (RD optimization disabled) and fast
motion estimation (FME) disabled. Figure 11 shows the RD
results for the reference and proposed transcoder with SAE-Cost
(RD optimization disabled) and fast motion estimation (FME)
enabled. As seen from the figures, in some cases the proposed transcoder has better results than the reference transcoder. This happens because the best solution is obtained by enabling RD optimization, and in the experiments reported in these figures we are comparing the fastest configuration of an H.264 encoder (SAE cost) with our proposed reduced-complexity transcoder. With
SAE based encoding (RD-optimization disabled), the proposed
transcoder continues to outperform the reference transcoder
computationally (Tables 2 and 3). The transcoder still maintains a
PSNR drop of less than 0.3 dB and bitrate increase of less than
5%. The computational cost is reduced by over 38% for the SAE
case and by over 82% with FME enabled for both the reference
and proposed transcoders.
Table 2. Mean encoding time (milliseconds) per frame with the reference transcoder
Sequence   RD Opt   RD Opt + FME   SAE    SAE + FME
Martin     7370     6420           2110   940
Ayersroc   7650     6820           2095   1030
Paris      2305     2020           590    235
Tempete    2360     2050           605    290
Foreman    565      495            155    68
News       550      470            150    55
Table 3. Mean encoding time (milliseconds) per frame with the proposed transcoder
Sequence   RD Opt   RD Opt + FME   SAE    SAE + FME
Martin     1460     470            1190   170
Ayersroc   1620     670            1160   190
Paris      415      95             360    45
Tempete    445      135            360    53
Foreman    102      24             93     12
News       103      21             92     11
Table 4. Mean time reduction (%) per frame with the proposed transcoder
Sequence   RD Opt   RD Opt + FME   SAE     SAE + FME
Martin     80.19    92.68          43.60   81.91
Ayersroc   78.82    90.18          44.63   81.55
Paris      82.00    95.30          38.98   80.85
Tempete    81.14    93.41          40.50   81.72
Foreman    81.95    95.15          40.00   82.35
News       81.27    95.53          38.67   80.00
Based on the results shown in the Tables 2 and 3, the proposed
transcoder with SAE and FME has the lowest complexity. The
proposed transcoder with RD optimization and FME is still faster
than the fastest case of the reference transcoder (SAE + FME).
Using FME reduces the complexity substantially. Selecting RD
optimization with the proposed transcoder doubles the complexity
compared with SAE+FME case. The decision to enable RD
optimization can be based on the operating bitrates and sensitivity
to the PSNR drop. At higher bitrates, the RD Opt + FME option gives about 0.6 dB better quality than the SAE + FME option; this doubles the complexity for a gain of 0.6 dB. However, at lower bitrates, the PSNR gain reduces to about 0.3 dB.
Figure 10. RD results for SAE-cost without FME option: PSNR [dB] versus bit rate [kbit/s] for (a) CCIR-601 sequences Ayersroc and Martin, (b) CIF sequences Paris and Tempete, and (c) QCIF sequences Foreman and News, all 200 frames at 25 Hz, comparing H.264 (SAE) with the proposed transcoder (SAE).
Table 4 summarizes the reduction in the computational cost due
to the proposed machine learning based mode decision algorithms
in the proposed transcoder. With RD optimization and FME, the
computational cost is reduced by over 90%. The cost reduction
reaches as high as 95.5% for QCIF sequences. With SAE and
FME, the computational cost is reduced by over 80%. This reduction comes at the cost of slightly reduced quality; the quality loss, however, is very small and negligible for most video applications. Table 5 shows the quality variation versus time reduction of the proposed transcoder with respect to the reference transcoder for the same input bitrates shown in Table 1, with reductions in computational complexity reaching 96%. As shown in the table, using
the proposed transcoder reduces the PSNR by at most 0.3dB with
RD optimization enabled and by at most 0.1 dB with SAE cost
based transcoder. Our results show that the proposed algorithm is
able to maintain a good picture quality while considerably
reducing the number of operations to be performed in all the
scenarios.
Figure 11. RD results for SAE-cost with FME option: PSNR [dB] versus bit rate [kbit/s] for (a) CCIR-601 sequences Ayersroc and Martin, (b) CIF sequences Paris and Tempete, and (c) QCIF sequences Foreman and News, all 200 frames at 25 Hz, comparing H.264 (SAE, Fast ME) with the proposed transcoder (SAE, Fast ME).
Table 5. Quality variation vs. time reduction (for transcoding rate)
                                     Quality variation from reference      Time reduction from reference
                                     transcoder (dB)                       transcoder (%)
Sequence   MPEG-2 bit rate (Mbps)    RD Opt   RD FME   SAE    SAE FME      RD Opt   RD FME   SAE    SAE FME
Ayersroc   5.0                       -0.3     -0.3     0.0    -0.1         80.0     90.5     43.3   82.3
Martin     5.0                       -0.2     -0.2     -0.1   -0.1         80.5     92.8     42.1   82.0
Tempete    1.15                      -0.2     -0.2     0.0    0.0          80.0     93.8     41.1   82.5
Paris      1.15                      -0.3     -0.3     0.0    -0.1         81.6     95.6     38.5   80.7
Foreman    0.768                     -0.3     -0.3     0.0    0.0          83.5     95.5     37.4   82.6
News       0.768                     -0.2     -0.2     0.0    0.0          84.1     96.0     35.1   81.1
CONCLUSIONS
In this paper, we proposed a novel macroblock partition mode
decision algorithm for inter-frame prediction to be used as part of
a highly efficient MPEG-2 to H.264 transcoder. The proposed algorithm uses machine learning techniques to exploit the correlation between the MPEG-2 MC residual and the H.264 coding modes. The WEKA tool was used to develop decision trees for
H.264 coding mode decision. The proposed algorithm has very
low complexity as it only requires the mean and variance of the
MPEG-2 residual and a set of rules to compare the mean and
variance against a threshold. The proposed transcoder uses a
single decision tree with adaptive thresholds based on the
quantization parameter selected in the H.264 encoding stage. The
proposed transcoder was evaluated using MPEG-2 videos at
CCIR, CIF, and QCIF resolutions. Our results show that the
proposed algorithm is able to maintain a good picture quality
while considerably reducing the computational complexity by as
much as 95%. The reduction in computational cost has negligible
impact on the quality and bitrate of the transcoded video. The
results show that the proposed transcoder maintains its
performance across all resolutions and bitrates. The proposed
approach to transcoding is novel and can be applied to develop
other transcoders as well.
Our future plans will focus on further reducing the complexity of
the proposed transcoder by reusing the MPEG-2 motion vectors
followed by a motion vector refinement. By reusing the motion
vector, we believe, real-time transcoding of CIF resolution video
at 30 FPS is within reach.
REFERENCES
[1] ITU-T Recommendation H.264, "Advanced Video Coding for Generic Audiovisual Services," May 2003.
[2] Implementation Studies Group, "Main Results of the AVC Complexity Analysis," MPEG Document N4964, ISO/IEC JTC1/SC29/WG11, July 2002.
[3] T. Shanableh and M. Ghanbari, "Heterogeneous Video Transcoding to Lower Spatio-Temporal Resolutions and Different Encoding Formats," IEEE Transactions on Multimedia, vol. 2, no. 2, June 2000.
[4] A. Vetro, C. Christopoulos, and H. Sun, "Video Transcoding Architectures and Techniques: An Overview," IEEE Signal Processing Magazine, vol. 20, no. 2, pp. 18-29, March 2003.
[5] H. Kalva, A. Vetro, and H. Sun, "Performance Optimization of the MPEG-2 to MPEG-4 Video Transcoder," Proceedings of the SPIE Conference on Microtechnologies for the New Millennium, VLSI Circuits and Systems, May 2003.
[6] S. Dogan, A. H. Sadka, and A. M. Kondoz, "Efficient MPEG-4/H.263 Video Transcoder for Interoperability of Heterogeneous Multimedia Networks," IEE Electronics Letters, vol. 35, no. 11, pp. 863-864.
[7] H. Kalva, "Issues in H.264/MPEG-2 Video Transcoding," Proceedings of the Consumer Communications and Networking Conference, January 2004.
[8] Y. Su, J. Xin, A. Vetro, and H. Sun, "Efficient MPEG-2 to H.264/AVC Intra Transcoding in Transform-Domain," IEEE International Symposium on Circuits and Systems (ISCAS 2005), vol. 2, pp. 1234-1237, May 2005.
[9] B. Petljanski and H. Kalva, "DCT Domain Intra MB Mode Decision for MPEG-2 to H.264 Transcoding," Proceedings of ICCE 2006, pp. 419-420, January 2006.
[10] Y.-K. Lee, S.-S. Lee, and Y.-L. Lee, "MPEG-4 to H.264 Transcoding using Macroblock Statistics," Proceedings of ICME 2006, Toronto, Canada, July 2006.
[11] X. Lu, A. M. Tourapis, P. Yin, and J. Boyce, "Fast Mode Decision and Motion Estimation for H.264 with a Focus on MPEG-2/H.264 Transcoding," Proceedings of the 2005 IEEE International Symposium on Circuits and Systems (ISCAS), Kobe, Japan, May 2005.
[12] Joint Video Team (JVT) of ISO/IEC MPEG and ITU-T VCEG, Reference Software to Committee Draft, JVT-F100 JM10.2, 2006.
[13] G. Sullivan and T. Wiegand, "Rate-Distortion Optimization for Video Compression," IEEE Signal Processing Magazine, vol. 15, no. 6, pp. 74-90, November 1998.
[14] T. Wiegand et al., "Rate-Constrained Coder Control and Comparison of Video Coding Standards," IEEE Transactions on Circuits and Systems for Video Technology, vol. 13, no. 7, pp. 688-703, July 2003.
[15] A. M. Tourapis, O. C. Au, and M. L. Liou, "Highly Efficient Predictive Zonal Algorithms for Fast Block-Matching Motion Estimation," IEEE Transactions on Circuits and Systems for Video Technology, vol. 12, no. 10, October 2002.
[16] Z. Chen, P. Zhou, and Y. He, "Fast Integer Pel and Fractional Pel Motion Estimation for JVT," JVT 6th Meeting, Awaji, December 2002.
[17] M. Yang, H. Cui, and K. Tang, "Efficient Tree Structured Motion Estimation using Successive Elimination," IEE Proceedings - Vision, Image and Signal Processing, vol. 151, no. 5, October 2004.
[18] I. H. Witten and E. Frank, "Data Mining: Practical Machine Learning Tools and Techniques," 2nd Edition, Morgan Kaufmann, San Francisco, 2005.
[19] J. R. Quinlan, "C4.5: Programs for Machine Learning," Morgan Kaufmann, 1993.
| H.264;Inter-frame;Machine Learning;Transcoding;MPEG-2
212 | Video-Streaming for Fast Moving Users in 3G Mobile Networks | The emergence of third-generation (3G) mobile networks offers new opportunities for the effective delivery of data with rich content including multimedia messaging and video-streaming. Provided that streaming services have proved highly successful over stationary networks in the past, we anticipate that the same trend will soon take place in 3G networks. Although mobile operators currently make available pertinent services, the available resources of the underlying networks for the delivery of rich data remain in-herently constrained. At this stage and in light of large numbers of users moving fast across cells, 3G networks may not be able to warrant the needed quality-of-service requirements. The support for streaming services necessitates the presence of content or media servers properly placed over the 3G network; such servers essen-tially become the source for streaming applications. Evidently, a centralized approach in organizing streaming content might lead to highly congested media-nodes which in presence of moving users will certainly yield increased response times and jitter to user requests . In this paper, we propose a workaround that enables 3G networks to offer uninterrupted video-streaming services in the presence of a large number of users moving in high-speed. At the same time, we offer a distributed organization for the network's media-servers to better handle over-utilization. | INTRODUCTION
The third generation (
3G) mobile phone system UMTS enables
better quality and allows for more convenient use of multimedia
messaging and video-services by offering higher bandwidth and
lower latency than its GSM and GPRS predecessors [15, 1].
UMTS furnishes up to 2 Mbps for indoor and up to 384 Kbps for outdoor environments. Clearly, much improvement in terms of allocated resources has been made for the handling of "rich" data including
multimedia messages and video-services. Nevertheless, the
available resources still present significant limitations for the scale
up of services when very large number of clients are present in a
cell of a network. Perhaps, the most daunting challenge comes from
moving users who access multimedia data and video feeds with the
help of their mobile phones and PDAs while traveling on either
private vehicles or mass transportation means such as commuter
trains and public busses. Evidently, a large number of concurrent
connections soliciting data resources in a cell and being handled
in real-time pose significant capacity problems for the underlying
3G network. The situation becomes even more challenging when
users attempt to follow streaming sources while on the move. We
consider streaming services to be of key importance as they will
ultimately offer information on demand for the moving user in any
geographic position and at any time. In order to facilitate video
streaming over
UMTS networks a number of issues have to be addressed
so that users do not experience delays and discontinuities
during playback. The two core aspects that require attention are
the variations of the available bandwidth as users enter and leave
cells as well as the effective management of handovers as roaming
users attach to different base-stations along their trajectory. The
problem of graceful transition when moving between base-stations
becomes more critical when the users are on high-speed motorways
. In this case, handovers become more frequent and traffic
load at successive base-stations may vary. In this paper, we outline
the above emerging problem and propose a scheme that allows for
not only improved network resource sharing but also for enhanced
management of streaming sources to the mobile user. It is expected
that base-stations are to transmit in different bitrates throughout the
journey of an individual as cells will undoubtedly present diverse
levels of congestion and availability of connections.
When considering vehicular users in general, one can exploit the
fact that the user's trajectory can be predicted in a satisfactory manner
. An early method to attain this goal is to keep an aggregate history
of observations made regarding the movement of users within
each cell [9]. Based on this information, probability density functions
for the prediction for the next-cell-to-move can be derived and
used. Speed limits imposed by traffic authorities and road signals can
also assist in the more accurate estimation of a user's average speed.
In addition, the user's direction can be predicted reasonably well by
keeping track of her trajectory thus far. Although a model for precise
prediction is beyond the scope of this paper, we can assume
that there are already techniques that can offer a good estimation
of a moving user path. For instance, an individual's geographic
location could be tracked with the assistance of UMTS
Location
Service (LCS) [2] that can identify the cell that a user presently
appears in. If a user is moving along a highway, one could easily
estimate not only the direction of his movement but also his average
speed along a given trajectory. Finally, the soon anticipated incorporation
of Global Positioning System (GPS) receivers into mobile
phones through
A-GPS features [30] will help in the very accurate
user positioning and extraction of their movement characteristics.
It is our conjecture that at this stage, simply knowing the overall
direction of a user's trajectory in conjunction with the highway that
she travels on can ensure timely video-streaming and playout continuity
for users.
In our streaming environment, there exist three distinct types of
synergistic computing systems: media-servers, base-stations, and
user equipment. These systems are organized in three functional
layers as Figure 1 depicts. The role of the media-servers is to predominantly
manage content in a highly distributed fashion where
fast content retrieval and data resilience can be guaranteed. Base-stations
handle all user initiated connections to the
3G network and
through their channels offer users requested data. The last tier of
Figure 1 consists of cellular phones and PDAs equipped with appropriate
video-players and featuring minimal buffer capabilities to
support streaming.
Figure 1: Three-Tier Organization for Streaming Provision (media-servers connected to base-stations over wired links; base-stations reach the user equipment over the WCDMA radio interface).
Media and base-stations are internetworked via high-speed wired links, while UMTS offers wireless connections between user equipment and base-stations. This distributed media-server architecture provides for dividing large video streams into multiple segments
[34, 14]. A media-server initially retrieves a solicited video
object either from its storage units or from another remote media-server
. In this paper, we take the approach that instead of transmitting
the entire object to a single base-station, we first segment it
and then forward its segments to the base-stations along the user's
path. Our rationale is that an individual base-station handles only a
section of the video file; the size of the section in discussion is commensurate with the duration of a user's trip inside the cell. Clearly,
video-object segmentation reduces both network transmission costs
between media and base-stations and start-up latencies that users
experience upon cell entrance. On the other hand, segmentation
might get more complex if a user remains longer in a cell than
her estimated time and so she may face delays in the reception of
frames. We address this issue by continual monitoring of both user
speed and position and by doing so giving the base-station the option
to receive additional video increments and sufficiently feed a
user at all times.
Our work requires minimal buffer capabilities for mobile stations
so that a sufficient number of frames can be accommodated.
Buffer presence assures that the playout does not stop if the base-station
emits at lower bitrate due to
3G network congestion. We
propose a rate adaptation scheme which allows a base-station to
adjust its transmitting bitrate at any time according to base-station
load and the states of the client-side buffer. The
UMTS streaming
class defines that the bitrate assigned to a moving user is guaranteed
even though it might be less than the maximum video bitrate [15].
The base-station's decision of accepting a video streaming session
has an impact on all subsequent base-stations that follow up on the
video delivery process. While a streaming session is in progress,
the load of base-stations-to-service may dynamically change and
potentially lead to session drops. Such drops are highly undesirable
and we adopt a policy to address this issue. Our proposed scheme
gives a base-station the opportunity to appropriately alter the transmission
bitrate by taking into account the current base-station load
and simultaneously ensuring that the client buffer does not starve.
The rest of the paper is organized as follows: Section 2 presents
the overall system architecture and examines the interaction between
media-servers and base-stations. Section 3 proposes our bitrate
adaptation scheme and Section 4 discusses the results of our
preliminary experimentation. Finally, related work and conclusions
can be found in Sections 5 and 6 respectively.
MEDIA-SERVERS/BASE-STATIONS INTERACTION
The fast movement of users via different cells of the
3G network
imposes a set of new requirements for the entire video delivery
system. As the user relocates rapidly, she faces a high number
of handovers during her journey and, as a consequence, a large
video stream has to be fetched from different base-stations in order
to warrant continuous playback. As suggested earlier, we assume
the deployment of dedicated media-servers which undertake both
the storage and distribution of video-streams to the underlying base-stations. It is imperative that media-servers, base-stations, and user-equipment
involved in a streaming session must cooperate in order
to guarantee QoS for the video reception of the moving individual
. In this section, we outline our overall architecture, discuss the
content delivery process that media-servers carry out, and present
specific algorithms for video segmentation and content distribution.
2.1
Architecture
The three distinct types of cooperative computing systems
(shown in Figure 1) organized in a multi-tier architecture constitute
our proposed streaming environment. We assume that base-stations
communicate with the mobile stations through the
WCDMA radio
interface [15]. Each cell of the
UMTS network is served by
a different base-station whose responsibility is to deliver the video
streams to its constituent mobile users. A streaming service necessitates
the use of media servers that will handle the storage and delivery
of video files [3]. Although we could adopt a centralized approach
to accommodate the media in delivery, high contention and
resource over-utilization would impact user request response times
greatly. Clearly a distributed approach that webs media-servers together
is required. High-speed wired communication means link
these servers and all share required meta-data structures.
Due to incurred costs and the fact that users move at high-speeds,
having a dedicated media server for each base-station would be a
poor decision. If the mobile user is traveling at a speed of 100 km/h
and the cell radius is 0.7 km, then he will pass through the given
cell in 50.4 seconds at maximum, assuming an hexagonal shape.
This implies that the number of frequent handovers taking place
increase the interaction among media-servers that will have to be
involved throughout the streaming session. Furthermore, in order
to avoid under-utilization of media-servers and strike a balance in
the aggregate use for facilitating streaming, we group
3G cells into
groups as Figure 2 depicts. This assignment is expected to happen
in a static manner although it could be modified to reflect emerging
new realities in the core network. In this regard, Figure 2 shows
a network layout in which sets of sixteen cells are configured to
function as a group. In this example, the mobile user is currently in a cell of group A and is heading towards group F. The media-servers that can be involved in the delivery of video objects are A, D, E and F. Server A is expected to interact with server D, D with E, and E with F. In this chained fashion, we anticipate that
the media servers notify each other about the streaming session of
the oncoming mobile user. In addition, the media-servers send and
receive in pipelined fashion the video object under transmission.
Figure 2: Grouping of Cells
There may be other interactions as well. For instance, there must
be cooperation between media servers A and F if the requested video object is initially located at F. A media server accepts video requests from base-stations residing in its group; if it does not currently have the object, it is responsible for locating it using the meta-data structure and fetching it. Due to the location awareness of our
approach, we assume that each server predominantly stores streams
specific to its own geographic area. For instance, if the route drawn
in Figure 2 crosses a county, video clips showing traffic conditions ahead at specific points may be requested. Similarly, in a city
setting, such requests may entail multimedia location-based virtual
presentations.
2.2
Content Distribution
Provided that a media server has a video clip in place, a straightforward
approach would be to transmit the object into all base-stations
operating in cells located on or near-by the user's trajectory
. This is not only wasteful in both network and base-station
resources but also increases the user-perceived playout latency.
Therefore, we resort to using video segmentation [14], [21] in order
to reduce network transmission costs between the video-holding
media server and its subordinate base-stations. Segmentation also
decreases the start-up latency that users experience upon cell entrance
.
The length of a video-segment, denoted S_t, sent to each base-station
is proportional to the average time that the user is expected
to stay in the specific cell. The process of segmenting the video
streams into chunks of frames of specific duration assumes that the
media servers are aware of the underlying cell configuration. In
particular, a media-server has to be aware of the precise manner
with which a motorway cuts across its subordinate cells, the direction
as well as the speed of moving users. With this data available,
the media-server in discussion can approximate the time that a user
spends in a cell. For example, if a motorist moving at 100km/h has
just entered a cell and departs after traversing a 1km route portion,
the server can compute the duration of the user's stay to be approximately
36 seconds. The media-server can capitalize on this
very information to appropriately segment the streamed video; it
only dispatches enough frames for a playout period of 36 seconds.
The duration of a user's presence within a cell may vary according
to speed changes with clearly lower speeds leading to elongated
stays in the cell and vice versa. Should the speed be decreasing,
the base-station will ultimately require more frames from the media
server than the number predicted once the user appeared in the
cell. Such a request constitutes a "cache miss" which will not be
noted by the user if detected on time and acted upon by the coordinating
base-station. Imposing a minimum threshold in the number
of frames always available for delivery at a base-station may
help overcome such "cache-misses". Therefore, when the number
of frames awaiting transmission on a base-station falls below
the abovementioned threshold, the base-station signals its need for
additional frames to its overseeing media-server; should the latter
act upon this request, additional frames arrive on time at the base-station
for delivery. On the other hand, as soon as a user increases
speed, she will depart the cell earlier than initially anticipated. The drawback here is that the media-server has provided
the base-station with more frames than those eventually needed.
During the handover process, the coordinating media-server has to
generate a video-fragment which in its opening contains frames that
have already been transmitted to the previous base-station but not-yet
-seen by the user.
Our approach allows a base-station to dynamically alter the
transmission bitrate according to its current load. Under light load,
a base-station may opt to increase the transmission rate for a specific
video-stream thus leading to potential frame shortage. To easily
avoid such shortage, we use the minimum allowed vehicle speed
to compute the size of the video-segment S_t to be transported to base-stations:
S_t = Distance / MinimumSpeed    (1)
In most freeways there are authority-posted limits for minimum allowed
speed in each road segment. As media-servers are aware of
the geographic area that they serve, such minimum speed rates are
statically designated for each cell in their jurisdiction. Evidently,
the video-segment size that we potentially use as safety against frame shortage is:
S_t = Distance / MinimumSpeed ≥ Distance / AverageSpeed    (2)
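As a small numerical sketch of Eqs. (1)-(2) and of the frame count computed below (the 1 km path, 60 km/h minimum limit and 25 fps frame rate are illustrative assumptions of ours):

def segment_duration_s(path_length_km, min_speed_kmh):
    """Worst-case stay in the cell, Eq. (1): S_t = Distance / MinimumSpeed."""
    return 3600.0 * path_length_km / min_speed_kmh

def frames_for_segment(path_length_km, min_speed_kmh, frame_rate=25):
    """Number of frames N the media-server ships for one cell."""
    return int(frame_rate * segment_duration_s(path_length_km, min_speed_kmh))

# 1 km of highway with a 60 km/h minimum limit: S_t = 60 s, i.e. 1500 frames at 25 fps.
print(frames_for_segment(1.0, 60.0))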
Algorithms 1 and 2 depict the video segmentation and distribution
that we follow in our media distribution. Upon a new video-streaming
request, we assume that the media-server can efficiently
retrieve the corresponding video-file either from local storage options
or remote servers via its low-latency/high-bandwidth wired
networking infrastructure. The identification of the user's current
location, the precompiled knowledge of the traveled distance within
a cell, in conjunction with the minimum allowed speed on pertinent
highway segments, permit for the estimation of the maximum user stay S_t in a specific cell. Subsequently, the media-server can create the first segment of frames needed for transmission via the base-station to the requesting client. The size of a video-segment is given by V = Σ_{i=1..N} F_i, where F_i is the size of the i-th frame and N is the number of frames in the segment; we can easily compute N by multiplying the frame-rate (frames/second) by the duration of stay in a cell, S_t.
Algorithm 1 Video Segmentation at Media-Server
1: Get Minimum User Speed MinSpeed
2: Get PathLength in cell range
3: p <- last frame transmitted
4: if (New Session) then
5:     S_t = PathLength / MinSpeed
6: else
7:     // Shortage of Frames
8:     S_t = δ * (PathLength / MinSpeed)
9:     // with δ << 1
10: end if
11: V <- Σ_{i=p+1..p+N} F_i, where F_i is the size of the i-th frame and N = FrameRate * S_t
In light of frame shortage, our video-segmentation algorithm dispatches
into the base-station with need an increment of frames.
This is defined as a fixed fraction of the length of
S
t
in the current
cell (line 7 of Algorithm 1). Requests of such increments may
occur multiple times before a motorist leaves a cell due to congestion
.
A handover might find a moving user either serviced by a base-station
in the realm of the current media-server or under the authority
of a completely new media-server along the motorist's path.
In the first case, the media-server initiates the delivery of the next
video-fragment to the next-base-station encountered. The just departed
base-station can help in determining the appropriate stream
position
p from which the segmentation will have to resume. The
duration of the video-segment is computed anew using the same algorithm
that now takes into consideration the data points from the
new cell. Clearly, the length of the route as well as designated minimum
speed limits may be different from those encountered in the
previous cell.
In the second scenario, a handover may force a user to operate
in an entirely new group of cells supported by a new media-server.
In general, the portion of the "not-yet-seen" stream has to be forwarded
from the previous to the new media-server unless the latter
already maintains its own copy. If we are not interested in reducing
the transmission costs, we can transport the entire video object
to the new media-server using the assumed high-speed link.
The media-server now in charge takes over the session identifier of
the moving user and along with user state data from its previous
location can help coordinate the delivery of the video in the new
cell. To enhance coordination among media-servers in the highest-level
of Figure 1, prefetching could be used [7, 34]. We could deploy prefetching of video-segments to media-servers and/or base-stations to facilitate playout continuity and minimize the start-up latencies.
Algorithm 2 Video Distribution from Media-Server to Base-Station(s)
1: while (OutstandingRequests) do
2:     if New Session then
3:         Start session (user's location, video stream, cell ID)
4:         if Video Stream Not in Storage then
5:             Get Video Stream from corresponding Media-Server
6:         end if
7:     end if
8:     if (not(Handover)) then
9:         Apply Video Segmentation Algorithm
10:        Send Video-Segment V to Base-Station
11:    else
12:        if (new Base-Station within Media-Server realm) then
13:            Apply Video Segmentation Algorithm
14:            Send Segment V to Base-Station
15:            Send playback position p to new Base-Station
16:        else
17:            Send Video Stream to next Media-Server
18:            Send playback position p to next Media-Server
19:        end if
20:    end if
21: end while
Users moving with similar speed and nearby to the streaming
user can benefit from the already segmented video stream and start
the playout immediately. Caching efficiency is limited by the fact
that only users with similar traveling behavior may use the video
segments. We can overcome this limitation if the starting point of
the video segment at the next cell corresponds to users traveling
at the maximum speed within the current cell. At the same time,
the total size of the segment caters for users that travel at minimum
speed within the next cell, thus remaining longer in the cell's range.
This ensures that successive base stations hold a sufficient amount
of frames to serve users traveling at different speeds.
RATE ADAPTATION
In this section, we propose a rate adaptation scheme whose objective
is to better serve the overall needs of fast-roaming users.
More specifically, we present a mechanism used by base-stations
to control the rate at which they transmit video to each user when
the cell becomes overloaded and the transmission bitrate eventually
needs to be decreased. In light of this reduction, we seek ways
to avoid discontinuities in user playback and cell bandwidth over-utilization
lowering so the probability for a session drop.
While focusing on bitrate management between the second and
last tier of Figure 1, we assume that pertinent video-segment data
is available at a base-station. Upon session initiation, the size Q of the mobile device buffer becomes known to the managing base-station. In general, we assume that a video-object is divided into
frames of constant duration. Frames that belong to the same file
vary in size depending on the encoding rate and the scene content.
A time domain perspective allows us to control the transmission
rate examining the time interval between transmission of successive
frames rather than their respective sizes. If the inter-departure
time corresponds to the rate instructed by the file's frame rate
1
,
1
Typical frame rate values are 25 frames/sec for the PAL color sys-68
Figure 3: Rate Adaptation Modules within a base-station
the transmission bitrate corresponds to the file's encoding bitrate.
Alterations in the inter-departure times result in the inversely proportional
changes in transmission bitrate.
Let {X_k^i}, k ≥ 1, denote the departure process of the video frames from the base-station for the i-th user. If t_k^i is the departure time of frame k, then X_k^i = t_{k+1}^i - t_k^i is the inter-departure interval for the k-th frame. In the absence of buffering capabilities on the mobile device, the smoothness of {X_k^i} is critical for the smoothness of playback at the user's end. Thus, the following should hold:

P{X_k^i = T} ≈ 1     (3)

where T is the inter-departure interval specified by the video-object frame rate. The buffer support that we assume available at the user-end enables the modification of the {X_k} process, reflecting modifications to the actual transmission bitrate of the base-station.
Video Streaming modules are integral parts of the base-station
configuration and each such module handles the transmission of a
video-stream. Hence, a segment of a video-stream is assigned to
an instance of a Video Streaming module for final delivery to the
user's equipment. A Rate Adaptation (RA) element is assigned to
each user session for the specified video stream. An RA is aware
of the user's buffer size and is responsible for the forwarding of
video frames to the actual Transmitter of the base-station. As information
about the station's load is fed back by the Transmitter,
the Rate Adaptation element regulates the inter-departure process
of video frames from the station to a user, in a way that preserves
playback continuity. Figure 3 depicts the interaction among these
two elements and the Transmitter at a base-station that serves
k
concurrent sessions for the same video-object.
The operation of the Rate Adaptation element is governed by
periodic time intervals of constant duration, termed Control Cycles.
Operating in the time domain, the module is aware of the exact
number of frames the media player at the user-end will need over a
specific period of time to ensure smooth playback. Let A denote the duration of the control cycle. Also, let Q_A > 0 be the occupancy (i.e., number of frames) of the buffer at the beginning of the control cycle and N_A the number of frames that will be reproduced at the user-end during the control cycle. Since N_A frames are requested by the media-player and Q_A frames are already accumulated, the rate adapter needs only transmit N_A - Q_A frames at minimum over the control cycle.
The video frame rate instructs that a frame be transmitted every T = A/N_A msec. Each one of the N_A - Q_A frames transmitted at minimum during the control cycle can therefore depart the base-station at longer intervals, equal to A/(N_A - Q_A). The initial inter-departure time has thus been spread by a tolerance factor δ (δ ≥ 0), where:

A/(N_A - Q_A) = (1 + δ) A/N_A  ⇒  δ = Q_A/(N_A - Q_A)     (4)

The tolerance factor δ is a parameter of the control cycle; δ may turn negative only when Q_A > N_A. Thus, a more specific definition of δ would be:

δ = [Q_A/(N_A - Q_A)] 1{Q_A ≤ N_A} + 1{Q_A > N_A}     (5)
If the RA element forwards frames at the rate instructed by the tolerance factor, the transmission bitrate over the control cycle will be equal to B/(1 + δ), where B is the encoding bitrate of the video-stream. A control cycle during which the base-station transmits at the minimum bitrate instructed by δ is called a degraded cycle. A degraded cycle will lead to zero buffer occupancy at the end of the control cycle, and the tolerance factor for the next control cycle will be equal to zero. Therefore, no two successive degraded cycles may occur.
Non-zero buffer occupancy at the beginning of a control cycle will be present only if the overall transmission rate over the previous control cycles exceeded B. This can be achieved if the RA element forwards frames at a higher rate when the station is underutilized. Let σ denote the speed-up factor, the factor by which the bitrate increases in this case. An expression for the speed-up factor may be obtained if we consider that the maximum transmission bitrate will lead to a full user buffer at the end of the control cycle. If Q_A is the buffer occupancy at the beginning of the cycle, then the station may transmit at maximum Q - Q_A frames over the control cycle. In this case, each frame will be transmitted every A/(Q - Q_A) msec, with the inter-departure interval having been decreased by σ:

A/(Q - Q_A) = (1 - σ) A/N_A  ⇒  σ = 1 - N_A/(Q - Q_A)     (6)

An upgraded cycle will transmit at a rate of B/(1 - σ). The speed-up factor may turn negative only when the free buffer space is less than the frames that will be played back during the control cycle. In this case, the cycle is forced to operate in degraded mode, so that we can avoid buffer overflow.
It is clear that the n-th control cycle may forward frames at a rate B_n in the range of:

B/max{(1 - σ), (1 + δ)} ≤ B_n ≤ B/(1 - σ)     (7)

The respective inter-departure process {X_{n,k}} will be in the range of:

(1 - σ)T ≤ {X_{n,k}} ≤ max{(1 - σ)T, (1 + δ)T}     (8)
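To make Equations (4)-(8) concrete, the following fragment works through a single control cycle with assumed numbers (a 5-second cycle of PAL video and a 10-second client buffer); the concrete values are illustrative only and are not taken from the paper's experiments.

A   = 5000.0     # control cycle duration in msec (assumed)
N_A = 125        # frames played back per cycle (25 fps x 5 s)
Q   = 250        # client buffer size in frames (10 s of video)
Q_A = 25         # frames already buffered when the cycle starts

T = A / N_A                              # nominal inter-departure time: 40 msec
delta = Q_A / (N_A - Q_A)                # tolerance factor, Eq. (4): 0.25
sigma = 1 - N_A / (Q - Q_A)              # speed-up factor, Eq. (6): ~0.44

max_interval = (1 + delta) * T           # degraded pace: 50 msec per frame
min_interval = (1 - sigma) * T           # upgraded pace: ~22.2 msec per frame

B = 1.0                                  # encoding bitrate (normalized)
min_rate = B / max(1 - sigma, 1 + delta) # lower bound of Eq. (7): 0.8 B
max_rate = B / (1 - sigma)               # upper bound of Eq. (7): 1.8 B
print(T, delta, sigma, min_interval, max_interval, min_rate, max_rate)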
The RA element knows at any time the exact number of frames
that have been transmitted to the user, and it also knows the time
that has passed since the session initiation, which corresponds to
the number of frames the playback process has consumed. The difference
between the two values denotes the user buffer occupancy,
so no feedback mechanism is required as far as the user buffer occupancy
is concerned. The algorithm followed by each Rate Adaptation
element in the Video Streaming module is outlined in Algorithm
3.
Algorithm 3 Rate Adaptation element operation
1: // Executed at the beginning of every control cycle
2: // for user i
3: Q_A^i = FramesTransmitted_i - FramesPlayed_i
4: δ = Q_A^i / (FramesCycle - Q_A^i)
5: σ = 1 - FramesCycle / (Q - Q_A^i)
6: MinInterval_i = (1 - σ) T
7: if σ < 0 then
8:   MaxInterval_i = MinInterval_i
9: else
10:  MaxInterval_i = (1 + δ) T
11: end if
12: Interval_i = MinInterval_i + (CellLoadPerc/100) (MaxInterval_i - MinInterval_i)
Since the duration of the control cycle is constant, multiple control
cycles may occur during a user's presence in the range of a
single cell, depending on the size of the cell and the user's speed.
We assume that each cell handover always initiates a new control cycle.
Algorithm 3 allows for alterations in the transmission bitrate by providing upper and lower bounds (i.e., MinInterval_i and MaxInterval_i) to ensure the smoothness of the playout process. The
choice of the actual bitrate within the specified range, at which the
base-station transmits during a control cycle, is ultimately a function
of the station's load at the time. This load is continually estimated
with the help of the Transmitter module. This feedback enables
each Rate Adaptation element to cater for buffer occupancy
increase, taking advantage of low system load periods. At the same
time, by detecting high system load, the Rate Adaptation element
lowers the transmission bandwidth, allowing for more sessions to
be accommodated, while at the same time the playback process is
not distorted.
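As a sketch only, the per-cycle computation of Algorithm 3 can be written as the following Python function. The function and parameter names are ours, the load percentage is assumed to arrive through the Transmitter feedback, and no guard is added for the degenerate case in which the buffered frames equal the frames of a whole cycle.

def control_cycle_interval(frames_transmitted, frames_played,
                           frames_per_cycle, buffer_size, T, cell_load_pct):
    # Sketch of Algorithm 3 for one user and one control cycle.
    # T is the nominal inter-departure time given by the frame rate;
    # cell_load_pct (0..100) is the load estimate fed back by the Transmitter.
    q_a = frames_transmitted - frames_played               # buffer occupancy Q_A^i
    delta = q_a / (frames_per_cycle - q_a)                 # tolerance factor
    sigma = 1.0 - frames_per_cycle / (buffer_size - q_a)   # speed-up factor

    min_interval = (1.0 - sigma) * T                       # fastest admissible pace
    if sigma < 0:                                          # free buffer space too small
        max_interval = min_interval
    else:
        max_interval = (1.0 + delta) * T                   # slowest admissible pace

    # The heavier the load, the closer the pace moves to the slowest admissible one.
    return min_interval + (cell_load_pct / 100.0) * (max_interval - min_interval)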
EVALUATION RESULTS
In order to reproduce and experiment with the behavior of our
proposed architecture and bitrate adaptation scheme, we have set up a simulation testbed. We have assumed a user trajectory with a duration of 200 seconds. The user traverses numerous cells of different sizes. Each base-station is equipped with the Video Streaming module as described earlier. A Control Cycle of 5 seconds is adopted by all elements. The buffer size at the user-equipment is assumed to be large enough to store 10 seconds of video, which is readily met by modern cellular phones and/or PDAs. We designate ten levels of base-station load, with the load changing at random times. The maximum duration of each load state is 30 seconds. The PAL color system is assumed for the video being transmitted, so the default inter-departure time for each frame is set at 40 msec.
At the beginning of each control cycle, the Rate Adaptation element
applies the proposed Algorithm 3. The testbed initially computes
the tolerance and speed-up factors thus generating the acceptable
inter-departure times range. The actual inter-departure
time for the control cycle is proportional to the station's load at the
time. If the station is lightly loaded, the minimum interdeparture
time (maximum bandwidth) is applied. Conversely when the base-station
is heavily loaded, the frames are forwarded to the transmitter
at the minimum rate instructed by the maximum interdeparture
time. For intermediate load levels an appropriate value from the
inter-departure times range is selected according to Algorithm 3 in
a uniform fashion.
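A rough driver for such a testbed could look like the loop below, which reuses the control_cycle_interval() sketch given earlier; the load model, the clamping of the buffer occupancy, and all constants are simplifying assumptions rather than the authors' implementation.

import random

random.seed(0)
T_NOMINAL, CYCLE = 40.0, 5000.0             # msec: PAL frame spacing, control cycle
FRAMES_PER_CYCLE = int(CYCLE / T_NOMINAL)   # 125 frames consumed per cycle
BUFFER = 250                                # 10 seconds of frames

occupancy, load = 0, 5                      # buffered-but-unplayed frames, load level
for cycle in range(40):                     # 40 cycles = 200-second trajectory
    if random.random() < 0.3:               # load level changes at random times
        load = random.randint(1, 10)
    q_a = min(max(occupancy, 0), FRAMES_PER_CYCLE - 1)   # keep divisions finite
    interval = control_cycle_interval(q_a, 0, FRAMES_PER_CYCLE,
                                      BUFFER, T_NOMINAL, load * 10)
    sent = min(int(CYCLE / interval), BUFFER - max(occupancy, 0))
    occupancy += sent - FRAMES_PER_CYCLE    # playback consumes 125 frames per cycle
    print(cycle, load, round(100 * T_NOMINAL / interval), max(occupancy, 0))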
Figure 4 shows the evolution of the base stations' load during the
user's trajectory, over all cells that the individual travels in. Having
defined load of value 5 as "normal" load, the base-station load
remains relatively high through the user trajectory with a few very
short periods of low load.
Figure 4: Base-station load during user trajectory (load level vs. time).
Figure 5 shows the applied transmission bitrate, along with the
minimum and maximum bitrates allowed for every control cycle.
The
y-axis represents the percentage of the actual transmission bitrate
to the video encoding bitrate. The bitrate is inversely proportional
to the inter-departure times, which are illustrated in Figure 6.
If we compare the curves of Figures 5 and 4, we can easily establish
that the actual bitrate applied is a function of the base-station
load and the calculated allowed bitrate range. At times of high
load, the applied bitrate is closer to the minimum acceptable bitrate
and conversely at times of light load, the applied bitrate is closer
to the maximum acceptable bitrate. Figure 6 shows that the interdeparture
times are indeed proportional to the base-station load.
Figure 5: Allowed transmission bitrate limits and applied transmission bitrate (percentage of the encoding bitrate vs. time).
Figure 6: Applied frame inter-departure interval (msec) vs. time.
We show the user buffer occupancy throughout the trajectory in
Figure 7. The buffer does not starve at any time suggesting the
usefulness of our proposed scheme. The occupancy increases at
times where the base station load falls below the normal load.
Under overall higher-than-normal base-station load settings, our
tests show that the playback continuity is preserved by only taking
advantage of relatively small periods of station underutilization to
increase the transmission bitrate. At all times, the range of acceptable transmission rates includes the bandwidth already guaranteed by the network upon session acceptance. Therefore, the network was never forced to transmit at a higher-than-agreed bitrate. At times of station over-utilization, the decreased transmission bitrate allows more calls to be accommodated, significantly decreasing the probability of a session drop.
Figure 7: User buffer occupancy (frames) vs. time.
A system that adopts no rate adaptation scheme would constantly require a transmission bitrate equal to 100% of the video encoding bitrate throughout the session. Although the bitrate of real-time video streaming sessions is guaranteed by the UMTS specifications, in near-capacity situations the network would have to either drop a session or be forced to transmit at a lower bitrate. The former case is clearly undesirable and the latter generates jitter effects for the end-user. This would happen even if the station gets overloaded only for a period of time equal to the proposed scheme's control cycle duration. Discontinuities in the playback process may also be observed in a system adopting the proposed rate adaptation scheme, but only when the base-station is constantly load-saturated, thus not allowing for any upgraded cycles to take place.
RELATED WORK
There has been a large amount of reported work in related areas
that include caching for video systems, management of moving objects
, and rate adaptation for streaming systems on wired networks.
Data caching and mirroring have been proposed as a way to help the scalability of video delivery systems [32]. By placing content closer to the consuming clients, not only can network costs be curtailed but the load of streaming servers can also be reduced. Various aspects of the use of proxy servers for video objects have been examined in [14, 26, 23, 10]. In [14], the segmentation and caching
of streaming objects is proposed and the merging of requests that
are temporally related is investigated. This merging idea has been
used in [8, 18, 11] to save bandwidth in light of requests for the
same video object that arrive closely in time.
The partial caching of two successive intervals of a video stream
is proposed in [10] as a way to speed up the serving of follow-up requests for the same video object. In [26], the caching of initial
frames of a video object is used in a proxy-setting to reduce startup
latency. In the same direction, the storage of the bursty parts of a
video-stream in a proxy is advocated in [34]; the remaining parts of
the video are directly retrieved from the source helping significantly
reduce peak bandwidth requirements in the backbone. A caching
mechanism for layered encoded multimedia streams is suggested in
[23]; the objective of the technique is to selectively deliver stream
quality by differentiating on the client network connection. Stream
quality differentiation is also addressed in [24], in conjunction with
a seamless handoff mechanism for mobile users.
A formal spatiotemporal framework for modeling moving objects
and a query language is discussed in [27]. Efficient techniques
for indexing moving objects in one and two dimensions are
proposed in [12] while in [5] the trade-offs for indexing schemes to
answer interval, rectangle, approximate nearest-neighbor, approximate
farthest-neighbor and convex-hull queries are examined. The
indexing of current and anticipated positions of moving objects in
the context of location based services is examined in [25]. Much
work has been also reported in data broadcast and dissemination
over wireless networks during the last decade [4, 22, 33, 19, 6].
Rate adaptation for wired streaming systems has been extensively studied in [28, 20, 31, 29, 16, 17]. These studies assume
an adaptable video encoding system that changes the encoding parameters
on the fly based on feedback information about the channel
state. The notion of cycle-based operation is used in [13], with cycles successively alternating between good and bad ones.
Our work differs in that it does not require a prefetching period so
that an initial occupancy is built up in the buffer before the playback
begins. Also, our algorithm functions in a graceful manner when
the base-station load does not allow for aggressive use of channel
resources.
CONCLUSIONS
In this paper, we address the problem of efficient video delivery
in real-time to high-speed users roaming a
3G network. We propose
a network of media servers handling the content distribution
on top of the mobile environment that closely cooperates with the
base-stations and user-equipment for the provision of continuous
video playout. We segment video streams into variable-sized parts
according to the user's speed and traversal path through different
cells. In this manner, we minimize the transmission costs between
media-servers and base-stations as well as the start-up latency experienced
by users during handover. We adopt the use of Video Streaming modules along with their Rate Adaptation elements within the infrastructure of base-stations to ensure smoothness of the playout
process. Preliminary experimentation results through simulation
show that the proposed scheme rapidly adapts to changes in
load conditions at base-stations, thus minimizing the probability of
buffer starvation or even session drops. The low complexity of the
proposed mechanism makes it suitable for real-time applications.
REFERENCES
[1] 3rd Generation Partnership Project. Universal Mobile
Telecommunication System/IMT2000 Spectrum. Technical
Report 6, UMTS Forum, 1998.
[2] 3rd Generation Partnership Project. Stage 2 Functional Specification
of Location Services in URAN. Technical Report (3G TR 25.923
version 1.4.0), UMTS Forum, 1999. Technical Specification
Group(TSG) RAN, Working Group 2 (WG2).
[3] 3rd Generation Partnership Project. Transparent End-to-End Packet
Switched Streaming Service (PSS) General Description (Release 4).
Technical Report 3GPP-TS-26.233-V4.0.0, UMTS Forum, 2000.
Technical Specification Group Services and System Aspects.
[4] S. Acharya, M.J. Franklin, and S. B. Zdonik. Balancing Push and
Pull for Data Broadcast. In Proceedings of SIGMOD 1997, Tucson,
AZ, May 1997.
[5] P.K. Agarwal, L. Arge, J. Erickson, and H. Yu. Efficient Tradeoff
Schemes in Data Structures for Querying Moving Objects. In 12th
Annual European Symposium on Algorithms (ESA), pages 415,
Bergen, Norway, September 2004.
[6] D. Aksoy, M. Altinel, R. Bose, U. Cetintemel, M. Franklin, J. Wang,
and S. Zdonik. Research in Data Broadcast and Dissemination. In
Proceedings of International Conf. on Advanced Multimedia Content
Processing (AMCP), Osaka, Japan, November 1998.
[7] P. Cao, E. W. Felten, A. R. Karlin, and K. Li. A Study of Integrated
Prefetching and Caching Strategies. In Proceedings of ACM
SIGMETRICS Conf., pages 188197, Ottawa, Canada, May 1995.
[8] S. Chan and F. Tobagi. Caching schemes for distributed video
services. In Proceedings of the 1999 IEEE International Conference
on Communications (ICC'99), Vancouver, Canada, June 1999.
[9] S. Choi and K. G. Shin. Predictive and Adaptive Bandwidth
Reservation for Hand-Offs in QoS-Sensitive Cellular Networks. In
Proceedings of ACMSIGCOMM, pages 155166, 1998.
[10] A. Dan and D. Sitaram. A Generalized Interval Caching Policy for
Mixed Interactive and Long Video Environments. In Proceedings of
IST/SPIE Multimedia Computing and Networking Conference, San
Jose, CA, January 1996.
[11] A. Dan, D. Sitaram, and P. Shahabuddin. Dynamic Batching Policies
for an On-demand Video Server. Multimedia Systems, 4(3):112121,
1996.
[12] G. Kollios and D. Gunopulos and V.J. Tsotras. On Indexing Mobile
Objects. In Proceedings of the 1999 ACM SIGACT-SIGMOD-SIGART
Symposium on Principles of Database Systems (PODS), Philadelphia,
PA, 1999.
[13] M. Hassan, L. Atzori, and M. Krunz. Video Transport over Wireless
Channels: A Cycle-based Approach for Rate Control. In Proceedings
of the ACM Multimedia 2004 Conference. ACM Press, October 2004.
[14] M. Hofmann, E. Ng, K. Guo, S. Paul, and H. Zhang. Caching
Techniques for Streaming Multimedia over the Internet. Technical
report, Bell Laboratories, April 1999. BL011345-990409-04TM.
[15] H. Holma and A. Toskala. WCDMA for UMTS Radio Access for
Third Generation Mobile Communications. John Wiley & Sons Inc.,
New York, NY, 2nd edition, 2002.
[16] C.-Y. Hsu, A. Ortega, and A.R. Reibman. Joint Selection of Source
and Channel Rate for VBR Video Transmission Under ATM Policing
Constraints. IEEE Journal of Selected Areas in Communications,
15(6):10161028, 1997.
[17] P.-C. Hu, Z-L. Zhang, and M. Kaveh. Channel Condition ARQ Rate
Control for Real-time Wireless Video Under Buffer Constraints. In
Proceedings of the IEEE International Conf. on Image Processing,
Vancouver BC, Canada, September 2000.
[18] K.A. Hua, Y. Cai, and S. Sheu. Patching: a Multicast Technique for
True Video-on-demand Services. In Proceedings of the 6th ACM
International Conference on Multimedia, pages 191200. ACM
Press, 1998.
[19] T. Imielinski, S. Viswanathan, and B. R. Badrinath. Data on Air:
Organization and Access. IEEE Transactions on Knowledge and
Data Engineering, (3):353372, 1997.
[20] H.-J. Lee, T. Chiang, and Y.-Q. Zhang. Scalable Rate Control for
MPEG-4 Video. IEEE Transactions On Circuits and Systems for
Video Technology, 10(9):878894, September 2000.
[21] S.-J. Lee, W.-Y. Ma, and B. Shen. An Interactive Video Delivery and
Caching System Using Video Summarization. Computer
Communications, 25(4):424435, March 2002.
[22] E. Pitoura and P.K. Chrysanthis. Multiversion Data Broadcast. IEEE
Transactions on Computers, 51(10):12241230, 2002.
[23] R. Rejaie, H. Yu, M. Handley, and D. Estrin. Multimedia Proxy
Caching Mechanism for Quality Adaptive Streaming Applications in
the Internet. In Proceedings of INFOCOM(2), pages 980989, 2000.
[24] S. Roy, B. Shen, and V. Sundaram. Application Level Handoff
Support for Mobile Media Transcoding Sessions. In 12th
International Workshop on Network and Operating System Support
for Digital Audio and Video, Miami, FL, 2002.
[25] S. Saltenis and C.S. Jensen. Indexing of Moving Objects for
Location-Based Services. In Proceedings of the IEEE International
Conference on Data Engineering (ICDE), pages 463472, 2002.
[26] S. Sen, J. Rexford, and D. F. Towsley. Proxy Prefix Caching for
Multimedia Streams. In Proceedings of INFOCOM(3), pages
13101319, New York, NY, 1999.
[27] A.P. Sistla, O. Wolfson, S. Chamberlain, and S. Dao. Modeling and
Querying Moving Objects. In Proceedings of the 13th IEEE
International Conf. on Data Engineering, Birmingham, UK, April
1997.
[28] H. Song and C.-C.-J. Kuo. Rate control for Low Bit Rate Video via
Variable Encoding Frame Rates. IEEE Transactions On Circuits and
Systems for Video Technology, 11(4):512521, April 2001.
[29] W. Tawbi, F. Horn, E. Horlait, and J.-B. Stefani. Video Compression
Standards and Quality of Service. The Computer Journal,
36(1):4354, 1993.
[30] Texas Instruments. Mobile Connectivity: Assisted-GPS.
http://focus.ti.com/general/docs/wtbu, 2004.
[31] T. Wiegand, M. Lightstone, T. Campbell, D. Mukherjee, and
S. Mitra. Rate-Distortion Optimized Mode Selection for Very Low
Bit Rate Video Coding and the Emerging H.263 Standard. URL:
citeseer.ist.psu.edu/wiegand95ratedistortion.html, 1999.
[32] D. Wu, Y.T. Hou, W. Zhu, Y.-Q. Zhang, and J.M. Peha. Streaming
Video over the Internet: Approaches and Directions. IEEE
Transactions on Circuits and Systems for video Technology,
11(1):120, February 2001.
[33] X. Yang and A. Bouguettaya. Broadcast-Based Data Access in
Wireless Environments. In Proceedings of the EDBT International
Conference, Prague, Czech Republic, 2002.
[34] Z.-L. Zhang, Y. Wang, D.H.C. Du, and D. Shu. Video staging: a
proxy-server-based approach to end-to-end video delivery over
wide-area networks. IEEE/ACM Transactions on Networking,
8(4):429442, 2000.
| mobile multimedia services;rate adaptation;real-time streaming;Streaming for moving users |
213 | Web Taxonomy Integration through Co-Bootstrapping | We address the problem of integrating objects from a source taxonomy into a master taxonomy. This problem is not only currently pervasive on the web, but also important to the emerging semantic web. A straightforward approach to automating this process would be to learn a classifier that can classify objects from the source taxonomy into categories of the master taxonomy. The key insight is that the availability of the source taxonomy data could be helpful to build better classifiers for the master taxonomy if their categorizations have some semantic overlap. In this paper, we propose a new approach, co-bootstrapping , to enhance the classification by exploiting such implicit knowledge. Our experiments with real-world web data show substantial improvements in the performance of taxonomy integration. | INTRODUCTION
A taxonomy, or directory or catalog, is a division of a set of
objects (documents, images, products, goods, services, etc.) into
a set of categories. There are a tremendous number of
taxonomies on the web, and we often need to integrate objects
from a source taxonomy into a master taxonomy.
This problem is currently pervasive on the web, given that many
websites are aggregators of information from various other
websites [2]. A few examples will illustrate the scenario. A web
marketplace like Amazon
may want to combine goods from
multiple vendors' catalogs into its own. A web portal like
NCSTRL
may want to combine documents from multiple
libraries' directories into its own. A company may want to merge
its service taxonomy with its partners'. A researcher may want to
merge his/her bookmark taxonomy with his/her peers'.
Singapore-MIT Alliance
, an innovative engineering education
and research collaboration among MIT, NUS and NTU, has a
need to integrate the academic resource (courses, seminars,
reports, softwares, etc.) taxonomies of these three universities.
This problem is also important to the emerging semantic web [4],
where data has structures and ontologies describe the semantics
of the data, thus better enabling computers and people to work in
cooperation. On the semantic web, data often come from many
different ontologies, and information processing across
ontologies is not possible without knowing the semantic
mappings between them. Since taxonomies are central
components of ontologies, ontology mapping necessarily involves
finding the correspondences between two taxonomies, which is
often based on integrating objects from one taxonomy into the
other and vice versa [10, 14].
If all taxonomy creators and users agreed on a universal standard,
taxonomy integration would not be so difficult. But the web has
evolved without central editorship. Hence the correspondences
between two taxonomies are inevitably noisy and fuzzy. For
illustration, consider the taxonomies of two web portals Google
and Yahoo
: what is "Arts/Music/Styles/" in one may be
"Entertainment/Music/Genres/" in the other, category
"Computers_and_Internet/Software/Freeware" and category
"Computers/Open_Source/Software" have similar contents but
show non-trivial differences, and so on. It is unclear if a
universal standard will appear outside specific domains, and
even for those domains, there is a need to integrate objects from
legacy taxonomy into the standard taxonomy.
Manual taxonomy integration is tedious, error-prone, and clearly
not possible at the web scale. A straightforward approach to
automating this process would be to formulate it as a classification problem, which has been well studied in the machine learning area [18]. Normally the classifier would be constructed
using objects in the master taxonomy as training examples, and
the source taxonomy would be completely ignored during
learning. However, the availability of the source taxonomy data
could be helpful to build better classifiers for the master
taxonomy if their categorizations have some semantic overlap,
particularly when the number of training examples is not very
large.
Possible useful semantic relationships between a master category C and a source category S include:
- C = S (identical): an object belongs to C if and only if it belongs to S;
- C ∩ S = ∅ (mutual exclusion): if an object belongs to S it cannot belong to C;
- C ⊇ S (superset): any object that belongs to S must also belong to C;
- C ⊆ S (subset): any object not belonging to S also cannot belong to C;
- C and S overlap but neither is a superset of the other.
In addition, semantic relationships may involve multiple master and source categories. For example, a master category C may be a subset of the union of two source categories S_a and S_b, so if an object does not belong to either S_a or S_b, it cannot belong to C. The real-world semantic relationships are noisy and fuzzy, but they can still provide valuable information for classification. For example, knowing that most (80%) objects in a source category S belong to one master category C_a and the rest (20%) belong to another master category C_b is obviously helpful. The difficulty is that knowledge about those semantic relationships is not explicit but hidden in the data.
In this paper, we propose a new approach, co-bootstrapping, to
enhance the classification by exploiting such implicit knowledge.
Our experiments with real-world web data show substantial
improvements in the performance of taxonomy integration.
The rest of this paper is organized as follows. In Section 2, we give the formal problem statement. In Section 3, we describe a state-of-the-art solution. In Section 4, we present our approach in detail. In Section 5, we conduct experimental evaluations. In Section 6, we review the related work. In Section 7, we make concluding remarks.
PROBLEM STATEMENT
Taxonomies are often organized as hierarchies. In this work, we
assume for simplicity, that any objects assigned to an interior
node really belong to a leaf node which is an offspring of that
interior node. Since we now have all objects only at leaf nodes,
we can flatten the hierarchical taxonomy to a single level and
treat it as a set of categories [2].
Now we formally define the taxonomy integration problem that
we are solving. Given two taxonomies:
- a master taxonomy M with a set of categories C_1, C_2, ..., C_M, each containing a set of objects, and
- a source taxonomy N with a set of categories S_1, S_2, ..., S_N, each containing a set of objects,
we need to find the categories in M for each object in N.
To formulate taxonomy integration as a classification problem, we take C_1, C_2, ..., C_M as classes, the objects in M as training examples, and the objects in N as test examples, so that taxonomy integration can be automatically accomplished by predicting the classes of each test example. Such a classification problem is multi-class and multi-label, in the sense that there are usually more than two possible classes and one object may be relevant to more than one class.
A STATE-OF-THE-ART SOLUTION
Agrawal and Srikant recently proposed an elegant approach to
taxonomy integration by enhancing the Naive Bayes algorithm [2].
The Naive Bayes (NB) algorithm is a well-known text
classification technique [18]. NB tries to fit a generative model
for documents using training examples and apply this model to
classify test examples. The generative model of NB assumes that
a document is generated by first choosing its class according to a
prior distribution of classes, and then producing its words
independently according to a (typically multinomial) distribution
of terms conditioned on the chosen class [15]. Given a test
document d, NB predicts its class to be argmax_C Pr[C|d]. The posterior probability Pr[C|d] can be computed via Bayes's rule:

Pr[C|d] = Pr[C,d]/Pr[d] = Pr[C] Pr[d|C]/Pr[d] ∝ Pr[C] Pr[d|C] ∝ Pr[C] ∏_{w∈d} Pr[w|C]^{n(d,w)},

where n(d,w) is the number of occurrences of w in d. The probability Pr[C] can be estimated by the proportion of training documents in C. The probability Pr[w|C] can be estimated by (n(C,w) + λ) / (Σ_{w_i∈V} n(C,w_i) + λ|V|), where n(C,w) is the number of occurrences of w in training documents in C, V is the vocabulary of terms, and 0 < λ ≤ 1 is the Lidstone's smoothing parameter [1]. Taking logs, we see that NB is actually a linear classifier:

log Pr[C|d] ∝ log(Pr[C] ∏_{w∈d} Pr[w|C]^{n(d,w)}) = log Pr[C] + Σ_{w∈d} n(d,w) log Pr[w|C].
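The linear form above can be sketched directly in code. The fragment below is only an illustration of the scoring function just described, not the authors' implementation; the dictionary-based model layout and the handling of unseen words are our assumptions.

import math
from collections import Counter

def train_nb(docs_by_class, lambda_=0.1):
    # docs_by_class: {class: list of documents}, each document a list of words.
    vocab = {w for docs in docs_by_class.values() for d in docs for w in d}
    total_docs = sum(len(docs) for docs in docs_by_class.values())
    model = {}
    for c, docs in docs_by_class.items():
        counts = Counter(w for d in docs for w in d)
        denom = sum(counts.values()) + lambda_ * len(vocab)   # Lidstone smoothing
        model[c] = {
            "log_prior": math.log(len(docs) / total_docs),
            "log_pw": {w: math.log((counts[w] + lambda_) / denom) for w in vocab},
            "log_unseen": math.log(lambda_ / denom),
        }
    return model

def nb_score(model_c, doc):
    # log Pr[C|d] up to a constant: log Pr[C] + sum_w n(d,w) * log Pr[w|C]
    s = model_c["log_prior"]
    for w, n in Counter(doc).items():
        s += n * model_c["log_pw"].get(w, model_c["log_unseen"])
    return s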
The enhanced Naive Bayes (ENB) algorithm [2] uses the categorization of the source taxonomy to get better probability estimations. Given a test document d that is known to be in category S in N, ENB predicts its category in M to be argmax_C Pr[C|d,S]. The posterior probability Pr[C|d,S] can be computed as Pr[C|d,S] = Pr[C,d,S]/Pr[d,S] = Pr[S] Pr[C,d|S]/Pr[d,S] ∝ Pr[C,d|S]. ENB invokes a simplification that assumes d and S are independent given C, therefore

Pr[C,d|S] = Pr[C|S] Pr[d|S,C] = Pr[C|S] Pr[d|C] ∝ Pr[C|S] ∏_{w∈d} Pr[w|C]^{n(d,w)}.

The probability Pr[w|C] can be estimated in the same way as in NB. For the probability Pr[C|S], ENB estimates it by (|C| |C∩S|^ω) / (Σ_i |C_i| |C_i∩S|^ω), where |C| is the number of documents in C, |C∩S| is the number of documents in S classified into C by the NB classifier, and ω ≥ 0 is a parameter reflecting the degree of semantic overlap between the categorizations of M and N. The optimal value of ω can be found using a tune set (a set of objects whose categories in both taxonomies are known). The tune set can be made available via random sampling or active learning [2]. Taking logs, we see that ENB is still a linear classifier:

log Pr[C|d,S] ∝ log(Pr[C|S] ∏_{w∈d} Pr[w|C]^{n(d,w)}) = log Pr[C|S] + Σ_{w∈d} n(d,w) log Pr[w|C].

Comparing the classification functions of NB and ENB, it is obvious that all ENB does is to shift the classification threshold of its base NB classifier, no more and no less.
To achieve the multi-class multi-label classification that is required by taxonomy integration, we use the "one-vs-rest" method to create an ensemble of binary (yes/no) NB or ENB classifiers, one for each category C in M.
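Continuing the sketch, the ENB adjustment can be illustrated as follows. The estimate of Pr[C|S] mirrors the formula given above; the helper names, the handling of zero counts, and the reuse of the NB sketch are assumptions rather than the authors' code.

import math

def enb_log_prior(master_counts, nb_assignments_of_S, omega):
    # master_counts:       {C: number of training documents in C}
    # nb_assignments_of_S: {C: number of documents of S that NB classified into C}
    # omega:               semantic-overlap parameter (tuned on a small tune set)
    scores = {c: master_counts[c] * (nb_assignments_of_S.get(c, 0) ** omega)
              for c in master_counts}
    z = sum(scores.values())
    return {c: math.log(s / z) if s > 0 else float("-inf") for c, s in scores.items()}

def enb_score(model_c, log_prior_c_given_s, doc):
    # Same linear form as NB, with the prior term replaced by log Pr[C|S].
    return log_prior_c_given_s + (nb_score(model_c, doc) - model_c["log_prior"])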
OUR APPROACH
Here we present our approach in detail. In Section 4.1, we introduce the boosting technique. In Section 4.2, we propose the co-bootstrapping method. In Section 4.3, we discuss the advantages of our approach.
4.1
Boosting
In our approach to taxonomy integration, we utilize a powerful
machine learning method, boosting [17, 23], to build classifiers.
The main idea of boosting is to combine many weak hypotheses
(simple and moderately accurate classification rules), into a
highly accurate classifier. In this paper, we focus on boosting for
text classification. Generalization to other kinds of data and
learning algorithms would be straightforward.
4.1.1
Term-Features
Text objects (documents) can be represented using a set of term-features F_T = {f_{T1}, f_{T2}, ..., f_{Tn}}. The term-feature f_{Th} (1 ≤ h ≤ n) of a given object x is a binary feature indicating the presence or absence of w_h (the h-th distinct word in the document collection) in x, i.e., f_{Th} = 1 if w_h ∈ x, and f_{Th} = 0 if w_h ∉ x.
4.1.2
Weak Hypotheses
Let X denote the domain of possible objects, and let Y be a set of k possible classes. A labeled example is a pair (x, Y) where x ∈ X is an object and Y ⊆ Y is the set of classes which x belongs to. We define Y[l] for l ∈ Y to be Y[l] = +1 if l ∈ Y, and Y[l] = -1 if l ∉ Y.
A hypothesis is a real-valued function h: X × Y → R. The sign of h(x, l) is a prediction of Y[l] for x, i.e., whether object x is contained in class l. The magnitude of h(x, l) is interpreted as a measure of confidence in the prediction.
Based on a binary feature f, we are interested in weak hypotheses h which are simple decision stumps of the form h(x, l) = c_{1l} if f = 1, and h(x, l) = c_{0l} if f = 0, where c_{1l}, c_{0l} ∈ R.
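As an illustration of such confidence-rated stumps, the fragment below sets the two output values from the weighted label distribution in the style of [25]; the smoothing constant eps, all names, and the list-based data layout are assumptions chosen for readability.

import math

def build_stump(feature_values, labels, D, eps=1e-8):
    # feature_values[i] in {0, 1}; labels[i][l] in {-1, +1};
    # D[i][l] is the current AdaBoost.MH weight of example i and class l.
    classes = range(len(labels[0]))
    c = [[0.0, 0.0] for _ in classes]          # c[l][j]: output for class l, feature value j
    for l in classes:
        for j in (0, 1):
            w_pos = sum(D[i][l] for i, v in enumerate(feature_values)
                        if v == j and labels[i][l] > 0)
            w_neg = sum(D[i][l] for i, v in enumerate(feature_values)
                        if v == j and labels[i][l] < 0)
            c[l][j] = 0.5 * math.log((w_pos + eps) / (w_neg + eps))
    return lambda x_feature, l: c[l][1] if x_feature == 1 else c[l][0]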
4.1.3
AdaBoost Algorithm
The most popular boosting algorithm is AdaBoost, introduced in 1995 by Freund and Schapire [12]. Our work is based on a multi-class multi-label version of AdaBoost, AdaBoost.MH [24, 25], which is described in Figure 1.
Given m training examples (x_1, Y_1), ..., (x_m, Y_m) where each x_i ∈ X, Y_i ⊆ Y, AdaBoost.MH dynamically maintains a distribution D_t over all objects and classes. Initially this distribution D_1 is uniform. In the t-th round, the optimal weak hypothesis h_t is selected based on the set of training examples and the current distribution D_t. Then a parameter α_t is chosen, and the distribution D_t is updated in a manner that puts more weight on "difficult" examples (object-class pairs) that are misclassified by h_t. Please refer to [24, 25] for the details on computing the optimal h_t and α_t. This procedure repeats for T rounds. The final hypothesis H(x,l) is actually a weighted vote of weak hypotheses, Σ_{t=1}^{T} α_t h_t(x,l), and the final prediction can be computed according to the sign of H(x,l).

Given: (x_1, Y_1), ..., (x_m, Y_m) where each x_i ∈ X, Y_i ⊆ Y.
Initialize D_1(i,l) = 1/(mk).
for t = 1, ..., T do
  Pass distribution D_t to the weak learner.
  Get weak hypothesis h_t: X × Y → R.
  Choose α_t ∈ R.
  Update: D_{t+1}(i,l) = D_t(i,l) exp(-α_t Y_i[l] h_t(x_i,l)) / Z_t,
  where Z_t is the normalization factor.
end for
Output the final hypothesis: H(x,l) = Σ_{t=1}^{T} α_t h_t(x,l).
Figure 1: The boosting algorithm AdaBoost.MH.
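A minimal, unoptimized sketch of this loop is shown below. Following the confidence-rated formulation, α_t is folded into the real-valued stump outputs, and the weak-learner search is a naive scan over all binary features; it reuses the build_stump() sketch above and is not the BoosTexter implementation used later in the experiments.

import math

def adaboost_mh(X, Y, T=1000):
    # X: list of binary feature vectors; Y: list of label vectors with entries in {-1, +1}.
    m, k, n_feat = len(X), len(Y[0]), len(X[0])
    D = [[1.0 / (m * k)] * k for _ in range(m)]
    hypotheses = []
    for _ in range(T):
        best = None
        for f in range(n_feat):                       # naive weak-learner search
            h = build_stump([x[f] for x in X], Y, D)
            z = sum(D[i][l] * math.exp(-Y[i][l] * h(X[i][f], l))
                    for i in range(m) for l in range(k))
            if best is None or z < best[0]:           # pick the stump minimizing Z_t
                best = (z, f, h)
        z, f, h = best
        hypotheses.append((f, h))
        D = [[D[i][l] * math.exp(-Y[i][l] * h(X[i][f], l)) / z
              for l in range(k)] for i in range(m)]
    return lambda x, l: sum(h(x[f], l) for f, h in hypotheses)   # H(x, l)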
4.2
Co-Bootstrapping
Thus far we have completely ignored the categorization of
N.
Although
M and N are usually not identical, their
categorizations often have some semantic overlap. Therefore the
categorization of
N contains valuable implicit knowledge about
the categorization of
M. Hereby we propose a new approach, co-bootstrapping
, to enhance the classification by exploiting such
implicit knowledge.
4.2.1
Category-Features
If we have indicator functions for each category in N, we can imagine taking those indicator functions as features when we learn the classifier for M. This allows us to exploit the semantic relationship among the categories of M and N without explicitly figuring out what the semantic relationships are. More specifically, for each object in M, we augment the ordinary term-features with a set of category-features F_N = {f_{N1}, f_{N2}, ..., f_{NN}} derived from N. The category-feature f_{Nj} (1 ≤ j ≤ N) of a given object x is a binary feature indicating whether x belongs to category S_j (the j-th category of N), i.e., f_{Nj} = 1 if x ∈ S_j, and f_{Nj} = 0 if x ∉ S_j.
In the same way, we can get a set of category-features F_M = {f_{M1}, f_{M2}, ..., f_{MM}} derived from M to be used for supplementing the features of objects in N. The remaining problem is to obtain these indicator functions, which are initially not available.
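The augmentation itself is mechanical; a small sketch (with assumed names and fixed orderings of the vocabulary and the other taxonomy's categories) could look as follows.

def feature_vector(obj_terms, predicted_cats, vocab, other_taxonomy_cats):
    # Binary feature vector = term-features followed by category-features.
    # obj_terms: set of words in the object; predicted_cats: set of categories of
    # the other taxonomy associated with it; vocab / other_taxonomy_cats: fixed
    # orderings of words and categories (assumed to be given).
    term_feats = [1 if w in obj_terms else 0 for w in vocab]
    cat_feats = [1 if s in predicted_cats else 0 for s in other_taxonomy_cats]
    return term_feats + cat_feats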
4.2.2
Co-Bootstrapping Algorithm
When building the classifier for M, the training examples are the objects in M and the test examples are the objects in N. To leverage the categorization of N to reinforce classification, our classifier uses term-features F_T as well as category-features F_N. However, we do not know the exact values of F_N for the training examples.
Our proposed algorithm overcomes the above obstacle by utilizing the bootstrapping idea. Let B_T^r(F) denote a boosting-classifier for taxonomy T's categorization based on feature set F at step r. Initially we build a classifier B_N^0(F_T) based on only term-features, then use it to classify the objects in M (the training examples) into the categories of N; thus we can predict the value of each category-feature f_{Nj} ∈ F_N for each object x ∈ M. At the next step we will be able to build B_M^1(F_T ∪ F_N) using the predicted values of F_N for the training examples. Similarly we can build B_M^0(F_T) and B_N^1(F_T ∪ F_M). The new classifier B_N^1(F_T ∪ F_M) ought to be better than B_N^0(F_T) because B_N^1(F_T ∪ F_M) leverages more knowledge. Hence we can predict the value of each category-feature f_{Nj} ∈ F_N for each object x ∈ M more accurately using B_N^1(F_T ∪ F_M) instead of B_N^0(F_T), and afterwards we can build B_M^2(F_T ∪ F_N). Also B_M^2(F_T ∪ F_N) is very likely to be better than B_M^1(F_T ∪ F_N) because B_M^2(F_T ∪ F_N) is based on a more accurate prediction of F_N. This process can be repeated iteratively in a "ping-pong" manner. We name this approach co-bootstrapping since the two classifiers B_M^r(F_T ∪ F_N) and B_N^r(F_T ∪ F_M) collaborate to bootstrap themselves together. Figure 2 presents the co-bootstrapping algorithm, and Figure 3 depicts its process.
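The Python sketch below mirrors that iteration. Here train() and predict() are assumed wrappers, for example around the AdaBoost.MH sketch above, that fit one one-vs-rest ensemble over the given categories and return, for each object, the set of categories with a positive score; feature_vector() is the helper shown earlier.

def co_bootstrap(train, predict, M_objs, M_cats_of, N_objs, N_cats_of,
                 vocab, M_cats, N_cats, R=8):
    fv = lambda objs, cat_sets, cat_list: [feature_vector(o, c, vocab, cat_list)
                                           for o, c in zip(objs, cat_sets)]
    empty = lambda objs: [set() for _ in objs]      # zero-padded category block

    # Step 0: classifiers on term-features only.
    B_M = train(fv(M_objs, empty(M_objs), N_cats), M_cats_of)          # B_M^0(F_T)
    B_N = train(fv(N_objs, empty(N_objs), M_cats), N_cats_of)          # B_N^0(F_T)
    pred_N_of_M = predict(B_N, fv(M_objs, empty(M_objs), M_cats))      # F_N of M-objects
    pred_M_of_N = predict(B_M, fv(N_objs, empty(N_objs), N_cats))      # F_M of N-objects

    for _ in range(R):
        # Retrain each side with the other side's predicted category-features; the
        # objects being classified keep their known source categories as features.
        B_M = train(fv(M_objs, pred_N_of_M, N_cats), M_cats_of)        # B_M^r(F_T ∪ F_N)
        B_N = train(fv(N_objs, pred_M_of_N, M_cats), N_cats_of)        # B_N^r(F_T ∪ F_M)
        pred_M_of_N = predict(B_M, fv(N_objs, N_cats_of, N_cats))
        pred_N_of_M = predict(B_N, fv(M_objs, M_cats_of, M_cats))
    return pred_M_of_N    # master categories predicted for the objects of N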
4.3
Discussion
4.3.1
Why Choose Boosting
We have selected to employ the boosting technique to build classifiers in our co-bootstrapping approach to taxonomy integration because of the following virtues:
- Boosting has shown outstanding classification performance on many kinds of data such as text documents [17, 23, 24].
- Boosting finds the optimal combination of heterogeneous weak hypotheses automatically, and therefore alleviates the problem of how to weight ordinary features (e.g., term-features) and category-features appropriately. In contrast, approaches based on other machine learning algorithms like Support Vector Machines (SVMs) [9] would require adjusting relative combination weights, which is a non-trivial problem.
- Boosting generates descriptive and human-readable hypotheses as the final classifier, and the learned classifier is usually sparse despite the large feature set.
Although boosting looks like an ideal choice, other machine learning algorithms can also be utilized in the co-bootstrapping approach. We have not investigated this issue yet.
4.3.2
Comparison with ENB
Although ENB [2] has been shown to work well for taxonomy
integration, we think that a more general approach is still
attractive. It has been experimentally shown that AdaBoost is
more promising than NB for text classification [24]. The co-bootstrapping
approach allows more powerful machine learning
algorithms like AdaBoost to be utilized.
Both ENB and our co-bootstrapping approach exploit the
categorization of
N to enhance classification. While all ENB
does is to shift the classification threshold of its base NB
classifier (see Section 3), co-bootstrapping has the ability to achieve more complex adjustments on the classification function of its base classifier.
Furthermore, ENB needs a stand-alone tune set to find the optimal value of the parameter ω, which controls the influence of
source categorization information on classification, whereas co-bootstrapping
based on boosting does not have such burdens.
Although co-bootstrapping looks more effective, ENB still holds
an advantage in efficiency.
EXPERIMENTS
We have collected 5 datasets from Google and Yahoo. Each dataset includes the slice of Google's taxonomy and the slice of
Yahoo's taxonomy about websites on one specific topic, as
shown in Table 1.
In each slice of taxonomy, we take only the top level directories
as categories, e.g., the "Movie" slice of Google's taxonomy has
categories like "Action", "Comedy", "Horror", etc.
For each dataset, we show in Table 2 the number of categories occurring in Google and Yahoo, respectively.
In each category, we take all items listed on the corresponding
directory page and its sub-directory pages as its objects. An
object (list item) corresponds to a website on the world wide web,
which is usually described by its URL, its title, and optionally a
short annotation about its content. Here each object is considered
as a text document composed of its title and annotation. All
documents are pre-processed by removal of stop-words and
stemming.
For each dataset, we show in Table 3 the number of objects occurring in Google (G), Yahoo (Y), either of them (G∪Y), and both of them (G∩Y), respectively. The set of objects in G∩Y covers only a small portion (usually less than 10%) of the set of objects in Google or Yahoo alone, which suggests the great benefit of automatically integrating them. This observation is consistent with [2].
Figure 3: The co-bootstrapping process.
Given: two taxonomies M and N.
Build classifier B_M^0(F_T), then use it to predict the value of each category-feature f_{Mi} ∈ F_M for each object x ∈ N.
Build classifier B_N^0(F_T), then use it to predict the value of each category-feature f_{Nj} ∈ F_N for each object x ∈ M.
for r = 1, ..., R do
  Build classifier B_M^r(F_T ∪ F_N), then use it to predict the value of each category-feature f_{Mi} ∈ F_M for each object x ∈ N.
  Build classifier B_N^r(F_T ∪ F_M), then use it to predict the value of each category-feature f_{Nj} ∈ F_N for each object x ∈ M.
end for
For each object x ∈ N, if the value of its category-feature f_{Mi} ∈ F_M is positive, then we classify it into C_i ∈ M.
For each object x ∈ M, if the value of its category-feature f_{Nj} ∈ F_N is positive, then we classify it into S_j ∈ N.
Figure 2: The co-bootstrapping algorithm.
Table 1: The datasets.
          Google                                  Yahoo
Book      /Top/Shopping/Publications/Books/       /Business_and_Economy/Shopping_and_Services/Books/Bookstores/
Disease   /Top/Health/Conditions_and_Diseases/    /Health/Diseases_and_Conditions/
Movie     /Top/Arts/Movies/Genres/                /Entertainment/Movies_and_Film/Genres/
Music     /Top/Arts/Music/Styles/                 /Entertainment/Music/Genres/
News      /Top/News/By_Subject/                   /News_and_Media/
Table 3: The number of objects.
          Google    Yahoo     G∪Y       G∩Y
Book      10,842    11,268    21,111      999
Disease   34,047     9,785    41,439    2,393
Movie     36,787    14,366    49,744    1,409
Music     76,420    24,518    95,971    4,967
News      31,504    19,419    49,303    1,620
Table 2: The number of categories.
          Google    Yahoo
Book          49       41
Disease       30       51
Movie         34       25
Music         47       24
News          27       34
The number of categories per object in these datasets is 1.54 on
average. This observation justifies the necessity of building
multi-class multi-label classifiers.
5.2
Tasks
For each dataset, we pose 2 symmetric taxonomy integration tasks: G→Y (integrating objects from Yahoo into Google) and Y→G (integrating objects from Google into Yahoo).
As described in Section 2, we formulate each task as a classification problem. The objects in G∩Y can be used as test examples, because their categories in both taxonomies are known to us [2]. We hide the test examples' master categories but expose their source categories to the learning algorithm in the training phase, and then compare their hidden master categories with the predictions of the learning algorithm in the test phase. Suppose the number of test examples is n. For G→Y tasks, we randomly sample n objects from the set G-Y as training examples. For Y→G tasks, we randomly sample n objects from the set Y-G as training examples. This is to simulate the common situation that the sizes of M and N are roughly of the same magnitude. For each task, we do such random sampling 5 times, and report the classification performance averaged over these 5 random samplings.
5.3
Measures
As stated in Section 2, it is natural to accomplish taxonomy integration tasks via building multi-class multi-label classifiers. To measure classification performance for each class (category in M), we use the standard F-score (F1 measure) [3]. The F-score is defined as the harmonic average of precision (p) and recall (r), F = 2pr/(p + r), where precision is the proportion of correctly predicted positive examples among all predicted positive examples, and recall is the proportion of correctly predicted positive examples among all true positive examples. The F-scores can be computed for the binary decisions on each individual category first and then be averaged over categories. Or they can be computed globally over all the M × n binary decisions, where M is the number of categories in consideration (the number of categories in M) and n is the number of total test examples (the number of objects in N). The former way is called macro-averaging and the latter way is called micro-averaging [27]. It is understood that the micro-averaged F-score (miF) tends to be dominated by the classification performance on common categories, and that the macro-averaged F-score (maF) is more influenced by the classification performance on rare categories [27]. Providing both kinds of scores is more informative than providing either alone.
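Both averaging modes can be sketched from per-category binary decisions as follows; the data layout (a mapping from category to (predicted, actual) pairs) is an assumption made for illustration.

def f_score(tp, fp, fn):
    p = tp / (tp + fp) if tp + fp else 0.0
    r = tp / (tp + fn) if tp + fn else 0.0
    return 2 * p * r / (p + r) if p + r else 0.0

def macro_micro_f1(decisions):
    # decisions[c]: list of (predicted, actual) booleans for category c.
    per_cat, totals = [], [0, 0, 0]                   # summed tp, fp, fn
    for pairs in decisions.values():
        tp = sum(1 for pred, act in pairs if pred and act)
        fp = sum(1 for pred, act in pairs if pred and not act)
        fn = sum(1 for pred, act in pairs if not pred and act)
        per_cat.append(f_score(tp, fp, fn))
        totals = [totals[0] + tp, totals[1] + fp, totals[2] + fn]
    maF = sum(per_cat) / len(per_cat)                 # macro-averaged F-score
    miF = f_score(*totals)                            # micro-averaged F-score
    return maF, miF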
5.4
Settings
We use our own implementation of NB and ENB. The Lidstone's smoothing parameter λ is set to an appropriate value 0.1 [1]. The performance of ENB would be greatly affected by its parameter ω. We run ENB with a series of exponentially increasing values of ω: (0, 1, 3, 10, 30, 100, 300, 1000) [2] for each taxonomy integration task, and report the best experimental results. We use BoosTexter [24] for the implementation of AdaBoost, taking single words as terms. We set the number of boosting rounds to T = 1000 and the co-bootstrapping iteration number to R = 8 (see Figures 1 and 2). In the following sections, we denote the normal AdaBoost approach by AB, and denote the co-bootstrapping approach based on the AdaBoost algorithm by CB-AB.
5.5
Results
The experimental results of NB and ENB are shown in Table 4.
We see that ENB really can achieve much better performance
than NB for taxonomy integration.
Figure 4: The taxonomy integration performance (maF and miF, for both G→Y and Y→G) increases along with the number of co-bootstrapping iterations, on the Book dataset.
Table 5: Experimental Results of AB and CB-AB.
                     AB               CB-AB
                 maF     miF      maF     miF
G→Y  Book      0.1740  0.4499   0.2540  0.6030
     Disease   0.5375  0.6674   0.6533  0.7703
     Movie     0.1930  0.4892   0.3172  0.6716
     Music     0.3316  0.5025   0.4851  0.6826
     News      0.2150  0.4625   0.3083  0.6218
Y→G  Book      0.2436  0.3853   0.3516  0.6341
     Disease   0.3719  0.6350   0.4371  0.7287
     Movie     0.2559  0.5214   0.3922  0.7154
     Music     0.4369  0.6397   0.5799  0.7994
     News      0.3774  0.4942   0.4340  0.6421

Table 4: Experimental Results of NB and ENB.
                     NB               ENB
                 maF     miF      maF     miF
G→Y  Book      0.1286  0.2384   0.1896  0.5856
     Disease   0.4386  0.5602   0.5230  0.6895
     Movie     0.1709  0.3003   0.2094  0.5331
     Music     0.2386  0.3881   0.2766  0.5408
     News      0.2233  0.4450   0.2578  0.5987
Y→G  Book      0.1508  0.2107   0.2227  0.5471
     Disease   0.2746  0.4812   0.3415  0.6370
     Movie     0.2319  0.4046   0.2884  0.5534
     Music     0.3124  0.5359   0.3572  0.6824
     News      0.2966  0.4219   0.3639  0.6007
The experimental results of AB and CB-AB are shown in Table 5.
Obviously AB beats NB, which is consistent with the conclusion
of [24]. Also we find that CB-AB works better than AB for
taxonomy integration, which suggests that co-bootstrapping
makes effective use of the categorization of
N to enhance
classification for
M.
Figure 4 shows that the taxonomy integration performance
increases along with the number of co-bootstrapping iterations,
on the Book dataset. This implies that the two boosting-classifiers
learned from two taxonomies do mutually boost each
other until they become stable.
The experimental results of ENB and CB-AB are compared in
Figures 5 and 6. It is clear that CB-AB outperforms ENB
consistently and significantly.
RELATED WORK
Most of the recent research efforts related to taxonomy
integration are in the context of ontology mapping on semantic
web. An ontology specifies a conceptualization of a domain in
terms of concepts, attributes, and relations [11]. The concepts in
an ontology are usually organized into a taxonomy: each concept
is represented by a category and associated with a set of objects
(called the extension of that concept). The basic goal of ontology
mapping is to identify (typically one-to-one) semantic
correspondences between the taxonomies of two given ontologies:
for each concept (category) in one taxonomy, find the most
similar concept (category) in the other taxonomy. Many works in
this field use a variety of heuristics to find mappings [7, 16, 19,
21]. Recently machine learning techniques have been introduced
to further automate the ontology mapping process [10, 13, 14, 20,
26]. Some of them derive similarities between concepts
(categories) based on their extensions (objects) [10, 13, 14],
therefore they need to first integrate objects from one taxonomy
into the other and vice versa (i.e., taxonomy integration). So our
work can be utilized as a basic component of an ontology
mapping system.
As stated in Section 2, taxonomy integration can be formulated as a classification problem. The Rocchio algorithm [3, 22] has been applied to this problem in [14], and the Naive Bayes (NB) algorithm [18] has been applied to it in [10], without exploiting information in the source taxonomy. To our knowledge, the most advanced approach to taxonomy integration is the enhanced Naive Bayes (ENB) algorithm proposed by
Agrawal and Srikant [2], which we have reviewed and compared
with our approach.
In [6], AdaBoost is selected as the framework to combine term-features
and automatically extracted semantic-features in the
context of text categorization. We also choose AdaBoost to
combine heterogeneous features (term-features and category-features
), but it is for a different problem (taxonomy integration)
and it works in a more complex way (through co-bootstrapping).
In [8], an approach called co-boosting is proposed for named
entity classification. Essentially co-boosting is a co-training [5]
method that attempts to utilize unlabeled data to help
classification through exploiting a particular form of redundancy
in data: each instance is described by multiple views (disjoint
feature sets) which are both compatible and uncorrelated
(conditionally independent). However, the multi-view
assumption does not hold in the context of taxonomy integration:
the set of category features should not be considered as a view
because category features alone are not sufficient for
classification and they are strongly correlated with term features.
In contrast to co-boosting (co-training), co-bootstrapping works
with two taxonomies but not two views.
CONCLUSION
Our main contribution is to propose a new approach, co-bootstrapping
, that can effectively exploit the implicit knowledge
in the source taxonomy to improve taxonomy integration.
The future work may include: theoretical analysis of the co-bootstrapping
approach, incorporating commonsense knowledge
and domain constraints into the taxonomy integration process,
and so forth.
ACKNOWLEDGMENTS
We would like to thank the anonymous reviewers for their
helpful comments and suggestions.
Figure 5: Comparing the macro-averaged F-scores of ENB and CB-AB (per dataset, for both G→Y and Y→G).
Figure 6: Comparing the micro-averaged F-scores of ENB and CB-AB (per dataset, for both G→Y and Y→G).
| Taxonomy Integration;Bootstrapping;Semantic Web;Classification;Ontology Mapping;Machine Learning;Boosting
214 | WebKhoj: Indian language IR from Multiple Character Encodings | Today web search engines provide the easiest way to reach information on the web. In this scenario, more than 95% of Indian language content on the web is not searchable due to multiple encodings of web pages. Most of these encodings are proprietary and hence need some kind of standardization for making the content accessible via a search engine. In this paper we present a search engine called WebKhoj which is capable of searching multi-script and multi-encoded Indian language content on the web. We describe a language focused crawler and the transcoding processes involved to achieve accessibility of Indian language content. In the end we report some of the experiments that were conducted along with results on Indian language web content. | INTRODUCTION
India is a multi-language, multi-script country with 22 official
languages and 11 written script forms. About a billion
people in India use these languages as their first language.
English, the most common technical language, is the lingua
franca of commerce, government, and the court system, but
is not widely understood beyond the middle class and those
who can afford formal, foreign-language education. Not only
is there a large societal gap between the rich and poor, but
that gap appears to be widening due to the dominance of English
in the society. About 5% of the population (usually the
educated class) can understand English as their second language
. Hindi is spoken by about 30% [5] of the population,
but it is concentrated in urban areas and north-central India
, and is still not only foreign but often unpopular in many
other regions. Computability of Indian languages could help
bridge the societal gaps in education, economy and health-care
. However the research and development, availability
of standards, support from operating systems and applications
in these directions moved very slow due to language
heterogeneity.
Today this phenomenon can also be observed on the world
wide web. The percentage of Indian language content is
very small compared to that of the official languages of the United Nations
[7]. Even within the available content, the majority is not
searchable, and hence not reachable, due to multiple encodings
used while authoring such websites. Web publishers of
such content were hesitant to use any available standards
such as Unicode due to very delayed support from operating
systems and browsers in rendering Indic scripts. Even
today Hindi is rendered properly only on Windows XP and
beyond. Linux has very little support for these languages.
Indian languages had barely any support until the Windows 2000
operating system. This creates a major bottleneck for web
publishers in these languages to get viewership.
Despite all these issues, we found a considerable amount of
content being published on the web. However, such content
goes unnoticed or gets very little viewership, since most of it
is not accessible through search engines due to non-standard
encodings being rendered using proprietary fonts.
This paper is organized as follows. In the next
sub-section we give an introduction to characters, glyphs
and fonts in order to appreciate the complexity involved in
rendering complex scripts. We then introduce to the complexity
of Indic scripts in the sub-section 1.2. In Section 2 we
make the problem statement and explain an implementation
to solve this problem in Section 3. We report some experiments
and results in Section 4, followed by a conclusion in
Section 5.
1.1
Fonts, characters and glyphs
In the history of mankind the act of writing has always
been considered as an activity producing visual results, namely
text. The computer has brought a more abstract layer to
it, by storing and transmitting textual data. The atomic
unit of this abstract representation of text, as defined in
the Unicode standard [8], is called a character. And indeed,
characters prove to be useful for obtaining alternative (non-visual
) representations of text such as Braille, speech synthesis
, etc. The visual representation of a character is called
a glyph [8]. Displaying textual contents, whether on screen
or on paper, involves translating characters into glyphs, a
non-trivial operation for many writing systems. Going in
the opposite direction (from glyphs to characters) is known
as OCR when done by a machine, or as reading when done
by a human [8]. The technology trend over the last few years
has been to use characters for most of the text processing
and to limit glyph issues to the last stage, namely rendering.
At that level, character to glyph translation is handled by
increasingly "intelligent" (cf. OpenType and AAT technologies
) fonts and font encodings. Unicode is an effort in this
direction. At the same time, no general solution exists yet
in today's popular document formats for restoring the original
character stream from rendered output for operations such as
searching, indexing, or copy-pasting.
Despite the problems involved, web authors tend to use
proprietary encodings due to the complex characteristics of
Indic scripts as described in the following section.
1.2
Characteristics of Indic Scripts
Indic scripts are phonetic in nature. There are vowels and
consonant symbols. A consonant becomes a syllable after
the addition of a vowel sound to it. To further compound
the problem, there are `compound syllables', also referred to as
ligatures. For instance, if we consider `tri' in `triangle', there
are three letters corresponding to three sounds `ta', `ra', `yi'.
But in the case of Indic Scripts the three are built together
to make a single compound consonant having a non-linear
structure unlike Latin based languages.
The main problem with display of Indic scripts is dealing
with their non-linear structures. Glyphs have variable
widths and have positional attributes. Vowel signs can be
attached to the top, bottom, left and right sides of the base
consonant. Vowel signs may also combine with consonants
to form independent glyphs. Consonants frequently combine
with each other to form complex conjunct glyphs. Although
the encoding encapsulates only the basic alphabetic characters
, the number of glyphs and their combinations required
for the exhaustive rendering of these scripts can be quite
large [11].
Since the character to glyph mappings have to be achieved
using a 256 character address space, web authors come up
with an intelligent way of representing all the characters in
the language using some 256 glyphs. Most of these glyphs
do not have any semantic significance in the language by
themselves. However when displayed together using some
positional parameters, they achieve human readable characters
. This situation makes the Indian language web content
inaccessible for machine processing.
PROBLEM STATEMENT
Many information seekers use a search engine to begin
their Web activity. In this case, users submit a query, typically
a list of keywords, and receive a list of Web pages that
may be relevant, typically pages that contain the keywords.
Today though considerable amount of content is available
in Indian languages, users are unable to search such
content. Because Indian language websites rely on unique
encodings or proprietary extensions of existing standard encodings
[11]. This plurality of encodings creates a problem
for information retrieval to function as desired. Also
many research groups in information retrieval and natural
language processing feel the need to collect corpora in these
languages from the web in the same way they obtain corpora
for other languages [14], [7], [1], [10]. Therefore in order to
search or process Indian language websites, we should be
able to transliterate all the encodings into one standard encoding
and accept the user's queries in the same encoding
and build the search index.
This task involves several steps. The first step is to
identify the various encodings of Indian languages on the
web. Since these encodings are non-standard, there is no
comprehensive list of all possible encodings; therefore
we need to identify such encodings and also
be able to classify them into the existing types.
The second step is to build a transliteration mapping from each
encoding into a standard encoding, UTF-8,
and hence convert any page into that standard and index it.
The third step is to accept the user's queries in the same
standard as the transliterated documents, namely UTF-8.
WEBKHOJ ARCHITECTURE
In this paper we report a search engine called WebKhoj
which can search web pages in the top 10 Indian languages
according to the number of native speakers. WebKhoj currently
supports Hindi, Telugu, Tamil, Malayalam, Marathi,
Kannada, Bengali, Punjabi, Gujarati and Oriya. Before we
describe the architecture of WebKhoj, it is useful to understand
how a Web search engine is typically put together and
then see its extensions for our task.
3.1
General web search engine
Figure 1 shows a general web search engine schematically
[2]. The major modules of a web search engine are a Crawler,
an Indexer, a Query Engine and a Ranking Engine. Every
engine relies on a crawler module to provide the grist for
its operation (shown on the left in Figure 1). Crawlers are
small programs that browse the Web on the search engine's
behalf, similar to how a human user follows links to reach
different pages. The programs are given a starting set of
URLs whose pages they retrieve from the Web. The crawler
extracts URLs appearing in the retrieved pages and gives this
information to the crawler control module. This module determines
what links to visit next and feeds these links back
to the crawler. (Some of the functionality of the crawler
control module may be implemented by the crawlers themselves
.) The crawlers also pass the retrieved pages into a
page repository. Crawlers continue visiting the Web until
local resources, such as storage, are exhausted. The indexer
module extracts all the words from each page and records
the URL where each word occurred. The result is a generally
very large "lookup table" that can provide all the URLs
that point to pages where a given word occurs (the text index
in Figure 1). The table is of course limited to the pages
that were covered in the crawling process. As mentioned
earlier, text indexing of the Web poses special difficulties,
due to its size and its rapid rate of change. In addition to
these quantitative challenges, the Web calls for some special,
less common, kinds of indexes. For example, the indexing
Figure 1: General web search engine architecture
module may also create a structure index, which reflects the
links between pages. Such indexes would not be appropriate
for traditional text collections that do not contain links.
The collection analysis module is responsible for creating a
variety of other indexes. During a crawling and indexing
run, search engines must store the pages they retrieve from
the Web. The page repository in Figure 1 represents this
possibly temporary collection. Search engines sometimes
maintain a cache of the pages they have visited beyond the
time required to build the index. This cache allows them
to serve out result pages very quickly, in addition to providing
basic search facilities. Some systems, such as the
Internet Archive, have aimed to maintain a very large number
of pages for permanent archival purposes. Storage at
such a scale again requires special consideration. The query
engine module is responsible for receiving and filling search
requests from users. The engine relies heavily on the indexes
, and sometimes on the page repository. Due to the
Web's size and the fact that users typically only enter one
or two keywords, result sets are usually very large. Hence
the ranking module has the task of sorting the results such
that results near the top are the most likely to be what the
user is looking for. In the rest of this section we describe the
additional modules that were used in a general web search
engine to make it work for Indian languages.
3.2
Language focused crawling
Since our goal is to be able to search web sites of specific
languages, we are looking for a relatively narrow segment of
the web. Crawlers that fetch pages related to a particular
topic of interest are called topic focused crawlers [6]. While
our crawler is very similar to the one mentioned in [6], we
use a language identification module instead of a classifier
and hence call it as language focused crawling. The language
identification module returns the name of the language for a
given web page. This module is aware of all the proprietary
encodings and also uses a bag of words to recognize unknown
encodings from meta-tag information that might be found
in an HTML page. In many cases web pages contain more
than one language, with one of the languages usually being
English. This happens because much of the website's organizational
information, such as menu items, disclaimers, and
other formatting text, is in English. In some websites, such
as blogs or forums, the majority of the content might be English,
with Indian language content being a minority. The
language identifier module returns a language only if the
number of words of that language in a web page is above a given
threshold value.
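The thresholding behaviour of this module might be sketched as follows; the lexicon-based lookup, the class and method names, and the threshold handling are illustrative assumptions rather than the actual implementation.

import java.util.HashMap;
import java.util.Map;

// Illustrative sketch: return a language name only when enough words on the
// page are recognized for that language; otherwise the page is skipped.
public class LanguageIdentifier {
    private final Map<String, String> bagOfWords;   // word -> language (hypothetical lexicon)
    private final int threshold;                     // minimum number of matching words required

    public LanguageIdentifier(Map<String, String> bagOfWords, int threshold) {
        this.bagOfWords = bagOfWords;
        this.threshold = threshold;
    }

    /** Returns the dominant Indian language of the page text, or null if below threshold. */
    public String identify(String pageText) {
        Map<String, Integer> counts = new HashMap<>();
        for (String word : pageText.split("\\s+")) {
            String lang = bagOfWords.get(word);
            if (lang != null) {
                counts.merge(lang, 1, Integer::sum);
            }
        }
        String best = null;
        int bestCount = 0;
        for (Map.Entry<String, Integer> e : counts.entrySet()) {
            if (e.getValue() > bestCount) {
                best = e.getKey();
                bestCount = e.getValue();
            }
        }
        return bestCount >= threshold ? best : null;   // below threshold: treat page as not relevant
    }
}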
3.3
Transcoding
Since Indian language content is being published in multiple
encodings on the web, transliteration of encodings to
a popular standard such as Unicode [15] is needed. In order
to transliterate a non-UTF-8 encoding into UTF-8 which is
a Unicode based encoding one has to come up with byte
sequence mappings between source and target encodings.
Such mappings are rarely one to one mappings, and involve
many to one, one to many and many to many mappings of
byte sequences. As it was explained in the beginning of this
paper, a sequence of bytes represent a sequence of glyphs
of a font, which in turn could render a single character or
a ligature in the Indic script. Ideally, mappings are to be
created for all the unique characters in the language, which
could be a large number, on the order of tens of thousands.
Since it would be tedious to list out all the characters and
ligatures, we make use of the large number of documents
collected by the crawler to come up with a semi-automatic
process of generating mappings.
We use a simple heuristic to identify the potential character
boundaries from byte sequences. First the text from the
collected web pages is divided into words using a suitable
word tokenizer. Then the algorithm lists all the possible
word beginning bytes in both the source and target font encodings
. Now each word is scanned from left to right until
one such byte occurs in the word. Whenever a valid word
beginner occurs in the word, we tokenize at that point, and
the byte sequence till that point is treated as a potential
character. For example in a given encoding if all the possible
word beginning bytes are `a', `b' and `c', a new word
`cars' is tokenized as `c', `ars', since neither `r' nor `s' are
Figure 2: Transcoding from Jagran encoding to UTF-8
valid word beginners. The byte sequences thus obtained by
segmentation are potential characters or ligatures in that
language.
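A minimal Java sketch of the segmentation heuristic just described, assuming the corpus words of one encoding are available as byte arrays; the class and method names are ours, not the authors'.

import java.util.*;

// Illustrative sketch of the character-boundary heuristic: collect all bytes that
// can begin a word, then split each word just before any later occurrence of such
// a byte. The resulting segments are treated as potential characters/ligatures.
public class ByteSegmenter {

    /** Collect the set of word-initial bytes observed in the corpus. */
    public static Set<Byte> wordBeginners(List<byte[]> words) {
        Set<Byte> beginners = new HashSet<>();
        for (byte[] w : words) {
            if (w.length > 0) beginners.add(w[0]);
        }
        return beginners;
    }

    /** Split a word into potential characters at every valid word-beginner byte. */
    public static List<byte[]> segment(byte[] word, Set<Byte> beginners) {
        List<byte[]> segments = new ArrayList<>();
        int start = 0;
        for (int i = 1; i < word.length; i++) {
            if (beginners.contains(word[i])) {              // a byte that can start a word
                segments.add(Arrays.copyOfRange(word, start, i));
                start = i;
            }
        }
        segments.add(Arrays.copyOfRange(word, start, word.length));
        return segments;
    }

    /** Normalized frequencies of the potential characters over the whole corpus. */
    public static Map<String, Double> frequencies(List<byte[]> words, Set<Byte> beginners) {
        Map<String, Integer> counts = new HashMap<>();
        int total = 0;
        for (byte[] w : words) {
            for (byte[] seg : segment(w, beginners)) {
                counts.merge(Arrays.toString(seg), 1, Integer::sum);
                total++;
            }
        }
        Map<String, Double> normalized = new HashMap<>();
        for (Map.Entry<String, Integer> e : counts.entrySet()) {
            normalized.put(e.getKey(), e.getValue() / (double) total);
        }
        return normalized;
    }
}

With beginners {a, b, c}, the word "cars" splits into "c" and "ars", matching the example in the text.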
Once such segmentation is done, the frequency of such
byte sequences (or potential characters) is calculated. It was
found from our experiments that the ranks based on the normalized
frequency of such potential characters are highly correlated
across encodings of the same language (we present more details in the experiments section).
Therefore we use this algorithm to come up with initial suggested
mappings for transcoding, and then the user manually
corrects any errors by going through the font mappings
as shown in Figure 2. The transcoding tool sorts the
potential characters according to their ranks, so that the
user would find the equivalent match in the target encoding
among the top few possibilities. Also since the mappings
are ordered based on the normalized frequency found in the
corpus, mapping source and target bytes in this order ensures
optimal precision that can be obtained from a set of
mappings.
Once such transcoder mappings are generated for all possible
encodings in Indian languages, a transcoding module
is called during indexing of the web documents. If a web
document is encoded in an encoding other than UTF-8,
the transcoder module is called to transliterate the encoding
of the given web page into UTF-8 standard. In order
to do this, the HTML page is parsed to obtain its document
object model (DOM) using the JTidy utility¹. All the
nodes of type "font" are extracted from the DOM and the
font encoding is checked against a known set of encodings
on the web. Based on the font encoding, the appropriate
transcoder mappings are used to transliterate the relevant
text into UTF-8. One word is transcoded at a time. In order
to transcode, the maximum byte sequence available in the
mapping table is used to transliterate the encodings and the
process is repeated to the remaining substring of the word.
This transliterated document is then sent to the indexer to
build the inverted index.
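The per-word transliteration step can be sketched as below, assuming the mapping table from source-encoding byte sequences (represented here as strings) to UTF-8 text has already been built with the tool described above; the greedy longest-match loop follows the description in the text, while the fallback for unmapped bytes is our own assumption.

import java.util.Map;

// Illustrative sketch: transliterate one word by repeatedly consuming the longest
// source byte sequence that has an entry in the mapping table.
public class Transcoder {
    private final Map<String, String> mapping;  // source byte sequence -> UTF-8 text
    private final int maxKeyLength;

    public Transcoder(Map<String, String> mapping) {
        this.mapping = mapping;
        int max = 1;
        for (String key : mapping.keySet()) max = Math.max(max, key.length());
        this.maxKeyLength = max;
    }

    public String transcodeWord(String word) {
        StringBuilder out = new StringBuilder();
        int pos = 0;
        while (pos < word.length()) {
            boolean matched = false;
            // Try the longest possible source sequence first, then shorter ones.
            for (int len = Math.min(maxKeyLength, word.length() - pos); len >= 1; len--) {
                String target = mapping.get(word.substring(pos, pos + len));
                if (target != null) {
                    out.append(target);
                    pos += len;
                    matched = true;
                    break;
                }
            }
            if (!matched) {             // unmapped byte: copy it through unchanged (our assumption)
                out.append(word.charAt(pos));
                pos++;
            }
        }
        return out.toString();
    }
}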
3.4
Retrieval Algorithm
The score of query q for document d is defined in terms
of TFIDF [13] metric as shown below:
score(q, d) = c(q, d) · q_n(q) · Σ_{t in q} tf(t in d) · idf(t)
¹ JTidy is a Java implementation of Dave Raggett's HTML Tidy; it can be found at http://jtidy.sourceforge.net.
3.4.1
tf (term frequency)
`tf' (also known as term frequency) is a score factor based
on a term or phrase's frequency in a document. Terms and
phrases repeated in a document indicate the topic of the
document, so implementations of this score usually return
larger values when frequency is large, and smaller values
when frequency is small.
3.4.2
idf (inverse document frequency)
`idf' is a score factor based on a term's document frequency
(the number of documents which contain the term).
Terms that occur in fewer documents are better discriminators
of topic, so implementations of this method usually
return larger values for rare terms, and smaller values for
common terms.
3.4.3
c (coverage of query terms)
`c' is a score factor based on the fraction of all query
terms that a document contains. This value is multiplied
into scores. The presence of a large portion of the query
terms indicates a better match with the query, so implemenations
of this function usually return larger values when the
ratio between these parameters is large and smaller values
when the ratio between them is small.
3.4.4
q_n (query normalization)
This is the normalization value for a query given the sum
of the squared weights of each of the query terms. This value
is then multiplied into the weight of each query term.
This does not affect ranking, but rather just attempts to
make scores from different queries comparable.
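For illustration, the scoring function above might be implemented as in the following sketch; the particular tf and idf variants and the document-statistics interface are our own assumptions and need not match the deployed system.

import java.util.List;
import java.util.Map;

// Illustrative sketch of score(q, d) = c(q, d) * q_n(q) * sum_t tf(t in d) * idf(t).
public class Scorer {
    private final int totalDocs;
    private final Map<String, Integer> docFreq;   // term -> number of documents containing it

    public Scorer(int totalDocs, Map<String, Integer> docFreq) {
        this.totalDocs = totalDocs;
        this.docFreq = docFreq;
    }

    double tf(String term, Map<String, Integer> termCounts) {
        return Math.sqrt(termCounts.getOrDefault(term, 0));      // one common tf variant
    }

    double idf(String term) {
        int df = docFreq.getOrDefault(term, 0);
        return Math.log((double) totalDocs / (df + 1)) + 1.0;    // one common idf variant
    }

    /** c(q, d): fraction of query terms present in the document. */
    double coverage(List<String> query, Map<String, Integer> termCounts) {
        long present = query.stream().filter(termCounts::containsKey).count();
        return query.isEmpty() ? 0.0 : (double) present / query.size();
    }

    /** q_n(q): normalization over the squared idf weights of the query terms. */
    double queryNorm(List<String> query) {
        double sumSquares = query.stream().mapToDouble(t -> { double w = idf(t); return w * w; }).sum();
        return sumSquares == 0.0 ? 0.0 : 1.0 / Math.sqrt(sumSquares);
    }

    public double score(List<String> query, Map<String, Integer> termCounts) {
        double sum = query.stream().mapToDouble(t -> tf(t, termCounts) * idf(t)).sum();
        return coverage(query, termCounts) * queryNorm(query) * sum;
    }
}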
3.5
User Interface
Currently there is no easy means to key in UTF-8 queries
to the search engine using a normal keyboard. So WebKhoj
is provided with a soft keyboard which displays the
UTF-8 character set of the language on the screen. The
layout of the keys is very similar to the Keylekh layout [9].
We also tried providing a roman to local language transliteration
keyboard which dynamically renders Indian language
text when its phonetic equivalent is typed using roman characters
. We had student volunteers from a nearby village
try out the keyboards. However, we found that the students
who are taught in the local language in schools are
not comfortable with English symbols. Also, within the local
language, the way symbols are taught in schools is much
Figure 3: Hindi soft keyboard user interface for WebKhoj search engine
Figure 4: Search results being displayed for a Hindi query in UTF-8
different from the way UTF-8 characters need to be typed
in. However, with some training these students were able to
adapt to the soft keyboard.
Currently soft keyboards for 10 Indian languages are provided
in the searching interface. One language is shown to
the user at any given instance. The user can change the
keyboard to a different language by clicking on the desired
language hyperlink displayed on the interface as shown in
Figure 3. After thus framing the query, the user can search
for the web documents, and the results are ranked and displayed
much like Google as shown in Figure 4.
3.6
Word spelling normalization
Indian language words face standardization issues in spelling,
thereby resulting in multiple spelling variants for the same
word. For example we found widely used spelling variations
for the Hindi word `angrezi' as shown below
The major reasons for this phenomenon can be attributed
to the unavailability of proper website authoring tools equipped
with spell checkers for Indian languages, to multiple dialects
of the spoken language, and to the transliteration of proper names
and words borrowed from foreign languages whose spellings
are not standardized. While we have to handle Indian language
words with spelling variations and errors, we also
showed that a considerable percentage of foreign-language
words, mainly English, have entered Indian language usage
and cannot be ignored. While such words are
frequently used by people, there is no standardization in
their spelling, resulting in huge variations
due to transliteration. Given such variations in spelling, it
becomes difficult for web Information Retrieval applications
built for Indian languages, since finding relevant documents
would require more than performing an exact string match.
It was shown that normalization rules for specific languages
work best with spelling normalization problems. We make
use of a set of rules [12] to normalize the words before indexing
them or looking them up from the index. These rules
are language specific and we describe the rules for Hindi in
the next sub-sections. We achieve normalization of word
spellings by mapping the alphabet of the given language L
into another alphabet L where L L. We use the following
rules to achieve such a normalized mapping.
3.6.1
Mapping
chandrabindu
to
bindu
Often people tend to use chandrabindu (a half-moon with
a dot) and bindu (a dot on top of alphabet) interchangeably.
Lots of confusion exists in common language usage on which
to use when. In order to equate all such words we convert all
occurrences of chandrabindu to bindu, which would equate
all the words shown below.
3.6.2
nukta
deletion
Unicode contains 10 consonant characters with nukta (a
dot under the consonant) and one nukta character itself. We
delete all occurrences of the nukta character and replace all
consonants with nukta by their corresponding plain consonant
character. This would equate words like the ones shown below.
3.6.3
halanth
deletion
Hindi and many other Indian languages face the problems
of 'schwa' (the default vowel 'a' that occurs with every
consonant) deletion. Lots of spelling variations occur due to
'schwa' deletion. In order to normalize such words we delete
all the halanth characters in the given word before making
a string match. This operation would normalize words as
shown in the example below.
3.6.4
vowel shortening
Many times in written script people use shorter vowels instead
of longer ones or vice versa. Therefore in our application
we convert all the longer vowels to their corresponding
shorter ones. Using this feature we can normalize words as
shown in this example.
3.6.5
chandra
deletion
'chandra' (half-moon) is used for vowel rounding. Usually
words borrowed from English at times require vowel rounding
operation. For example the word "documentary". But
this character is used inconsistently many times. Therefore
deleting such a character would normalize the words where
vowel rounding has been used.
These rules were compared with many approximate string
matching algorithms and were found to result in a better F-measure
[12].
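A minimal sketch of the Hindi normalization rules above; the Devanagari code points used here reflect our reading of the rules (candrabindu U+0901, anusvara U+0902, nukta U+093C, virama U+094D, candra E U+0945, and the long/short vowel pairs) and may need adjustment against the full rule set in [12].

import java.text.Normalizer;

// Illustrative sketch of the Hindi spelling-normalization rules described above.
public class HindiNormalizer {

    public static String normalize(String word) {
        // Decompose so that precomposed nukta consonants become base consonant + nukta sign.
        String s = Normalizer.normalize(word, Normalizer.Form.NFD);

        StringBuilder out = new StringBuilder(s.length());
        for (int i = 0; i < s.length(); i++) {
            char c = s.charAt(i);
            switch (c) {
                case '\u0901':                 // candrabindu -> bindu (anusvara)
                    out.append('\u0902');
                    break;
                case '\u093C':                 // nukta: delete
                case '\u094D':                 // halanth (virama): delete
                case '\u0945':                 // candra sign used for vowel rounding: delete
                    break;
                case '\u0940':                 // long vowel sign ii -> short i
                    out.append('\u093F');
                    break;
                case '\u0942':                 // long vowel sign uu -> short u
                    out.append('\u0941');
                    break;
                case '\u0908':                 // independent long I -> short I
                    out.append('\u0907');
                    break;
                case '\u090A':                 // independent long U -> short U
                    out.append('\u0909');
                    break;
                default:
                    out.append(c);
            }
        }
        return out.toString();
    }
}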
EXPERIMENTS AND DISCUSSION
We report here some experiments that were conducted
in transcoding the proprietary encodings and present some
statistics from our language focused crawl about the Indian
language web.
The transcoding tool was designed to generate mappings
between two encodings in a semi-automatic fashion. In order
to achieve this the tool automatically gives some mapping
suggestions based on the rank correlation of the two
encodings in question. We found that the byte sequences
from two encodings of same language correlate very well, by
looking at the Spearman's rank correlation coefficient. In-tuitively
this phenomenon can be understood as the convergence
of unique lexicon from two encodings from sufficiently
large corpus, since they both belong to the same language.
To find the amount of correlation, we experimented with
two different encodings from Hindi. We ran the character
segmentation algorithm and computed the normalized
frequencies as mentioned above and ranked the character
sequences in both the encodings from a corpus of 2,000 documents
from each of these encodings. We manually marked
the corresponding frequency based rank positions of a given
character or a ligature from these encodings and calculated
the Spearman's rank correlation coefficient. We then plotted
a graph with the Spearman's correlation coefficient on
y-axis and the number of mappings on x-axis as shown in
Figure 5. We observed that the rank correlation is 100% for
the first 25 mappings that were automatically generated,
and are close to 90% for the first 200 mappings which can
achieve a transcoding precision of above 90%.
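The correlation itself can be computed as in the sketch below, assuming the manually aligned, tie-free rank positions of the top-k segments in the two encodings are available as arrays; the sample ranks in main() are hypothetical.

// Illustrative sketch: Spearman's rank correlation over the first k aligned
// character/ligature ranks of two encodings (assumes no tied ranks).
public class RankCorrelation {

    /** ranksA[i] and ranksB[i] are the frequency-based ranks of the same character
     *  in the two encodings, for the top i = 0..k-1 mappings. */
    public static double spearman(int[] ranksA, int[] ranksB) {
        int n = ranksA.length;
        if (n != ranksB.length || n < 2) {
            throw new IllegalArgumentException("need two equal-length rank arrays");
        }
        double sumSquaredDiff = 0.0;
        for (int i = 0; i < n; i++) {
            double d = ranksA[i] - ranksB[i];
            sumSquaredDiff += d * d;
        }
        return 1.0 - (6.0 * sumSquaredDiff) / (n * ((double) n * n - 1));
    }

    public static void main(String[] args) {
        int[] jagran   = {1, 2, 3, 4, 5};
        int[] webdunia = {1, 2, 3, 5, 4};   // hypothetical rank positions
        System.out.println(spearman(jagran, webdunia));  // close to 1.0 for well-correlated ranks
    }
}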
Figure 5: Spearman's rank correlation for number of byte sequences between Jagran and Webdunia font encodings (rank correlation coefficient, 0 to 1, plotted against the number of byte sequences, 0 to 200).
Since these byte sequences are an ordered set, ordered
by their normalized frequency, the precision of transliteration
obtained by providing mappings between encodings in
the order provided in the ordered set is optimal. We have
observed that with about 2,000 encoding mappings for each
encoding on average once can achieve around 99% precision.
However this number also depends on the language complexity
. For instance, the number of encodings required in
Telugu transliteration is more than the number of encodings
required in Hindi to obtain the same amount of precision.
We now report some of our experiments on the Indian language
focused crawling. We ran a daily crawl for 6 months
period. Our crawler was focused to fetch content in the
top 10 spoken languages in India, namely Hindi, Telugu,
Tamil, Bengali, Marathi, Gujarati, Kannada, Malayalam,
Oriya and Punjabi. In another experiment, in order to find
the effectiveness of language focused crawling, we executed
the crawler in two modes with a set of 100 seed URLs which
constitute popular Indian based web portals, news sites and
home pages of people of Indian origin. In the first mode
it was executed without language focus restriction using a
pure FIFO crawl queue while the second mode was with language
focus restriction using a priority queue from which the
crawler fetched the next crawl URL. We plotted the number
of relevant pages fetched in the first 50,000 URLs in both the
runs as shown in the Figure 6. The relevance of the fetched
pages was calculated by checking the encoding on that page.
It can be clearly seen that language focus restriction on the
crawler helps in downloading more relevant pages.
From the 6 month crawl, about half a million unique documents
were collected from all the languages. Unique web
pages were picked after eliminating approximate duplicate
pages using the shingling technique [4]. These half a million
pages were distributed across the 10 languages as shown
in Figure 7. Figure 8 shows the population of people
speaking the various Indian languages [3]. It can be observed
that even within India there is a divide in the web
publishing activity in various languages. For instance it can
be observed that the content is actively getting published in
south Indian languages like Telugu, Tamil and Malayalam
when compared to the northern languages such as Marathi,
Gujarati, Oriya, Bengali and Punjabi. Hindi has the majority
of content published on the web, but Hindi is also the
language spoken by the majority of the Indian population.
It can be seen from Figure 10 that very few websites
publish content using a global standard such as Unicode.
This explains why most of the Indian language content is
not indexed or searchable by present-day popular
web search engines. On the other hand, it can be seen from
Figure 9 and Figure 11 that the number of unique encodings
found on the web for these languages is almost equivalent
to the number of websites. This observation suggests that
every web publisher is coming up with their own proprietary
encodings to publish web content. We did not consider the
websites that publish using images in this study, but our
preliminary study suggests that there are a large number of
websites that publish content as images as well.
Figure 6: Crawl with and without language focus
Figure 7: Languages on x-axis and number of unique
web pages on y-axis
Figure 8: Languages on x-axis and number of native
speakers on y-axis
Figure 9: Languages on x-axis and number of encodings
found on web including UTF-8 on y-axis
Figure 10: Languages on x-axis and number of UTF-8
websites on y-axis
CONCLUSIONS
In this paper we discussed the importance of being able
to search Indian language web content and presented a
web search engine which takes UTF-8 queries from a
soft keyboard and is capable of searching web pages in the
10 most widely spoken Indian languages, encoded in multiple encodings.
We presented a language focused crawler which can fetch
web pages of specific languages and also the distribution of
Figure 11: Languages on x-axis and number of websites
(web servers) on y-axis
the Indian language content on the web based on the pages that
were crawled. This distribution clearly shows the need for
processes and algorithms to transcode non-Unicode encodings
to Unicode. Hence we have discussed a semi-automatic
algorithm to generate the mappings between different encodings
. This shows that transcoding of proprietary encodings
into a standard encoding makes Indian language web
content accessible through search engines.
ACKNOWLEDGMENTS
We would like to thank the Department of Science and
Technology, Ministry of Communications and IT, Government
of India for funding this project.
REFERENCES
[1] J. Allan, J. Aslam, N. Belkin, C. Buckley, J. Callan,
B. Croft, S. Dumais, N. Fuhr, D. Harman, D. J.
Harper, D. Hiemstra, T. Hofmann, E. Hovy,
W. Kraaij, J. Lafferty, V. Lavrenko, D. Lewis,
L. Liddy, R. Manmatha, A. McCallum, J. Ponte,
J. Prager, D. Radev, P. Resnik, S. Robertson,
R. Rosenfeld, S. Roukos, M. Sanderson, R. Schwartz,
A. Singhal, A. Smeaton, H. Turtle, E. Voorhees,
R. Weischedel, J. Xu, and C. Zhai. Challenges in
Information Retrieval and Language Modeling:
Report of a Workshop held at the Center for
Intelligent Information Retrieval, University of
Massachusetts Amherst, September 2002. SIGIR
Forum, 37(1):31-47, 2003.
[2] A. Arasu, J. Cho, H. Garcia-Molina, A. Paepcke, and
S. Raghavan. Searching the Web. ACM Trans. Inter.
Tech., 1(1):2-43, 2001.
[3] G. B. 14th ed. Ethnologue: Languages of the World.
SIL International, Dallas, TX, 2003.
[4] S. Brin, J. Davis, and H. Garcia-Molina. Copy
Detection Mechanisms for Digital Documents. In
SIGMOD '95: Proceedings of the 1995 ACM SIGMOD
International Conference on Management of Data,
pages 398-409, New York, NY, USA, 1995. ACM
Press.
[5] G. E. Burkhart, S. E. Goodman, A. Mehta, and
L. Press. The Internet in India: Better times ahead?
Commun. ACM, 41(11):21-26, 1998.
[6] S. Chakrabarti, K. Punera, and M. Subramanyam.
Accelerated Focused Crawling through Online
Relevance Feedback. In WWW '02: Proceedings of the
11th International Conference on World Wide Web,
pages 148-159, New York, NY, USA, 2002. ACM
Press.
[7] F. Gey, N. Kando, and C. Peters. Cross Language
Information Retrieval: A Research Roadmap. SIGIR
Forum, 36(2):72-80, 2002.
[8] Y. Haralambous and G. Bella. Injecting Information
into Atomic Units of Text. In DocEng '05:
Proceedings of the 2005 ACM Symposium on
Document Engineering, pages 134-142, New York,
NY, USA, 2005. ACM Press.
[9] A. Joshi, A. Ganu, A. Chand, V. Parmar, and
G. Mathur. Keylekh: a Keyboard for Text Entry in
Indic Scripts. In CHI '04: CHI '04 Extended Abstracts
on Human Factors in Computing Systems, pages
928-942, New York, NY, USA, 2004. ACM Press.
[10] L. S. Larkey, M. E. Connell, and N. Abduljaleel. Hindi
CLIR in thirty days. ACM Transactions on Asian
Language Information Processing (TALIP),
2(2):130-142, 2003.
[11] D. P. Madalli. Unicode for Multilingual
Representation in Digital Libraries from the Indian
Perspective. In JCDL '02: Proceedings of the 2nd
ACM/IEEE-CS Joint Conference on Digital Libraries,
pages 398-398, New York, NY, USA, 2002. ACM
Press.
[12] P. Pingali and V. Varma. Word Normalization in
Indian Languages. In ICON05: Proceedings of the
2005 International Conference on Natural Language
Processing, 2005.
[13] G. Salton and C. Buckley. Term-weighting Approaches
in Automatic Text Retrieval. Information Process.
Management, 24(5):513-523, 1988.
[14] S. Strassel, M. Maxwell, and C. Cieri. Linguistic
Resource Creation for Research and Technology
Development: A Recent Experiment. ACM
Transactions on Asian Language Information
Processing (TALIP), 2(2):101-117, 2003.
[15] F. Yergeau. UTF-8, a transformation format of ISO
10646. RFC Editor, United States, 2003.
| web search;Indian languages;non-standard encodings |
215 | What's There and What's Not? Focused Crawling for Missing Documents in Digital Libraries | Some large scale topical digital libraries, such as CiteSeer, harvest online academic documents by crawling open-access archives, university and author homepages, and authors' self-submissions. While these approaches have so far built reasonable size libraries, they can suffer from having only a portion of the documents from specific publishing venues. We propose to use alternative online resources and techniques that maximally exploit other resources to build the complete document collection of any given publication venue. We investigate the feasibility of using publication metadata to guide the crawler towards authors' homepages to harvest what is missing from a digital library collection. We collect a real-world dataset from two Computer Science publishing venues, involving a total of 593 unique authors over a time frame of 1998 to 2004. We then identify the missing papers that are not indexed by CiteSeer. Using a fully automatic heuristic-based system that has the capability of locating authors' homepages and then using focused crawling to download the desired papers, we demonstrate that it is practical to harvest using a focused crawler academic papers that are missing from our digital library. Our harvester achieves a performance with an average recall level of 0.82 overall and 0.75 for those missing documents. Evaluation of the crawler's performance based on the harvest rate shows definite advantages over other crawling approaches and consistently outperforms a defined baseline crawler on a number of measures. | INTRODUCTION
Digital libraries that are based on active crawling methods, such as
CiteSeer, often have missing documents from collections of archived
publications, such as those of the ACM and IEEE. How do such digital
libraries find and obtain those missing documents? We propose using
external resources of publication metadata and focused crawlers
to search the Web for them.
The basic concept of a focused crawler (also known as a topical
crawler) [1] is based on the premise that relevant Web
pages contain more relevant links, and that these relevant links should
be explored first. Initially, the measure of relevancy was based on
keyword matching; connectivity-based metrics were later
introduced [2]. In [3] the concept of a focused crawler was
formally introduced: a crawler that seeks, acquires, indexes, and
maintains pages on a specific set of topics that represent a
relatively narrow segment of the Web.
Today, focused crawling techniques have become more important
for building specialty and niche (vertical) search engines. While
both the sheer volume of the Web and its highly dynamic content
increasingly challenge the task of document collection, digital
libraries based on crawling benefit from focused crawlers since
they can quickly harvest a high-quality subset of the relevant
online documents.
Current approaches to harvesting online academic documents
normally consist of focused crawling of open-access archives,
author and institution web sites and directories of authors' self-submissions
. A random sample of 150 journals and conferences in
Computer Science shows that less than 10% have websites that are
open to crawlers. Many of the top publishing venues that have
their documents electronically available to subscribers such as the
ACM Digital Library, the IEEE Digital Library, or the Springer-Verlag
Digital Library, normally use access permission
techniques and robots.txt to ban crawlers. A recent study indicates
that CiteSeer indexes 425,000 unique research documents related
to Computer Science, DBLP contains 500,464 records and there
are 141,345 records in the Association for Computing Machinery
(ACM) Digital Library and 825,826 records in the more
comprehensive ACM Guide [4]. The study also shows that in
CiteSeer there is an overlapping portion of 86,467 documents
(20.2% of CiteSeer's total archive) comprising 17.3% of the
Digital Bibliography & Library Project (DBLP) archive.
This research investigates alternative online resources and
focused crawling techniques to build a complete document
collection for any given publication venue. We propose to answer
the following:
Q1 - What are the best focused crawling techniques to maximally
exploit online resources, in order to harvest the desired papers
effectively and efficiently?
Q2 - Is it effective to use authors' homepages as alternative
online resources to find the missing documents?
Q3 - How can the above methods be automated to effectively
obtain missing documents?
The rest of the paper is organized as follows. In section 2 we
present a review of related work. In Section 3 we cover in much
detail the design rationale of the system. In Section 4 we describe
how we collect data and perform the evaluation, and present the
results with discussion. Finally, we conclude the paper with future
work proposed in Section 5.
RELATED WORK
The focused crawling literature shows that much has been focused
on enhancing the dynamic performance, scalability, effectiveness,
and efficiency of the crawler, namely, harvesting higher-quality
documents in a shorter period of time.
Breadth-first searching is probably the simplest strategy for
crawling, i.e., traversing the Web as a directed graph using a
breadth-first search algorithm. Interestingly, a
breadth-first crawler is found to be capable of yielding high-quality
documents at an early stage of the crawl [5]. Although
more sophisticated crawlers tend to retrieve even higher quality
pages than their breadth-first counterparts, they are usually
computationally more expensive. In our study, we use a multi-threaded
breadth-first crawler as a baseline to compare to our own
crawling method.
Best-first crawling attempts to direct the crawler towards the best
(i.e. most relevant in terms of topic relevance) documents.
Different heuristics, such as link-based criteria, lexical similarity
measures, contextual knowledge, and fine-tuned combinations of
such have been explored in a number of studies over the years. In
[2], the authors find that PageRank [6] can yield the best
performance when ordering seed URLs. However, a more recent
study [7] shows that PageRank metrics may just be too general in
context without regard to the specific target topic. An updated
version of PageRank algorithm which reflects the importance with
respect to a particular topic has been proposed [8].
In [3], a Bayesian classifier is used to estimate the probability that
a page belongs to the target topic, in a way that a node belongs to
a certain position in an existing taxonomy hierarchy. In [9], a
keyword-based vector space model is used to calculate the
similarity of Web pages to the seed URLs, and if the similarity is
above a certain threshold, the pages are downloaded and indexed,
and their out-going links are followed.
A focused crawler based on context graphs is proposed in [10], so
that the crawler can extract information about the context within
which desired documents are usually found. A set of classifiers
is then trained to classify in-degree Web pages according to an
estimation of their relevance to the topic. The relevance
estimation then navigates the crawler towards desired documents.
Crawlers with a probability model are used for calculating
priorities, which combines Web page content-based learning,
URL token-based learning, and link-based learning [11]. In a later
work, [12] takes into account the users' access behavior and re-tunes
the previous model to connect this behavior with the
predicate satisfaction probability of the candidate Web pages
waiting to be crawled.
An interesting "reversed" approach is proposed in [13], which
suggests a given scientific document from a digital library be used
as an input to the focused crawler. The main title and reference
titles of the document are extracted and used to train a classifier to
learn topical knowledge. The crawler is then guided by such
knowledge to discover other topic-relevant documents on the Web.
More up-to-date reviews of focused crawling algorithms are
presented in [14] and [15]. In [14], five different methods are
implemented and evaluated within a unified evaluation
framework on small and large datasets.
Here we discuss two studies that bear similarities to ours. The
HPSearch and Mops presented in [16] support the search for
research papers close to the homepages of certain scientists.
However, their system does not investigate the issues of document
harvesting for digital libraries for different publishing venues.
Furthermore, our system outperforms theirs in terms of the
percentage of correct homepages returned. In a more recent study
[17], a Paper Search Engine (PaSE) is proposed, which uses
citation information to locate online copies of scientific
documents. While their study addresses a different research
question, the PaSE system employs similar heuristics as we do to
favor certain out-going links in order to quickly locate academic
papers.
SYSTEM DESIGN
We develop an automated system in which document metadata is
used to automatically locate the homepages of the authors and
focused crawl these homepages with the intent of finding missing
documents. Our system, shown in Figure 1, consists of a
Homepage Aggregator and a smart Focused Crawler.
The system accepts a user's request to harvest the desired papers
published in a specific venue (e.g. a conference or a journal). The
Homepage Aggregator will query a Public Metadata Repository
and extract useful metadata heuristics to assist in quickly and
accurately locating URLs of the authors' homepages. A list of
such URLs will be inserted into the Homepage URL Database.
The Crawler uses focused crawling techniques to search the
domains for desired publications. It accepts the seed URLs as an
input and uses them as starting points for the crawl. The Crawler
uses anchor text to determine link priorities and quickly navigates
through the websites to get to the desired academic papers.
The harvested documents will be stored in the Document
Database.
Figure 1. System Architecture
3.2 Using Metadata to Locate Homepages
Crawling authors' homepages first requires the system to be able
to locate such websites quickly and accurately. A study of the
literature indicates that personal website and homepage finding
have been studied extensively since the birth of the WWW. In [18], the
authors present AHOY! as the first working system for personal
homepage finding, which can filter irrelevant pages based on
pattern matching heuristics. Later, TREC (the Text REtrieval
Conference) hosted the task of Web homepage finding in 2001
and subsequent years, and algorithms based on link analysis,
linguistic cues, machine learning, etc. were proposed [19, 20,
21]. Examples of current working systems include
HomePageSearch (hpsearch.uni-trier.de), which is a homepage
aggregator mainly for computer scientists, and compiled
directories (e.g. the Google Directory).
See Figure 2 for the architecture of the Homepage Aggregator
component.
Figure 2. Architecture of the Homepage Aggregator
The goal of the Homepage Aggregator is to look for homepages
of the authors and save them as seed URLs to feed the Focused
Crawler. First it queries the Metadata Repository and retrieves the
document metadata. For each author, it extracts from metadata a
value pair of (N, P), where N is the name of the author and P is
the name of the venue (with a number of variations) in which the
paper is published. A list of such pairs is then submitted to a Web
search engine. Pages returned by the search engine will go
through a Homepage Filter where we use metadata heuristics to
remove false positives (pages that are not likely to be the
homepages of the authors) and disambiguate among namesakes, if
there is any. Different priority weights are assigned to the
remaining pages according to their likelihood of being the
homepage of the author. The more likely it's the homepage of the
author, the higher priority it receives. Eventually the page with
the highest priority weights will be inserted into the Homepage
URL Database, and will be crawled later.
Recall that we extract from metadata a pair value of (N, P). Now
let U be the URL and T be the title of a Web page P returned by
the Web search engine. When there are more than two authors for
the same paper, assume Ui are the URLs of the homepages of
other authors already found by the system. We have incorporated
the findings in [16] about major characteristics of personal
homepages. The metadata heuristics employed in the Homepage
Filter are explained in Table 1.
Table 1. Heuristics Employed in Homepage Filter
Remove false positives:
- Remove U if U or T indicates a publisher's website.
- Remove U if U or T indicates a digital library.
- Remove U if U points to a file other than .htm/.html.
Disambiguate between namesakes:
- Choose U among the candidates if U is in the same domain as Ui.
- Remove U if its parent-domain is already found by the system.
Assign priority:
- U receives high priority if T contains N and any of the following: homepage (home page), web (website), research, publication, papers.
- U receives medium priority if T contains any of the following: homepage (home page), web (website), research, publication, papers.
- U receives low priority when neither of the above two rules fires.
3.3 Crawler Architecture
The Focused Crawler crawls web pages, using heuristics to
quickly navigate to the publications. The architecture of the
component is shown in Figure 3.
The crawler accepts two primary sets of inputs that vary for each
crawl. The first is a set of seed URLs that are the starting points of
the crawl. These are added to the crawl queue at low priority. The
second set of inputs is a collection of domain names that the
crawler is permitted to crawl.
Once the seed URLs are entered into the queue, the crawler
threads are started. Each thread gets one URL from the priority
queue, and downloads the page that it points to.
After a page is downloaded, the out-going links are examined and
those matched with the ignored list are removed, either because
they are out of the target domain or because their MIME types are
not processed by the crawler. At this point, if a PDF/PostScript
document is found, it will be inserted into the Document Database.
The rest of the out-going links will each be classified as high,
medium, or low priority, and inserted into different priority
queues.
Figure 3. Architecture of the Focused Crawler
In order to concentrate or limit the crawls towards only desirable
content, the crawler is provided with three lists for reference. The
contents of the lists may be changed depending on the types of
domains being crawled.
The Ignore List is a set of file types that are to be ignored by the
crawler. The most common types of URLs that are ignored by the
crawler are links to image files. The list can also include parts of
the domain(s) being crawled, which the crawler is not supposed to
visit. Table 2 shows a sample Ignore List.
Table 2. Sample Ignore List
File Types: .jpg, .bmp, .gif, .png, .jpeg, .mpg, .mpeg, .avi
Domains: http://clgiles.ist.psu.edu/picture.html, http://clgiles.ist.psu.edu/courses.html
Files of type JPG, BMP etc will be ignored during the crawl. Also
any outgoing links to pages within the ignored domains will not
be considered for crawling.
The Allow List on the other hand is a collection of domain names
that make up the crawl space of the crawler. Links pointing
outside the specified domains are ignored by the crawler (unless
they are determined to be research documents). This list is useful
to limit the breadth of the crawl to only those domains that are of
interest. Table 3 shows a sample Allow List.
Table 3. Sample Allow List
Domains: http://clgiles.ist.psu.edu
So the link http://clgiles.ist.psu.edu will be considered for
crawling if it's discovered.
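The Ignore and Allow lists might be consulted as in the following sketch before a discovered link is queued; the treatment of .pdf/.ps links as always acceptable follows the text above, while the class layout and prefix matching are our own simplifications.

import java.util.Locale;
import java.util.Set;

// Illustrative sketch: decide whether a discovered URL may be queued for crawling,
// based on the Ignore List (file types and banned URL prefixes) and the Allow List.
public class LinkFilter {
    private final Set<String> ignoredExtensions;  // e.g. ".jpg", ".gif", ".avi"
    private final Set<String> ignoredPrefixes;    // URL prefixes never to visit
    private final Set<String> allowedDomains;     // the crawl space of this run

    public LinkFilter(Set<String> ignoredExtensions, Set<String> ignoredPrefixes,
                      Set<String> allowedDomains) {
        this.ignoredExtensions = ignoredExtensions;
        this.ignoredPrefixes = ignoredPrefixes;
        this.allowedDomains = allowedDomains;
    }

    public boolean accept(String url) {
        String u = url.toLowerCase(Locale.ROOT);

        // Research documents are always of interest, even outside the allowed domains.
        boolean isDocument = u.endsWith(".pdf") || u.endsWith(".ps") || u.endsWith(".ps.gz");

        for (String ext : ignoredExtensions) {
            if (u.endsWith(ext)) return false;
        }
        for (String prefix : ignoredPrefixes) {
            if (u.startsWith(prefix)) return false;
        }
        if (isDocument) return true;
        for (String domain : allowedDomains) {
            if (u.startsWith(domain)) return true;
        }
        return false;   // outside the allowed crawl space
    }
}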
Priority lists contain a set of keywords and their assigned weights
that are used to determine the priorities of the extracted links. The
links will be visited by the crawler in the order of their assigned
priority.
The Crawl Queue holds the discovered URLs that are yet to be
crawled. This queue consists of three sub-queues: High-priority,
Medium-priority and Low-priority queue. The Low-priority queue
is the default queue. The seed URLs are entered into this queue.
We adopt a simple yet very effective heuristic to make the
priority classification, based upon the likelihood of the link
eventually leading to academic publications.
We first train a classifier with data collected from two publishing
venues: the Very Large Data Bases (VLDB) Conference and the
Text REtrieval Conference (TREC). Several crawls are carried
out with a breadth-first policy. The logs of the crawls are
analyzed and a traversal tree is generated for each of the crawls that
indicates the URLs visited and the link path that is followed by
the crawler to reach the desired publications.
Consider a small website having 11 pages as shown in Figure 4.
Figure 4. Sample Website
The circles represent URL's in the website and the arrows are the
hyperlinks from one page to another. The link structure shown is
that which is followed by the breadth-first crawler to visit each
URL. All other links such as those that may point outside the
domain are ignored in the above diagram.
The node marked with `S' is the seed or start URL. The nodes
marked with `P' are research document files that are detected by
the crawler. Now the links that are of interest to us are S→A→P
and S→C→D→P. The anchor text contained in these links (`SA',
`AP', `SC', `SD', `DP') is extracted and marked as `interesting'.
The anchor text of the remaining links is also noted, but goes into
the `not interesting' set.
Similar analysis is done on all the logs that are generated by the
breadth-first crawl. All the keywords that are commonly
occurring in the "interesting" class and not so commonly
occurring in the "non-interesting" class are extracted. Weights are
assigned to each of these keywords depending on their placement
in the link structure. The keywords closer to the documents are
given more weight than those closer to the seed URL. For example,
the keyword `SA' has a lesser weight than the keyword `DP', since `DP' is
closer to P whereas `SA' is closer to S.
The formula for calculating keyword weight is:
W(OT_OQ) = D(Q) / D(P)        (I)
where OT_OQ is the anchor text of the out-going link from page O
to page Q; P is the desired academic paper found by following the
link from O to Q; D(P) denotes the distance (number of hops)
between P and the starting URL S on the path
S→...→O→Q→...→P; D(Q) denotes the distance between Q
and the starting URL S on the path S→...→O→Q.
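Formula (I) applied over one crawl-log path might look like the sketch below; the path representation (a list of anchor texts from the seed to the document) and the max-aggregation across paths are our own assumptions.

import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Illustrative sketch of formula (I): for a path S -> ... -> O -> Q -> ... -> P
// ending at a research document P, the anchor text of the link O -> Q receives
// weight D(Q) / D(P), where D(.) is the number of hops from the starting URL S.
public class KeywordWeights {

    /** anchorTexts.get(i) is the anchor text on the link from hop i to hop i+1;
     *  the last link leads to the document P, so D(P) = anchorTexts.size(). */
    public static Map<String, Double> weightsForPath(List<String> anchorTexts) {
        Map<String, Double> weights = new HashMap<>();
        int depthOfP = anchorTexts.size();
        for (int i = 0; i < anchorTexts.size(); i++) {
            double depthOfQ = i + 1;                 // hop count of the link's target page
            double w = depthOfQ / depthOfP;          // W(OT_OQ) = D(Q) / D(P)
            // Keeping the largest weight seen across paths is our own aggregation choice.
            weights.merge(anchorTexts.get(i), w, Math::max);
        }
        return weights;
    }

    public static void main(String[] args) {
        // Hypothetical three-hop path ending at a paper: anchor texts from seed to document.
        System.out.println(weightsForPath(List.of("research", "publications", "paper-title")));
        // Anchor text closer to the document receives a higher weight than text near the seed.
    }
}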
Now that a list of anchor texts and their corresponding priority
weights has been compiled during the training process, we can
classify each of them into different priority categories according
to the weights. Table 4 shows a few samples extracted from our
list.
Table 4. Sample Anchor Texts
p_High: volume, pub, paper, conf, journal, content, program, research, list
p_Medium: topic, faculty, people, group, lab
We now need to consider how to prioritize out-going links that
are more likely to lead to desired academic publications. The
anchor text in these links is compared against the weighted
keywords. If any of the weighted keywords are present in the text,
the comparison is considered to be successful. There are no
keywords having more than one weight. The final priority of the
link is calculated by the following function.
The priority of a link may also depend on the priority of its parent.
This is mainly due to the fact that not all the links that emerge
from a page with a medium or high priority may lead to a research
document. For example, in Figure 4 the node `C' will be crawled with a
medium priority; however, only node `D' leads to a research
document. The priority of the node `E' is thus reduced to low, as it
will not have a weighted keyword attached to it and that of `D' is
increased to high. The priorities of links thus established are used
to insert the link in the proper priority queue for crawling. In
order to achieve high efficiency, the crawler spawns multiple
threads which will be fed with URLs on the descending order of
priority. When there is no URL left in the priority queues and no
crawler thread is currently running, the crawling task is finished.
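Putting the pieces together, the prioritized crawl might look like the following simplified, single-threaded sketch (the real crawler is multi-threaded and also performs duplicate-URL bookkeeping, which is omitted here); class names and the helper methods are illustrative placeholders.

import java.util.ArrayDeque;
import java.util.Deque;
import java.util.List;
import java.util.Map;

// Simplified sketch of the prioritized crawl loop described above.
// Priorities: LOW = 0, MEDIUM = 1, HIGH = 2, matching the Get_Priority pseudocode.
public class FocusedCrawlLoop {
    static final int LOW = 0, MEDIUM = 1, HIGH = 2;

    static final class Link {
        final String url;
        final String anchorText;
        Link(String url, String anchorText) { this.url = url; this.anchorText = anchorText; }
    }

    private final List<Deque<String>> queues =
            List.of(new ArrayDeque<>(), new ArrayDeque<>(), new ArrayDeque<>());
    private final Map<String, Integer> anchorWeights;   // trained anchor-text keyword -> weight

    public FocusedCrawlLoop(Map<String, Integer> anchorWeights, List<String> seedUrls) {
        this.anchorWeights = anchorWeights;
        queues.get(LOW).addAll(seedUrls);                // seed URLs start at low priority
    }

    /** Get_Priority: the anchor text's keyword weight, or the parent's priority minus one. */
    int priority(String anchorText, int parentPriority) {
        int w = 0;
        for (Map.Entry<String, Integer> e : anchorWeights.entrySet()) {
            if (anchorText.toLowerCase().contains(e.getKey())) w = Math.max(w, e.getValue());
        }
        return (w == 0 && parentPriority > 0) ? parentPriority - 1 : w;
    }

    public void crawl() {
        while (true) {
            int p = HIGH;
            while (p >= LOW && queues.get(p).isEmpty()) p--;
            if (p < LOW) break;                          // all queues empty: crawl finished
            String url = queues.get(p).poll();

            for (Link link : extractLinks(fetch(url))) {
                if (isDocument(link.url)) { store(link.url); continue; }
                queues.get(priority(link.anchorText, p)).add(link.url);
            }
        }
    }

    // Placeholders for the crawler's download, link-extraction and archiving steps.
    String fetch(String url) { return ""; }
    List<Link> extractLinks(String page) { return List.of(); }
    boolean isDocument(String url) { return url.endsWith(".pdf") || url.endsWith(".ps"); }
    void store(String url) { /* insert into the Document Database */ }
}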
RESULTS AND DISCUSSION
We have collected data from two Computer Science publication venues: the ACM SIGMOD International Workshop on the Web and Databases (WebDB), first held in 1998 and then each year in conjunction with the annual ACM SIGMOD Conference, and the Journal of Artificial Intelligence Research (JAIR), established in 1993 both as an electronic scientific journal and as a hard-copy semiyearly publication of AAAI Press. We chose these two venues because both are highly selective, with less than a 25% acceptance rate, and we want to observe whether there is a major difference in performance between conferences and journals.
We have extracted the metadata of WebDB and JAIR from the DBLP repository. By analyzing these metadata, we identified the 593 unique authors who have in total published 289 papers in either of these two venues during the period from 1998 to 2004. See Table 5 for more details of the dataset.
Table 5. Statistics of the collected data

         WebDB                          JAIR
Year     Unique Authors  Publications   Unique Authors  Publications
1998     32              13             40              20
1999     51              17             50              28
2000     61              20             33              20
2001     51              18             45              25
2002     47              17             64              27
2003     56              17             72              30
2004     51              16             57              21
Total    285             118            308             171
In order to examine whether our approach is effective in
recovering those missing documents from a digital library, we use
the CiteSeer Scientific Digital Library as another data source.
Cross-referencing the metadata of each of the two venues from
DBLP, we successfully identified 30 out of 118 (25.42%) WebDB
papers and 46 out of 171 (26.90%) JAIR papers that are not
indexed by CiteSeer (see Figure 5 for details). This is done by
exact title-matching between the records in the DBLP metadata
repository and the CiteSeer document archive.
Figure 5. Coverage of the two venues by CiteSeer (per-year stacked bars of WebDB and JAIR papers Indexed vs. NOT Indexed, 1998-2004).
// Get_Priority(): Returns the priority for link L_T with anchor text T, which has weight W_T.
// Low = 0, Medium = 1, High = 2 (for weight and priority)
Get_Priority {
    If W_T = 0 and Priority(Parent(L_T)) > 0 then
        Priority(L_T) = Priority(Parent(L_T)) - 1;
    Else if W_T > 0
        Priority(L_T) = W_T;
    End If
    Return (Priority(L_T));
}
The metadata extracted from DBLP are also used as heuristics to locate the homepages of the 593 authors. The name of the author
and the corresponding venue (with a number of variations) are
submitted to Google API and the first 10 URLs returned are
parsed automatically by the Homepage Filter component. Using
the heuristics discussed in the previous section, we assign priority
weights to each of the URLs. For each author, URLs with the
highest priority weights are inserted into the URL Database and
crawled by the Focused Crawler at a later stage.
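As a rough illustration of this aggregation step, the sketch below scores candidate homepage URLs returned by a search service; the search_top_urls helper and the heuristic weights are placeholders of our own and not the actual Homepage Aggregator.

def score_homepage_url(url, author_name):
    # Toy heuristics: favour personal pages (~name, /people/, /faculty/) and penalise documents.
    score = 0
    lowered = url.lower()
    last_name = author_name.split()[-1].lower()
    if last_name in lowered:
        score += 2
    if "~" in lowered or "/people/" in lowered or "/faculty/" in lowered:
        score += 2
    if lowered.endswith((".pdf", ".ps")):
        score -= 3  # a document, not a homepage
    return score

def aggregate_homepage(author_name, venue, search_top_urls):
    # search_top_urls(query, k) is assumed to return the top-k result URLs from a search API.
    candidates = search_top_urls('"%s" %s' % (author_name, venue), 10)
    return max(candidates, key=lambda u: score_homepage_url(u, author_name), default=None)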
We have manually examined the records in the URL Database in
order to evaluate the effectiveness of the Homepage Aggregator.
In total, homepages of 539 authors (90.89%) have been found.
Details about the 54 authors whose homepages cannot be found
by the system are shown in Table 6. Here we define Non-U.S.
authors to be those whose affiliations are not currently in the United States.
Table 6. Number of authors whose homepages are not found

                      WebDB          JAIR
U.S. Authors          13             6
Non-U.S. Authors      25             10
Total (Percentage)    38 (13.33%)    16 (5.19%)
There are only 2 papers ([22], [23]) for which none of the authors' homepages are found by the system, accounting for less than 1% of the 289 papers in our data set. In other words, although the system fails to locate the homepages of about 9% of the authors, this has little impact on document recall, and the crawler should still be able to find 99.31% of all the papers.
For the cases where the system fails to locate some of the homepages, we notice that most of the 19 U.S. authors whose homepages are not found were actually in their graduate programs when they co-authored the paper, and their Web presence seems to have disappeared after graduation. In addition, there is a significant difference between the numbers of U.S. and non-U.S. authors whose homepages cannot be found, with almost twice as many non-U.S. as U.S. authors. Since this is our initial attempt, limited to the domain of computer science, whether this difference holds for other disciplines, and the reason behind it, remain open questions. Finally, there are several cases where the homepages of people with famous names show up instead of those of the desired authors. For example, a search via the Google API for the first author of [24] returns the homepage of a comic artist. The top 5 websites returned for George Russell, the first author of [25], belong to a famous jazz musician. There are also a few cases where the search engine returns the homepage of a co-author instead of the author himself, because the author's name is listed on the co-author's page as a collaborator and the co-author's page receives a higher page ranking. All these cases indicate that the disambiguation capability needs to be improved.
4.1 Finding Desired Academic Publications
When the crawl is finished, we manually examine the
downloaded PDF/PostScript documents in order to evaluate the
performance of the crawler. In total, the crawler has acquired 236
out of the 289 papers (81.66%) published in WebDB (100 out of
118, 84.75%) and JAIR (136 out of 171, 79.53%) from 1998 to
2004. For details of the results for each venue, please see Figure 6
and 7.
Figure 6. Number of WebDB Papers (papers found vs. papers published per year, 1998-2004).
Figure 7. Number of JAIR Papers (papers found vs. papers published per year, 1998-2004).
Here we adopt one of the performance metrics, recall level, first
proposed in [16] and used in [17]. Recall level is defined as:
recall(i) = | S(i) ∩ T | / | T |
where S(i) is the set of documents downloaded by the crawler
during a crawl on the dataset of a calendar year i; T is the set of
desired documents, which in this study are the papers published
by a specific venue in the same calendar year. This measure
represents the capability of the system to capture desired
academic papers.
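For concreteness, the measure can be computed directly from the two sets; the sketch below assumes papers are identified by title strings, which is one possible representation rather than the paper's own.

def recall_level(downloaded, desired):
    # recall(i) = |S(i) intersect T| / |T| for one calendar year i.
    desired = set(desired)
    if not desired:
        return 0.0
    return len(set(downloaded) & desired) / len(desired)

# e.g. 100 WebDB papers found out of 118 published gives roughly 0.8475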
Overall, our system has achieved a recall level of 0.8475 for
WebDB and 0.7953 for JAIR documents. See Figure 8 for more
details.
It is interesting to note that while the recall level of WebDB increases steadily until reaching 1.0 in the last two years, the recall level of JAIR fluctuates around 0.8 over the seven-year period. We find that 29 of the 35 (82.86%) JAIR papers not found by the system are actually downloadable via a link from the authors' homepages to the publisher's website. We miss these papers simply because we restrict our crawler from going beyond the domain of the authors' homepages. We believe that a more sophisticated domain restriction for the crawler could easily be employed to achieve an even higher recall level.
Figure 8. Overall Recall Level, 1998 - 2004 (recall level per year for WebDB and JAIR).
We calculate the recall level for the documents published in
WebDB and JAIR yet missing from CiteSeer's collection (see
Figure 9). In this case, S(i) is the set of missing documents
downloaded by the crawler, and T is the set of the papers not
indexed by CiteSeer and missing from the collection. On average,
the recall level reaches 0.78 for WebDB and 0.72 for JAIR. In particular, WebDB's recall level increases steadily, reaching 1.0 for the last three years. This demonstrates that it is practical to harvest the missing documents for a given publishing venue.
Figure 9. Recall Level for the Missing Documents (recall level per year for WebDB and JAIR).
The trends shown in Figures 8 and 9 indicate that a rising number of academic papers have been put online, especially in and after the year 2000. Interestingly, conference and workshop authors seem to favor putting their publications on their homepages, while journal authors do not. Due to the limited size of our sample, we consider this an open question to be answered with more data across multiple venues.
4.2 Crawler Comparison: BF Crawler
In order to further evaluate the performance of our system, we
also compare our work to other crawling approaches. First we
crawled three conference websites using our system and a
breadth-first (BF) crawler. Figures 10, 11 and 12 show the results
of crawls on different conference websites. The BF crawls are
shown by the dashed line while the results of the focused crawler
are shown by the solid line on the figures. The horizontal axis
indicates the number of pages crawled and the vertical axis
represents the number of research documents found by searching
those pages. The number of documents found is a cumulative sum
of all PDF, PS and GZ files found on those sites. Since they may
contain duplicate files or the same content in different file types,
the numbers shown do not indicate unique papers. The number of
pages crawled does not include academic papers. The same crawl
restrictions applied to both the crawlers.
Figure 10. ACL Conference Crawl (number of documents found vs. pages crawled for FC and BF).
Figure 10 shows the crawls done on parts of the Association for Computational Linguistics (ACL) conference website. The total number of pages crawled on this site was fewer than 30. Both curves overlap, which indicates that there is virtually no difference between the document detection rates of the BF crawler and our focused crawler. For such a small website, both crawlers detect the same number of documents after crawling the same number of pages.
Figure 11. TREC Conference Crawl (number of documents found vs. pages crawled for FC and BF).
Figure 11 shows the crawls done on the Text Retrieval Conference (TREC) pages. Here the total number of pages crawled is about 1000 (only the first half of the crawl is shown in the graph). Both crawlers start detecting documents at the same rate. After around 1393 documents have been detected (35 pages crawled), the document detection rate of the focused crawler becomes slightly better than that of the BF crawler. Although the difference is not very significant, the focused crawler does detect the research documents slightly earlier in the crawl than the BF crawler. The BF crawler detects the same number of documents (4800) as the focused crawler, but only after crawling 20-30 pages more than the focused crawler. The total number of documents found by both crawlers is around 6000.
Figure 12. VLDB Conference Crawl (number of documents found vs. pages crawled for FC and BF).
The crawls performed on the Very Large Data Bases (VLDB) conference pages, shown in Figure 12, indicate that the focused crawler detects the documents much earlier in the crawl. Here the total number of pages crawled is about 3500. Approximately 28% (1000 out of 3500) of the documents are located by both crawlers after crawling around 8.5% (300 out of 3500) of the domain. At this point the focused crawler continues to locate more documents, while the BF crawler does not uncover any new documents until 28% (1000 out of 3500) of the total crawl. 85% (3000 out of 3500) of the documents are located by the focused crawler after completing just 33% (1189 out of 3500) of the total crawl, while the breadth-first crawler locates the same number of documents only after completing 50% (1781 out of 3500) of the total crawl. Towards the end of the crawl the breadth-first crawler detects more papers than the focused crawler, and it takes the focused crawler around 1000 more pages of crawling to make up the difference. This seems to be due to the lack of keywords associated with the links that eventually lead to the documents; the focused crawler evaluates other pages that have higher priority values before eventually discovering the remaining documents.
The behavior of the BF crawler is consistent across all three crawls. Most of the documents located were at crawl depths 2, 3, 4 and 5, and the BF crawler detects them only after completing the search of the previous crawl depths. Since the focused crawler prioritizes the links for crawling, higher-priority links at greater depths are crawled before lower-priority links at shallower depths.
The above experiments indicate that the document harvest rate is almost the same for smaller websites; the difference becomes apparent when the website being crawled is large, where the focused crawler detects the documents much earlier in the crawl than the BF crawler. Since the crawls are not terminated early for the focused crawler, the number of documents found and their relevance are the same for both crawlers. Therefore, as the size of the websites being crawled increases, the focused crawler detects more documents earlier in the crawls than the BF crawler.
We also assess the crawler's capability of harvesting academic publications in a more general sense, not limited to a specific venue. We have manually examined the first 500 PDF/PostScript documents found by the two crawlers and classified them into academic publications, which are desirable (papers published in conferences and journals, technical reports, degree theses, etc.), and non-publication documents, which are considered noise for a publication collection (course material, presentation slides, project schedules, etc.). The percentage of each category is compared side-by-side in Figure 13. Our crawler outperforms its breadth-first counterpart by collecting much less of this noise.
Figure 13. Composition of the First 500 PDF/PS Documents (academic publications vs. non-publication documents for BF and FC).
4.3 Crawler Comparison: Nutch Crawler
We compare the performance of our system with Nutch
(http://www.nutch.org/docs/en/), an open source Web crawler and
search engine built upon Lucene. In our experiment, we run the
Nutch crawler on the official websites of WebDB and JAIR, and
identify those papers published between 1998 and 2004 from the
downloaded documents. We then compare the number of papers harvested by Nutch and by the FC crawler (see Figure 14 for details). Results show that, guided by certain heuristics, crawling authors' homepages can achieve almost the same recall level as crawling publishers' websites.
Figure 14. Comparison between Nutch and Focused Crawler (number of documents harvested for WebDB and JAIR, against the total number of publications).
Figure 15 shows the progress of the crawls conducted by both the Focused Crawler and the Nutch crawler on the ACL conference website. The documents found are PDF and PS files only. The focused crawler starts discovering documents earlier in the crawl and continues gradually. Nutch, on the other hand, discovers most of the documents only after crawling around 84% (22 out of 26) of the website.
Figure 15. Crawling ACL Conference Websites (number of documents found vs. pages crawled for Nutch and FC).
Documents found during the ACL conference crawl are classified into two categories: relevant (i.e., academic publications) and non-relevant (non-publication). Figure 16 shows the number of documents in each category. Note that determining documents' relevancy is an offline process. Here R indicates relevant and NR indicates non-relevant documents.
Figure 16. Relevancy of the ACL Conference Crawl (FC: 1588 relevant, 0 non-relevant; Nutch: 1404 relevant, 0 non-relevant).
Figure 16 indicates that all the documents (PDF and PS) found by both crawlers are academic publications (thus NR = 0). Moreover, the 184 documents that Nutch failed to detect are all relevant research publications.
The same comparison is also conducted by crawling the official WebDB conference websites. Figure 17 shows that the Focused Crawler starts detecting desired documents at an earlier stage than the Nutch crawler. Yet, due to the small number of pages crawled, a rigorous comparison cannot be made in this case. Figure 18 shows that the focused crawler locates two more academic publications than the Nutch crawler, both of which are marked as relevant documents.
CONCLUSION AND FUTURE WORK
We have shown the feasibility of using authors' homepages as
alternative online resources to harvest the academic papers
missing from a collection of digital libraries, as well as the
techniques to maximize the crawler's performance in doing so.
We have designed and implemented a heuristic-based system
which utilizes document metadata to accurately locate authors'
homepages and performs a focused crawling to quickly navigate
to the desired publications. Evaluation has been conducted using a
large dataset collected from several publishing venues in the
Computer Science domain, and detailed results are presented and
discussed.
Figure 17. Crawling WebDB Conference Websites (number of documents found vs. pages crawled for Nutch and FC).
Figure 18. Relevancy of the WebDB Conference Crawl (relevant and non-relevant document counts for FC and Nutch).
For the academic venues investigated in this study, we are able to fill in many of the documents missing from the CiteSeer digital library. The proposed focused crawling technique efficiently locates desired publications on authors' homepages as well as on conference websites. The Homepage Aggregator detects homepages well, and the Focused Crawler outperforms the baseline crawler on a number of measures.
Future work includes a more rigorous disambiguation scheme for the Homepage Aggregator and a more sophisticated weighting scheme for the Focused Crawler. In addition, we are now developing a training process for the crawler to learn the URL patterns of alternative resources other than author homepages, such as institutional archives. Automating the cycle of crawling, log analysis and heuristics generation can also help search-engine-based digital libraries scale and significantly reduce costs. The actual URLs of the Web pages could further be used to assist in priority assignment instead of relying only on the anchor text of the links. A comparison of this approach with techniques other than a breadth-first crawl is currently underway. Furthermore, we plan to evaluate the validity of this approach by expanding our experiments to disciplines other than the Computer Science domain. We believe our study and its follow-ups will shed light on the question of finding missing papers for our digital library, or "what's there and what's not".
ACKNOWLEDGEMENTS
We gratefully acknowledge P. Mitra and the anonymous
reviewers for their comments, I. Councill and P. Teregowda for
their work on the CiteSeer metadata, and E. Maldonado and D.
Hellar for the crawl list. This work is partially supported by
Microsoft.
REFERENCES
[1] De Bra, P., Houben, G., Kornatzky, Y., and Post, R.
Information Retrieval in Distributed Hypertexts. In
Proceedings of the 4th RIAO (Computer-Assisted Information
Retrieval) Conference, pp. 481-491, 1994.
[2] Cho J., Garcia-Molina, H., and Page, L. Efficient Crawling
Through URL Ordering. In Proceedings of the 7th World Wide
Web Conference, Brisbane, Australia, pp. 161-172. April 1998.
[3] Chakrabarti, S., Van den Berg, M., and Dom, B. Focused
Crawling: A New Approach to Topic-Specific Web Resource
Discovery. In Proceedings of the 8th International WWW
Conference, pp. 545-562, Toronto, Canada, May 1999.
[4] Giles, C. L. and Councill, I. G. Who gets acknowledged:
Measuring scientific contributions through automatic
acknowledgement indexing. In Proceedings of the National
Academy of Sciences 101(51) pp. 17599-17604, Dec. 21, 2004.
[5] Najork, M. and Wiener, J. L. Breadth-First Search Crawling
Yields High-Quality Pages. In Proceedings of the 10th
International World Wide Web Conference, pp. 114-118, 2001.
[6] Page, L., Brin, S., Motwani, R., and Winograd, T. The
pagerank citation ranking: Bringing order to the web.
Technical report, Stanford University Database Group, 1998.
Available at http://dbpubs.stanford.edu: 8090/pub/1999-66
[7] Menczer, F., Pant, G., Ruiz, M., and Srinivasan, P. Evaluating
Topic-Driven Web Crawlers. In Proceedings of the 2001
Annual Conference of the Association of Computing
Machinery, Special Interest Group in Information Retrieval,
241-249. New Orleans, September 2001.
[8] Haveliwala, T. H. Topic-Sensitive PageRank. In Proceedings
of the 11th International World Wide Web Conference, pp.
517-526. Honolulu, Hawaii, USA. May 2002.
[9] Mukherjea, S. WTMS: a system for collecting and analyzing
topic-specific Web information. Computer Networks 33(1-6):
457-471, 2000.
[10] Diligenti, M., Coetzee, F.M., Lawrence, S., Giles, C. L., and
Gori, M. Focused Crawling Using Context Graphs. In
Proceedings of the 26th International Conference on Very
Large Data Bases, pp. 527-534, 2000.
[11] Aggarwal, C. C., Al-Garawi, F., and Yu, P. S. Intelligent
Crawling on the World Wide Web with Arbitary Predicates. In
Proceedings of the Tenth International Conference on World
Wide Web, pp. 96-105, 2001.
[12] Aggarwal, C. C. On Learning Strategies for Topic Specific
Web Crawling. Next Generation Data Mining Applications,
January 2004.
[13] Pant, G., Tsjoutsiouliklis, K., Johnson, J., and Giles, C. L.
Panorama: Extending Digital Libraries with Topical Crawlers.
In Proceedings of the 2004 Joint ACM/IEEE Conference on
Digital Libraries, pp. 142-150, 2004.
[14] Menczer, F., Pant, G., and Srinivasan, P. Topical Web
Crawlers: Evaluating Adaptive Algorithms. ACM TOIT 4(4):
378-419, 2004.
[15] Pant, G., Srinivasan, P., and Menczer, F. Crawling the Web. In
M. Levene and A. Poulovassilis, eds.: Web Dynamics, Springer,
2004.
[16] Hoff, G. and Mundhenk, M. Finding scientific papers with
homepagesearch and MOPS. In Proceedings of the Nineteenth
Annual International Conference of Computer Documentation,
Communicating in the New Millennium, pp. 201-207. October
21-24, 2001, Santa Fe, New Mexico, USA.
[17] On, B. and Lee, D. PaSE: Locating Online Copy of Scientific
Documents Effectively. In Proceedings of the 7th International
Conference of Asian Digital Libraries (ICADL), pp. 408-418.
Shanghai, China, December 2004.
[18] Shakes, J., Langheinrich, M., and Etzioni, O. Dynamic
Reference Sifting: a Case Study in the Homepage Domain. In
Proceedings of the Sixth International World Wide Web
Conference, pp. 189-200, 1997.
[19] Xi, W. and Fox, E. A. Machine Learning Approach for
Homepage Finding Task. In Proceedings of the Tenth Text
REtrieval Conference (TREC 2001), pp. 686-698, 2001.
[20] Anh, V. N. and Moffat, A. Homepage Finding and Topic
Distillation using a Common Retrieval Strategy. In
Proceedings of the Eleventh Text REtrieval Conference (TREC
2002), 2002.
[21] Ogilvie, P. and Callan, J. Combining Structural Information
and the Use of Priors in Mixed Named-Page and Homepage
Finding. In Proceedings of the Twelfth Text REtrieval
Conference (TREC 2003), pp. 177-184, 2003.
[22] Sundaresan, N., Yi, J., and Huang, A. W. Using Metadata to
Enhance a Web Information Gathering System. In Proceedings
of the Third International Workshop on the Web and
Databases (WebDB 2000), pp. 11-16, 2000.
[23] Flesca, S., Furfaro, F., and Greco, S. Weighted Path Queries on
Web Data. In Proceedings of the Fourth International
Workshop on the Web and Databases (WebDB 2001), pp. 7-12,
2001.
[24] Ruiz, A., López-de-Teruel, P. E., and Garrido, M. C.
Probabilistic Inference from Arbitrary Uncertainty using
Mixtures of Factorized Generalized Gaussians. Journal of
Artificial Intelligence Research (JAIR), Volume 9, pp. 167-217,
1998.
[25] Russell, G., Neumüller, M., and Connor, R. C. H. TypEx: A
Type Based Approach to XML Stream Querying. In
Proceedings of the Sixth International Workshop on the Web
and Databases (WebDB 2003), pp. 55-60, 2003.
| Digital libraries;CiteSeer;focused crawler;DBLP;harvesting;ACM |
22 | A Two-Phase Sampling Technique for Information Extraction from Hidden Web Databases | Hidden Web databases maintain a collection of specialised documents, which are dynamically generated in response to users' queries. However, the documents are generated by Web page templates, which contain information that is irrelevant to queries. This paper presents a Two-Phase Sampling (2PS) technique that detects templates and extracts query-related information from the sampled documents of a database. In the first phase, 2PS queries databases with terms contained in their search interface pages and the subsequently sampled documents. This process retrieves a required number of documents. In the second phase, 2PS detects Web page templates in the sampled documents in order to extract information relevant to queries. We test 2PS on a number of real-world Hidden Web databases. Experimental results demonstrate that 2PS effectively eliminates irrelevant information contained in Web page templates and generates terms and frequencies with improved accuracy. | INTRODUCTION
An increasing number of databases on the Web maintain a
collection of documents such as archives, user manuals or news
articles. These databases dynamically generate documents in
response to users' queries and are referred to as Hidden Web
databases [5]. As the number of databases proliferates, it has
become prohibitive for specialised search services (such as
search.com) to evaluate databases individually in order to answer
users' queries.
Current techniques such as database selection and categorisation
have been employed to enhance the effectiveness of information
retrieval from databases [2, 5, 10, 11, 15]. In the domain of the
Hidden Web, knowledge about the contents of databases is often
unavailable. Existing approaches such as in [2, 10, 15] acquire
knowledge through sampling documents from databases. For
instance, query-based sampling [2] queries databases with terms
that are randomly selected from those contained in the sampled
documents. The techniques in [10, 15] sample databases with
terms obtained from Web logs to retrieve additional topic terms.
A major issue associated with existing techniques is that they also
extract information irrelevant to queries. That is, information
extracted is often found in Web page templates, which contain
navigation panels, search interfaces and advertisements.
Consequently, the accuracy of terms and frequencies generated
from sampled documents has been reduced.
In addition, approximate string matching techniques are adopted
by [13] to extract information from Web pages, but this approach
is limited to textual contents only. Alternatively, the approaches
proposed in [3, 4] analyse Web pages in tree-like structures.
However, such an approach requires Web pages with well-conformed
HTML tag trees. Furthermore, [3] discovers
dynamically generated objects from Web pages, which are
clustered into groups of similarly structured pages based on a set of
pre-defined templates, such as exception page templates and
result page templates.
In this paper, we propose a sampling and extraction technique,
which is referred to as Two-Phase Sampling (2PS). 2PS aims to
extract information relevant to queries in order to acquire
information contents of underlying databases. Our technique is
applied in two phases. First, it randomly selects a term from those
found in the search interface pages of a database to initiate the
process of sampling documents. Subsequently, 2PS queries the
database with terms randomly selected from those contained in
the sampled documents. Second, 2PS detects Web page templates
and extracts query-related information from which terms and
frequencies are generated to summarise the database contents.
Our approach utilises information contained in search interface
pages of a database to initiate the sampling process. This differs
from current sampling techniques such as query-based sampling,
which performs an initial query with a frequently used term.
Furthermore, 2PS extracts terms that are relevant to queries thus
generating statistics (i.e., terms and frequencies) that represent
database contents with improved accuracy. By contrast, the
approaches in [2, 10, 15] extract all terms from sampled
documents, including those contained in Web page templates.
Consequently, information that is irrelevant to queries is also
extracted.
Figure 1. The Two-Phase Sampling (2PS) technique.
2PS is implemented as a prototype system and tested on a number
of real-world Hidden Web databases, which contain computer
manuals, healthcare archives and news articles. Experimental
results show that our technique effectively detects Web page
templates and generates terms and frequencies (from sampled
documents) that are relevant to the queries.
The remainder of the paper is organised as follows. Section 2
introduces current approaches to the discovery of information
contents of Hidden Web databases. Related work on the
information extraction from Web pages or dynamically generated
documents is also discussed. Section 3 describes the proposed
2PS technique. Section 4 presents experimental results. Section 5
concludes the paper.
RELATED WORK
A major area of current research into the information retrieval of
Hidden Web databases focuses on the automatic discovery of
information contents of databases, in order to facilitate their
selection or categorisation. For instance, the technique proposed
in [6] analyses the hyperlink structures of databases in order to
facilitate the search for databases that are similar in content. The
approach adopted by [10, 15] examines the textual contents of
search interface pages maintained by data sources to gather
information about database contents.
A different approach is to retrieve actual documents to acquire
such information. However, in the domain of Hidden Web
databases, it is difficult to obtain all documents from a database.
Therefore, a number of research studies [2, 10, 15] obtain
information by retrieving a set of documents through sampling.
For instance, query-based sampling [2] queries databases with
terms that are randomly selected from those contained in the
sampled documents. The techniques in [10, 15] sample databases
with terms extracted from Web logs to obtain additional topic
terms. These techniques generate terms and frequencies from
sampled documents, which are referred to as Language Models
[2], Textual Models [10, 15] or Centroids [11].
A key issue associated with the aforementioned sampling
techniques is that they extract information that is often irrelevant
to queries, since information contained in Web page templates
such as navigation panels, search interfaces and advertisements is
also extracted. For example, a language model generated from the
sampled documents of the Combined Health Information
Database (CHID) contains terms (such as `author' and `format')
with high frequencies. These terms are not relevant to queries but
are used for descriptive purposes. Consequently, the accuracy of
terms and frequencies generated from sampled documents has
been reduced. The use of additional stop-word lists has been
considered in [2] to eliminate irrelevant terms - but it is
maintained that such a technique can be difficult to apply in
practice.
Existing techniques in information extraction from Web pages are
of varying degrees of complexity. For instance, approximate
string matching techniques are adopted by [13] to extract texts
that are different. This approach is limited to finding textual
similarities and differences. The approaches proposed in [3, 4]
analyse textual contents and tag structures in order to extract data
from Web pages. However, such an approach requires Web pages
that are produced with well-conformed HTML tag-trees.
Computation is also needed to convert and analyse Web pages in
a tree-like structure. Moreover, [3] identifies Web page templates
based on a number of pre-defined templates, such as exception
page templates and result page templates.
Our technique examines Web documents based on textual
contents and the neighbouring tag structures rather than analysing
their contents in a tree-like structure. We also detect information
contained in different templates through which documents are
generated. Therefore, it is not restricted to a pre-defined set of
page templates.
Furthermore, we focus on databases that contain documents such
as archives and news articles. A distinct characteristic of
documents found in such a domain is that the content of a
document is often accompanied by other information for
supplementary or navigation purposes. The proposed 2PS
technique detects and eliminates information contained in
templates in order to extract the content of a document. This
differs from the approaches in [1, 4], which attempt to extract a
set of data from Web pages presented in a particular pattern. For
example, the Web pages of a bookstore Web site contain
information about authors followed by their associated list of
publications. However, in the domain of document databases,
information contained in dynamically generated Web pages is
often presented in a structured fashion but irrelevant to queries.
Other research studies [9, 8, 12] are specifically associated with
the extraction of data from query forms in order to further the
retrieval of information from the underlying databases.
TWO-PHASE SAMPLING
This section presents the proposed technique for extracting
information from Hidden Web document databases in two phases,
which we refer to as Two-Phase Sampling (2PS). Figure 1 depicts
the process of sampling a database and extracting query-related
information from the sampled documents. In phase one, 2PS
obtains randomly sampled documents. In phase two, it detects
Web page templates. This extracts information relevant to the
queries and then generates terms and frequencies to summarise
the database content. The two phases are detailed in section 3.1
and 3.2.
3.1 Phase One: Document Sampling
In the first phase we initiate the process of sampling documents
from a database with a randomly selected term from those
contained in the search interface pages of the database. This
retrieves top N documents where N represents the number of
documents that are the most relevant to the query. A subsequent
query term is then randomly selected from terms extracted from
the sampled documents. This process is repeated until a required
number of documents are sampled. The sampled documents are
stored locally for further analysis.
Figure 2 illustrates the algorithm that obtains a number of randomly sampled documents. t_q denotes a term extracted from the search interface pages of a database, D. qt_p represents a query term selected from a collection of terms, Q, qt_p ∈ Q, 1 ≤ p ≤ m, where m is the number of distinct terms extracted from the search interface pages and the documents that have been sampled. R represents the set of documents randomly sampled from D. t_r is a term extracted from d_i. d_i represents a sampled document from D, d_i ∈ D, 1 ≤ i ≤ n, where n is the number of documents to sample.
Figure 2. The algorithm for sampling documents from a
database.
2PS differs from query-based sampling in terms of selecting an
initial query. The latter selects an initial term from a list of
frequently used terms. 2PS initiates the sampling process with a
term randomly selected from those contained in the search
interface pages of the database. This utilises a source of
information that is closely related to its content. Moreover, 2PS
analyses the sampled documents in the second phase in order to
extract query-related information. By contrast, query-based
sampling does not analyse their contents to determine whether
terms are relevant to queries.
3.2 Phase Two: Document Content Extraction
and Summarisation
The documents sampled from the first phase are further analysed
in order to extract information relevant to the queries. This is then
followed by the generation of terms and frequencies to represent
the content of the underlying database. This phase is carried out
through the following processes.
3.2.1 Generate Document Content Representations
The content of each sampled document is converted into a list of
text and tag segments. Tag segments include start tags, end tags
and single tags specified in HyperText Markup Language
(HTML). Text segments are text that resides between two tag
segments. The document content is then represented by text
segments and their neighbouring tag segments, which we refer to
as Text with Neighbouring Adjacent Tag Segments (TNATS). The
neighbouring adjacent tag segments of a text segment are defined
as the list of tag segments that are located immediately before and
after the text segment until another text segment is reached. The
neighbouring tag segments of a text segment describe how the
text segment is structured and its relation to the nearest text
segments. Assume that a document contains n segments. A text segment, txs, is defined as txs = (tx_i, tg-lst_j, tg-lst_k), where tx_i is the textual content of the i-th text segment, 1 ≤ i ≤ n; tg-lst_j represents the p tag segments located before tx_i, and tg-lst_k represents the q tag segments located after tx_i until another text segment is reached. tg-lst_j = (tg_1, ..., tg_p), 1 ≤ j ≤ p, and tg-lst_k = (tg_1, ..., tg_q), 1 ≤ k ≤ q.
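The following is a minimal sketch of how a document could be tokenised into TNATS tuples; the regular-expression tokenisation is a simplification of our own, not the authors' implementation.

import re

TAG = re.compile(r'<[^>]+>')

def tnats(html):
    # Split an HTML document into (text, tags_before, tags_after) tuples.
    # Tags are kept as tokens by capturing them in the split pattern.
    tokens = [t for t in re.split(r'(<[^>]+>)', html) if t and t.strip()]
    segments = []
    pending_tags = []
    for idx, tok in enumerate(tokens):
        if TAG.fullmatch(tok.strip()):
            pending_tags.append(tok.strip())
        else:
            # Tags located after this text run until the next text segment is reached.
            tags_after = []
            for nxt in tokens[idx + 1:]:
                if TAG.fullmatch(nxt.strip()):
                    tags_after.append(nxt.strip())
                else:
                    break
            segments.append((tok.strip(), tuple(pending_tags), tuple(tags_after)))
            pending_tags = []
    return segments

# e.g. tnats('<B><I>Subfile:</I></B> AIDS Education<BR>') yields
# ('Subfile:', ('<B>', '<I>'), ('</I>', '</B>')) and ('AIDS Education', ('</I>', '</B>'), ('<BR>',))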
Algorithm SampleDocument
    Extract t_q from search interface pages of D, Q = t_q
    For i = 1 to n
        Randomly select qt_p from Q
        If (qt_p has not been selected previously)
            Execute the query with qt_p on D
            j = 0
            While j <= N
                If (d_i ∉ R)
                    Retrieve d_i from D
                    Extract t_r from d_i
                    R = d_i
                    Q = t_r
                    Increase j by 1
                End if
            End while
        End if
    End for
Figure 3. A template-generated document from CHID.
Figure 3 shows a template-generated document retrieved from the
CHID database. The source code for this document is given in
Figure 4. For example, text segment, `1. Equipos Mas Seguros:
Si Te Inyectas Drogas.', can be identified by the text (i.e., `1.
Equipos Mas Seguros: Si Te Inyectas Drogas.') and its
neighbouring tag segments. These include the list of tags located
before the text (i.e., </TITLE>, </HEAD>, <BODY>, <HR>,
<H3>, <B> and <I>) and the neighbouring tags located after the
text (i.e., </I>, </B>, </H3>, <I> and <B>). Thus, this segment is
then represented as (`1. Equipos Mas Seguros: Si Te Inyectas
Drogas.', (</TITLE>, </HEAD>, <BODY>, <HR>, <H3>, <B>
,<I>), (</I>, </B>, </H3>, <I>, <B>)). Figure 5 shows the content
representation of the CHID document (given in Figure 3) generated based on TNATS. Given a sampled document, d, with n text segments, the content of d is then represented as Content(d) = {txs_1, ..., txs_n}, where txs_i represents a text segment, 1 ≤ i ≤ n.
Figure 4. The source code for the CHID document.
Figure 5. The content representation of the CHID document
using TNATS.
3.2.2 Detect Templates
In the domain of Hidden Web databases, documents are often
presented to users through one or more templates. Templates are
typically employed in order to describe document contents or to
assist users in navigation. For example, information contained in
the document (as shown in Figure 3) can be classified into the two
following categories:
(i) Template-Generated Information. This includes information
such as navigation panels, search interfaces and
advertisements. In addition, information may be given to
describe the content of a document. Such information is
irrelevant to a user's query. For example, navigation links
(such as `Next Doc' and `Last Doc') and headings (such as `Subfile' and `Format') are found in the document.
(ii) Query-Related Information. This information is retrieved in
response to a user's query, i.e., `1. Equipos Mas Seguros:
Si Te Inyectas Drogas. ...'.
The 2PS technique detects Web page templates employed by
databases to generate documents in order to extract information
that is relevant to queries. Figure 6 describes the algorithm that
detects information contained in Web page templates from n
sampled documents. d_i represents a sampled document from the database D, d_i ∈ D, 1 ≤ i ≤ n. Content(d_i) denotes the content representation of d_i.
...
<HTML><HEAD><TITLE>CHID Document
</TITLE></HEAD>
<BODY>
<HR><H3><B><I> 1. Equipos Mas Seguros: Si Te Inyectas
Drogas.
</I></B></H3>
<I><B>Subfile: </B></I>
AIDS Education<BR>
<I><B>Format (FM): </B></I>
08 - Brochure.
<BR>
...
Algorithm DetectTemplate
    For i = 1 to n
        If T = ∅
            If S = ∅
                S = d_i
            Else if S ≠ ∅
                While l <= s AND T = ∅
                    Compare (Content(d_i), Content(d_l))
                    If Content(d_i) ∩ Content(d_l) ≠ ∅
                        wpt_k = Content(d_i) ∩ Content(d_l)
                        Store wpt_k, T = wpt_k
                        Delete (Content(d_i) ∩ Content(d_l)) from Content(d_i), Content(d_l)
                        G_k = d_i, G_k = d_l
                        Delete d_l from S
                    End if
                End while
                If T = ∅
                    S = d_i
                End if
            End if
        Else if T ≠ ∅
            While k <= r AND d_i ∉ G_k
                Compare (Content(wpt_k), Content(d_i))
                If Content(wpt_k) ∩ Content(d_i) ≠ ∅
                    Delete (Content(wpt_k) ∩ Content(d_i)) from Content(d_i)
                    G_k = d_i
                End if
            End while
            If S ≠ ∅ AND d_i ∉ G_k
                While l <= s AND d_i ∉ G_k
                    Compare (Content(d_i), Content(d_l))
                    If Content(d_i) ∩ Content(d_l) ≠ ∅
                        wpt_k = Content(d_i) ∩ Content(d_l)
                        Store wpt_k, T = wpt_k
                        Delete (Content(d_i) ∩ Content(d_l)) from Content(d_i), Content(d_l)
                        G_k = d_i, G_k = d_l
                        Delete d_l from S
                    End if
                End while
            End if
            If d_i ∉ G_k
                S = d_i
            End if
        End if
    End for
...
`CHID Document', (<HTML>, <HEAD>, <TITLE>),
(</TITLE>, </HEAD>, <BODY>, <HR>, <H3>, <B>,
<I>);
`1. Equipos Mas Seguros: Si Te Inyectas Drogas.',
(</TITLE>, </HEAD>, <BODY>, <HR>, <H3>, <B>,
<I>), (</I>, </B>, </H3>, <I>, <B>);
`Subfile:', (</I>, </B>, </H3>, <I>, <B>), (</B>, </I>
);
`AIDS Education', (</B>, </I>)
, (
<BR>, <I>, <B>);
`Format (FM):', (<BR>, <I>, <B>), (</B>, </I>);
...
Figure 6. The algorithm for detecting and eliminating the
information contained in Web page templates.
Similar to the representation for the contents of sampled documents, the content of a Web page template, wpt, is represented as Content(wpt) = {txs_1, ..., txs_q}, where q is the number of text segments, txs_j, 1 ≤ j ≤ q. T represents the set of templates detected, T = {wpt_1, ..., wpt_r}, where r is the number of distinct templates, wpt_k, 1 ≤ k ≤ r. G_k represents a group of documents generated from wpt_k. Furthermore, S represents the sampled documents from which no templates have yet been detected. Thus, S = {d_1, ..., d_s}, where s is the number of temporarily stored documents, d_l, 1 ≤ l ≤ s.
The process of detecting templates is executed until all sampled
documents are analysed. This results in the identification of one
or more templates. For each template, two or more documents are
assigned to a group associated with the template from which the
documents are generated. Each document contains text segments
that are not found in their respective template. These text
segments are partially related to their queries. In addition to a set
of templates, the content representations of zero or more
documents in which no matched patterns are found are stored.
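As an illustration only (using the TNATS tuples sketched earlier; this is our simplification, not the authors' code), template detection between two documents can be approximated as the set of segments they share, which is then removed from both:

def detect_template(doc_a, doc_b):
    # Approximate the template as the TNATS segments common to two documents.
    # doc_a, doc_b: lists of (text, tags_before, tags_after) tuples.
    template = set(doc_a) & set(doc_b)
    remainder_a = [seg for seg in doc_a if seg not in template]
    remainder_b = [seg for seg in doc_b if seg not in template]
    return template, remainder_a, remainder_b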
3.2.3 Extract Query-Related Information
This process analyses a group of documents associated with each
template from which documents are generated. It further identifies
any repeated patterns from the remaining text segments of the
documents in order to extract query-related information.
We compute the cosine similarity [14] given in (1) to determine the similarities between the text segments of different documents that are associated with the template from which the documents are generated. The textual content of each text segment is represented as a vector of terms with weights. The weight of a term is given by its occurrences in the segment.
In (1), txs_i and txs_j represent two text segments in a document; tw_ik is the weight of term k in txs_i, and tw_jk is the weight of term k in txs_j. This is only applied to text segments with identical adjacent tag segments. Two segments are considered to be similar if their similarity exceeds a threshold value. The threshold value is determined experimentally.
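A minimal sketch of this similarity measure over term-occurrence weights (our own illustration of equation (1), which appears later in the text):

from collections import Counter
from math import sqrt

def cosine(text_a, text_b):
    # Cosine similarity between two text segments using raw term counts as weights.
    wa, wb = Counter(text_a.lower().split()), Counter(text_b.lower().split())
    dot = sum(wa[t] * wb[t] for t in wa.keys() & wb.keys())
    norm = sqrt(sum(v * v for v in wa.values())) * sqrt(sum(v * v for v in wb.values()))
    return dot / norm if norm else 0.0

# In 2PS, segments with identical adjacent tags that are highly similar across documents
# are treated as repeated (template-like) patterns; dissimilar ones are kept as query-related.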
The algorithm that extracts information relevant to queries is illustrated in Figure 7. d_a and d_b represent sampled documents from the database D, d_a, d_b ∈ G_k, where G_k denotes a group of documents associated with the template, wpt_k, from which the documents are generated. tx_m represents the textual content of a text segment, txs_m, contained in d_i, d_i ∈ G_k. tx_n represents the textual content of a text segment, txs_n, contained in d_l, d_l ∈ S. S represents the sampled documents from which no templates are detected.
As a result, the above algorithm extracts text segments with different tag structures. It also extracts text segments that have identical adjacent tag structures but differ significantly in their textual contents. Figure 8 shows the information extracted from the document content (given in Figure 4) as a result of eliminating the information contained in the Web page template.
3.2.4 Generate Content Summary
Frequencies are computed for the terms extracted from randomly
sampled documents. These summarise the information content of
a database, which we refer to as Content Summary.
Algorithm ExtractQueryInfo
    For each (d_a ∈ G_k)
        For each (d_b ∈ G_k), d_a ≠ d_b
            Compare (Content(d_a), Content(d_b))
            If Content(d_a) ∩ Content(d_b) ≠ ∅
                Delete (Content(d_a) ∩ Content(d_b)) from Content(d_a), Content(d_b)
            End if
        End for
    End for
    For each (d_i ∈ G_k)
        Extract tx_m of txs_m from Content(d_i)
    End for
    For each (d_l ∈ S)
        Extract tx_n of txs_n from Content(d_l)
    End for
Figure 7. The algorithm for extracting query-related
information from template-generated documents.
COSINE(txs_i, txs_j) = [ Σ_{k=1..t} tw_ik · tw_jk ] / sqrt( Σ_{k=1..t} (tw_ik)^2 × Σ_{k=1..t} (tw_jk)^2 )    (1)

1. Equipos Mas Seguros: Si Te Inyectas Drogas.
AIDS Education
...
Figure 8. The query-related information extracted from the CHID document.
Previous experiments in [2] demonstrate that a number of
randomly sampled documents (i.e., 300 documents) sufficiently
represent the information content of a database.
In the domain of Hidden Web databases, the inverse document
frequency (idf), used in traditional information retrieval, is not
applicable, since the total number of documents in a database is
often unknown. Therefore, document frequency (df), collection
term frequency (ctf) and average term frequency (avg_tf) initially
used in [2] are applied in this paper. We consider the following
frequencies to compute the content summary of a Hidden Web
database.
Document frequency (df): the number of documents in the
collection of documents sampled that contain term t,
where d is the document and f is the frequency
Collection term frequency (ctf): the occurrence of a term
in the collection of documents sampled, where c is the
collection, t is the term and f is the frequency
Average term frequency (avg_tf): the average frequency
of a term obtained from dividing collection term
frequency by document frequency (i.e., avg_tf = ctf / df)
Table 1. 3 Hidden Web databases used in the experiments

Database     URL                  Subject                Content          Template
Help Site    www.help-site.com    Computer manuals       Homogeneous      Multiple templates
CHID         www.chid.nih.gov     Healthcare articles    Homogeneous      Single template
Wired News   www.wired.com        General news articles  Heterogeneous    Single template
The content summary of a document database is defined as follows. Assume that a Hidden Web database, D, is sampled with N documents. Each sampled document, d, is represented as a vector of terms and their associated weights [14]. Thus d = (w_1, ..., w_m), where w_i is the weight of term t_i and m is the number of distinct terms in d ∈ D, 1 ≤ i ≤ m. Each w_i is computed using the term frequency metric avg_tf (i.e., w_i = ctf_i / df_i). The content summary is then denoted as CS(D), which is generated from the vectors of the sampled documents. Assume that n is the number of distinct terms in all sampled documents. CS(D) is therefore expressed as a vector of terms: CS(D) = {w_1, ..., w_n}, where w_i is computed by adding the weights of t_i in the documents sampled from D and dividing the sum by the number of sampled documents that contain t_i, 1 ≤ i ≤ n.
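A minimal sketch of these statistics, using Python dictionaries as an assumed representation of the tokenised sampled documents (not the authors' code):

from collections import Counter

def content_summary(sampled_docs):
    # Compute df, ctf and avg_tf = ctf/df over tokenised sampled documents.
    # sampled_docs: list of documents, each a list of term strings.
    # Returns {term: avg_tf}, i.e. the CS(D) weights described above.
    df, ctf = Counter(), Counter()
    for doc in sampled_docs:
        counts = Counter(doc)
        for term, freq in counts.items():
            df[term] += 1          # document frequency: documents containing the term
            ctf[term] += freq      # collection term frequency: total occurrences
    return {term: ctf[term] / df[term] for term in ctf}

# Terms can then be ranked by ctf (as in Tables 4 and 5) or weighted by avg_tf in CS(D).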
EXPERIMENTAL RESULTS
This section reports on a number of experiments conducted to
assess the effectiveness of the 2PS technique in terms of: (i)
detecting Web page templates, and (ii) extracting relevant
information from the documents of Hidden Web databases
through sampling. The experimental results are compared with
those from query-based sampling (abbreviated as QS). We
compare 2PS with QS as it is a well-established technique and has
also been widely adopted by other relevant studies [5, 10, 11, 15].
Experiments are carried out on three real-world Hidden Web
document databases including Help Site, CHID and Wired News,
which provide information about user manuals, healthcare
archives and news articles, respectively. Table 1 summarises
these databases in terms of their subjects, contents and templates
employed. For instance, Help Site and CHID contain documents
relating to subjects on computing and healthcare, respectively.
Their information contents are homogeneous in nature. By
contrast, Wired News contains articles that relate to different
subjects of interest.
Where the number of templates is concerned, CHID and Wired News generate documents from one Web page template. Help Site maintains a collection of documents produced by other information sources; consequently, different Web page templates are found in the documents sampled from Help Site.
The experiment conducted using QS initiates the first query to a
database with a frequently used term to obtain a set of sampled
documents. Subsequent query terms are randomly selected from
those contained in the sampled documents. It extracts terms
(including terms contained in Web page templates) and updates
the frequencies after each document is sampled. By contrast, 2PS
initiates the sampling process with a term contained in the search
interface pages of a database. In addition, 2PS analyses the
sampled documents in the second phase in order to extract query-related
information, from which terms and frequencies are
generated.
Experimental results in [2] conclude that QS obtains
approximately 80% of terms from a database, when 300
documents are sampled and top 4 documents are retrieved for
each query. These two parameters are used to obtain results for
our experiments in which terms and frequencies are generated for
QS and 2PS after 300 documents have been sampled. The results
generated from QS provide the baseline for the experiments.
Three sets of samples are obtained for each database and 300
documents are retrieved for each sample. First, we manually
examine each set of sampled documents to obtain the number of
Web page templates used to generate the documents. This is then
compared with the number of templates detected by 2PS. The
detection of Web page templates from the sampled documents is
important as this determines whether irrelevant information is
effectively eliminated.
Next, we compare the number of relevant terms (from top 50
terms) retrieved using 2PS with the number obtained by QS.
Terms are ranked according to their ctf frequencies to determine
their relevancy to the queries. This frequency represents the
occurrences of a term contained in the sampled documents. Ctf
frequencies are used to demonstrate the effectiveness of
extracting query-related information from sampled documents
since the terms extracted from Web page templates are often
ranked with high ctf frequencies.
Table 2. The number of templates employed by databases and the number detected by 2PS

Databases                  Number of templates
                           Employed    Detected
Help Site     Sample 1     17          15
              Sample 2     17          16
              Sample 3     19          17
CHID          Sample 1     1           1
              Sample 2     1           1
              Sample 3     1           1
Wired News    Sample 1     1           1
              Sample 2     1           1
              Sample 3     1           1
Experimental results for QS and 2PS are summarised as follows.
Firstly, Table 2 gives the number of Web page templates
employed by the databases and the number detected by 2PS. It
shows that 2PS effectively identifies the number of templates
found in the sampled documents. However, a small number of
templates are not detected from Help Site. For instance, 2PS does
not detect two of the templates from the first set of sampled
documents, since the two templates are very similar in terms of
content and structure.
Table 3 summarises the number of relevant terms (from top 50
terms ranked according to their ctf frequencies) obtained for the
three databases. These terms are retrieved using 2PS and QS. We
determine the relevancy of a term by examining whether the term
is found in Web page templates. Table 3 gives the number of
retrieved terms that do not appear in Web page templates. The
results show that 2PS obtains more relevant terms. For instance,
in the first set of documents sampled from CHID using 2PS, the
number of relevant terms retrieved is 47. By comparison, the
number of terms obtained for QS is 20.
The results generated from CHID and Wired News demonstrate
that 2PS retrieves more relevant terms, as a large number of terms
contained in the templates have been successfully eliminated from
the top 50 terms. However, the elimination of template terms is
less noticeable for Help Site. Our observation is that template
terms attain high frequencies since the CHID and Wired News
databases generate documents using a single Web page template.
By comparison, a larger number of Web page templates are found
in the documents sampled from Help Site. As a result, terms
contained in the templates do not attain high frequencies as those
found in the templates employed by CHID and Wired News.
Tables 4 and 5 show the top 50 terms, ranked according to their ctf frequencies, retrieved from the first set of sampled documents of the CHID database. Table 4 shows the top 50 terms retrieved for QS, where terms contained in Web page templates are not excluded. As a result, a number of such terms (for example `author', `language' and `format') attain much higher frequencies. By contrast, Table 5 lists the top 50 terms retrieved using 2PS. Our technique eliminates terms such as `author' and `format' and obtains terms such as `treatment', `disease' and `immunodeficiency' at higher ranks.
Table 3. The number of relevant terms retrieved (from top 50 terms) according to ctf frequencies

Databases                  Number of relevant terms
                           QS     2PS
Help Site     Sample 1     46     48
              Sample 2     47     48
              Sample 3     46     48
CHID          Sample 1     20     47
              Sample 2     19     47
              Sample 3     20     47
Wired News    Sample 1     14     42
              Sample 2     10     43
              Sample 3     11     39
CONCLUSION
This paper presents a sampling and extraction technique, 2PS,
which utilises information that is contained in the search interface
pages and documents of a database in the sampling process. This
technique extracts information relevant to queries from the
sampled documents in order to generate terms and frequencies
with improved accuracy. Experimental results demonstrate that
our technique effectively eliminates information contained in
Web page templates, thus attaining terms and frequencies that are
of a higher degree of relevancy. This can also enhance the
effectiveness of categorisation in which such statistics are used to
represent the information contents of underlying databases.
We obtain promising results by applying 2PS in the experiments
on three databases that differ in nature. However, experiments on
a larger number of Hidden Web databases are required in order to
further assess the effectiveness of the proposed technique.
Table 4. Top 50 terms and frequencies ranked according to ctf generated from CHID when QS is applied

Rank  Term          Rank  Term          Rank  Term
1     hiv           18    document      35    lg
2     aids          19    disease       36    ve
3     information   20    published     37    yr
4     health        21    physical      38    ac
5     prevention    22    subfile       39    corporate
6     education     23    audience      40    mj
7     tb            24    update        41    description
8     accession     25    verification  42    www
9     number        26    major         43    cn
10    author        27    pamphlet      44    pd
11    persons       28    chid          45    english
12    language      29    human         46    national
13    sheet         30    date          47    public
14    format        31    abstract      48    immunodeficiency
15    treatment     32    code          49    virus
16    descriptors   33    ab            50    org
17    availability  34    fm
Table 5. Top 50 terms and frequencies ranked according to ctf generated from CHID when 2PS is applied
Rank  Term              Rank  Term            Rank  Term
1     hiv               18    education       35    testing
2     aids              19    virus           36    programs
3     information       20    org             37    services
4     health            21    notes           38    clinical
5     prevention        22    nt              39    people
6     tb                23    cdc             40    hepatitis
7     persons           24    service         41    community
8     sheet             25    box             42    world
9     treatment         26    research        43    listed
10    disease           27    department      44    professionals
11    human             28    positive        45    training
12    pamphlet          29    tuberculosis    46    diseases
13    www               30    control         47    accession
14    http              31    drug            48    network
15    national          32    discusses       49    general
16    public            33    ill             50    std
17    immunodeficiency  34    organizations
| Hidden Web Databases;search interface pages;Information Extraction;hypertext markup languages;hidden web databases;2-phase sampling technique;neighbouring adjacent tag segments;string matching techniques;information extraction;web page templates;Document Sampling;query-based sampling;irrelevant information extraction |
23 | A Unified Approach for Improving QoS and Provider Revenue in 3G Mobile Networks | In this paper, we introduce a unified approach for the adaptive control of 3G mobile networks in order to improve both quality of service (QoS) for mobile subscribers and to increase revenue for service providers. The introduced approach constantly monitors QoS measures as packet loss probability and the current number of active mobile users during operation of the network. Based on the values of the QoS measures just observed, the system parameters of the admission controller and packet scheduler are controlled by the adaptive performance management entity. Considering UMTS, we present performance curves showing that handover failure probability is improved by more than one order of magnitude. Moreover, the packet loss probability can be effectively regulated to a predefined level and provider revenue is significantly increased for all pricing policies. | Introduction
The third generation (3G) of mobile networks is expected
to complete the worldwide globalization process of mobile
communication. Since different parts of the world emphasize
different issues, the global term 3G has regional synonyms
: In the US and Japan, 3G often carries the name International
Mobile Telephony 2000 (IMT2000). In Europe,
3G has become Universal Mobile Telecommunications System
(UMTS) following the ETSI perspective. The European
industrial players have created the 3rd Generation Partnership
Project (3GPP) [1] for the standardization of UMTS.
3G mobile networks provide the foundation for new services
with high-rate data not provided by current second generation
systems [26]. While the standardization of 3G is still ongoing,
the discussion of technical issues beyond 3G has already
started [23,28]. Recently, Aretz et al. reported a vision for
the future of wireless communication systems beyond 3G that
consists of a combination of several optimized access systems
on a common IP-based medium access and core network platform
[5].
Charging and pricing are essential issues for network operations
of 3G mobile networks. A primary target of differentiated
pricing of Internet services is the prevention of system
overload and an optimal resource usage according to different
daytimes and different traffic intensities [12]. Among the
proposed pricing proposals, flat-rate pricing [11] is the most
common mode of payment today for bandwidth services.
Flat-rate pricing is popular because of its minimal accounting
overhead. A flat-rate encourages usage but does not offer
any motivation for users to adjust their demand. Dynamic
pricing models that take the state of the network into account
in the price determination have been proposed as being more
responsive. Usage-based pricing regulates usage by imposing
a fee based on the amount of data actually sent, whereas
congestion-sensitive pricing uses a fee based on the current
state of congestion in the network. Thus, a unified approach
considering both dynamic pricing and controlling quality of
service (i.e., performance management) provides an effective
tool for the operation of 3G mobile networks. However, in
previous work [8,13,19,21,25] the improvement of Quality of
Service (QoS) in 3G mobile networks and the optimization
of mobile service provider revenue has been considered sepa-rately
.
The Quality of Service (QoS) concept and architecture for
UMTS networks specified in [2] provides means for sharing
radio resources among different groups of users according
to their individual QoS demands. Furthermore, the concept
of UMTS management and control functions such as
admission controller and resource manager is roughly outlined
. Das et al. proposed a framework for QoS provisioning
for multimedia services in 3G wireless access networks [8].
They developed an integrated framework by combining various
approaches for call admission control, channel reservation
, bandwidth degradation, and bandwidth compaction.
In [19], we introduced a framework for the adaptive control
of UMTS networks, which utilizes online monitoring of QoS
measures (e.g., handover failure and call blocking probabilities
) in order to adjust system parameters of the admission
controller and the packet scheduler. The presented approach
is based on a lookup table called the Performance Management
Information Base (P-MIB). Entries of the P-MIB have
to be determined using extensive off-line simulation experiments
to determine optimal parameter configuration for the
considered scenarios. Given the entries of the P-MIB, we
showed how to improve QoS for mobile users by periodically
adjusting system parameters. The practical applicability
of this approach is limited if the P-MIB comprises many entries (i.e., many scenarios have to be considered) because of
the high computational effort for determining these entries by
simulation.
This paper introduces a unified approach for the adaptive
performance management for 3G mobile networks. As the
main result of the paper, the introduced approach is based on
a mathematical framework for the proposed update schemes
rather than a lookup table. As a consequence, the adaptive
control mechanism can be adjusted in an intuitive way and
optimal system parameter configuration can efficiently be determined
. We effectively utilize adaptive performance management
for improving not only QoS for mobile users but also for increasing the revenue earned by service providers. As in [19],
controlled system parameters comprise queueing weights for
packet scheduling, a threshold value of the access queue for
admission of non real-time traffic, and a portion of the overall
available bandwidth reserved for handover calls.
Beyond
[19], we propose a scheme for adjusting the queueing
weights for both improving QoS for higher priority users that
suffer from a high population of users with lower priority and
for increasing the revenue earned by the service provider. For
the analysis of the update strategy of the queuing weights, we
consider a usage-based and a usage-/throughput-based pricing
policy according to [11,12,21]. Furthermore, we introduce
a hybrid pricing policy combining the notion of flat-rate
and a usage-based pricing according to current policies
of GSM networks. Performance curves derived by simulation
evidently illustrate the gain of the unified approach for adaptive
performance management. In fact, for UMTS networks,
simulation results show that handover failure probability can
be improved by more than one order of magnitude. Moreover,
packet loss probability can be effectively regulated to a predefined
level and the provider revenue is significantly increased
for all considered pricing policies.
The paper is organized as follows. Section 2 introduces the
unified approach for adaptive performance management and
describes its embedding in the system architecture of 3G mobile
networks. Section 3 introduces strategies for controlling
the parameters of an admission controller in order to improve
QoS. Section 4 describes the parameter control of a packet
scheduler for the combined improvement of both QoS and
provider revenue. In section 5, we present simulation results
that illustrate the benefit of employing the proposed approach
for adaptive performance management. Finally, concluding
remarks are given.
2. Adaptive performance management for 3G mobile networks
This section introduces the unified approach for regularly adjusting
system parameters to changing traffic load, packet arrival
pattern or population of users, etc. We consider a cellular
mobile network in which a different transceiver station serves
each cell. The purpose of the transceiver station is the modulation of carrier frequencies and demodulation of signals.
Figure 1. System architecture for adaptive performance management.
Furthermore
, a base station controller (BSC) is considered that is
responsible for a cluster of cells, i.e., several transceiver stations
. The BSC manages the radio resources, i.e., schedules
data packets, and controls handovers inside the cell cluster as
well as handovers towards and from neighboring cell clusters.
To improve QoS for mobile users as well as to increase
revenue earned by service providers, an entity for Adaptive
Performance Management (APM) is included in a BSC. Furthermore
, a BSC has to be extended by an online performance
monitoring component that derives QoS measures in a certain
time window (e.g., handover failure probabilities of mobile
users or packet loss probabilities). These QoS measures
form a system pattern that is submitted in fixed time intervals
(i.e., a control period) to the APM entity, which subsequently
updates corresponding system parameters (i.e., parameters of
traffic controlling components like the admission controller
and packet scheduler). Thus, the proposed approach closes
the loop between network operation and network control. Figure
1 shows the system architecture for performance management
embedded in a BSC.
2.1.1. Online performance monitoring
System parameters of a BSC can be effectively updated by
monitoring QoS measures, which are immediately affected by
these parameters. A current value for a QoS measure is determined
online based on a set of relevant events corresponding
to this QoS measure (e.g., packet arrivals are relevant events
for computing packet loss probabilities). The online monitoring
of QoS measures is done by a sliding window technique
as introduced in [19]. The width of the sliding window over
time depends on the number of relevant events that have occurred for a QoS measure. Upon arrival of a new
relevant event the sliding window moves in time. At the end
of a control period the QoS measures are derived for each
sliding window (e.g., packet loss probability can be derived
from number of lost packets divided by number of all packet
arrivals in the sliding window). These QoS measures and the
number of events occurred in the last control period form the
system pattern that is transferred to the adaptive performance
management entity (see figure 1).
Note that an accurate online monitoring of QoS measures
requires a specific width for the sliding window. A certain
number of events representing the history of the QoS measure
have to be considered to obtain a meaningful estimate. On the other hand, a large sliding window prevents the APM entity from reacting quickly to changing traffic conditions.
A bigger sliding window contains more history and, thus,
more events have to be collected to cause a significant change
in the online monitored QoS measure. This tradeoff between
accurate online monitoring and fast reaction of the APM to
changing traffic conditions has to be studied carefully in several
experiments to get the optimal width of the sliding window
for each QoS measure.
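For concreteness, the sliding-window bookkeeping described above can be summarized in a short Python sketch. This is an illustration only, not the BSC implementation; treating a relevant event as a delivered/lost packet flag and exposing the control-period boundary as an explicit method call are assumptions made for the example.

```python
from collections import deque

class SlidingWindowMonitor:
    """Minimal sketch of the online monitoring of one QoS measure
    (here: packet loss probability) over the last `window_size` relevant events."""

    def __init__(self, window_size):
        self.window = deque(maxlen=window_size)  # 1 = lost packet, 0 = delivered packet
        self.events_in_period = 0                # relevant events in the current control period

    def record(self, lost):
        """Record one relevant event; the window slides automatically."""
        self.window.append(1 if lost else 0)
        self.events_in_period += 1

    def end_of_control_period(self):
        """Return the pair (QoS measure, number of relevant events) that enters
        the system pattern, and reset the per-period event counter."""
        plp = sum(self.window) / len(self.window) if self.window else 0.0
        n, self.events_in_period = self.events_in_period, 0
        return plp, n
```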
2.1.2. Adaptive performance management
Whenever a system pattern S = {(P_1, n_1), ..., (P_m, n_m)}, consisting of online monitored QoS measures P_1, ..., P_m and the numbers of relevant events n_1, ..., n_m that occurred in the last control period, is transmitted to the APM, an update of the system parameters can be performed. In general, an update of a system parameter π is made according to a function f depending on a subset of the QoS measures P_1, ..., P_m and the previous value π^(old) of the system parameter. Let P^(1), ..., P^(k), k ≤ m, be the QoS measures corresponding to system parameter π; then the update is made if a certain minimum number n(π) of relevant events occurred in the last control period. That is:

π^(new) = f(P^(1), ..., P^(k), π^(old)),   if min{n^(1), ..., n^(k)} ≥ n(π).   (1)
We classify update functions into relative functions, which perform a parameter update relative to the old parameter value, and absolute functions, which set the new parameter value independent of the old value, i.e., f is independent of π^(old) in (1). With relative update functions strong fluctuations of
the corresponding system parameter in one update step can be
avoided. In section 3, we study a special class of relative update
functions in order to set the parameters of an admission
controller. Furthermore, we develop in section 4 an absolute
update function for adjusting the weights of a weighted fair
queueing packet scheduler.
2.2. Economics and pricing policies in 3G mobile networks
There are multiple requirements, which should be fulfilled
for any viable pricing mechanism in multi-service class data
communication networks [12]. A primary target of differentiated
pricing of Internet services is the prevention of system
overload and an optimal resource usage according to different
daytimes and different traffic intensities. Furthermore, the
pricing scheme should be implemented in a completely decentralized
manner and there should be multiple priorities in
order to take into account the different QoS required by different
applications and users.
In general, pricing policies can be partitioned into usage-based
(pay-as-you-go) pricing, flat-rate (all-you-can-eat)
pricing, and dynamic pricing. In usage-based pricing policies
a user is charged according to a connection time or traffic
volume. Whereas connection based calls (e.g., in GSM) are
charged by connection time, packet-switched services (e.g., in
UMTS) are charged based on the transferred data volume. Dynamic
pricing models take into account the state of the mobile radio
network for determining the current price of a service.
Congestion-sensitive pricing as a particular dynamic pricing
model has been shown to be more responsive. MacKie-Mason
and Varian introduced the concept of congestion-sensitive
pricing in their smart market scheme [21]. Under this model,
the actual price for each packet is determined based on the
current state of network congestion. In [25], Rao and Petersen
discussed the optimal pricing of priority services. Analogously
to the smart market approach, Gupta et al. presented
a pricing scheme that uses priorities on the packet-level [13].
They proposed to differentiate Internet traffic according to delay
and loss requirements.
For the analysis of the update strategy of the queuing
weights, we consider in section 4 a usage-based and a usage-/
throughput-based pricing policy according to [11,12,21]. Furthermore
, we introduce a hybrid pricing policy combining the
notion of flat-rate and a usage-based pricing according to current
policies of GSM networks.
3. Strategies for improving Quality of Service
The proposed approach distinguishes three different types
of services: circuit-switched services, packet-switched real-time
services (RT), and packet-switched non real-time services
(NRT). Typically, circuit-switched services are voice
calls from a GSM mobile station. As proposed by 3GPP, RT
services belong to the conversational and streaming classes
and NRT services fall into the interactive and background
classes [2]. The bandwidth available in a cell must be shared
by calls of these different service classes and the different service
requirements have to be met. Before a mobile session
begins, the user needs to specify its traffic characteristics and
desired performance requirements by a QoS profile. Then, an
admission controller decides to accept or reject the user's request
based on the QoS profile and the current network state
as, e.g., given by queueing length. The purpose of the admission
controller is to guarantee the QoS requirements of the
user who requested admission while not violating the QoS
profiles of already admitted users. The call admission criteria
will be different for each service class. The QoS profile for
RT sessions specifies a guaranteed bandwidth to be provided
for the application in order to meet its QoS requirements. If
the network cannot satisfy the desired bandwidth, the corresponding
admission request is rejected.
Data packets arriving at the BSC are queued until they are
scheduled to be transmitted over the radio link. For NRT sessions
, we consider an admission controller taking into account
free buffer space in the NRT queue [8]. In order to prevent
buffer overflow once a call is admitted, the current queueing
length is set against certain buffer availability threshold
212
C. LINDEMANN ET AL.
of the capacity, denoted by . The admission criteria for
voice and RT handovers are the same as for new voice calls
and RT sessions except that additional handover bandwidth
can be utilized. The analysis of several admission control
schemes for cellular systems presented in [24] showed that
the simple reservation scheme (i.e., reserving bandwidth for
handover calls) performs remarkably well. For simple cellular
networks, the optimal amount of bandwidth reserved for
handover calls can be determined by analytical models [14].
In the model presented here, we denote with b
h
the portion
of the overall bandwidth that is exclusively reserved for handover
calls from neighboring cells. The considered admission
controller does not prioritize NRT handovers over new NRT
sessions. Further details of the admission controller are given
in [19].
3.2. Adjusting the admission controller for QoS improvement
In this section, we show how to utilize equation (1) for setting the parameters θ and b_h of the admission controller in order to reduce packet loss probability and handover failure probability. For updating the system parameters, we split the general function introduced in section 2.1 into separate functions, each depending only on one QoS measure. Let P_1, ..., P_k be the QoS measures corresponding to a system parameter π. Then, equation (1) can be simplified to

π^(new) = [(f_1(P_1) + ... + f_k(P_k)) / k] · π^(old),   L ≤ π^(new) ≤ R.   (2)
The interpretation of (2) is the following. Each update function f_i describes the influence that the QoS measure P_i should have on the system parameter π. Subsequently, the overall update is performed by computing the arithmetic mean of the functions f_i multiplied with the old value of the system parameter. Note that the value π^(new) must be truncated at a certain lower bound L and an upper bound R in order to guarantee that the computation of π^(new) results in a valid value of the system parameter. As basic update function we consider a logarithmic linear function of the form:

f_i(P_i) = m_i · log P_i + b_i.   (3)
The reason for this choice is that we want to consider QoS measures like loss probabilities and failure/blocking probabilities, which are in the range of 10^-5 to 1. Therefore, a logarithmic shape is more suitable. In previous work [19], we have studied update schemes of system parameters of an admission controller and a packet scheduler based on a lookup table. In order to determine the optimal entries of this lookup table, extensive off-line simulation experiments have been conducted. Applying regression statistics to the entries of this lookup table shows that these entries are well represented by functions with logarithmic shape. Thus, besides the motivation of the update functions given here, their choice is to a large extent originated from regression statistics conducted in earlier work. The strength of the influence of f_i on π^(new) can be adjusted with the gradient m_i. The parameter b_i can be determined by the following interpretation: suppose the desired level of the QoS measure P_i is ε_i (e.g., the desired packet loss probability is 0.001). That is, if the online measured value of P_i equals ε_i, the system parameter should not be changed in the update step from the point of view of measure P_i. Therefore, we choose f_i(ε_i) = 1 and from this relation we get b_i = 1 - m_i · log ε_i. Inserting in equation (3) results in the final form of the update function:

f_i(P_i) = m_i · log(P_i / ε_i) + 1.   (4)
For ease of notation, we abbreviate the QoS measures handover
failure probability and new call/session blocking probability
corresponding to voice calls and RT sessions by HFP
and CBP, respectively. The probability of a packet loss due
to buffer overflow in the NRT queue is abbreviated by PLP.
The update strategy according to equations (2)–(4) is justified by its intuitive understanding and the performance results presented in section 5. The suitability of update functions other than (2)–(4) is a subject for further study and out of the scope of this paper.
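The update rule of equations (2)–(4) can be condensed into a few lines. The following sketch is illustrative only: it assumes a base-10 logarithm and passes the gradients m_i, desired levels ε_i, and truncation bounds L and R explicitly, since the paper fixes these values separately for each controlled parameter.

```python
import math

def f(measured, gradient, desired_level):
    """Update function of equation (4): equals 1 when the measured QoS value
    hits its desired level. A base-10 logarithm is assumed here."""
    measured = max(measured, 1e-9)  # guard against log(0) in periods without losses
    return gradient * math.log10(measured / desired_level) + 1.0

def update_parameter(old_value, measures, gradients, desired_levels, lower, upper):
    """Relative parameter update of equation (2): multiply the old value with
    the arithmetic mean of the f_i and truncate to the interval [lower, upper]."""
    factors = [f(p, m, eps) for p, m, eps in zip(measures, gradients, desired_levels)]
    new_value = (sum(factors) / len(factors)) * old_value
    return min(max(new_value, lower), upper)
```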
3.2.1. Update of non real-time queue threshold
Recall that a system parameter update is performed each time
a system pattern arrives at the APM entity and the minimum
number of relevant events corresponding to this system parameter
is reached. Determining the update for the system parameter θ, i.e., determining θ^(new), is performed corresponding to the old value θ^(old) and the actually observed QoS measure PLP. That is:

θ^(new) = f(PLP) · θ^(old),   0.001 ≤ θ^(new) ≤ 1.   (5)

The truncation of θ^(new) at the lower bound guarantees that the value does not accumulate near zero for long periods of low traffic load. The minimum number of relevant events required for an update of θ is counted in data volume rather than in packet arrivals (in the experiments this number is 5 MB).
The setting of the gradient m of the corresponding update
function is derived from a couple of experiments for different
values of the gradient. We found m
= -0.02 to be suitable.
Choosing a suitable value for the gradient is a similar tradeoff
as explained for the sliding window size. A large gradient results
in a fast update of the system parameter within a small number
of update steps, but also introduces higher fluctuations of the
system parameter over time. We demonstrate the speed of the
parameter adjustment in an experiment in section 5. Furthermore
, several experiments for different desired loss values
are presented.
3.2.2. Update of fraction of bandwidth reserved for handover
The update for the system parameter b_h, i.e., determining b_h^(new), is performed based on the old value and the actually observed QoS measures HFP and CBP. That is:

b_h^(new) = [(f_1(HFP) + f_2(CBP)) / 2] · b_h^(old),   0.001 ≤ b_h^(new) ≤ R.   (6)

The value b_h^(new) is truncated at a lower bound of 0.1% and a certain upper bound R which is a fraction of the overall bandwidth available (in the experiments we fix R = 0.7). The truncation at the lower bound is for the same reason as explained above. In fact, for computing b_h^(new) two QoS measures corresponding to the actually observed HFP and CBP are taken into account. A high HFP should increase b_h^(new), but this obviously also increases the CBP because less bandwidth is available for new voice calls and RT sessions. Therefore, both the HFP and the CBP influence the handover bandwidth b_h^(new). In fact, m_1 = -m_2 holds in the update functions f_1 and f_2. From a couple of experiments for different gradients, we found m_1 = 0.08 to be suitable. A common assumption in cellular networks is to prioritize handover calls over new calls. Therefore, the desired handover failure level ε_1 should be smaller than the desired call blocking level ε_2. According to these values, the handover bandwidth is slightly increased if HFP is equal to CBP.
With the presented strategy the parameters of the update
functions can be chosen in an intuitive way and optimal parameter
configuration can efficiently be determined. This is the
major advantage over the approach based on a Performance
Management Information Base introduced in [19] which requires
extensive off-line simulation experiments.
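Using the update_parameter sketch given after equation (4), the two admission-controller updates of equations (5) and (6) reduce to the following calls. The gradients and bounds follow the text; the default desired levels (0.001 for packet loss and handover failure, 0.1 for new call blocking) are the example values used in section 5 and are not mandatory settings.

```python
def update_nrt_threshold(theta_old, plp, desired_plp=0.001):
    # Equation (5): one QoS measure (PLP), gradient m = -0.02, bounds [0.001, 1].
    return update_parameter(theta_old, [plp], [-0.02], [desired_plp], 0.001, 1.0)

def update_handover_bandwidth(bh_old, hfp, cbp, desired_hfp=0.001, desired_cbp=0.1, R=0.7):
    # Equation (6): HFP and CBP enter with opposite gradients m1 = 0.08 and m2 = -0.08;
    # the result is truncated at 0.1% and at the upper bound R of the overall bandwidth.
    return update_parameter(bh_old, [hfp, cbp], [0.08, -0.08],
                            [desired_hfp, desired_cbp], 0.001, R)
```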
4. Strategies for improving both QoS and provider revenue
At a BSC responsible for a cluster of cells, data packets from
various connections arrive and are queued until bandwidth
for transmission is available. In order to distinguish different
priorities for NRT traffic corresponding to the traffic handling
priority defined by 3GPP [2], scheduling algorithms
like Weighted Round Robin (WRR), Weighted Fair Queueing
(WFQ [9]) or Class Based Queueing (CBQ [10]) have to
be implemented. An overview of queueing issues for guaranteed
performance services can be found in [27]. In WFQ,
the weights control the amount of traffic a source may deliver
relative to other active sources during some period of
time. From the scheduling algorithm's point of view, a source
is considered to be active, if it has data queued in the NRT
queue. Let B be the overall bandwidth available for NRT
sessions at time t. For an active source i with weight w_i, the bandwidth B_i that is allocated to this transfer at time t is given by

B_i = (w_i / Σ_j w_j) · B.   (7)
In (7) the sum is taken over all active NRT sources j . A class
based version of WFQ serves packets of each priority class
according to the weights rather than every active source.
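Equation (7) translates directly into code. In the sketch below, the dictionary of active sources and the example call with the basic weights 4/2/1 and the 7,680 kbps overall cell bandwidth of section 5.2.1 are illustrative choices only.

```python
def wfq_bandwidth_shares(active_weights, total_bandwidth_kbps):
    """Equation (7): bandwidth B_i allocated to each currently active NRT source.
    Sources with nothing queued are simply omitted from `active_weights`."""
    weight_sum = sum(active_weights.values())
    return {source: (w / weight_sum) * total_bandwidth_kbps
            for source, w in active_weights.items()}

# Example: one active source per priority class with basic weights 4/2/1.
shares = wfq_bandwidth_shares({"high": 4, "normal": 2, "low": 1}, 7680.0)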
4.2. Adjusting the packet scheduler for QoS and revenue
improvement
This section utilizes the proposed approach for the adaptive
control of the weights of a weighted fair queueing packet
scheduler in order to improve QoS as well as to increase the
revenue. The strategy for adjusting the weights combined
with the introduction of several pricing policies constitutes
a further contribution of the paper. Recall that the revenue
earned by a mobile service provider is determined by the
monthly payment of mobile users as well as by the additional
usage-based pricing after the monthly amount of data volume
is consumed. Note, that the monthly subscription rate is only
relevant for monthly revenue calculations. In this section, we
consider the revenue improvement in a certain small time period
regardless the monthly subscription rates. In section 5,
we briefly discuss monthly revenue calculation. Let P denote
the number of different priority classes, i.e., weights of the
weighted fair queueing scheduler. Define by b_i(t) the transferred data volume at time t of users of priority i and by r_i(t) the payment of users of priority i at time t, i.e., the user pays
for the transferred data volume. We distinguish a pure usage-based
and a usage-/throughput-based pricing policy:
(a) A user of priority i has a fixed payment p_i per kbit during his session, i.e., r_i(t) = p_i.
(b) The payment of a user of priority i consists of a fixed part p_i that is increased proportional to the additional throughput δ_i(t) he received due to the update of the queueing weights, i.e., r_i(t) = p_i · δ_i(t).
According to the proposed data volume based pricing with respect to different priority classes, the revenue function R(t) is given by

R(t) = Σ_{i=1}^{P} r_i(t) · b_i(t).   (8)
The revenue function of equation (8) is utilized in section 5
for evaluating the strategies for revenue improvement presented
below.
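The revenue function of equation (8) combined with pricing policies (a) and (b) can be sketched as follows. The per-class prices and transferred volumes are placeholders, and the truncation δ_i(t) ≥ 1 anticipates the rule stated in section 4.2.2.

```python
def revenue(volumes, prices, throughput_factors=None):
    """Equation (8): R(t) = sum_i r_i(t) * b_i(t) over the P priority classes.
    Policy (a): r_i(t) = p_i              (throughput_factors is None).
    Policy (b): r_i(t) = p_i * delta_i(t) (delta_i truncated at 1)."""
    if throughput_factors is None:
        throughput_factors = [1.0] * len(volumes)
    return sum(p * max(delta, 1.0) * b
               for p, delta, b in zip(prices, throughput_factors, volumes))
```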
4.2.1. Update of WFQ weights
Recall that packets of NRT users arriving at the BSC are first
queued until they are scheduled for transfer by a weighted
fair queueing discipline. Let w_i ≥ w_{i+1}, i = 1, ..., P - 1, be the basic weights of the WFQ scheduler. The update of the queueing weights, i.e., determining w_i^(new), is made according to an absolute update function depending on the basic weights w_i and the current number of NRT sessions belonging to priority i. Therefore, every system pattern that is transmitted from the online monitoring component to the adaptive performance management entity contains the current number of active NRT sessions with priority i in the cell. For ease of notation, the number of active non real-time sessions with priority i is abbreviated by NRT_i.
The idea behind the strategy for revenue improvement is
to shift the overall utilization of bandwidth for NRT traffic
towards higher priority users, which pay more for the transferred
data volume. Note that the update strategy should be
conservative in a way that the transfer of packets of low priority
is not simply blocked if packets of higher priorities are
arriving, i.e., priority queueing. Assuming that the majority
of users will buy a cheaper low priority service class, priority
queueing will leave most users unsatisfied. Therefore, the update
strategy also considers the QoS aspect. The update strategy
concerning the queueing weights is developed according
to the following premises:
(i) If the number of active NRT users in the cell is the same for each priority class, i.e., NRT_i = NRT_j for i ≠ j, the weights w_i^(new) should be set according to the basic weights w_i for i = 1, ..., P.
(ii) Priority classes with low population of users compared to other classes should be prioritized, i.e., the corresponding weights should be increased.
(iii) The relative ordering of the weights should be preserved in a strong way, i.e., w_i^(new) ≥ (w_i / w_{i+1}) · w_{i+1}^(new) for i = 1, ..., P - 1.
Premise (i) constitutes the key of the update strategy. If all
priority classes have the same population of users the scheduling
should work as in the case without adaptive control of
the weights. The rationale behind premise (ii) is to prioritize
users that are consuming less bandwidth (relative to their
weights) than users belonging to other classes, i.e., users of
low population should be made more independent from the
influence of user classes with higher population. This premise
constitutes the basic idea for QoS improvement and is demonstrated
by the following example that considers two priority
classes, i.e., a high and low priority class. In WFQ the available
bandwidth is shared among all active users according to
their weights. That is, if the minority are high priority users,
the overall bandwidth consumed by these users will suffer
from a strong influence of low priority users that hold the
majority. Therefore, increasing the weights for high priority
users will result in a higher QoS for this user class. Updating
the weights according to this strategy will result in a scheduling
algorithm somewhere between a WFQ and a class based
queueing scheduler. In fact, the benefit of both is utilized: the
fair sharing of the bandwidth of WFQ and the higher bandwidth
guarantees for each priority class provided by a class
based queueing scheduler.
Preserving the relative ordering of the weights (i.e.,
premise (iii)) guarantees that QoS for higher priority users
and, therefore, the provider revenue can only be improved
due to the adaptive control of the weights. If the intention
of the update strategy is not primarily on improving provider revenue, the weights can also be set in a weak relation, i.e., w_i^(new) ≥ w_{i+1}^(new). This might be useful to increase QoS for
users of low population independent of their priority class.
With the following algorithm the computation of the weights w_1^(new), ..., w_P^(new) can be performed iteratively in P - 1 minimum calculations. The iteration is given by

w_1^(new) = w_1 / (NRT_1)^α,   (9)

w_i^(new) = min{ (w_i / w_{i-1}) · w_{i-1}^(new),  w_i / (NRT_i)^α },   i = 2, ..., P.   (10)
In order to smooth the influence of the number of NRT users on the queueing weights, an exponent α ≥ 0 is considered (e.g., α = 1/2). It is easy to show that premises (i), (ii) and (iii) hold for the weights set according to equations (9) and (10). The iteration starts with setting w_1^(new) according to NRT_1 and continues up to w_P^(new). Note that this is only one possibility to set the new weights. Any other starting position for the iteration is possible and results in a slightly different update of the weights. Nevertheless, the algorithms work in a similar way, and therefore, we consider only the iteration of (9) and (10). If currently no users of priority i are in the cell, i.e., NRT_i = 0, the algorithm skips the setting of the corresponding weight w_i^(new) and the next iteration step i + 1 is related to step i - 1. Subsequently, these weights are set to zero. For other scheduling disciplines like weighted round robin or class based queueing, corresponding update strategies can be derived in a similar way.
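The iterative weight computation of equations (9) and (10), including the handling of empty priority classes, is sketched below. The smoothing exponent α = 1/2 and the basic weights 4/2/1 in the example call mirror the values used in the text and in the simulator of section 5.2; the code illustrates the reconstruction given above and is not the BSC implementation.

```python
def update_wfq_weights(basic_weights, nrt_counts, alpha=0.5):
    """Equations (9)-(10): compute w_i^(new) from the basic weights and the
    current class populations NRT_i. Empty classes are skipped (their weight
    stays zero) and the ordering constraint of premise (iii) is then applied
    between the remaining classes."""
    P = len(basic_weights)
    new_weights = [0.0] * P
    prev = None  # index of the previously updated (non-empty) class
    for i in range(P):
        if nrt_counts[i] == 0:
            continue
        candidate = basic_weights[i] / (nrt_counts[i] ** alpha)
        if prev is not None:
            candidate = min((basic_weights[i] / basic_weights[prev]) * new_weights[prev],
                            candidate)
        new_weights[i] = candidate
        prev = i
    return new_weights

# Example: basic weights 4/2/1 and populations of 5/20/50 active NRT sessions.
weights = update_wfq_weights([4, 2, 1], [5, 20, 50])
```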
4.2.2. Considering advanced pricing policies
In pricing policy (b) introduced above, users have to pay an
additional fee depending on the throughput improvement due
to the update of the queueing weights. This concept of pricing
indicates strong similarities to the congestion-sensitive
pricing of the smart market scheme [21], where the actual
price for each packet is determined based on the current state
of network congestion. Similarly, in our throughput-based
pricing policy the throughput of users is determined by their
willingness-to-pay additional costs (according to their choice
of priority class) for transmission of packets in a congested
network. The additional payment is justified because the
throughput for users of higher priority will be maintained,
even if more and more users of lower priority attend the cell,
i.e., the network is currently congested. We describe the relative throughput increase of priority class i with the function

δ_i(t) = [(w_P · THR_i) / (w_i · THR_P)]^β.   (11)

In equation (11), THR_i is the current throughput of class i derived from the corresponding sliding window and 0 ≤ β ≤ 1 is a scaling exponent (e.g., β = 1/4) that has to be adjusted by the service provider for appropriate revenue dimensioning. In order to guarantee that revenue will only be improved, δ_i(t) has to be truncated, i.e., δ_i(t) ≥ 1.
Next, we adjust the weights according to an advanced pricing
policy that adopts ideas, which have been successful in
existing GSM networks. In GSM networks, the pricing of
a provided service is as follows: the proposed service is offered
based on a monthly payment for a dedicated amount
of call time. If a user has consumed this amount of time
before the end of the month, he has to pay for any further
use of this service based on a time-dependent accounting.
This idea can be generalized and extended towards packet-switched
services in 3G networks. Analogously, a user has
to pay a monthly charge for a dedicated amount of data volume
, which can be transferred without further pricing. After
using up this monthly amount of data, the user has to pay for
the desired services according to the transferred data volume
(byte-based). Moreover, analogous to GSM networks a user
can utilize "unused" data volume, i.e., the unused fraction of
the prepaid monthly amount of data volume, in subsequent
months. If the monthly amount of data is unrestricted, this
pricing would become a flat-rate pricing and if there is no
monthly payment, the pricing follows a usage-based policy.
Thus, our pricing policy constitutes a hybrid approach of flat-rate
and usage-based pricing.
The update of the queueing weights can now be extended in a way that users consuming their monthly amount of data are served with a lower priority than users currently paying for their data transfer. Therefore, we introduce a new weight w′ corresponding to the not paying users. The weight w′ must be sorted into the weights w_1, ..., w_P and the iterative update algorithm (9)–(10) can be applied to the P + 1 weights as described above. In order to distinguish not paying users with different priorities, these users are served by the WFQ scheduler with weights w_1, ..., w_P relative to w′. That is, WFQ is applied to 2P weights, i.e., w_1, ..., w_P and (w′/w) · w_1, ..., (w′/w) · w_P with w = w_1 + ... + w_P.
4.3. Implementation issues
As outlined in section 2.1, the controlled system parameters for QoS and revenue improvement, i.e., θ, b_h, w_i, and δ_i, constitute an integral component of the proposed extension to a BSC. The adjustment of system parameters is only based on implicit information that is directly measured by the online monitoring component. Therefore, no additional signaling with other BSCs is necessary for updating system parameters. The online monitored QoS measures, i.e., PLP, HFP, CBP, NRT_i, and THR_i, can easily be derived and stored within
the BSC (see figure 1). The PLP can directly be determined
by counting the number of IP packets, which are lost due to
buffer overflow in the NRT queue. HFP as well as the CBP is
determined by the non-admitted handover calls and new calls
in the admission controller, respectively. Admission, termination
, and handover of NRT calls enable the profiling of NRT
i
,
the number of non real-time sessions with priority i. Moreover
, the packet scheduler allows the throughput computation
of NRT users according to their individual priorities. Furthermore
, no time consuming signaling is needed to transfer
the system pattern inside the BSC because the online performance
monitoring component and the performance management
entity both reside in the BSC.
The question arises how call charging can be accomplished
for the considered pricing policies in 3G mobile networks.
For pricing policy (a), i.e., a fixed payment per kbit, call
charging can easily be processed by the subscription management
component of the operation subsystem (OSS) by means
of the call charging mechanism using the home location register
(HLR) [1]. Similarly, the hybrid pricing scheme can be
realized except that the remaining amount of prepaid data volume
has to be stored in the HLR for charging the transferred
data volume. Utilizing these existing charging mechanisms,
no additionally signaling overhead arises for charging data
services. The throughput-based pricing policy (pricing policy
(b)) just slightly changes the situation and can easily be
implemented within the BSC using a local copy of the user's
HLR charging data fields. This local data minimizes signaling
overhead of individual user charging. According to the
transferred data volume and current throughput of the user's
bandwidth class, this local charging profile is continuously
updated. Handovers that change the responsible BSC induce the transfer of this local charging profile to the new responsible BSC. Subsequently, these local data have to be updated in the HLR for individual user accounting after termination of the call. Note that this transfer of local charging profiles can
naturally be embedded in the OSS functionality.
5. Evaluation of the adaptive performance management strategies
For traffic modeling of RT applications we utilize the approach
proposed in [18], where variable bit rate video traffic
is modeled in terms of time-discrete M/G/∞ input processes. This model is based on measured video streams and efficiently captures the correlation structure of the considered video traffic applying the time-discrete M/G/∞ input process. The generated traffic is transformed utilizing a hybrid
Gamma/Pareto numerical transformation in order to capture
the marginal distribution of the measured traffic stream.
Subsequently, the synthetically generated traffic is broken
down to IP packets of a maximum size of 1500 bytes, which
are uniformly distributed within a given frame-duration of the
MPEG video sequence comprising of 1/30 s. Note that this
traffic model does not propose information for modeling RT
session durations. Therefore, we assume session durations to
be exponentially distributed (see section 5.2).
Recent recommendations for modeling NRT traffic and analytical
traffic models for 3G mobile networks are proposed
in [15,16], respectively. The traffic model is based on real
measurements conducted at an Internet service provider dial-in
link, which comprises comparable characteristics of future
mobile networks [17], i.e., different access speeds, influence
of the user behavior due to different tariff limits, as well as
asymmetric up- and downlink traffic. Based on these measurements
an NRT traffic model is constructed, applying the idea
of the single user traffic model, which describes traffic characteristics
on session-level, connection-level, i.e., application-level
, and packet-level, respectively. The key insight of this
modeling approach lies in an appropriate scaling procedure of the measured trace data towards typical bandwidth classes of 3G mobile networks, i.e., 64 kbps, 144 kbps, and 384 kbps. In this context, a bandwidth class denotes the maximum bandwidth capability of future handheld devices. We refer to [15] for details of the NRT traffic model, especially for the parameterization of the traffic characteristics.

Table 1. Characteristics for different UMTS session types.
                               Circuit switched   Streaming real time (RT)     Interactive non real time (NRT)
                               voice service      Audio        Video           high priority   normal priority   low priority
Portion of arriving requests   25%                12%          3%              6%              18%               36%
Session duration               120 s              180 s        180 s           determined by session volume distribution
Session dwell time             60 s               120 s        120 s           120 s           120 s             120 s
5.2. The simulation environment
In order to evaluate the proposed approach for adaptive control
, we developed a simulation environment for a UMTS access
network, i.e., a UMTS Terrestrial Radio Access Network
(UTRAN [3]). The simulator considers a cell cluster comprising
of seven hexagonal cells with corresponding transceiver
stations (i.e., Node B elements), that are managed by
a base station controller (i.e., a Radio Network Controller,
RNC). We assume that a mobile user requests a new session
in a cell according to a Poisson process. When a mobile user
starts a new session, the session is classified as voice, RT,
or NRT session, i.e., with the session the user utilizes voice,
RT, or NRT services mutually exclusive. RT sessions consist
of streaming downlink traffic corresponding to the UMTS
streaming class specified by 3GPP [2] and NRT sessions consist
of elastic traffic and correspond to the UMTS interactive
class or background class, respectively. For the year 2010
an amount of about 50% voice calls is anticipated [26]. We
assume that one half of the voice calls are served over the frequency
spectrum for traditional GSM services (i.e., 890915
and 935960 MHz) and the second half is served over the new
frequency spectrum allocated for UMTS. Nevertheless, the
simulator considers only the new frequency spectrum. Therefore
, we assume that 25% of the call requests are voice calls
whereas RT and NRT sessions constitute 15% and 60% of the
overall arriving requests (see table 1).
Subsequently, we have to specify the QoS profile for RT
and NRT sessions. For RT sessions the simulator considers
two QoS profiles, i.e., a low bandwidth profile comprising of
a guaranteed bit rate of 64 kbps corresponding to streaming
audio and a high bandwidth profile comprising of a guaranteed
bit rate of 192 kbps corresponding to streaming video.
According to the RT traffic model presented in section 5.1, we
assume that 80% of the RT sessions utilize the low bandwidth
profile whereas the remaining 20% utilize the high bandwidth
profile. Following the single user traffic model, NRT sessions
are partitioned according to different bandwidth classes
as follows: 60% for 64 kbps, 30% for 144 kbps, and 10%
for 384 kbps, comprising of different priorities (see table 1),
respectively.
The amount of time that a mobile user with an ongoing
session remains within the cell is called dwell time. If the session
is still active after the dwell time, a handover toward an
adjacent cell takes place. The call/session duration is defined
as the amount of time that the call will be active, assuming it
completes without being forced to terminate due to handover
failure. We assume the duration of voice calls and RT sessions
to be exponentially distributed. As proposed in [6], the
dwell time is modeled by a lognormal distribution. All corresponding
mean values are shown in table 1. A NRT session
remains active until a specific data volume drawn according to
a bandwidth-dependent lognormal distribution is transferred.
To distinguish between NRT traffic classes, the UMTS simulator
implements a WFQ scheduler with three packet priorities
: 1 (high), 2 (normal), and 3 (low) with weights w_1 = 4, w_2 = 2, and w_3 = 1. These priorities correspond to the
traffic handling priority specified by 3GPP. To model the user
behavior in the cell, the simulator considers the handover flow
of active mobile users from adjacent cells. The iterative procedure
introduced in [4] is employed for balancing the incoming
and outgoing handover rates. The iteration is based on the
assumption that the incoming handover rate of a user class at
step i
+ 1 is equal to the corresponding outgoing handover
rate computed at step i.
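The session mix of Table 1 and the NRT bandwidth classes of the single user traffic model can be reproduced by a toy request generator such as the one below. It only samples the request proportions; the full traffic models of section 5.1, the dwell-time distributions, and the CSIM-based simulator itself are not reproduced here.

```python
import random

SESSION_MIX = [                       # portions of arriving requests (Table 1)
    ("voice", 0.25),
    ("rt_audio_64kbps", 0.12),
    ("rt_video_192kbps", 0.03),
    ("nrt_high", 0.06),
    ("nrt_normal", 0.18),
    ("nrt_low", 0.36),
]
NRT_BANDWIDTH_CLASSES = [(64, 0.6), (144, 0.3), (384, 0.1)]  # kbps, probability

def sample_session_request():
    """Draw the type of the next session request and, for NRT sessions,
    its bandwidth class."""
    kinds, probs = zip(*SESSION_MIX)
    kind = random.choices(kinds, weights=probs)[0]
    if kind.startswith("nrt"):
        classes, class_probs = zip(*NRT_BANDWIDTH_CLASSES)
        return kind, random.choices(classes, weights=class_probs)[0]
    return kind, None
```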
5.2.1. UMTS system model assumptions
The simulator exactly mimics UMTS system behavior on the
IP level. The focus is not on studying link level dynamics
. Therefore, we assume a reliable link layer as provided
by the automatic repeat request (ARQ) mechanism of the
Radio Link Control (RLC) protocol. As shown in [22] for
the General Packet Radio Service (GPRS), the ARQ mechanism
is fast enough to recover from packet losses before reliable
protocols on higher layers (e.g., TCP) recognize these
losses due to timer expiration. Thus, a reliable link level
can be assumed when considering higher layer protocol actions
(see, e.g., [20]). To accurately model the UMTS radio
access network, the simulator represents the functionality of
one radio network controller and seven Node B transceiver
stations, one for each of the considered cells. Since in the
end-to-end path, the wireless link is typically the bottleneck,
and given the anticipated traffic asymmetry, the simulator focuses
on resource contention in the downlink (i.e., the path
RNC
Node B MS) of the radio interface.
The simulator considers the UTRAN access scheme based
on Wideband-Code Division Multiple Access (W-CDMA) in
Frequency Division Duplex mode (FDD) proposed by 3GPP
[1]. In FDD downlink, a division of the radio frequencies into
four physical code channels with data rates of 1,920 kbps each
up to 512 physical code channels with 15 kbps data rates each
is possible. Therefore, the overall bandwidth that is available
in one cell is 7,680 kbps. For the channel coding, we assume
a convolution-coding scheme with coding factor 2. In the experiments
without adaptive control the handover bandwidth
portion b_h is 5% and the NRT queue threshold is set to 95%.
The simulation environment was implemented using the simulation
library CSIM [7]. In a presimulation run the handover
flow is balanced, for each cell at the boundary of the seven-cell
cluster. All simulation results are derived with confidence
level of 95% using the batch means method. The execution
of a single simulation run requires about 40–60 min of CPU
time (depending on the call arrival rate) on a dual processor
Sun Sparc Enterprise with one GByte main memory.
5.2.2. Implementation of the hybrid pricing policy in the
simulator
According to the hybrid pricing policy as introduced in section
4.2.2, the user's overall remaining amount of prepaid data
volume d out of the user's monthly data volume D is determined
at the beginning of a session. Moreover, the remaining
amount of data volume of previous months r is determined
. For simulation study purposes, this is accomplished
by choosing the random value d uniformly out of the interval
[0, kD]. kD captures the monthly amount of data a user typically
transfers, i.e., a user typically transfers a multiple k of
the data volume D that is available for a fixed monthly payment
. The random value r is sampled according to a uniform
distribution out of the interval
[0, 0.1D], where 0.1D measures
the maximum amount of "unused" data volume of previous
months. If d exceeds D
+ r the user has no remaining
prepaid data volume, including the data volume of the current
and the previous months. Otherwise, there is a remaining
amount of prepaid data volume D
+ r - d for the considered
user and additional pricing arises only, if the transferred data
volume of the user session exceeds D
+ r - d. Thus, during
the user session the remaining data volume has to be updated
according to the actually transferred data.
In the simulation studies we utilize the proposed hybrid-pricing
scheme with a prepaid monthly data volume of
150 MB. According to the different priority classes 1, 2,
and 3, the volume-based pricing for transferred data exceeding
the prepaid monthly data volume comprises 20, 15, and 10 cost-units per MB, respectively. Considering the changing traffic loads according to the daytime, this approach can be refined by the notion of different pricing for daily periods of time. For the parameterization of the typical monthly transferred data volume, we assume k = 2. Note that the
parameterization of the pricing scheme is chosen for demonstration
purposes only. Due to the high flexibility of the hybrid
pricing scheme, it can be easily extended towards multiple
, concurrent pricing schemes comprising of, e.g., different
monthly amounts of prepaid data volumes, different payments
for the individual priority classes, or a pure usage-based pricing
as well as pure flat-rate pricing.
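The per-session bookkeeping of the hybrid pricing policy can be sketched as follows. The prepaid volume D = 150 MB, the factor k = 2, the carry-over bound 0.1·D, and the prices of 20/15/10 cost-units per MB are the example values stated above; the function names are only illustrative.

```python
import random

PRICE_PER_MB = {1: 20, 2: 15, 3: 10}   # cost-units per MB for priority classes 1-3

def chargeable_volume(session_volume_mb, D=150.0, k=2.0):
    """Determine the part of a session's data volume that is charged:
    d - data already transferred this month, drawn uniformly from [0, kD];
    r - unused prepaid volume of previous months, drawn uniformly from [0, 0.1D];
    only traffic beyond the remaining prepaid volume D + r - d is charged."""
    d = random.uniform(0.0, k * D)
    r = random.uniform(0.0, 0.1 * D)
    remaining_prepaid = max(D + r - d, 0.0)
    return max(session_volume_mb - remaining_prepaid, 0.0)

def session_revenue(session_volume_mb, priority):
    return PRICE_PER_MB[priority] * chargeable_volume(session_volume_mb)
```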
Figure 2. Impact of adaptive performance management on non real-time traffic.
5.3. Performance results
Using simulation experiments, we illustrate the benefit of the
proposed unified approach for adaptive performance management
of UMTS systems. In particular, we show the improvement
of QoS measures and the increase in revenue earned by
service providers. The presented curves plot the mean values
of the confidence intervals for the considered QoS measures.
In almost all figures, the overall call/session arrival rate of
new mobile users is varied to study the cell under increasing
load conditions. For ease of notation, results with and without
adaptive performance management (APM) are abbreviated by
APM on and APM off, respectively.
In a first experiment, we investigate the effect of adaptive
control on the threshold for the buffer size of the NRT
queue denoted by θ. Figure 2 shows the NRT packet loss
probability (a) and the average number of NRT users in the
cell (b) for the UMTS system with and without adaptive control
. Furthermore, the figures distinguish between different
desired loss levels as introduced in section 3.2. We observe
that the APM achieves a substantial decrease in packet loss probability. Moreover, the packet loss probability can be kept below a constant level for increasing arrival rates of mobile users. Note that this level slightly differs from the desired level of the QoS measure. This is due to the fact that the update function only decreases the NRT threshold if the online measured packet loss probability is greater than ε. Therefore, the packet loss probability in steady state is also slightly greater than ε. Nevertheless, figure 2 shows that the resulting packet loss probability can be adjusted quite well. For very low arrival rates, the packet loss probability is increased compared to the case without adaptive control. This is because the packet loss probability is below the desired level and the threshold is adjusted towards 100%.
Figure 2(b) shows the average number of NRT users admitted
in the cell. For all curves, the number of NRT users in
the cell first increases up to about 70 users for an arrival rate of
1.0 arrivals per second. For higher arrival rates the admission
controller decides to reject requests depending on the choice
of the NRT threshold. In the case without APM the number
of NRT users approaches 100 whereas in the cases with
adaptive control fewer users are admitted in the cell because the threshold parameter θ is decreased (e.g., about 80 users for ε = 0.001). For high arrival rates a slight decrease of the average number of NRT users can be observed. This is
due to the fact that with increasing arrival rate the competition
between voice, RT and NRT traffic decreases the bandwidth
capacity available for NRT traffic. Therefore, less NRT users
are admitted.
In the experiment presented in figure 3, we study the absolute
number of packet losses observed in one hour for a
transient scenario, i.e., the arrival rate of new calls is changing
every hour according to a half day window of a weekly
usage pattern [15]. The purpose of this experiment is to show
that the adaptive performance management is fast enough to
react on changing traffic conditions, i.e., to effectively adjust
the NRT threshold in order to reduce packet losses. The bars
shown in figure 3 correspond to the number of packet losses
for experiments with and without adaptive control. Furthermore
, the figure distinguishes between a desired loss level
of 0.01 and 0.001, respectively. The new call arrival rates considered
in one hour are depicted above the bars. We conclude
from figure 3 that for a real-life pattern of changing arrival
rates the packet losses can be effectively controlled by the
APM. This justifies the choice of the gradient m
= -0.02 in
the update function for the NRT threshold.
Figure 4. Impact of adaptive performance management on handover traffic.
Next, we study the effect of the APM on the handover
traffic. Figure 4 shows the handover failure probability (a)
and the new call blocking probability (b) for the UMTS system
with and without APM. Similar to figure 2, we distinguish
between different desired levels for the handover failure
probability. The desired level for new call blocking is
fixed to 0.1. Note that for controlling the handover bandwidth
the desired level can be used only to adjust the degree
of prioritization of handover failure over new call blocking
. Distinct from the packet loss probability, it cannot be
expected to keep the handover failure probability at a constant
level for increasing traffic load. That is for two reasons
: (1) the handover bandwidth is adjusted according to
two QoS measures that have a contrary influence and (2) the
increase of the handover bandwidth must be limited by a
certain portion of the overall available bandwidth (see section
3.2). If this limit is reached handover failures occur
more frequently for further increasing call arrival rate. These
two effects can be observed in the curves of figure 4. Nevertheless
, the handover failure probability is improved more
than one order of magnitude for call arrival rates between
0.75 and 1.25 call requests per second and a desired loss
level of 0.001. When studying the blocking probability
of new voice calls and RT sessions (see figure 4(b)),
we indeed observe a higher blocking probability of new calls
in the case with adaptive control and high arrival rate. In
Figure 5. Improving QoS for high priority non real-time users.
Figure 6. Effect of adjusting WFQ weights on bandwidth utilization of NRT
traffic.
fact, almost all call requests are blocked if system load is
high.
In a next experiment, we study the impact of NRT users on
QoS by the adaptive control of the queueing weights as introduced
in section 4.2. Figure 5 plots the average throughput
per user for each priority class of NRT traffic. As shown in
table 1, we assume 10% NRT users with high priority, 30%
with normal priority, and 60% with low priority. Recall, that
higher priority service is more expensive and, hence, more
users choose low priority service. If the overall load in the
cell is very low (i.e., less than 0.3 call arrivals per second)
each NRT user receives the maximal throughput independent
of the priority class. However, when the cell load is further
increased (arrival rates of more than 0.5 arrivals per second),
throughput for users of all priority classes decreases. The intention
of adaptively controlling the queueing weights is to
reduce heavy throughput degradation of high priority users
in this case. The performance increase of high priority users
and the decrease of low priority users are shown in figure 5.
Figure 6 plots the bandwidth portion utilized for each priority
class of NRT traffic. For low arrival rate (i.e., less than
0.5 call arrivals per second) NRT users with low priority utilize
the greatest portion of the NRT bandwidth because most
NRT users have priority low. When the cell load is increased
(arrival rates of more than 0.5 arrivals per second), the bandwidth
will be utilized more and more by high priority users.
Figure 7. Revenue improvement for usage-based pricing policy.
The adaptive control of the WFQ weights decides to intensify
this effect because users belonging to priority high suffer from
the high population of low priority users. Figures 5 and 6 are
derived from simulation runs with the parameter introduced
in section 4.2 set to 1/2.
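The bandwidth split visible in figure 6 follows from the weighted fair queueing
discipline: each backlogged NRT priority class receives a share of the NRT
bandwidth proportional to its (adaptively controlled) weight. A minimal sketch,
with purely hypothetical weight and bandwidth values:

```python
def wfq_shares(nrt_bandwidth_kbit, class_weights):
    """Split the available NRT bandwidth among priority classes in
    proportion to their WFQ weights (illustrative sketch)."""
    total = sum(class_weights.values())
    return {cls: nrt_bandwidth_kbit * w / total
            for cls, w in class_weights.items()}

# Example: the adaptive controller has shifted weight towards the
# high-priority class (the numbers are hypothetical).
print(wfq_shares(384.0, {"high": 4.0, "normal": 2.0, "low": 1.0}))
```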
In the following experiments, we study the impact of controlling
the queueing weights on the revenue function (see
equation (8)) for the three proposed pricing policies, i.e.,
usage-based, usage-/throughput-based, and the hybrid pricing
policy. From the revenue function the average (steady state)
provider revenue in the considered cell can be derived. Recall
that the available bandwidth for NRT traffic is variable for
different call arrival rates. Therefore, we consider the revenue
earned by the provider in one hour per available bandwidth
unit, i.e., per available kbit, for NRT traffic. Figure 7 shows
the provider revenue for the usage-based pricing policy (i.e.,
with the throughput-related parameter set to 0) and different
values of the scaling exponent. As discussed
in section 4.2, the best revenue improvement will be achieved
with priority queueing. From the curves we conclude that the
update strategy increases the revenue in one cell successfully
for the considered traffic assumptions. Recall that the revenue
improvement stems from a shift in bandwidth utilization towards
higher priority users (see figure 6) if the population of
high priority users is low compared to users of lower priority.
Figure 7(b) shows the revenue improvement for different
user populations. In the experiment the percentage of high
Figure 8. Revenue improvement for usage-/throughput-based pricing policy.
Figure 9. Revenue improvement for hybrid pricing policy.
priority users among the arriving user requests is varied. The
remaining users are assumed to be low priority users. Normal
priority users are not considered in this experiment (i.e., 0%
normal priority users). This figure shows how the adaptive
control of the queueing weights works. As expected, for a low
percentage of high priority users the corresponding weight
is increased. Therefore, QoS for high priority users and the
provider revenue is also increased. For more than 50% high
priority users the revenue is the same as in the case without
adaptive control. No further revenue improvement is allowed
because degradation of QoS for low priority users would be
unacceptable. Considering a weak relation among the weights
as introduced in section 4.2 would decrease the revenue compared
to the case without adaptive control for more than 50%
high priority users. This might be useful to increase QoS for
users of low population independent of their priority class.
Figure 8 shows the revenue improvement for the usage-/
throughput-based pricing policy and scaling exponents of
1/4 and 1/16. In the last experiment we studied the
revenue improvement for the hybrid pricing policy (see figure
9). We assume that half of the arriving users start their
session in non-paying mode (i.e., k = 2). The curves distinguish
between weights w = 1 and w = 2 for the non-paying
users. Furthermore, the revenue for the case with and without
adaptive control is compared. The curves are derived from
simulations with the corresponding parameters set to 0 and 1/2.
From the revenue curves of figures 7-9 the average monthly revenue can be
computed considering a daily/weekly usage-pattern and different
splits of call arrival rates of users requesting different
services (i.e., voice, RT, NRT with different priorities). Comparing
the monthly revenue for the pricing policies used in
figures 7-9 with the monthly revenue for the hybrid pricing
policy a provider can determine values such as the monthly
free data volume and monthly payment per user.
Conclusions
We introduced a unified approach based on a mathematical
framework for the adaptive performance management of 3G
mobile networks. In contrast to previous work [8,13,19,21,25],
the improvement of quality of service (QoS) and the optimization
of mobile service provider revenue were considered in an
integrated way. The unified approach aims at improving both
QoS for mobile subscribers and increasing revenue earned by
service providers. System parameters controlled by adaptive
performance management constitute the portion of bandwidth
reserved for handovers, the buffer threshold of the queue for
non real-time traffic, and the weights of a weighted fair queueing
packet scheduler.
Using the UMTS traffic model of [15] and a simulator on
the IP level for the UMTS system, we presented performance
curves for various QoS measures to illustrate the benefit of
the unified approach for adaptive performance management.
We introduced update functions that effectively control the
packet loss probability and the handover failure probability.
Considering usage-based, usage-/throughput-based, and hybrid
pricing policies, we showed that the provider revenue in
one cell can be significantly increased by the adaptive control
of the queueing weights.
Throughout the paper, we considered the services and QoS
profiles standardized for UMTS. Thus, the proposed approach
for adaptive control is tailored to UMTS networks. However
, by considering other services and QoS profiles, the basic
ideas underlying the unified approach for adaptive performance
management can also be applied for the adaptive control
of other kinds of multi-service IP networks.
References
[1] 3GPP, http://www.3gpp.org
[2] 3GPP, QoS concept and architecture, Technical Specification TS
23.107 (September 2001).
[3] 3GPP, UTRAN overall description, Technical Specification TS 25.401
(September 2001).
[4] M. Ajmone Marsan, S. Marano, C. Mastroianni and M. Meo, Performance
analysis of cellular mobile communication networks supporting
multimedia services, Mobile Networks and Applications 5 (2000) 167-177.
[5] K. Aretz, M. Haardt, W. Konhäuser and W. Mohr, The future of wireless
communications beyond the third generation, Computer Networks
37 (2001) 83-92.
[6] F. Barceló and J. Jordán, Channel holding time distribution in public
cellular telephony, in: Proceedings of the 16th International Teletraffic
Congress, Edinburgh, Scotland (1999) pp. 107-116.
IMPROVING QoS AND PROVIDER REVENUE IN 3G MOBILE NETWORKS
221
[7] CSIM18 The Simulation Engine, http://www.mesquite.com
[8] S.K. Das, R. Jayaram, N.K. Kakani and S.K. Sen, A call admission and
control scheme for Quality-of-Service provisioning in next generation
wireless networks, Wireless Networks 6 (2000) 17-30.
[9] A. Demers, S. Keshav and S. Shenker, Analysis and simulation of a fair
queueing algorithm, in: Proceedings of the International Symposium
on Communications Architectures and Protocols (SIGCOMM), Austin,
TX (1989) pp. 1-12.
[10] S. Floyd and V. Jacobson, Link-sharing and resource management models
for packet networks, IEEE/ACM Transactions on Networking 3
(1995) 365-386.
[11] X. Geng and A.B. Whinston, Profiting from value-added wireless services
, IEEE Computer 34 (August 2001) 87-89.
[12] A. Gupta, D.O. Stahl and A.B. Whinston, The economics of network
management, Communications of the ACM 42 (1999) 57-63.
[13] A. Gupta, D.O. Stahl and A.B. Whinston, Priority pricing of integrated
services networks, in: Internet Economics, eds. L. McKnight
and J. Bailey (MIT Press, 1995) pp. 323-378.
[14] G. Haring, R. Marie and K.S. Trivedi, Loss formulas and their application
to optimization for cellular networks, IEEE Transactions on Vehicular
Technology 50 (2001) 664-673.
[15] A. Klemm, C. Lindemann and M. Lohmann, Traffic modeling and
characterization for UMTS networks, in: Proceedings of GLOBECOM
2001, San Antonio, TX (November 2001) pp. 1741-1746.
[16] A. Klemm, C. Lindemann and M. Lohmann, Traffic modeling of IP
networks using the batch Markovian arrival process, in: Proceedings of
Tools 2002, London, Great Britain (April 2002) pp. 92-110.
[17] J. Kilpi and I. Norros, Call level traffic analysis of a large ISP, in: Proceedings
of the 13th ITC Specialist Seminar on Measurement and Modeling
of IP Traffic, Monterey, CA (2000) pp. 6.1-6.9.
[18] M. Krunz and A. Makowski, A source model for VBR video traffic
based on M/G/∞ input processes, in: Proceedings of the 17th Conference
on Computer Communications (IEEE INFOCOM), San Francisco,
CA (1998) pp. 1441-1449.
[19] C. Lindemann, M. Lohmann and A. Thümmler, Adaptive performance
management for UMTS networks, Computer Networks 38 (2002) 477-496.
[20] R. Ludwig, A. Konrad and A.D. Joseph, Optimizing the end-to-end
performance of reliable flows over wireless links, in: Proceedings
of the 5th Conference on Mobile Computing and Networking (ACM
MobiCom), Seattle, WA (1999) pp. 113-119.
[21] J.K. MacKie-Mason and H.R. Varian, Pricing the Internet, in: Public
Access to the Internet, eds. B. Kahin and J. Keller (MIT Press, 1995)
pp. 269-314.
[22] M. Meyer, TCP performance over GPRS, in: Proceedings of the First
Wireless Communications and Networking Conference (IEEE WCNC),
New Orleans, LA (1999) pp. 1248-1252.
[23] Mobile Wireless Internet Forum (MWIF), OpenRAN architecture in
3rd generation mobile systems, Technical report MTR-007 (September
2001) http://www.mwif.org
[24] J.M. Peha and A. Sutivong, Admission control algorithms for cellular
systems, Wireless Networks 7 (2001) 117-125.
[25] S. Rao and E.R. Petersen, Optimal pricing of priority services, Operations
Research 46 (1998) 46-56.
[26] UMTS-Forum, UMTS/IMT-2000 Spectrum, Report No. 6 (1999).
[27] H. Zhang, Service disciplines for guaranteed performance service in
packet-switched networks, Proceedings of the IEEE 83 (1995) 1374-1396.
[28] Wireless World Research Forum (WWRF), http://www.wireless-world-research.org
Christoph Lindemann is an Associate Professor in
the Department of Computer Science at the University
of Dortmund and leads the Computer Systems
and Performance Evaluation group. From 1994 to
1997, he was a Senior Research Scientist at the GMD
Institute for Computer Architecture and Software
Technology (GMD FIRST) in Berlin. In the summer
1993 and during the academic year 1994/1995,
he was a Visiting Scientist at the IBM Almaden Research
Center, San Jose, CA. Christoph Lindemann
is a Senior Member of the IEEE. He is author of the monograph Performance
Modelling with Deterministic and Stochastic Petri Nets (Wiley, 1998). Moreover
, he co-authored the survey text Performance Evaluation Origins and
Directions (Springer-Verlag, 2000). He served on the program committees of
various well-known international conferences. His current research interests
include mobile computing, communication networks, Internet search technology
, and performance evaluation.
E-mail: [email protected]
WWW: http://www4.cs.uni-dortmund.de/
Lindemann/
Marco Lohmann received the degree Diplom-Informatiker
(M.S. in computer science) with honors
from the University of Dortmund in March 2000.
Presently, he is a Ph.D. student in the Computer Systems
and Performance Evaluation group at the University
of Dortmund. He is a student member of the
IEEE and the ACM. His research interests include
mobile computing, Internet search technology, and
stochastic modeling.
E-mail: [email protected]
Axel Thümmler received the degree Diplom-Informatiker
(M.S. in computer science) from the University
of Dortmund in April 1998. Presently, he is a
Ph.D. student in the Computer Systems and Performance
Evaluation group at the University of Dortmund
. His research interests include mobile computing
, communication networks, and performance
evaluation.
E-mail: [email protected] | QoS;packet loss probability;Quality of Service in mobile systems;provider revenue;performance evaluation of next generation mobile systems;packet scheduler;adaptive performance management;admission control in mobile system;pricing policy;admission control;3G mobile networks;pricing and revenue optimization |
24 | A WEIGHTED RANKING ALGORITHM FOR FACET-BASED COMPONENT RETRIEVAL SYSTEM | Facet-based component retrieval techniques have been proved to be an effective way for retrieving. These Techniques are widely adopted by component library systems, but they usually simply list out all the retrieval results without any kind of ranking. In our work, we focus on the problem that how to determine the ranks of the components retrieved by user. Factors which can influence the ranking are extracted and identified through the analysis of ER-Diagram of facet-based component library system. In this paper, a mathematical model of weighted ranking algorithm is proposed and the timing of ranks calculation is discussed. Experiment results show that this algorithm greatly improves the efficiency of component retrieval system. | Motivations
A high efficiency retrieval system for software
component library is important for the reuse of software
components. By high efficiency we do not primarily mean the
time performance of a single matching or retrieving step,
measured in seconds or milliseconds, but rather the efficiency
with which component consumers can find what they need as
soon as possible, even though the former is the basis of the
latter.
Whether exact matching or fuzzy matching is used, component
retrieval systems usually simply list out all the retrieval
results without any kind of ranking, or at least
without a systematic ranking. Users have to view the
detail information of all the retrieval results one by one to
find out which is the best to fit their requirements, or else
they have to adjust their query conditions to retrieve again.
If there are a large number of components retrieved from
the component library, it could be a tough and torturous
experience to find a proper component. However, it's a
fact that there's a matching degree between the query
conditions and retrieval results. The matching degree is
just the similarity and relevancy between the query
condition and its retrieval results. Only when we rank the
retrieval results by the matching degree as the Web search
engines can component consumers easily find what they
need. They only have to compare the first several retrieval
results but not all of them.
According to the discussion above, it's clear that a
formula to calculate the matching degree and its
corresponding ranking algorithm, which can greatly
improve the retrieval efficiency for software component
library, are needed. In this paper, we propose a weighted
ranking algorithm for facet-based component retrieval
system. This algorithm has been implemented in a
software component library, called DLCL, and greatly
improves the efficiency of the retrieval system.
Introduction to Retrieval Methods for Component Library
2.1 Existing Retrieval Methods for Component
Library
Retrieval of software components is a core technique of
component library. Today there are lots of retrieval
methods for software component library. The main are as
follows [1, 2]: (1) Specification matching method; (2) AI
Based method; (3) Information science method; (4)
Hypertext browsing method. As to the four methods, each
has its own features and there is no general formula to
calculate the matching degree. For example, specification
matching method uses formal specifications to describe
the behavior of software components
and relies on
theorem proving to determine match and mismatch. AI
Based method relies on the use of AI planning techniques
to automatically search software components in
component library. So we have to use different
calculating strategies to calculate the matching degree of
each retrieval method.
Among the retrieval methods discussed above,
information science method is widely used in practice.
Information science method usually comprises several
different retrieval methods which are attribute-value,
enumerated, faceted, and keyword method. Of the four
methods, facet-based component retrieval method has
been proved to be an effective way for retrieving and has
been widely adopted by component library systems. In the
following section, we'll discuss the facet-based retrieval
method.
2.2
Facet-based Retrieval Method
A component classification is a set of {facet, facet term}
pairs, also called descriptors [3]. Reusable software
components (RSC) are classified by assigning appropriate
facet terms for all applicable facets. The objective in
classifying the RSC is to make it possible for all users
who might use the RSC to retrieve it as a result of their
requests. Faceted classification scheme is an effective
way for classifying the components and widely adopted
by component library systems.
Correspondingly, there are several retrieving
algorithms for faceted classification scheme. Some
systems use the traditional database query techniques in
facet-based retrieval. Wang YF proposed a tree matching
algorithm in his PH.D dissertation [4]. This algorithm
maps the component facets into a facet tree and maps the
query conditions into a query tree. The matching
algorithm deals with the match of the facet tree and query
tree and calculates the matching cost. This algorithm
bases on the tree matching theories, such as tree
embedding, tree inclusion, and tree containment. These
three tree matching methods are becoming more and more
elastic in order to improve the retrieving recall while
maintaining the precision to a certain extent. Matching
cost of the tree matching will be calculated to measure the
approximate degree between the facet trees of the
components and the query tree. The data structure of a
tree is represented by a three-tuple: T= (V, E, root (T)), V
represents a limited set of vectors, root (T) represents the
root of the tree, E represents the set of edges.
Weighted Ranking Algorithm for Facet-based Component Retrieval System
There is no general formula to calculate the matching
degree, due to the different features of each retrieval
method. Facet-based retrieval method has been widely
adopted by existing component library systems, such as
REBOOT, Proteus, Asset Library, and JBCL [5]. It has
been proved to be an effective method to the retrieval of
component library system. And therefore, it makes great
sense to propose a component ranking algorithm for facet-based
retrieval system.
3.1 ER-Diagram of Software Component Library
The extraction and identification of the influential factors
which are used to calculate the matching degree is the
first step in establishing a mathematical model. Analyzing
the ER-Diagram of the software component library is an
effective way to extract these factors. The ER-diagram of a
facet-based component library is given below:
Fig. 1. ER-Diagram of Component Library
Entities list:
Component: component is the basic and primary
entity in component library. Besides the attributes,
there are facet-term pairs and information summary
to describe a component.
User Feedback: an opinion, a comment or a score
provided by users after they have used a component.
Component Summary: an information summary to
describe a component which enables users to know
well the component quickly.
Facet: facet and its terms are used to classify and
represent the components.
3.2 Factors of Weighted Ranking Algorithm
As to a facet-based component library system, facet is the
most important method to classify and represent the
components. Correspondingly, facet-based retrieval
methods, such as facet tree matching method, are
important for the component retrieval system. The
matching degree between facet tree and query tree is of
much importance for ranking. However, matching degree
of facet is not the only factor which is able to influence
the ranking.
Retrieval system of component library usually has two
search modes: simple query and complex query. Simple
query just simply uses the traditional database query
method to match the Attribute-Valued pairs. In contrast,
complex query is a much more effective way which
combines several query methods together to match
different kinds of component information. And therefore,
275
we should take other factors into account besides the facet
for ranking the retrieval results. According to the analysis
of ER-Diagram above, we can extract some other factors
which are able to influence the ranking of component
retrieval results while using the complex query.
Attributes of component, such as component name, can
be used to match the keywords in the query conditions.
The matching degree of Attribute-Valued pairs should be
an influential factor for ranking.
Summary of component can also be used to match the
query conditions. Query conditions usually consist of
several keywords. The density, prominence, and position
of keywords within the component summary will
influence the ranking of components. The keyword
density is just the number of occurrences of the keywords
within the component summary divided by the total
number of words. Keyword prominence is related to the
location of keywords in the summary. For example,
keywords placed at the beginning of the summary may
carry more weight than those towards the end of it.
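For illustration, the keyword density and a simple position-based prominence
score for a component summary could be computed as in the sketch below; the
linear position weighting is an assumption of this sketch, not a formula from
the system:

```python
def keyword_density(summary_words, keywords):
    """Occurrences of query keywords divided by the total number of words."""
    hits = sum(1 for w in summary_words if w.lower() in keywords)
    return hits / len(summary_words) if summary_words else 0.0

def keyword_prominence(summary_words, keywords):
    """Early occurrences count more than late ones (illustrative weighting)."""
    n = len(summary_words)
    score = 0.0
    for pos, w in enumerate(summary_words):
        if w.lower() in keywords:
            score += (n - pos) / n      # 1.0 at the start, near 0 at the end
    return score

words = "reusable sorting component for large integer arrays".split()
print(keyword_density(words, {"sorting", "component"}),
      keyword_prominence(words, {"sorting", "component"}))
```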
User feedback of a component is very useful for other
users who want to use it to evaluate the quality and other
features of the component. They can acquire much more
objective description and useful information about the
component besides the component attributes and
summary.
How many times the component information has been
visited and how many times the component has been
downloaded for reusing should also be taken into account
as the factors to calculate the matching degree for ranking.
They reflect the popularity and reusability of the
component from another aspect.
3.3 Mathematical Model
Retrieval results consist of a collection of components
matching the query conditions:
Definition 1: Components $(C_1, C_2, \ldots, C_i, \ldots, C_n)$; $(n \in \mathbb{N},\ n \ge 1)$
It makes no sense to discuss the case of an empty retrieval
result, since we are concerned with ranking a non-empty list
of components.
Accordingly, each component has a rank value:
Definition 2: Ranks $(R_1, R_2, \ldots, R_i, \ldots, R_n)$; $(n \in \mathbb{N},\ n \ge 1)$
The query condition consists of a collection of
keywords:
Definition 3: Keywords $(K_1, K_2, \ldots, K_i, \ldots, K_{n_0})$; $(n_0 \in \mathbb{N},\ n_0 \ge 1)$
Each component is described by a set of Attribute-Valued
pairs:
Definition 4: Attributes $(A_1, A_2, \ldots, A_i, \ldots, A_{n_1})$; $(n_1 \in \mathbb{N},\ n_1 \ge 1)$
Besides Attribute-Value pairs, components are also
classified and represented by a set of facets and their terms:
Definition 5: Facets $(F_1, F_2, \ldots, F_i, \ldots, F_{n_2})$; $(n_2 \in \mathbb{N},\ n_2 \ge 1)$
Summary of component information differs from
Attribute-Valued pairs. It provides a comprehensive
description of a component in context.
Definition 6: Summary (S);
User feedback includes all the comments and feedback
to a specific component:
Definition 7: User Feedback $(U_1, U_2, \ldots, U_i, \ldots, U_{n_3})$; $(n_3 \in \mathbb{N},\ n_3 \ge 1)$
User feedback must be analyzed and evaluated into a single
number. We use E to represent the evaluation value of the
user feedback.
Definition 8: E = Evaluate (User Feedback).
Definition 9: Visited times of a component: Visited
times (V);
Definition 10: Downloaded times of a component:
Downloaded times (D);
We have listed out all the influential factors above,
which constitute a six-tuple:
Factors (A, F, S, U, V, D);
Their influential weights differ from each other
according to their feature and importance:
Definition 11: Weights $(W_A, W_F, W_S, W_U, W_V, W_D)$;
$(0 \le W_A, W_F, W_S, W_U, W_V, W_D \le 1$, and
$W_A + W_F + W_S + W_U + W_V + W_D = 1)$
$W_A$ represents the weight of Attributes; $W_F$ represents
the weight of Facets; $W_S$, $W_U$, $W_V$, and $W_D$ represent the
weights of the corresponding factors discussed above.
There are several functions for calculating the matching
degree of some factors. The core calculating formula of
each function relies on its corresponding matching
algorithm.
Functions and formulas:
Summary:    $F_S(\text{Keywords}, \text{Summary}) = \sum_{i=1}^{n_0} \mathrm{match}(K_i, S)$
Facets:     $F_F(\text{Keywords}, \text{Facets}) = \sum_{i=1}^{n_0} \sum_{j=1}^{n_2} \mathrm{match}(K_i, F_j)$
Attributes: $F_A(\text{Keywords}, \text{Attributes}) = \sum_{i=1}^{n_0} \sum_{j=1}^{n_1} \mathrm{match}(K_i, A_j)$
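A minimal sketch of the three matching-degree sums above is given below; the
boolean substring test used for match() is an assumption of the sketch, since
the system may well use a graded match score:

```python
def match(keyword, text):
    """Simplified match predicate; the real system may return a graded score."""
    return 1.0 if keyword.lower() in text.lower() else 0.0

def f_summary(keywords, summary):
    return sum(match(k, summary) for k in keywords)

def f_facets(keywords, facet_terms):
    return sum(match(k, f) for k in keywords for f in facet_terms)

def f_attributes(keywords, attribute_values):
    return sum(match(k, a) for k in keywords for a in attribute_values)
```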
The match function for the component summary uses a
content-based similarity measure from search engine
techniques. A Best-First algorithm was proposed by Cho [6].
This algorithm uses a vector space model to calculate the
similarity between the keywords and the content. Its formula
is given as follows:
$$\mathrm{sim}(q, p) = \frac{\sum_{k \in q \cap p} W_{kq} \cdot W_{kp}}{\sqrt{\sum_{k \in q} W_{kq}^2} \cdot \sqrt{\sum_{k \in p} W_{kp}^2}}$$
The variable q represents the collection of keywords, p
represents the content, and $W_{kp}$ represents the importance
of k to a specific topic. In our mathematical model, the
variable q represents the query condition, and the variable
p represents the summary of the component information.
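A minimal sketch of this vector-space similarity, with the term weights
$W_{kq}$ and $W_{kp}$ supplied as dictionaries (for example TF or TFIDF
weights, which is an assumption of the sketch):

```python
import math

def sim(w_q, w_p):
    """Cosine similarity between a query vector w_q and a summary vector w_p,
    both given as {term: weight} dictionaries."""
    shared = set(w_q) & set(w_p)
    num = sum(w_q[k] * w_p[k] for k in shared)
    den = (math.sqrt(sum(v * v for v in w_q.values())) *
           math.sqrt(sum(v * v for v in w_p.values())))
    return num / den if den else 0.0

print(sim({"sort": 1.0, "array": 1.0}, {"sort": 2.0, "quick": 1.0}))
```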
The facet-based retrieval method usually adopts facet tree
matching. Therefore, its match function calculates the
matching degree between the facet tree of a component and
the query tree. A formula to calculate the matching cost of
tree containment matching was given by Xu [7]:
Q = (V, E, root(Q)) and D = (W, F, root(D)) are two unordered
labeled trees, and TCostM(Q, D) represents the tree containment
matching cost from tree Q to tree D.
$$\mathrm{TCostM}(Q, D) = \min\{\, \gamma(f) \mid f \text{ is a tree containment matching from } Q \text{ to } D \,\}$$
where the cost $\gamma(f)$ of a matching $f$ sums the relabeling costs
$\gamma(\mathrm{label}(v) \to \mathrm{label}(f(v)))$ over all nodes $v$ in the domain of $f$,
together with cost terms for the nodes of $Q$ outside the domain of $f$ and
for the nodes of $D$ outside the range (spectrum) of $f$.
If f is a tree containment matching from tree Q to tree D
and $\gamma(f)$ = TCostM(Q, D), then f is the tree containment
matching that obtains the minimum matching cost from Q to D.
This definition can also be applied to containment matching
between a tree and a forest, or between two forests.
As to the match function of component attributes, we
just use the traditional database query methods to deal
with it.
E, V, and D are three ranking factors with no relation to
the query keywords. Even though they are numbers, we cannot
use them directly for ranking; functions must be provided to
transform (normalize) them.
Functions:
Evaluation of feedback: $F_E(E)$
Visited times: $F_V(V)$
Downloaded times: $F_D(D)$
According to the discussion above, we finally draw out
a very simple formula to calculate the rank for each
component:
$$\mathrm{Rank} = F_A W_A + F_F W_F + F_S W_S + F_E W_E + F_V W_V + F_D W_D$$
We can use a matrix operation to represent the calculation
of the rank value for each component in the retrieval results.
There are n components and 6 influential factors, so
$F_{n \times 6} \cdot W_{6 \times 1} = R_{n \times 1}$:
$$\begin{pmatrix}
F_{A1} & F_{F1} & F_{S1} & F_{E1} & F_{V1} & F_{D1} \\
F_{A2} & F_{F2} & F_{S2} & F_{E2} & F_{V2} & F_{D2} \\
\vdots & \vdots & \vdots & \vdots & \vdots & \vdots \\
F_{An} & F_{Fn} & F_{Sn} & F_{En} & F_{Vn} & F_{Dn}
\end{pmatrix}
\begin{pmatrix} W_A \\ W_F \\ W_S \\ W_E \\ W_V \\ W_D \end{pmatrix}
=
\begin{pmatrix} R_1 \\ R_2 \\ \vdots \\ R_n \end{pmatrix}$$
We initialize the weights with empirical values and then
use data mining techniques to analyze the user logs and
adjust those values dynamically and iteratively.
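For illustration, the rank computation above (equivalently, one row of the
matrix product) can be sketched as follows; the weight values are hypothetical
starting values of the kind just mentioned, and the factor scores are assumed
to be normalized to comparable ranges:

```python
FACTOR_KEYS = ("A", "F", "S", "E", "V", "D")   # attributes, facets, summary,
                                               # feedback, visits, downloads

def rank(scores, weights):
    """Weighted sum Rank = F_A*W_A + F_F*W_F + ... for one component."""
    return sum(scores[k] * weights[k] for k in FACTOR_KEYS)

def rank_all(component_scores, weights):
    """Rank a whole retrieval result list, highest rank first."""
    return sorted(component_scores, key=lambda s: rank(s, weights), reverse=True)

weights = {"A": 0.25, "F": 0.30, "S": 0.20, "E": 0.10, "V": 0.08, "D": 0.07}
results = [{"id": 1, "A": 0.8, "F": 0.6, "S": 0.4, "E": 0.9, "V": 0.2, "D": 0.3},
           {"id": 2, "A": 0.5, "F": 0.9, "S": 0.7, "E": 0.4, "V": 0.6, "D": 0.1}]
print([c["id"] for c in rank_all(results, weights)])
```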
3.4 Timing of Ranks Calculation
Having designed how the ranks are calculated, we now have
to determine when to calculate them. There are two possible
times for the calculation:
calculating after the retrieving process has been finished;
calculating during the process of retrieving. Both options
have their own advantages and disadvantages.
If we calculate the rank after the retrieving process has
been finished, we can deal with the retrieving and ranking
separately. It will be much easier for us to design and
maintain the system, since the retrieving and ranking
process are independent. However, it costs much more
time and space to calculate the rank. It needs a lot of
memory space to store a large number of retrieval results
temporally before they are ranked, and costs much more
time to manage the transmission of data between storage
devices and CPU. On the contrary, if we calculate the
rank during the process of retrieving, we have to combine
the retrieving with the ranking process completely or partly.
This makes the system harder to implement and maintain, but
it can greatly improve time and space performance.
According to the discussion above, we can choose a
proper time to calculate the ranks. Which solution we
should choose depends on the requirements of the system.
The solution which calculates the ranks during the
retrieving process should be adopted if the time and the
space performances are rigorously required.
Implementation
Our component library system, named DLCL, is implemented
on the J2EE platform. The mathematical model and the
algorithm discussed above have been implemented in this
system in Java. The implementation consists of several core
interfaces and classes. The core interfaces, classes
and the relationship between them are demonstrated in the
UML class diagram:
Fig. 2. Class Diagram of Ranking Module
We calculate the ranks during the retrieving process to
improve the time and space performances, and therefore,
we have to combine the ranking module partly with the
retrieving module. In order to lower the coupling between
these two modules, a callback mechanism was
adopted. We define an interface Rank, which consists of
only one method to calculate the ranks.
RankComponentImpl is a class implementing the
interface Rank to calculate the ranks of those components
retrieved by users. The method of its concrete object can
be executed by the searcher during the retrieving process.
Class Component encapsulates those methods which
provide the component information.
ComponentMatchingDegree is a class providing those
methods for calculating the matching degree between the
query keywords and component. Each influential factor
has its own strategy for calculation. There's also a class
Weight, which provides the methods to get the influential
weight of each factor.
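A minimal sketch of this callback arrangement is shown below: the searcher
invokes the rank calculation while it is still iterating over candidate
components, so no second ranking pass over a temporarily stored result set is
needed. The class names mirror those in Fig. 2, but the bodies and the
matches() helper are assumptions of the sketch:

```python
class Rank:
    """Callback interface: the searcher calls calculate() during retrieval."""
    def calculate(self, component, keywords):
        raise NotImplementedError

class RankComponentImpl(Rank):
    def __init__(self, matching_degree, weights):
        self.matching_degree = matching_degree   # per-factor scoring strategies
        self.weights = weights                   # influential weight per factor

    def calculate(self, component, keywords):
        scores = self.matching_degree.score(component, keywords)
        return sum(scores[f] * self.weights[f] for f in scores)

def search(components, keywords, ranker):
    """Retrieve and rank in one pass over the component library."""
    hits = []
    for c in components:
        if c.matches(keywords):                  # facet/attribute matching
            hits.append((ranker.calculate(c, keywords), c))
    hits.sort(key=lambda pair: pair[0], reverse=True)
    return [c for _, c in hits]
```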
Experiment and its Results
In order to verify the efficiency of a component retrieval
system which adopts our weighted ranking algorithm, we
designed an experiment and carried it out in our component
library system, named
DLCL. There are more than 1000 components in this
system. Retrieving system of DLCL splits the retrieval
results into several pages if there are too many
components retrieved, and lists out 10 components per
page.
The experiment separates the users into two groups,
group 1 and group 2. Each group consists of 10 persons.
All the users know about the knowledge of component
reuse to a certain extent. Both groups use the facet-based
component retrieving method to retrieve the components.
Retrieval results of group 1 are listed out without any
ranking, however, those of group 2 are ranked by our
weighted ranking algorithm.
There are several aspects to measure the efficiency of
each group: how many pages they turned; how many times
they had to adjust the query condition; and, most importantly,
how much time elapsed during the whole retrieving process.
The experimental results are given in
the following table:
                                  Group 1   Group 2
Average turned pages (pages)        2.7       1.4
Average adjusted times (times)      2.3       1.1
Average time elapsed (minutes)     26.6       9.5
The experimental results clearly show that the efficiency
of group 2 is much higher than that of group 1. By
applying the weighted ranking algorithm into the
retrieving system of DLCL, users needn't turn too many
pages to view and compare the component information or
to adjust the query condition to improve the query
precision. Only to view the first page of retrieval results
will be enough most of the time. And therefore, it greatly
saves the time and retrieval costs.
Related Works
The idea of Component Rank comes from computing fair
impact factors of published papers [8]. Google is a web
search engine. Its method can be considered as an HTML
extension of the method proposed for counting impact of
publications, called influence weight in [8]. Google
computes the ranks (called PageRanks) for HTML
documents in the Internet [9, 10]. In reference [11], the
authors present the Component Rank model for ranking
software components, and show a system for computing
Component Rank. In this model, a collection of software
components is represented as a weighted directed graph
whose nodes correspond to the components and edges
correspond to the usage relations. Similar components are
clustered into one node so that effect of simply duplicated
nodes is removed. The nodes in the graph are ranked by
their weights which are defined as the elements of the
eigenvector of an adjacent matrix for the directed graph.
A major distinction of Component Rank model in [11]
from PageRank and the influence weight in [9, 10] is that
Component Rank model explores similarity between
components before the weight computation.
In this paper, we also propose a weighted ranking
algorithm for component retrieval system. This weighted
ranking algorithm uses different calculating strategies
according to the feature of facet-based retrieval methods.
In [11], by contrast, the authors employed only static use
relations.
Conclusion
In this paper, a mathematical model of weighted ranking
algorithm is proposed and the timing of ranks calculation
is discussed. We have applied this ranking algorithm into
our component library system, named DLCL. The
experiment we carried out shows that this algorithm
greatly improves the efficiency of component retrieving
system, saving the time and retrieval costs for component
reusing.
Acknowledgement
This research is partially supported by the National High
Technology Development 863 Program under Grant No.
2004AA116010.
References
[1] Frakes WB, Pole TP, An empirical study of
representation methods for reusable software components,
IEEE Transactions on Software Engineering, 1994, 20(8),
pp. 617-630.
[2] H. Mili, R. Rada, W. Wang, K. Strickland, C.
Boldyreff, L. Olsen, J. Witt, J. Heger, W. Scherr, and P.
Elzer, Practitioner and SoftClass: A Comparative Study of
Two Software Reuse Research Projects, J. Systems and
Software, 1994, 27(5)
[3] NEC Software Engineering Laboratory, NATO
Standard for Management of a Reusable Software
Component Library, NATO Communications and
Information Systems Agency, 1991
[4] Wang YF. Research on retrieving reusable
components classified in faceted scheme [Ph.D. Thesis].
Shanghai: Fudan University, 2002.
[5] Chang JC, et al. Representation and Retrieval of
Reusable Software Components [J].Computer Science,
1999, 26(5):41-48.
[6] Cho J, Garcia-Molina H, Page L. Efficient Crawling
Through URL Ordering [J]. Computer Networks, 1998,
30(1-7):161-172
[7] Xu RZ, et al. Research on Matching Algorithm for
XML-Based Software Component Query, Journal of
Software, 2003, 14(7):1195-1202.
[8] G. Pinski and F. Narin. "Citation Influence for Journal
Aggregates of Scientific Publications: Theory, with
Application to the Literature of Physics". Information
Processing and Management, 12(5):297-312, 1976.
[9] L. Page, S. Brin, R. Motwani, and T. Winograd. "The
PageRank Citation Ranking: Bringing Order to the Web".
Technical Report of Stanford Digital Library
Technologies Project, 1998. http://www-db.stanford.edu/~backrub/pageranksub.ps.
[10] J. Kleinberg. "Authoritative Sources in a
Hyperlinked Environment". Journal of the ACM,
46(5):604-632, 1999.
[11] Katsuro Inoue, Reishi Yokomori, Hikaru Fujiwara,
Tetsuo Yamamoto, Makoto Matsushita, Shinji Kusumoto:
Component Rank: Relative Significance Rank for
Software Component Search. ICSE 2003: 14-24
279 | retrieval system;facet;component rank;component retrieval;and component library;ranking algorithm;Weighted ranking algorithm;matching degree;facet-based component retrieval;component library |
25 | Accelerated Focused Crawling through Online Relevance Feedback | The organization of HTML into a tag tree structure, which is rendered by browsers as roughly rectangular regions with embedded text and HREF links, greatly helps surfers locate and click on links that best satisfy their information need. Can an automatic program emulate this human behavior and thereby learn to predict the relevance of an unseen HREF target page w.r.t. an information need, based on information limited to the HREF source page? Such a capability would be of great interest in focused crawling and resource discovery, because it can fine-tune the priority of unvisited URLs in the crawl frontier, and reduce the number of irrelevant pages which are fetched and discarded. We show that there is indeed a great deal of usable information on a HREF source page about the relevance of the target page. This information, encoded suitably, can be exploited by a supervised apprentice which takes online lessons from a traditional focused crawler by observing a carefully designed set of features and events associated with the crawler. Once the apprentice gets a sufficient number of examples, the crawler starts consulting it to better prioritize URLs in the crawl frontier. Experiments on a dozen topics using a 482-topic taxonomy from the Open Directory (Dmoz) show that online relevance feedback can reduce false positives by 30% to 90%. | Introduction
Keyword search and clicking on links are the dominant
modes of accessing hypertext on the Web.
Support for
keyword search through crawlers and search engines is very
mature, but the surfing paradigm is not modeled or assisted
Contact author, email [email protected]
Copyright is held by the author/owner(s).
WWW2002, May 7-11, 2002, Honolulu, Hawaii, USA.
ACM 1-58113-449-5/02/0005
Figure 1: A basic focused crawler controlled by one topic
classifier/learner.
as well. Support for surfing is limited to the basic interface
provided by Web browsers, except for a few notable research
prototypes.
While surfing, the user typically has a topic-specific
information need, and explores out from a few known
relevant starting points in the Web graph (which may be
query responses) to seek new pages relevant to the chosen
topic/s. While deciding for or against clicking on a specific
link (u, v), humans use a variety of clues on the source
page u to estimate the worth of the (unseen) target page
v, including the tag tree structure of u, text embedded in
various regions of that tag tree, and whether the link is
relative or remote. "Every click on a link is a leap of faith"
[19], but humans are very good at discriminating between
links based on these clues.
Making an educated guess about the worth of clicking
on a link (u, v) without knowledge of the target v is
central to the surfing activity. Automatic programs which
can learn this capability would be valuable for a number
of applications which can be broadly characterized as
personalized, topic-specific information foragers.
Large-scale, topic-specific information gatherers are
called focused crawlers [1, 9, 14, 28, 30]. In contrast to giant,
all-purpose crawlers which must process large portions of
the Web in a centralized manner, a distributed federation of
focused crawlers can cover specialized topics in more depth
and keep the crawl more fresh, because there is less to cover
for each crawler.
In its simplest form, a focused crawler consists of a
supervised topic classifier (also called a `learner') controlling
the priority of the unvisited frontier of a crawler (see
Figure 1). The classifier is trained a priori on document
samples embedded in a topic taxonomy such as Yahoo!
or Dmoz.
It thereby learns to label new documents as
belonging to topics in the given taxonomy [2, 5, 21]. The
goal of the focused crawler is to start from nodes relevant
to a focus topic c* in the Web graph and explore links to
selectively collect pages about c*, while avoiding fetching
pages not about c*.
Suppose the crawler has collected a page u and
encountered in u an unvisited link to v. A simple crawler
(which we call the baseline) will use the relevance of u
to topic c* (which, in a Bayesian setting, we can denote
Pr(c*|u)) as the estimated relevance of the unvisited page v.
This reflects our belief that pages across a hyperlink
are more similar than two randomly chosen pages on the
Web, or, in other words, topics appear clustered in the
Web graph [11, 23]. Node v will be added to the crawler's
priority queue with priority Pr(c*|u). This is essentially a
"best-first" crawling strategy. When v comes to the head
of the queue and is actually fetched, we can verify if the
gamble paid off, by evaluating Pr(c*|v). The fraction of
relevant pages collected is called the harvest rate.
If V is the set of nodes collected, the harvest rate is defined
as $(1/|V|) \sum_{v \in V} \Pr(c^*|v)$. Alternatively, we can measure
the loss rate, which is one minus the harvest rate, i.e., the
(expected) fraction of fetched pages that must be thrown
away.
Since the effort on relevant pages is well-spent,
reduction in loss rate is the primary goal and the most
appropriate figure of merit.
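As a small illustration of these two figures of merit, the following sketch
computes them from the per-page relevance scores Pr(c*|v) that the baseline
classifier would supply (the classifier itself and the sample scores are
assumed here):

```python
def harvest_rate(relevances):
    """(1/|V|) * sum of Pr(c*|v) over the set V of fetched pages."""
    return sum(relevances) / len(relevances) if relevances else 0.0

def loss_rate(relevances):
    """Expected fraction of fetched pages that must be thrown away."""
    return 1.0 - harvest_rate(relevances)

fetched = [0.9, 0.2, 0.7, 0.1]   # hypothetical Pr(c*|v) for four fetched pages
print(harvest_rate(fetched), loss_rate(fetched))
```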
For focused crawling applications to succeed, the "leap
of faith" from u to v must pay off frequently. In other words,
if Pr(c*|v) is often much less than the preliminary estimate
Pr(c*|u), a great deal of network traffic and CPU cycles
are being wasted eliminating bad pages. Experience with
random walks on the Web shows that as one walks away
from a fixed page u_0 relevant to topic c_0, the relevance of
successive nodes u_1, u_2, ... to c_0 drops dramatically within
a few hops [9, 23]. This means that only a fraction of outlinks
from a page is typically worth following. The average
out-degree of the Web graph is about 7 [29]. Therefore, a
large number of page fetches may result in disappointment,
especially if we wish to push the utility of focused crawling
to topic communities which are not very densely linked.
Even w.r.t. topics that are not very narrow, the
number of distracting outlinks emerging from even fairly
relevant pages has grown substantially since the early
days of Web authoring [4].
Template-based authoring,
dynamic page generation from semi-structured databases,
ad links, navigation panels, and Web rings contribute many
irrelevant links which reduce the harvest rate of focused
crawlers. Topic-based link discrimination will also reduce
these problems.
1.1 Our contribution: Leaping with more faith
In this paper we address the following questions:
How much information about the topic of the HREF
target is available and/or latent in the HREF source page,
its tag-tree structure, and its text? Can these sources be
exploited for accelerating a focused crawler?
Our basic idea is to use two classifiers. Earlier, the regular
baseline classifier was used to assign priorities to unvisited
frontier nodes. This no longer remains its function. The role
of assigning priorities to unvisited URLs in the crawl frontier
is now assigned to a new learner called the apprentice, and
the priority of v is specific to the features associated with
the (u, v) link which leads to it.¹ The features used by the
apprentice are derived from the Document Object Model or
¹ If many u's link to a single v, it is easiest to freeze the priority of
v when the first-visited u linking to v is assessed, but combinations
of scores are also possible.
Figure 2:
The apprentice is continually presented with
training cases (u, v) with suitable features. The apprentice
is interposed where new outlinks (u, v) are registered with
the priority queue, and helps assign the unvisited node v a
better estimate of its relevance.
DOM (http://www.w3.org/DOM/) of u. Meanwhile, the role
of the baseline classifier becomes one of generating training
instances for the apprentice, as shown in Figure 2. We may
therefore regard the baseline learner as a critic or a trainer,
which provides feedback to the apprentice so that it can
improve "on the job."
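The following minimal sketch illustrates this division of labor; the frontier,
the baseline classifier (critic), the apprentice, and the feature extractor are
placeholders with assumed interfaces, and the expansion threshold value is an
assumption rather than a number from the paper:

```python
def register_outlinks(page_u, relevance_u, outlinks, apprentice, frontier,
                      features, expand_threshold=0.5):
    """Critic side: if page u is relevant enough, hand every outlink (u, v)
    to the apprentice, which assigns its priority in the crawl frontier."""
    if relevance_u < expand_threshold:
        return
    for link in outlinks:
        x = features(page_u, link)        # DOM/text features around (u, v)
        frontier.push(apprentice.predict(x), link.target, x)

def on_fetch(page_v, instance_uv, baseline, apprentice):
    """When v is eventually fetched, the critic's Pr(c*|v) becomes the label
    for the (u, v) instance whose priority the apprentice had predicted."""
    relevance_v = baseline.prob_relevant(page_v)
    apprentice.train(instance_uv, relevance_v)
    return relevance_v
```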
The critic-apprentice paradigm is related to reinforcement
learning and AI programs that learn to play games
[26, 1.2]. We argue that this division of labor is natural
and effective.
The baseline learner can be regarded as
a user specification for what kind of content is desired.
Although we limit ourselves to a generative statistical model
for this specification, this can be an arbitrary black-box
predicate.
For rich and meaningful distinction between
Web communities and topics, the baseline learner needs
to be fairly sophisticated, perhaps leveraging off human
annotations on the Web (such as topic directories).
In
contrast, the apprentice specializes in how to locate pages
to satisfy the baseline learner.
Its feature space is more
limited, so that it can train fast and adapt nimbly to
changing fortunes at following links during a crawl.
In
Mitchell's words [27], the baseline learner recognizes "global
regularity" while the apprentice helps the crawler adapt
to "local regularity."
This marked asymmetry between
the classifiers distinguishes our approach from Blum and
Mitchell's co-training technique [3], in which two learners
train each other by selecting unlabeled instances.
Using a dozen topics from a topic taxonomy derived
from the Open Directory, we compare our enhanced crawler
with the baseline crawler. The number of pages that are
thrown away (because they are irrelevant), called the loss
rate, is cut down by 30-90%. We also demonstrate that
the fine-grained tag-tree model, together with our synthesis
and encoding of features for the apprentice, are superior to
simpler alternatives.
1.2 Related work
Optimizing the priority of unvisited URLs on the crawl
frontier for specific crawling goals is not new. FishSearch
by De Bra et al. [12, 13] and SharkSearch by Hersovici
et al. [16] were some of the earliest systems for localized
searches in the Web graph for pages with specified keywords.
In another early paper, Cho et al. [10] experimented with a
variety of strategies for prioritizing how to fetch unvisited
URLs.
They used the anchor text as a bag of words to
guide link expansion to crawl for pages matching a specified
keyword query, which led to some extent of differentiation
among out-links, but no trainer-apprentice combination was
involved. No notion of supervised topics had emerged at
that point, and simple properties like the in-degree or the
presence of specified keywords in pages were used to guide
the crawler.
Topical locality on the Web has been studied for a few
years.
Davison made early measurements on a 100000-node
Web subgraph [11] collected by the DiscoWeb system.
Using the standard notion of vector space TFIDF similarity
[31], he found that the endpoints of a hyperlink are much
more similar to each other than two random pages, and that
HREFs close together on a page link to documents which are
more similar than targets which are far apart. Menczer has
made similar observations [23]. The HyperClass hypertext
classifier also uses such locality patterns for better semi-supervised
learning of topics [7], as does IBM's Automatic
Resource Compilation (ARC) and Clever topic distillation
systems [6, 8].
Two important advances have been made beyond the
baseline best-first focused crawler: the use of context graphs
by Diligenti et al. [14] and the use of reinforcement learning
by Rennie and McCallum [30].
Both techniques trained
a learner with features collected from paths leading up to
relevant nodes rather than relevant nodes alone. Such paths
may be collected by following backlinks.
Diligenti et al. used a classifier (learner) that regressed
from the text of u to the estimated link distance from u to
some relevant page w, rather than the relevance of u or an
outlink (u, v), as was the case with the baseline crawler.
This lets their system continue expanding u even if the
reward for following a link is not immediate, but several
links away.
However, they do favor links whose payoffs
are closest. Our work is specifically useful in conjunction
with the use of context graphs: when the context graph
learner predicts that a goal is several links away, it is crucial
to offer additional guidance to the crawler based on local
structure in pages, because the fan-out at that radius could
be enormous.
Rennie and McCallum [30] also collected paths leading
to relevant nodes, but they trained a slightly different
classifier, for which:
An instance was a single HREF link like (u, v).
The features were terms from the title and headers
(<h1>...</h1> etc.)
of u, together with the text
in and `near' the anchor (u, v).
Directories and
pathnames were also used.
(We do not know the
precise definition of `near', or how these features were
encoded and combined.)
The prediction was a discretized estimate of the
number of relevant nodes reachable by following (u, v),
where the reward from goals distant from v was
geometrically discounted by some factor < 1/2 per
hop.
Rennie and McCallum obtained impressive harvests of
research papers from four Computer Science department
sites, and of pages about officers and directors from 26
company Websites.
Lexical proximity and contextual features have been
used extensively in natural language processing for disambiguating
word sense [15]. Compared to plain text, DOM
trees and hyperlinks give us a richer set of potential features.
Aggarwal et al. have proposed an "intelligent crawling"
framework [1] in which only one classifier is used, but similar
to our system, that classifier trains as the crawl progresses.
They do not use our apprentice-critic approach, and do not
exploit features derived from tag-trees to guide the crawler.
The "intelligent agents" literature has brought forth
several systems for resource discovery and assistance to
browsing [19].
They range between client- and site-level
tools. Letizia [18], Powerscout, and WebWatcher [17] are
such systems.
Menczer and Belew proposed InfoSpiders
[24], a collection of autonomous goal-driven crawlers without
global control or state, in the style of genetic algorithms. A
recent extensive study [25] comparing several topic-driven
crawlers including the best-first crawler and InfoSpiders
found the best-first approach to show the highest harvest
rate (which our new system outperforms).
In all the systems mentioned above, improving the
chances of a successful "leap of faith" will clearly reduce
the overheads of fetching, filtering, and analyzing pages.
Furthermore, whereas we use an automatic first-generation
focused crawler to generate the input to train the apprentice,
one can envisage specially instrumented browsers being used
to monitor users as they seek out information.
We distinguish our work from prior art in the following
important ways:
Two classifiers:
We use two classifiers. The first one is
used to obtain `enriched' training data for the second one.
(A breadth-first or random crawl would have a negligible
fraction of positive instances.) The apprentice is a simplified
reinforcement learner. It improves the harvest rate, thereby
`enriching' the data collected and labeled by the first learner
in turn.
No manual path collection:
Our two-classifier framework
essentially eliminates the manual effort needed to
create reinforcement paths or context graphs. The input
needed to start off a focused crawl is just a pre-trained topic
taxonomy (easily available from the Web) and a few focus
topics.
Online training:
Our apprentice trains continually, acquiring
ever-larger vocabularies and improving its accuracy
as the crawl progresses. This property holds also for the
"intelligent crawler" proposed by Aggarwal et al., but they
have a single learner, whose drift is controlled by precise
relevance predicates provided by the user.
No manual feature tuning:
Rather than tune ad-hoc
notions of proximity between text and hyperlinks, we encode
the features of link (u, v) using the DOM-tree of u, and
automatically learn a robust definition of `nearness' of a
textual feature to (u, v).
In contrast, Aggarwal et al. use many tuned constants
combining the strength of text- and link-based predictors,
and Rennie et al. use domain
knowledge to select the paths to goal nodes and the word
bags that are submitted to their learner.
2 Methodology and algorithms
We first review the baseline focused crawler and then
describe how the enhanced crawler is set up using the
apprentice-critic mechanism.
2.1 The baseline focused crawler
The baseline focused crawler has been described in detail elsewhere [9, 14], and has been sketched in Figure 1. Here we review its design and operation briefly.
There are two inputs to the baseline crawler: (1) a topic taxonomy or hierarchy with example URLs for each topic, and (2) one or a few topics in the taxonomy marked as the topic(s) of focus.
Although we will generally use the terms `taxonomy' and
`hierarchy', a topic tree is not essential; all we really need is
a two-way classifier where the classes have the connotations
of being `relevant' or `irrelevant' to the topic(s) of focus.
A topic hierarchy is proposed purely to reduce the tedium
of defining new focused crawls. With a two-class classifier,
the crawl administrator has to seed positive and negative
examples for each crawl. Using a taxonomy, she composes
the `irrelevant' class as the union of all classes that are not
relevant. Thanks to extensive hierarchies like Dmoz in the
public domain, it should be quite easy to seed topic-based
crawls in this way.
The baseline crawler maintains a priority queue on the estimated relevance of nodes v which have not been visited, and keeps removing the highest-priority node and visiting it, expanding its outlinks and inserting them into the priority queue with the relevance score of v in turn. Despite its extreme simplicity, the best-first crawler has been found to have very high harvest rates in extensive evaluations [25].
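This frontier logic can be summarized in a few lines. The sketch below is ours, not the authors' C implementation; it assumes hypothetical helpers fetch_page, classify (returning Pr(c|page) from the baseline classifier), and extract_outlinks, and it ignores politeness rules, DNS overlap, and disk I/O.

```cpp
#include <cstddef>
#include <queue>
#include <set>
#include <string>
#include <utility>
#include <vector>

// Hypothetical helpers; the real crawler is a C program that overlaps DNS,
// HTTP and disk access, which is omitted here.
std::string fetch_page(const std::string& url);
double classify(const std::string& html);                    // Pr(c|page) from the baseline classifier
std::vector<std::string> extract_outlinks(const std::string& html);

void best_first_crawl(const std::vector<std::string>& seeds, std::size_t budget) {
    // Frontier: a priority queue keyed on the estimated relevance of unvisited URLs.
    std::priority_queue<std::pair<double, std::string>> frontier;
    std::set<std::string> seen(seeds.begin(), seeds.end());
    for (const std::string& s : seeds) frontier.push({1.0, s});

    while (!frontier.empty() && budget-- > 0) {
        std::string url = frontier.top().second;
        frontier.pop();
        std::string html = fetch_page(url);
        double relevance = classify(html);                    // Pr(c|v) of the fetched page v
        for (const std::string& out : extract_outlinks(html))
            if (seen.insert(out).second)
                frontier.push({relevance, out});              // outlinks inherit v's relevance score
    }
}
```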
Why do we need negative examples and negative classes
at all? Instead of using class probabilities, we could maintain
a priority queue on, say, the TFIDF cosine similarity
between u and the centroid of the seed pages (acting as an
estimate for the corresponding similarity between v and the
centroid, until v has been fetched). Experience has shown
[32] that characterizing a negative class is quite important to
prevent the centroid of the crawled documents from drifting
away indefinitely from the desired topic profile.
In this paper, the baseline crawler also has the implicit
job of gathering instances of successful and unsuccessful
"leaps of faith" to submit to the apprentice, discussed next.
2.2 The basic structure of the apprentice learner
In estimating the worth of traversing the HREF (u, v), we
will limit our attention to u alone. The page u is modeled
as a tag tree (also called the Document Object Model or
DOM). In principle, any feature from u, even font color and
site membership may be perfect predictors of the relevance
of v. The total number of potentially predictive features will
be quite staggering, so we need to simplify the feature space
and massage it into a form suited to conventional learning
algorithms. Also note that we specifically study properties
of u and not larger contexts such as paths leading to u,
meaning that our method may become even more robust and
useful in conjunction with context graphs or reinforcement
along paths.
Initially, the apprentice has no training data, and passes judgment on (u, v) links according to some fixed prior obtained from a baseline crawl run ahead of time (e.g., see the statistics in §3.3). Ideally, we would like to train the apprentice continuously, but to reduce overheads, we declare a batch size between a few hundred and a few thousand pages. After every batch of pages is collected, we check if any page u fetched before the current batch links to some page v in the batch. If such a (u, v) is found, we extract suitable features for (u, v) as described later in this section, and add ⟨(u, v), Pr(c|v)⟩ as another instance of the training data for the apprentice. Many apprentices, certainly the simple naive Bayes and linear perceptrons that we have studied, need not start learning from scratch; they can accept the additional training data with a small additional computational cost.
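A minimal sketch of this per-batch bookkeeping follows; the helper names (links_from_older_pages_into, dom_features, relevance, add_training_instance) are ours and stand in for the feature extraction described in the rest of this section.

```cpp
#include <string>
#include <utility>
#include <vector>

struct Link { std::string u, v; };

// Hypothetical interfaces; the names are ours, not the paper's.
std::vector<Link> links_from_older_pages_into(const std::vector<std::string>& batch);
std::vector<std::pair<std::string, int>> dom_features(const Link& link);   // <t, d> features of (u, v)
double relevance(const std::string& v);                                    // Pr(c|v) from the baseline classifier
void add_training_instance(const std::vector<std::pair<std::string, int>>& features, double target);

// Called once after every batch of a few hundred to a few thousand pages.
void update_apprentice(const std::vector<std::string>& batch) {
    for (const Link& link : links_from_older_pages_into(batch)) {
        // Each (u, v) found becomes one training instance <features(u, v), Pr(c|v)>.
        add_training_instance(dom_features(link), relevance(link.v));
    }
    // A naive Bayes (or linear) apprentice can fold these instances in incrementally.
}
```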
2.2.1 Preprocessing the DOM tree
First, we parse u and form the DOM tree for u. Sadly, much of the HTML available on the Web violates any HTML standards that permit context-free parsing, but a variety of repair heuristics (see, e.g., HTML Tidy, available at http://www.w3.org/People/Raggett/tidy/) let us generate reasonable DOM trees from bad HTML.
Figure 3: Numbering of DOM leaves used to derive offset attributes for textual tokens. `@' means "is at offset".
Second, we number all leaf nodes consecutively from left to right. For uniformity, we assign numbers even to those DOM leaves which have no text associated with them. The specific <a href...> which links to v is actually an internal node a_v, which is the root of the subtree containing the anchor text of the link (u, v). There may be other element tags such as <em> or <b> in the subtree rooted at a_v. Let the leaf or leaves in this subtree be numbered ℓ(a_v) through r(a_v) ≥ ℓ(a_v). We regard the textual tokens available from any of these leaves as being at DOM offset zero w.r.t. the (u, v) link. Text tokens from a leaf numbered ℓ, to the left of ℓ(a_v), are at negative DOM offset ℓ − ℓ(a_v). Likewise, text from a leaf numbered ℓ to the right of r(a_v) is at positive DOM offset ℓ − r(a_v). See Figure 3 for an example.
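The offset computation itself is tiny. The following sketch is ours and assumes the leaves have already been numbered left to right and that the anchor subtree a_v spans leaf indices l_av through r_av.

```cpp
#include <string>
#include <vector>

// One DOM leaf: its left-to-right index and the text tokens it carries (possibly none).
struct Leaf { int index; std::vector<std::string> tokens; };

// DOM offset of a leaf relative to the anchor subtree a_v, which spans the
// leaf indices l_av..r_av.  Leaves inside the subtree are at offset 0, leaves
// to the left get negative offsets, leaves to the right get positive offsets.
int dom_offset(int leaf_index, int l_av, int r_av) {
    if (leaf_index < l_av) return leaf_index - l_av;   // negative offset
    if (leaf_index > r_av) return leaf_index - r_av;   // positive offset
    return 0;                                          // inside the anchor subtree
}
```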
2.2.2 Features derived from the DOM and text tokens
Many related projects mentioned in §1.2 use a linear notion of proximity between a HREF and textual tokens. In the ARC system, there is a crude cut-off distance measured in bytes to the left and right of the anchor. In the Clever system, distance is measured in tokens, and the importance attached to a token decays with the distance. In reinforcement learning and intelligent predicate-based crawling, the exact specification of neighborhood text is not known to us. In all cases, some ad-hoc tuning appears to be involved.
We claim (and show in §3.4) that the relation between the relevance of the target v of a HREF (u, v) and the proximity of terms to (u, v) can be learnt automatically. The results are better than ad-hoc tuning of cut-off distances, provided the DOM offset information is encoded as features suitable for the apprentice.
One obvious idea is to extend the Clever model: a page is a linear sequence of tokens. If a token t is at distance x from the HREF (u, v) in question, we encode it as a feature ⟨t, x⟩. Such features will not be useful because there are too many possible values of x, making the ⟨t, x⟩ space too sparse to learn well. (How many HREFs will be exactly five tokens away from the term `basketball'?)
Clearly, we need to bucket x into a small number of ranges. Rather than tune arbitrary bucket boundaries by hand, we argue that DOM offsets are a natural bucketing scheme provided by the page author. Using the node numbering scheme described above, each token t on page u can be annotated w.r.t. the link (u, v) (for simplicity assume there is only one such link) as ⟨t, d⟩, where d is the DOM offset calculated above. This is the main set of features used by the apprentice. We shall see that the apprentice can learn to limit |d| to less than d_max = 5 in most cases, which reduces its vocabulary and saves time.
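Continuing the sketch above, the ⟨t, d⟩ features of one link could be emitted as follows; the string encoding (token "@" offset) and the helper names are our own illustration, not the paper's implementation.

```cpp
#include <string>
#include <vector>

struct Leaf { int index; std::vector<std::string> tokens; };
int dom_offset(int leaf_index, int l_av, int r_av);   // as sketched in 2.2.1

// Emit the <t, d> features of one link, keeping only offsets |d| <= d_max.
std::vector<std::string> td_features(const std::vector<Leaf>& leaves,
                                     int l_av, int r_av, int d_max = 5) {
    std::vector<std::string> features;
    for (const Leaf& leaf : leaves) {
        int d = dom_offset(leaf.index, l_av, r_av);
        if (d < -d_max || d > d_max) continue;                  // DOM window cut-off
        for (const std::string& t : leaf.tokens)
            features.push_back(t + "@" + std::to_string(d));    // e.g. "basketball@-2"
    }
    return features;
}
```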
A variety of other feature encodings suggest themselves.
We are experimenting with some in ongoing work (§4), but decided against some others.
but decided against some others. For example, we do not
expect gains from encoding specific HTML tag names owing
to the diversity of authoring styles.
Authors use <div>,
<span>, <layer> and nested tables for layout control in
non-standard ways; these are best deflated to a nameless
DOM node representation.
Similar comments apply to
HREF collections embedded in <ul>, <ol>, <td> and
<dd>.
Font and lower/upper case information is useful
for search engines, but would make features even sparser
for the apprentice.
Our representation also flattens two-dimensional
tables to their "row-major" representation.
The features we ignore are definitely crucial for other
applications, such as information extraction. We did not
see any cases where this sloppiness led to a large loss rate.
We would be surprised to see tables where relevant links
occurred in the third column and irrelevant links in the fifth,
or pages where they are rendered systematically in different
fonts and colors, but are not otherwise demarcated by the
DOM structure.
2.2.3 Non-textual features
Limiting d may lead us to miss features of u that may be useful at the whole-page level. One approach would be to use "d = ∞" for all d larger in magnitude than some threshold. But this would make our apprentice as bulky and slow to train as the baseline learner.
Instead, we use the baseline learner to abstract u for
the apprentice. Specifically, we use a naive Bayes baseline
learner to classify u, and use the vector of class probabilities
returned as features for the apprentice. These features can
help the apprentice discover patterns such as
"Pages about /Recreation/Boating/Sailing often
link to pages about /Sports/Canoe_and_Kayaking."
This also compensates for the baseline classifier confusing classes with related vocabulary, achieving an effect similar to context graphs.
Another kind of feature can be derived from co-citation. If v_1 has been fetched and found to be relevant and HREFs (u, v_1) and (u, v_2) are close to each other, v_2 is likely to be relevant. Just like textual tokens were encoded as ⟨t, d⟩ pairs, we can represent co-citation features as ⟨r, d⟩, where r is a suitable representation of relevance.
Many other features can be derived from the DOM tree and added to our feature pool. We discuss some options in §4. In our experience so far, we have found the ⟨t, d⟩ features to be most useful. For simplicity, we will limit our subsequent discussion to ⟨t, d⟩ features only.
2.3 Choices of learning algorithms for the apprentice
Our feature set is thus an interesting mix of categorical, ordered and continuous features:
- Term tokens ⟨t, d⟩ have a categorical component t and a discrete ordered component d (which we may like to smooth somewhat). Term counts are discrete but can be normalized to constant document length, resulting in continuous attribute values.
- Class names are discrete and may be regarded as synthetic terms. The probabilities are continuous.
The output we desire is an estimate of Pr(c|v), given all the observations about u and the neighborhood of (u, v) that
we have discussed. Neural networks are a natural choice
to accommodate these requirements. We first experimented
with a simple linear perceptron, training it with the delta
rule (gradient descent) [26]. Even for a linear perceptron,
convergence was surprisingly slow, and after convergence,
the error rate was rather high.
It is likely that local
optima were responsible, because stability was generally
poor, and got worse if we tried to add hidden layers or
sigmoids.
In any case, convergence was too slow for use
as an online learner. All this was unfortunate, because the
direct regression output from a neural network would be
convenient, and we were hoping to implement a Kohonen
layer for smoothing d.
In contrast, a naive Bayes (NB) classifier worked very well. A NB learner is given a set of training documents, each labeled with one of a finite set of classes/topics. A document or Web page u is modeled as a multiset or bag of words, {⟨τ, n(u, τ)⟩}, where τ is a feature which occurs n(u, τ) times in u. In ordinary text classification (such as our baseline learner) the features are usually single words. For our apprentice learner, a feature is a ⟨t, d⟩ pair.
NB classifiers can predict from a discrete set of classes, but our prediction is a continuous (probability) score. To bridge this gap, we used a simple two-bucket (low/high relevance) special case of Torgo and Gama's technique of using classifiers for discrete labels for continuous regression [33], using "equally probable intervals" as far as possible.
Torgo and Gama recommend using a measure of centrality, such as the median, of each interval as the predicted value of that class. Rennie and McCallum [30] corroborate that 2–3 bins are adequate. As will be clear from our experiments, the medians of our `low' and `high' classes are very close to zero and one respectively (see Figure 5). Therefore, we simply take the probability of the `high' class as the prediction from our naive Bayes apprentice.
The prior probability of class c, denoted Pr(c), is the fraction of training documents labeled with class c. The NB model is parameterized by a set of numbers θ_{c,τ}, which is roughly the rate of occurrence of feature τ in class c; more exactly,

$\theta_{c,\tau} = \frac{1 + \sum_{u \in V_c} n(u,\tau)}{|T| + \sum_{u \in V_c,\,\tau} n(u,\tau)}$,   (1)

where V_c is the set of Web pages labeled with c and T is the entire vocabulary. The NB learner assumes independence between features, and estimates

$\Pr(c\,|\,u) \;\propto\; \Pr(c)\,\Pr(u\,|\,c) \;\approx\; \Pr(c) \prod_{\tau \in u} \theta_{c,\tau}^{\,n(u,\tau)}$.   (2)

Nigam et al. provide further details [22].
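For concreteness, a two-class naive Bayes apprentice over ⟨t, d⟩ features can be sketched as below. The class layout and field names are ours; the equation numbers in the comments refer to (1) and (2) above.

```cpp
#include <algorithm>
#include <cmath>
#include <cstddef>
#include <map>
#include <string>
#include <vector>

// A two-class (c = 0: low, c = 1: high relevance) naive Bayes apprentice over <t, d> features.
struct ApprenticeNB {
    std::map<std::string, double> count[2];   // per-class feature counts n(c, tau)
    double total[2] = {0, 0};                 // per-class sum of all feature counts
    double docs[2]  = {0, 0};                 // per-class number of training instances
    std::size_t vocab = 1;                    // |T|, maintained as features are first seen

    // Laplace-smoothed estimate of theta_{c,tau}, cf. equation (1).
    double theta(int c, const std::string& tau) const {
        auto it = count[c].find(tau);
        double n = (it == count[c].end()) ? 0.0 : it->second;
        return (1.0 + n) / (static_cast<double>(vocab) + total[c]);
    }

    // Pr(high | feature bag), cf. equation (2), evaluated in log space for stability.
    double prob_high(const std::vector<std::string>& features) const {
        double logp[2];
        for (int c = 0; c < 2; ++c) {
            logp[c] = std::log(docs[c] / (docs[0] + docs[1]));   // prior Pr(c)
            for (const std::string& f : features) logp[c] += std::log(theta(c, f));
        }
        double m = std::max(logp[0], logp[1]);
        return std::exp(logp[1] - m) / (std::exp(logp[0] - m) + std::exp(logp[1] - m));
    }
};
```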
3 Experimental study
Our experiments were guided by the following requirements.
We wanted to cover a broad variety of topics, some `easy' and
some `difficult', in terms of the harvest rate of the baseline
crawler. Here is a quick preview of our results.
- The apprentice classifier achieves high accuracy in predicting the relevance of unseen pages given ⟨t, d⟩ features. It can determine the best value of d_max to use, typically 4–6.
- Encoding DOM offsets in features improves the accuracy of the apprentice substantially, compared to a bag of ordinary words collected from within the same DOM offset window.
- Compared to a baseline crawler, a crawler that is guided by an apprentice (trained offline) has a 30% to 90% lower loss rate. It finds crawl paths never expanded by the baseline crawler.
- Even if the apprentice-guided crawler is forced to stay within the (inferior) Web graph collected by the baseline crawler, it collects the best pages early on.
- The apprentice is easy to train online. As soon as it starts guiding the crawl, loss rates fall dramatically.
- Compared to ⟨t, d⟩ features, topic- or cocitation-based features have negligible effect on the apprentice.
To run so many experiments, we needed three highly optimized and robust modules: a crawler, an HTML-to-DOM converter, and a classifier.
We started with the w3c-libwww crawling library from http://www.w3c.org/Library/, but replaced it with our
own crawler because we could effectively overlap DNS
lookup, HTTP access, and disk access using a select over
all socket/file descriptors, and prevent memory leaks visible
in w3c-libwww. With three caching DNS servers, we could
achieve over 90% utilization of a 2Mbps dedicated ISP
connection.
We used the HTML parser libxml2 library to extract
the DOM from HTML, but this library has memory leaks,
and does not always handle poorly written HTML well. We
had some stability problems with HTML Tidy (http://www.w3.org/People/Raggett/tidy/), the well-known HTML cleaner which is very robust to bad HTML. At present we
are using libxml2 and are rolling our own HTML parser and
cleaner for future work.
We intend to make our crawler and HTML parser code
available in the public domain for research use.
For both the baseline and apprentice classifier we used
the public domain BOW toolkit and the Rainbow naive
Bayes classifier created by McCallum and others [20]. Bow
and Rainbow are very fast C implementations which let us
classify pages in real time as they were being crawled.
3.1 Design of the topic taxonomy
We downloaded from the Open Directory (http://dmoz.org/) an RDF file with over 271954 topics arranged in a tree hierarchy with depth at least 6, containing a total of about 1697266 sample URLs. The distribution of samples
over topics was quite non-uniform. Interpreting the tree as
an is-a hierarchy meant that internal nodes inherited all
examples from descendants, but they also had their own
examples. Since the set of topics was very large and many
topics had scarce training data, we pruned the Dmoz tree
to a manageable frontier by following these steps:
1. Initially we placed example URLs in both internal and leaf nodes, as given by Dmoz.
2. We fixed a minimum per-class training set size of k = 300 documents.
3. We iteratively performed the following step as long as possible: we found a leaf node with less than k example URLs, moved all its examples to its parent, and deleted the leaf (a sketch of this pruning loop follows the list).
4. To each internal node c, we attached a leaf subdirectory called Other. Examples associated directly with c were moved to this Other subdirectory.
5. Some topics were populated out of proportion, either at the beginning or through the above process. We made the class priors more balanced by sampling down the large classes so that each class had at most 300 examples.
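A sketch of the pruning loop of step 3 is given below, under the assumption that the Dmoz tree is held in memory in a hypothetical Topic structure of our own; steps 4 and 5 are only indicated in a comment.

```cpp
#include <algorithm>
#include <cstddef>
#include <string>
#include <vector>

// Hypothetical in-memory view of the Dmoz tree; the field names are ours.
struct Topic {
    std::string name;
    Topic* parent = nullptr;
    std::vector<Topic*> children;
    std::vector<std::string> examples;
};

// Step 3: repeatedly fold leaves with fewer than k example URLs into their parents.
void prune_to_frontier(Topic* root, std::size_t k = 300) {
    bool changed = true;
    while (changed) {
        changed = false;
        std::vector<Topic*> stack{root};
        while (!stack.empty()) {
            Topic* t = stack.back();
            stack.pop_back();
            if (t->children.empty() && t->parent != nullptr && t->examples.size() < k) {
                std::vector<std::string>& ex = t->parent->examples;
                ex.insert(ex.end(), t->examples.begin(), t->examples.end());
                std::vector<Topic*>& sib = t->parent->children;
                sib.erase(std::find(sib.begin(), sib.end(), t));
                changed = true;
                break;   // the tree changed; restart the scan
            }
            for (Topic* c : t->children) stack.push_back(c);
        }
    }
    // Steps 4 and 5 (attaching an "Other" leaf to each internal node and
    // subsampling classes down to at most k examples) are not shown.
}
```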
The resulting taxonomy had 482 leaf nodes and a total of 144859 sample URLs. Out of these we could successfully fetch about 120000 URLs. At this point we discarded the tree structure and considered only the leaf topics. Training time for the baseline classifier was about two hours on a 729 MHz Pentium III with 256 kB cache and 512 MB
had to be processed through Rainbow. The complete listing
of topics can be obtained from the authors.
3.2 Choice of topics
Depending on the focus topic and prioritization strategy, focused crawlers may achieve diverse harvest rates. Our early prototype [9] yielded harvest rates typically between 0.25 and 0.6. Rennie and McCallum [30] reported recall and not harvest rates. Diligenti et al. [14] focused on very specific topics where the harvest rate was very low, 4–6%. Obviously, the maximum gains shown by a new idea in focused crawling can be sensitive to the baseline harvest rate.
To avoid showing our new system in an unduly positive or negative light, we picked a set of topics which were fairly diverse, and appeared to be neither too broad to be useful (e.g., /Arts, /Science) nor too narrow for the baseline crawler to be a reasonable adversary. We list our topics in Figure 4. We chose the topics without prior estimates of
how well our new system would work, and froze the list
of topics.
All topics that we experimented with showed
visible improvements, and none of them showed deteriorated
performance.
3.3 Baseline crawl results
We will skip the results of breadth-first or random crawling
in our commentary, because it is known from earlier work
on focused crawling that our baseline crawls are already
far better than breadth-first or random crawls. Figure 5
shows, for most of the topics listed above, the distribution
of page relevance after running the baseline crawler to
collect roughly 15000 to 25000 pages per topic.
The
baseline crawler used a standard naive Bayes classifier on
the ordinary term space of whole pages. We see that the
relevance distribution is bimodal, with most pages being
very relevant or not at all. This is partly, but only partly, a
result of using a multinomial naive Bayes model. The naive
Bayes classifier assumes term independence and multiplies
together many (small) term probabilities, with the result
that the winning class usually beats all others by a large
margin in probability. But it is also true that many outlinks
lead to pages with completely irrelevant topics. Figure 5
gives a clear indication of how much improvement we can
expect for each topic from our new algorithm.
3.4 DOM window size and feature selection
A key concern for us was how to limit the maximum window
width so that the total number of synthesized ⟨t, d⟩ features
remains much smaller than the training data for the baseline
classifier, enabling the apprentice to be trained or upgraded
in a very short time. At the same time, we did not want
to lose out on medium- to long-range dependencies between
significant tokens on a page and the topic of HREF targets
in the vicinity. We eventually settled for a maximum DOM
window size of 5. We made this choice through the following
experiments.
The easiest initial approach was an end-to-end cross-validation of the apprentice for various topics while increasing d_max. We observed an initial increase in the validation accuracy when the DOM window size was increased beyond 0. However, the early increase leveled off or even reversed after the DOM window size was increased beyond 5. The graphs in Figure 6 display these results. We see that in the Chess category, though the validation accuracy increases monotonically, the gains are less pronounced after d_max exceeds 5. For the AI category, accuracy fell beyond d_max = 4.
Topic                                          #Good   #Bad
/Arts/Music/Styles/Classical/Composers         24000   13000
/Arts/Performing_Arts/Dance/Folk_Dancing        7410    8300
/Business/Industries.../Livestock/Horses...    17000    7600
/Computers/Artificial_Intelligence              7701   14309
/Computers/Software/Operating_Systems/Linux    17500    9300
/Games/Board_Games/C/Chess                     17000    4600
/Health/Conditions_and_Diseases/Cancer         14700    5300
/Home/Recipes/Soups_and_Stews                  20000    3600
/Recreation/Outdoors/Fishing/Fly_Fishing       12000   13300
/Recreation/Outdoors/Speleology                 6717   14890
/Science/Astronomy                             14961    5332
/Science/Earth_Sciences/Meteorology            19205    8705
/Sports/Basketball                             26700    2588
/Sports/Canoe_and_Kayaking                     12000   12700
/Sports/Hockey/Ice_Hockey                      17500   17900
Figure 4: We chose a variety of topics which were neither too broad nor too narrow, so that the baseline crawler was a reasonable adversary. #Good (#Bad) show the approximate number of pages collected by the baseline crawler which have relevance above (below) 0.5, which indicates the relative difficulty of the crawling task.
Figure 5: All of the baseline classifiers have harvest rates between 0.25 and 0.6, and all show a strongly bimodal relevance score distribution: most of the pages fetched are very relevant or not at all. (Axes: relevance probability, binned from 0 to 1, vs. expected #pages on a log scale; one curve per topic.)
It is important to notice that the improvement in accuracy is almost entirely because, with an increasing number of available features, the apprentice can reject negative (low relevance) instances more accurately, although the accuracy for positive instances decreases slightly. Rejecting unpromising outlinks is critical to the success of the enhanced crawler. Therefore we would rather lose a little accuracy on positive instances than do poorly on the negative instances. We therefore chose d_max to be either 4 or 5 for all the experiments.
We verified that adding offset information to text tokens was better than simply using plain text near the link [8]. One sample result is shown in Figure 7. The apprentice accuracy decreases with d_max if only text is used, whereas it increases if offset information is provided. This highlights the importance of designing proper features.
Figure 6: There is visible improvement in the accuracy of the apprentice if d_max is made larger, up to about 5–7 depending on topic. The effect is more pronounced on the ability to correctly reject negative (low relevance) outlink instances. `Average' is the microaverage over all test instances for the apprentice, not the arithmetic mean of `Positive' and `Negative'. (Panels: Chess and AI; axes: d_max vs. % accuracy; curves: Negative, Positive, Average.)
Figure 7: Encoding DOM offset information with textual features boosts the accuracy of the apprentice substantially. (Panel: AI; axes: d_max vs. % accuracy; curves: Text vs. Offset.)
To corroborate the useful ranges of d_max above, we compared the value of average mutual information gain for terms found at various distances from the target HREF. The experiments revealed that the information gain of terms found further away from the target HREF was generally lower than that of terms found closer, but this reduction was not monotonic. For instance, the average information gain at d = −2 was higher than that at d = −1; see Figure 8.
Figure 8: Information gain variation plotted against distance d from the target HREF for various DOM window sizes (d_max = 3, 4, 5, 8). We observe that the information gain is insensitive to d_max. (Panels: Chess and AI; axes: d vs. info gain.)
For each DOM window size, we observe that the information
gain varies in a sawtooth fashion; this intriguing observation
is explained shortly. The average information gain settled to an almost constant value after a distance of 5 from the target URL. We were initially concerned that, to keep the computation cost manageable, we would need some cap on d_max even while measuring information gain, but luckily, the variation of information gain is insensitive to d_max, as Figure 8 shows. These observations made our final choice of d_max easy.
In a bid to explain the occurrence of the unexpected saw-tooth form in Figure 8 we measured the rate θ_{t,d} at which term t occurred at offset d, relative to the total count of all terms occurring at offset d. (These are roughly the multinomial naive Bayes term probability parameters.) For fixed values of d, we calculated the sum of θ values of terms found at those offsets from the target HREF. Figure 9(a) shows the plot of these sums against the distance d for various categories. The θ values showed a general decrease as the distance from the target HREF increased, but this decrease, like that of information gain, was not monotonic. The θ values of the terms at odd-numbered distances from the target HREF were found to be lower than those of the terms present at the even positions. For instance, the sum of θ values of terms occurring at distance −2 was higher than that of terms at position −1. This observation was explained by examining the HTML tags that are present at various distances from the target HREF. We observed that tags located at odd d are mostly non-text tags, thanks to authoring idioms such as <li><a...><li><a...> and <a...><br><a...><br>, etc. A plot of the frequency of HTML tags against the distance from the HREF at which they were found is shown in Figure 9(b). (The <a...> tag obviously has the highest frequency and has been removed for clarity.)
Figure 9: Variation of (a) relative term frequencies θ_{t,d} and (b) frequencies of HTML tags (font, td, img, b, br, p, tr, li, comment, div, table, center, i, span, hr), plotted against d.
These were important DOM idioms, spanning many
diverse Web sites and authoring styles, that we did not
anticipate ahead of time.
Learning to recognize these
idioms was valuable for boosting the harvest of the enhanced
crawler. Yet, it would be unreasonable for the user-supplied
baseline black-box predicate or learner to capture crawling
strategies at such a low level.
This is the ideal job of
the apprentice. The apprentice took only 3–10 minutes to train on its (u, v) instances from scratch, despite a simple implementation that wrote a small file to disk for each instance of the apprentice. Contrast this with the several hours taken by the baseline learner to learn general term distributions for topics.
3.5 Crawling with the apprentice trained off-line
In this section we subject the apprentice to a "field test" as part of the crawler, as shown in Figure 2. To do this we follow these steps:
1. Fix a topic and start the baseline crawler from all example URLs available for the given topic.
2. Run the baseline crawler until roughly 20000–25000 pages have been fetched.
3. For all pairs (u, v) such that both u and v have been fetched by the baseline crawler, prepare an instance from (u, v) and add it to the training set of the apprentice.
4. Train the apprentice. Set a suitable value for d_max.
5. Start the enhanced crawler from the same set of pages that the baseline crawler had started from.
6. Run the enhanced crawler to fetch about the same number of pages as the baseline crawler.
7. Compare the loss rates of the two crawlers.
Figure 10: Guidance from the apprentice significantly reduces the loss rate of the focused crawler. (Panels: Folk Dancing and Ice Hockey; axes: #pages fetched vs. expected #pages lost; curves: Baseline and Apprentice.)
Unlike with the reinforcement learner studied by Rennie and McCallum, we have no predetermined universe of URLs which constitute the relevant set; our crawler must go forth into the open Web and collect relevant pages from an unspecified number of sites. Therefore, measuring recall w.r.t. the baseline is not very meaningful (although we do report such numbers, for completeness, in §3.6). Instead, we measure the loss (the number of pages fetched which had to be thrown away owing to poor relevance) at various epochs in the crawl, where time is measured as the number of pages fetched (to elide fluctuating network delay and bandwidth). At epoch n, if the pages fetched are v_1, ..., v_n, then the total expected loss is $\frac{1}{n}\sum_i \bigl(1 - \Pr(c\,|\,v_i)\bigr)$.
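The bookkeeping for this metric is trivial; the sketch below simply follows the formula above, and the comment notes a plausible reading of what the figures plot.

```cpp
#include <vector>

// Expected loss after n fetches, following the definition in the text: every
// fetched page v_i contributes (1 - Pr(c|v_i)).  Dropping the 1/n factor gives
// a cumulative "expected #pages lost", which appears to be what the figures plot.
double expected_loss(const std::vector<double>& relevance /* Pr(c|v_i), one entry per fetched page */) {
    double loss = 0.0;
    for (double p : relevance) loss += 1.0 - p;
    return loss / static_cast<double>(relevance.size());
}
```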
Figure 10 shows the loss plotted against the number of pages crawled for two topics: Folk dancing and Ice hockey. The behavior for Folk dancing is typical; Ice hockey is one of the best examples. In both cases, the loss goes up substantially faster with each crawled page for the baseline crawler than for the enhanced crawler. The reduction of loss for these topics is 40% and 90% respectively; typically, this number is between 30% and 60%. In other words, for most topics, the apprentice reduces the number of useless pages fetched by one-third to two-thirds.
In a sense, comparing loss rates is the most meaningful
evaluation in our setting, because the network cost of
fetching relevant pages has to be paid anyway, and can be
regarded as a fixed cost. Diligenti et al. show significant improvements in harvest rate, but for their topics, the loss rates for both the baseline crawler and the context-focused crawler were much higher than ours.
3.6 URL overlap and recall
The reader may feel that the apprentice crawler has an
unfair advantage because it is first trained on DOM-derived
features from the same set of pages that it has to crawl
again. We claim that the set of pages visited by the baseline
crawler and the (off-line trained) enhanced crawler have
small overlap, and the superior results for the crawler guided
by the apprentice are in large part because of generalizable
learning. This can be seen from the examples in Figure 11.
             Baseline   Apprentice   Intersect
Basketball    27220       26280        2431
FolkDance     14011        8168        2199
IceHockey     34121       22496        1657
FlyFishing    19252       14319        6834
As fractions (Baseline / Apprentice / Intersect): Basketball 49% / 47% / 4%; FolkDance 57% / 34% / 9%; IceHockey 58% / 39% / 3%; FlyFishing 48% / 35% / 17%.
Figure 11: The apprentice-guided crawler follows paths which are quite different from the baseline crawler because of its superior priority estimation technique. As a result there is little overlap between the URLs harvested by these two crawlers.
Given that the overlap between the baseline and the
enhanced crawlers is small, which is `better' ? As per the
verdict of the baseline classifier, clearly the enhanced crawler
is better. Even so, we report the loss rate of a different
version of the enhanced crawler which is restricted to visiting
only those pages which were visited by the baseline learner.
We call this crawler the recall crawler. This means that in
the end, both crawlers have collected exactly the same set
of pages, and therefore have the same total loss. The test
then is how long can the enhanced learner prevent the loss
from approaching the baseline loss. These experiments are a
rough analog of the `recall' experiments done by Rennie and
McCallum. We note that for these recall experiments, the
apprentice does get the benefit of not having to generalize,
so the gap between baseline loss and recall loss could be
optimistic. Figure 12 compares the expected total loss of
the baseline crawler, the recall crawler, and the apprentice-guided
crawler (which is free to wander outside the baseline
collection) plotted against the number of pages fetched, for a
few topics. As expected, the recall crawler has loss generally
Figure 12: Recall for a crawler using the apprentice but limited to the set of pages crawled earlier by the baseline crawler. (Panels: Ice Hockey and Kayaking; axes: #pages fetched vs. expected #pages lost; curves: Baseline, Recall, Apprentice.)
somewhere between the loss of the baseline and the enhanced
crawler.
3.7 Effect of training the apprentice online
Next we observe the effect of a mid-flight correction when
the apprentice is trained some way into a baseline and
switched into the circuit. The precise steps were:
1. Run the baseline crawler for the first n page fetches, then stop it.
2. Prepare instances and train the apprentice.
3. Re-evaluate the priorities of all unvisited pages v in the frontier table using the apprentice (see the sketch after this list).
4. Switch in the apprentice and resume an enhanced crawl.
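A sketch of the re-prioritization in step 3, assuming a hypothetical in-memory frontier table and an apprentice_score function that maps the stored ⟨t, d⟩ features of a frontier URL's known inlink to Pr(high); the names are ours.

```cpp
#include <string>
#include <vector>

// Hypothetical frontier record; the real system keeps this in a "frontier table".
struct FrontierEntry {
    std::string url;
    std::vector<std::string> link_features;   // <t, d> features of the best known inlink
    double priority;
};

double apprentice_score(const std::vector<std::string>& features);   // Pr(high | features)

// Step 3 above: after training the apprentice mid-crawl, re-score every
// unvisited URL before resuming the (now enhanced) crawl.
void reprioritize(std::vector<FrontierEntry>& frontier) {
    for (FrontierEntry& e : frontier)
        e.priority = apprentice_score(e.link_features);
}
```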
We report our experience with "Folk Dancing." The baseline
crawl was stopped after 5200 pages were fetched.
Re-evaluating the priority of frontier nodes led to radical changes in their individual ranks as well as the priority distributions. As shown in Figure 13(a), the baseline learner is overly optimistic about the yield it expects from the frontier, whereas the apprentice already abandons a large fraction of frontier outlinks, and is less optimistic about
the others, which appears more accurate from the Bayesian perspective.
Figure 13: The effect of online training of the apprentice. (a) The apprentice makes sweeping changes in the estimated promise of unvisited nodes in the crawl frontier (histogram of estimated outlink relevance, Baseline vs. Apprentice). (b) Resuming the crawl under the guidance of the apprentice immediately shows significant reduction in the loss accumulation rate (expected loss vs. #pages crawled; phases: collect instances for apprentice, train apprentice, apprentice guides crawl).
Figure 13(b) shows the effect of resuming an enhanced crawl guided by the trained apprentice. The new (u, v) instances are all guaranteed to be unknown to the apprentice now. It is clear that the apprentice's prioritization immediately starts reducing the loss rate. Figure 14 shows
an even more impressive example. There are additional mild
gains from retraining the apprentice at later points. It may
be possible to show a more gradual online learning effect
by retraining the classifier at a finer interval, e.g., every
100 page fetches, similar to Aggarwal et al. In our context,
however, losing a thousand pages at the outset because of
the baseline crawler's limitation is not a disaster, so we need
not bother.
3.8 Effect of other features
We experimented with two other kinds of features, which we call topic and cocitation features.
Our limiting d_max to 5 may deprive the apprentice of important features in the source page u which are far from the link (u, v). One indirect way to reveal such features to the apprentice is to classify u, and to add the names of some of the top-scoring classes for u to the instance (u, v). §2.2.3 explains why this may help. This modification resulted in a 1% increase in the accuracy of the apprentice. A further increase of 1% was observed if we added all prefixes of the class name.
Figure 14: Another example of training the apprentice online followed by starting to use it for crawl guidance. Before guidance, the loss accumulation rate is over 30%; after, it drops to only 6%. (Topic: Classical Composers; axes: #pages fetched vs. cumulative expected loss; phases: collect instances for apprentice, train apprentice, apprentice guides crawl.)
For example, the full name
for the Linux category is /Computers/Software/Operating_
Systems/Linux.
We added all of the following to the
feature set of the source page: /, /Computers, /Computers/
Software, /Computers/Software/Operating_Systems and
/Computers/Software/Operating_Systems/Linux. We also
noted that various class names and some of their prefixes
appeared amongst the best discriminants of the positive and
negative classes.
Cocitation features for the link (u, v) are constructed by looking for other links (u, w) within a DOM distance of d_max such that w has already been fetched, so that Pr(c|w) is known. We discretize Pr(c|w) to two values, high and low, as in §2.3, and encode the feature as ⟨low, d⟩ or ⟨high, d⟩. The use of cocitation features did not improve the accuracy of the apprentice to any appreciable extent.
For both kinds of features, we estimated that random
variations in crawling behavior (because of fluctuating
network load and tie-breaking frontier scores) may prevent
us from measuring an actual benefit to crawling under
realistic operating conditions. We note that these ideas may
be useful in other settings.
4 Conclusion
We have presented a simple enhancement to a focused
crawler that helps assign better priorities to the unvisited
URLs in the crawl frontier. This leads to a higher rate of
fetching pages relevant to the focus topic and fewer false
positives which must be discarded after spending network,
CPU and storage resources processing them. There is no
need to manually train the system with paths leading to
relevant pages. The key idea is an apprentice learner which
can accurately predict the worth of fetching a page using
DOM features on pages that link to it. We show that the
DOM features we use are superior to simpler alternatives.
Using topics from Dmoz, we show that our new system can cut down the fraction of false positives by 30–90%.
We are exploring several directions in ongoing work. We wish to revisit continuous regression techniques for the apprentice, as well as more extensive features derived from the DOM. For example, we can associate with a token t the length ℓ of the DOM path from the text node containing t to the HREF to v, or the depth of their least common ancestor in the DOM tree. We cannot use these in lieu of the DOM offset, because regions which are far apart lexically may be close to each other along a DOM path. ⟨t, ℓ, d⟩ features will be more numerous and sparser than ⟨t, d⟩ features, and could be harder to learn. The introduction of large numbers of strongly dependent features may even reduce the accuracy of the apprentice. Finally, we wish to implement some form of active learning where only those instances (u, v) with the largest |Pr(c|u) − Pr(c|v)| are chosen as training instances for the apprentice.
Acknowledgments: Thanks to the referees for suggesting that we present Figure 7.
References
[1] C. C. Aggarwal, F. Al-Garawi, and P. S. Yu. Intelligent crawling on the World Wide Web with arbitrary predicates. In WWW2001, Hong Kong, May 2001. ACM. Online at http://www10.org/cdrom/papers/110/.
[2] C. Apte, F. Damerau, and S. M. Weiss. Automated learning of decision rules for text categorization. ACM Transactions on Information Systems, 1994. IBM Research Report RC18879.
[3] A. Blum and T. M. Mitchell. Combining labeled and unlabeled data with co-training. In Computational Learning Theory, pages 92-100, 1998.
[4] S. Chakrabarti. Integrating the document object model with hyperlinks for enhanced topic distillation and information extraction. In WWW 10, Hong Kong, May 2001. Online at http://www10.org/cdrom/papers/489.
[5] S. Chakrabarti, B. Dom, R. Agrawal, and P. Raghavan. Scalable feature selection, classification and signature generation for organizing large text databases into hierarchical topic taxonomies. VLDB Journal, Aug. 1998. Online at http://www.cs.berkeley.edu/~soumen/VLDB54_3.PDF.
[6] S. Chakrabarti, B. Dom, D. Gibson, J. Kleinberg, P. Raghavan, and S. Rajagopalan. Automatic resource compilation by analyzing hyperlink structure and associated text. In 7th World-wide web conference (WWW7), 1998. Online at http://www7.scu.edu.au/programme/fullpapers/1898/com1898.html.
[7] S. Chakrabarti, B. Dom, and P. Indyk. Enhanced hypertext categorization using hyperlinks. In SIGMOD Conference. ACM, 1998. Online at http://www.cs.berkeley.edu/~soumen/sigmod98.ps.
[8] S. Chakrabarti, B. E. Dom, D. A. Gibson, R. Kumar, P. Raghavan, S. Rajagopalan, and A. Tomkins. Topic distillation and spectral filtering. Artificial Intelligence Review, 13(5-6):409-435, 1999.
[9] S. Chakrabarti, M. van den Berg, and B. Dom. Focused crawling: a new approach to topic-specific web resource discovery. Computer Networks, 31:1623-1640, 1999. First appeared in the 8th International World Wide Web Conference, Toronto, May 1999. Available online at http://www8.org/w8-papers/5a-search-query/crawling/index.html.
[10] J. Cho, H. Garcia-Molina, and L. Page. Efficient crawling through URL ordering. In 7th World Wide Web Conference, Brisbane, Australia, Apr. 1998. Online at http://www7.scu.edu.au/programme/fullpapers/1919/com1919.htm.
[11] B. D. Davison. Topical locality in the Web. In Proceedings of the 23rd Annual International Conference on Research and Development in Information Retrieval (SIGIR 2000), pages 272-279, Athens, Greece, July 2000. ACM. Online at http://www.cs.rutgers.edu/~davison/pubs/2000/sigir/.
[12] P. M. E. De Bra and R. D. J. Post. Information retrieval in the world-wide web: Making client-based searching feasible. In Proceedings of the First International World Wide Web Conference, Geneva, Switzerland, 1994. Online at http://www1.cern.ch/PapersWWW94/reinpost.ps.
[13] P. M. E. De Bra and R. D. J. Post. Searching for arbitrary information in the WWW: The fish search for Mosaic. In Second World Wide Web Conference '94: Mosaic and the Web, Chicago, Oct. 1994. Online at http://archive.ncsa.uiuc.edu/SDG/IT94/Proceedings/Searching/debra/article.html and http://citeseer.nj.nec.com/172936.html.
[14] M. Diligenti, F. Coetzee, S. Lawrence, C. L. Giles, and M. Gori. Focused crawling using context graphs. In A. E. Abbadi, M. L. Brodie, S. Chakravarthy, U. Dayal, N. Kamel, G. Schlageter, and K.-Y. Whang, editors, VLDB 2000, Proceedings of 26th International Conference on Very Large Data Bases, September 10-14, 2000, Cairo, Egypt, pages 527-534. Morgan Kaufmann, 2000. Online at http://www.neci.nec.com/~lawrence/papers/focus-vldb00/focus-vldb00.pdf.
[15] W. A. Gale, K. W. Church, and D. Yarowsky. A method for disambiguating word senses in a large corpus. Computer and the Humanities, 26:415-439, 1993.
[16] M. Hersovici, M. Jacovi, Y. S. Maarek, D. Pelleg, M. Shtalhaim, and S. Ur. The shark-search algorithm--an application: Tailored Web site mapping. In WWW7, 1998. Online at http://www7.scu.edu.au/programme/fullpapers/1849/com1849.htm.
[17] T. Joachims, D. Freitag, and T. Mitchell. WebWatcher: A tour guide for the web. In IJCAI, Aug. 1997. Online at http://www.cs.cmu.edu/~webwatcher/ijcai97.ps.
[18] H. Leiberman. Letizia: An agent that assists Web browsing. In International Joint Conference on Artificial Intelligence (IJCAI), Montreal, Aug. 1995. See Website at http://lieber.www.media.mit.edu/people/lieber/Lieberary/Letizia/Letizia.html.
[19] H. Leiberman, C. Fry, and L. Weitzman. Exploring the Web with reconnaissance agents. CACM, 44(8):69-75, Aug. 2001. http://www.acm.org/cacm.
[20] A. McCallum. Bow: A toolkit for statistical language modeling, text retrieval, classification and clustering. Software available from http://www.cs.cmu.edu/~mccallum/bow/, 1998.
[21] A. McCallum and K. Nigam. A comparison of event models for naive Bayes text classification. In AAAI/ICML-98 Workshop on Learning for Text Categorization, pages 41-48. AAAI Press, 1998. Online at http://www.cs.cmu.edu/~knigam/.
[22] A. McCallum and K. Nigam. A comparison of event models for naive Bayes text classification. In AAAI/ICML-98 Workshop on Learning for Text Categorization, pages 41-48. AAAI Press, 1998. Also technical report WS-98-05, CMU; online at http://www.cs.cmu.edu/~knigam/papers/multinomial-aaaiws98.pdf.
[23] F. Menczer. Links tell us about lexical and semantic Web content. Technical Report Computer Science Abstract CS.IR/0108004, arXiv.org, Aug. 2001. Online at http://arxiv.org/abs/cs.IR/0108004.
[24] F. Menczer and R. K. Belew. Adaptive retrieval agents: Internalizing local context and scaling up to the Web. Machine Learning, 39(2/3):203-242, 2000. Longer version available as Technical Report CS98-579, http://dollar.biz.uiowa.edu/~fil/Papers/MLJ.ps, University of California, San Diego.
[25] F. Menczer, G. Pant, M. Ruiz, and P. Srinivasan. Evaluating topic-driven Web crawlers. In SIGIR, New Orleans, Sept. 2001. ACM. Online at http://dollar.biz.uiowa.edu/~fil/Papers/sigir-01.pdf.
[26] T. Mitchell. Machine Learning. McGraw Hill, 1997.
[27] T. Mitchell. Mining the Web. In SIGIR 2001, Sept. 2001. Invited talk.
[28] S. Mukherjea. WTMS: a system for collecting and analyzing topic-specific Web information. WWW9/Computer Networks, 33(1-6):457-471, 2000. Online at http://www9.org/w9cdrom/293/293.html.
[29] S. RaviKumar, P. Raghavan, S. Rajagopalan, D. Sivakumar, A. Tomkins, and E. Upfal. Stochastic models for the Web graph. In FOCS, volume 41, pages 57-65. IEEE, Nov. 2000. Online at http://www.cs.brown.edu/people/eli/papers/focs00.ps.
[30] J. Rennie and A. McCallum. Using reinforcement learning to spider the web efficiently. In ICML, 1999. Online at http://www.cs.cmu.edu/~mccallum/papers/rlspider-icml99s.ps.gz.
[31] G. Salton and M. J. McGill. Introduction to Modern Information Retrieval. McGraw-Hill, 1983.
[32] M. Subramanyam, G. V. R. Phanindra, M. Tiwari, and M. Jain. Focused crawling using TFIDF centroid. Hypertext Retrieval and Mining (CS610) class project, Apr. 2001. Details available from [email protected].
[33] L. Torgo and J. Gama. Regression by classification. In D. Borges and C. Kaestner, editors, Brasilian AI Symposium, volume 1159 of Lecture Notes in Artificial Intelligence, Curitiba, Brazil, 1996. Springer-Verlag. Online at http://www.ncc.up.pt/~ltorgo/Papers/list_pub.html.
| focused crawlers;Reinforcement learning;URLs;Focused crawling;taxonomy;DOM;HREF link;classifiers;Document object model |
26 | Accelerating 3D Convolution using Graphics Hardware | Many volume filtering operations used for image enhancement, data processing or feature detection can be written in terms of three-dimensional convolutions. It is not possible to yield interactive frame rates on today's hardware when applying such convolutions to volume data using software filter routines. As modern graphics workstations have the ability to render two-dimensional convoluted images to the frame buffer, this feature can be used to accelerate the process significantly. In this way generic 3D convolution can be added as a powerful tool in interactive volume visualization toolkits. | Introduction
Direct volume rendering is a very important technique for visualizing three-dimensional data. Several fundamentally different methods have been introduced [2, 4, 5, 6, 8, 12]. Hardware-based volume texturing [9, 14] is one of the most prominent variants for interactive visualization due to the high frame rates that can be achieved with this technique.
The basic principle of texture based volume rendering is depicted
in Figure 1. According to the sampling theorem a 3D view of the
volume is generated by drawing an adequate number of equidistant,
semi-transparent polygons parallel to the image plane with respect
to the current viewing direction ("volume slicing").
Filtering, on the other hand, is a major part of the visualization pipeline. It is broadly used for improving images, reducing noise, and enhancing detail structure. Volume rendering can benefit from filter operations, as low pass filters reduce the noise of, e.g., sampled medical volume images and high pass filters can be used for edge extraction, visualizing prominent data features. Multiscale approaches such as [13] regularly use disjunct filtering and downsampling steps and can benefit from any speedup of the filtering process.
Filters can be classified as linear or non-linear. Discrete linear filters can be written as convolutions with filter kernels that completely specify the filtering operation. Non-linear filters, such as morphological operators, were recently used for volume analysis and visualization [7]. Segmentation and classification depend heavily on filtering operations as well. Bro-Nielson [1] already thought about using convolution hardware for accelerating the registration process.
For texture based volume rendering the data set has to be loaded
into special texture memory, which can be addressed by the graphics
pipe very fast. The loading process itself is relatively slow, taking
several seconds for big data sets even on the fastest available
graphics workstations. As the data set has to be reloaded after a
filter operation has been performed in software, interactive filtering
will benefit a lot from convolution algorithms that directly work
on the texture hardware. Additionally, we will show in the following
that computing the convolution with graphics hardware is much
faster than software solutions.
3D Convolution
The general three-dimensional discrete convolution can be written as

$\tilde{f}(x,y,z) = \sum_{i_1,i_2,i_3} k(i_1,i_2,i_3)\, f(x+i_1,\, y+i_2,\, z+i_3)$

with f being the input data function and k being the filter kernel, resulting in the convoluted data $\tilde{f}$.
In the following examination we will assume without loss of generality that $k(i_1,i_2,i_3)$ is given for $0 \le i_1, i_2, i_3 < n$ and vanishes outside this interval. Also, we will assume that the input data function vanishes for (x, y, z) outside the interval $[0, m)^3$.
Figure 2: The Gauß filter function and its second derivative, shown in both continuous and discrete form.
Figure 3: Example image; original, filtered with the Gauß kernel, and filtered with its second derivative.
In the special case of $k(i_1,i_2,i_3) = k_1(i_1) \cdot k_2(i_2) \cdot k_3(i_3)$ the kernel is called separable. In this case the number of operations necessary for the convolution can be reduced down to $O(m^3 n)$, from $O(m^3 n^3)$ in the non-separable case:
$\tilde{f}_1(x,y,z) = \sum_{i_1} k_1(i_1)\, f(x+i_1,\, y,\, z)$   (1)
$\tilde{f}_2(x,y,z) = \sum_{i_2} k_2(i_2)\, \tilde{f}_1(x,\, y+i_2,\, z)$   (2)
$\tilde{f}(x,y,z)\, = \sum_{i_3} k_3(i_3)\, \tilde{f}_2(x,\, y,\, z+i_3)$   (3)
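As a point of reference for the hardware method described below, the separable pass structure of equations (1)-(3) can be implemented in software as sketched here; this is our own illustration, with the volume stored as a flat array and the data assumed to vanish outside [0, m)^3.

```cpp
#include <vector>

// One 1D convolution pass along the axis selected by the step (dx, dy, dz).
// vol is an m*m*m volume stored as vol[(z*m + y)*m + x]; k has length n.
void convolve_axis(std::vector<float>& vol, int m, const std::vector<float>& k,
                   int dx, int dy, int dz) {
    std::vector<float> out(vol.size(), 0.0f);
    int n = static_cast<int>(k.size());
    for (int z = 0; z < m; ++z)
        for (int y = 0; y < m; ++y)
            for (int x = 0; x < m; ++x) {
                float acc = 0.0f;
                for (int i = 0; i < n; ++i) {
                    int xs = x + i * dx, ys = y + i * dy, zs = z + i * dz;
                    if (xs < m && ys < m && zs < m)        // data vanishes outside [0, m)^3
                        acc += k[i] * vol[(zs * m + ys) * m + xs];
                }
                out[(z * m + y) * m + x] = acc;
            }
    vol.swap(out);
}

// Full separable 3D convolution: O(m^3 * n) operations instead of O(m^3 * n^3).
void convolve3d_separable(std::vector<float>& vol, int m,
                          const std::vector<float>& k1,
                          const std::vector<float>& k2,
                          const std::vector<float>& k3) {
    convolve_axis(vol, m, k1, 1, 0, 0);   // equation (1): along x
    convolve_axis(vol, m, k2, 0, 1, 0);   // equation (2): along y
    convolve_axis(vol, m, k3, 0, 0, 1);   // equation (3): along z
}
```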
Of course special care has to be taken near the boundaries of the input data function, as convolution routines are generally written at a very low language level for speed purposes.
Figure 2 shows two well known convolution filters, the Gauß filter and its second derivative, both in their continuous and discrete forms. They can be used for noise reduction and edge detection, respectively. An example image that was filtered with these kernels can be seen in Figure 3.
Hardware Acceleration
In order to accelerate the convolution process, special purpose hardware
can be used. On systems that have built-in Digital Signal
Processors (DSPs), for example for multimedia acceleration, a specialized convolution subroutine could be downloaded to the signal
Figure 4: The first pass of the hardware filtering algorithm (a 2D convolution applied slice by slice).
Figure 5: Texture coordinates used for exact texel hits. The texture coordinates assigned to the slice polygons address the exact position of the data value inside each texel, so that no interpolation of the data occurs.
processor. On the other hand, most of the time these DSPs are not well documented, or the run-time system cannot be modified by the user. In general they are not faster than the main processor either. Additionally, there exists a wide range of different DSP systems, all of which are incompatible with each other.
The approach we have implemented in our system is to combine a 2D and a 1D convolution kernel in order to calculate three-dimensional separable convolutions. Several vendors of the graphics API OpenGL -- as for example Silicon Graphics Inc. [10] -- have included extensions for one- and two-dimensional filtering. In contrast to most implementations that emulate these extensions only in software, the SGI graphics pipes MXE and BasicReality calculate the convolutions on-board, boosting performance by an order of magnitude already for reasonably sized filters. The CRM graphics system of the O2 is capable of rendering convolutions in hardware as well, but it does not support volume textures, which are also crucial for the algorithm.
Recall that the volume data is already stored in texture memory for visualization using texture hardware. Now (1) and (2) are combined into one 2D convolution that is applied to every plane of the volume data perpendicular to the z-axis. Therefore, the volume is drawn plane by plane by rendering textured triangle strips (two triangles per strip) into the frame buffer, as sketched in Figure 4. The texture coordinates assigned to the vertices of the triangle strips are specified in such a way that no interpolation of the texture is necessary (see Figure 5). In order to increase the potential speedup and to avoid rounding problems, nearest neighbor interpolation is activated during the rendering process. Each plane is then read back with the OpenGL routine glCopyTexSubImage3DEXT, which replaces one plane of the active volume texture orthogonal to the z-axis by data that is read directly from the frame buffer. While transferring the data to the texture memory, the separable 2D convolution filter is activated using glSeparableFilter2DEXT. After this first pass, the volume texture contains data filtered along the x- and y-axes.
Figure 6: The second pass (1D convolution)
Figure 7: The OpenGL graphics pipeline
Applying the convolution to the third axis is more complicated and is depicted in Figure 6. In this second pass, planes are rendered perpendicular to one of the other axes. Assume that the y-axis has been chosen. As glCopyTexSubImage3DEXT cannot write planes orthogonal to any axis other than the z-axis to the texture memory, they have to be transferred to a second volume texture. OpenGL's texture objects are used by calling glBindTexture for switching between them, which implies only a very small overhead. While copying the data from the frame buffer to the texture memory, a one-dimensional convolution filter is activated. As we are dealing with two-dimensional image data, we specify a 2D convolution filter with glConvolutionFilter2DEXT, using a filter kernel that is exactly one pixel wide.
After this second pass, the convolved volume data has been mirrored at the plane y - z = 0. For texture-based volume renderers this does not impose any restrictions, as they only have to swap coordinates during texture coordinate calculations. When the data order is crucial for the application, the algorithm of pass two can be used for both passes, thus restoring the data order in the second pass. However, the textured planes then have to be drawn twice perpendicular to the y-axis. Cache misses are much more likely in this case compared to planes rendered orthogonal to the z-axis. This can increase the convolution times on big volumes by up to 50% even on fast graphics hardware.
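The data flow of the two hardware passes, including the axis swap produced by the second pass, can be emulated in NumPy. The sketch below is only a software model of what the graphics pipeline computes, not the OpenGL implementation itself, and the example volume and kernel are illustrative assumptions.

```python
import numpy as np

def pass_one(volume, k1, k2):
    """2D separable filtering of every z-plane (equations (1) and (2))."""
    out = np.zeros_like(volume)
    nx, ny, _ = volume.shape
    for i1, a in enumerate(k1):
        for i2, b in enumerate(k2):
            out[:nx - i1, :ny - i2, :] += a * b * volume[i1:, i2:, :]
    return out

def pass_two(volume, k3):
    """1D filtering along z while copying planes perpendicular to y;
    the result is mirrored at the plane y - z = 0 (y and z are swapped)."""
    nx, ny, nz = volume.shape
    out = np.zeros((nx, nz, ny))
    for i3, c in enumerate(k3):
        out[:, :nz - i3, :] += c * np.transpose(volume[:, :, i3:], (0, 2, 1))
    return out

vol = np.random.rand(8, 8, 8)
k = np.array([0.25, 0.5, 0.25])
filtered = pass_two(pass_one(vol, k, k), k)   # axes of the result: (x, z, y)
```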
Figure 7 depicts the relevant part of the OpenGL pipeline. It reveals that pixel fragments read from the frame buffer are clamped to [0, 1) before they can be written back to the frame buffer or into the texture memory. Filter kernels with negative coefficients can compute negative intermediate values during the two-dimensional convolution pass, which will then not contribute to the final 1D convolution. Additionally, no negative results can be stored in the output volume. These values are especially needed when the filter kernel is not symmetrical.
The strategy for avoiding these effects depends on the particular application. For edge detection, the absolute maxima of the filtered volume data are of interest. In this case, calculating the absolute value can be performed in hardware as well, further reducing the necessary computations on the CPU. In most other cases, post-convolution scaling and bias can be used to map the expected results to the interval [0, 1) just before the clamping takes place. OpenGL provides the GL_POST_CONVOLUTION_c_SCALE_EXT and GL_POST_CONVOLUTION_c_BIAS_EXT parameters, which are applied to pixel color values after convolution and before clamping, as depicted in Figure 7.
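For example, if the intermediate values are expected to lie in a range [expected_min, expected_max], the scale and bias values that map this range into [0, 1) can be computed as below. The helper only computes the two numbers; the expected range itself is an assumption that has to be chosen per filter kernel.

```python
def post_convolution_scale_bias(expected_min, expected_max):
    """Scale and bias that map [expected_min, expected_max] into [0, 1)."""
    scale = 1.0 / (expected_max - expected_min)
    bias = -expected_min * scale
    return scale, bias

# Example: a second-derivative kernel producing values in roughly [-0.5, 0.5].
scale, bias = post_convolution_scale_bias(-0.5, 0.5)   # -> (1.0, 0.5)
```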
Results
The data sets used are presented on the color plate. Figures 8 to 12 show slices of a head data set of size 128^3. Figure 8 reveals the unfiltered data set, whereas Figures 9 and 10 present slices of the software and hardware low pass filtered volume data, respectively. Here, a Gaussian filter kernel of size 5^3 has been used. Almost no differences can be detected. Figures 11 and 12 depict the results for high pass filtering using the second derivative of the Gaussian filter, again computed in software and in hardware. The hardware-convolved volume displays noticeable artifacts that occur due to the already mentioned clamping step in the OpenGL pipeline. By using post-convolution scaling and bias the artifacts disappear completely.
Figures 13 to 18 show another data set that has been used for testing the implementation. They have been visualized with the hardware-based volume rendering toolkit TiVOR [11], again with the first picture being rendered from the original data set. While the noise reduction effect of the Gaussian filter is rather disturbing in Figure 14, as it smears volume details, it has remarkably positive effects on iso-surface generation (compare Figures 17 and 18). Note that the iso-surfaces are rendered in real time using a hardware-accelerated volume rendering approach described in [14].
Noise interferes with high pass filters, as can be seen in Figure 15. Using a high pass filter on the already low pass filtered data set reveals far more and better separable details (see Figure 16) compared to the directly filtered volume.
We have tested the speed of our implementation against a well-tuned software convolution filter. Unsurprisingly, the software convolution is almost completely memory bound. Even extremely fast workstations such as the Octane are limited by the main memory bandwidth,
Filter size    2^3           3^3           5^3            7^3
head           0.33 / 0.72   0.33 / 1.02   0.33 / 1.56    0.48 / 2.0
angio          2.5 / 6.0     2.5 / 8.7     2.5 / 14.7     3.7 / 21.3

head:  data set created by computer tomography, 128^3
angio: data set created by MR angiography, 256^3
All times were measured on a Silicon Graphics Onyx2 equipped with a BasicReality graphics pipe. The system has two R10000/195 MHz processors and 640 MB main memory.
Table 1: Convolution times in seconds using hardware / software
as today's caches are far too small for the values needed for convolution along the z-axis. High-end machines such as the Onyx2 perform huge 3D convolutions three times faster than the Octane, even when equipped with slower CPUs. Standard PCs cannot cope with the memory bandwidth of the Onyx2 system, and multiprocessor options will not accelerate the process because it is not CPU bound.
Table 1 shows convolution times for different data sets and filter sizes, using software and hardware convolution. All times have been measured on an Onyx2 equipped with a BasicReality graphics pipe. The maximum filter size supported by the graphics system is 7^2. Therefore, the maximum 3D convolution that can be performed in hardware on this system is 7^3. Noteworthy is the fact that the BasicReality graphics system is optimized for filter kernels of size 5^2: convolutions with smaller kernels need exactly the same computation time, and filters of size 6^2 and 7^2 share their timing results as well. The x and y coordinates of the volume are swapped during the hardware-based convolution process, which is a side effect of the presented 3D convolution algorithm.
Conclusion
As several of today's graphics workstation vendors have added two-dimensional convolution to their OpenGL pipeline, using this capability for accelerating 3D convolution is an almost straightforward approach. We have determined that, using the implemented algorithm, three-dimensional convolution can be performed even on big data sets at nearly interactive rates. First promising approaches to accelerating wavelet decomposition and reconstruction have been investigated as well [3].
As all intermediate data is transferred to the frame buffer, clamping can swallow negative values that result from the two-dimensional convolution as well as final negative results. Thus, this approach is currently most useful for symmetrical filter kernels. By using the post-convolution scaling and bias extensions, these problems can be easily overcome.
Non-separable convolutions are not possible right now with this algorithm. However, by applying several two-dimensional filter kernels and blending the convolved images in the frame buffer, the use of non-separable 3D kernels will become a possibility in the future as well.
References
[1] M. Bro-Nielsen. Medical Image Registration and Surgery Simulation. PhD thesis, IMM, Technical University of Denmark, 1996.
[2] R. A. Crawfis and N. L. Max. Texture Splats for 3D Scalar and Vector Field Visualization. In G. M. Nielson and D. Bergeron, editors, Visualization 93, pages 261-265, Los Alamitos, CA, 1993. IEEE Computer Society Press.
[3] M. Hopf and T. Ertl. Hardware Based Wavelet Transformations. In Erlangen Workshop '99 on Vision, Modeling and Visualization, Erlangen, November 17-19, 1999. IEEE. Accepted for publication.
[4] A. Kaufman. Volume Visualization. IEEE Computer Society Press, 1991.
[5] P. Lacroute and M. Levoy. Fast Volume Rendering Using a Shear-Warp Factorization of the Viewing Transformation. In Computer Graphics Proceedings, Annual Conference Series, pages 451-457, Los Angeles, California, July 1994. ACM SIGGRAPH, Addison-Wesley Publishing Company, Inc.
[6] L. Lippert, M. H. Gross, and C. Kurmann. Compression Domain Volume Rendering for Distributed Environments. In D. Fellner and L. Szirmay-Kalos, editors, EUROGRAPHICS '97, volume 14, pages C95-C107. Eurographics Association, Blackwell Publishers, 1997.
[7] C. Lürig and T. Ertl. Hierarchical Volume Analysis and Visualization Based on Morphological Operators. In Proc. IEEE Visualization '98, pages 335-341, 1998.
[8] P. Schröder and G. Stoll. Data Parallel Volume Rendering as Line Drawing. In 1992 Workshop on Volume Visualization. ACM SIGGRAPH, October 1992.
[9] Silicon Graphics Inc., Mountain View, California. Volume Rendering using RE2 Technology, 1994.
[10] Silicon Graphics Inc., Mountain View, California. OpenGL on Silicon Graphics Systems, 1996.
[11] O. Sommer, A. Dietz, R. Westermann, and T. Ertl. An Interactive Visualization and Navigation Tool for Medical Volume Data. In N. M. Thalmann and V. Skala, editors, WSCG '98, The Sixth International Conference in Central Europe on Computer Graphics and Visualization '98, volume II, pages 362-371, Plzen, Czech Republic, February 1998. University of West Bohemia Press.
[12] T. Totsuka and M. Levoy. Frequency Domain Volume Rendering. Computer Graphics, 27(4):271-278, August 1993.
[13] R. Westermann and T. Ertl. A Multiscale Approach to Integrated Volume Segmentation and Rendering. In Computer Graphics Forum 16(3) (Proc. EUROGRAPHICS '97), pages 117-129. Blackwell, 1997.
[14] R. Westermann and T. Ertl. Efficiently Using Graphics Hardware in Volume Rendering Applications. In Computer Graphics Proceedings, Annual Conference Series, pages 169-177, Orlando, Florida, 1998. ACM SIGGRAPH.
Figure 8: The unfiltered head data set
Figure 9: Head, low pass filtered in software
Figure 10: Head, low pass filtered in hardware
Figure 11: Head, high pass filtered in software
Figure 12: Head, high pass filtered in hardware
Figure 13: The original angiography data set
Figure 14: The Gaussian filtered data set
Figure 15: Data after direct filtering with the Gaussian's second derivative
Figure 16: First low pass, then high pass filtered data
Figure 17: Iso-surfaces on the original angiography data set
Figure 18: Iso-surfaces on the Gaussian filtered data set | 3D convolution;Convolution;visualization;filtering;Volume Visualization;Hardware Acceleration;volume rendering |
27 | Agent Technology for Personalized Information Filtering: The PIA-System | As today the amount of accessible information is overwhelming, the intelligent and personalized filtering of available information is a main challenge. Additionally, there is a growing need for the seamless mobile and multi-modal system usage throughout the whole day to meet the requirements of the modern society ("anytime, anywhere, anyhow"). A personal information agent that is delivering the right information at the right time by accessing, filtering and presenting information in a situation-aware matter is needed. Applying Agent-technology is promising, because the inherent capabilities of agents like autonomy, pro- and reactiveness offer an adequate approach. We developed an agent-based personal information system called PIA for collecting, filtering, and integrating information at a common point, offering access to the information by WWW, e-mail, SMS, MMS, and J2ME clients. Push and pull techniques are combined allowing the user to search explicitly for specific information on the one hand and to be informed automatically about relevant information divided in pre-, work and recreation slots on the other hand. In the core of the PIA system advanced filtering techniques are deployed through multiple filtering agent communities for content-based and collaborative filtering. Information-extracting agents are constantly gathering new relevant information from a variety of selected sources (internet, files, databases, web-services etc.). A personal agent for each user is managing the individual information provisioning, tailored to the needs of this specific user, knowing the profile, the current situation and learning from feedback. | Introduction
Nowadays, desired information often remains unfound,
because it is hidden in a huge amount of unnecessary and
irrelevant data. On the Internet there are well maintained search
engines that are highly useful if you want to do full-text
keyword-search [1], but they are not able to support you in a
personalized way and typically do not offer any "push-services"
or in other words no information will be sent to you when you are
not active. Also, as they normally do not adapt themselves to
mobile devices, they cannot be used throughout a whole day
because you are not sitting in front of a standard browser all the
time and when you return, these systems will treat you in the
very same way as if you have never been there before (no
personalization, no learning). Users who are not familiar with
domain-specific keywords won't be able to do successful
research, because no support is offered. Predefined or auto-generated
keywords for the search-domains are needed to fill that
gap. As information demands are continuously increasing today and the gathering of information is time-consuming, there is a growing need for personalized support. Labor-saving information is needed to increase productivity at work, and there is also an increasing desire for a personalized offer of general information, specific domain knowledge, entertainment, shopping, fitness, lifestyle and health information. Existing commercial "personalized" systems are far away from that functionality, as they usually do not offer much more than letting the user choose the layout or collect some of the offered information channels (and the information within is not personalized).
To overcome that situation, you need a personal information agent (PIA) that "knows" the way you think and can really support you throughout the whole day by accessing,
filtering and presenting information to you in a situation-aware manner (Figure 1). Some systems exist (FAB [2], Amalthaea [3], WAIR [4], P-Tango [5], TripMatcher [6], PIAgent [7], Letizia [8], Let's Browse [9], Newt [10], WebWatcher [11], PEA [12], BAZAR [13]) that implement advanced algorithmic technology, but they did not become widely accepted, maybe because of real-world requirements like availability, scalability and adaptation to current and future standards and mobile devices.
In this paper we present an agent-based approach for the
efficient, seamless and tailored provisioning of relevant
information on the basis of end-users' daily routine. As we
assume the reader will be familiar with agent-technology (see
[14], [15] for a good introduction), we will concentrate on the
practical usage and the real-world advantages of agent-technology
. We describe the design and architecture in the
following section and afterwards depict the system in section
three.
Design of PIA -- The Personal Information Agent
To meet the discussed requirements and to support the user in this manner, we designed a multi-agent system composed of four classes of agents: many information-extracting agents, agents that implement different filtering strategies, agents for providing different kinds of presentation, and one personal agent for each user. Logically, all this can be seen as a classical three-tier application (Figure 2). Concerning the information extraction, general search engines on the one hand but also domain-specific portals on the other hand have to be integrated. Additional information sources (databases, files, mailing lists etc.) should also be easy to integrate at run-time.
Several agents realize different filtering strategies (content-based and collaborative filtering [16], [5]) that have to be combined in an intelligent manner. Agents for actively providing information via SMS, MMS, fax and e-mail (push services) are also needed. A multi-access service platform has to manage the presentation of the results, tailored to the used device and the current situation. The personal agent should constantly improve its knowledge about "his" user by learning from the given feedback, which is also used for collaborative filtering, as information that has been rated as highly interesting might be useful for a user with a similar profile as well. As users usually are not very keen on giving explicit feedback (ratings), implicit feedback like the fact that the user stored an article can also be taken into account [18].
A "keywordassistant" should support the user to be able to
define queries even if he is not familiar with a certain domain.
Keywords predefined by experts should be offered and also the
possibility to point to a "basis-paper" serving as an example. The
keywordassistant will extract automatically the most important
keywords of that paper and will provide them for searching. The
whole system is designed to be highly scalable, easy to modify, to
adapt and to improve and therefore an agent-based approach that
allows to integrate or to remove agents even at run-time is a
smart choice. The different filtering techniques are needed to
provide accurate results, because the weakness of individual
techniques should be compensated by the strengths of others.
Documents should be logically clustered by their domains to
allow fast access, and for each document several "models" [19]
will be built, all including stemming and stop-word elimination,
but some tailored for very efficient retrieval at run-time and
others to support advanced filtering algorithms for a high
accuracy.
Figure 1: Demand for a personal information agent
Figure 2: The PIA System seen as a three tier application
If the system notices that the content-based filtering is not
able to offer sufficient results, additional information should be
offered by collaborative filtering, i.e. information that was rated
as interesting by a user with a similar profile will be presented.
With the "push-services", the user can decide to get new
integrated relevant information immediately and on a mobile
device, but for users who do not want to get new information
immediately, a personalized newsletter also has to be offered.
This newsletter is collecting new relevant information to be
conveniently delivered by e-mails, allowing users to stay
informed even if they are not actively using the system for some
time.
Deployment and Evaluation
We implemented the system using Java and standard open-source database and web technology, based on the JIAC IV agent framework [20]. JIAC IV is FIPA 2000 compliant [21], that is, it conforms to the latest standards.
It consists of a communication infrastructure as well as
services for administrating and organizing agents (Agent
Management Service, AMS and Directory Facilitator, DF). The
JIAC IV framework provides a variety of management and security functions: management services including configuration, fault management and event logging, and security aspects including authorization, authentication and mechanisms for measuring and ensuring trust. It has therefore been a good choice for developing a real-world application from the outset. Within JIAC IV, agents are arranged on platforms, allowing agents that belong together to be grouped under the control of at least one "manager". A number of visual tools are offered to deal with administration aspects. Figure 3 shows a platform, in this case with agents for the building of different models specialized for different retrieval algorithms.
The prototypical system is currently running on Sun Fire 880, Sun Fire 480R and Sun Fire V65x servers, with the main filtering computation, the database and web server, and the information extraction placed on different machines for performance reasons.
(JIAC IV is funded by T-Systems Nova GmbH.)
Figure 3: Several Agents are building different models specialized for different retrieval algorithms.
New content is stored, validated and consolidated in a central relational database (update-driven). Information can be accessed by WWW, e-mail, SMS, MMS, and J2ME clients, where the system adapts the presentation accordingly, using CC/PP (Composite Capabilities/Preference Profiles) with a tailored layout for a mobile phone and a PDA (see Section 3.5). The personalized newsletter and the push services are sent via e-mail, SMS or MMS. The user can use self-defined keywords for a request for information or choose a category, in which case the system will use a list of keywords predefined by experts and updated smoothly by learning from collaborative filtering. A combination of both is also possible. The keyword assistant is able to extract the most important keywords of a given article using the term frequency inverse document frequency (TFIDF) algorithm [22].
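As a concrete illustration of such a keyword assistant, the sketch below ranks the terms of one article by TFIDF against a small background corpus. The tokenization, the example documents and the omission of stemming and stop-word removal are simplifying assumptions.

```python
import math
import re
from collections import Counter

def tfidf_keywords(article, corpus, top_k=5):
    """Rank the article's terms by term frequency x inverse document frequency."""
    tokenize = lambda text: re.findall(r"[a-z]+", text.lower())
    doc_tokens = [set(tokenize(d)) for d in corpus]
    tf = Counter(tokenize(article))
    n_docs = len(corpus) + 1          # include the article itself
    scores = {}
    for term, freq in tf.items():
        df = 1 + sum(term in d for d in doc_tokens)
        scores[term] = freq * math.log(n_docs / df)
    return sorted(scores, key=scores.get, reverse=True)[:top_k]

corpus = ["agents filter news for mobile users",
          "collaborative filtering recommends articles"]
print(tfidf_keywords("personal agents filter and rank scientific articles", corpus))
```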
3.2
Gathering new Information
New information is constantly inserted into the system by information extraction agents, e.g. web-reader agents or agents that search specified databases or directories. Additional agents for the collection of new content can easily be integrated even at runtime, as all that is necessary for a new agent is to register itself with the system, store the extracted information in a defined database table and inform the modeling-manager agent about the insertion. As a file-reader agent is constantly observing a special directory, manual insertion of documents can be done simply by drag-and-drop, and an e-mail and upload interface also exists. Sources can also be integrated via Web services. New readers can be created using an easy-to-handle tool, and another tool enables convenient observation of the extraction agents, as this is the interface to the outside that might become critical if, for example, the data format of a source is changed.
3.3
Pre-processing for efficient retrieval
The first step of pre-processing information for efficient retrieval is the use of distinct tables in the global database for different domains, e.g. news, agent-related papers, etc. Depending on the filtering request, tables with no chance of being relevant can therefore be omitted. The next step is the building of several models for each document. Stemming and stop-word elimination are implemented in every model, but different models are built by computing a term importance based either only on local frequencies or on the term frequency inverse document frequency (TFIDF) approach. Furthermore, the number of words included in a model differs, which makes models either more accurate or more efficient. The created models are indexed either on document or word level, which facilitates their efficient retrieval. The manager agent assigns the appropriate modeling agents to start building their models, but might decide (or the human system administrator can tell it) at runtime to delay the latest time-consuming modeling activity for a while if the system load is critical at that moment. This feature is important for a real-world application, as overloading has been a main reason for the unusability of advanced academic systems.
3.4
Filtering technology
As the quality of results for a particular filtering request might heavily depend on the information domain (news, scientific papers, conference calls), different filtering communities are implemented. For each domain, there is at least one community which contains agents tailored to do specific filtering and managing tasks in an efficient way. Besides filtering agents, each and every community also has one so-called manager agent that is mainly responsible for coordinating the filtering tasks. The coordination is based on quality, CPU, DB and memory fitness values, which are the measures associated with each filtering agent [23]. These measures respectively reflect a filtering agent's successfulness in the past, its efficiency in using available CPU and DB resources, and the amount of memory required for filtering. A higher CPU, DB or memory fitness value means that the filtering agent needs the particular resource to a smaller extent for performing a filtering task. This further means that an insufficiency of a particular resource has a smaller influence on filtering agents with a higher corresponding fitness value.
The introduced fitness values, together with knowledge about the current system runtime performance, can make coordination situation-aware (see also [23]), in the sense that when a particular resource is highly loaded, priority in coordination should be given to filtering agents for which this resource has minor importance. This situation-aware coordination provides a way to balance response time and filtering accuracy, which is needed to overcome the problem of finding a perfect filtering result only after a few hours or even a few days of expensive filtering.
Instead of assigning a filtering task to the agent with the best combination of fitness values in the current runtime situation, the manager employs a proportional selection principle [24], where the probability of an agent being chosen to do the actual filtering is proportional to the mentioned combination of its fitness values. By not always relying only on the most promising agents, but also sometimes offering a job to other agents, the manager gives each and every agent a chance to improve its fitness values. While the adaptation of the quality fitness value can only be accomplished after user feedback has become available, the other fitness values can be changed immediately after the filtering has finished, through response time analyses. The adaptation scheme has a decreasing learning rate that prevents already learnt fitness values from being destroyed, which further means that proven agents pay smaller penalties for bad jobs than novice ones [17].
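The proportional selection principle can be sketched as a roulette-wheel choice over a combined fitness score. How the quality, CPU, DB and memory fitness values are weighted into one score is an illustrative assumption, not the scheme used by the actual manager agent.

```python
import random

def select_filtering_agent(agents, weights=(0.4, 0.2, 0.2, 0.2)):
    """Pick an agent with probability proportional to its combined fitness.

    agents: dict mapping agent name -> (quality, cpu, db, memory) fitness values.
    """
    combined = {name: sum(w * f for w, f in zip(weights, fitness))
                for name, fitness in agents.items()}
    total = sum(combined.values())
    pick = random.uniform(0.0, total)
    running = 0.0
    for name, score in combined.items():
        running += score
        if pick <= running:
            return name
    return name  # numerical fall-through

community = {"filter_a": (0.9, 0.5, 0.7, 0.6),
             "filter_b": (0.6, 0.9, 0.8, 0.9),
             "novice_c": (0.3, 0.4, 0.4, 0.5)}
print(select_filtering_agent(community))
```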
In the case where a received filtering task cannot be successfully accomplished locally, usually because it belongs to an unsupported information domain, the manager agent has to cooperate with other communities. While coordination takes place inside each community between the manager and the filtering agents, cooperation occurs between communities among manager agents. Cooperation is based either on finding a community which supports the given domain, or on splitting the received task into sub-tasks such that for each sub-task a community with good support exists. Figure 4 presents a high-level overview of the filtering framework, which is composed of three different filtering communities (FC), where each community has one filter manager agent (M) and a different number of specialized filtering agents (F). There are two different databases (DB) with information from different domains, and each is accessed by at least one community. In the figure, cooperation is illustrated as a circle with arrows which connect the manager agents.
Figure 4: Filtering framework with three different
communities
3.5
Presentation
As one of the main design principles has been to support the user
throughout the whole day, the PIA system provides several
different access methods and adapts its interfaces to the used
device (Figure 5). To fulfill these requirements an agent platform
("Multi Access Service Platform") was developed that optimizes
the graphical user interface for the access by Desktop PCs, PDAs
and smart phones.
If the user wants to use the PIA system, the request is received by the Multi Access Service Platform (MAP). The MAP delegates the request to an agent providing the logic for this service. In the PIA system the requests are forwarded either to the login agent or to the personal agent. The chosen agent performs the service-specific actions and sends the MAP an abstract description of the form that should be presented to the user. For this purpose the XML-based Abstract Interaction Description Language (AIDL) has been defined. Based on the abstract description and the features of the used device, the MAP generates an optimized interface presented to the user. The conversion is implemented as an XSLT transformation in which the optimal XSLT style sheet is selected based on the CC/PP information about the user's device. This approach simplifies the creation of optimized user interfaces for different devices. The abstract interface description can be easily transformed into HTML, PDA-optimized HTML or WML. If the user wants to have a voice interface, a style sheet for converting the abstract user interface description into VoiceXML has to be added to the MAP. Additional changes to the PIA system are not needed.
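The stylesheet-selection mechanism can be illustrated with the small Python sketch below, using lxml only as a stand-in (the actual MAP is part of the Java-based agent platform). The mapping from a CC/PP-derived device class to a style sheet and all file names are hypothetical.

```python
from lxml import etree

# Hypothetical mapping from a CC/PP-derived device class to an XSLT style sheet.
STYLESHEETS = {
    "desktop": "aidl_to_html.xsl",
    "pda": "aidl_to_pda_html.xsl",
    "phone": "aidl_to_wml.xsl",
}

def render_for_device(aidl_xml_path, device_class):
    """Transform an abstract interface description into device-specific markup."""
    stylesheet = etree.parse(STYLESHEETS.get(device_class, STYLESHEETS["desktop"]))
    transform = etree.XSLT(stylesheet)
    document = etree.parse(aidl_xml_path)
    return str(transform(document))

# Example call; the AIDL form and style sheet files are placeholders.
# markup = render_for_device("login_form.aidl.xml", "pda")
```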
Besides the features provided by the MAP, the design of the user interface must create an easy-to-use system even on devices with a tiny screen and without a keyboard. That is why the PIA interface provides additional navigation elements on complex forms and minimizes the use of text input fields. New results matching a defined request are first presented as a list of short blocks containing only title, abstract and some meta-information (as this is familiar to every user from well-known search engines). This information is also well readable on PDAs or even mobile phones. Important articles can be stored in a repository. This allows the user to choose on his PDA the articles he wants to read later at his desktop PC.
Depending on the personal options specified by the user, old information found for a specific query may be deleted automatically, step by step, after a given time, so that there is always up-to-date information presented to the user (we call this "smart mode"). This is, for example, convenient for getting personalized news. The other option is to keep that information indefinitely ("global mode"), e.g. for a query for basic scientific papers.
For the "push services" (that is, the system becomes active and sends the user information without an explicit request), the user specifies his working time (e.g. 9 am to 5 pm). This divides the day into a pre-, work and recreation slot, for which the PIA system assumes different information demands. For each slot an adequate delivery technology can be chosen (e-mail, SMS, MMS, fax or voice). If you decide to subscribe to the personalized newsletter, new relevant information will be collected and sent to you by e-mail or fax once a day, with a similar layout and structure for convenient reading, if you have not already seen it via other pull or push services. Therefore you can also stay informed without having to log into the system and if you do not want to get all new information immediately.
Figure 5: Information accessed by browser or tailored for
presentation on a PDA or a mobile phone
Conclusion and future work
The implemented system has acceptable runtime performance and shows that it was a good choice to develop a personal information system using agent technology based on a solid agent framework like JIAC IV. Currently, the PIA system supports more than 120 different web sources, grabs around 3,000 new semi-structured and unstructured documents daily, has almost 500,000 already pre-processed articles, and actively helps about fifty scientists related to our laboratory in their information retrieval activities. Their feedback and evaluation is valuable input for the further improvement of PIA. In the near future we plan to increase the number of users to thousands, and therefore we plan to work on the further optimization of the filtering algorithms to be able to respond to multiple filtering requests simultaneously. Also, we are thinking about integrating additional services for the user that provide information tailored to his geographical position (GPS), a natural speech interface and innovative ways to motivate the user to give precise explicit feedback, as the learning ability of the system depends on that information.
REFERENCES
[1]
Brin, S.; Page, L.: The anatomy of a large-scale hyper
textual (Web) search engine, Proc. 7th International World
Wide Web Conference on Computer Networks, 30(1-7), pp.
107-117, 1998.
[2]
Balabanovic, M.; Yoav, S.: FAB: Content-Based
Collaborative Recommendation, Communication of the
ACM, Volume 40, Number 3, pp. 66-72, 1997.
[3]
Moukas, A.: Amalthaea: Information Discovery and
Filtering using a Multi agent Evolving Ecosystem, Practical
Application of Intelligent Agents & Multi-Agent
Technology, London 1998.
[4]
Zhang, B.; Seo, Y.: Personalized Web-Document Filtering
Using Reinforcement Learning, Applied Artificial
Intelligence, Volume 15 Number 7, pp. 665-685, 2001.
[5]
Claypool, M.; Gokhale, A.; Miranda, T.; Murnikov, P.;
Netes, D.; Sartin, N.: Combining Content-Based and
Collaborative Filters in an Online Newspaper, ACM SIGIR
Workshop on Recommender Systems, Berkeley, CA, August
19, 1999.
[6]
Delgado, J.; Davidson, R.: Knowledge bases and user
profiling in travel and hospitality recommender systems, In
Proceedings of the ENTER 2002 Conference, Innsbruck,
Austria, January 22-25 2002, Springer Verlag, pp. 1-16.
[7]
Kuropka, D.; Serries, T.: Personal Information Agent,
Informatik Jahrestagung 2001, pp. 940-946.
[8]
Lieberman, H.: Letizia: An Agent That Assists Web
Browsing, International Joint Conference on Artificial
Intelligence, Montreal, August 1995.
[9]
Lieberman, H.; Van Dyke, N.; Vivacqua, A.: Let's Browse: A Collaborative Browsing Agent, Knowledge-Based Systems, 12(8), 1999, pp. 427-431.
[10]
Sheth, B.: A Learning Approach to Personalized
Information Filtering, M.S. Thesis, MIT- EECS dept, USA,
1994.
[11]
Joachims, T.; Freitag, D.; Mitchell, T.: WebWatcher: A Tour
Guide for the World Wide Web, In IJCAI (1), 1997, pp. 770-777
.
[12]
Winiwarter, W.: PEA - A Personal Email Assistant with
Evolutionary Adaptation, International Journal of
Information Technology, Vol. 5, No. 1, 1999.
[13]
Thomas, C.; Fischer, G.: Using agents to improve the
usability and usefulness of the world wide web. In
Proceedings of the Fifth International Conference on User
Modelling, pages 5--12, 1996.
[14]
Jennings, N; Wooldridge, M: Agent-oriented software
engineering, Handbook of Agent Technology (ed. J.
Bradshaw). AAAI/MIT Press, 2000.
[15]
Sesseler, R.; Albayrak, S.: Agent-based Marketplaces for
Electronic Commerce, International Conference on Artificial
Intelligence, IC-AI 2001.
[16]
Resnick, P.; Neophytos, J.; Suchak, M.; Bergstrom, P.;
Riedl, J.: GroupLens: An open architecture for collaborative
filtering of net news, Proceedings ACM Conference on
Computer-Supported Cooperative Work, pp. 175-186, 1994.
[17]
Albayrak, S.; Milosevic, D.: Self Improving Coordination in
Multi Agent Filtering Framework, IEEE/WIC/ACM
International Joint Conference on Intelligent Agent
technology (IAT 04) and Web Intelligence (WI 04), Beijing,
China, September 2004., (to appear).
[18]
Nichols, D.: Implicit Rating and Filtering, Proc. Fifth
DELOS Workshop on Filtering and Collaborative Filtering,
Budapest, Hungary, 10-12 November, ERCIM, pp. 31-36,
1997.
[19]
Tauritz, D.: Adaptive Information Filtering: concepts and
algorithms, Ph.D. dissertation, Leiden University, 2002.
[20]
Fricke, S.; Bsufka, K.; Keiser, J.; Schmidt, T.; Sesseler, R.;
Albayrak, S.: Agent-based Telematic Services and Telecom
Applications, Communications of the ACM, April. 2001.
[21]
Foundation for Intelligent Physical Agents, www.fipa.org,
2004.
[22]
Jing, L.; Huang, H.; Shi, H.: Improved Feature Selection
Approach TFIDF in Text Mining, Proc. 1st International
Conference on Machine Learning and Cybernetics, Beijing,
2002.
[23]
Albayrak, S; Milosevic D.: Situation-aware Coordination in
Multi Agent Filtering Framework, The 19th International
Symposium on Computer and Information Sciences (ISCIS
04), Antalya, Turkey, October 2004., (to appear).
[24]
Zbigniew, M.; Fogel, D.: How to Solve It: Modern
Heuristics, Springer-Verlag New York, Inc., New York,
NY, 2000.
| Adaptation and Learning;filtering;Recommendation systems;Agent-based deployed applications;Evolution;Intelligent and personalized filtering;Agents and complex systems;personal information agent;agent technology;Ubiquitous access |
28 | An Adaptive Information Retrieval System based on Associative Networks | In this paper we present a multilingual information retrieval system that provides access to Tourism information by exploiting the intuitiveness of natural language. In particular, we describe the knowledge representation model underlying the information retrieval system. This knowledge representation approach is based on associative networks and allows the definition of semantic relationships between domain-intrinsic information items. The network structure is used to define weighted associations between information items and augments the system with a fuzzy search strategy. This particular search strategy is performed by a constrained spreading activation algorithm that implements information retrieval on associative networks. Strictly speaking, we take the relatedness of terms into account and show, how this fuzzy search strategy yields beneficial results and, moreover, determines highly associated matches to users' queries. Thus, the combination of the associative network and the constrained spreading activation approach constitutes a search algorithm that evaluates the relatedness of terms and, therefore, provides a means for implicit query expansion. | Introduction
Providing easy and intuitive access to information
still remains a challenge in the area of information system
research and development. Moreover, as Van Rijsbergen
(1979) points out, the amount of available
information is increasing rapidly and offering accurate
and speedy access to this information is becoming
ever more difficult. This quote, although about
20 years old, is still valid nowadays if you consider the
amount of information offered on the Internet. But
how to address these problems? How to overcome
the limitations associated with conventional search interfaces
? Furthermore, users of information retrieval
systems are often computer illiterate and not familiar
with the required logic for formulating appropriate
queries, e.g. the burdens associated with Boolean
logic. This goes hand in hand with the urge to understand
what users really want to know from information
retrieval systems.
Standard information retrieval interfaces consist of
check boxes, predefined option sets or selection lists
forcing users to express her or his needs in a very restricted
manner. Therefore, an approach leaving the
means of expression in users' hands, narrows the gap
between users' needs and interfaces used to express
these needs. An approach addressing this particular
problem is to allow query formulation in natural
language. Natural language interfaces offer easy and
intuitive access to information sources and users can
express their information needs in their own words.
Hence, we present a multilingual information retrieval
system allowing for query formulation in natural
language. To reduce word sense ambiguities the
system operates on a restricted domain. In particular,
the system provides access to tourism information,
like accommodations and their amenities throughout
Austria.
However, the core element of the information retrieval
system remains the underlying knowledge representation
model.
In order to provide a knowledge
representation model allowing to define relations
among information items, an approach based on a
network structure, namely an associative network, is
used.
More precisely, this associative network incorporates
a means for knowledge representation allowing
for the definition of semantic relationships of
domain-intrinsic information. Processing the network
and, therefore, result determination is accomplished
by a technique refereed to as spreading activation.
Some nodes of the network act as sources of activation
and, subsequently, activation is propagated to
adjacent nodes via weighted links. These newly activated
nodes, in turn, transmit activation to associated
nodes, and so on.
We introduce a knowledge representation approach
based on an associative network consisting
of three layers. Moreover, a constrained spreading
activation algorithm implements a processing technique
that operates on the network. Due to the network
structure of the knowledge representation model
and the processing technique, implicit query expansion
enriches the result set with additional matches.
Hence, a fuzzy search strategy is implemented.
The remainder of the paper is organized as follows. In Section 2 we review the architecture of the information retrieval system that acts as a basis for the redeveloped approach presented herein. Moreover, Section 3 gives an overview of associative networks and we present an algorithm for processing such networks, i.e. spreading activation. In Section 4 we describe our approach based on associative networks and, finally, some conclusions are given in Section 5.
AD.M.IN A Natural Language Information Retrieval System
Crestani (1997) points out that information retrieval
is a science that aims to store and allow fast access
to a large amount of data. In contrast to conventional
database systems, an information retrieval system
does not provide an exact answer to a query but
tries to produce a ranking that reflects the intention
of the user. More precisely, documents are ranked according
to statistical similarities based on the occurrence
frequency of terms in queries and documents.
The occurrence frequency of a term provides an indicator
of the significance of this term. Moreover, in order
to get a measure for determining the significance
of a sentence, the position of terms within a sentence
is taken into account and evaluated. For comprehensive
reports about information retrieval see Salton
& McGill (1983), Salton (1989) and Baeza-Yates &
Ribeiro-Neto (1999).
In order to adapt information retrieval systems to
the multilingual demands of users, great efforts have
been made in the field of multilingual information retrieval
. Hull & Grafenstette (1996) subsume several
attempts to define multilingual information retrieval,
where Harman (1995) formulates the most concise
one: "multilingual information retrieval is information
retrieval in any language other than English".
Multilingual information retrieval systems have to
be augmented by mechanisms for query or document
translation to support query formulation in multiple
languages. Information retrieval is such an inexact
discipline that it is not clear whether or not query
translation is necessary or even optimal for identifying
relevant documents and, therefore, to determine
appropriate matches to the user query. Strictly speaking
, the process of translating documents or queries
represents one of the main barriers in multilingual information
retrieval.
Due to the shortness of user queries, query translation
introduces ambiguities that are hard to overcome
. Contrarily, resolving ambiguity in document
translation is easier to handle because of the quantity
of text available. Nevertheless, state-of-the-art
machine translation systems provide only an insufficient
means for translating documents. Therefore,
resolving ambiguities associated with translations remains
a crucial task in the field of multilingual information
retrieval. Ballesteros & Croft (1998), for
instance, present a technique based on co-occurrence
statistics from unlinked text corpora which can be
used to reduce the ambiguity associated with translations
. Furthermore, a quite straightforward approach
in reducing ambiguities is to restrict the domain a
multilingual information retrieval system operates on.
Xu, Netter & Stenzhorn (2000) describe an information
retrieval system that aims at providing
uniform multilingual access to heterogeneous data
sources on the web.
The MIETTA (Multilingual
Tourist Information on the World Wide Web) system
has been applied to the tourism domain containing
information about three European regions,
namely Saarland, Turku, and Rome. The languages
supported are English, Finnish, French, German, and
Italian. Since some of the tourism information about
the regions were available in only one language, machine
translation was used to deal with these web
documents. Due to the restricted domain, automatic
translation should suffice to understand the basic
meaning of the translated document without having
knowledge of the source language. Users can query
the system in various ways, such as free text queries,
form-based queries, or browsing through the concept
hierarchy employed in the system. MIETTA makes
it transparent to the users whether they search in a
database or a free-form document collection.
2.1
The Architecture of the Original System
The software architecture of the natural language information
retrieval system is designed as a pipeline
structure. Hence, successively activated pipeline elements
apply transformations on natural language
queries that are posed via arbitrary client devices,
such as, for instance, web browsers, PDAs or mobile
phones. Due to the flexibility of this approach, different
pipeline layouts can be used to implement different
processing strategies. Figure <A href="28.html#3">1 depicts the layout
of the software architecture and illustrates the way of
interaction of the pipeline elements.
In a first step, the natural language query is evaluated
by an automatic language identification module
to determine the language of the query. Next, the system
corrects typographic errors and misspellings to
improve retrieval performance. Before adding grammar
rules and semantic information to the query
terms, a converter transforms numerals to their numeric
equivalents. Depending on the rules assigned to
the query terms, a mapping process associates these
terms with SQL fragments that represent the query
in a formal way. Due to the fact that the system
uses a relational database as backend this mapping
process is crucial. In a next step the SQL fragments
are combined according to the modifiers (e.g. "and",
"or", "near", "not") identified in the query and a single
SQL statement that reflects the intention of the
query is obtained. Then the system determines the
appropriate result and generates an XML representation
for further processing. Finally, the XML result
set is adapted to fit the needs of the client device.
The remainder of this section gives a brief outline
of the system.
2.1.1
The Knowledge Base
A major objective of the Ad.M.In.(Adaptive Multilingual
Interfaces) system was to separate the program
logic from domain dependent data. In particular
, language, domain and device dependent portions
are placed in the knowledge base. Thus, the knowledge
base represents the backbone of the system and
consists of a relational database and a set of ontologies
. The database stores information about domain
entities, as, for instance, amenities of accommodations
. The ontologies store synonyms, define semantic
relations and grammar rules.
Basically, the knowledge base consists of separate
XML files, whereas the synonym ontology is used to
associate terms having the same semantic meaning,
i.e. describes linguistic relationships like synonymy.
The synonym ontology is based on a flat structure,
allowing to define synonymy. Taking a look at the
tourism domain, "playground" represents a concept
possessing several semantic equivalents, as, for instance
, "court".
Unfortunately, the synonym ontology provides no
means to associate concepts. Consider, for example,
the three concepts "sauna", "steam bath" and "vegetarian
kitchen". Straightforward, someone might derive
a stronger degree of relatedness between the concepts
"sauna" and "steam bath" as between "sauna"
and "vegetarian kitchen".
The second component of the knowledge base
stores a set of grammar rules.
More precisely, a
lightweight grammar describes how certain concepts
Figure 1: Software Architecture
may be modified by prepositions, adverbial or adjectival
structures that are also specified in the synonym
ontology. For a more detailed description we refer
to Berger (2001).
2.1.2
Language Identification
To identify the language of a query, an n-gram-based
text classification approach (cf. Cavnar & Trenkle
(1994)) is used. An n-gram is an n-character slice of
a longer character string. As an example, for n = 3,
the trigrams of the string "language" are: { la, lan,
ang, ngu, gua, uag, age, ge }.
Dealing with multiple
words in a string, the blank character is usually
replaced by an underscore " " and is also taken
into account for the construction of an n-gram document
representation. This language classification approach
using n-grams requires sample texts for each
language to build statistical models, i.e. n-gram frequency
profiles, of the languages. We used various
tourism-related texts, e.g. hotel descriptions and holiday
package descriptions, as well as news articles both
in English and German language. The n-grams, with
n ranging from 1...5, of these sample texts were analyzed
and sorted in descending order according to
their frequency, separately for each language. These
sorted histograms are the n-gram frequency profiles
for a given language. For a comprehensive description
see Berger, Dittenbach & Merkl (2003).
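A minimal version of this n-gram classifier might look as follows; the two tiny training snippets stand in for the tourism and news corpora mentioned above, and the profile length of 300 n-grams is an arbitrary choice.

```python
from collections import Counter

def ngram_profile(text, n_max=5, size=300):
    """Rank the most frequent 1..n_max-grams of a text (blanks become underscores)."""
    text = "_".join(text.lower().split())
    grams = Counter(text[i:i + n] for n in range(1, n_max + 1)
                    for i in range(len(text) - n + 1))
    ranked = [g for g, _ in grams.most_common(size)]
    return {g: rank for rank, g in enumerate(ranked)}

def out_of_place(doc_profile, lang_profile):
    """Cavnar & Trenkle style distance: sum of rank displacements."""
    penalty = len(lang_profile)
    return sum(abs(rank - lang_profile.get(g, penalty))
               for g, rank in doc_profile.items())

profiles = {"en": ngram_profile("the hotel offers a sauna and a steam bath"),
            "de": ngram_profile("das hotel bietet eine sauna und ein dampfbad")}
query = ngram_profile("hotel with sauna")
best_language = min(profiles, key=lambda lang: out_of_place(query, profiles[lang]))
```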
2.1.3
Error Correction
To improve retrieval performance, potential orthographic
errors have to be considered in the web-based
interface.
After identifying the language, a spell-checking
module is used to determine the correctness
of query terms. The efficiency of the spell checking
process improves during the runtime of the system
by learning from previous queries. The spell checker
uses the metaphone algorithm (cf. Philips (1990)) to
transform the words into their soundalikes. Because
this algorithm has originally been developed for the
English language, the rule set defining the mapping of
words to the phonetic code has to be adapted for other
languages. In addition to the base dictionary of the
spell checker, domain-dependent words and proper
names like names of cities, regions or states, have
to be added to the dictionary. For every misspelled
term of the query, a list of potentially correct words
is returned. First, the misspelled word is mapped to
its metaphone equivalent, then the words in the dictionary
, whose metaphone translations have at most
an edit distance (cf. Levenshtein (1966)) of two, are
added to the list of suggested words. The suggestions
are ranked according to the mean of first, the edit
distance between the misspelled word and the suggested
word, and second, the edit distance between
the misspelled word's metaphone and the suggested
word's. The smaller this value is for a suggestion, the
more likely it is to be the correct substitution from
the orthographic or phonetic point of view. However,
this ranking does not take domain-specific information
into account. Because of this deficiency, correctly
spelled words in queries are stored and their respective
number of occurrences is counted. The words
in the suggestion list for a misspelled query term are
looked up in this repository and the suggested word
having the highest number of occurrences is chosen
as the replacement of the erroneous original query
term. In case of two or more words having the same
number of occurrences the word that is ranked first
is selected. If the query term is not present in the
repository up to this moment, it is replaced by the
first suggestion, i.e. the word being phonetically or
orthographically closest. Therefore, suggested words
that are very similar to the misspelled word, yet make
no sense in the context of the application domain,
might be rejected as replacements. Consequently, the
word correction process described above is improved
by dynamic adaptation to past knowledge.
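The ranking of replacement candidates can be sketched as below. Here metaphone() is only a placeholder for a phonetic encoder such as the metaphone algorithm adapted to the query language, and the dictionary and usage counts passed in are illustrative assumptions.

```python
def edit_distance(a, b):
    """Levenshtein edit distance computed by dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

def rank_suggestions(misspelled, dictionary, metaphone, usage_counts):
    """Candidates whose phonetic code is within edit distance two are ranked by
    the mean of the orthographic and the phonetic edit distance; the final pick
    prefers the candidate seen most often in past queries, falling back to the
    first-ranked suggestion when no counts are available."""
    code = metaphone(misspelled)
    candidates = [w for w in dictionary if edit_distance(metaphone(w), code) <= 2]
    candidates.sort(key=lambda w: (edit_distance(w, misspelled) +
                                   edit_distance(metaphone(w), code)) / 2.0)
    if not candidates:
        return None
    return max(candidates, key=lambda w: usage_counts.get(w, 0))
```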
Another important issue in interpreting the natural
language query is to detect terms consisting of
multiple words. Proper names like "Bad Kleinkirch-heim"
or nouns like "parking garage" have to be
treated as one element of the query. Therefore, all
multi-word denominations known to the system are
stored in an efficient data structure allowing to identify
such cases. More precisely, regular expressions
are used to describe rules applied during the identification
process.
2.1.4
SQL Mapping
With the underlying relational database management
system PostgreSQL, the natural language query has
to be transformed into a SQL statement to retrieve
29
the requested information. As mentioned above the
knowledge base describes parameterized SQL fragments
that are used to build a single SQL statement
representing the natural language query. The query
terms are tagged with class information, i.e. the relevant
concepts of the domain (e.g. "hotel" as a type
of accommodation or "sauna" as a facility provided
by a hotel), numerals or modifying terms like "not",
"at least", "close to" or "in". If none of the classes
specified in the ontology can be applied, the database
tables containing proper names have to be searched.
If a noun is found in one of these tables, it is tagged
with the respective table's name, such that "Tyrol"
will be marked as a federal state. In the next step,
this class information is used by the grammar to select
the appropriate SQL fragments. Finally, the SQL
fragments have to be combined to a single SQL statement
reflecting the natural language query of the
user. The operators combining the SQL fragments
are again chosen according to the definitions in the
grammar.
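To illustrate the mapping step, the sketch below combines parameterized SQL fragments according to the tagged query terms. The fragment strings, table and column names are purely hypothetical and not taken from the actual Ad.M.In. schema, and only the "not" modifier and a fixed "and" connective are handled for brevity.

```python
# Hypothetical SQL fragments keyed by tagged concepts; the real system derives
# these from its grammar and ontology, and the schema names here are invented.
FRAGMENTS = {
    "hotel": "a.type = 'hotel'",
    "sauna": "a.id IN (SELECT accommodation_id FROM facilities WHERE name = 'sauna')",
    "federal_state:Tyrol": "a.state = 'Tyrol'",
}

def build_query(tagged_terms):
    """Combine fragments according to the modifiers identified in the query."""
    where = []
    for tag, modifier in tagged_terms:
        fragment = FRAGMENTS[tag]
        if modifier == "not":
            fragment = "NOT (%s)" % fragment
        where.append(fragment if not where else "AND %s" % fragment)
    return "SELECT a.* FROM accommodations a WHERE " + " ".join(where)

# "hotel with sauna, not in Tyrol"
print(build_query([("hotel", None), ("sauna", None), ("federal_state:Tyrol", "not")]))
```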
Associative Networks
Quillian (1968) introduced the basic principle of a
semantic network and it played, since then, a central
role in knowledge representation. The building
blocks of semantic networks are, first, nodes that express
knowledge in terms of concepts, second, concept
properties, and third, the hierarchical sub-super class
relationship between these concepts.
Each concept in a semantic network represents a
semantic entity. Associations between concepts describe
the hierarchical relationship between these semantic
entities via is-a or instance-of links.
The
higher a concept moves up in the hierarchy along is-a
relations, the more abstract is its semantic meaning
. Properties are attached to concepts and, therefore
, properties are also represented by concepts and
linked to nodes via labeled associations. Furthermore,
a property that is linked to a high-level concept is inherited
by all descendants of the concept. Hence, it is
assumed that the property applies to all subsequent
nodes. An example of a semantic network is depicted
in Figure <A href="28.html#5">2.
Semantic networks initially emerged in cognitive
psychology and the term itself has been used in the
field of knowledge representation in a far more general
sense than described above. In particular, the
term semantic network has been commonly used to
refer to a conceptual approach known as associative
network.
An associative network defines a generic
network which consists of nodes representing information
items (semantic entities) and associations between
nodes, that express, not necessarily defined or
labeled, relations among nodes. Links between particular
nodes might be weighted to determine the
strength of connectivity.
3.1
Spreading Activation
A commonly used technique, which implements information
retrieval on semantic or associative networks,
is often referred to as spreading activation.
The
spreading activation processing paradigm is closely tied to the supposed mode of operation of human memory. It was introduced to the field of artificial intelligence
to obtain a means of processing semantic or
associative networks. The algorithm, which underlies
the spreading activation (SA) paradigm, is based
on a quite simple approach and operates on a data
structure that reflects the relationships between information
items. Thus, nodes model real world entities
and links between these nodes define the relatedness
of entities. Furthermore, links might possess
, first, a specific direction, second, a label and,
third, a weight that reflects the degree of association.
This conceptual approach allows for the definition of
a more general, a more generic network than the basic
structure of a semantic network demands. Nevertheless
, it could be used to model a semantic network as
well as a more generic one, for instance an associative
network.
The idea, underlying spreading activation, is to
propagate activation starting from source nodes via
weighted links over the network. More precisely, the
process of propagating activation from one node to
adjacent nodes is called a pulse. The SA algorithm
is based on an iterative approach that is divided into
two steps: first, one or more pulses are triggered and,
second, a termination check determines if the process
has to continue or to halt.
Furthermore, a single pulse consists of a pre-adjustment
phase, the spreading process and a post-adjustment
phase.
The optional pre- and post-adjustment
phases might incorporate a means of activation
decay, or to avoid reactivation from previous
pulses. Strictly speaking, these two phases are used
to gain more control over the network. The spreading
phase implements propagation of activation over the
network. Spreading activation works according to the
formula:
I_j(p) = \sum_{i=1}^{k} O_i(p - 1) \, w_{ij}    (1)
Each node j determines the total input I_j at pulse p of all linked nodes. Therefore, the output O_i(p - 1) at the previous pulse p - 1 of node i is multiplied with the associated weight w_{ij} of the link connecting node i to node j, and the grand total for all k connected nodes
is calculated. Inputs or weights can be expressed by
binary values (0/1), inhibitory or reinforcing values
(-1/+1), or real values defining the strength of the
connection between nodes. More precisely, the first
two options are used in the application of semantic
networks, the latter one is commonly used for associative
networks. This is due to the fact that the type
of association does not necessarily have some exact
semantic meaning. The weight rather describes the
relationship between nodes. Furthermore, the output
value of a node has to be determined. In most cases,
no distinction is made between the input value and
the activation level of a node, i.e. the input value of a
node and its activation level are equal. Before firing
the activation to adjacent nodes a function calculates
the output depending on the activation level of the
node:
O_i = f(I_i)    (2)
Various functions can be used to determine the
output value of a node, for instance the sigmoid function
, or a linear activation function, but most commonly
used is the threshold function. The threshold
function determines, if a node is considered to be active
or not, i.e. the activation level of each node is
compared to the threshold value. If the activation
level exceeds the threshold, the state of the node is
set to active. Subsequent to the calculation of the
activation state, the output value is propagated to
adjacent nodes. Normally, the same output value is
sent to all adjacent nodes.
The process described
above is repeated, pulse after pulse, and activation
spreads through the network and activates more and
more nodes until a termination condition is met. Finally
, the SA process halts and a final activation state
is obtained. Depending on the application's task the
Figure 2: A semantic network example of tourism-related terms
activation levels are evaluated and interpreted accordingly.
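A compact sketch of a single pulse following Equations (1) and (2) with a threshold output function; the toy network and weights are illustrative only.

```python
# One spreading pulse over a weighted, directed associative network:
# I_j(p) = sum_i O_i(p-1) * w_ij, followed by a threshold output function (Eqs. 1 and 2).
def pulse(weights: dict[tuple[str, str], float],
          output_prev: dict[str, float],
          threshold: float = 0.1) -> dict[str, float]:
    incoming: dict[str, float] = {}
    for (i, j), w_ij in weights.items():
        incoming[j] = incoming.get(j, 0.0) + output_prev.get(i, 0.0) * w_ij
    # threshold function: a node only fires if its activation level exceeds the threshold
    return {j: (act if act > threshold else 0.0) for j, act in incoming.items()}

weights = {("sauna", "steam bath"): 0.8, ("sauna", "solarium"): 0.5, ("steam bath", "whirlpool"): 0.6}
out = {"sauna": 1.0}          # activation source
for p in range(2):            # two pulses
    out = pulse(weights, out)
    print(p + 1, out)
```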
3.2
Taming Spreading Activation
Unfortunately, the basic approach of spreading activation
entails some major drawbacks. Strictly speaking
, without appropriate control, activation might be
propagated all over the network. Furthermore, the semantics
of labeled associations are not incorporated
in SA and it is quite difficult to integrate an inference
mechanism based on the semantics of associations
. To overcome these undesired side-effects the
integration of constraints helps to tame the spreading
process (cf. Crestani (1997)). Some constraints
commonly used are described as follows.
Fan-out constraint: Nodes with a broad semantic
meaning possess a vast number of links to
adjacent nodes. This circumstance implies that
such nodes activate large areas of the network.
Therefore, activation should diminish at nodes
with a high degree of connectivity to avoid this
unwanted effect.
Distance constraint: The basic idea underlying
this constraint is, that activation ceases
when it reaches nodes far away from the activation
source. Thus, the term far corresponds
to the number of links over which activation was
spread, i.e. the greater the distance between two
nodes, the weaker is their semantic relationship.
According to the distance of two nodes their relation
can be classified. Directly connected nodes
share a first order relation. Two nodes connected
via an intermediate node are associated by a second
order relation, and so on.
Activation constraint: Threshold values are
assigned to nodes (it is not necessary to apply
the same value to all nodes) and are interpreted
by the threshold function. Moreover, threshold
values can be adapted during the spreading process
in relation to the total amount of activity in
the network.
Path constraint: Usually, activation spreads
over the network using all available links. The integration
of preferred paths allows to direct activation
according to application-dependent rules.
Another enhancement of the spreading activation
model is the integration of a feedback process. The
activation level of some nodes or the entire network
is evaluated by, for instance, another process or by
a user. More precisely, a user checks the activation
level of some nodes and adapts them according to
her or his needs.
Subsequently, activation spreads
depending on the user refinement. Additionally, users
may indicate preferred paths for spreading activation
and, therefore, are able to adapt the spreading process
to their own needs.
Recommendation via Spreading Activation
One of the first information retrieval systems using
constrained spreading activation was GRANT. Kjeldsen
& Cohen (1987) developed a system that handles
information about research proposals and potential
funding agencies. GRANT's domain knowledge is
stored in a highly associated semantic network. The
search process is carried out by constrained spreading
activation over the network. In particular, the system
extensively uses path constraints in the form of path
endorsement. GRANT can be considered as an inference
system applying repeatedly the same inference
schema:
IF x AND R(x, y) THEN y    (3)
R(x, y) represents a path between two nodes x and y.
This inference rule can be interpreted as follows: "if
a funding agency is interested in topic x and there is a relation between topic x and topic y, then the funding agency might be interested in the related topic y."
Croft, Lucia, Crigean & Willet (1989) developed
an information retrieval system initially intended to
study the possibility of retrieving documents by plausible
inference. In order to implement plausible inference, constrained spreading activation was chosen accidentally. The I³R system acts as a search intermediary
(cf. Croft & Thompson (1987)). To accomplish
this task the system uses domain knowledge to
refine user queries, determines the appropriate search
strategy, assists the user in evaluating the output and
reformulating the query. In its initial version, the domain
knowledge was represented using a tree structure
of concepts. The design was later refined to meet
the requirements of a semantic network.
Belew (1989) investigated the use of connectionist
techniques in an information retrieval system
called Adaptive Information Retrieval (AIR). The
system handles information about scientific publications
, like the publication title and the author. AIR
uses a weighted graph as knowledge representation
paradigm. For each document, author and keyword
(keywords are words found in publication titles) a
node is created and associations between nodes are
constructed from an initial representation of documents
and attributes. A user's query causes initial
activity to be placed on some nodes of the network.
This activity is propagated to other nodes until certain
conditions are met. Nodes with the highest level
of activation represent the answer to the query by the
AIR system. Furthermore, users are allowed to assign
a degree of relevance to the results (++, +, -, --).
This causes new links to be created and the adaptation
of weights between existing links. Moreover,
feedback is averaged across the judgments of many
users.
A mentionable aspect of the AIR system is that no
provision is made for the traditional Boolean operators
like AND and OR. Rather, AIR emulates these
logical operations because "the point is that the difference
between AND and OR is a matter of degree".
This insight goes back to Von Neumann (as pointed
out by Belew (1989)).
A system based on a combination of an ostensive
approach with the associative retrieval approach is described
in Crestani & Lee (2000). In the WebSCSA
(Web Searching by Constrained Spreading Activation
) approach a query does not consist of keywords.
Instead, the system is based on an ostensive approach
and assumes that the user has already identified relevant
Web pages that act as a basis for the following
retrieval process.
Subsequently, relevant pages are
parsed for links and they are followed to search for
other relevant associated pages. The user does not explicitly
refine the query. More precisely, users point to
a number of relevant pages to initiate a query and the
WebSCSA system combines the content of these pages
to build a search profile. In contrast to conventional
search engines WebSCSA does not make use of extensive
indices during the search process. Strictly speaking
, it retrieves relevant information only by navigating
the Web at the time the user searches for information
. The navigation is processed and controlled by
means of a constrained spreading activation model.
In order to unleash the power of WebSCSA, the system should be used when users already have a starting point for their search. Pragmatically speaking,
the intention of WebSCSA is to enhance conventional
search engines, use these as starting points and not
to compete with them.
Hartmann & Strothotte (2002) focus on a spreading
activation approach to automatically find associations
between text passages and multimedia material
like illustrations, animations, sounds, and videos.
Moreover, a media-independent formal representation
of the underlying knowledge is used to automatically
adapt illustrations to the contents of small text segments
. The system contains a hierarchical representation
of basic anatomic concepts such as bones, muscles
, articulations, tendons, as well as their parts and
regions.
Network structures provide a flexible model for
adaptation and integration of additional information
items. Nevertheless, Crestani (1997) points out that
"... the problem of building a network which effec-tively
represents the useful relations (in terms of the
IRs aims) has always been the critical point of many
of the attempts to use SA in IR. These networks are
very difficult to build, to maintain and keep up to date.
Their construction requires in depth application domain
knowledge that only experts in the application
domain can provide."
Dittenbach, Merkl & Berger (2003) present an approach
based on neural networks for organizing words
of a specific domain according to their semantic relations
. A two-dimensional map is used to display
semantically similar words in spatially adjacent regions. This
representation can support the construction and enrichment
of information stored in the associative network
.
4.1
The Redeveloped System Architecture
To overcome the limitations of the knowledge base of
the original system, the development of an alternative
approach to model domain knowledge was necessary
.
Basically, the unassociated, non-hierarchic
knowledge representation model inhibits the power
of the system. Strictly speaking, the original system
failed to retrieve results on a fuzzy basis, i.e. the results
determined by the system provide exact matches
only, without respect to first, interactions users made
during past sessions, second, personal preferences of
users, third, semantic relations of domain intrinsic information
, and fourth, locational interdependencies.
In order to adapt the system architecture accordingly
, an approach based on associative networks was
developed. This associative network replaces the flat
synonym ontology used in the original system. Moreover
, both the grammar rules and the SQL fragments
have been removed from the knowledge base.
More precisely, the functionality and logic is now covered
by newly developed pipeline elements or implicitly
resolved by the associative network.
In analogy
to the original pipeline, the first three processing
steps are accomplished.
Next, a newly implemented
initialization element associates concepts extracted from the query with nodes of the associative network. These nodes act as activation sources. Subsequently, the newly designed spreading element implements the process of activation propagation. Finally, the new evaluation element analyzes the activation
level of the associative network determined
during the spreading phase and produces a ranking
according to this activation level.
4.1.1
The Knowledge Representation Model
Basically, the knowledge base of the information retrieval
system is composed of two major parts: first,
a relational database that stores information about
domain entities and, second, a data structure based
on an associative network that models the relationships
among terms relevant to the domain. Each domain
entity is described by a freely definable set of attributes
. To provide a flexible and extensible means
for specifying entity attributes, these attributes are
organized as <name, value> pairs. An example from
the tourism domain is depicted in Table 1.
Hotel Wellnesshof
  category : 4
  facility : sauna
  facility : solarium
  facility : ...

Table 1: <name,value>-pair example for entity "Hotel Wellnesshof"
The associative network consists of a set of nodes
and each node represents an information item. Moreover
, each node is member of one of three logical layers
defined as follows:
Abstraction layer: One objective of the redevelopment
of the knowledge base was to integrate
information items with abstract semantic meaning
. More precisely, in contrast to the knowledge
base used in the original system which only
supported modeling of entity attributes, the new
approach allows the integration of a broader set
of terms, e.g. terms like "wellness" or "summer
activities" that virtually combine several information
items.
Conceptual layer: The second layer is used to
associate entity attributes according to their semantic
relationship. Thus, each entity attribute
has a representation at the conceptual layer. Furthermore
, the strengths of the relationships between
information items are expressed by a real
value associated with the link.
Entity layer: Finally, the entity layer associates
entities with information items (entity attributes
) of the conceptual layer, e.g. an entity
possessing the attribute "sauna" is associated
with the sauna node of the conceptual layer.
The building blocks of the network are concepts.
A concept represents an information item possessing
several semantically equivalent terms, i.e. synonyms,
in different languages. Each concept possesses one of
three different roles:
Concrete concepts are used to represent information
items at the conceptual layer. More
precisely, concrete concepts refer to entity attributes
.
Concepts with an abstract role refer to terms at
the abstraction layer.
Finally, the modifier role is used to categorize
concepts that alter the processing rules for abstract
or concrete concepts. A modifier like, for
instance, "not" allows the exclusion of concepts
by negation of the assigned initialization value.
Moreover, concepts provide, depending on their
role, a method for expressing relationships among
them.
The connectedTo relation defines a bidirectional
weighted link between two concrete concepts,
e.g. the concrete concept "sauna" is linked to "steam
bath". The second relation used to link information
items is the parentOf association. It is used to express
the sub-super class relationship between abstract concepts
or concrete and abstract concepts.
A set of concepts representing a particular domain
is described in a single XML file and acts as input
source for the information retrieval system. During
initialization, the application parses the XML file, instantiates
all concepts, generates a list of synonyms
pointing at corresponding concepts, associates concepts
according to their relations and, finally, links the
entities to concrete concepts. Currently, the associative
network consists of about 2,200 concepts, 10,000
links and more than 13,000 entities. The concept network
includes terms that describe the tourism domain
as well as towns, cities and federal states throughout
Austria.
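A sketch of how such a concept definition file could be parsed into concept objects with roles, synonyms, connectedTo weights and parentOf children; the XML element and attribute names are assumptions for illustration, since the paper does not specify the file format.

```python
import xml.etree.ElementTree as ET
from dataclasses import dataclass, field

@dataclass
class Concept:
    name: str
    role: str                                           # "concrete", "abstract" or "modifier"
    synonyms: list = field(default_factory=list)
    connected_to: dict = field(default_factory=dict)    # weighted links between concrete concepts
    children: list = field(default_factory=list)        # parentOf relation

SAMPLE = """<concepts>
  <concept name="sauna" role="concrete">
    <synonym lang="de">Sauna</synonym>
    <connectedTo ref="steam bath" weight="0.8"/>
  </concept>
  <concept name="wellness" role="abstract">
    <parentOf ref="sauna"/>
  </concept>
</concepts>"""

def load(xml_text: str) -> dict:
    concepts = {}
    root = ET.fromstring(xml_text)
    for el in root.findall("concept"):
        c = Concept(el.get("name"), el.get("role"))
        c.synonyms = [s.text for s in el.findall("synonym")]
        c.connected_to = {l.get("ref"): float(l.get("weight")) for l in el.findall("connectedTo")}
        c.children = [l.get("ref") for l in el.findall("parentOf")]
        concepts[c.name] = c
    return concepts

print(load(SAMPLE)["wellness"].children)   # ['sauna']
```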
To get a better picture of the interdependencies
associated with the layers introduced above, see Figure 3. Each layer holds a specific set of concepts.
Abstract concepts associate concepts at the same or
at the conceptual layer. Concepts at the conceptual
layer define links between entity attributes and associate
these attributes with entities at the entity layer.
Finally, entities are placed at the lowest layer, the
entity layer. Concepts at the entity layer are not associated
with items at the same layer. Consider, for
example, the abstract concept "indoor sports" and
the concept "sauna" as concepts from which activation
originates from. First, activation is propagated
between the abstraction layer to the conceptual layer
via the dashed line from "indoor sports" to "table tennis"
. We shall note, that dashed lines indicate links
between concepts of different layers. Thus, "sauna"
and "table tennis" act as source concepts and, moreover
, activation is spread through the network along
links at the conceptual layer. Activation received by
concepts at the conceptual layer is propagated to the
entities at the entity layer stimulating, in this particular
case, the entities "Hotel Stams", "Hotel Thaya"
as well as "Wachauerhof ". Moreover, a fraction of
activation is propagated to adjacent concept nodes at
the conceptual layer, i.e. "solarium", "whirlpool" as
well as "tennis", and to entities, i.e. "Hotel Wiental"
and "Forellenhof ", respectively.
4.1.2
Processing the Associative Network
Due to the flexibility and adaptivity of the original
system, the integration of the redesigned parts has
been accomplished with relatively little effort. In particular
, the existing knowledge base has been replaced
by the associative network and additional pipeline elements
to implement spreading activation have been
incorporated.
Figure 4 depicts the redeveloped knowledge base
on which the processing algorithm operates.
The
conceptual layer stores concrete concepts and the
weighted links among them.
Associating abstract
concepts with concrete concepts is done at the abstraction
layer. Each entity has a unique identifier
that is equivalent to the entity identifier stored in the
relational database. Furthermore, entities are connected
to concepts at the conceptual layer.
More
precisely, an entity is connected to all attributes it
possesses. As an example consider the entity "Hotel
Stams" as depicted in Figure <A href="28.html#9">4. This hotel offers
a "sauna", a "steam bath" and a "solarium" and is,
therefore, linked to the corresponding concepts at the
conceptual layer.
First, a user's query, received by the information
retrieval system, is decomposed into single terms. After
applying an error correction mechanism and a
phrase detection algorithm to the query, terms found
in the synonym lexicon are linked to their corresponding
concept at the abstraction or conceptual layer.
These concepts act as activation sources and, subsequently
, the activation process is initiated and activation
spreads according to the algorithm outlined
below.
At the beginning, the role of each concept is evaluated
. Depending on its role, different initialization
strategies are applied:
Modifier role: In case of the "not" modifier,
the initialization value of the subsequent concept
is multiplied with a negative number. Due to the
fact that the "and" and "or" modifiers are im-plicitly
resolved by the associative network, they
receive no special treatment. More precisely, if,
for instance, somebody is searching for an accommodation
with a sauna or solarium, those accommodations
offering both facilities will be ranked
higher than others, providing only one of the desired
facilities. Furthermore, the "near" modifier
reflecting geographic dependencies, is automatically
resolved by associating cities or towns
within a circumference of 15km. Depending on
the distance, the weights are adapted accordingly
, i.e. the closer they are together, the higher
is the weight of the link in the associative network
.
Abstract role: If a source concept is abstract,
the set of source concepts is expanded by resolving
the parentOf relation between parent and
child concepts.
This process is repeated until
all abstract concepts are resolved, i.e. the set of
source concepts contains members of the conceptual
layer only. The initial activation value is
propagated to all child concepts, with respect to
the weighted links.
Figure 3: Network layer interdependencies
Concrete role: The initial activation level of
concrete concepts is set to the initialization value
defined in the XML source file. The spreading
activation process takes place at the conceptual
layer, i.e. the connectedTo relations between adjacent
concepts are used to propagate activation
through the network.
After the initialization phase has completed, the iterative
spreading process is activated. During a single
iteration one pulse is performed, i.e. the number of
iterations equals the number of pulses. Starting from
the set of source concepts determined during initialization
, in the current implementation activation is
spread to adjacent nodes according to the following
formula:
O_i(p) = \begin{cases} 0 & \text{if } I_i(p) < \tau \\ \dfrac{F_i}{p+1} \, I_i(p) & \text{else} \end{cases}, \quad \text{with } F_i = 1 - \dfrac{C_i}{C_T}    (4)
The output, O_i(p), sent from node i at pulse p, is calculated as the fraction of F_i, which limits the propagation according to the degree of connectivity of node i (i.e. fan-out constraint, cf. Section 3.2), and p + 1, expressing the diminishing semantic relationship according to the distance of node i to activation source nodes (i.e. distance constraint, cf. Section 3.2). Moreover, F_i is calculated by dividing the number of concepts C_i directly connected to node i by the total number of nodes C_T building the associative network. Note, \tau represents a threshold value.
Simultaneously with calculating the output value for all connected nodes, the activation level I_i(p) of node i is added to all associated entities. More precisely, each entity connected to node i receives the same value and adds it to an internal variable representing the total activation of the entity. As an example, if the concept node "sauna" is activated, the activation potential is propagated to the entities "Hotel Stams" and "Hotel Thaya" (cf. Figure 4). Next, all
newly activated nodes are used in the subsequent iteration
as activation sources and the spreading process
continues until the maximum number of iterations is
reached.
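A minimal sketch of the spreading phase as reconstructed above: the output rule of Equation (4) combines the fan-out factor F_i = 1 - C_i/C_T with the 1/(p+1) distance decay, and activation reaching a concept is also credited to its linked entities and used to rank them. The network, weights and threshold are toy values.

```python
def spread(links, entity_links, sources, pulses=3, tau=0.01):
    total_nodes = len(links)
    activation = dict(sources)                  # concept -> activation level I_i
    entity_score = {}                           # entity  -> accumulated activation
    for p in range(1, pulses + 1):
        output = {}
        for i, level in activation.items():
            for e in entity_links.get(i, []):   # credit linked entities
                entity_score[e] = entity_score.get(e, 0.0) + level
            if level < tau:
                continue
            f_i = 1.0 - len(links.get(i, {})) / total_nodes     # fan-out constraint
            out_i = f_i / (p + 1) * level                       # distance decay per Eq. (4)
            for j, w in links.get(i, {}).items():
                output[j] = output.get(j, 0.0) + out_i * w
        activation = output
    return sorted(entity_score.items(), key=lambda kv: kv[1], reverse=True)

links = {"sauna": {"steam bath": 0.8, "solarium": 0.5}, "steam bath": {"whirlpool": 0.6},
         "solarium": {}, "whirlpool": {}}
entity_links = {"sauna": ["Hotel Stams", "Hotel Thaya"], "steam bath": ["Hotel Stams"],
                "solarium": ["Hotel Stams"], "whirlpool": ["Forellenhof"]}
print(spread(links, entity_links, {"sauna": 1.0}))
```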
After the spreading process has terminated, the
system inspects all entities and ranks them according
to their activation. Figure 5 depicts the results
determined for the example query
Ich und meine Kinder möchten in einem Hotel in Kitzbühel Urlaub machen. Es sollte ein Dampfbad haben.¹
In this particular case, the entities "Schwarzer
Adler Kitzb
uhel" and "Hotel Schloss Lebenberg
Kitzb
uhel" located in "Kitzb
uhel" are suggested to
be the best matching answers to the query. Moreover,
the result set includes matches that are closely related
to the user's query. Thus, depending on the relations
stored in the associative network, entities offering related
concepts are activated accordingly. More precisely
, not only the attributes "hotel", "steam bath"
and "kids" are taken into account, but also all other
related entity attributes (e.g. "sauna", "whirlpool",
"solarium", etc.) have some influence on the ranking
position. Furthermore, accommodations in cities in
the vicinity of "Kitzb
uhel" providing the same or even
better offers are also included in the result set. Thus,
the associative network provides a means for exact
information retrieval and incorporates a fuzzy search
strategy that determines closely related matches to
the user's query.
¹ Me and my kids would like to spend our holidays in a hotel in Kitzbühel. It should have a steam bath.
Figure 4: Knowledge base architecture
Conclusion
A natural language system based on an approach
described in Berger (2001) and Berger, Dittenbach,
Merkl & Winiwarter (2001) has been reviewed in this
paper and, furthermore, provided the basis for the research
presented herein. The reviewed system offers
multilingual access to information on a restricted domain
. In this particular case the system operates on
the tourism domain. Moreover, users of the search interface
are encouraged to formulate queries in natural
language, i.e. they are able to express their intentions
in their own words.
We developed a knowledge representation model
that facilitates the definition of semantic relations between
information items exemplified by terms of the
tourism domain.
In particular, an associative network
based on a three layered structure was introduced
. First, the abstraction layer allows modelling
of terms with a subjective or broader semantic meaning
, second, the conceptual layer is used to define
relations via weighted links between terms, and, finally
, the entity layer provides a means to associate
elements stored in a relational database with information
items in the associative network. Moreover,
a constrained spreading activation algorithm implements
a processing technique operating on the network
. Generally, the combination of the associative
nature of the knowledge representation model and the
constrained spreading activation approach constitutes
a search algorithm that evaluates the relatedness of
terms and, therefore, provides a means for implicit
query expansion.
The flexible method of defining relationships between
terms unleashes the ability to determine highly
associated results as well as results that are predefined
due to personal preferences. Moreover, especially designed
associative networks can be used to model scenarios
, as, for instance, a winter holiday scenario that
favors accommodations offering winter sports activities
by adapting the weights of links accordingly.
One important task for further enhancement is the
possibility to express the relevance of query terms.
Users should be able to assign a degree of significance
to terms. Consider, for example, a user searching for
an accommodation with several amenities in the capital
city of Austria. Moreover, the user is a vegetarian.
Therefore, a means for expressing the importance of
vegetarian kitchen is needed. In order to accomplish
this requirement, the system might be extended to
understand words that emphasis terms, e.g. in analogy
to modifiers like "and", "or", "near", etc. the
word "important" is handled like a modifier and influences
the activation level of the following query
term. Additionally, an interface providing a graphical
instrument to express relevance by means of a
slide controller might be considered.
Furthermore, an associative network might act as
a kind of short term memory. More precisely, during
a user session a particular network is used to store
the activation level determined during past user interactions
. A user, for instance, is searching for a
hotel in Vienna. Thus, the associative network stores
the activation level for further processing. Next, the
user might restrict the results to accommodations offering
a sauna. This spreading process is carried out
using the associative network determined during the
previous interaction.
References
Baeza-Yates, R. A. & Ribeiro-Neto, B. (1999),
Modern Information Retrieval, Addison-Wesley,
Reading, MA.
Ballesteros, L. & Croft, W. B. (1998), Resolving
ambiguity for cross-language retrieval, in `Re-search
and Development in Information Retrieval'
, pp. 64-71.
Belew, R. K. (1989), Adaptive information retrieval:
Using a connectionist representation to retrieve
and learn about documents, in N. J. Belkin & C. J. Van Rijsbergen, eds, `Proceedings of the 12th International Conference on Research and Development in Information Retrieval (SIGIR'89)', ACM, pp. 11-20.
Berger, H. (2001), Adaptive multilingual interfaces,
Master's thesis, Vienna University of Technology
.
Berger, H., Dittenbach, M. & Merkl, D. (2003),
Querying tourism information systems in natural
language, in `Proceedings of the 2nd International
Conference on Information System
Technology and its Applications (ISTA 2003)',
Kharkiv, Ukraine.
Berger, H., Dittenbach, M., Merkl, D. & Winiwarter,
W. (2001), Providing multilingual natural language
access to tourism information, in W. Winiwarter
, S. Bressan & I. K. Ibrahim, eds, `Proceedings
of the 3rd International Conference on
Figure 5: Weighted result set determined by constrained spreading activation
Information Integration and Web-based Applications
and Services (IIWAS 2001)', Austrian
Computer Society, Linz, Austria, pp. 269-276.
Cavnar, W. B. & Trenkle, J. M. (1994), N-gram-based
text categorization, in `International Symposium
on Document Analysis and Information
Retrieval', Las Vegas, NV.
Crestani, F. (1997), `Application of spreading activation
techniques in information retrieval', Artificial
Intelligence Review 11(6), 453-582.
Crestani, F. & Lee, P. L. (2000), `Searching the web
by constrained spreading activation', Information
Processing and Management 36(4), 585-605.
Croft, W., Lucia, T., Crigean, J. & Willet, P. (1989),
`Retrieving documents by plausible inference: an
experimental study', Information Processing &
Management 25(6), 599-614.
Croft, W. & Thompson, R. H. (1987), `I³R: A New Approach to the Design of Document Retrieval Systems', Journal of the American Society for Information Science 38(6), 389-404.
Dittenbach, M., Merkl, D. & Berger, H. (2003), Using
a connectionist approach for enhancing domain
ontologies: Self-organizing word category
maps revisited, in `Proceedings of the 5th International
Conference on Data Warehousing and
Knowledge Discovery - (DaWaK 2003)'.
Accepted
for publication.
Harman, D. K. (1995), Overview of the 3rd Text
Retrieval Conference (TREC-3), in D. K. Harman
, ed., `Proceedings of the 3rd Text Retrieval
Conference (TREC-3)', NIST Special Publication
500-225, pp. 1-19.
Hartmann, K. & Strothotte, T. (2002), A spreading
activation approach to text illustration, in `Proceedings
of the 2nd International Symposium on
Smart graphics', ACM Press, pp. 39-46.
Hull, D. A. & Grafenstette, G. (1996), Querying
across languages: A dictionary-based approach
to multilingual information retrieval, in `Proceedings
of ACM SIGIR Conference on Research
and Development in Information Retrieval (SIGIR
1996)', pp. 49-57.
Kjeldsen, R. & Cohen, P. (1987), `The evolution and
performance of the GRANT system', IEEE Expert
pp. 73-79.
Levenshtein, V. I. (1966), `Binary codes capable of
correcting deletions, insertions and reversals',
Soviet Physics Doklady 10(8), 707-710.
Philips, L. (1990), `Hanging on the metaphone', Computer
Language Magazine 7(12).
Quillian, M. R. (1968), Semantic memory, in M. Minsky, ed., `Semantic Information Processing', MIT Press, pp. 227-270.
Salton, G. (1989), Automatic Text Processing: The
Transformation, Analysis, and Retrieval of Information
by Computer, Addison-Wesley, Reading
, MA.
Salton, G. & McGill, M. J. (1983), Introduction
to Modern Information Retrieval, McGraw-Hill,
New York.
Van Rijsbergen, C. J. (1979), Information Retrieval,
Department of Computer Science, University of
Glasgow.
Xu, F., Netter, K. & Stenzhorn, H. (2000), Mietta a
framework for uniform and multilingual access
to structured database and web information, in
`Proceedings of the 5th International Workshop
on Information Retrieval with Asian languages'.
36 | natural language information retrieval;constrained spreading activation;query expansion;spreading activation;multilingual information retrieval system;knowledge representation model;associative networks;knowledge representation;natural language query |
29 | An Analytical Model Based on G/M/1 with Self-Similar Input to Provide End-to-End QoS in 3G Networks | The dramatic increase in demand for wireless Internet access has lead to the introduction of new wireless architectures and systems including 3G, Wi-Fi and WiMAX. 3G systems such as UMTS and CDMA2000 are leaning towards an all-IP architecture for transporting IP multimedia services, mainly due to its scalability and promising capability of inter-working heterogeneous wireless access networks. During the last ten years, substantial work has been done to understand the nature of wired IP traffic and it has been proven that IP traffic exhibits self-similar properties and burstiness over a large range of time scales. Recently, because of the large deployment of new wireless architectures, researchers have focused their attention towards understanding the nature of traffic carried by different wireless architecture and early studies have shown that wireless data traffic also exhibits strong long-range dependency. Thus, the classical tele-traffic theory based on a simple Markovian process cannot be used to evaluate the performance of wireless networks. Unfortunately, the area of understanding and modeling of different kinds of wireless traffic is still immature which constitutes a problem since it is crucial to guarantee tight bound QoS parameters to heterogeneous end users of the mobile Internet. In this paper, we make several contributions to the accurate modeling of wireless IP traffic by presenting a novel analytical model that takes into account four different classes of self-similar traffic. The model consists of four queues and is based on a G/M/1 queueing system. We analyze it on the basis of priority with no preemption and find exact packet delays. To date, no closed form expressions have been presented for G/M/1 with priority. | INTRODUCTION
During the past decade, researchers have made significant efforts
to understand the nature of Internet traffic and it has been proven
that Internet traffic exhibits self-similar properties. The first
study, which stimulated research on self-similar traffic, was
based on measurements of Ethernet traffic at Bellcore [1].
Subsequently, the self-similar feature has been discovered in
many other types of Internet traffic including studies on
Transmission Control Protocol (TCP) [2, 3], WWW traffic [4],
VBR video [5] and Signaling System No 7 [6]. Deeper studies
into the characteristics of Internet traffic has discovered and
investigated various properties such as self-similarity [7], long-range
dependence [8] and scaling behavior at small time-scale
[9]. The references [10, 11] provide two extensive bibliographies
on self-similarity and long-range dependence research covering
both theoretical and applied papers on the subject.
Concurrently, over the past few years, we have witnessed a
growing popularity of Third Generation Systems (3G), which
have been designed to provide high-speed data services and
multimedia applications over mobile personal communication
networks. The Universal Mobile Telecommunication System
(UMTS) is the predominant global standard for 3G developed by
Third Generation Partnership Project (3GPP) [12]. The UMTS
architecture is shown in Fig. 1. It consists of two service
domains, a Circuit Switched (CS) service domain and a Packet
Switched (PS) service domain, which is of interest in this paper.
In the PS service domain, a UMTS network connects to a public
data network (PDN) through Serving GPRS Support node
(SGSN) and Gateway GPRS support node (GGSN). 3GPP has
defined four different QoS classes for UMTS; (1) Conversational
(2) Interactive (3) Streaming and (4) Background, conversational
being the most delay-sensitive and background the least delay
sensitive class [12].
With the increasing demand for Internet connectivity and the
flexibility and wide deployment of IP technologies, there has
emerged a paradigm shift towards IP-based solutions for wireless
networking [13]. Several Wireless IP architectures have been
proposed [17-23] based on three main IP QoS models, IntServ
[14], DiffServ [15] and MPLS [16]. 3GPP has also recently
introduced a new domain called IP Multimedia Subsystem (IMS)
for UMTS. The main objective of IMS is to deliver innovative
Fig. 1: A Simplified UMTS Network Architecture
and cost-effective services such as IP telephony, media streaming
and multiparty gaming by providing IP connectivity to every
mobile device [24].
In the light of this, researchers have recently focused on
understanding the nature of wireless IP traffic and early studies
have shown that wireless data traffic also exhibits self-similarity
and long-range dependency [25-28]. Much of the current
understanding of wireless IP traffic modeling is based on the
simplistic Poisson model, which can yield misleading results and
hence poor wireless network planning. Since the properties and
behavior of self-similar traffic is very different from traditional
Poisson or Markovian traffic, several issues need to be addressed
in modeling wireless IP traffic to provide end-to-end QoS to a
variety of heterogeneous applications. We begin by giving an
overview of related work on wired and wireless IP traffic
modeling along with a comparison of our model with previous
work.
RELATED WORK
In this section, we first discuss the related work which has been
done in the area of performance evaluation of wired IP and
Wireless IP networks under self-similar input and then we
compare our model with the previous ones.
2.1
Previous Work on IP Traffic Modeling
There has been much work done on Internet traffic modeling based
on queueing theory in the presence of self-similar traffic [29-34].
In [33], a Matrix Geometric (analytical) method is used to compute
numerical results for a two class DiffServ link being fed by a
Markovian Modulated Poisson Process (MMPP) input. A
weakness of this model is that MMPP may require an estimation of
a large number of parameters. An OPNET based simulation
approach was adopted in [34] to see the impact of self-similarity
on the performance evaluation of DiffServ networks. As a result,
an idea of expected queue length was given in relation to the Hurst
parameter and server utilization. It is difficult to offer guaranteed
QoS parameters on the basis of such analysis. The major weakness
of the majority of available queueing based results is that only the
FIFO queueing discipline has been considered for serving the
incoming traffic and thus differential treatment to different kinds
of traffic can not be provided. In addition, the previous results are
asymptotic. We also refer the readers to [35-39] for an overview of
previous work that has been carried out to evaluate the
performance of IP networks. The major drawback of the existing
work is that, the queueing models considered are not able to
capture the self-similar characteristics of Internet traffic.
Furthermore, it is important to note that most of the previous work
is focused on the analysis of one type of traffic only without
discussing its effect on the performance of other kinds of network
traffic.
2.2
Previous Work on Wireless IP Traffic
Modeling
Few studies have focused on wireless traffic modeling and here we
discuss the most relevant work. As shown in Fig. 1, the principle
of allocation of data flows between end users and GGSN leads to
increasing load on the network elements when moving closer to
the GGSN. Hence, GGSN is the node most exposed to self-similar
influence in UMTS [40]. The influence of self-similar input on
GGSN performance in the UMTS Release 5 IM-subsystem has
been analyzed on the basis of a FBM/D/1/W queueing system
(FBM-Fractional Brownian Motion) in [40]. In this work, different
probabilistic parameters of GGSN such as average queue length
and average service rate were also found. The work in [41]
presents modeling and a simulation study of the Telus Mobility (a
commercial service provider) Cellular Digital Packet Data (CDPD)
network. The collected results on average queueing delay and
buffer overflow probability indicated that genuine traffic traces
produce longer queues as compared to traditional Poisson based
traffic models. To get an overview of the analysis done in wireless
IP traffic modeling with self-similar input, we refer the readers to
[42-45]. These studies are merely based on characterization of
wireless traffic. To provide differential treatment to multiple
traffic classes with different QoS demands, there is a need to
accurately determine end-to-end QoS parameters such as delay,
jitter, throughput, packet loss, availability and per-flow sequence
preservation.
2.3
Comparison of our Model with Prior
Work
To overcome the limitations of the previous work in traffic
modeling (wired and wireless IP traffic), we present a realistic and
novel analytical model by considering four different classes of
traffic that exhibit long-range dependence and self-similarity. Our
model implements four queues based on a G/M/1 queueing system
and we analyze it on the basis of priority with no preemption. The
traffic model considered is parsimonious with few parameters and
has been studied in [46]. The model is furthermore similar to
on/off processes, in particular to its variation N-Burst model
studied in [47] where packets are incorporated. However, only a
single type of traffic is considered in [47]. We present a novel
analytical approach and make the following contributions to
Wireless IP traffic modeling.
Interarrival Time Calculations: We calculate the packet
interarrival time distributions for the particular self-similar traffic
model [46] for the first time in this paper. The distribution of cross
interarrival time between different types of packets is derived on
the basis of single packet results.
Packet Delays for Multiple Self-Similar Traffic Classes: We
consider a G/M/1 queueing system which takes into account four
different classes of self-similar input traffic denoted by SS/M/1
and analyze it on the basis of non preemptive priority and find
exact packet delays. To date, no closed form expressions have
been presented for G/M/1 with priority.
Embedded Markov Chain Formulation: We also formulate the
embedded Markov chain of G/M/1 by considering all possible
states and derive the corresponding transition probabilities.
The rest of the paper is organized as follows. Sections 3 and 4 are
devoted to explaining the self-similar traffic model with multiple
classes and the calculation of interarrival times respectively.
Section 5 explains the procedure of formulating the embedded
Markov Chain along with the derivation of packet delays. The
applications of the model are discussed in Section 6. Finally, conclusions and future work are given in Section 7.
TRAFFIC MODEL
The traffic model considered here [46] belongs to a particular class
of self-similar traffic models also called telecom process in [48],
recently. The model captures the dynamics of packet generation
while accounting for the scaling properties of the traffic in
telecommunication networks. Such models, also called infinite
source models, are similar to on/off processes with heavy tailed on
and/or off times. What is more, our model abstracts the packet
arrival process in particular and facilitates queueing analysis by
the approaches developed in the sequel.
In the framework of a Poisson point process, the model represents
an infinite number of potential sources. The traffic is found by
aggregating the number of packets generated by such sources.
Each source initiates a session with a heavy-tailed distribution, in
particular a Pareto distribution whose density is given by
g(r) = \delta \, b^{\delta} \, r^{-\delta - 1}, \quad r > b,
where \delta is related to the Hurst parameter by H = (3 - \delta)/2.
The sessions arrive according to a Poisson process with rate \nu. The packets arrive according to a Poisson process with rate \alpha, locally, over each session.
For each class, the traffic Y(t), measured as the total number of packets injected in [0, t], is found by
Y(t) = \sum_{i: S_i \le t} U_i\big( \min(R_i, \, t - S_i) \big),
where U_i, R_i and S_i denote the local Poisson process, the duration and the arrival time of session i, respectively. Hence, Y(t) corresponds to the sum of packets generated by all sessions initiated in [0, t] until the session expires if that happens before t, and until t if it does not. The stationary version of this model based on an infinite past is considered in the calculations below. The packet sizes are assumed to be fixed because each queue corresponds to a certain type of application where the packets have fixed size or at least fixed service time distribution.
The traffic model Y is long-range dependent and almost second-order
self-similar; the auto covariance function of its increments is
that of fractional Gaussian noise. Three different heavy traffic
limits are possible depending on the rate of increase in the traffic
parameters [46, 48]. Two of these limits are well known self-similar
processes, fractional Brownian motion and Levy process,
which do not account for packet dynamics in particular.
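A Monte Carlo sketch of this traffic model under the notation reconstructed above (session rate \nu, per-session packet rate \alpha, Pareto(\delta, b) durations); it only samples sessions starting after time 0, so it approximates rather than reproduces the stationary version, and all parameter values are illustrative.

```python
import random

def simulate_Y(t_end, nu=2.0, alpha=5.0, delta=1.4, b=1.0, seed=0):
    # Sessions arrive as a Poisson(nu) process; each lasts a Pareto(delta, b) time and emits
    # packets as a local Poisson(alpha) process; Y(t_end) counts packets emitted in [0, t_end].
    rng = random.Random(seed)
    packets = 0
    s = rng.expovariate(nu)                          # first session arrival
    while s < t_end:
        duration = b * rng.paretovariate(delta)      # Pareto with tail index delta, scale b
        active = min(duration, t_end - s)            # session truncated at t_end
        u = 0.0
        while True:                                  # packet arrivals within the session
            u += rng.expovariate(alpha)
            if u > active:
                break
            packets += 1
        s += rng.expovariate(nu)                     # next session arrival
    return packets

print(simulate_Y(t_end=100.0))   # delta = 1.4 corresponds to H = (3 - delta)/2 = 0.8
```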
INTERARRIVAL TIMES
Packet interarrival time distributions for the particular self-similar
traffic model are calculated for the first time in this paper. We
consider a single type of packet first. The distributions of cross
interarrival time between different types of packets are derived on
the basis of single packet results.
4.1
Interarrival Times for a Single Class
Although the packet arrival process itself is long-range dependent
and shows self-similarity, the number of alive sessions at a period
of time, say of length t, has a stationary distribution and is
Poisson distributed. The alive sessions at any time can be further
split into independent components as those session that last longer
than t and those that expire before t. Such results are well known
[49, pg.273] and will be used to derive the interarrival time
distribution of the packets.
Given that there is a packet arrival at an instant in time, we aim to
find the distribution of the time until next arrival denoted by T.
We will find \bar{F}(t) = P\{T > t\}, for t \ge 0. When the event \{T > t\} is considered, the information that there is a packet arrival is equivalent to the information that there is at least one session alive at the given instant. This follows from the assumption that the local packet generation process is Poisson over each session. The probability that the next interarrival is greater than t
on a particular session is the same as the probability that the
remaining time until next arrival is greater than t due to the
memoryless property of exponential distribution. That is,
P\{T > t\} = P\{\text{Next packet interarrival is greater than } t \mid \text{there is a packet arrival}\}
= \frac{1}{\rho} \, P\{\text{Next packet interarrival is greater than } t, \text{ there is at least one alive session}\}
where \rho is the probability that there is at least one alive session, in other words the utilization of an M/G/\infty system. The event that the next packet interarrival is greater than t can be split as follows:
The active sessions that expire before t do not incur
any new arrivals.
The active sessions that expire after t do not incur any
new arrivals
No new session arrivals in t or at least one session
arrival with no packet arrival in t.
We find the probability that all three events occur at the same
time by using the independence of a Poisson point process over
disjoint sets. The result is
P\{T > t\} = \frac{1}{\rho} \Big\{ \exp\!\big[\nu A(t)\,(e^{-\alpha t} - 1)\big] \, \exp\!\Big[\nu B(t)\Big(\tfrac{1 - e^{-\alpha t}}{\alpha t} - 1\Big)\Big] \, \exp\!\Big[\nu t\Big(\tfrac{1 - e^{-\alpha t}}{\alpha t} - 1\Big)\Big] - \exp\!\big[-\nu (A(t) + B(t))\big] \, \exp\!\Big[\nu t\Big(\tfrac{1 - e^{-\alpha t}}{\alpha t} - 1\Big)\Big] \Big\}
where
A(t) = \int_t^{\infty} (y - t) \, g(y) \, dy    (1)
B(t) = \int_0^t y \, g(y) \, dy + t \, \bar{G}(t)    (2)
and
\rho = 1 - \exp\!\big(-\nu \, E[\text{session duration}]\big) = 1 - \exp\!\big[-\nu \, \delta b / (\delta - 1)\big],
because the steady state number in the system in an M/G/\infty queue is Poisson distributed with mean \nu \, E[\text{session duration}] [50], and \delta and b are the parameters of the session duration with complementary distribution function \bar{G} and density g(r) = \delta \, b^{\delta} \, r^{-\delta - 1}, r > b, which is Pareto.
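For numerical work, A(t), B(t) and \rho can be evaluated directly; the closed forms below are elementary integrals of Equations (1) and (2) for the Pareto density, derived here for the sketch, with illustrative parameter values.

```python
import math

def mean_duration(delta, b):
    # E[session duration] for the Pareto density g(r) = delta * b**delta * r**(-delta-1), r > b
    return delta * b / (delta - 1)

def A(t, delta, b):
    if t <= b:                        # every session outlives t
        return mean_duration(delta, b) - t
    return b**delta * t**(1 - delta) / (delta - 1)

def B(t, delta, b):
    # A(t) + B(t) = E[session duration] follows directly from Eqs. (1) and (2)
    return mean_duration(delta, b) - A(t, delta, b)

def rho(nu, delta, b):
    return 1.0 - math.exp(-nu * mean_duration(delta, b))

delta, b, nu = 1.4, 1.0, 2.0
for t in (0.5, 1.0, 2.0, 10.0):
    print(f"t={t:5.1f}  A={A(t, delta, b):8.4f}  B={B(t, delta, b):8.4f}")
print("rho =", round(rho(nu, delta, b), 4))
```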
4.2
Interarrival Times for Multiple Classes
Here we explain the detailed procedure to find the interarrival times for two classes; the interarrival times for more than two classes can be found in a similar way. Let T_{ij} denote the interarrival time between a class i packet that comes first and a class j packet that follows, i, j = 1, 2. The analysis, which can be extended to i, j \ge 3, provides a method for other self-similar models as well, provided that the distributions of the interarrivals T_i are available.
For the consecutive packet 1 arrival time T_{11}, we have
P\{T_{11} > t\} = P\{T_1 > t, \text{ no arrivals of class 2 in } T_1\}
= \int_t^{\infty} P\{\text{no arrivals of class 2 in } s\} \, f_{T_1}(s) \, ds
= \int_t^{\infty} \bar{F}_2^0(s) \, f_{T_1}(s) \, ds
where
\bar{F}_2^0(t) = \exp\!\big[\nu_2 A_2(t)\,(e^{-\alpha_2 t} - 1)\big] \, \exp\!\Big[\nu_2 B_2(t)\Big(\tfrac{1 - e^{-\alpha_2 t}}{\alpha_2 t} - 1\Big)\Big] \, \exp\!\Big[\nu_2 t\Big(\tfrac{1 - e^{-\alpha_2 t}}{\alpha_2 t} - 1\Big)\Big].    (3)
A_2(t) and B_2(t) are defined analogously as in (1) and (2), and we used the independence of class 1 and class 2 packet inputs. Here, \bar{F}_2^0 is found through similar arguments as used for P\{T > t\} in the last subsection, without assuming that there is an alive session of type 2. As a result, by differentiation we find
f_{T_{11}}(t) = f_{T_1}(t) \, \bar{F}_2^0(t).
Now consider the interarrival time T_{12} occurring between a class 1 packet followed by a class 2 packet. For T_{12}, we get
P\{T_{12} > t\} = \int_t^{\infty} f_2^0(s) \, P\{\text{no arrivals of class 1 in } s \mid \text{a class 1 packet arrived}\} \, ds
= \int_t^{\infty} f_2^0(s) \, \bar{F}_{T_1}(s) \, ds,
where f_2^0(s) is the density function corresponding to the event that there is an arrival of a class 2 packet at time s, and we used the independence of the class 1 and class 2 packet streams. As a matter of fact, f_2^0(s) can be obtained by taking the derivative of the complementary distribution function \bar{F}_2^0 given in (3). As a result, we get
f_{T_{12}}(t) = f_2^0(t) \, \bar{F}_{T_1}(t).
Similarly, it can be shown that
f_{T_{22}}(t) = f_{T_2}(t) \, \bar{F}_1^0(t), \qquad f_{T_{21}}(t) = f_1^0(t) \, \bar{F}_{T_2}(t).
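Once the complementary distributions are available as callables, the cross-interarrival densities follow from the relations above by numerical differentiation; the exponential placeholders below stand in for the model's actual \bar{F} functions.

```python
import math

def density(Fbar, t, h=1e-5):
    # f = -d/dt Fbar, approximated by a central difference
    return -(Fbar(t + h) - Fbar(t - h)) / (2 * h)

def f_T12(t, Fbar_T1, Fbar_2_0):
    # f_T12(t) = f_2^0(t) * Fbar_T1(t), per the relation derived above
    return density(Fbar_2_0, t) * Fbar_T1(t)

Fbar_T1 = lambda t: math.exp(-1.3 * t)      # placeholder, not the model's actual distribution
Fbar_2_0 = lambda t: math.exp(-0.7 * t)     # placeholder, not the model's actual distribution
print(f_T12(0.5, Fbar_T1, Fbar_2_0))
```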
QUEUEING MODEL
We consider a model of four queues based on G/M/1 by
considering four different classes of self-similar input traffic
denoted by SS/M/1, and analyze it on the basis of priority with no
preemption. Let the service time distribution have rate \mu_1, \mu_2, \mu_3 and \mu_4 for type 1, type 2, type 3 and type 4 packets,
respectively, and let type 1 packets have the highest priority and
type 4 packets have the lowest priority.
5.1 SS/M/1 With Four Classes
The usual embedded Markov chain [51] formulation of G/M/1 is based on the observation of the queueing system at the time of arrival instants, right before an arrival. At such instants, the number in the system is the number of packets that the arriving packet sees in the queue plus packets in service, if any, excluding the arriving packet itself. We specify the states and the transition probability matrix P of the Markov chain with the self-similar model for four types of traffic.
Let \{X_n : n \ge 0\} denote the embedded Markov chain at the time of arrival instants. As the service is based on priority, the type of packet in service is important at each arrival instant of a given type of packet to determine the queueing time. Therefore, we define the state space as:
S = \{ (i_1, i_2, i_3, i_4, a, s) : i_1, i_2, i_3, i_4 \in Z_+, \; a \in \{a_1, a_2, a_3, a_4\}, \; s \in \{s_1, s_2, s_3, s_4, I\} \}    (4)
where a_1, a_2, a_3, a_4 are labels to denote the type of arrival, s_1, s_2, s_3, s_4 are labels to denote the type of packet in service, i_1, i_2, i_3, i_4 are the number of packets in each queue including a possible packet in service, I denotes the idle state in which no packet is in service or queued, and Z_+ is the set of nonnegative integers. Some of the states in the state space S given in (4) have zero probability. For example, (i_1, 0, i_3, i_4, a_1, s_2) is impossible. The particular notation in (4) for S is chosen for simplicity, although the impossible states could be excluded from S. Each possible state, the reachable states from each and the corresponding transition probabilities will be calculated.
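A sketch of how a truncated version of this state space could be enumerated and indexed for building the transition matrix P; the per-queue capacity is an illustrative choice, and impossible states (a type-m packet in service while i_m = 0) are filtered out as discussed above.

```python
from itertools import product

CAP = 3                                   # truncation per queue so the chain is finite
ARRIVALS = ["a1", "a2", "a3", "a4"]
IN_SERVICE = ["s1", "s2", "s3", "s4"]

def possible(state):
    i, s = state[:4], state[5]
    m = IN_SERVICE.index(s)               # type of the packet in service
    return i[m] >= 1                      # its queue count must include it

states = [st for st in product(range(CAP + 1), range(CAP + 1), range(CAP + 1), range(CAP + 1),
                               ARRIVALS, IN_SERVICE) if possible(st)] + ["I"]
index = {st: k for k, st in enumerate(states)}   # row/column index into the transition matrix P
print(len(states), "states;", index["I"], "is the idle state")
```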
5.2 States of the Embedded Markov Chain
The states of the Markov chain and the possible transitions with
respective probabilities can be enumerated by considering each
case. We will only analyze the states with non-empty queues in
this paper.
5.2.1 States (i_1, i_2, i_3, i_4, a, s) with i_1, i_2, i_3, i_4 > 0
We can divide the states and transitions into 256 groups, because (a, s) can occur 4 x 4 = 16 different ways, and the next state (p, q) can be composed similarly in 16 different ways, with p \in \{a_1, a_2, a_3, a_4\} and q \in \{s_1, s_2, s_3, s_4\}. We will analyze only the first one in detail; the others follow similarly.
5.2.2 Transition from (i_1, i_2, i_3, i_4, a_1, s_1) to (j_1, j_2, j_3, j_4, a_2, s_2)
This is the case where a transition occurs from an arrival of type 1 to an arrival of type 2 such that the first arrival has seen a type 1 packet in service, i_1 packets of type 1 (equivalently, the total of queue 1 and the packet in service) and i_2 packets of type 2 (in this case only queue 2), i_3 packets of type 3 and i_4 packets of type 4 in the system. The transition occurs to j_1 packets of type 1, j_2 packets of type 2, with a type 2 packet in service, j_3 packets of type 3 and j_4 packets of type 4 in the system. Due to priority scheduling, an arrival of type 2 can see a type 2 packet in service in the next state only if all type 1 packets including the one that arrived in the previous state are exhausted during the interarrival time. That is why j_1 can take only the value 0 and exactly i_1 + 1 packets of type 1 are served. In contrast, the number of packets served from queue 2, say k, can be anywhere between 0 and i_2 - 1, as at least one type 2 packet is in the system, one being in service, when a new arrival occurs. The transition probabilities are
P\{X_{n+1} = (0,\, i_2 - k,\, i_3,\, i_4,\, a_2,\, s_2) \mid X_n = (i_1, i_2, i_3, i_4, a_1, s_1)\}
  = P\{i_1 + 1 served from type 1, k served from type 2, and a type 2 packet remains in service during T_{12}\}
where we use the fact that the remaining service time of a type 1
packet in service has the same exponential distribution Exp(μ_1),
due to the memory-less property of a Markovian service.
Therefore, for k = 0, ..., i_2 - 1,

P\{X_{n+1} = (0,\, i_2 - k,\, i_3,\, i_4,\, a_2,\, s_2) \mid X_n = (i_1, i_2, i_3, i_4, a_1, s_1)\}
  = \int_0^{\infty} \int_0^{t} \int_0^{t-x} f_{S_1^{i_1+1}}(s)\, f_{S_2^{k}}(x)\, f_{T_{12}}(t)\, ds\, dx\, dt
where S_m^l denotes the sum of l independent service times of type m
packets, m = 1, 2, l ∈ Z^+. Note that S_m^l has an Erlang
distribution with parameters (l, μ_m), as each service time has an
exponential distribution, and the sum S_1^{l_1} + S_2^{l_2}, being the sum
of several exponentially distributed random variables, has a
hypoexponential distribution. The density functions of all these
distributions can easily be evaluated numerically. Similarly, we
can enumerate all 256 cases. The results for the first 64 cases are
given in Table 1 in the Appendix.
5.3 Limiting Distribution and Waiting Times
The steady-state distribution π as seen by an arrival can be found by
solving π = πP using the transition matrix P of the Markov
chain analyzed above. In practice, the queue capacity in a router is
limited, so the steady-state distribution exists.
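Once the finite transition matrix P has been assembled over the enumerated state space, the limiting distribution can be obtained with a standard linear-algebra step. A minimal Python sketch, assuming P is given as a row-stochastic NumPy array (the 2x2 example matrix is illustrative only):

import numpy as np

def stationary_distribution(P):
    # Solve pi = pi P together with sum(pi) = 1 for a finite stochastic matrix P.
    n = P.shape[0]
    A = np.vstack([P.T - np.eye(n), np.ones((1, n))])   # (P^T - I) pi = 0 plus normalization
    b = np.zeros(n + 1)
    b[-1] = 1.0
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pi

P = np.array([[0.9, 0.1],
              [0.5, 0.5]])
print(stationary_distribution(P))   # approximately [0.8333, 0.1667]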
To the best of our knowledge, no previous analytical expressions
are available for the waiting time of a G/M/1 queue with priority.
Our analysis relies on the limiting distribution of the state of the
queue at the arrival instances, which can be computed using the
analysis given above for our self-similar traffic model. In general,
the following analysis is valid for any G/M/1 queueing system
where the limiting distribution π at the arrival instances can be
computed. The expected waiting time for the highest priority
queue can be found as
E[W_1] = \sum_{j_1=0}^{J_1} \sum_{j_2=0}^{J_2} \sum_{j_3=0}^{J_3} \sum_{j_4=0}^{J_4} \Bigl[ \frac{j_1}{\mu_1}\, \pi(j_1, j_2, j_3, j_4, a_1, s_1)
        + \Bigl(\frac{j_1}{\mu_1} + \frac{1}{\mu_2}\Bigr) \pi(j_1, j_2, j_3, j_4, a_1, s_2)
        + \Bigl(\frac{j_1}{\mu_1} + \frac{1}{\mu_3}\Bigr) \pi(j_1, j_2, j_3, j_4, a_1, s_3)
        + \Bigl(\frac{j_1}{\mu_1} + \frac{1}{\mu_4}\Bigr) \pi(j_1, j_2, j_3, j_4, a_1, s_4) \Bigr]
where J_1, J_2, J_3 and J_4 are the respective capacities of each
queue. This follows clearly from the fact that an arriving packet
of higher priority will wait until all packets of the same priority, as
well as the packet in service, are served. Depending on the type of
the packet in service, we have the constituent expressions in the
sum.
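As an illustration of how this expectation can be evaluated once the limiting distribution is available, the sketch below (not part of the paper) sums the per-state contributions. It assumes the state representation of the earlier enumeration sketch: pi is a dictionary mapping states (i_1, i_2, i_3, i_4, a, s) and "I" to their limiting probabilities, mu holds the four service rates, and the term structure follows the expression above (all type-1 work ahead of the arrival plus the residual service of the packet found in service).

def expected_wait_type1(pi, mu):
    # E[W1]: a type-1 arrival waits for the type-1 packets ahead of it plus the
    # (memoryless) residual service of whatever packet it finds in service.
    ew1 = 0.0
    for state, p in pi.items():
        if state == "I":
            continue                      # idle state contributes no waiting
        (i1, i2, i3, i4), a, s = state
        if a != "a1":
            continue                      # only states seen by type-1 arrivals
        k = int(s[1]) - 1                 # 0-based type in service, e.g. "s2" -> 1
        if k == 0:
            wait = i1 / mu[0]             # i1 already counts the type-1 packet in service
        else:
            wait = i1 / mu[0] + 1.0 / mu[k]
        ew1 += p * wait
    return ew1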
On the other hand, we obtain the expected waiting time for the
low priority queues by analyzing the events that constitute this
delay. The amount of work in the system at any time is defined as
the (random) sum of all service times that will be required by the
packets in the system at that instant. The waiting time of a type 2
packet (which is the 2nd highest priority queue) can be written as

W_2 = Z_1 + Z_2 + Z_3 + \cdots
(5)
where Z_1 is the amount of work seen by the arriving packet in
queue 1 and queue 2 (i.e., higher and equal priority), Z_2 is
the amount of work associated with higher priority (i.e., type 1)
packets arriving during Z_1, Z_3 is the amount of work associated
with type 1 packets arriving during Z_2, and so on. As illustrated in
Fig. 2, the waiting time of an arriving packet of type 2 is indeed
given by the total workload building up in front of it. The arrows in
the figure denote the arrival times of type 1 packets, and all the
oblique lines make a 45-degree angle with the time axis. In this
figure the waiting time is W_2 = Z_1 + Z_2 + Z_3 + Z_4, for example.
Let M_j denote the number of type 1 arrivals over Z_j, j = 1, 2, .... Then

W_2 = Z_1 + S_1^{M_1} + S_1^{M_2} + \cdots

where S_1^{M_j} denotes the random sum of M_j independent service
times of type 1 packets. Then,
E[W_2] = E[Z_1] + E[S_1]\,E[M_1] + E[S_1]\,E[M_2] + \cdots
since the service times and the arrival process are independent.
For a stationary packet arrival process, we get
E[M_j] = E\bigl[E[M_j \mid Z_j]\bigr] = E[c_1 Z_j] = c_1 E[Z_j]
due to the mentioned independence, where c_1 > 0 is a constant
particular to the arrival process. That is, the expectation of the
number of arrivals in any period of time is proportional to the
length of that period, because of stationarity in time and linearity
of expectation. In our stationary self-similar traffic input process,
c_1 is the expected number of arrivals per unit time, which can be
called the arrival rate; it is given by the product of the arrival rate of
session arrivals, the arrival rate of packets over a session, and the
expected session length [46], and is explicitly proportional to 1/(b - 1).
Hence, the expected waiting time reduces to

E[W_2] = E[Z_1] + E[S_1]\,c_1 E[Z_1] + E[S_1]\,c_1 E[Z_2] + \cdots
       = E[Z_1] + c_1 E[S_1]\,\bigl(E[Z_1] + E[Z_2] + \cdots\bigr)
       = E[Z_1] + \frac{c_1}{\mu_1}\, E[W_2].
In view of (5), therefore, we get E[W_2] from

E[W_2] = \sum_{j_1=0}^{J_1} \sum_{j_2=0}^{J_2} \sum_{j_3=0}^{J_3} \sum_{j_4=0}^{J_4} \Bigl[ \Bigl(\frac{j_1}{\mu_1} + \frac{j_2}{\mu_2}\Bigr) \bigl(\pi(j_1, j_2, j_3, j_4, a_2, s_1) + \pi(j_1, j_2, j_3, j_4, a_2, s_2)\bigr)
        + \Bigl(\frac{j_1}{\mu_1} + \frac{j_2}{\mu_2} + \frac{1}{\mu_3}\Bigr) \pi(j_1, j_2, j_3, j_4, a_2, s_3)
        + \Bigl(\frac{j_1}{\mu_1} + \frac{j_2}{\mu_2} + \frac{1}{\mu_4}\Bigr) \pi(j_1, j_2, j_3, j_4, a_2, s_4) \Bigr]
        + \frac{c_1}{\mu_1}\, E[W_2]
[Figure 2: Waiting time of a type 2 packet in terms of the Z_j's. The horizontal axis is time and the vertical axis is the work in the system; the segments Z_1, Z_2, Z_3, Z_4 build up the total waiting time.]
Similarly, we can directly write down the expected waiting
time for a packet of type 3 (3rd priority queue) and type 4
(lowest priority queue). The expected waiting time for a
packet of type 3 can be found from:
E[W_3] = \sum_{j_1=0}^{J_1} \sum_{j_2=0}^{J_2} \sum_{j_3=0}^{J_3} \sum_{j_4=0}^{J_4} \Bigl[ \Bigl(\frac{j_1}{\mu_1} + \frac{j_2}{\mu_2} + \frac{j_3}{\mu_3}\Bigr) \bigl(\pi(j_1, j_2, j_3, j_4, a_3, s_1) + \pi(j_1, j_2, j_3, j_4, a_3, s_2) + \pi(j_1, j_2, j_3, j_4, a_3, s_3)\bigr)
        + \Bigl(\frac{j_1}{\mu_1} + \frac{j_2}{\mu_2} + \frac{j_3}{\mu_3} + \frac{1}{\mu_4}\Bigr) \pi(j_1, j_2, j_3, j_4, a_3, s_4) \Bigr]
        + \Bigl(\frac{c_1}{\mu_1} + \frac{c_2}{\mu_2}\Bigr) E[W_3]
and E[W_4] can be determined from

E[W_4] = \sum_{j_1=0}^{J_1} \sum_{j_2=0}^{J_2} \sum_{j_3=0}^{J_3} \sum_{j_4=0}^{J_4} \Bigl(\frac{j_1}{\mu_1} + \frac{j_2}{\mu_2} + \frac{j_3}{\mu_3} + \frac{j_4}{\mu_4}\Bigr) \bigl(\pi(j_1, j_2, j_3, j_4, a_4, s_1) + \pi(j_1, j_2, j_3, j_4, a_4, s_2) + \pi(j_1, j_2, j_3, j_4, a_4, s_3) + \pi(j_1, j_2, j_3, j_4, a_4, s_4)\bigr)
        + \Bigl(\frac{c_1}{\mu_1} + \frac{c_2}{\mu_2} + \frac{c_3}{\mu_3}\Bigr) E[W_4]
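The three expressions above share the same fixed-point structure, which the helper below makes explicit. It is a sketch under the assumption, consistent with the derivation of E[W_2], that the work arriving during the wait of a type-p packet is generated by the higher-priority classes at rates c_q with mean service times 1/mu_q; the function names and data layout are ours, not the paper's.

def expected_wait(ez1_p, c, mu, p):
    # Solve E[W_p] = E[Z_1^(p)] + (sum over q < p of c_q/mu_q) * E[W_p] for E[W_p].
    #   ez1_p : expected work seen on arrival by a type-p packet (classes 1..p plus
    #           the residual of the packet in service), computed from pi as above
    #   c, mu : per-class arrival and service rates, index 0 corresponding to type 1
    #   p     : priority class (2, 3 or 4)
    rho_higher = sum(c[q] / mu[q] for q in range(p - 1))
    if rho_higher >= 1.0:
        raise ValueError("higher-priority load must be below 1 for a finite wait")
    return ez1_p / (1.0 - rho_higher)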
APPLICATIONS OF THE MODEL
Here we give an overview of the prime application of the model.
3G systems such as UMTS and CDMA2000 are leaning towards
an all-IP network architecture for transporting IP multimedia
services [52]. An all-IP DiffServ platform is the currently most
promising architecture to interwork the heterogeneous wireless
access networks and the Internet to provide broadband access,
seamless global roaming and QoS guarantees for various IP
multimedia services [53]. To transport UMTS services through IP
networks without losing end-to-end QoS provisioning, a
consistent and efficient QoS mapping between UMTS services and
IP QoS classes is required. According to 3GPP, UMTS-to-IP QoS
mapping is performed by a translation function in the GGSN
router that classifies each UMTS packet flow and maps it to a
suitable IP QoS class [52]. In order to make accurate mappings and
to ensure guaranteed QoS parameters to the end user of mobile
Internet, it is essential to be able to accurately model the end-to
-end behavior of different classes of wireless IP traffic
(conversational, interactive, streaming and background) passing
through a DiffServ domain. Several queueing tools have been
developed that can be implemented in IP routers within different
QoS domains including Priority Queueing (PQ), Custom Queueing
(CQ), Weighted Fair Queueing (WFQ), Class Based Weighted Fair
Queueing (CBWFQ) and Low-Latency Queueing (LLQ) [54].
Our model is directly applicable to the problem of determining the
end-to-end queueing behavior of IP traffic through both wired and
wireless IP domains, but modeling accuracy is more crucial in
resource constrained environments such as wireless networks. For
example, our model is directly able to analyze the behavior of four
different QoS classes of UMTS traffic passing through a DiffServ
domain, in which routers are implemented with priority queueing.
Thus, the model enables tighter bounds on actual behavior so that
over-provisioning can be minimized. It also enables translations of
traffic behavior between different kinds of QoS domains so that it
is possible to map reservations made in different domains to
provide session continuity.
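As a concrete, purely illustrative reading of this application, the four UMTS traffic classes can be attached to the four priority types of the model. The particular ordering below is an assumption made for illustration, not a mapping prescribed by the paper or by 3GPP.

# Illustrative (assumed) mapping of UMTS QoS classes to the model's priority types.
UMTS_TO_PRIORITY = {
    "conversational": 1,   # e.g. voice, mapped here to type-1 (highest priority) traffic
    "streaming":      2,
    "interactive":    3,
    "background":     4,   # type-4 (lowest priority) traffic
}

def model_class(umts_class):
    # Return the priority type (1-4) used in the queueing model for a UMTS class.
    return UMTS_TO_PRIORITY[umts_class.lower()]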
CONCLUSION AND FUTURE WORK
In this paper, we have presented a novel analytical model based on
G/M/1 queueing system for accurate modeling of wireless IP
traffic behavior under the assumption of four different classes of
self-similar traffic. We have analyzed it on the basis of non-preemptive
priority, and explicit expressions for the expected waiting times of the corresponding classes have been derived. The model
represents an important step towards the overall aim of finding
realistic (under self-similar traffic assumptions) end-to-end QoS
behavior (in terms of QoS parameters such as delay, jitter and
throughput) of multiple traffic classes passing through
heterogeneous wireless IP domains (IntServ, DiffServ and MPLS).
At the moment, we are working on the numerical analysis to find
solutions to the suggested embedded Markov chain in order to find
exact QoS parameter bounds for a given system. Our future work
will focus on analyzing the performance of different QoS domains
implemented with different queueing disciplines.
REFERENCES
[1]
W. Leland, M. Taqqu, W. Willinger and D. Wilson, "On the self-similar
nature of Ethernet traffic (extended version)", IEEE/ACM
Transactions on Networking, vol. 2. no. 1, pp. 1-15, Feb. 1994.
[2]
V. Paxson, "Empirically derived analytical models of wide-area TCP
connections", IEEE/ACM Transactions on Networking, vol. 2, pp.
316-336, Aug. 1994
[3]
V. Paxson and S. Floyd, "Wide-area traffic: the failure of Poisson
modeling", in Proc. ACM SIGCOMM 94, London, U.K., Aug.
1994, pp. 257-268
[4]
M. Crovella and A. Bestavros, "Explaining World Wide Web
Traffic Self-Similarity", Tech. Rep. TR-95-015, Boston University,
CS Dept, Boston, MA 02215, Aug. 1995
[5]
M. W. Garrett and W. Willinger, "Analysis, Modeling and
Generation of Self-Similar VBR Video Traffic", ACM Computer
Communication Review, vol. 24, Oct. 1994, SIGCOMM 94
Symposium
[6]
W. Willinger et al, "Statistical analysis of CCSN/SS7 traffic data
from working CCS subnetworks", IEEE. Journal on Selected Areas
of Communication, vol. 12, no. 3, pp. 544-551, Apr. 1994
[7]
M. Crovella and A. Bestavros, "Self-Similarity in World Wide Web
Traffic: Evidence and Possible Causes", in ACM Sigmetrics, May
1996
[8]
J.C Bolot and M. Grossglauser, "On the Relevance of Long-Range
Dependence in Network Traffic", Computer Communication
Review, vol. 26, no. 4, pp. 15-24, October 1996.
[9]
Z. L. Zhang, V. Ribeiro, S. Moon and C. Diot, "Small-Time
Scaling behavior of internet backbone traffic: An Empirical Study",
in IEEE INFOCOM, March 2003
[10]
M. S. Taqqu, "Self-Similar processes". In S. Kotz and N. Johnson,
editors, Encyclopedia of Statistical Sciences, vol. 8, pp. 352-357.
Wiley, New York, 1988.
[11]
W. Willinger, M.S Taqqu and A. Erramilli, "A bibliographical
guide to self-similar traffic and performance modeling for modern
high speed networks", In F. P. Kelly, S. Zachary and I. Ziedins,
editors, Stochastic Networks: Theory and Applications, pp. 339-366
, Clarendon Press, Oxford, 1996
[12]
H. Holma and A. Taskala, "WCDMA for UMTS, Radio Access for
Third Generation Mobile Communications, 2
nd
Edition", John
Wiley & Sons, Ltd. 2002, pp. 1-5
[13]
J. Yang and I. Kriaras, "Migration to all-IP based UMTS networks,
" IEEE 1
st
International Conference on 3G Mobile Communication
Technologies , 27-29 March, 2000, pp. 19-23
[14]
W. Stallings, "Integrated Services Architecture: The Next-Generation
Internet", International Journal of Network
Management, 9, 1999, pp. 38-43
[15]
S. Blake et al., "An Architecture for Differentiated Services", IETF
RFC 2475, Dec. 1998
[16]
Rosen E. et al., "Multiprotocol Label Switching (MPLS)
Architecture", RFC 3031, Jan. 2001
[17]
K. Venken, J. De Vriendt and D. De Vleeschauwer, "Designing a
DiffServ-capable IP-backbone for the UTRAN", IEEE 2
nd
International Conference on 3G Mobile Communication
Technologies, 26-28 March 2001, pp. 47-52
[18]
S. Maniatis, C. Grecas and I. Venieris, "End-to-End QoS Issues
Over Next Generation Mobile Internet", IEEE Symposium on
Communication and Vehicular Technology, 2000, SVCT-2000, 19
Oct, 2000, pp. 150-154
[19]
P. Newman, Netillion Inc. "In Search of the All-IP Mobile
Network", IEEE Communication Magazine, vol. 42, issue 12, Dec.
2004, pp. S3-S8
[20]
G. Araniti, F. Calabro, A. Iera, A. Molinaro and S. Pulitano,
"Differentiated Services QoS Issues in Next Generation Radio
Access Network: a New Management Policy for Expedited
Forwarding Per-Hop Behavior", IEEE Vehicular Technology
Conference, VTC 2004-Fall, vol. 4, 26-29 Sept. 2004, pp. 2693-2697
[21]
S. Uskela, "All IP Architectures for Cellular Networks", 2
nd
International Conference on 3G Mobile Communication
Technologies, 26-28 March 2001, pp. 180-185
[22]
Jeong-Hyun Park, "Wireless Internet Access for Mobile
Subscribers Based on GPRS/UMTS Network" IEEE
Communication Magazine, vol. 40, issue 4, April 2002, pp. 38-39
[23]
K. Daniel Wong and Vijay K. Varma, "Supporting Real-Time IP
Multimedia Services in UMTS", IEEE Communication Magazine,
vol. 41, issue 11, Nov. 2003, pp. 148-155
[24]
3GPP, "Universal Mobile Telecommunication System (UMTS);
QoS Concepts and Architecture", TS23.107V6, March 2004
[25]
R. Chakravorty, J. Cartwright and I. Pratt, "Practical Experience
with TCP over GPRS", in IEEE GlobeCom, Nov. 2002
[26]
D. Schwab and R. Bunt, "Characterizing the use of a Campus
Wireless Network", in IEEE INFOCOM, March 2004
[27]
X. Meng, S. Wong, Y. Yuan and S.Lu, "Characterizing Flows in
Large Wireless Data Networks", in ACM Mobicom, Sep 2004
[28]
A. Balachandran, G. M. Voelker, P. Bahl and P. Venkat Rangan,
"Characterizing user behavior and network performance in a public
Wireless LAN", Sigmetrics Performance Evaluation. Review, vol.
30. no. 1, 2002, pp. 195-205
[29]
A. Adas and A. Mukherjee, "On Resource Management and QoS
guarantees for long-range dependant traffic", In Proc IEEE
INFOCOM, 1995, pp. 779-787
[30]
M. Parulekar and A. Makowski, "Tail Probabilities for a
Multiplexer with self-similar input", In proc IEEE INFOCOM,
1996, pp. 1452-1459
[31]
I. Norros, "A Storage Model with self-similar input", Queueing
System, 16, 1994, pp. 387-396
[32]
B. Tsybakov and N. D. Georganas, "Self-Similar traffic and upper
bounds to buffer overflow in ATM queue", Performance
Evaluation, 36, 1998, pp. 57-80
[33]
M. Zukerman et al, "Analytical Performance Evaluation of a Two
Class DiffServ link", IEEE ICS, 25-28 Nov. 2002, vol. 1, pp. 373-377
[34]
J. M. Chung, Z. Quan, "Impact of Self-Similarity on Performance
Evaluation in DiffServ Networks", IEEE MWSCAS, 4-7 Aug. 2002,
vol. 2, pp. 326-329
[35]
C. F. Chou et al, "Low Latency and efficient packet scheduling for
streaming applications", IEEE ICC, 20-24 June, 2004, vol. 4, pp.
1963-1967
[36]
A. Kos and B. Klepec, "Performance of VoIP applications in a
simple Differentiated Services network architecture", IEEE
EUROCON, 4-7 July, 2001, vol. 1, pp. 214-217
[37]
J. M. Chung and H. M. Soo, "Analysis of non preemptive priority
queueing of MPLS networks with Bulk arrivals", IEEE MWSCAS,
4-7 Aug. 2002, vol. 3. pp. 81-84
[38]
Salil S. Kanhere and Harish Sethu, "Fair, Efficient and Low-Latency
Packet Scheduling using Nested Deficit Round Robin",
Proceedings of the IEEE Workshop on High Performance
Switching and Routing (HSPR), May 2001
[39]
N. F. MIR and A. Chien, "Simulation of Voice over MPLS
communication networks", IEEE ICCS, 25-28 Nov. 2002, vol. 1,
pp. 389-393
[40]
A. Krendzel, Y. Koucheryavy, J. Harju and S. Lopatin, "Traffic and
QoS management in Wireless Multimedia Networks" COST 290::
Wi-QoST, Working group N3 http://www.cost290.org
[41]
M. Jiang, M. Nikolic, S. Hardy and L. Trajkovic, "Impact of Self-Similarity
on Wireless Data Network Performance", IEEE ICC,
2001, vol. 2, pp. 477-481
[42]
J. Ridoux, A. Nucci and D. Veitch, "Characterization of Wireless
Traffic based on Semi-Experiments", Technical Report-LIP6,
December 2005
[43]
Z. Sahinoglu and S. Tekinay, "On Multimedia Networks: Self-Similar
Traffic and Network Performance", IEEE Communication
Magazine, vol. 37, issue 1, Jan. 1999, pp. 48-52
[44]
I. Norros, "On the use of Fractional Brownian Motion in theory of
connectionless networks", IEEE Journal on Selected Areas in
Communications, vol. 13. no. 6, August 1995, pp. 953-962
[45]
P. Benko, G. Malicsko and A. Veres, "A Large-scale, passive
analysis of end-to-end TCP Performances over GPRS", in IEEE
INFOCOM, March 2004
[46]
M. Caglar, "A Long-Range Dependant Workload Model for Packet
Data Traffic", Mathematics of Operations Research, 29, 2004, pp.
92-105
[47]
H. P. Schwefel and L. Lipsky, "Impact of aggregated self-similar ON/OFF traffic on delay in stationary queueing models (extended version)", Performance Evaluation, 43, 2001, pp. 203-221
[48]
I. Kaj, "Limiting fractal random processes in heavy-tailed systems", in Fractals in Engineering, New Trends in Theory and Applications, Eds. J. Levy-Lehel and E. Lutton, Springer-Verlag London, 2005, pp. 199-218
[49]
S. M. Ross, "Introduction to Probability Models", Academic Press, 1997
[50]
K. S. Trivedi, "Probability and Statistics with Reliability, Queueing, and Computer Science Applications", Wiley, New York, 2002
[51]
E. Cinlar, "Introduction to Stochastic Processes", 1975, pp. 178
[52]
R. Ben Ali, Y. Lemieux and S. Pierre, "UMTS-to-IP QoS Mapping for Voice and Video Telephony Services", IEEE Network, vol. 19, issue 2, March/April 2005, pp. 26-32
[53]
Y. Cheng, H. Jiang, W. Zhuang, Z. Niu and C. Lin, "Efficient Resource Allocation for China's 3G/4G Wireless Networks", IEEE Communication Magazine, vol. 43, issue 1, Jan 2005, pp. 76-83
[54]
W. Odom and M. J. Cavanaugh, "IP Telephony Self-Study Cisco DQoS Exam Certification Guide", Cisco Press, 2004, pp. 3-314
APPENDIX
Table 1: The states of the Markov chain and transition probabilities (first 64 cases: initial states seen by a type-1 arrival; the next arrival is of type m, m = 1, 2, 3, 4, and T_{1m} denotes the corresponding interarrival time)

Initial state (i_1, i_2, i_3, i_4, a_1, s_1):
  (i_1+1-k, i_2, i_3, i_4, a_m, s_1), k = 0, ..., i_1:
      P{k type-1 services complete and a type-1 packet remains in service during T_{1m}}
  (0, i_2-k, i_3, i_4, a_m, s_2), k = 0, ..., i_2-1:
      P{i_1+1 type-1 and k type-2 services complete and a type-2 packet remains in service during T_{1m}}
  (0, 0, i_3-k, i_4, a_m, s_3), k = 0, ..., i_3-1:
      P{i_1+1 type-1, i_2 type-2 and k type-3 services complete and a type-3 packet remains in service during T_{1m}}
  (0, 0, 0, i_4-k, a_m, s_4), k = 0, ..., i_4-1:
      P{i_1+1 type-1, i_2 type-2, i_3 type-3 and k type-4 services complete and a type-4 packet remains in service during T_{1m}}

Initial state (i_1, i_2, i_3, i_4, a_1, s_2):
  (i_1+1-k, i_2-1, i_3, i_4, a_m, s_1), k = 0, ..., i_1:
      P{the type-2 packet in service and k type-1 services complete and a type-1 packet remains in service during T_{1m}}
  (i_1+1, i_2, i_3, i_4, a_m, s_2):
      P{the type-2 packet in service is not completed during T_{1m}}
  (0, i_2-k, i_3, i_4, a_m, s_2), i_2 = 2, 3, ..., k = 1, ..., i_2-1:
      P{i_1+1 type-1 and k type-2 services (including the one initially in service) complete and a type-2 packet remains in service during T_{1m}}
  (0, 0, i_3-k, i_4, a_m, s_3), k = 0, ..., i_3-1:
      P{i_1+1 type-1, i_2 type-2 and k type-3 services complete and a type-3 packet remains in service during T_{1m}}
  (0, 0, 0, i_4-k, a_m, s_4), k = 0, ..., i_4-1:
      P{i_1+1 type-1, i_2 type-2, i_3 type-3 and k type-4 services complete and a type-4 packet remains in service during T_{1m}}

Initial state (i_1, i_2, i_3, i_4, a_1, s_3):
  (i_1+1-k, i_2, i_3-1, i_4, a_m, s_1), k = 0, ..., i_1:
      P{the type-3 packet in service and k type-1 services complete and a type-1 packet remains in service during T_{1m}}
  (0, i_2-k, i_3-1, i_4, a_m, s_2), k = 0, ..., i_2-1:
      P{the type-3 packet in service, i_1+1 type-1 and k type-2 services complete and a type-2 packet remains in service during T_{1m}}
  (i_1+1, i_2, i_3, i_4, a_m, s_3):
      P{the type-3 packet in service is not completed during T_{1m}}
  (0, 0, i_3-k, i_4, a_m, s_3), i_3 = 2, 3, ..., k = 1, ..., i_3-1:
      P{i_1+1 type-1, i_2 type-2 and k type-3 services (including the one initially in service) complete and a type-3 packet remains in service during T_{1m}}
  (0, 0, 0, i_4-k, a_m, s_4), k = 0, ..., i_4-1:
      P{i_1+1 type-1, i_2 type-2, i_3 type-3 and k type-4 services complete and a type-4 packet remains in service during T_{1m}}

Initial state (i_1, i_2, i_3, i_4, a_1, s_4):
  (i_1+1-k, i_2, i_3, i_4-1, a_m, s_1), k = 0, ..., i_1:
      P{the type-4 packet in service and k type-1 services complete and a type-1 packet remains in service during T_{1m}}
  (0, i_2-k, i_3, i_4-1, a_m, s_2), k = 0, ..., i_2-1:
      P{the type-4 packet in service, i_1+1 type-1 and k type-2 services complete and a type-2 packet remains in service during T_{1m}}
  (0, 0, i_3-k, i_4-1, a_m, s_3), k = 0, ..., i_3-1:
      P{the type-4 packet in service, i_1+1 type-1, i_2 type-2 and k type-3 services complete and a type-3 packet remains in service during T_{1m}}
  (i_1+1, i_2, i_3, i_4, a_m, s_4):
      P{the type-4 packet in service is not completed during T_{1m}}
  (0, 0, 0, i_4-k, a_m, s_4), i_4 = 2, 3, ..., k = 1, ..., i_4-1:
      P{i_1+1 type-1, i_2 type-2, i_3 type-3 and k type-4 services (including the one initially in service) complete and a type-4 packet remains in service during T_{1m}}

Each probability is evaluated, as in Section 5.2, as an integral of the joint density of the corresponding service-time sums (Erlang and hypoexponential) and the interarrival-time density f_{T_{1m}}.
t
f
x
f
s
f
m
k
i
i
i
189 | QoS;3G networks;traffic modelling;3G;Self-Similar;GGSN;self-similar traffic;wireless IP traffic;UMTS;queuing model |
3 | A Computational Approach to Reflective Meta-Reasoning about Languages with Bindings | We present a foundation for a computational meta-theory of languages with bindings implemented in a computer-aided formal reasoning environment. Our theory provides the ability to reason abstractly about operators, languages, open-ended languages, classes of languages, etc. The theory is based on the ideas of higher-order abstract syntax, with an appropriate induction principle parameterized over the language (i.e. a set of operators) being used. In our approach , both the bound and free variables are treated uniformly and this uniform treatment extends naturally to variable-length bindings . The implementation is reflective, namely there is a natural mapping between the meta-language of the theorem-prover and the object language of our theory. The object language substitution operation is mapped to the meta-language substitution and does not need to be defined recursively. Our approach does not require designing a custom type theory; in this paper we describe the implementation of this foundational theory within a general-purpose type theory. This work is fully implemented in the MetaPRL theorem prover, using the pre-existing NuPRL-like Martin-Lof-style computational type theory. Based on this implementation, we lay out an outline for a framework for programming language experimentation and exploration as well as a general reflective reasoning framework. This paper also includes a short survey of the existing approaches to syntactic reflection. | Introduction
1.1
Reflection
Very generally, reflection is the ability of a system to be "self-aware"
in some way. More specifically, by reflection we mean the
property of a computational or formal system to be able to access
and internalize some of its own properties.
There are many areas of computer science where reflection
plays or should play a major role. When exploring properties of
programming languages (and other languages) one often realizes
that languages have at least two kinds of properties -- semantic
properties that have to do with the meaning of what the language's
constructs express and syntactic properties of the language itself.
Suppose for example that we are exploring some language that
contains arithmetic operations. And in particular, in this language
one can write polynomials like x
2
+ 2x + 1. In this case the number
of roots of a polynomial is a semantic property since it has to do
with the valuation of the polynomial. On the other hand, the degree
of a polynomial could be considered an example of a syntactic
property since the most natural way to define it is as a property of
the expression that represents that polynomial. Of course, syntactic
properties often have semantic consequences, which is what makes
them especially important. In this example, the number of roots of
a polynomial is bounded by its degree.
Another area where reflection plays an important role is run-time
code generation -- in most cases, a language that supports
run-time code generation is essentially reflective, as it is capable
of manipulating its own syntax. In order to reason about run-time
code generation and to express its semantics and properties, it is
natural to use a reasoning system that is reflective as well.
There are many different flavors of reflection. The syntactic
reflection we have seen in the examples above, which is the ability
of a system to internalize its own syntax, is just one of these
many flavors. Another very important kind of reflection is logical
reflection, which is the ability of a reasoning system or logic to
internalize and reason about its own logical properties. A good
example of a logical reflection is reasoning about knowledge -since
the result of reasoning about knowledge is knowledge itself,
the logic of knowledge is naturally reflective <A href="3.html#10">[Art04].
In most cases it is natural for reflection to be iterated. In the
case of syntactic reflection we might care not only about the syntax
of our language, but also about the syntax used for expressing the
syntax, the syntax for expressing the syntax for expressing the
syntax and so forth. In the case of the logic of knowledge it is
natural to have iterations of the form "I know that he knows that
I know . . .".
When a formal system is used to reason about properties of programming
languages, iterated reflection magnifies the power of the
2
system, making it more natural to reason not just about individual
languages, but also about classes of languages, language schemas,
and so on. More generally, reflection adds a lot of additional power
to a formal reasoning system <A href="3.html#10">[GS89, Art99]. In particular, it is
well-known <A href="3.html#10">[God36, <A href="3.html#11">Mos52, <A href="3.html#10">EM71, <A href="3.html#11">Par71] that reflection allows
a super-exponential reduction in the size of certain proofs. In addition
, reflection could be a very useful mechanism for implementing
proof search algorithms <A href="3.html#9">[ACU93, <A href="3.html#10">GWZ00, CFW04]. See also
<A href="3.html#10">[Har95] for a survey of reflection in theorem proving.
1.2
Uniform Reflection Framework
For each of the examples in the previous section there are many
ad-hoc ways of achieving the specific benefits of a specific flavor
of reflection. This work aims at creating a unifying reflective
framework that would allow achieving most of these benefits in a
uniform manner, without having to reinvent and re-implement the
basic reflective methodology every time. We believe that such a
framework will increase the power of the formal reasoning tools,
and it may also become an invaluable tool for exploring the properties
of novel programming languages, for analyzing run-time code
generation, and for formalizing logics of knowledge.
This paper establishes a foundation for the development of this
framework -- a new approach to reflective meta-reasoning about
languages with bindings. We present a theory of syntax that:
in a natural way provides both a higher-order abstract syntax
(HOAS) approach to bindings and a de Bruijn-style approach
to bindings, with easy and natural translation between the two;
provides a uniform HOAS-style approach to both bound and
free variables that extends naturally to variable-length "vectors"
of binders;
permits meta-reasoning about languages -- in particular, the
operators, languages, open-ended languages, classes of languages
etc. are all first-class objects that can be reasoned about
both abstractly and concretely;
comes with a natural induction principle for syntax that can be
parameterized by the language being used;
provides a natural mapping between the object syntax and meta-syntax
that is free of exotic terms, and allows mapping the
object-level substitution operation directly to the meta-level one
(i.e. -reduction);
is fully derived in a pre-existing type theory in a theorem
prover;
is designed to serve as a foundation for a general reflective
reasoning framework in a theorem prover;
is designed to serve as a foundation for a programming language
experimentation framework.
The paper is structured as follows. Our work inherits a large
number of ideas from previous efforts and we start in Section <A href="3.html#2">2
with a brief survey of existing techniques for formal reasoning
about syntax. Next in Section <A href="3.html#4">3 we outline our approach to reasoning
about syntax and in Section <A href="3.html#5">4 we present a formal account
of our theory based on a Martin-Lof style computational type theory
<A href="3.html#10">[CAB
<A href="3.html#10">+
<A href="3.html#10">86, HAB
<A href="3.html#10">+
<A href="3.html#10">] and the implementation of that account in
the MetaPRL theorem prover <A href="3.html#11">[Hic97, Hic99, Hic01, HNC
<A href="3.html#11">+
<A href="3.html#11">03,
<A href="3.html#11">HNK
<A href="3.html#11">+
<A href="3.html#11">, <A href="3.html#10">HAB
<A href="3.html#10">+
<A href="3.html#10">]. Then in Section <A href="3.html#8">5 we outline our plan for building
a uniform reflection framework based on the syntactic reflection.
Finally, in Section <A href="3.html#9">6 we resume the discussion of related work that
was started in Section <A href="3.html#2">2.
1.3
Notation and Terminology
We believe that our approach to reasoning about syntax is fairly
general and does not rely on any special features of the theorem
prover we use. However, since we implement this theory in
MetaPRL, we introduce some basic knowledge about MetaPRL
terms.
A MetaPRL term consists of:
1. An operator name (like "sum"), which is a unique name indicating
the logic and component of a term;
2. A list of parameters representing constant values; and
3. A set of subterms with possible variable bindings.
We use the following syntax to describe terms, based on the NuPRL
definition <A href="3.html#9">[ACHA90]:
opname
operator name
[ p
1
; ; p
n
]
parameters
{v
1
.t
1
; ; v
m
.t
m
}
subt er ms
In addition, MetaPRL has a meta-syntax somewhat similar to
the higher-order abstract syntax presented in Pfenning and Elliott
<A href="3.html#11">[PE88]. MetaPRL uses the second-order variables in the style of
Huet and Lang <A href="3.html#11">[HL78] to describe term schemas. For example,
x.V [x], where V is a second-order variable of arity 1, is a schema
that stands for an arbitrary term whose top-level operator is .
This meta-syntax requires that every time a binding occurrence
is explicitly specified in a schema, all corresponding bound occurrences
have to be specified as well. This requirement makes it very
easy to specify free variable restrictions -- for example, x.V ,
where V is a second-order meta-variable of arity 0, is a schema
that stands for an arbitrary term whose top-level operator is and
whose body does not have any free occurrences of the variable
bound by that . In particular, the schema x.V matches the term
y.1, but not the term x.x.
In addition, this meta-language allows specifying certain term
transformations, including implicit substitution specifications. For
example, a beta reduction transformation may be specified using
the following schema:
(x.V
1
[x]) V
2
V
1
[V
2
]
Here the substitution of V
2
for x in V
1
is specified implicitly.
Throughout this paper we will use this second-order notation to
denote arbitrary terms -- namely, unless stated otherwise, when we
write "x.t [x]" we mean an arbitrary term of this form, not a term
containing a concrete second-order variable named "t".
As in LF <A href="3.html#11">[HHP93] we assume that object level variables (i.e.
the variables of the language whose syntax we are expressing)
are directly mapped to meta-theory variables (i.e. the variable of
the language that we use to express the syntax). Similarly, we
assume that the object-level binding structure is mapped to the
meta-level binding structure. In other words, the object-level notion
of the "binding/bound occurrence" is a subset of that in the metalanguage
. We also consider -equal terms -- both on the object
level and on the meta-level -- to be identical and we assume that
substitution avoids capture by renaming.
The sequent schema language we use <A href="3.html#11">[NH02] contains a number
of more advanced features in addition to those outlined here.
However, for the purposes of this presentation, the basic features
outlined above are sufficient.
Previous Models of Reflection
In 1931 Godel used reflection to prove his famous incompleteness
theorem <A href="3.html#10">[God31]. To express arithmetic in arithmetic itself, he
assigned a unique number (a Godel number) to each arithmetic
3
formula. A Godel number of a formula is essentially a numeric
code of a string of symbols used to represent that formula.
A modern version of the Godel's approach was used by Aitken
et al. <A href="3.html#9">[ACHA90, AC92, ACU93, <A href="3.html#10">Con94] to implement reflection
in the NuPRL theorem prover <A href="3.html#10">[CAB
<A href="3.html#10">+
<A href="3.html#10">86, <A href="3.html#9">ACE
<A href="3.html#9">+
<A href="3.html#9">00]. A large part
of this effort was essentially a reimplementation of the core of the
NuPRL prover inside NuPRL's logical theory.
In Godel's approach and its variations (including Aitken's one),
a general mechanism that could be used for formalizing one logical
theory in another is applied to formalizing a logical theory in itself.
This can be very convenient for reasoning about reflection, but for
our purposes it turns out to be extremely impractical. First, when
formalizing a theory in itself using generic means, the identity
between the theory being formalized and the one in which the
formalization happens becomes very obfuscated, which makes it
almost impossible to relate the reflected theory back to the original
one. Second, when one has a theorem proving system that already
implements the logical theory in question, creating a completely
new implementation of this logical theory inside itself is a very
tedious redundant effort. Another practical disadvantage of the
Godel numbers approach is that it tends to blow up the size of
the formulas; and iterated reflection would cause the blow-up to
be iterated as well, making it exponential or worse.
A much more practical approach is being used in some programming
languages, such as Lisp and Scheme. There, the common
solution is for the implementation to expose its internal syntax
representation to user-level code by the quote constructor (where
quote (t) prevents the evaluation of the expression t). The problems
outlined above are solved instantly by this approach: there is
no blow-up, there is no repetition of structure definitions, there is
even no need for verifying that the reflected part is equivalent to the
original implementation since they are identical. Most Scheme implementations
take this even further: the eval function is the internal
function for evaluating a Scheme expression, which is exposed
to the user-level; Smith <A href="3.html#11">[Smi84] showed how this approach can
achieve an infinite tower of processors. A similar language with the
quotation and antiquotation operators was introduced in <A href="3.html#10">[GMO03].
This approach, however, violates the congruence property with
respect to computation: if two terms are computationally equal then
one can be substituted for the other in any context. For instance,
although 2 2 is equal to 4, the expressions "2*2" and "4" are
syntactically different, thus we can not substitute 2*2 by 4 in
the expression quote(2*2). The congruence property is essential
in many logical reasoning systems, including the NuPRL system
mentioned above and the MetaPRL system <A href="3.html#11">[HNC
<A href="3.html#11">+
<A href="3.html#11">03, HNK
<A href="3.html#11">+
<A href="3.html#11">,
<A href="3.html#10">HAB
<A href="3.html#10">+
<A href="3.html#10">] that our group uses.
A possible way to expose the internal syntax without violating
the congruence property is to use the so-called "quoted" or
"shifted" operators <A href="3.html#9">[AA99, <A href="3.html#10">Bar01, Bar05] rather than quoting the
whole expression at once. For any operator op in the original language
, we add the quoted operator (denoted as op) to represent a
term built with the operator op. For example, if the original language
contains the constant "0" (which, presumably, represents the
number 0), then in the reflected language, 0 would stand for the
term that denotes the expression "0". Generally, the quoted operator
has the same arity as the original operator, but it is defined on
syntactic terms rather than on semantic objects. For instance, while
is a binary operator on numbers, is a binary operator on terms.
Namely, if t
1
and t
2
are syntactic terms that stand for expressions
e
1
and e
2
respectively, then t
1
t
2
is a new syntactic term that stands
for the expression e
1
e
2
. Thus, the quotation of the expression 12
would be 1 2.
In general, the well-formedness (typing) rule for a quoted operator
is the following:
t
1
Term
. . .
t
n
Term
op{t
1
; . . . ; t
n
} Term
(1)
where Term is a type of terms.
Note that quotations can be iterated arbitrarily many times,
allowing us to quote quoted terms. For instance, 1 stands for the
term that denotes the term that denotes the numeral 1.
Problems arise when quoting expressions that contain binding
variables. For example, what is the quotation of x.x? There are
several possible ways of answering this question. A commonly
used approach <A href="3.html#11">[PE88, <A href="3.html#10">DH94, DFH95, <A href="3.html#9">ACM02, ACM03] in logical
frameworks such as Elf <A href="3.html#11">[Pfe89], LF <A href="3.html#11">[HHP93], and Isabelle <A href="3.html#11">[PN90,
Pau94] is to construct an object logic with a concrete operator
that has a type like
(Term Term) Term or (Var Term) Term.
In this approach, the quoted x.x might look like (x.x) and the
quoted x.1 might look like (x.1). Note that in these examples
the quoted terms have to make use of both the syntactic (i.e. quoted)
operator and the semantic operator .
Exotic Terms.
Naive implementations of the above approach
suffer from the well-known problem of exotic terms <A href="3.html#10">[DH95,
DFH95]. The issue is that in general we can not allow applying
the operator to an arbitrary function that maps terms to terms (or
variables to terms) and expect the result of such an application to
be a "proper" reflected term.
Consider for example the following term:
(x. if x = 1 then 1 else 2)
It is relatively easy to see that it is not a real syntactic term and
can not be obtained by quoting an actual term. (For comparison,
consider (x. if x = 1 then 1 else 2), which is a quotation of
x. if x = 1 then 1 else 2).
How can one ensure that e denotes a "real" term and not an
"exotic" one? That is, is it equal to a result of quoting an actual
term of the object language? One possibility is to require e to be
a substitution function; in other words it has to be equal to an
expression of the form x.t [x] where t is composed entirely of term
constructors (i.e. quoted operators) and x, while using destructors
(such as case analysis, the if operator used in the example above,
etc) is prohibited.
There are a number of approaches to enforcing the above restriction
. One of them is the usage of logical frameworks with restricted
function spaces <A href="3.html#11">[PE88, HHP93], where -terms may only contain
constructors. Another is to first formalize the larger type that
does include exotic terms and then to define recursively a predicate
describing the "validity" or "well-formedness" of a term <A href="3.html#10">[DH94,
DFH95] thus removing the exotic terms from consideration. Yet
another approach is to create a specialized type theory that combines
the idea of restricted function spaces with a modal type operator
<A href="3.html#10">[DPS97, DL99, DL01]. There the case analysis is disallowed
on objects of "pure" type T , but is allowed on objects of a special
type
T . This allows expressing both the restricted function space
"T
1
T
2
" and the unrestricted one "( T
1
) T
2
" within a single
type theory.
Another way of regarding the problem of exotic terms is that it
is caused by the attempt to give a semantic definition to a primarily
syntactic property. A more syntax-oriented approach was used by
Barzilay et al. <A href="3.html#10">[BA02, BAC03, Bar05]. In Barzilay's approach, the
quoted version of an operator that introduces a binding has the
same shape (i.e. the number of subterms and the binding structure)
as the original one and the variables (both the binding and the
4
bound occurrences) are unaffected by the quotation. For instance,
the quotation of x.x is just x.x.
The advantages of this approach include:
This approach is simple and clear.
Quoted terms have the same structure as original ones, inheriting
a lot of properties of the object syntax.
In all the above approaches, the -equivalence relation for
quoted terms is inherited "for free". For example, x.x and
y.y are automatically considered to be the same term.
Substitution is also easy: we do not need to re-implement the
substitution that renames binding variables to avoid the capture
of free variables; we can use the substitution of the original
language instead.
To prune exotic terms, Barzilay says that x.t [x] is a valid term
when x.t [x] is a substitution function. He demonstrates that it is
possible to formalize this notion in a purely syntactical fashion. In
this setting, the general well-formedness rule for quoted terms with
bindings is the following:
is subst
k
{x
1
, , x
k
.t[x]}
is subst
l
{z
1
, , z
l
.s[z]}
op{x
1
, , x
k
.t[x];
;
z
1
, , z
l
.s[z]} Term
(2)
where is subst
n
{x
1
, , x
n
.t[x]} is the proposition that t is a substitution
function over variables x
1
, , x
n
(in other words, it is a
syntactic version of the Valid predicate of <A href="3.html#10">[DH94, DFH95]). This
proposition is defined syntactically by the following two rules:
is subst
n
{x
1
, , x
n
. x
i
}
and
is subst
n+k
{x
1
, , x
n
, y
1
, , y
k
.t[x; y]}
.
.
.
is subst
n+l
{x
1
, , x
n
, z
1
, , z
l
.s[x; z]}}
is subst
n
{x
1
x
n
.op{y
1
y
k
.t[x; y]; ; z
1
z
l
.s[x; z]}}
In this approach the is subst
n
{} and operators are essentially
untyped (in NuPRL type theory, the computational properties of
untyped terms are at the core of the semantics; types are added on
top of the untyped computational system).
Recursive Definition and Structural Induction Principle.
A
difficulty shared by both the straightforward implementations of
the (Term Term) Term approach and by the Barzilay's one
is the problem of recursively defining the Term type. We want to
define the Term type as the smallest set satisfying rules <A href="3.html#3">(1) and <A href="3.html#4">(2).
Note, however, that unlike rule <A href="3.html#3">(1), rule <A href="3.html#4">(2) is not monotonic in the
sense that is subst
k
{x
1
, , x
k
.t[x]} depends non-monotonically
on the Term type. For example, to say whether x.t [x] is a term, we
should check whether t is a substitution function over x. It means at
least that for every x in Term, t [x] should be in Term as well. Thus
we need to define the whole type Term before using <A href="3.html#4">(2), which
produces a logical circle. Moreover, since has type (Term
Term) Term, it is hard to formulate the structural induction
principle for terms built with the term constructor.
Variable-Length Lists of Binders.
In Barzilay's approach, for
each number n, is subst
n
{} is considered to be a separate operator
-- there is no way to quantify over n, and there is no way to
express variable-length lists of binders. This issue of expressing the
unbounded-length lists of binders is common to some of the other
approaches as well.
Meta-Reasoning.
Another difficulty that is especially apparent
in Barzilay's approach is that it only allows reasoning about concrete
operators in concrete languages. This approach does not provide
the ability to reason about operators abstractly; in particular,
there is no way to state and prove meta-theorems that quantify over
operators or languages, much less classes of languages.
Higher-Order Abstract Syntax with Inductive Definitions
Although it is possible to solve the problems outlined in the previous
Section (and we will return to the discussion of some of those
solutions in Section <A href="3.html#9">6), our desire is to avoid these difficulties from
the start. We propose a natural model of reflection that manages to
work around those difficulties. We will show how to give a simple
recursive definition of terms with binding variables, which does
not allow the construction of exotic terms and does allow structural
induction on terms.
In this Section we provide a conceptual overview of our approach
; details are given in Section <A href="3.html#5">4.
3.1
Bound Terms
One of the key ideas of our approach is how we deal with terms
containing free variables. We extend to free variables the principle
that variable names do not really matter. In fact, we model free
variables as bindings that can be arbitrarily -renamed. Namely,
we will write bterm{x
1
, , x
n
.t[x]} for a term t over variables
x
1
, , x
n
. For example, instead of term xy we will use the
term bterm{x, y.xy} when it is considered over variables x and
y and bterm{x, y, z.xy} when it is considered over variables x,
y and z. Free occurrences of x
i
in t [x] are considered bound
in bterm{x
1
, , x
n
.t[x]} and two -equal bterm{} expressions
("bterms") are considered to be identical.
Not every bterm is necessarily well-formed. We will define the
type of terms in such a way as to eliminate exotic terms. Consider
for example a definition of lambda-terms.
E
XAMPLE
1. We can define a set of reflected lambda-terms as the
smallest set such that
bterm{x
1
, , x
n
.x
i
}, where 1 i n, is a lambda-term (a
variable);
if bterm x
1
, , x
n
, x
n+1
.t[x] is a lambda-term, then
bterm x
1
, , x
n
.x
n+1
.t[x]
is also a lambda-term (an abstraction);
if bterm{x
1
, , x
n
.t
1
[x]} and bterm{x
1
, , x
n
.t
2
[x]} are
lambda-terms, then
bterm{x
1
; ; x
n
.apply{t
1
[x]; t
2
[x]}}
is also a lambda-term (an application).
In a way, bterms could be understood as an explicit coding for
Barzilay's substitution functions. And indeed, some of the basic
definitions are quite similar. The notion of bterms is also very
similar to that of local variable contexts <A href="3.html#10">[FPT99].
3.2
Terminology
Before we proceed further, we need to define some terminology.
D
EFINITION
1. We change the notion of subterm so that the subterms
of a bterm are also bterms. For example, the immediate subterms
of bterm{x , y.x y} are bterm{x , y.x } and bterm{x , y.y}; the
immediate subterm of bterm{x .y.x } is bterm{x, y.x }.
D
EFINITION
2. We call the number of outer binders in a bterm
expression its binding depth. Namely, the binding depth of the
bterm bterm{x
1
, , x
n
.t[x]} is n.
D
EFINITION
3. Throughout the rest of the paper we use the notion
of operator shape. The shape of an operator is a list of natural numbers
each stating how many new binders the operator introduces on
5
the corresponding subterm. The length of the shape list is therefore
the arity of the operator. For example, the shape of the + operator
is [0; 0] and the shape of the operator is [1].
The mapping from operators to shapes is also sometimes called
a binding signature of a language <A href="3.html#10">[FPT99, <A href="3.html#11">Plo90].
D
EFINITION
4. Let op be an operator with shape [d
1
; ; d
N
],
and let btl be a list of bterms [b
1
; ; b
M
]. We say that btl is
compatible with op at depth n when,
1. N = M;
2. the binding depth of bterm b
j
is n + d
j
for each 1 j N .
3.3
Abstract Operators
Expressions of the form bterm{x.op{ }} can only be used to express
syntax with concrete operators. In other words, each expression
of this form contains a specific constant operator op. However,
we would like to reason about operators abstractly; in particular,
we want to make it possible to have variables of the type "Op" that
can be quantified over and used in the same manner as operator
constants. In order to address this we use explicit term constructors
in addition to bterm{x.op{ }} constants.
The expression mk bterm{n; "op"; btl}, where "op" is some encoding
of the quoted operator op, stands for a bterm with binding
depth n, operator op and subterms btl. Namely,
mk bterm{n; op; bterm{x
1
, , x
n
, y
1
.t
1
[x; y
1
]} :: ::
bterm{x
1
, , x
n
, y
k
.t
k
[x; y
k
]} :: nil}
is bterm{x
1
, , x
n
.op {y
1
.t
1
[x; y
1
]; ; y
k
.t
k
[x; y
k
]}}. Here,
nil is the empty list and :: is the list cons operator and therefore
the expression b
1
:: :: b
n
:: nil represents the concrete list
[b
1
; ; b
n
].
Note that if we know the shape of the operator op and we know
that the mk bterm expression is well-formed (or, more specifically,
if we know that btl is compatible with op at depth n), then it
would normally be possible to deduce the value of n (since n is
the difference between the binding depth of any element of the list
btl and the corresponding element of the shape(op) list). There are
two reasons, however, for supplying n explicitly:
When btl is empty (in other words, when the arity of op is 0),
the value of n can not be deduced this way and still needs to be
supplied somehow. One could consider 0-arity operators to be a
special case, but this results in a significant loss of uniformity.
When we do not know whether an mk bterm expression is
necessarily well-formed (and as we will see it is often useful
to allow this to happen), then a lot of definitions and proofs
are greatly simplified when the binding depth of mk bterm
expressions is explicitly specified.
Using the mk bterm constructor and a few other similar constructors
that will be introduced later, it becomes easy to reason abstractly
about operators. Indeed, the second argument to mk bterm
can now be an arbitrary expression, not just a constant. This has a
cost of making certain definitions slightly more complicated. For
example, the notion of "compatible with op at depth n" now becomes
an important part of the theory and will need to be explicitly
formalized. However, this is a small price to pay for the ability to
reason abstractly about operators, which easily extends to reasoning
abstractly about languages, classes of languages and so forth.
3.4
Inductively Defining the Type of Well-Formed Bterms
There are two equivalent approaches to inductively defining the
general type (set) of all well-formed bterms. The first one follows
the same idea as in Example <A href="3.html#4">1:
bterm{x
1
, , x
n
.x
i
} is a well-formed bterm for 1 i n;
mk bterm{n; op; btl} is a well-formed bterm when op is a well-formed
quoted operator and btl is a list of well-formed bterms
that is compatible with op at some depth n.
If we denote bterm{x
1
, , x
l
, y, z
1
, , z
r
.y} as var{l; r},
we can restate the base case of the above definition as "var{l; r },
where l and r are arbitrary natural numbers, is a well-formed
bterm". Once we do this it becomes apparent that the above definition
has a lot of similarities with de Bruijn-style indexing of
variables <A href="3.html#10">[dB72]. Indeed, one might call the numbers l and r the
left and right indices of the variable var{l; r }.
It is possible to provide an alternate definition that is closer to
pure HOAS:
bnd{x .t [x]}, where t is a well-formed substitution function, is
a well-formed bterm (the bnd operation increases the binding
depth of t by one by adding x to the beginning of the list of t's
outer binders).
mk term{op; btl}, where op is a well-formed quoted operator,
and btl is a list of well-formed bterms that is compatible with
op at depth 0, is a well-formed bterm (of binding depth 0).
Other than better capturing the idea of HOAS, the latter definition
also makes it easier to express the reflective correspondence
between the meta-syntax (the syntax used to express the theory of
syntax, namely the one that includes the operators mk bterm, bnd,
etc.) and the meta-meta-syntax (the syntax that is used to express
the theory of syntax and the underlying theory, in other words, the
syntax that includes the second-order notations.) Namely, provided
that we define the subst{bt; t } operation to compute the result of
substituting a closed term t for the first outer binder of the bterm
bt, we can state that
subst{bnd{x .t
1
[x]} ; t
2
} t
1
[t
2
]
(3)
(where t
1
and t
2
are literal second-order variables). In other words,
we can state that the substitution operator subst and the implicit
second-order substitution in the "meta-meta-" language are equivalent
.
The downside of the alternate definition is that it requires defining
the notion of "being a substitution function".
3.5
Our Approach
In our work we try to combine the advantages of both approaches
outlined above. In the next Section we present a theory that includes
both the HOAS-style operations (bnd, mk term) and the de Bruijn-style
ones (var, mk bterm). Our theory also allows deriving the
equivalence <A href="3.html#5">(3). In our theory the definition of the basic syntactic
operations is based on the HOAS-style operators; however, the
recursive definition of the type of well-formed syntax is based on
the de Bruijn-style operations. Our theory includes also support for
variable-length lists of binders.
Formal Implementation in a Theorem Prover
In this Section we describe how the foundations of our theory are
formally defined and derived in the NuPRL-style Computational
Type Theory in the MetaPRL Theorem Prover. For brevity, we
will present a slightly simplified version of our implementation;
full details are available in the extended version of this paper
<A href="3.html#11">[NKYH05, Appendix].
4.1
Computations and Types
In our work we make heavy usage of the fact that our type theory
allows us to define computations without stating upfront (or even
knowing) what the relevant types are. In NuPRL-style type theo-6
ries (which some even dubbed "untyped type theory"), one may define
arbitrary recursive functions (even potentially nonterminating
ones). Only when proving that such function belongs to a particular
type, one may have to prove termination. See <A href="3.html#10">[All87a, All87b] for
a semantics that justifies this approach.
The formal definition of the syntax of terms consists of two
parts:
The definition of untyped term constructors and term operations
, which includes both HOAS-style operations and de
Bruijn-style operations. As it turns out, we can establish most
of the reduction properties without explicitly giving types to all
the operations.
The definition of the type of terms. We will define the type of
terms as the type that contains all terms that can be legitimately
constructed by the term constructors.
4.2
HOAS Constructors
At the core of our term syntax definition are two basic HOAS-style
constructors:
bnd{x .t [x]} is meant to represent a term with a free variable x.
The intended semantics (which will not become explicit until
later) is that bnd{x.t [x]} will only be considered well-formed
when t is a substitution function.
Internally, bnd{x.t [x]} is implemented simply as the pair
0, x.t [x] . This definition is truly internal and is used only
to prove the properties of the two destructors presented below;
it is never used outside of this Section (Section <A href="3.html#6">4.2).
mk term{op; ts} pairs op with ts. The intended usage of this
operation (which, again, will only become explicit later) is that
it represents a closed term (i.e. a bterm of binding depth 0) with
operator op and subterms ts. It will be considered well-formed
when op is an operator and ts is a list of terms that is compatible
with op at depth 0. For example, mk term{λ; bnd{x.x}} is λx.x.
Internally, mk term{op; ts} is implemented as the nested pair
⟨1, op, ts⟩. Again, this definition is never used outside of this
Section.
We also implement two destructors:
subst{bt; t } is meant to represent the result of substituting term
t for the first variable of the bterm bt. Internally, subst{bt; t }
is defined simply as an application (bt.2) t (where bt.2 is the
second element of the pair bt ).
We derive the following property of this substitution operation:
subst{bnd{x.t1[x]}; t2} ~ t1[t2]
where "~" is the computational equality relation (see footnote 1) and t1 and
t2 may be absolutely arbitrary, even ill-typed. This derivation
is the only place where the internal definition of subst{bt; t} is
used.
Note that the above equality is exactly the "reflective property
of substitution" (3) that was one of the design goals for our
theory.
(Footnote 1: In NuPRL-style type theories the computational equality relation, which
is also sometimes called "squiggle equality" and is sometimes denoted
as "~", is the finest-grained equality relation in the theory.
When a ~ b is true, a may be replaced with b in an arbitrary context.
Examples of computational equality include beta-reduction (λx.a[x]) b ~ a[b],
arithmetical equalities (1 + 2 ~ 3), and definitional equality (an
abstraction is considered to be computationally equal to its definition).)
weak dest{bt; bcase; op, ts.mkt case[op; ts]} is designed to
provide a way to find out whether bt is a bnd{} or a mk term{op; ts}
and to "extract" the op and ts in the latter case. In the rest of
this paper we will use the "pretty-printed" form for weak dest
-- "match bt with bnd{_} -> bcase | mk term{op; ts} -> mkt case[op; ts]".
Internally, it is defined as
   if bt.1 = 0 then bcase else mkt case[bt.2.1; bt.2.2].
From this internal definition we derive the following properties
of weak dest:
   match bnd{x.t[x]} with
      bnd{_} -> bcase
    | mk term{op; ts} -> mkt case[op; ts]
   ~  bcase

   match mk term{op; ts} with
      bnd{_} -> bcase
    | mk term{o; t} -> mkt case[o; t]
   ~  mkt case[op; ts]
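To make the pair encoding concrete, the following sketch renders the two constructors and the two destructors in Python, with host-language functions playing the role of bindings. The tagged-pair representation mirrors the internal definitions of this section, but the host language, the "lambda" operator name, and the assertions are illustrative assumptions rather than the MetaPRL implementation.

def bnd(f):
    """bnd{x.t[x]}: a term with one free variable, encoded as the pair (0, f)."""
    return (0, f)

def mk_term(op, ts):
    """mk_term{op; ts}: a closed term with operator op and subterms ts, encoded as (1, (op, ts))."""
    return (1, (op, ts))

def subst(bt, t):
    """subst{bt; t}: substitute t for the first binder of bt -- simply apply the stored function."""
    return bt[1](t)

def weak_dest(bt, bcase, mkt_case):
    """match bt with bnd{_} -> bcase | mk_term{op; ts} -> mkt_case(op, ts)."""
    if bt[0] == 0:
        return bcase
    op, ts = bt[1]
    return mkt_case(op, ts)

# The derived reduction properties, checked on small examples:
identity = bnd(lambda x: x)                     # bnd{x.x}
lam_id = mk_term("lambda", [identity])          # mk_term{lambda; bnd{x.x}}
assert subst(bnd(lambda x: ("pair", x, x)), 42) == ("pair", 42, 42)
assert weak_dest(identity, "is-bnd", lambda op, ts: ("is-term", op)) == "is-bnd"
assert weak_dest(lam_id, "is-bnd", lambda op, ts: ("is-term", op)) == ("is-term", "lambda")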
4.3
Vector HOAS Operations
As we have mentioned at the end of Section 2, some approaches to
reasoning about syntax make it hard or even impossible to express
arbitrary-length lists of binders. In our approach, we address this
challenge by allowing operators where a single binding in the metalanguage
stands for a list of object-level bindings. In particular, we
allow representing bnd{x1.bnd{x2. ... bnd{xn.t[x1; ...; xn]} ...}} as
vbnd{n; x.t[nth{1; x}; ...; nth{n; x}]}, where "nth{i; l}" is the "i-th
element of the list l" function.
We define the following vector-style operations:
vbnd{n; x.t[x]} represents a "telescope" of nested bnd operations.
It is defined by induction (see footnote 2) on the natural number n as follows:
   vbnd{0; x.t[x]}     :=  t[nil]
   vbnd{n + 1; x.t[x]} :=  bnd{v.vbnd{n; x.t[v :: x]}}
We also introduce vbnd{n; t } as a simplified notation for
vbnd{n; x .t } when t does not have free occurrences of x.
vsubst{bt; ts} is a "vector" substitution operation that is meant
to represent the result of simultaneous substitution of the terms
in the ts list for the first |ts| variables of the bterm bt (here |l| is
the length of the list l). vsubst{bt; ts} is defined by induction on
the list ts as follows:
   vsubst{bt; nil}     :=  bt
   vsubst{bt; t :: ts} :=  vsubst{subst{bt; t}; ts}
Below are some of the derived properties of these operations:
   bnd{v.t[v]} ~ vbnd{1; v.t[hd(v)]}                                    (4)
   ∀m, n ∈ N.  vbnd{m + n; x.t[x]} ~ vbnd{m; y.vbnd{n; z.t[y@z]}}       (5)
   ∀l ∈ List.  (vsubst{vbnd{|l|; v.t[v]}; l} ~ t[l])                    (6)
   ∀l ∈ List. ∀n ∈ N.  (n ≥ |l|) ⇒
       (vsubst{vbnd{n; v.bt[v]}; l} ~ vbnd{n - |l|; v.bt[l@v]})         (7)
   ∀n ∈ N.  vbnd{n; l.vsubst{vbnd{n; v.t[v]}; l}} ~ vbnd{n; l.t[l]}     (8)
where "hd" is the list "head" operation, "@" is the list append
operation, "List" is the type of arbitrary lists (the elements of a list
do not have to belong to any particular type), N is the type of natural
numbers, and all the variables that are not explicitly constrained to
a specific type stand for arbitrary expressions.
(Footnote 2: Our presentation of the inductive definitions is slightly simplified by
omitting some minor technical details. See [NKYH05, Appendix] for complete details.)
Equivalence (5) allows the merging and splitting of vector bnd
operations. Equivalence (6) is a vector variant of equivalence (3).
Equivalence (8) is very similar to equivalence (6) applied in the
vbnd{n; l. ...} context, except that (8) does not require l to be a
member of any special type.
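The vector operations can be rendered in the same illustrative Python style as above: vbnd threads one list-valued binder through n nested bnd's, and vsubst folds subst over a list. The example checks equivalence (6) on a small instance; the representation and helper names are assumptions of this sketch, not the formal development.

from functools import reduce

def bnd(f):
    return (0, f)

def subst(bt, t):
    return bt[1](t)

def vbnd(n, f):
    """vbnd{n; x.t[x]}: n nested bnd's whose bodies share one list-valued binder."""
    if n == 0:
        return f([])
    return bnd(lambda v: vbnd(n - 1, lambda xs: f([v] + xs)))

def vsubst(bt, ts):
    """vsubst{bt; ts}: substitute the elements of ts for the first len(ts) binders of bt."""
    return reduce(subst, ts, bt)

# Equivalence (6): vsubst{vbnd{|l|; v.t[v]}; l} ~ t[l]
l = ["a", "b", "c"]
body = lambda xs: ("tuple", tuple(xs))
assert vsubst(vbnd(len(l), body), l) == body(l)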
4.4
De Bruijn-style Operations
Based on the HOAS constructors defined in the previous two sections
, we define two de Bruijn-style constructors.
var{i; j} is defined as vbnd{i; bnd{v.vbnd{j; v}}}. It is easy to
see that this definition indeed corresponds to the informal
bterm{x1, ..., xl, y, z1, ..., zr. y}
definition given in Section 3.4.
mk bterm{n; op; ts} is meant to compute a bterm of binding
depth n, with operator op, and with ts as its subterms. This operation
is defined by induction on natural number n as follows:
   mk bterm{0; op; ts}     :=  mk term{op; ts}
   mk bterm{n + 1; op; ts} :=  bnd{v.mk bterm{n; op; map (λt.subst{t; v}) ts}}
Note that, if ts is a list of bnd expressions (which is the intended
usage of the mk bterm operation), then the
bnd{v. map (λt.subst{t; v}) ts}
has the effect of stripping the outer bnd from each of the members
of the ts list and "moving" them into a single "merged" bnd
on the outside.
We also define a number of de Bruijn-style destructors, i.e., operations
that compute various de Bruijn-style characteristics of a
bterm. Since the var and mk bterm constructors are defined in terms
of the HOAS constructors, the destructors have to be defined in
terms of HOAS operations as well. Because of this, these definitions
are often far from straightforward.
It is important to emphasize that the tricky definitions that we
use here are only needed to establish the basic properties of the
operations we defined. Once the basic theory is complete, we can
raise the level of abstraction and no usage of this theory will
ever require using any of these definitions, being aware of these
definitions, or performing similar tricks again.
bdepth{t } computes the binding depth of term t . It is defined
recursively using the Y combinator as
   (Y (λf.λb. match b with
         bnd{_} -> 1 + f subst{b; mk term{0; 0}}
       | mk term{_; _} -> 0))  t
In effect, this recursive function strips the outer binders from a
bterm one by one using substitution (note that here we can use
an arbitrary mk bterm expression as a second argument for the
substitution function; the arguments to mk bterm do not have
to have the "correct" type) and counts the number of times it
needs to do this before the outermost mk bterm is exposed.
We derive the following properties of bdepth:
   ∀l, r ∈ N.  bdepth{var{l; r}} ~ (l + r + 1);
   ∀n ∈ N.  bdepth{mk bterm{n; op; ts}} ~ n.
Note that the latter equivalence only requires n to have the
"correct" type, while op and ts may be arbitrary. Since the
bdepth operator is needed for defining the type of Term of well-formed
bterms, at this point we would not have been able to
express what the "correct" type for ts would be.
left{t } is designed to compute the "left index" of a var expression
. It is defined as
   (Y (λf.λb.λl. match b with
         bnd{_} -> f subst{b; mk term{l; 0}} (l + 1)
       | mk term{l; _} -> l))  t  0
In effect, this recursive function substitutes mk term{0; 0}
for the first binding of t , mk term{1; 0} for the second one,
mk term{2; 0} for the next one and so forth. Once all the binders
are stripped and a mk term{l; 0} is exposed, l is the index
we were looking for. Note that here we intentionally supply
mk term with an argument of a "wrong" type (N instead of
Op); we could have avoided this, but then the definition would
have been significantly more complicated.
As expected, we derive that
∀l, r ∈ N. (left{var{l; r}} ~ l).
right{t } computes the "right index" of a var expression. It
is trivial to define in terms of the previous two operators:
right{t } := bdepth{t } - left{t } - 1.
get op{t ; op} is an operation such that
   ∀n ∈ N.  get op{mk bterm{n; op; ts}; op'} ~ op,
   ∀i, j ∈ N.  (get op{var{i; j}; op} ~ op).
Its definition is similar to that of left{}.
subterms{t } is designed to recover the last argument of a
mk bterm expression. The definition is rather technical and
complicated, so we omit it; see [NKYH05, Appendix C] for
details. The main property of the subterms operation that we
derive is
   ∀n ∈ N. ∀btl ∈ List.  subterms{mk bterm{n; op; btl}} ~
       map (λb.vbnd{n; v.vsubst{b; v}}) btl
The right-hand side of this equivalence is not quite the plain
"btl" that one might have hoped to see here. However, when
btl is a list of bterms with binding depths at least n, which is
necessarily the case for any well-formed mk bterm{n; op; btl},
equivalence (8) would allow simplifying this right-hand side to
the desired btl.
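Continuing the illustrative Python sketch, the de Bruijn-style constructors can be built from the HOAS-style ones exactly as described, and the binder-stripping destructors become simple loops. The tagged-pair encoding, the dummy substitution arguments, and the assertions are assumptions of this sketch, not the formal MetaPRL development.

def bnd(f):           return (0, f)
def mk_term(op, ts):  return (1, (op, ts))
def subst(bt, t):     return bt[1](t)
def is_bnd(bt):       return bt[0] == 0

def vbnd(n, f):
    return f([]) if n == 0 else bnd(lambda v: vbnd(n - 1, lambda xs: f([v] + xs)))

def var(i, j):
    """var{i; j} := vbnd{i; bnd{v.vbnd{j; v}}} -- the (i+1)-th of i+j+1 binders."""
    return vbnd(i, lambda _: bnd(lambda v: vbnd(j, lambda _: v)))

def mk_bterm(n, op, ts):
    """mk_bterm{n; op; ts}: strip and merge the first binder of each subterm, n times."""
    if n == 0:
        return mk_term(op, ts)
    return bnd(lambda v: mk_bterm(n - 1, op, [subst(t, v) for t in ts]))

def bdepth(t):
    """Count binders by repeatedly substituting a dummy closed term until mk_term is exposed."""
    d = 0
    while is_bnd(t):
        t = subst(t, mk_term(0, []))      # the dummy's contents are irrelevant here
        d += 1
    return d

def left(t):
    """Substitute mk_term{k; []} for the k-th outer binder; the exposed index is the answer."""
    k = 0
    while is_bnd(t):
        t = subst(t, mk_term(k, []))
        k += 1
    return t[1][0]

assert bdepth(var(2, 3)) == 2 + 3 + 1
assert left(var(2, 3)) == 2
assert bdepth(mk_bterm(4, "op", [])) == 4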
4.5
Operators
For this basic theory the exact representation details for operators
are not essential and we define the type of operators Op abstractly.
We only require that operators have decidable equality and that
there exist a function of the type Op → (N List) that computes
operators' shapes.
Using this shape function and the bdepth function from Section
4.4, it is trivial to formalize the "ts is compatible with op at
depth n" predicate of Definition 4. We denote this predicate as
shape compat{n; op; ts} and define it as
   |shape{op}| = |ts|  ∧  ∀i ∈ 1..|ts|. bdepth{nth{ts; i}} = n + nth{shape{op}; i}
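Read operationally, the predicate is a simple check against a shape table. A minimal Python sketch, assuming an illustrative dictionary of operator shapes and the bdepth function sketched above:

shape = {"lambda": [1], "apply": [0, 0]}        # illustrative operator shapes

def shape_compat(n, op, ts, bdepth):
    """|shape{op}| = |ts| and bdepth{ts[i]} = n + shape{op}[i] for every i."""
    sh = shape[op]
    return len(sh) == len(ts) and all(bdepth(t) == n + s for t, s in zip(ts, sh))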
4.6
The Type of Terms
In this section we will define the type of terms (i.e. well-formed
bterms), Term, as the type of all terms that can be constructed by
the de Bruijn constructors from Section 4.4. That is, the Term type
contains all expressions of the forms:
var{i ; j } for all natural numbers i, j ; or
mk bterm{n; op; ts} for any natural number n, operator op, and
list of terms ts that is compatible with op at depth n.
The Term type is defined as a fixpoint of the following function
from types to types:
Iter(X ) := Image(dom(X ); x .mk(x )),
where
Image is a type constructor such that Image(T ; x. f [x]) is the
type of all the f [t ] for t T (for it to be well-formed, T must
be a well-formed type and f must not have any free variables
except for x);
dom(X ) is a type defined as
   (N × N) + (n:N × op:Op × {ts: X List | shape compat{n; op; ts}});
and mk(x) (where x is presumably a member of the type
dom(X )) is defined as
match x with
     inl(i, j) -> var{i; j}
   | inr(n, op, ts) -> mk bterm{n; op; ts}.
The fixpoint of Iter is reached by defining
   Term_0     :=  Void (an empty type)
   Term_(n+1) :=  Iter(Term_n)
   Term       :=  ∪_{n ∈ N} Term_n
We derive the intended introduction rules for the Term type:
   i ∈ N    j ∈ N
   ------------------
   var{i; j} ∈ Term

and

   n ∈ N    op ∈ Op    ts ∈ Term List    shape compat{n; op; ts}
   --------------------------------------------------------------
   mk bterm{n; op; ts} ∈ Term.
Also, the structural induction principle is derived for the Term
type. Namely, we show that to prove that some property P[t ] holds
for any term t , it is sufficient to prove
(Base case) P holds for all variables, that is, P[var{i ; j }] holds
for all natural numbers i and j ;
(Induction step) P[mk bterm{n; op; ts}] is true for any natural
number n, any operator op, and any list of terms ts that is
compatible with op at depth n, provided P[t ] is true for any
element t of the list ts.
Note that the type of "terms over n variables" (where n = 0 corresponds
to closed terms) may be trivially defined using the Term
type and the "subset" type constructor -- {t : Term | bdepth{t } =
n}.
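An operational reading of this construction is a membership check on the approximations Term_n. The following Python sketch uses a first-order, already-destructed representation of terms (a tag plus fields) and an illustrative shape table; both are assumptions of the sketch rather than part of the formal definition.

SHAPES = {"lambda": [1], "apply": [0, 0]}       # illustrative operator shapes

def bdepth(t):
    return t[1] + t[2] + 1 if t[0] == "var" else t[1]

def size(t):
    return 1 if t[0] == "var" else 1 + sum(size(s) for s in t[3])

def in_term_n(t, k):
    """Membership in the k-th approximation Term_k (Term_0 is empty)."""
    if k == 0:
        return False
    if t[0] == "var":
        _, i, j = t
        return i >= 0 and j >= 0
    _, n, op, ts = t                            # ("mk_bterm", n, op, ts)
    sh = SHAPES[op]
    return (len(sh) == len(ts)
            and all(bdepth(s) == n + a for s, a in zip(ts, sh))
            and all(in_term_n(s, k - 1) for s in ts))

def in_term(t):
    """Term is the union of all Term_k, so any sufficiently large k will do."""
    return in_term_n(t, size(t) + 1)

# lambda x. x  as  mk_bterm{0; lambda; [var{0; 0}]}; var{0; 0} has binding depth 1.
lam_id = ("mk_bterm", 0, "lambda", [("var", 0, 0)])
assert in_term(lam_id)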
5
Conclusions and Future Work
In Sections 3 and 4 we have presented a basic theory of syntax
that is fully implemented in a theorem prover. As we mentioned in
the introduction, the approach is both natural and expressive, and
provides a foundation for reflective reasoning about classes of languages
and logics. However, we consider this theory to be only
the first step towards building a user-accessible uniform reflection
framework and a user-accessible uniform framework for programming
language reasoning and experimentation, where tasks similar
to the ones presented in the POPLMARK challenge [ABF+05] can
be performed easily and naturally. In this section we provide an outline
of our plans for building such frameworks on top of the basic
syntactic theory.
5.1
Higher-Level User Interface
One obvious shortcoming of the theory presented in Sections 3
and 4 is that it provides only the basic low-level operations such
as bnd, var, subterms, etc. It presents a very low-level account of
syntax in a way that would often fail to abstract away the details
irrelevant to the user.
To address this problem we are planning to provide user interface
functionality capable of mapping the high-level concepts
to the low-level ones. In particular, we are going to provide an
interface that would allow instantiating general theorems to specific
collections of operators and specific languages. Thus, the user
will be able to write something like "reflect language [x.λ[x];
apply{·; ·}]" and the system will create all the components outlined
in Example 1:
It will create a definition for the type
Language[x.λ[x]; apply{·; ·}]
of reflected lambda-terms (where Language[l] is a general definition
of a language over a list of operators l);
It will state and derive the introduction rules for this type;
It will state and derive the elimination rule for this type (the
induction principle).
Moreover, we are planning to support even more complicated language
declarations, such as
   t := int | t → t;
   e := v | λx : t.e[x] | apply{e; e}
that would cause the system to create mutually recursive type
definitions and appropriate rules.
Finally, we are also planning to support "pattern bindings" that
are needed for a natural encoding of ML-like pattern matching
(such as the one sketched in the POPLMARK challenge [ABF+05]).
As far as the underlying theory goes, we believe that mechanisms
very similar to the "vector bindings" presented in Section 4.3
will be sufficient here.
5.2
"Dereferencing" Quoted Terms
As in Barzilay's work, the quoted operator approach makes it easy
to define the "unquoting" (or "dereferencing") operator [[·]]unq. If t
is a syntactic term, then [[t]]unq is the value represented by t. By
definition,
   [[op{t1; ...; tn}]]unq = op{[[t1]]unq; ...; [[tn]]unq}.
For instance, [[2 × 3]]unq is 2 × 3 (i.e. 6).
In order to define unquoting on terms with bindings, we need to
introduce the "guard" operation guard{t} such that [[guard{t}]]unq is t for an
arbitrary expression t. Then [[·]]unq can be defined as follows:
   [[op{x1, ..., xk. t[x1; ...; xk]; ...; z1, ..., zl. s[z1; ...; zl]}]]unq =
   op{x1, ..., xk. [[t[guard{x1}; ...; guard{xk}]]]unq; ...;
      z1, ..., zl. [[s[guard{z1}; ...; guard{zl}]]]unq}.
For example, [[λx.2 × x]]unq = λx.[[2 × guard{x}]]unq = λx.[[2]]unq × [[guard{x}]]unq = λx.2 × x.
The unquote operation establishes the identity between the original
syntax and the reflected syntax, making it a "true" reflection.
Note that the type theory (which ensures, in particular, that
only terminating functions may be shown to belong to a function
type) would keep the [[·]]unq operation from introducing logical
paradoxes (see footnote 3).
(Footnote 3: This is, obviously, not a proper argument. While a proper argument can be
made here, it is outside of the scope of this particular paper.)
Also, since the notion of the quoted operators is fully open-ended,
each new language added to the system will automatically
get to use the [[·]]unq operation for all its newly introduced operators.
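As a concrete illustration of dereferencing a syntax with bindings, the following Python sketch unquotes a toy quoted syntax in which bindings are quoted as host-language functions and the guard is a wrapper that unquotes to the wrapped value. The operator set (num, times, lambda) and the tuple encoding are assumptions of the sketch.

def unq(t):
    if t[0] == "guard":                     # [[guard{v}]]unq is v itself
        return t[1]
    _, name, subs = t
    if name == "num":
        return subs[0]
    if name == "times":                     # [[a × b]]unq = [[a]]unq × [[b]]unq
        return unq(subs[0]) * unq(subs[1])
    if name == "lambda":                    # unquote under the binder, guarding the bound variable
        body = subs[0]
        return lambda x: unq(body(("guard", x)))
    raise ValueError(name)

# The quotation of  lambda x. 2 × x :
quoted = ("op", "lambda", [lambda v: ("op", "times", [("op", "num", [2]), v])])
f = unq(quoted)
assert f(21) == 42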
5.3
Logical Reflection
After defining syntactic reflection, it is easy to define logical reflection
. If we consider the proof system open-ended, then the logical
reflection is trivial -- when P is a quotation of a proposition, we
can regard "[[P]]unq" as meaning "P is true". The normal modal
rules for the [[·]]unq modality are trivially derivable. For example,
modus ponens
   [[P ⇒ Q]]unq ⇒ [[P]]unq ⇒ [[Q]]unq
is trivially true because if we evaluate the first [[·]]unq (remember,
[[P ⇒ Q]]unq = [[P]]unq ⇒ [[Q]]unq by definition of [[·]]unq), we get an obvious tautology
   ([[P]]unq ⇒ [[Q]]unq) ⇒ [[P]]unq ⇒ [[Q]]unq.
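A tiny Python check makes the observation concrete for propositions evaluated as booleans; the tuple encoding of quoted propositions is an assumption of this illustration, not the type-theoretic construction.

from itertools import product

def unq(p):
    """Dereference a quoted proposition; atoms carry their truth value directly."""
    if p[0] == "atom":
        return p[1]
    _, a, b = p                                 # ("implies", P, Q)
    return (not unq(a)) or unq(b)

# [[P => Q]]unq => [[P]]unq => [[Q]]unq unquotes to an ordinary tautology:
for va, vb in product([False, True], repeat=2):
    P, Q = ("atom", va), ("atom", vb)
    assert (not unq(("implies", P, Q))) or (not unq(P)) or unq(Q)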
In order to consider a closed proof system (in other words, if
we want to be able to do induction over derivations), we would
need to define a provability predicate for that system. We are
planning to provide user interface functionality that would allow
users to describe a set of proof rules and the system would generate
appropriate proof predicate definitions and derive appropriate rules
(in a style similar to the one outlined in Section 5.1 for the case of
language descriptions).
6
Related Work
In Section 2 we have already discussed a number of approaches
that we consider ourselves inheriting from. Here we would like to
revisit some of them and mention a few other related efforts.
Our work has a lot in common with the HOAS implemented in
Coq by Despeyroux and Hirschowitz [DH94]. In both cases, the
more general space of terms (that include the exotic ones) is later
restricted in a recursive manner. In both cases, the higher-order
analogs of first-order de Bruijn operators are defined and used as a
part of the "well-formedness" specification for the terms. Despeyroux
and Hirschowitz use functions over infinite lists of variables
to define open terms, which is similar to our vector bindings.
There are a number of significant differences as well. Our approach
is sufficiently syntactical, which allows eliminating all exotic
terms, even those that are extensionally equal to the well-formed
ones, while the more semantic approach of
[DH94, DFH95] has to accept such exotic terms (their solution to this problem
is to consider an object term to be represented by the whole
equivalence class of extensionally equal terms); more generally
while [DH94] states that "this problem of extensionality is recurrent
all over our work", most of our lemmas establish identity and
not just equality, thus avoiding most of the issues of extensional
equality. In our implementation, the substitution on object terms is
mapped directly to β-reduction, while Despeyroux et al. [DFH95]
have to define it recursively. In addition, we provide a uniform approach
to both free and bound variables that naturally extends to
variable-length "vector" bindings.
While our approach is quite different from the modal λ-calculus
one [DPS97, DL99, DL01], there are some similarities in the intuition
behind it. Despeyroux et al. [DPS97] say "Intuitively, we
interpret □B as the type of closed objects of type B. We can iterate
or distinguish cases over closed objects, since all constructors
are statically known and can be provided for." The intuition behind
our approach is in part based on the canonical model of the
NuPRL type theory [All87a, All87b], where each type is mapped
to an equivalence relation over the closed terms of that type.
Gordon and Melham [GM96] define the type of λ-terms as a
quotient of the type of terms with concrete binding variables over
α-equivalence. Michael Norrish [Nor04] builds upon this work by
replacing certain variable "freshness" requirements with variable
"swapping". This approach has a number of attractive properties;
however, we believe that the level of abstraction provided by the
HOAS-style approaches makes the HOAS style more convenient
and accessible.
Ambler, Crole, and Momigliano [ACM02] have combined the
HOAS with the induction principle using an approach which in
some sense is opposite to ours. Namely, they define the HOAS
operators on top of the de Bruijn definition of terms using higher
order pattern matching. In a later work [ACM03] they have described
the notion of "terms-in-infinite-context" which is quite similar
to our approach to vector binding. While our vector bindings
presented in Section 4.3 are of finite length, the exact same approach
would work for the infinite-length "vectors" as well.
Acknowledgments
The authors are grateful to Eli Barzilay whose ideas were an inspiration
for some of the work that led to this paper. We are also
grateful for his comments on an early draft of this paper.
We are grateful to the anonymous reviewers for their very thorough
and fair feedback and many helpful suggestions.
References
[AA99]
Eric Aaron and Stuart Allen. Justifying calculational logic
by a conventional metalinguistic semantics. Technical Report
TR99-1771, Cornell University, Ithaca, New York, September
1999.
[ABF+05]
Brian E. Aydemir, Aaron Bohannon, Matthew Fairbairn,
J. Nathan Foster, Benjamin C. Pierce, Peter Sewell, Dimitrios
Vytiniotis, Geoffrey Washburn, Stephanie Weirich, and Steve
Zdancewic. Mechanized metatheory for the masses: The
POPLmark challenge. Available from http://www.cis.upenn.edu/group/proj/plclub/mmm/, 2005.
[AC92]
William Aitken and Robert L. Constable. Reflecting on
NuPRL: Lessons 1–4. Technical report, Cornell University,
Computer Science Department, Ithaca, NY, 1992.
[ACE+00]
Stuart Allen, Robert Constable, Richard Eaton, Christoph
Kreitz, and Lori Lorigo. The NuPRL open logical environment
. In David McAllester, editor, Proceedings of the
17th
International Conference on Automated Deduction, volume
1831 of Lecture Notes in Artificial Intelligence, pages
170–176. Springer Verlag, 2000.
[ACHA90]
Stuart F. Allen, Robert L. Constable, Douglas J. Howe,
and William Aitken. The semantics of reflected proof. In
Proceedings of the 5th
Symposium on Logic in Computer
Science, pages 95–197. IEEE Computer Society Press, June
1990.
[ACM02]
Simon Ambler, Roy L. Crole, and Alberto Momigliano.
Combining higher order abstract syntax with tactical theorem
proving and (co)induction. In TPHOLs '02: Proceedings
of the 15th International Conference on Theorem Proving
in Higher Order Logics, pages 13–30, London, UK, 2002.
Springer-Verlag.
[ACM03]
S. J. Ambler, R. L. Crole, and Alberto Momigliano. A
definitional approach to primitive recursion over higher
order abstract syntax. In Proceedings of the 2003 workshop
on Mechanized reasoning about languages with variable
binding, pages 1–11. ACM Press, 2003.
[ACU93]
William Aitken, Robert L. Constable, and Judith Underwood.
Metalogical Frameworks II: Using reflected decision procedures
. Journal of Automated Reasoning, 22(2):171–221,
1993.
[All87a]
Stuart F. Allen. A Non-type-theoretic Definition of Martin-Löf's
Types. In D. Gries, editor, Proceedings of the 2nd IEEE
Symposium on Logic in Computer Science, pages 215–224.
IEEE Computer Society Press, June 1987.
[All87b]
Stuart F. Allen. A Non-Type-Theoretic Semantics for Type-Theoretic
Language. PhD thesis, Cornell University, 1987.
[Art99]
Sergei Artemov. On explicit reflection in theorem proving
and formal verification. In Ganzinger [Gan99], pages 267–281.
[Art04]
Sergei Artemov.
Evidence-based common knowledge.
Technical Report TR-2004018, CUNY Ph.D. Program in
Computer Science Technical Reports, November 2004.
[BA02]
Eli Barzilay and Stuart Allen. Reflecting higher-order abstract
syntax in NuPRL. In Victor A. Carreño, Cezar A. Muñoz,
and Sophiène Tahar, editors, Theorem Proving in Higher
Order Logics; Track B Proceedings of the 15th International
Conference on Theorem Proving in Higher Order Logics
(TPHOLs 2002), Hampton, VA, August 2002, pages 23–32.
National Aeronautics and Space Administration, 2002.
[BAC03]
Eli Barzilay, Stuart Allen, and Robert Constable. Practical
reflection in NuPRL. Short paper presented at 18th Annual
IEEE Symposium on Logic in Computer Science, June 22–25, Ottawa, Canada, 2003.
[Bar01]
Eli Barzilay. Quotation and reflection in NuPRL and Scheme.
Technical Report TR2001-1832, Cornell University, Ithaca,
New York, January 2001.
[Bar05]
Eli Barzilay. Implementing Reflection in NuPRL. PhD thesis,
Cornell University, 2005. In preparation.
[CAB+86]
Robert L. Constable, Stuart F. Allen, H. M. Bromley, W. R.
Cleaveland, J. F. Cremer, R. W. Harper, Douglas J. Howe,
T. B. Knoblock, N. P. Mendler, P. Panangaden, James T.
Sasaki, and Scott F. Smith. Implementing Mathematics with
the NuPRL Proof Development System. Prentice-Hall, NJ,
1986.
[CFW04]
Luís Cruz-Filipe and Freek Wiedijk. Hierarchical reflection.
In Slind et al. [SBG04], pages 66–81.
[Con94]
Robert L. Constable. Using reflection to explain and enhance
type theory. In Helmut Schwichtenberg, editor, Proof and
Computation, volume 139 of NATO Advanced Study Institute
, International Summer School held in Marktoberdorf,
Germany, July 20-August 1, NATO Series F, pages 65–100.
Springer, Berlin, 1994.
[dB72]
N. G. de Bruijn. Lambda calculus notation with nameless
dummies, a tool for automatic formula manipulation, with
application to the Church-Rosser theorem. Indagationes
Mathematicae, 34:381–392, 1972. This also appeared in the
Proceedings of the Koninklijke Nederlandse Akademie van
Wetenschappen, Amsterdam, series A, 75, No. 5.
[DFH95]
Joelle Despeyroux, Amy Felty, and Andre Hirschowitz.
Higher-order abstract syntax in Coq.
In M. Dezani-Ciancaglini
and G. Plotkin, editors, Proceedings of the
International Conference on Typed Lambda Calculus and
its Applications, volume 902 of Lecture Notes in Computer
Science, pages 124–138. Springer-Verlag, April 1995. Also
appears as INRIA research report RR-2556.
[DH94]
Joelle Despeyroux and Andre Hirschowitz. Higher-order
abstract syntax with induction in Coq.
In LPAR '94:
Proceedings of the 5th International Conference on Logic
Programming and Automated Reasoning, volume 822
of Lecture Notes in Computer Science, pages 159–173.
Springer-Verlag, 1994. Also appears as INRIA research
report RR-2292.
[DH95]
James Davis and Daniel Huttenlocher. Shared annotations for
cooperative learning. In Proceedings of the ACM Conference
on Computer Supported Cooperative Learning, September
1995.
[DL99]
Joelle Despeyroux and Pierre Leleu. A modal lambda
calculus with iteration and case constructs. In T. Altenkirch,
W. Naraschewski, and B. Reus, editors, Types for Proofs
and Programs: International Workshop, TYPES '98, Kloster
Irsee, Germany, March 1998, volume 1657 of Lecture Notes
in Computer Science, pages 47–61, 1999.
[DL01]
Joelle Despeyroux and Pierre Leleu. Recursion over objects
of functional type. Mathematical Structures in Computer
Science, 11(4):555–572, 2001.
[DPS97]
Joëlle Despeyroux, Frank Pfenning, and Carsten Schürmann.
Primitive recursion for higher-order abstract syntax. In
R. Hindley, editor, Proceedings of the Third International
Conference on Typed Lambda Calculus and Applications
(TLCA'97), volume 1210 of Lecture Notes in Computer
Science, pages 147–163. Springer-Verlag, April 1997. An
extended version is available as Technical Report CMU-CS-96-172,
Carnegie Mellon University.
[EM71]
Andrzej Ehrenfeucht and Jan Mycielski.
Abbreviating
proofs by adding new axioms. Bulletin of the American
Mathematical Society, 77:366–367, 1971.
[F+86]
Solomon Feferman et al., editors. Kurt Gödel Collected
Works, volume 1.
Oxford University Press, Oxford,
Clarendon Press, New York, 1986.
[FPT99]
Marcelo Fiore, Gordon Plotkin, and Daniele Turi. Abstract
syntax and variable binding. In Proceedings of 14th IEEE
Symposium on Logic in Computer Science, pages 193+. IEEE
Computer Society Press, 1999.
[Gan99]
Harald Ganzinger, editor. Proceedings of the 16th International
Conference on Automated Deduction, volume 1632
of Lecture Notes in Artificial Intelligence, Berlin, July 7–10, 1999. Trento, Italy.
[GM96]
A. D. Gordon and T. Melham.
Five axioms of alpha-conversion
. In J. von Wright, J. Grundy, and J. Harrison,
editors, Theorem Proving in Higher Order Logics: 9th
International Conference, Turku, Finland, August 1996:
Proceedings, volume 1125 of Lecture Notes in Computer
Science, pages 173–190. Springer-Verlag, 1996.
[GMO03]
Jim Grundy, Tom Melham, and John O'Leary. A reflective
functional language for hardware design and theorem
proving. Technical Report PRG-RR-03-16, Oxford University,
Computing Laboratory, 2003.
[God31]
Kurt Gödel. Über formal unentscheidbare Sätze der Principia
Mathematica und verwandter Systeme I. Monatshefte für
Mathematik und Physik, 38:173–198, 1931. English version
in [vH67].
[God36]
K. Gödel. Über die Länge von Beweisen. Ergebnisse
eines mathematischen Kolloquiums, 7:23–24, 1936. English
translation in [F+86], pages 397–399.
[GS89]
F. Giunchiglia and A. Smaill. Reflection in constructive
and non-constructive automated reasoning. In H. Abramson
and M. H. Rogers, editors, Meta-Programming in Logic
Programming, pages 123–140. MIT Press, Cambridge,
Mass., 1989.
[GWZ00]
H. Geuvers, F. Wiedijk, and J. Zwanenburg. Equational reasoning
via partial reflection. In J. Harrison and M. Aagaard,
editors, Theorem Proving in Higher Order Logics: 13th International
Conference, TPHOLs 2000, volume 1869 of Lecture
Notes in Computer Science, pages 162–178. Springer-Verlag,
2000.
[HAB+]
Jason J. Hickey, Brian Aydemir, Yegor Bryukhov, Alexei
Kopylov, Aleksey Nogin, and Xin Yu. A listing of MetaPRL
theories. http://metaprl.org/theories.pdf.
[Har95]
J. Harrison. Metatheory and reflection in theorem proving:
A survey and critique. Technical Report CRC-53, SRI
International, Cambridge Computer Science Research
Centre, Millers Yard, Cambridge, UK, February 1995.
[HHP93]
Robert Harper, Furio Honsell, and Gordon Plotkin.
A
framework for defining logics. Journal of the Association
for Computing Machinery, 40(1):143–184, January 1993. A
revised and expanded version of the 1987 paper.
[Hic97]
Jason J. Hickey.
NuPRL-Light: An implementation
framework for higher-order logics. In William McCune,
editor, Proceedings of the 14th International Conference
on Automated Deduction, volume 1249 of Lecture Notes in
Artificial Intelligence, pages 395–399. Springer, July 13–17,
1997. An extended version of the paper can be found at
http://www.cs.caltech.edu/~jyh/papers/cade14_nl/default.html.
[Hic99]
Jason J. Hickey. Fault-tolerant distributed theorem proving.
In Ganzinger [Gan99], pages 227–231.
[Hic01]
Jason J. Hickey. The MetaPRL Logical Programming
Environment. PhD thesis, Cornell University, Ithaca, NY,
January 2001.
[HL78]
Gerard P. Huet and Bernard Lang. Proving and applying
program transformations expressed with second-order
patterns. Acta Informatica, 11:31–55, 1978.
[HNC+03]
Jason Hickey, Aleksey Nogin, Robert L. Constable,
Brian E. Aydemir, Eli Barzilay, Yegor Bryukhov, Richard
Eaton, Adam Granicz, Alexei Kopylov, Christoph Kreitz,
Vladimir N. Krupski, Lori Lorigo, Stephan Schmitt, Carl
Witty, and Xin Yu. MetaPRL -- A modular logical environment
. In David Basin and Burkhart Wolff, editors,
Proceedings of the 16th International Conference on Theorem
Proving in Higher Order Logics (TPHOLs 2003), volume
2758 of Lecture Notes in Computer Science, pages 287–303.
Springer-Verlag, 2003.
[HNK+]
Jason J. Hickey, Aleksey Nogin, Alexei Kopylov, et al.
MetaPRL home page. http://metaprl.org/.
[Mos52]
Andrzej Mostowski. Sentences undecidable in formalized
arithmetic: an exposition of the theory of Kurt Godel.
Amsterdam: North-Holland, 1952.
[NH02]
Aleksey Nogin and Jason Hickey. Sequent schema for
derived rules. In Victor A. Carreño, Cezar A. Muñoz,
and Sophiène Tahar, editors, Proceedings of the 15th
International Conference on Theorem Proving in Higher
Order Logics (TPHOLs 2002), volume 2410 of Lecture Notes
in Computer Science, pages 281–297. Springer-Verlag, 2002.
[NKYH05]
Aleksey Nogin, Alexei Kopylov, Xin Yu, and Jason Hickey.
A computational approach to reflective meta-reasoning
about languages with bindings.
Technical Report CaltechCSTR
:2005.003, California Institute of Technology,
2005. Available at http://resolver.caltech.edu/CaltechCSTR:2005.003.
[Nor04]
Michael Norrish. Recursive function definition for types with
binders. In Slind et al. [SBG04], pages 241–256.
[Par71]
R. Parikh. Existence and feasibility in arithmetic. The Journal
of Symbolic Logic, 36:494–508, 1971.
[Pau94]
Lawrence C. Paulson. Isabelle: A Generic Theorem Prover,
volume 828 of Lecture Notes in Computer Science. Springer-Verlag
, New York, 1994.
[PE88]
Frank Pfenning and Conal Elliott. Higher-order abstract
syntax. In Proceedings of the ACM SIGPLAN '88 Conference
on Programming Language Design and Implementation
(PLDI), volume 23(7) of SIGPLAN Notices, pages 199–208,
Atlanta, Georgia, June 1988. ACM Press.
[Pfe89]
Frank Pfenning. Elf: a language for logic definition and
verified metaprogramming. In Proceedings of the 4th IEEE
Symposium on Logic in Computer Science, pages 313–322,
Asilomar Conference Center, Pacific Grove, California, June
1989. IEEE Computer Society Press.
[Plo90]
Gordon Plotkin. An illative theory of relations. In R. Cooper,
K. Mukai, and J. Perry, editors, Situation Theory and Its
Applications, Volume 1, number 22 in CSLI Lecture Notes,
pages 133–146. Centre for the Study of Language and
Information, 1990.
[PN90]
L. Paulson and T. Nipkow. Isabelle tutorial and user's manual
. Technical report, University of Cambridge Computing
Laboratory, 1990.
[SBG04]
Konrad Slind, Annette Bunker, and Ganesh Gopalakrishnan,
editors. Proceedings of the 17th International Conference
on Theorem Proving in Higher Order Logics (TPHOLs
2004), volume 3223 of Lecture Notes in Computer Science.
Springer-Verlag, 2004.
[Sch01]
Carsten Schurmann. Recursion for higher-order encodings.
In L. Fribourg, editor, Computer Science Logic, Proceedings
of the 10th Annual Conference of the EACSL, volume 2142
of Lecture Notes in Computer Science, pages 585–599.
Springer-Verlag, 2001.
[Smi84]
B.C. Smith. Reflection and semantics in Lisp. Principles of
Programming Languages, pages 23–35, 1984.
[vH67]
J. van Heijenoort, editor. From Frege to Gödel: A Source
Book in Mathematical Logic, 1879–1931. Harvard University
Press, Cambridge, MA, 1967.
12 | system reflection;programming language;High order abstract syntax;formal languages;Theorem prover;NuPRL;Meta-syntax;MetaPRL theorem prover;Languages with bindings;Uniform reflection framework;Higher-Order Abstract Syntax;Bruijn-style operations;HOAS-style operations;NuPRL-like Martin-Lof-style computational type theory;higher-order abstract syntax;Type Theory;formal definition and theory;computer aided reasoning;Meta-reasoning;Recursive definition;Reflective reasoning;Reflection;Languages with Bindings;Substitution;MetaPRL;Runtime code generation;Meta-language;uniform reflection framework;Theory of syntax;Programming Language Experimentation |
30 | An Architectural Style for High-Performance Asymmetrical Parallel Computations | Researchers with deep knowledge of scientific domains are now developing highly-adaptive and irregular (asymmetrical ) parallel computations, leading to challenges in both delivery of data for computation and mapping of processes to physical resources. Using software engineering principles, we have developed a new communications protocol and architectural style for asymmetrical parallel computations called ADaPT. Utilizing the support of architecturally-aware middleware, we show that ADaPT provides a more efficient solution in terms of message passing and load balancing than asymmetrical parallel computations using collective calls in the Message-Passing Interface (MPI) or more advanced frameworks implementing explicit load-balancing policies. Additionally , developers using ADaPT gain significant windfall from good practices in software engineering, including implementation-level support of architectural artifacts and separation of computational loci from communication protocols | INTRODUCTION
In recent years, as the cost-to-performance ratio of consumer
hardware has continued to decrease, computational
clusters consisting of fast networks and commodity hardware
have become a common sight in research laboratories. A
growing number of physicists, biologists, chemists, and computer
scientists have developed highly-adaptive and irregular
parallel applications that are characterized by computational
intensity, loosely-synchronous parallelism and dynamic
computation. Because the computation time of each
parallel process varies significantly for this class of computation
, we shall refer to them as asymmetrical parallel computations
.
Adaptive mesh refinements for the simulation
of crack growth, combinatorial search applications used in
artificial intelligence, and partial differential equation field
solvers [2] are examples of asymmetrical computations.
While supercomputing platforms available to us continue
to increase performance, our ability to build software capable
of matching theoretical limits is lacking [8]. At the
same time, researchers with significant depth of knowledge
in a scientific domain but with limited software experience
are confounded by the interface bloat of libraries such as the
Message-Passing Interface (MPI), which has 12 different routines
for point-to-point communications alone [5].
Would-be practitioners of high-performance computing are
introduced early to the mantra of optimization. The myth
that high-level concepts inherent to software engineering
principles, such as "separation of concerns," result in inefficiencies
at the performance level has caused these researchers
to eschew best practices of traditional software
development in favor of highly-optimized library routines.
We contend that a sound software engineering solution to
asymmetrical parallel computations provides decoupling of
connectors from computational loci and reduces the complexity
of development for the programmer while still providing
an efficient solution both in terms of load-balancing
and message-delivery. In this paper, we present such a solution
.
In the next section, we will discuss our motivations for creating
the ADaPT protocol and architecture, including the
load-balancing inefficiencies of "optimized" communications
libraries when computing asymmetrical parallel computations
. We will then present ADaPT, a communications protocol
and associated software architecture for asymmetrical
computations. Additionally, we will present analysis which
shows ADaPT's ability to outperform both MPI and other
load-balancing frameworks using traditional work-sharing
strategies. We conclude with an overview of future research
opportunities afforded by ADaPT.
MOTIVATION
This work has been motivated by our experience with
two key classes of existing approaches: use of optimized
communications libraries such as MPI [4], and message-passing
frameworks which implement load-balancing strategies
based on work sharing.
2.1
Message-Passing Interface
High-performance communications libraries such as MPI
are optimized to reduce the bandwidth needed to communicate
a large amount of data to subprocesses. In order to
accomplish this reduction, collective calls in MPI are synchronous
, causing barriers at data-distribution points in the
software. When used to compute uniform parallel computations,
barriers are unobtrusive. In asymmetrical computations
, however, an effective mapping of processes to physical
resources contributes more significantly to wall-clock time to
completion than efficient communications. For these computations
, asynchronous communications are needed, despite
increased bandwidth.
To illustrate this phenomenon, let us consider a mapping
of a large normalized population of computation times with
a high level of variance onto a significantly smaller number
of physical nodes (a strategy known as overaggregation).
The MPI library offers developers efficient use of bandwidth
via collective scatter and gather commands. While
bandwidth is conserved using these collective calls, analyses
made by Gropp, et. al. and Skjellum [10, 4] suggest
that most implementations of MPI's scatter are built on top
of MPI's rendezvous protocol and result in a synchronous
barrier at each subsequent distribution of data.
Since each process has variable computation time, a number
of subprocesses will remain idle until the longest process
completes during each of the scatters. In [1] we have shown
that the smallest contribution to overall wall-clock time to
completion made by this idle time grows with n, the number of
subprocesses, and with the mean of the computation
times. In comparison, the wall-clock time saved
using the collective calls to reduce bandwidth is negligible.
While these collective calls only consider bandwidth optimizations
, it is clear that in asymmetrical parallel computations
, process load-balancing across subprocesses is a more
important optimization to pursue.
2.2
Load-Balancing Frameworks
Attempts to develop message-passing frameworks that can
assist computational scientists in the development of asymmetrical
parallel computations can be divided into two groups:
static load-balancing frameworks and dynamic load-balancing
frameworks. Because a priori knowledge of the computation
involved in asymmetrical parallel computations is required
of static load balancers, such frameworks are inapplicable to
this class of problems.
Unlike static load balancers, dynamic load-balancing frameworks
do not require information a priori and are able to redeploy
balanced distributions of data during program execution
. Notable examples of parallel development frameworks
which provide dynamic load-balancing are PREMA [2] and
Charm++ [6]. Unfortunately, these frameworks often incur
significant performance losses due to the introduction of
barriers for load-balancing. Additionally, these frameworks
do not provide explicit support for consistency of structure
and development.
A software architectural solution can provide a number of
benefits in addition to load balancing. Employing a sound
software engineering principle, the separation of communication
from computational elements shields the developer from
the need to optimize communications and provides enforcement
of architectural constraints. An added benefit is that
architectural components reified as explicit implementation-level
artifacts allow for easy reconfiguration of software in
principle. We will revisit this point below.
A NOVEL PROTOCOL
Two overlooked aspects of performance optimizations that
must be addressed in order to provide a truly efficient solution
are asynchronous load-balancing and event pattern optimization
. In addition to simply providing a load-balanced
distribution, asynchronous load-balancing provides a best
effort redistribution of processes without introducing a barrier
to computation.
Event pattern optimization suggests that a protocol is capable
of utilizing the predictability of future messages given
analysis of past messages. During overaggregated parallel
computations, a number of computations need to be distributed
to each of the subprocesses over the course of the
parallel computation, causing a pattern to emerge.
In order to incorporate each of these optimizations into a
high-performance communications protocol, we have developed
ADaPT, an Adaptive Data-parallel Publication/ Subscription
Transport protocol and software architecture. The
thesis of ADaPT is that it is possible to exploit the sequence
that emerges from sending multiple messages to each parallel
process in order to reduce the overall wall clock time to completion
of the computation while still making a best-effort
to avoid sending messages to each subprocess too quickly for
the process to buffer.
3.1
ADaPT Defined
We feel that for the purposes of this paper it is most helpful
to define ADaPT's protocol, architectural elements, and
implementation.
3.1.1
Protocol
ADaPT views each parallel process as an independent
software component (Worker) residing on a physical node
capable of performing computations on data. Each Worker
initiates computation by subscribing to a coordination component
(Master).
An important distinction between ADaPT and traditional
publication/subscription systems is that unlike traditional
pub/sub systems, ADaPT does not duplicate messages to
service multiple downstream requests. Rather, it distributes
messages uniquely from a queue in a round-robin fashion.
Upon receipt of a subscription, the Master publishes a
message to the Worker. There is another divergence from
traditional pub/sub systems at this point. The Master waits
for another request from the subscribed Worker before publishing
another message to that Worker. Using data from
each subscribed Worker on its computation time, the
Master tracks an average processing time.
Because the protocol is adaptive, when a predetermined
number of messages have been sent to the Workers and the average
has been calculated, the Master switches from this conservative
phase to an aggressive phase during which it sends
messages of the requested type to the process at the regular
interval dictated by that average. The protocol exploits the emerging
event pattern to reduce the overall processing time at each
physical node.
[Figure 1: Monte Carlo simulations of overhead for asymmetrical computations exhibiting
multiple variances. Four panels (computation time variance = 10, 50, 100, and 200)
plot % overhead (0-20) against the number of workers (0-200) for LBF, MPI, and ADaPT.]
Similar to MPI's eager protocol, this phase of ADaPT can
be too aggressive, flooding the process's buffer (datasets in
high-performance computing tend to be very large causing
memory limitations to surface frequently). If the number
of messages in the Worker's buffer reaches a maximum, the
Worker unsubscribes from the Master. After the Worker has
computed each of the messages in its buffer, it re-subscribes
to the Master, starting once again with the conservative
phase of delivery as described above.
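The dispatch policy just described can be summarized in a short Python sketch; the class names, the buffer limit, and the warm-up threshold are illustrative assumptions, and networking and timing details are omitted entirely.

import collections

class Worker:
    """A parallel process that buffers incoming computations."""
    def __init__(self, buffer_limit=2):
        self.buffer = collections.deque()
        self.buffer_limit = buffer_limit
        self.subscribed = False

    def deliver(self, msg):
        if len(self.buffer) >= self.buffer_limit:
            self.subscribed = False             # buffer full: unsubscribe from the Master
            return False
        self.buffer.append(msg)
        return True

class Master:
    """Queues computations and dispatches them per the conservative/aggressive phases."""
    def __init__(self, work, warmup=10):
        self.queue = collections.deque(work)
        self.samples = []                       # computation times reported by Workers
        self.warmup = warmup                    # samples required before switching phases

    def average_time(self):
        return sum(self.samples) / len(self.samples)

    def aggressive(self):
        return len(self.samples) >= self.warmup

    def on_request(self, worker, reported_time=None):
        """Conservative phase: publish one message per explicit Worker request."""
        if reported_time is not None:
            self.samples.append(reported_time)
        worker.subscribed = True
        if self.queue:
            worker.deliver(self.queue.popleft())

    def push(self, worker):
        """Aggressive phase: push to a subscribed Worker at the estimated average interval."""
        if self.aggressive() and worker.subscribed and self.queue:
            msg = self.queue.popleft()
            if not worker.deliver(msg):
                self.queue.appendleft(msg)      # Worker just unsubscribed; keep the message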
3.1.2
Architectural Model and Implementation
We have further codified ADaPT in a software architectural
style [9].
In addition to Master and Worker components
, the ADaPT connector utilizes an adaptive dispatcher
to deliver messages to each subscribed Worker using
the ADaPT protocol. The dispatcher uses a priority-based
round-robin algorithm which utilizes the calculated
and attempts to saturate each Worker's computation load
without flooding the Worker's buffer. This handler auto-matically
switches between the conservative and aggressive
phases. The key contribution of this connector is the encap-sulation
of underlying protocols, allowing the developer to
focus instead on the computations to be performed.
Similar to the C2 software architecture [3], messages triggering
computation travel downstream from one or more
Masters to the ADaPT connector. Messages typed as results
originating at Workers travel upstream through the
ADaPT connector back to the Masters.
We have implemented these architectural rules through
extensions to the Prism framework [7]. Prism-MW, a middleware
designed to enforce architectural rules at the level
of software artifacts, is a light-weight event-based framework
consisting of a core set of functionality with handles
to extensible components, connectors, and event handlers.
Topological rules for each architectural style are also enforced
through overloaded methods for connecting artifacts.
3.2
Performance Analysis
In analyzing ADaPT's performance in comparison to load-balancing
frameworks as well as synchronous scatters and
gathers using MPI, it is first important to define a base
metric with which to compare protocols. This metric, the
"natural rate" of parallel computation, is the sum of all individual
computations to be completed divided by the number
of nodes in the parallel computation. In this section we will
present comparisons of protocols as measured by percentage
overhead (calculated as the wall-clock time for the parallel
process to complete minus the natural rate, divided by the
natural rate).
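For concreteness, the metric can be written as a small helper; this is a sketch of the definition just given, not code from the study.

def natural_rate(computation_times, num_nodes):
    """Sum of all individual computations divided by the number of nodes."""
    return sum(computation_times) / num_nodes

def percent_overhead(wall_clock, computation_times, num_nodes):
    """Wall-clock time minus the natural rate, as a percentage of the natural rate."""
    rate = natural_rate(computation_times, num_nodes)
    return 100.0 * (wall_clock - rate) / rate

# e.g. 1000 tasks of 100 ms each on 10 nodes: natural rate 10 s, so an 11 s run is 10% overhead.
assert abs(percent_overhead(11.0, [0.1] * 1000, 10) - 10.0) < 1e-9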
In order to properly compare ADaPT's ability to reduce
message traffic as well as to efficiently map asymmetrical
computations to physical resources, we developed a Monte
Carlo simulation in which a normalized population of computations
was delivered to virtual processors via three different
communications policies/architectures and the percentage
overhead was calculated for each. All message-passing
costs were uniform across the network for each policy implemented
.
MPI (collective calls) - Costs of synchronous scatters
and gathers using MPI were modeled using equations from
[10, 4]. In this policy each worker receives a computation via
a scatter and returns via a gather before scattering the next
subset until all computations are completed. This process
is known as a multi-part scatter [1].
Load-balancing framework - The Monte Carlo simulation
of the load-balancing framework uses work-sharing
methods. All events are delivered to workers before they begin
processing and a barrier is periodically introduced. At
859
this barrier, the workload is redistributed evenly between all
processors. In order to idealize load balancing, the cost of
this calculation was treated as negligible.
ADaPT implementation - Using the routing policies
of ADaPT, this implementation assumes that workers are
capable of buffering only two events and each worker is homogeneous
. We made each of these assumptions in order to
conservatively profile ADaPT's performance.
In each of four simulations, a normalized population of
1000 computations was generated with a mean computation
time of 100 milliseconds and a variance of 10 milliseconds², 50 milliseconds², 100 milliseconds², and 200 milliseconds²,
respectively. For each simulation, the aggregation of the parallel
computation (i.e, the ratio of workers to computations)
was varied from 1:500 to 1:5.
Results of this comparison are shown in Figure 1.
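A minimal Monte Carlo sketch in the same spirit is shown below; it contrasts synchronous rounds (every round waits for its slowest member, as in a multi-part scatter) with an idealized asynchronous policy that always feeds the earliest-finishing worker. The distribution parameters follow the description above, but the greedy stand-in for ADaPT and the exact setup are assumptions of the sketch, not the simulation behind Figure 1.

import heapq, random

def overhead(finish_time, times, workers):
    natural = sum(times) / workers
    return 100.0 * (finish_time - natural) / natural

def multipart_scatter(times, workers):
    """Synchronous rounds: each batch of `workers` tasks waits for its slowest task."""
    total, i = 0.0, 0
    while i < len(times):
        total += max(times[i:i + workers])
        i += workers
    return total

def greedy_async(times, workers):
    """Idealized asynchronous delivery: hand each task to the earliest-finishing worker."""
    loads = [0.0] * workers
    heapq.heapify(loads)
    for t in times:
        heapq.heappush(loads, heapq.heappop(loads) + t)
    return max(loads)

random.seed(0)
times = [max(0.0, random.gauss(100.0, 200.0 ** 0.5)) for _ in range(1000)]  # mean 100 ms, variance 200 ms^2
for w in (5, 20, 100):
    print(w, round(overhead(multipart_scatter(times, w), times, w), 2),
             round(overhead(greedy_async(times, w), times, w), 2))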
DISCUSSION
It can be seen in these plots that while ADaPT performs
uniformly at all aggregations smaller than 1:100 (i.e., >= 10
workers), MPI collective commands and load-balancing frameworks
decrease performance as the aggregation is reduced.
For load-balancing frameworks, this is due to the increased
volume of messaging required in order to re-balance the load
across all processors at each barrier. In the presence of load-balancing
, the idle time is significantly reduced, but the cost
of rerouting messages to new processors makes ADaPT the
better performer especially in high variance computations.
From these initial results, we feel that our implementation
of ADaPT outperforms collective calls via MPI as well
as load-balancing frameworks employing a full worksharing
scheme for significantly varied aggregations and computation
time variances. In Monte Carlo simulations, ADaPT
produced a better mapping of computations to resources,
reducing computational overhead to under 1% for aggregations
less than 1:100. In the simulations of aggregations
greater than 1:100, ADaPT does not perform as well as MPI
or other load-balancing frameworks due to the increased percentage
of time each worker's buffer remains empty before
another event is pushed to the Worker at the calculated rate
of computation. This situation seems of little consequence,
however, in that data sets are seldom overaggregated to this
extreme.
ADaPT offers a significant decrease in overhead for event
delivery in parallel computations and also outperforms established
load-balancing techniques for use with asymmetrical
parallel computations. Additionally, ADaPT, through
its implementation in Prism, offers developers architectural
artifacts at the level of implementation, clear division between
the computation loci (in the form of extensible Workers
) and communications algorithms, and reduction of communications
knowledge needed by the developer in order to
implement asymmetrical parallel computations.
4.2
Future Work
While ADaPT is clearly an applicable architectural style
to high-performance computing, we make no claim as to its
monopoly of the field. In future work, we hope to build a
more substantial architectural framework for high-performance
computing which provides more underlying protocol choices
and further assists developers in code migration to new platforms
including SMP and other shared-memory machines.
We hope to demonstrate the ease of system design and implementation
via architectures to the high-performance community
without serious performance degradation, contrary to the currently
prevalent thought.
Further enhancements to the ADaPT protocol and architecture
will include refinement of its topological constraints
to encapsulate both data-parallel stages of computation and
higher-level workflow stages using multiple layers of masters
and workers connected between more advanced ADaPT
connectors (themselves perhaps distributed across multiple
physical nodes). Also, we hope to further investigate the
tradeoffs associated with alternate unsubscription policies
and the effects of "pumping" the parallel computation by
modifying delivery rates to be faster than average computation
rates.
This work was supported by the NSF 0312780 grant. Any
opinions, findings and conclusions or recommendations expressed
in this material are those of the authors and do not
necessarily reflect those of the National Science Foundation.
The authors also wish to thank the anonymous reviewers for
their helpful comments.
REFERENCES
[1] D. Woollard et al. ADaPT: Event-passing protocol for
reducing delivery costs in scatter-gather parallel
processes. In Proceeding of the Workshop on Patterns
in High Performance Computing, Urbana, Illinois,
May 2005.
[2] K. Barker et al. A load balancing framework for
adaptive and asynchronous applications. Parallel and
Distributed Systems, IEEE Transactions on,
15:183–192, 2004.
[3] R. Taylor et al. A component- and message-based
architectural style for gui software. IEEE Transactions
on Software Engineering, June, 1996.
[4] W. Gropp, E. Lusk, and A. Skjellum. Using MPI:
Portable Programming with the Message Passing
Interface. MIT Press, 1999.
[5] S. Guyer and C. Lin. Broadway: A software
architecture for scientific computing. In Proceedings of
the IFIP TC2/WG2.5 Working Conference on the
Architecture of Scientific Software, pages 175–192,
Deventer, The Netherlands, The Netherlands, 2001.
Kluwer, B.V.
[6] L. Kale and S. Krishnan. CHARM++: A Portable
Concurrent Object Oriented System Based on C++.
In A. Paepcke, editor, Proceedings of OOPSLA'93,
pages 91–108. ACM Press, September 1993.
[7] S. Malek, M. Mikic-Rakic, and N. Medvidovic. A
style-aware architectural middleware for
resource-constrained, distributed systems. IEEE
Transactions on Software Engineering, March, 2005.
[8] D. Post and L. Votta. Computational science demands
a new paradigm. Physics Today, 58(1):35–41, 2005.
[9] M. Shaw and D. Garlan. Software Architecture:
Perspectives on an Emerging Discipline. Prentice-Hall,
1996.
[10] A. Skjellum. High performance mpi: Extending the
message passing interface for higher performance and
higher predictability, 1998.
| collective calls;ADaPT;MPI;software engineering;asymmetrical parallel computations;load balancing;communication protocols;high-performance computing;High-Performance Computing;Asymmetrical Parallel Computations
31 | An empirical comparison of supervised machine learning techniques in bioinformatics | Research in bioinformatics is driven by the experimental data. Current biological databases are populated by vast amounts of experimental data. Machine learning has been widely applied to bioinformatics and has gained a lot of success in this research area. At present, with various learning algorithms available in the literature, researchers are facing difficulties in choosing the best method that can apply to their data. We performed an empirical study on 7 individual learning systems and 9 different combined methods on 4 different biological data sets, and provide some suggested issues to be considered when answering the following questions: (i) How does one choose which algorithm is best suitable for their data set? (ii) Are combined methods better than a single approach? (iii) How does one compare the effectiveness of a particular algorithm to the others? | Introduction
In the post-genome era, research in bioinformatics has
been overwhelmed by the experimental data. The
complexity of biological data ranges from simple strings
(nucleotides and amino acids sequences) to complex
graphs (biochemical networks); from 1D (sequence data)
to 3D (protein and RNA structures). Considering the
amount and complexity of the data, it is becoming
impossible for an expert to compute and compare the
entries within the current databases. Thus, machine
learning and artificial intelligence techniques have been
widely applied in this domain to discover and mine the
knowledge in the databases. Quoting from Baldi and
Brunak (Baldi and Brunak, 2001) "As a result, the need for
computer / statistical / machine learning techniques is
today stronger rather than weaker."
Shavlik et al. (Shavlik et al., 1995) described the field of
molecular biology as tailor-made for machine learning
approaches. This is due to the nature of machine learning approaches, which perform well in domains where there is a vast amount of data but little theory; this is exactly the situation in bioinformatics. Since the introduction of
machine learning to this field, various algorithms and
methods have been produced and applied to study different
data sets. Most of these studies compare a `new' algorithm
with the conventional ones, asserting the effectiveness and efficiency of their methods on particular data sets. The variety of learning algorithms currently available to researchers is enormous, and the main problems they face are: (i) How does one choose which algorithm
is best suitable for their data set? (ii) Are combined
methods better than a single approach? (iii) How does one
compare the effectiveness of a particular algorithm to the
others?
The objective of this study is to provide some suggestions
for the community by answering the above questions. This
paper is organised as follows. Section 2 presents a brief
summary of machine learning. Section 3 outlines the
materials and methods used in this study. Section 4
presents the results and discussion, and the final section
summarises this work.
Machine Learning Background
A machine learning algorithm is one that can learn from
experience (observed examples) with respect to some class of tasks and a performance measure (Mitchell, 1997).
Machine learning methods are suitable for molecular
biology data due to the learning algorithm's ability to
construct classifiers/hypotheses that can explain complex
relationships in the data. The classifiers or hypotheses can
then be interpreted by a domain expert who suggests some
wet-lab experiments to validate or refute the hypotheses.
This feedback loop between in silico and in vivo / in vitro
experiments accelerates the knowledge discovery process
over the biological data. This feedback is an important
characteristic of machine learning in bioinformatics.
Generally, there are two types of learning schemes in
machine learning: supervised learning, where the output labels are given a priori or the learner has some prior knowledge of the data; and unsupervised learning, where no prior information is given to the learner regarding the data or the output.
learner are to classify, characterise, and cluster the input
data. Classification is the most common task in biological problems where, given two different sets of examples, namely positive examples E+ and negative examples E- (with E+ ∩ E- = ∅), the learner needs to construct a classifier to distinguish between the positive examples and the negative set. This classifier can then be used as the basis for classifying as yet unseen data in the future. Usually, for a supervised classification problem, the training examples are in the form of a set of tuples {(x_1, y_1j), ..., (x_n, y_nj)} where x_i is the class label and y_ij is the set of attributes for the instances. The task of the learning algorithm is to produce a classifier (hypothesis, function) to classify the instances into the correct class. In this study, we only consider supervised machine learning applied to classification.
Materials and Methodologies
We performed an empirical comparison of rule-based learning systems (Decision trees, One Rule, Decision rules), statistical learning systems (Naive Bayes, Instance Based, SVM and neural networks) and ensemble methods (Stacking, Bagging and Boosting) on the data listed in Table 1, based on the accuracy, positive predicted value,
specificity and sensitivity of the learning algorithms. All
the learning methods used in this study were obtained from
the WEKA machine learning package
(http://www.cs.waikato.ac.nz/~ml/weka/).
3.2 Data set
In this study we used the following data sets obtained from
UCI machine learning repository (Blake and Merz, 1998).
We briefly describe the biological motivation for the data
sets; interested readers should refer to the cited papers for
details.
E.coli data set The objective of this data set is to predict
the cellular localisation sites of E.coli proteins (Horton and
Nakai, 1996). There are 8 different cellular sites, which
are cytoplasm (cp), inner membrane without signal
sequence (im), periplasm (pp), inner membrane with
uncleavable signal sequence (imU), outer membrane (om),
outer membrane lipoprotein (omL), inner membrane
lipoprotein (imL) and inner membrane with cleavable
signal sequence (imS). The attributes are signal sequence
recognition methods (specifically those of McGeoch and
von Heijne), the presence of charge on N-terminus of
predicted lipoproteins, and 3 different scoring functions on the amino acid content used to predict whether the protein is an outer or inner membrane protein and whether its signal sequence is cleavable or uncleavable.
Yeast data set The objective is similar to the E.coli data,
which is to determine the cellular localisation of the yeast
proteins (Horton and Nakai, 1996). There are 10 different
sites, which include: CYT (cytosolic or cytoskeletal);
NUC (nuclear); MIT (mitochondrial); ME3 (membrane
protein, no N-terminal signal); ME2 (membrane protein,
uncleaved signal); ME1 (membrane protein, cleaved
signal); EXC (extracellular); VAC (vacuolar); POX
(peroxisomal) and ERL (endoplasmic reticulum lumen).
The attributes are similar to the E.coli data set with the
addition of nuclear localisation information.
Promoter data set. The task of the classifier is to predict
whether a DNA sequence from E.coli is either a promoter
or not (Towell et al., 1990). The input data is a
57-nucleotide sequence (A, C, T or G).
HIV data set The data set contains 362 octamer protein
sequences each of which needs to be classified as an HIV
protease cleavable site or uncleavable site (Cai and Chou,
1998).
Data set                 E.coli   Yeast   Promoters   HIV
Continuous Attributes       2       0        57        8
Discrete Attributes         5       8         0        0
Classes                     8      10         2        2
Data Size                 336    1484       106      362
Table 1: Data sets used in this study.
3.3 Evaluation
We constructed a confusion matrix (contingency table) to
evaluate the classifier's performance. Table 2 shows a
generic contingency table for a binary class problem. True
positives (TP) denote the correct classifications of positive
examples. True negatives (TN) are the correct
classifications of negative examples. False positives (FP)
represent the incorrect classifications of negative
examples into class positive and False negatives (FN) are
the positive examples incorrectly classified into class
negative.
                       Predicted
                       Positive   Negative
Actual   Positive         TP         FN
         Negative         FP         TN
Table 2: A contingency table for a binary class problem.
Based on the contingency table, several measurements can be carried out to evaluate the performance of the induced classifier. The most popular performance evaluation measure used in prediction or classification learning is classifier accuracy, which measures the proportion of correctly classified instances: Acc = (TP + TN) / (TP + TN + FP + FN). Positive Predictive Accuracy (PPV, or the reliability of positive predictions of the induced classifier) is computed by PPV = TP / (TP + FP). Sensitivity (Sn) measures the fraction of actual positive examples that are correctly classified: Sn = TP / (TP + FN); while specificity (Sp) measures the fraction of actual negative examples that are correctly classified: Sp = TN / (TN + FP).
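To make these four measures concrete, the following minimal C sketch (our own illustration, not code from the study; the struct and function names are ours) computes them from the counts of a binary contingency table such as Table 2.

#include <stdio.h>

/* Counts from a binary contingency table (cf. Table 2). */
struct confusion { double tp, tn, fp, fn; };

static double accuracy(struct confusion c)    { return (c.tp + c.tn) / (c.tp + c.tn + c.fp + c.fn); }
static double ppv(struct confusion c)         { return (c.tp + c.fp) > 0 ? c.tp / (c.tp + c.fp) : 0.0; }
static double sensitivity(struct confusion c) { return (c.tp + c.fn) > 0 ? c.tp / (c.tp + c.fn) : 0.0; }
static double specificity(struct confusion c) { return (c.tn + c.fp) > 0 ? c.tn / (c.tn + c.fp) : 0.0; }

int main(void) {
    /* Example: a classifier that labels everything negative on a data set
       with 5 positive and 95 negative examples. Accuracy looks high (0.95)
       although PPV and sensitivity are both 0. */
    struct confusion c = { 0.0, 95.0, 0.0, 5.0 };
    printf("Acc=%.2f PPV=%.2f Sn=%.2f Sp=%.2f\n",
           accuracy(c), ppv(c), sensitivity(c), specificity(c));
    return 0;
}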
3.4 Cross-validation
To evaluate the robustness of the classifier, the normal
methodology is to perform cross validation on the
classifier. Ten-fold cross validation has been shown to be statistically adequate for evaluating the performance of a classifier (Witten and Frank, 2000). In ten-fold cross validation, the training set is divided equally into 10 different subsets. Nine of the ten subsets are used to train the learner and the tenth subset is used as the test set. The procedure is repeated ten times, with a different subset being used as the test set each time.
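As a rough illustration of the procedure (our own sketch, not tied to the WEKA implementation used in the study; train_on and test_on are hypothetical placeholders for a learner), the folds can be formed by holding out every tenth example in turn:

#include <stdio.h>

#define K 10
#define N 1484   /* e.g., the size of the yeast data set */

/* Hypothetical stand-ins: a real implementation would train a classifier on
   the training indices and return its accuracy on the held-out indices. */
static void   train_on(const int *idx, int n) { (void)idx; (void)n; }
static double test_on(const int *idx, int n)  { (void)idx; (void)n; return 0.0; }

int main(void) {
    static int train_idx[N], test_idx[N];
    double sum = 0.0;
    for (int fold = 0; fold < K; fold++) {
        int ntrain = 0, ntest = 0;
        /* Every K-th example (offset by the fold number) is held out. */
        for (int i = 0; i < N; i++) {
            if (i % K == fold) test_idx[ntest++] = i;
            else               train_idx[ntrain++] = i;
        }
        train_on(train_idx, ntrain);
        sum += test_on(test_idx, ntest);
    }
    printf("mean held-out accuracy over %d folds: %.3f\n", K, sum / K);
    return 0;
}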
Results and Discussion
We summarise our experimental results in Figures 1 and 2.
The full analysis of this study is available in
http://www.brc.dcs.gla.ac.uk/~actan/APBC2003.
Figure 1. Accuracy vs Positive Predictive Value
Figure 2. Specificity vs Sensitivity
From the results, we observed that most of the individual
learners tend to perform well either in accuracy or
specificity. This is probably because the induced classifier is able to characterise the negative examples (most of the training sets have a large ratio of negative to positive examples). Furthermore, the results
suggest that combination approaches are in general better
at minimising overfitting of the training data. We also
observed from this experiment that boosting performs
better than bagging. This is because attributes which are
highly important in discriminating between classes are
randomly removed by bagging; however they are
preserved in boosting and thus contribute to the final
voting scheme. The only individual learning system that performs better than the combined methods is Naive Bayes learning. This may suggest that Naive Bayes is capable of classifying instances based on simple prior probabilistic knowledge. In this study SVM does not perform well
compared to other methods, probably due to the fact that
training data are not separable in the vector space.
4.1 Rules-of-thumb
In this section, we address the following questions by
providing some suggested issues (rules-of-thumb) to be
considered when answering them.
(i) How does one choose which algorithm is best suitable
for their data set?
Ratio of the training data From these experiments, we
observed that the division of the training data plays a
crucial role in determining the performance of the
algorithms. If the training TPs and TNs are almost equal in
size, the algorithms tend to construct much better
classifiers. This observation suggested that the classifier
induced from equal size of TP and TN tend to be more
robust in classifying the instances. Furthermore, the
classifiers generated consider all the discriminative
attributes that distinguish between two different classes. If
the size of the TP set is small compared to that of TN, most
probably the classifier will overfit the positive examples
and thus perform poorly in the cross validation stages.
Attributes Another factor that must be taken into
consideration when choosing a learning method is the
nature of the attributes. Generally, statistical methods (e.g.
SVM, neural networks) tend to perform much better over
multi-dimensional and continuous attributes. This is
because the learning strategy embedded in these
algorithms enables the learners to find a maximal margin
that can distinguish different classes in the vector space.
By contrast, rule-based systems (e.g. Decision trees,
PART) tend to perform better in discrete / categorical
attributes. The algorithms of these methods operate in a
top-down manner where the first step is to find the most
discriminative attribute that classifies different classes.
The process is iterated until most of the instances are
classified into their class.
Credibility vs. Comprehensibility When choosing a
machine learning technique, users need to ask themselves
what they really want to "discover" from the data. If they
are interested in generating understandable hypotheses,
then a rule-based learning algorithm should be used instead of a statistical one. Most machine learning algorithms
follow Occam's principle when constructing the final
hypothesis. According to this principle, the algorithm
tends to find the simplest hypotheses by avoiding
overfitting the training data. But does this principle still
hold in bioinformatics? In bioinformatics we often wish
to explore data and explain results, and hence we are
interested in applying intelligent systems to provide an
insight to understand the relations between complex data.
The question then arises as to whether we prefer a simple
classifier or a highly comprehensible model. In general,
there is a trade off between the credibility and
comprehensibility of a model. Domingos (1999)
suggested applying domain constraints as an alternative
for avoiding overfitting the data. We agree with
Muggleton et al. (1998) that when comparing the
performance of learning systems in a bioinformatics
context, the hypothesis with better explanatory power is
preferable when there exists more than one hypothesis with statistically equivalent predictive accuracy.
(ii) Are combined methods better than a single approach?
From the experiments most of the combined methods
perform better than the individual learners. This is because
none of the individual methods can claim that they are
superior to the others due to statistical, computational and
representational reasons (Dietterich, 2000). Every
learning algorithm uses a different search strategy. If the
training data is too small, the individual learner can induce
different hypotheses with similar performances from the
search space. Thus, by averaging the different hypotheses,
the combined classifier may produce a good
approximation to the true hypotheses. The computational
reason is to avoid local optima of individual search
strategy. By performing different initial searches and
combining the outputs, the final classifier may provide a
better approximation to the true hypotheses. Lastly, due to
the limited amount of training data, the individual
classifier may not represent the true hypotheses. Thus,
through considering different classifiers, it may be
possible to expand the final classifier to an approximate
representation of the true hypotheses. Ensemble learning
has been an active research topic in machine learning but
not in the bioinformatics community. Since most of the
hypotheses induced are from incomplete biological data, it
is essential to generate a good approximation by
combining individual learners.
(iii) How does one compare the effectiveness of a
particular algorithm to the others?
Predictive accuracy Most of the time, we can find in the
literature reports that a learning scheme performs better
than another in terms of model accuracy when
applied to a particular data set. From this study, we found
that accuracy is not the ultimate measurement when
comparing the learner's credibility. Accuracy is just the
measurement of the total correctly classified instances.
This measurement reflects only the overall classification rate; there are other measures of the quality of a classifier. If the training data set has 95 negative and 5 positive examples, then by classifying all the instances into the negative class, the classifier can still achieve 95% accuracy. But the sensitivity and the positive predicted value are 0% (both measurements evaluate the performance in classifying positives). This means that although the accuracy of the classifier is 95%, it still cannot discriminate between the positive examples and the negatives. Thus, when comparing the performance of
different classifiers, accuracy as a measure is not enough.
Different measures should be evaluated depending on
what type of question that the user seeks to answer. See
Salzberg (Salzberg, 1999) for a tutorial on comparing
classifiers.
Conclusions
Machine learning has increasingly gained attention in
bioinformatics research. With the availability of different
types of learning methods, it has become common for researchers to apply off-the-shelf systems to classify and mine their databases. In the research reported in this paper,
we have performed a comparison of different supervised
machine learning techniques in classifying biological data.
We have shown that none of the single methods could
consistently perform well over all the data sets. The
performance of the learning techniques is highly
dependent on the nature of the training data. This study
also shows that combined methods perform better than the
individual ones in terms of their specificity, sensitivity,
positive predicted value and accuracy. We have suggested
some rules-of-thumb for the reader on choosing the most suitable learning method for their data set.
Acknowledgements
We would like to thank colleagues in the Bioinformatics
Research Centre for constructive discussions. We would
also like to thank the anonymous reviewers for their useful
comments. The University of Glasgow funded AC Tan's
studentship.
References
BALDI, P. AND BRUNAK, S. (2001) Bioinformatics: The Machine Learning Approach, 2nd Ed. MIT Press.
BLAKE, C.L. AND MERZ, C.J. (1998) UCI Repository of machine learning databases. [http://www.ics.uci.edu/~mlearn/MLRepository.html]
CAI, Y.-D. AND CHOU, K.-C. (1998) Artificial neural network model for predicting HIV protease cleavage sites in protein. Advances in Engineering Software, 29: 119-128.
DIETTERICH, T.G. (2000) Ensemble methods in machine learning. In Proceedings of the First International Workshop on MCS, LNCS 1857: 1-15.
DOMINGOS, P. (1999) The role of Occam's razor in knowledge discovery. Data Mining and Knowledge Discovery, 3: 409-425.
HORTON, P. AND NAKAI, K. (1996) A probabilistic classification system for predicting the cellular localization sites of proteins. In Proceedings of the Fourth International Conference on ISMB, p. 109-115. AAAI / MIT Press.
MITCHELL, T. (1997) Machine Learning. McGraw-Hill.
MUGGLETON, S., SRINIVASAN, A., KING, R.D. AND STERNBERG, M.J.E. (1998) Biochemical knowledge discovery using inductive logic programming. In H. Motoda (Ed.), Proceedings of the First Conference on Discovery Science, Springer-Verlag.
SALZBERG, S. (1999) On comparing classifiers: a critique of current research and methods. Data Mining and Knowledge Discovery, 1: 1-12.
SHAVLIK, J., HUNTER, L. AND SEARLS, D. (1995) Introduction. Machine Learning, 21: 5-10.
TOWELL, G.G., SHAVLIK, J.W. AND NOORDEWIER, M.O. (1990) Refinement of approximate domain theories by knowledge-based neural networks. In Proceedings of the Eighth National Conference on Artificial Intelligence, p. 861-866. AAAI Press.
WITTEN, I.H. AND FRANK, E. (2000) Data Mining: Practical Machine Learning Tools and Techniques with Java Implementations. Morgan Kaufmann.
| classification;Supervised machine learning;cross validation;performance evaluation;training data;biological data;supervised machine learning;machine learning;ensemble methods;bioinformatics |
32 | An expressive aspect language for system applications with Arachne | C applications, in particular those using operating system level services, frequently comprise multiple crosscutting concerns : network protocols and security are typical examples of such concerns. While these concerns can partially be addressed during design and implementation of an application, they frequently become an issue at runtime, e.g., to avoid server downtime. A deployed network protocol might not be efficient enough and may thus need to be replaced. Buffer overflows might be discovered that imply critical breaches in the security model of an application. A prefetching strategy may be required to enhance performance. While aspect-oriented programming seems attractive in this context, none of the current aspect systems is expressive and efficient enough to address such concerns. This paper presents a new aspect system to provide a solution to this problem. While efficiency considerations have played an important part in the design of the aspect language, the language allows aspects to be expressed more concisely than previous approaches. In particular, it allows aspect programmers to quantify over sequences of execution points as well as over accesses through variable aliases. We show how the former can be used to modularize the replacement of network protocols and the latter to prevent buffer overflows. We also present an implementation of the language as an extension of Arachne, a dynamic weaver for C applications. Finally, we present performance evaluations supporting that Arachne is fast enough to extend high performance applications , such as the Squid web cache. | INTRODUCTION
Real-world applications typically comprise multiple crosscutting
concerns. This applies, in particular, to C applications
using operating system level services. We have examined
three concerns which are typical for this domain in the
context of a large application, the open source web cache
Squid [36]. More concretely, we have considered translation
of network protocols (which may be necessary for efficiency
reasons), insertion of checks for buffer overflows (which are
at the heart of many of today's security issues), and introduction
of prefetching strategies within the cache (which
can be used to enhance efficiency of the web cache). We
have found that all these concerns are scattered over large
portions of the code of Squid.
Hence, the three concerns are crosscutting in the sense
of Aspect-Oriented Programming (AOP) [24] and aspects
should therefore be a means of choice for their modularization
. The concerns have three important characteristics.
First, they must frequently be applied at runtime, e.g., in
order to rapidly fix a buffer overflow and thus prevent security
breaches without incurring server downtime. A dynamic
aspect weaver is therefore needed. Second, they expose intricate
relationships between execution points, e.g., network
protocols are most concisely expressed in terms of sequences
of execution points, not individual ones. The aspect system
must therefore support expressive means for the definition of
aspects, in particular pointcuts. Third, efficiency is crucial
in the application domain we consider.
To our knowledge, none of the current aspect systems for
C meet these three requirements and is suitable for the modularization
of such concerns. Moreover, requirements for
dynamic weaving and efficiency often trade off with expressivity
. Squid should be as efficient as possible and therefore
exploit any suitable operating system and hardware particularity
. Its code base is therefore difficult to understand and
manipulate, thus hindering in particular modularization efforts
. It is therefore highly questionable that the considered
modularization problems can be solved without aspects.
In this paper we propose a solution to the aspectization of
such concerns of C applications. More concretely, we provide
three main contributions. First, we provide a new expressive
aspect language featuring a construct for quantification over
sequences of execution points as well as over accesses to local
aliases of global variables. We show how this aspect language permits concise expression of the considered concerns
as aspects. Second, we present how the aspect language can
be implemented efficiently through runtime weaving into binary
code. Concretely, this is done by integrating the aspect
language into our tool Arachne, a dynamic weaver for C applications
. Furthermore, we present how Arachne improves
on our previous work Dyner [32]. Finally, we give evidence
that our approach meets strong efficiency requirements by
showing performance evaluations in the context of Squid.
The paper is structured as follows. Section 2 presents the
motivating concerns we identified within Squid. Section 3
shows how to modularize these concerns as aspects and defines
our aspect language. Section 4 describes its implementation
within Arachne. Section 5 assesses the performance
of our implementation. Section 6 describes related work.
Section 7 concludes and suggests futures work.
MOTIVATIONS
Legacy C applications involve multiple crosscutting concerns
. Many of them remain challenging, both in terms
of expressiveness required to handle them properly in an
aspect-oriented language and in terms of constraints posed
on the weaver. This section describes three such concerns
in C applications: switching the network protocol, buffer
overflows and prefetching. The network protocol concern is
typically scattered through the entire application. It is an
issue when administrators discover at runtime that the retained
protocol is not efficient enough. Likewise the security
threat posed by buffer overflows is a real, concrete problem
for administrators. While guarding all buffers against overflows
might decrease performance considerably, administrators
are left with no other option than accepting the trade-off
between security and performance chosen at application's
design time. Prefetching is another well-known crosscutting
concern [12]. Since prefetching aims at increasing performance
, prefetching aspects only make sense with an efficient weaver. Yet, it is still difficult to modularize these three concerns in today's aspect-oriented languages. In this section,
we first describe the context in which the concerns arise before
showing their crosscutting nature and finally explaining
the lack in current aspect-oriented languages to handle them
properly.
2.1
TCP to UDP protocol
HTTP was essentially designed as a file transfer protocol
running on top of TCP, a connection-oriented protocol
ensuring communication reliability. While the average Web
page size does not exceed 8 KB [4], the cost of retrieving
a Web page is often dominated by data exchanged for control
purposes of TCP rather than by the page content itself.
This is not a new problem; many researchers have already
pointed out that TCP is not suitable for short-lived connections
. While HTTP 1.1 has introduced persistent connections
allowing a client to retrieve multiple pages from the
same server through the same TCP connection, the number
of simultaneous TCP connections is limited by operating
systems. Servers have a strong incentive to close HTTP
connections as soon as possible. Hence, despite the persistent
connection mechanism, many studies conclude that
TCP should be replaced by UDP to retrieve short pages [10,
29, 7]. In spite of its performance improvements, the number
of legacy Web applications has prevented a wide adoption
of this solution. Typical legacy Web applications have to be
Figure 1: Typical usage of the TCP and UDP APIs.
stopped to switch the protocol. The traditional approach
to avoid depriving a subnetwork from Internet connectivity
while stopping the cache is to swap the application between
different machines. This approach is not only expensive in
terms of hardware, it complicates the administrative task of
the Web cache administrator and poses the problem of consistently
transferring the runtime state of the application
before restarting it. Stopping an e-commerce Web server
means a loss of money and many small companies can not
afford the cost of redundant servers. For a wide acceptance,
an HTTP dialect using UDP as the transport protocol should
thus be deployable on demand at runtime.
In addition, replacing TCP by UDP in an application is
relatively difficult. The choice of a transport protocol is
usually based on standards believed to be ever-lasting and
made at an early design stage. Hence no particular effort is
made to localize this design decision in a single piece of code.
For example, despite a modularization effort, the TCP API
provided by the operating system is used directly in 7 of the
104 ".c" source files of the Squid Web cache.
As shown in Fig. 1, the TCP API is built around a set of
C functions to be invoked sequentially by the application. In
a properly written program, TCP functions are first used to
establish the connection (typically with socket, connect,
bind and listen), exchange data through the connection
(typically with read and write) and then close it (typically
close). UDP uses similar but less functions. UDP applications
first direct the operating system to dedicate the appropriate
resources to exchange data (typically with socket and
bind), then exchange data through these resources (typically
with sendto and recvfrom) before releasing them (typically
with close). Hence, the problem is not only difficult because
TCP-related function invocations are scattered but
because the relative order of each invocation is important in
order to map it onto the appropriate UDP function.
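For illustration, the following sketch (our own, not code from Squid; error handling omitted) contrasts the two client-side call sequences using the standard POSIX socket API, making the call-for-call mapping explicit.

#include <string.h>
#include <unistd.h>
#include <sys/socket.h>

/* TCP client: explicit connection setup, stream I/O, teardown. */
static void tcp_exchange(const struct sockaddr *srv, socklen_t len,
                         const char *req, char *reply, size_t max) {
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    connect(fd, srv, len);                 /* connection setup              */
    write(fd, req, strlen(req));           /* data exchange over the stream */
    read(fd, reply, max);
    close(fd);
}

/* UDP equivalent: no connect step; each datagram names the peer address. */
static void udp_exchange(const struct sockaddr *srv, socklen_t len,
                         const char *req, char *reply, size_t max) {
    int fd = socket(AF_INET, SOCK_DGRAM, 0);
    sendto(fd, req, strlen(req), 0, srv, len);
    struct sockaddr peer;
    socklen_t plen = sizeof peer;
    recvfrom(fd, reply, max, 0, &peer, &plen);
    close(fd);
}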
This example is typical of protocol based APIs. When
such an API is used in an undisciplined way, it becomes
quickly impossible to replace it by another one. Today,
aspect-oriented systems lack an appropriate sequencing construct
in their language. Moreover, many do not provide the
ability to weave aspects dynamically.
2.2
Buffer overflows
In C, the size of an array is fixed at allocation time. According
to ISO and ANSI standards [2], an invalid array
access does not result in an immediate error but leads to
an implementation-dependent behavior. Such behavior is
increasingly exploited by hackers to circumvent security restrictions [37]. It is therefore crucial for C programmers to
ensure every access to an array to be valid. On the other
hand, bound checking code is error prone: it is easy to forget
to check an access and even when the access is checked,
it is easy to compare the index locating the access with an
inappropriate bound. Therefore, researchers have proposed
to make compilers responsible for enforcing proper array access
[22, 31]. The problem is that even the most efficient
system (CRED [31]) slows down an application by up to 130%.
Moreover, most frequently used compilers like gcc do not
support bound checking.
Today, administrators discovering a buffer overflow in production
software are left with no other option than stopping
the application and restarting a bug free version. This was
the solution chosen when a buffer overflow was discovered
in Squid in [6]. While widely used, this solution suffers from
three major drawbacks. First, it does not enforce continuous
servicing since the service delivered by the application is not
available during the update. Second, this solution entails an
important information loss: an administrator has no means
to learn whether the buffer overflow has been exploited by
a hacker or not. Third, it misunderstands the performance
trade-off: it is not necessary to check every array access, only to perform enough checking to discourage
hackers. Therefore, bound checking code should only
run when an environment becomes hostile [23].
Bound checking code tends to crosscut the entire application
. For example, properly written C functions accepting
an array argument commonly take a second argument holding
the array size: the first one allows the function to access
the array while the second is used to ensure correctness of
accesses. In Squid, bound checking code can be found in
any of the 104 ".c" files of its source code. On the 57635
lines composing these ".c" files, at least 485 check bounds.
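The idiom being counted here typically looks like the following hand-written guard (our own minimal example, not code from Squid), where a function receiving a buffer also receives its size and checks every write against it.

#include <stddef.h>
#include <string.h>

/* Append `len` bytes of `src` to `dst`, which holds `dst_size` bytes and
   already contains `used` bytes. The explicit size parameter and the guard
   below are exactly the kind of bound-checking code that ends up scattered
   across a code base. Returns 0 on success, -1 if the write would overflow. */
static int checked_append(char *dst, size_t dst_size, size_t used,
                          const char *src, size_t len) {
    if (used > dst_size || len > dst_size - used)   /* bound check */
        return -1;
    memcpy(dst + used, src, len);
    return 0;
}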
This problem fails to be handled properly in current aspect
languages as they lack the ability to trigger advice upon accesses made through an alias of a variable. Again, many aspect-oriented systems offer only static weaving capabilities, preventing the administrator from choosing the security/performance trade-off suiting his needs.
2.3
From fetching to prefetching
Operations like retrieving a file on a local disk or over the
Web can be sped up if the underlying software anticipates
user requests and starts fetching documents beforehand. Such prefetching schemes distinguish themselves from each other in the way they predict future user requests. These "oracles"
actually prevent a clean encapsulation of prefetching
in a single module communicating with the rest of the application
through well-defined interfaces since predictions are
based on information meant to be private to other modules.
In addition, it is very likely that there is no universal perfect
oracle [19]. A statically linked prefetching module is
therefore inappropriate, but prefetching modules along with
the necessary oracles should be loaded and unloaded on the
fly. Due to their crosscutting nature, prefetching modules
including such oracles are better written with aspects [32].
Coady et al. have already pointed out the crosscutting
nature of prefetching in the FreeBSD OS [12]. In our previous
work considering the Squid Web cache, we reached a
similar conclusion [32]. We have previously shown that this
concern can be addressed with cflow-like constructs.
Despite potential performance improvements, prefetching
also increases resource consumption (e.g. network prefetching
consumes local storage and bandwidth). When the pressure
on resources is too high, prefetching computation competes
for them against regular user requests, and slows down
their treatment instead of speeding it up. In such cases,
prefetching should therefore be, temporarily, disabled. Squid
essentially manages file descriptors, a resource only available
in a limited quantity. A file descriptor is used between the
underlying operating system and applications to describe a
network connection or a file on the disk. Squid's file descriptor
management is based on a global variable that tracks the
number of file descriptors currently in use. By comparing
its value with the maximum number of file descriptors allowed
by the operating system, it is possible to determine whether prefetching should be disabled or resumed.
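The test itself is a one-liner; a sketch of the idea follows, with hypothetical counter names and an illustrative 75% threshold standing in for Squid's actual globals and policy.

/* Hypothetical stand-ins for Squid's file descriptor accounting. */
static int number_of_fd_in_use = 0;    /* descriptors currently open */
static int max_fd_allowed      = 1024; /* limit imposed by the OS    */

/* Prefetching is worthwhile only while descriptor pressure stays low;
   here it is disabled once more than 75% of the descriptors are in use. */
static int prefetching_allowed(void) {
    return number_of_fd_in_use * 100 < max_fd_allowed * 75;
}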
For this problem of file descriptor consumption, the current practice of checking within the advice whether prefetching should be disabled is a bad practice that impedes both readability and maintainability. A mechanism is needed within the aspect language to restrict advice execution at times when the pressure on resources is too high. This problem was not addressed in our previous work.
AN EXPRESSIVE ASPECT LANGUAGE FOR SYSTEM PROGRAMMING IN C
While AOP seems to be the obvious choice to tackle the
crosscutting concerns introduced above, none of the existing
AO systems provides explicit support for some of their essential
elements, in particular, join point sequences for protocols
, and references to aliases which are local to a function.
In this section we introduce a new aspect language for
system programming in C that allows such crosscutting concerns
to be expressed concisely. In order to make this point,
we first revisit the examples by concisely aspectizing them
using our language. (Note that our aspect language is expressive
in the sense of enabling the concise definition of certain
types of aspects, especially compared to other tools for
system-level manipulations, but not necessarily more expressive
than existing approaches in a language-theoretic sense.)
We then define the join point model underlying our language
precisely, followed by the definition of its syntax and informal
semantics. Finally, we illustrate how its semantics can
be formally defined in terms of a small-step operational semantics
using the framework introduced in [14].
3.1
Example crosscutting concerns revisited
We now revisit the concerns discussed in section 2 in order
to show our language in action and give evidence that it
allows such concerns to be concisely modularized.
The aspect shown in Fig. 2 translates transport protocols
from TCP to UDP. A protocol defines a sequence of function
calls, so the top-level operator of this aspect is seq.
The sequence aspect syntactically consists of a list of pairs
of pointcut and advice (separated by then). In the example
, the TCP protocol starts with a call to socket() with
three constant arguments: AF_INET, SOCK_STREAM and
0. When such a call is matched, the second parameter is
replaced by SOCK_DGRAM as required by the UDP protocol
. The result of this transformed call, the file descriptor,
is bound to fd by return(fd). Then the next call to connect
() with the same file descriptor fd as its first parameter
is matched. In this case the values of the other parameters
seq( call(int socket(int, int, int)) && args(AF_INET, SOCK_STREAM, 0) && return(fd)
then socket(AF_INET, SOCK_DGRAM, 0);
call(int connect(int, struct socketaddr, socklen_t)) && args(fd, address, length)
then returnZero();
// where int returnZero() { return 0; }
( call(size_t read(int, void, size_t)) && args(fd, readBuffer, readLength)
then recvfrom(fd, readBuffer, readLength, 0, address, length);
|| call(size_t write(int, void, size_t)) && args(fd, writeBuffer, writeLength)
then sendto(fd, writeBuffer, writeLength, 0, address, length); )
call(int close(int)) && args(fd) ; )
Figure 2: An Aspect for Switching Transport Protocols, from TCP to UDP
seq( call(void malloc(size_t))
&& args(allocatedSize) && return(buffer) ;
write(buffer) && size(writtenSize)
&& if(writtenSize > allocatedSize)
then reportOverflow();
call(void free(void)) )
Figure 3: An Aspect for Detecting Buffer Overflow
are bound to arguments address and length, and the original
call is replaced by returnZero(). Indeed, there is no connect
step in the UDP protocol. After that, calls to read() and
write() (using the `or' on aspects: ||) on the same file descriptor
fd are translated to UDP recvfrom() and sendto(),
respectively. Note that sequences of such access are potentially
translated (due to use of the repetition operator *).
Finally, a call to close() on fd terminates the TCP protocol
as well as the UDP protocol and thus is not modified (i.e.,
there is no then clause). This last step is required to free
the variables used in the sequence (here, fd, address and
length). Indeed, this aspect can use numerous (instances of
these) variables when it deals with interleaved sequences, as
each call to socket() creates a new instance of the sequence.
The aspect shown in Fig. 3 detects buffer overflows. The
corresponding sequence starts when the function malloc()
returns the buffer address which is then bound to buffer.
Then, each time this address is accessed (through a global
variable or a local alias) the size of the data to be written is
compared with the size of the initially allocated memory. If
the former exceeds the latter, an overflow is indicated. The
sequence ends when the memory is deallocated using free().
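The situation this aspect guards against is the classic heap overflow sketched below (our own example); the comments mark the points where the sequence of Fig. 3 starts and ends and where its writtenSize > allocatedSize test would fire.

#include <stdlib.h>
#include <string.h>

int main(void) {
    size_t allocated = 16;
    char *buffer = malloc(allocated);       /* starts the sequence of Fig. 3 */
    if (buffer == NULL)
        return 1;

    const char *input = "a string that is clearly longer than 16 bytes";
    size_t written = strlen(input) + 1;

    /* An unguarded memcpy here would write past the allocated block;
       the aspect would observe the write and report writtenSize > allocatedSize. */
    if (written <= allocated)
        memcpy(buffer, input, written);

    free(buffer);                           /* ends the sequence */
    return 0;
}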
The aspect in Fig. 4 introduces prefetching in a web cache.
The first controlflow phrase initializes prefetching when
an HTTP response is built (clientBuildReply()) within the
control flow of a client request (clientSendMoreData()). The
until clause stops prefetching when the number of connections becomes too large, a situation where prefetching would effectively degrade performance. The second controlflow phrase analyzes hyperlinks in a page being transmitted (i.e., when comm_write_mbuf() is called within the control flow of clientSendMoreData()). Finally, the last call phrase prefetches hyperlinks analyzed by the second aspect. It does so
by replacing the method call to clientWriteComplete() with
retrieveHyperlinks(). Finally, note that the two require
clauses at the top of the aspect declare the types of the
global variables of the base program used in the aspects.
3.2
Join points
A join point model defines the points in the execution
of the base program to which pointcuts may refer. In our
JP
::= callJP(val funId(val
))
|
readGlobalJP(varId, val)
|
readJP(@, val)
|
writeGlobalJP(varId, val, size)
|
writeJP(@, val, size)
|
controlflowJP(---f
unId, cfEnd)
|
controlflowstarJP(---f
unId, cfEnd)
cf End ::= callJP(val funId(val
))
|
readGlobalJP(varId, val)
|
writeGlobalJP(varId, val, size)
val
::= 0 | 1 | 2 | ...
// int
|
@0 | @1 | @2 | ... // int*
|
... // values of other C types
Figure 5: Join point model
case, join points are defined by JP in the grammar shown
in Fig. 5. A join point is either:
A call of a function callJP(v_1 funId(v_2)) with function name funId, return value v_1 and a vector of arguments v_2.
A read access, which comes in two variants: readGlobalJP(varId, v) denotes reading a global variable with name varId holding the value v; readJP(@, v) denotes reading a global variable or a local alias with address @ holding the value v.
A write access, which also comes in two variants: writeGlobalJP(varId, v, size) denotes assignment to a global variable with name varId of the value v of size size; writeJP(@, v, size) denotes assignment to a global variable or a local alias with address @ of the value v of size size.
A cflow expression controlflowJP(funId, c), where funId = [funId_1, ..., funId_n] is a stack of function names, and c (either a function call or an access to a global variable) occurs within the body of function funId_n. Such a join point requires a call to funId_(i+1) within the body of funId_i.
A cflow expression controlflowstarJP(funId, c), where funId = [funId_1, ..., funId_n] is a partial stack of function names, and c (either a function call or an access to a global variable) occurs within the control flow of function funId_n. Such a join point requires a call to funId_(i+1) within the control flow of (i.e., not necessarily in the body of) funId_i.
Two features of this join point model may be surprising
at first sight: distinction of accesses to aliases from those to
global variables and explicit representation of control flow
require Number_Of_Fd as int;
require Squid_MaxFd as int;
controlflow(call(void clientSendMoreData(void, char, size_t)),
call(HttpReply clientBuildReply(clientHttpRequest, char, size_t))
&& args( request, buffer, bufferSize ))
then startPrefetching(request, buffer, bufferSize);
&& until(writeGlobal(int Number_Of_Fd) && if((Number_Of_Fd) * 100 / (Squid_MaxFd) > 75) ; )
controlflow( call(void clientSendMoreData(void, char, size_t)),
call(void comm_write_mbuf(int, MemBuf, void, void))
&& args(fd, mb, handler, handlerData) && if(! isPrefetch(handler)) )
then parseHyperlinks(fd, mb, handler, handlerData);
call(void clientWriteComplete(int, char, size_t, int, void))
&& args(fd, buf, size, error, data) && if(! isPrefetch(handler))
then retrieveHyperlinks(fd, buf, size, error, data);
Figure 4: An Aspect for Prefetching
expressions. Both are motivated by our quest for efficiency
and are grounded in strong implementation constraints in
the context of dynamic weaving of binary C code: an access
to a local alias is several magnitudes slower than that to a
global variable and matching of control flow join points can
be done using an atomic test on the implementation level.
3.3
Pointcuts
We now present a pointcut language (see Fig. 6) that provides
constructs to match individual join points.
Primitive pointcuts are defined by PPrim and comprise
three basic pointcuts matching calls, global variable accesses,
and control flow join points. Primitive pointcuts can also be
combined using a logical "or" noted ||.
A call pointcut PCall selects all function call join points callJP(val funId(val)), i.e., all calls to a function matching the signature type funId(type), where the arguments of the function can be bound to pointcut variables using the argument binder args(pattern) and the return value can be bound to a pointcut variable using a return clause return(pattern). The two constructs args(pattern) and return(pattern) can also provide pattern matching by using values (or already bound pointcut variables) in pattern. Pointcuts can also depend on a boolean condition using the if-constructor.
A global access pointcut PAccGlobal selects either all read
join points readGlobalJP(varId, val) or all write join points
writeGlobalJP(varId, val, size) on the global base program
variable varId. In these cases, the read or written value can
be bound to a variable using value(pattern); in addition, the
size of the written value can be bound with size(varName).
Pattern matching can also be used for variable access.
A control flow pointcut PCf of the form controlflow(PCallSig_1, ..., PCallSig_n, PCfEnd) matches all join points of the form controlflowJP(funId_1, ..., funId_n, cfEnd), where the function identifier in PCallSig_i is funId_i. Similarly, a control flow pointcut may match a global variable access for a given stack configuration. The pointcuts of the form controlflowstar(...) select calls or global variable accesses in a stack context allowing for calls that are not directly nested within one another.
Finally, PAcc, an access pointcut for a global variable or all of its local aliases, matches all join points of the form readJP or writeJP.
Asp
::= AspP rim [ && until( AspP rim ) ]
|
AspSeq [ && until( AspP rim ) ]
AspP rim
::= P P rim Advice
AspSeq
::= seq( AspP rim
AspSeqElts
AspSeqElt )
AspSeqElts ::= [AspSeqElts] AspSeqElt [ ]
AspSeqElt ::= AspP rim
|
P Acc Advice
|
(AspSeqElt || AspSeqElt)
Advice
::= [ then f unId(----pattern
) ] ;
Figure 7: Aspect language
3.4
Aspect Language
The aspect language we propose is defined in Fig. 7. Aspects
Asp are either primitive AspPrim or sequences of primitive aspects AspSeq.
A primitive aspect AspPrim combines a primitive pointcut with an advice that will be applied to all join points selected by the pointcut. If the primitive pointcut has the form p_1 || p_2, then all variables used in the advice have to be bound in both p_1 and p_2.
An advice (Advice) is a C function call that replaces a join
point in the base program execution (similarly to around in
AspectJ). It must have the same return type as the join
point it replaces: the type of the global variable in case of a
read access, void for a write access and the return type of
the function for a call. When the advice is empty (no then
clause), the original join point is executed. The original join
point can be skipped by calling an empty C function.
A sequence aspect is composed of a sequence of primitive
aspects. A sequence starts when the first primitive aspect
matches. Then the second primitive aspect becomes active
instead of the first one. When it matches, the third aspect
becomes active instead of the second one. And so on, until
the last primitive aspect in the sequence. All but the first
and last primitive aspects can be repeated zero or multiple
times by using *: in this case, the primitive aspect is active
P P rim
::= P Call
|
P AccGlobal
|
P Cf
|
P P rim || P P rim
P Call
::= P CallSig [ && args( ----pattern
) ] [ && return( pattern ) ] [ && P If ]
P CallSig
::= call( type f unId(-type
) )
P If
::= if( expr ) [ && P If ]
P AccGlobal
::= readGlobal( type varId ) [ && value( pattern ) ] [ && P If ]
|
writeGlobal( type varId ) [ && value( pattern ) ] [ && size( pattern ) ] [ && P If ]
P Cf
::= controlflow( P CallSigList, P Cf End )
|
controlflowstar( P CallSigList, P Cf End )
P CallSigList ::= P CallSig [ , P CallSigList ]
P Cf End
::= P Call | P AccGlobal
P Acc
::= read( var ) [ && value( pattern ) ] [ && P If ]
|
write( var ) [ && value( pattern ) ] [ && size( pattern ) ] [ && P If ]
pattern
::= var | val
Figure 6: Pointcut language
A
::= A
|
A || A
; parallelism
A
::= a.A
; recursive definition (a Rec)
|
C I; A
; prefixing
|
C I; a
; end of sequence (a Rec)
|
C I; STOP ; halting aspect
|
A P A
; choice
Figure 8: Tiny aspect language
as long as the following one in the sequence does not
match. Branching, i.e., a logical `or' between two primitive
aspects, can be introduced in a sequence by the operator ||.
An element of the sequence can also match a global variable
of the base program and accesses to its local aliases, as
soon as its address is known (i.e., a previous primitive pointcut
has already bound its address to a pointcut variable).
Hence, an aspect matching accesses cannot start a sequence.
Every join point matching the first primitive pointcut of a
sequence starts a new instance of the sequence. The different
instances are matched in parallel.
A primitive or a sequence aspect a can be used in combination with an expression until(a_1) to restrict its scope. In this case, once a join point has been matched by a, the execution of a proceeds as previously described until a_1 matches.
To conclude the presentation of our language, note that it
does not include some features, such as named pointcuts as
arguments to controlflows and conjunctive terms, which
are not necessary for the examples we considered but which
could easily be added. (As an aside, note that such extensions
of the pointcut language may affect the computability
of advanced algorithmic problems, such as whether a pointcut
matches some part of any base program [25].)
3.5
Towards a formal semantics for expressive
aspects
In the previous sections, we have given an informal semantics
of our aspect language. We now illustrate how the
aspect language could be formally defined by translating one
of the example aspects into a formal aspect language extending that used in the formal framework of [14].
The original formal language must be extended in order to
deal with halting aspects, an unbounded number of sequential
aspects and arbitrary join point predicates. The grammar
of the extension, our tiny aspect language, is defined in
Figure 8. In this language, aspect expressions A consist of parallel combinations of aspects, C is a join point predicate
(similar to our pointcut language) expressed as a conjunction
of a term pattern and possibly an expression from the
constraint logic programming language CLP(R) [20].
An aspect A is either:
A recursive definition.
A sequence formed using the prefix operation C I; X, where X is an aspect or a recursion variable and I a piece of code (i.e., an advice).
A choice construction A_1 P A_2 which chooses the first aspect that matches a join point (the other is thrown away). If both match the same join point, A_1 is chosen.
A parallel composition of two aspects A_1 || A_2 that cannot occur in a choice construction.
A halting aspect STOP.
The semantics of the protocol translation aspect (from
TCP to UDP) is given in Fig. 9. A sequence can have several
instances. This is translated into the language A by the expression a_1 || ... which starts a new sequence a_1 once the first join point has been matched and continues to match the rest of the sequence in progress. The repetition operator * is translated into recursion on the variable a_2. The branching operator || is translated into the choice operator
a_1 . callJP(fd socket(AF_INET, SOCK_STREAM, 0)) socket(AF_INET, SOCK_DGRAM, 0);
a_1 || ( callJP(a connect(fd, address, length)) returnZero();
a_2 . callJP(b close(fd)) skip; STOP
P callJP(c read(fd, readBuffer, readLength)) recvfrom(fd, readBuffer, readLength, 0, address, length); a_2
P callJP(d write(fd, writeBuffer, writeLength)) sendto(fd, writeBuffer, writeLength, 0, address, length); a_2
Figure 9: Definition of the protocol translation using the tiny aspect language
P. Finally, the last primitive aspect of the sequence occurs
as the first aspect of a choice to get priority over the join
points read and write because of the *. Note that we use pattern matching in A and that an overbar marks the first occurrence of a variable (i.e., its definition, not a use).
Note that formal definitions such as that of the protocol
translation aspect precisely define several important issues,
in particular, when new instances of the sequence aspect are
created, and disambiguate potentially non-deterministic
situations, e.g., when two pointcuts of consecutive primitive
aspects in the sequence match at the same time.
DYNAMIC WEAVING WITH ARACHNE
Arachne is built around two tools, an aspect compiler and
a runtime weaver. The aspect compiler translates the aspect
source code into a compiled library that, at weaving time, directs
the weaver to place the hooks in the base program. The
hooking mechanisms used in Arachne are based on improved
techniques originally developed for Dyner [32]. These techniques
allow rewriting the binary code of executable files on the fly, i.e., without pausing the base program, as long
as these files conform to the mapping defined by the Unix
standard [35] between the C language and x86 assembly language
. Arachne's implementation is structured as an open
framework that allows to experiment with new kinds of join
points and pointcut constructs. Another important difference
between Arachne and Dyner is that Dyner requires compile-time preparation of the base program, whereas
Arachne does not. Hence Arachne is totally transparent for
the base program while Dyner is not.
4.1
The Arachne Open Architecture
The Arachne open architecture is structured around three
main entities: the aspect compiler, the instrumentation kernel
, and the different rewriting strategies. The aspect compiler
translates the aspect source code into C before compiling
it. Weaving is accomplished through a command line
tool weave that acts as a front end for the instrumentation
kernel. weave relays weaving requests to the instrumentation
kernel loaded in the address space of the program
through Unix sockets. Upon reception of a weaving request,
the instrumentation kernel selects the appropriate rewriting
strategies referred by the aspects to be woven and instruments
the base program accordingly. The rewriting strategy
consults the pointcut analysis performed by the aspect
compiler to locate the places where the binary code of the
base program needs to be rewritten. It finally modifies the
binary code to actually tie the aspects to the base program.
With this approach, the Arachne core is independent of
a particular aspect, of the aspect language, of the particular
processor architecture, and of a particular base program.
In fact, all dependencies on the aspect language implementation are limited to the aspect compiler. All dependencies on the operating system are localized in the instrumentation kernel and, finally, all dependencies on the underlying hardware architecture are modularized in the rewriting strategies.
4.1.1
The Arachne aspect compilation process
The aspect compilation scheme is relatively straightforward
: it transforms advices into regular C functions. Pointcuts
are rewritten as C code driving hook insertions into
the base program at weaving time. There are however cases
where the sole introduction of hooks is insufficient to determine
whether an advice should be executed. In this case,
the aspect compiler generates functions that complement
the hooks with dynamic tests on the state of the base program
. These dynamic tests are called residues in AspectJ
and the rewritten instructions within the base program the
shadow [16]. Once the aspects have been translated into C,
the Arachne compiler uses a legacy C compiler to generate a
dynamically linked library (DLL) for the compiled aspects.
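To give an intuition of what a hook with a residue amounts to, here is a deliberately simplified sketch in plain C: a dynamic test decides whether the advice applies, and otherwise the relocated original code runs. This is our own illustration of the general idea, with hypothetical names; it is not the code Arachne actually generates.

/* Simplified picture of a hook with a residue. The shadow in the base
   program is redirected to hook_dispatch, which performs a dynamic test
   (the residue) before choosing between the advice and the original code. */
typedef int (*orig_call_t)(int fd);
typedef int (*advice_t)(int fd);

struct hook {
    orig_call_t original;   /* relocated original instruction(s)            */
    advice_t    advice;     /* compiled advice function                      */
    int         bound_fd;   /* pointcut variable bound earlier in a sequence */
};

static int hook_dispatch(struct hook *h, int fd) {
    if (fd == h->bound_fd)        /* residue: dynamic test on program state */
        return h->advice(fd);     /* advice replaces the join point         */
    return h->original(fd);       /* otherwise execute the original code    */
}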
4.1.2
The Arachne weaving process
From a user viewpoint, the Arachne weave and deweave
command line programs have the same syntax as Dyner's versions. They both take two arguments. The first identifies the
process to weave aspects in or deweave aspects from, and
the second indicates the aspect DLL. However, Arachne can
target potentially any C application running on the machine, while Dyner was limited to applications that had been compiled with it. When Arachne's weave receives a
request to weave an aspect in a process that does not contain
the Arachne instrumentation kernel, it loads the kernel
in the process address space using standard techniques [11].
The instrumentation kernel is transparent for the base
program as the latter cannot access the resources (memory
and sockets essentially) used by the former. Once injected, the kernel creates a thread with the Linux system call clone. This thread handles the different weaving requests. Compared to the POSIX pthread_create function, the usage of clone allows the instrumentation thread to prevent the base program from accessing its sockets. The instrumentation
kernel allocates memory by using side effect free allocation
routines (through the Linux mmap API). Because the
allocation routines are side effect free, Arachne's memory is
totally invisible to the base program. It is up to the aspect
to use Arachne's memory allocation routines or base program
specific allocation functions. This transparency turns
out to be crucial in our experiments. Legacy applications
such as Squid use dedicated resource management routines
and expect any piece of code they run to use these routines.
Failures will result in an application crash.
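A minimal sketch of these two mechanisms, as the instrumentation kernel might use them, is given below; the flag choice and the function names are our own illustration, not Arachne's kernel code.

    #define _GNU_SOURCE
    #include <sched.h>
    #include <signal.h>
    #include <sys/mman.h>

    #define STACK_SIZE (64 * 1024)

    static int weaving_loop(void *arg)
    {
        (void)arg;
        /* accept weaving requests on a private socket and rewrite code */
        return 0;
    }

    /* Called once the kernel DLL has been injected into the base program. */
    void start_instrumentation_kernel(void)
    {
        /* Side-effect-free allocation: the memory comes straight from the
         * operating system and never touches the base program's allocator. */
        void *stack = mmap(NULL, STACK_SIZE, PROT_READ | PROT_WRITE,
                           MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (stack == MAP_FAILED)
            return;

        /* clone gives finer control than pthread_create: the new thread
         * shares the address space (CLONE_VM) but not the file descriptor
         * table, so the base program cannot reach its sockets. */
        clone(weaving_loop, (char *)stack + STACK_SIZE,
              CLONE_VM | CLONE_FS | SIGCHLD, NULL);
    }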
After loading an aspect, the instrumentation kernel rewrites
the binary code of the base program. These rewriting strategies
are not included in the kernel and must be fetched on
demand by each loaded aspect.
4.2
Rewriting strategies
Rewriting strategies are responsible for transforming the
binary code of the base program to effectively tie aspects to
Figure 10: Generic hook operations. (The figure shows the join point shadow in the compiled legacy base program replaced by a jump to a hook generated at weaving time; the entry hook saves registers, the residue (dynamic tests) and/or advices generated at aspect compile time in the aspect DLL execute, the relocated tailored instructions update the registers, and the return hook restores them.)
the base program at weaving time. These strategies localize
Arachne's main dependencies on the underlying hardware architecture. In general, rewriting strategies need to collect information about the base program. This information typically consists of the addresses of the different shadows, their sizes, the symbols (i.e., function or global variable names) they manipulate, etc. In order to keep compiled aspects independent from the base program, this information is gathered on demand at runtime. The mapping between a symbol name in the base program source code and its address in memory is inferred from linking information contained in the base program executable. However, because this information can be costly to retrieve, Arachne collects and stores it in meta-information DLLs. These DLLs behave as a kind of cache and lessen the cost of collecting the information required to instrument the base program. To implement our aspect language, Arachne provides a set of eight rewriting strategies that may use each other.
4.2.1 Strategies for call, readGlobal and writeGlobal
In Arachne, call, readGlobal and writeGlobal allow an
advice to be triggered upon a function call, a read on a
global variable or a write respectively. While the implementation
of readGlobal and writeGlobal in Arachne is close
to the one in Dyner, Arachne implements the strategy for
call by rewriting function invocations found in the base
program. Dyner instead rewrites the function body of the
callee. On the Intel architecture, function calls benefit from the direct mapping to the x86 call assembly instruction that is used by almost all, if not all, compilers. Write and read
accesses to global variables are translated into instructions
using immediate, hard coded addresses within the binary
code of the base program. By comparing these addresses
with linking information contained in the base program executable
, Arachne can determine where the global variable
is being accessed. Therefore those primitive pointcuts do
not involve any dynamic tests. The sole rewriting of the
binary base program code is enough to trigger advice and
residue
1
executions at all appropriate points.
The size of the x86 call instruction and the size of an x86
jump (jmp) instruction are the same. Since the instruction
performing an access to a global variable involves a hard
coded address, x86 instructions that read or write a global
1
Residues (i.e. dynamic tests on the base program state) are
required when these primitive pointcuts are combined with
conditional pointcuts or when pattern matching is involved.
variable have at least the size of a x86 jmp instruction. Hence
at weaving time, Arachne rewrites them as a jmp instruction
to a hook. Hooks are generated on the fly on freshly allocated
memory. As shown in figure 10, hooks contain a few
assembly instructions that save and restore the appropriate
registers before and after an advice (or shadow) execution.
A generic approach is to have hooks save the whole set of
registers, then execute the appropriate residue and/or advice
code before restoring the whole set of registers; finally
the instructions found at the join point shadow are executed
to perform the appropriate side effects on the processor registers
. This is accomplished by relocating the instructions
found at the join point shadow. Relocating the instructions
makes the rewriting strategies handling read and write access
to global variable independent from the instruction generated
by the compiler to perform the access
2
. The limited number of x86 instructions used to invoke a function allows Arachne's rewriting strategy to exploit more efficient, relocation-free hooks.
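Because a direct near call (opcode 0xE8) and a near jump (0xE9) both occupy five bytes and encode their target as a displacement, a call site can be redirected to a hook without relocating neighbouring instructions. The sketch below illustrates the idea; it ignores making the code page writable and synchronizing other threads, and it is our own illustration rather than Arachne's rewriting code.

    #include <stdint.h>
    #include <string.h>

    /* Rewrite the 5-byte "call rel32" at call_site into "jmp rel32"
     * targeting hook_addr.  Both instructions compute their target
     * relative to the end of the instruction. */
    static void redirect_call_site(uint8_t *call_site, void *hook_addr)
    {
        if (call_site[0] != 0xE8)              /* not a direct near call */
            return;
        int32_t rel = (int32_t)((uintptr_t)hook_addr
                                - (uintptr_t)(call_site + 5));
        call_site[0] = 0xE9;                   /* near jmp, same length */
        memcpy(call_site + 1, &rel, sizeof rel);
    }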
4.2.2 Strategies for controlflow and controlflowstar
Every time a C function is called, the Linux runtime
creates an activation record on the call stack [35]. Like
Dyner, Arachne's implementation of the rewriting strategy
for controlflow uses the most deeply nested function
call (or global read or write access) in the control flow pointcut
as shadow. This shadow triggers a residue. This residue
uses the activation record's chaining to check whether the
remaining function calls of the control flow, are on the call
stack maintained by the Linux runtime. An appropriate
usage of hashtables that store the linking information contained
in the base program executables can thereby decrease
the cost of determining if a specific function is the
caller of another to a pointer comparison. Therefore, the
residue for a controlflow with n directly nested functions
implies exactly n pointer comparisons. However, the worst-case residue runtime for the indirect control flow operator controlflowstar, which allows functions that are not directly nested, is proportional to the base program stack depth.
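The residue can be pictured as a walk along the chain of activation records. The sketch below shows the idea for directly nested calls on x86, where the saved frame pointer links the records; the structure layout and the range check stand in for the hashtable lookup described above and do not reproduce Arachne's residue code.

    #include <stddef.h>
    #include <stdint.h>

    /* One activation record as chained by the x86 frame pointer. */
    struct frame {
        struct frame *caller;          /* saved %ebp of the caller */
        void         *return_address;
    };

    /* Check that the n enclosing frames return into the expected caller
     * functions, given as [lo, hi) code ranges, one level at a time. */
    static int controlflow_residue(const struct frame *fp,
                                   const uintptr_t lo[], const uintptr_t hi[],
                                   size_t n)
    {
        for (size_t i = 0; i < n && fp != NULL; i++, fp = fp->caller) {
            uintptr_t ra = (uintptr_t)fp->return_address;
            if (ra < lo[i] || ra >= hi[i])
                return 0;          /* not called from the expected function */
        }
        return 1;
    }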
4.2.3 Strategies for read and write
read and write are new join points not included in Dyner that have been added to the latest version of Arachne. Their implementation relies on memory page protection as provided by the Linux operating system interface (i.e., mprotect) and the Intel processor specifications [18]. A read or write pointcut triggers a residue that relocates the bound variable into a memory page that the base program is not allowed to access and adds a dedicated signal handler. Any attempt made by the base program to access the bound variable will then trigger the execution of the previously added
signal handler. This handler will then inspect the binary
instruction trying to access the protected page to determine
whether it was a read or a write access before eventually
executing the appropriate advice.
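This strategy follows the classic mprotect/SIGSEGV idiom sketched below. The sketch protects the page that holds a watched variable (Arachne instead relocates the bound variable to a protected page, but the trapping mechanism is the same); decoding the faulting instruction to distinguish a read from a write, and re-protecting the page afterwards, are omitted. It is an illustration, not Arachne's implementation.

    #define _GNU_SOURCE
    #include <signal.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <unistd.h>
    #include <stdint.h>

    static long  page_size;
    static void *watched_page;         /* page holding the bound variable */

    static void on_access(int sig, siginfo_t *info, void *ctx)
    {
        (void)sig; (void)ctx;
        uintptr_t a = (uintptr_t)info->si_addr;
        if (a >= (uintptr_t)watched_page &&
            a <  (uintptr_t)watched_page + (uintptr_t)page_size) {
            /* ...decode the faulting instruction to tell a read from a
             * write, execute the matching advice, then let the access
             * complete by unprotecting the page... */
            mprotect(watched_page, (size_t)page_size, PROT_READ | PROT_WRITE);
        }
    }

    void watch_variable(void *var)
    {
        page_size = sysconf(_SC_PAGESIZE);
        watched_page = (void *)((uintptr_t)var & ~(uintptr_t)(page_size - 1));

        struct sigaction sa;
        memset(&sa, 0, sizeof sa);
        sa.sa_sigaction = on_access;
        sa.sa_flags = SA_SIGINFO;
        sigemptyset(&sa.sa_mask);
        sigaction(SIGSEGV, &sa, NULL);

        /* Any access to the page now raises SIGSEGV and reaches on_access. */
        mprotect(watched_page, (size_t)page_size, PROT_NONE);
    }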
4.2.4 Strategies for seq
Like read and write, seq is a new language feature of
Arachne. Dyner offers no equivalent construct. Arachne's
rewriting strategy of this operator associates a linked list to
2
About 250 x86 instruction mnemonics can directly manipulate
a global variable. This corresponds to more than one
thousand opcodes.
34
every stage inside the sequence except the last one. Each
stage in a sequence triggers a residue that updates these
linked lists to reflect state transitions of currently matching
execution flows. Upon matching of the first pointcut
of the first primitive aspect in the seq, a node is allocated
and added to the associated linked list. This node contains
a structure holding variables shared among the different
pointcuts within the sequence. Once a join point matches a pointcut of a primitive aspect denoting a stage in the sequence, Arachne consults every node in the linked list associated with the previous stage and executes the corresponding advice
3
. Arachne then updates the node and, in the absence of a star, moves it to the list associated with the currently matched pointcut. If the matching pointcut corresponds to the end of the sequence, structures are not moved into another list but freed. Our aspect compiler
includes an optimization where structures are allocated from
a resizable pool and upon a sequence termination, structures
are not freed but returned to the pool.
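The bookkeeping described above can be pictured as follows; the node and stage structures, the fixed number of bound variables and the function names are illustrative only and do not reproduce Arachne's data structures.

    #include <stdlib.h>

    /* One partially matched sequence instance: it carries the variables
     * bound so far and sits in the list of the stage it has reached. */
    struct seq_instance {
        struct seq_instance *next;
        void *bound[4];                /* variables shared between stages */
    };

    /* Per-stage list of instances currently waiting at that stage. */
    struct stage {
        struct seq_instance *matching;
    };

    /* The first stage matched: allocate a node and park it at stage 1. */
    static void seq_start(struct stage *first)
    {
        struct seq_instance *inst = calloc(1, sizeof *inst);
        if (inst == NULL)
            return;
        inst->next = first->matching;
        first->matching = inst;
    }

    /* A later stage matched: move every instance waiting at the previous
     * stage forward, or release it when the sequence is complete. */
    static void seq_advance(struct stage *prev, struct stage *next)
    {
        struct seq_instance *inst;
        while ((inst = prev->matching) != NULL) {
            prev->matching = inst->next;
            /* ...execute the advice attached to this stage here... */
            if (next != NULL) {
                inst->next = next->matching;
                next->matching = inst;
            } else {
                free(inst);            /* or return the node to a pool */
            }
        }
    }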
4.3
Arachne limitations
Aggressive optimizations of the base program might prevent Arachne from seamlessly weaving aspects. Two optimizations are not yet supported by Arachne. First, if the compiler inlines a function into another one within the binary code of the base program, the Arachne weaver will fail to properly handle pointcuts referring to that function. Second, control flow pointcuts are based on the chaining of activation records. On the x86 architecture, in leaf functions, optimizing compilers sometimes do not maintain this chaining in order to free one register for the rest of the computation. This, however, has not been a problem during our experiments, as we used the open source C compiler gcc. Arachne supports two of the three optimization levels proposed by gcc. Stripping, which removes linking information, and aggressive optimizations that break the interoperability between compilers and/or debuggers are incompatible with Arachne. In practice, Arachne can be used on applications, such as Squid, compiled with two of the three gcc optimization levels.
PERFORMANCE EVALUATION
Aspect-oriented solutions will be used only if the aspect system's language is expressive enough and if its overhead is low enough for the task at hand. The purpose
of this section is to study Arachne's performance. We first
present the speed of each Arachne language construct and
compare it to similar C language constructs. We then study
the overhead of extending Squid with a prefetching policy.
This case study shows that even if the cost of some Arachne
aspect language constructs might be high compared to C
language constructs, this overhead is largely amortized in
real applications.
5.1
Evaluation of the language constructs
This performance evaluation focuses on studying the cost
of each construct of our aspect language. To estimate the
cost for each construct of our aspect language, we wrote an
aspect using this construct that behaves as an interpreter of the base program.
3
In case the previous stage pointcut was used with a star, Arachne examines the nodes from the linked lists associated with the last two previous stages, and so on, until a non-starred primitive aspect in the sequence is reached.

Execution times (cycles):
               Arachne           Native           Ratio
call           28   (2.3%)       21   (1.9%)      1.3
seq            201  (0.5%)       63   (1.7%)      3.2
cflow          228  (1.6%)       42   (1.8%)      5.4
readGlobal     2762 (4.3%)       1    (0.2%)      2762
read           9729 (4.9%)       1    (0.6%)      9729
Table 1: Speed of each language construct used to interpret the base program compared to a native execution.

For example, to study the performance
of readGlobal, we wrote an aspect whose action returns the
value of the global variable referred in the pointcut, i.e., we
wrote aspects behaving like the base program. For each of
these aspects, we compare the time required to perform the
operation matching the pointcut, in case the operation is
interpreted by the woven aspect with the time required to
carry out the operation natively (without the woven aspect).
For example, to study the performance of readGlobal, we
first evaluate the time needed to retrieve the global variable
value through the code generated by the C compiler gcc
without any aspect woven and compare this value to the
time needed to retrieve the global variable value through
the aspect once it has been woven in the base program.
We express our measurements as a ratio between these two
durations to abstract from the experimentation platform.
This approach requires the ability to measure short periods
of time. For instance, a global variable value is usually
retrieved (readGlobal in our aspect language) in a single
clock tick. Since standard time measurement APIs were
not precise enough, our benchmarking infrastructure relies
on the rdtsc assembly instruction [18]. This instruction returns
the number of clock cycles elapsed since power up. The
Pentium 4 processor has the ability to dynamically reorder
the instructions it executes. To ensure the validity of our
measurement, we thus insert mfence instructions in the generated
code whose execution speed is being measured. An
mfence forces the preceding instructions to be fully executed
before going on. The pipeline mechanism in the Pentium 4
processor entails that the speed of a piece of assembly code
depends on the preceding instructions. To avoid such hidden
dependencies, we place the operation whose execution
time is being measured in a loop. We use gcc to unroll the
loop at compile time and we measure the time to execute
the complete loop. This measure divided by the number of
loop repetitions yields an estimation of the time required
to execute the operation. The number of times the loop is executed is chosen according to the relative variation of the measures, i.e., we increased the number of repetitions until ten runs yielded an average relative variation not exceeding 5%.
To check the correctness of our experimental protocol, we
measured the time needed to execute a nop assembly instruction
, which requires one processor cycle according to the
Intel specification. The measures of nop presented a relative
variation of 1.6%.
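Concretely, the measured operation can be bracketed as sketched below using gcc inline assembly on x86; this is our reading of the protocol described above, not the authors' benchmarking harness.

    #include <stdint.h>

    /* Serialize pending memory operations, then read the cycle counter. */
    static inline uint64_t cycles(void)
    {
        uint32_t lo, hi;
        __asm__ __volatile__("mfence\n\trdtsc"
                             : "=a"(lo), "=d"(hi) : : "memory");
        return ((uint64_t)hi << 32) | lo;
    }

    #define REPEAT 4096

    /* Average cost, in cycles, of one read of a global variable. */
    uint64_t measure_read(volatile int *global)
    {
        uint64_t start = cycles();
        for (int i = 0; i < REPEAT; i++)     /* unrolled by the compiler */
            (void)*global;
        return (cycles() - start) / REPEAT;
    }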
Table 1 summarizes our experimental results. Using the
aspect language to replace a function that returns immediately
is only 1.3 times slower than a direct, aspect-less, call
to that empty function. Since the aspect compiler packages
advices as regular C functions, and because a call pointcut
involves no residue, this good result is not surprising. When
Figure 11: controlflow, seq, and read performances (ratio to a normal function call versus the number of imbricated calls for controlflow; ratio to three calls versus the number of matching instances for seq).
an access to a global variable is replaced by an advice execution
, the hooks generated by the rewriting strategy need
to prepare the processor to call the advice function. This
increases the time spent in the hooks. In addition, while
an access to a global variable is often performed by a single
x86 instruction, an empty function is often composed
of four instructions. Hence the relative cost of an aspect
triggered upon a global variable access and a direct, aspect-less
, access to a global variable is slightly higher than the
corresponding ratio for functions. A seq of three invocations of empty functions is only 3.2 times slower than the direct, aspect-less, three successive function calls. Compared to the pointcuts used to delimit the different stages, the seq overhead is limited to a few pointer exchanges between the linked lists holding the bound variables. On Intel x86, global variable accesses benefit from excellent hardware support. In the absence of aspects, a direct global variable read is usually carried out in a single cycle. To trigger the advice execution, the Arachne runtime has to save and restore the processor state to ensure execution coherency, as advices are packaged as regular C functions (see also 4.2.1). It is therefore not surprising that a readGlobal on a global variable appears as being 2762 times slower than a direct, aspect-less global variable read. read performance can be explained in the same way: in the absence of aspects, local variables are accessed in a single cycle. The signal mechanism used by read requires that the operating system detect the base program's attempt to read a protected memory page before locating and triggering the signal handler set up by Arachne, as shown in 4.2.3. Such switches to and from kernel space remain slow. Using read to read a local variable is 9729 times slower than retrieving the local variable value directly, without aspects.
seq and controlflow can refer to several points in the execution
of the base program (i.e. different stages for seq and
different function invocations for the controlflow). The
runtime of these pointcuts grows linearly with the number
of execution points they refer to and with the number of
matching instances. Figure 11 summarizes a few experimental
results for controlflow and seq proving these points.
5.2
Case Study on a real application
Since, depending on the aspect construct used, interpreting
the base program with aspects can slow it down by a factor
ranging between 1.3 and 9729, we studied Arachne's performance
on a real world application, the Web cache Squid.
                          Top1                              Top2
                    Arachne     Manual    Diff (%)    Arachne     Manual    Diff (%)
Throughput (request/s)  5.59       5.59                  5.58       5.59
Response time (ms)   1131.42    1146.07      1.2       1085.31    1074.55      -1
Miss response time (ms) 2533.50  2539.52      0.2       2528.35    2525.34      1.8
Hit response time (ms)  28.96      28.76     -0.6         30.62      31.84      3.8
Hit ratio               59.76      59.35     -0.6         61.77      62.22      0.7
Errors                   0.51       0.50     -1.9          0.34       0.34      0
Table 2: Performance comparison between manual modification and Arachne for prefetching policy integration in Squid (Top1 and Top2 denote the two request rate peaks).
We extended Squid with a prefetching policy [9]. As described
in section 3.1, we implemented this policy as a set
of aspects and made a second implementation of this policy by editing the Squid source code and recompiling it. This section compares the performance of these two implementations using standard Web cache performance indicators: throughput, response time and hit ratio.
Obtaining access traces adequate for studying Web cache performance is difficult. The trace must be long enough to
fill the cache. Due to privacy issues, traces are usually not
publicly available. Since traces do not include the content of
the accessed pages, these pages must be downloaded again.
In the meantime the page contents may have changed and
even the URLs may have disappeared.
Instead of traces, we based our evaluation on Web Polygraph
[30]. Polygraph is a benchmarking tool developed by
the Squid team and featuring a realistic HTTP and SSL
traffic generator and a flexible content simulator.
We filled up the cache and simulated a one-day workload with the two request rate peaks observed in real-life environments [30]. Table 2 shows the results of our simulation. Measurements were made during the two request peaks. The hit time and the miss time (the time needed to deliver a document present, respectively not present, in the cache) are very similar. The differences between the version of Squid extended by Arachne and the one extended manually are imperceptible (less than 1%). Hence, even if the cost
of Arachne's aspect language constructs might seem high,
they are largely amortized in real applications. To give a typical example observed on our experimental platform: in the case of a cache hit, a 3.8 MB page was retrieved in a single second, the time spent in prefetching advices amounted to 1801 µsec, and the time spent within Arachne to execute the hooks and dynamic tests to 0.45 µsec. In a miss case, on average, a client retrieved the same page in 1.3 seconds, 16679 µsec were spent in the advices, and 0.67 µsec within Arachne itself.
RELATED WORK
Our work is directly related to other aspect weavers for
C, approaches for expressive aspect languages, and dynamic
weaving, in particular for C. In this section, we consider
related work in each of these fields in turn.
Apart from Dyner and Arachne, there are few aspect
weavers for C (or even C-like languages); some noteworthy exceptions are AspectC [12] (no available implementation) and AspectC++ [33]. All of these rely on source-code
transformation and thus cannot apply aspects to running
C applications as required by the applications we consider.
Furthermore, none of these systems provides explicit support
for aspects over join point sequences.
There is now quite a large body of work on the notion of expressive aspect languages, where "more expressive" is typically understood w.r.t. AspectJ's pointcut and advice model. Our work has been inspired by Event-based AOP [15],
which aims at the definition of pointcuts in terms of arbitrary
relations between events. Nevertheless, many other
approaches to expressive aspect languages exist: e.g., data-flow
relations [26], logic programming [13], process algebras
[3], graphs [5], and temporal logics [1], have all been proposed
as a basis for the definition of expressive aspect languages
. However, few of these encompass dynamic weaving
and only the latter has been applied to C code under efficiency
considerations similar to our setting.
Dynamic weaving is commonly realized in Java through
preprocessing at load-time like [8] or through the JVM Debugging
Interface [28]. These tools rely on bytecode rewriting
techniques, have typically limited expressivity (some do
not support field accesses) and incur a huge performance
overhead. Dynamic weaving through modification at runtime
is found infrequently for compiled languages. An exception
for Java is JasCo [21] whose most recent version (0.7)
supports dynamic weaving through the new instrumentation
API of Java 5.
Many instrumentation techniques have been proposed to rewrite binary code on the fly. In these approaches, the difficulties range from the complexity of rewriting binary code to the lack of a well-defined relationship between source code and the compiler-generated binary code. Hence many
approaches work on an intermediate representation of the
binary code and source language [34]. Producing this representation
first and then regenerating the appropriate binary
executable code has proven to be costly both in terms of
memory consumption and in CPU time.
A few other approaches have considered a direct rewriting
of the binary code at runtime. Dyninst [17] and dynamic
probes [27] allow programmers to modify any binary instruction
belonging to an executable. Dyninst however relies on
the Unix debugging API: ptrace. ptrace allows a third
party process to read and write the base program memory.
It is however highly inefficient: before using ptrace, the
third party process has to suspend the execution of the base
program and resume its execution afterwards. In comparison
, Arachne uses ptrace at most once, to inject its kernel
DLL into the base program process. In addition, Dyninst
does not free the programmer from dealing with low level
details. For example, it seems difficult to trigger an advice
execution upon a variable access with Dyninst: the translation
from the variable identifier to an effective address is left
to the user. Worse, Dyninst does not grant that the manipulation
of the binary instructions it performs will succeed.
Dyninst uses an instrumentation strategy where several adjacent
instructions are relocated. This is unsafe as one of
the relocated instructions can be the target of branching
instructions. In comparison, Arachne's join point model has been carefully chosen to avoid these kinds of issues; if an aspect can be compiled with Arachne, it can always be woven.
CONCLUSION AND FUTURE WORK
In this paper we have discussed three different crosscutting
concerns which are typical for C applications using OS-level
services and which frequently need to be applied at
runtime. We have motivated that such concerns can be expressed
as aspects and have defined a suitable aspect language
. This language is more expressive than those used in
other aspect weavers for C in that it provides support for
aspects defined over sequences of execution points as well as
for variable aliases. We have presented an integration of this
language into Arachne, a weaver for runtime weaving of aspects
in C applications. Finally, we have provided evidence
that the integration is efficient enough to apply such aspects
dynamically to high-performance applications, in particular
the Web cache Squid.
As future work, we intend to investigate the suitability of the proposed aspect language for other C applications. We also intend to investigate extending Arachne to the C++ language. Indeed, object-oriented programming heavily relies on protocol-based interface collaborations (hence sequence aspects). Along with its open architecture, extending Arachne to support C++ will pave the way to a relatively language-independent aspect and weaving infrastructure.
Finally,
Arachne's toolbox should be extended with support for aspect
interactions (e.g., analyses and composition operators).
REFERENCES
[1] R. A. Åberg, J. L. Lawall, M. Südholt, G. Muller, and
A.-F. L. Meur. On the automatic evolution of an os
kernel using temporal logic and AOP. In Proceedings
of Automated Software Engineering (ASE'03), pages
196204. IEEE, 2003.
[2] American National Standards Institute.
ANSI/ISO/IEC 9899-1999: Programming Languages
-- C. American National Standards Institute, 1430
Broadway, New York, NY 10018, USA, 1999.
[3] J. H. Andrews. Process-algebraic foundations of
aspect-oriented programming. In Proceedings of the
3rd International Conference on Metalevel
Architectures and Separation of Crosscutting
Concerns, volume 2192 of LNCS. Springer Verlag,
Sept. 2001.
[4] M. Arlitt and T. Jin. A workload characterization
study of the 1998 world cup web site. IEEE Network,
14(3):3037, May 2000.
[5] U. Aßmann and A. Ludwig. Aspect weaving by graph
rewriting. In U. W. Eisenecker and K. Czarnecki,
editors, Generative Component-based Software
Engineering (GCSE), Erfurt, Oct. 1999.
[6] CERT - Carnegie Mellon University. Vulnerability
note vu#613459, Feb. 2002. published on line:
http://www.kb.cert.org/vuls/id/613459.
[7] H. Chen and P. Mohapatra. Catp: A context-aware
transportation protocol for http. In International
Workshop on New Advances in Web Servers and
Proxy Technologies Held with ICDCS, 2003.
[8] S. Chiba and K. Nakagawa. Josh: An open
AspectJ-like language. In Proceedings of the third
international conference on Aspect-oriented software
development, pages 102111. ACM Press, Mar. 2004.
[9] K.-I. Chinen and S. Yamaguchi. An interactive
prefetching proxy server for improvement of WWW
latency. In Seventh Annual Conference of the Internet
Society (INET'97), Kuala Lumpur, June 1997.
[10] I. Cidon, A. Gupta, R. Rom, and C. Schuba. Hybrid
tcp-udp transport for web traffic. In Proceedings of the
18th IEEE International Performance, Computing,
and Communications Conference (IPCCC'99), pages
177184, Feb. 1990.
[11] S. Clowes. Injectso: Modifying and spying on running
processes under linux. In Black hat briefings, 2001.
[12] Y. Coady, G. Kiczales, M. Feeley, and G. Smolyn.
Using AspectC to improve the modularity of
Path-Specific customization in operating system code.
In V. Gruhn, editor, Proc. of the Joint 8th European
Software Engeneering Conference and 9th ACM
SIGSOFT Symposium on the Foundation of Software
Engeneering (ESEC/FSE-01), volume 26, 5 of
SOFTWARE ENGINEERING NOTES, pages 8898,
New York, Sept. 10-14, 2001. ACM Press.
[13] K. de Volder. Aspect-oriented logic meta
programming. In P. Cointe, editor, Meta-Level
Architectures and Reflection, 2nd International
Conference on Reflection, volume 1616 of LNCS,
pages 250272. Springer Verlag, 1999.
[14] R. Douence, P. Fradet, and M. Südholt. A framework
for the detection and resolution of aspect interactions.
In Proceedings of the ACM SIGPLAN/SIGSOFT
Conference on Generative Programming and
Component Engineering (GPCE'02), volume 2487 of
LLNCS, pages 173188. Springer-Verlag, Oct. 2002.
[15] R. Douence, O. Motelet, and M. Südholt. A formal
definition of crosscuts. In Proceedings of the 3rd
International Conference on Metalevel Architectures
and Separation of Crosscutting Concerns, volume 2192
of LNCS, pages 170186. Springer Verlag, Sept. 2001.
[16] E. Hilsdale and J. Hugunin. Advice weaving in
aspectj. In Proceedings of the 3rd international
conference on Aspect-oriented software development,
pages 2635. ACM Press, 2004.
[17] J. K. Hollingsworth, B. P. Miller, M. J. R. Goncalves,
O. Naim, Z. Xu, and L. Zheng. MDL: A language and
compiler for dynamic program instrumentation. In
IEEE Conference on Parallel Architectures and
Compilation Techniques (PACT), pages 201213, Nov.
1997.
[18] Intel Corportation. IA-32 Intel Architecture Software
Developer's Manual. Intel Corportation, 2001.
[19] V. Issarny, M. Banâtre, B. Charpiot, and J.-M.
Menaud. Quality of service and electronic newspaper:
The Etel solution. Lecture Notes in Computer Science,
1752:472496, 2000.
[20] J. Jaffar, S. Michaylov, P. J. Stuckey, and R. H. C.
Yap. The clp( r ) language and system. ACM Trans.
Program. Lang. Syst., 14(3):339395, 1992.
[21] JasCo home page. http://ssel.vub.ac.be/jasco/.
[22] R. Jones and P. Kelly. Backwards-compatible bounds
checking for arrays and pointers in c programs. In
M. Kamkar, editor, Proceedings of the Third
International Workshop on Automatic Debugging,
volume 2, pages 1326, May 1997.
[23] A. D. Keromytis. "Patch on Demand" Saves Even
More Time? IEEE Computer, 37(8):9496, 2004.
[24] G. Kiczales, J. Lamping, A. Menhdhekar, C. Maeda,
C. Lopes, J.-M. Loingtier, and J. Irwin.
Aspect-oriented programming. In M. Aksit and
S. Matsuoka, editors, Proceedings European
Conference on Object-Oriented Programming, volume
1241, pages 220242. Jyväskylä, Finland, June 1997.
[25] K. J. Lieberherr, J. Palm, and R. Sundaram.
Expressiveness and complexity of crosscut languages.
Technical Report NU-CCIS-04-10, Northeastern
University, Sept. 2004.
[26] H. Masuhara and K. Kawauchi. Dataflow pointcut in
aspect-oriented programming. In First Asian
Symposium on Programming Languages and Systems
(APLAS'03), 2003.
[27] R. J. Moore. Dynamic probes and generalised kernel
hooks interface for Linux. In USENIX, editor,
Proceedings of the 4th Annual Linux Showcase and
Conference, October 10-14, 2000, Atlanta, Georgia, USA. USENIX, Berkeley, CA, USA, 2000.
[28] A. Popovici, G. Alonso, and T. Gross. Just-in-time
aspects: efficient dynamic weaving for Java. In
Proceedings of the 2nd international conference on
Aspect-oriented software development, pages 100109,
Boston, Massachusetts, Mar. 2003. ACM Press.
[29] M. Rabinovich and H. Wang. DHTTP: An efficient
and cache-friendly transfer protocol for web traffic. In
INFOCOM, pages 15971606, 2001.
[30] A. Rousskov and D. Wessels. High-performance
benchmarking with Web Polygraph. Software Practice
and Experience, 34(2):187211, Feb. 2004.
[31] O. Ruwase and M. S. Lam. A practical dynamic buffer
overflow detector. In Proceedings of the 11th Annual
Network and Distributed System Security Symposium.
Internet Society, Feb. 2004.
[32] M. Segura-Devillechaise, J.-M. Menaud, G. Muller,
and J. Lawall. Web cache prefetching as an aspect:
Towards a dynamic-weaving based solution. In
Proceedings of the 2nd international conference on
Aspect-oriented software development, pages 110119,
Boston, MA, USA, Mar. 2003. ACM Press.
[33] O. Spinczyk, A. Gal, and W. Schroeder-Preikschat.
AspectC++: an aspect-oriented extension to the C++
programming language. In Proceedings of the Fortieth
International Conference on Tools Pacific, pages
5360. Australian Computer Society, Inc., 2002.
[34] A. Srivastava and A. Edwards. Vulcan: Binary
transformation in a distributed environment. Microsoft
Research Tech. Rpt. MSR-TR-2001-50, 2001.
[35] U. S. L. System Unix. System V Application Binary
Interface Intel 386 Architecture Processor Supplement.
Prentice Hall Trade, 1994.
[36] D. Wessels. Squid: The Definitive Guide. O'Reilly and
Associates, Jan. 2004.
[37] J. Wilander and M. Kamkar. A comparison of publicly
available tools for dynamic buffer overflow prevention.
In Proceedings of the 10th Network and Distributed
System Security Symposium, pages 149162, San
Diego, California, February 2003.
| aspect language;buffer overflows;prefetching;sequence pointcut;system applications;binary code;dynamic weaving;Arachne;web cache;operating system;network protocol;C applications |
33 | An Index System for Quality Synthesis Evaluation of BtoC Business Website | It is important for a successful electronic business to have a high-quality business website, so we need an accurate and effective index system to evaluate and analyze the quality of the business website. In this paper, an evaluation index system following the `grey box' principle is proposed which considers both the efficiency of the business website and the performance of the electronic business system. Using the R-Hierarchical clustering method to extract the typical indexes from sub-indexes is shown to be rational and effective. Finally, the evaluation method is briefly discussed. | INTRODUCTION
A business website is an online medium between buyers and sellers, and a high-quality website is crucial to a company for successful e-business. What makes a high-quality business website? In maintaining a website, what should we focus on so that its quality meets the users' needs? Clearly, using the click-through rate to assess popularity cannot objectively and accurately evaluate the quality of a business website. Instead, we need to rely on a scientific evaluation index system and methods.
At present, there are many methods available for business website comparison or ranking, such as Usage Ranking, Purchase Comparison, Expert Opinion and Synthesis Evaluation. Both official authorities and non-governmental organizations issue such power rankings. The former, such as CNNIC, which organized the domestic Top Ten Websites competition, aim to monitor and regulate the market. The latter, such as Consumerreports (www.consumerreports.org), BizRate (www.bizrate.com) and Forrester Research, mainly aim to guide web users' activity. These kinds of comparison or ranking are valuable for building the reputation and recognition of business websites among users; however, e-business enterprises cannot directly improve the quality of their websites based on the results of such assessments.
The main purpose of this paper is to develop an index system for quantitative evaluation of BtoC websites, one which does not emphasize the income of the website but focuses on evaluating its overall quality. We hope that applying this index system will give technique developers and maintainers some reference for designing, appraising and diagnosing their e-business systems in order to improve their quality, and will support managers in making decisions about the operation of the websites.
OVERVIEW OF PREVIOUS STUDIES
Compared with the fast growth of e-business websites around the world, there is still little research specifically on evaluation index systems for business websites. QEM (the website quality evaluation method), proposed by Olsina, Godoy et al. in 1999, can be considered one of the representative approaches. It evaluates the quality of websites based on main factors including functionality (global search, navigability, and content relevance), usability (site map, address directory), efficiency and reliability. In 2000, the American researcher Parasuraman presented the e-SERVQUAL model based on the conventional service quality evaluation model SERVQUAL. It contains factors such as efficiency, deal completeness, reliability, privacy protection, responsiveness, recompense and contact. In the same year, another American researcher, Hauler, introduced an e-QUAL model which includes the factors of content, accessibility, navigability, design and presentation, responsiveness, background, and personalization and customization. In 2004, F.J. Miranda Gonzalez and T.M. Banegil Palacios developed a universal evaluation index system, WIS (Web Assessment Index), that can be employed by different organizations to assess websites. It consists of four indexes: accessibility, navigability, speed and content.
[1]
However, a universal index system cannot measure a website exactly and absolutely, because of industry specifics, organizational characteristics and different usages. One representative piece of research is ZhongHai Li's paper on an ergonomics standard for online stores. It assesses business websites by testing whether the design of the website coincides with the shopping process of online consumers. This standard has five factors: search and browse, merchandise information, shopping cart, register and pay, and service and support. [4] Another index system, for small and medium business websites, covers the factors of general features, design, promotion, information and others. [5]
Here we list our major findings from the previous research:
2.1 Unreasonable Selection of the Index
Some studies consider not only the original design but also factors such as the promotion and income of the business website. Some evaluation systems contain correlated or contradictory indexes. For example, a system may reward download speed while at the same time requiring web designers not to slow the site down with excessive use of Flash and sound.
2.2 Unilateral Evaluation
Most of the research takes the users' view to evaluate the function and design of a website. It treats the business system as a `black box' and ignores the impact of system performance on website quality. However, considering system performance factors alone is also not a complete evaluation for improving the service quality of a website.
2.3 Lack of a Complete Set of Quality Synthesis Evaluation System
A complete set of tools to evaluate websites must include the following important elements: categories, factors, weights, ranking standards and an assessment model. So far, we have not seen any literature discussing a complete evaluation index system aimed at the quality of BtoC websites.
PRINCIPLE FOR THE QUALITY SYNTHESIS EVALUATION
First, the three fundamental principles we need to follow are to be comprehensive, scientific and feasible. We should evaluate all facets of the website from different dimensions and avoid omitting important factors. Moreover, the definition of each evaluation index should be accurate, objective and logical, so as to eliminate the impact of correlated indexes on the evaluation result. At the same time, we need to reduce the number of indexes or adopt simple ones whose data are easier to collect, and avoid the complicated calculations caused by excessive indexes.
The main purpose of improving business websites is to serve users better. Users are concerned only with the websites' external attributes, such as content, function, presentation and browsing speed. Therefore, evaluating only from their point of view cannot directly guide the development, maintenance and administration of the website. Just as treating a patient's symptoms does not cure the disease itself, technique developers or maintainers cannot radically improve the quality of their websites by correcting the system structure and web design according to such evaluation results. Only by adopting a `grey box' index system that considers both the efficiency of the business website and the performance of the e-business system can we establish a quality synthesis evaluation index system that benefits the management of BtoC websites.
QUALITY EVALUATION INDEXES FOR BUSINESS WEBSITE
The selection of index items lays the foundation for constructing the evaluation index system. After thoroughly analyzing the evaluation objectives based on the characteristics of business websites, we propose an initial index system that includes 5 categories and 28 index items in total, shown in Table 1.
Table 1 Quality evaluation indexes for business websites

Function Effectiveness:  1 Integrative Function, 2 Interactive Function, 3 Convenience, 4 Service Personalization, 5 Website Credibility, 6 Business Authorization
Business Information:    7 Accuracy, 8 Authoritativeness, 9 Variety in Type, 10 Inclusiveness, 11 Uniqueness, 12 Orderliness, 13 Timeliness, 14 Variety in Search Method, 15 Search Effectiveness, 16 Version Internationalization
Website Design:          17 User Interface Friendliness, 18 Development Standardization, 19 Website Uniqueness, 20 Columns Originality, 21 Website Structure Clarity, 22 Page Style Consistency, 23 Harmonization
System Usability:        24 System Stableness, 25 Compatibility, 26 System Security, 27 Self-adaptability
System Efficiency:       28 System Speediness
Website self-adaptability refers to the capability of the e-business system to intelligently provide personalized service and dynamically optimize system performance. System Efficiency refers to the ability of the system to respond quickly to the requests of large numbers of web users. It can be measured through the values of quantitative indexes such as response time, throughput or utilization rate.
OPTIMIZING THE EVALUATION INDEXES
Our initial evaluation system needs to be optimized before it can be applied in practice. First, the indexes are more or less correlated, which affects the objectivity of the evaluation. Second, there are too many indexes, which results in lower efficiency. Therefore, we extract and simplify the indexes using the R-Hierarchical clustering method. Generally, R denotes the coefficient of correlation between two items, and R-Hierarchical clustering is commonly applied to cluster indexes. The steps are described as follows.
5.1 Calculate Coefficient of Correlation and
Clustering
The method first treats every index as its own cluster, so we start with 28 clusters. Then, the coefficient of correlation between every two clusters is calculated by the minimum-distance method. Next, the two clusters with the maximal coefficient of correlation are merged into a new one. The same process is repeated until all the indexes are clustered into one.
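A minimal sketch of this merging loop is given below, written in C with single-linkage (minimum-distance) merging on a correlation matrix; the array names and the printed trace are our own illustration of the procedure, not the tool used in the study.

    #include <stdio.h>

    #define N 28                 /* initial number of indexes/clusters */
    #define K 10                 /* target number of clusters */

    static double r[N][N];       /* correlation coefficients between indexes */
    static int cluster_of[N];    /* current cluster label of every index */

    /* Similarity of two clusters under the minimum-distance rule: the
     * largest correlation between any pair of their members. */
    static double similarity(int a, int b)
    {
        double best = -1e9;      /* -1e9 means "no pair" (empty cluster) */
        for (int i = 0; i < N; i++)
            for (int j = 0; j < N; j++)
                if (cluster_of[i] == a && cluster_of[j] == b && r[i][j] > best)
                    best = r[i][j];
        return best;
    }

    void r_hierarchical_clustering(void)
    {
        for (int i = 0; i < N; i++)
            cluster_of[i] = i;

        for (int clusters = N; clusters > K; clusters--) {
            int ba = 0, bb = 1;
            double best = -1e9;
            for (int a = 0; a < N; a++)      /* find the most correlated pair */
                for (int b = a + 1; b < N; b++) {
                    double s = similarity(a, b);
                    if (s > best) { best = s; ba = a; bb = b; }
                }
            for (int i = 0; i < N; i++)      /* merge cluster bb into ba */
                if (cluster_of[i] == bb)
                    cluster_of[i] = ba;
            /* the merge correlations are what section 5.2 inspects for leaps */
            printf("merge %2d: correlation %.3f\n", N - clusters + 1, best);
        }
    }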
5.2 Analyze the Clustering Process and
Determine Clusters
We analyze the variation of the minimum coefficient of correlation during the clustering process to find the leap points. According to the number of leap points and domain knowledge, we can then determine how many clusters we need. The whole process is illustrated in Figure 1.
Figure 1 The process of R-Hierarchical clustering
Following the principles of simplification and feasibility, and considering the characteristics of BtoC websites, we cluster the 28 index items into 10 clusters. The precision rate is over 0.75.
5.3 Calculate Correlation Index and Extract the
Representative Indexes
First, for each index Xj we calculate its correlation index, defined as the average of the squared correlation coefficients R between that index and every other index in the same cluster:

    R_j^2 = ( Σ r_j^2 ) / ( m_i - 1 )

where m_i is the number of indexes in the cluster that index Xj belongs to, and r_j ranges over the correlation coefficients between Xj and the other indexes of that cluster.
Then, within each of the 10 clusters we select the index with the maximal correlation index, obtaining 10 indexes that serve as the most representative ones.
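The computation of the correlation index and the choice of a representative can be sketched as follows; the function and parameter names are illustrative, and the convention for singleton clusters is our own assumption.

    #define N 28

    /* Average squared correlation between index j and the other members of
     * its cluster: R_j^2 = (sum over k != j in the cluster of r[j][k]^2)
     * divided by (m - 1), where m is the cluster size. */
    static double correlation_index(const double r[N][N],
                                    const int cluster_of[N], int j)
    {
        double sum = 0.0;
        int m = 0;
        for (int k = 0; k < N; k++) {
            if (cluster_of[k] != cluster_of[j])
                continue;
            m++;
            if (k != j)
                sum += r[j][k] * r[j][k];
        }
        return m > 1 ? sum / (m - 1) : 1.0;   /* a singleton represents itself */
    }

    /* The representative of a cluster is the member with the largest
     * correlation index. */
    static int representative(const double r[N][N],
                              const int cluster_of[N], int cluster)
    {
        int best = -1;
        double best_val = -1.0;
        for (int j = 0; j < N; j++) {
            if (cluster_of[j] != cluster)
                continue;
            double v = correlation_index(r, cluster_of, j);
            if (v > best_val) { best_val = v; best = j; }
        }
        return best;               /* -1 if the cluster label is unused */
    }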
Finally, the weights of the indexes are derived by the expert grading method. The final indexes and their weights are shown in Table 2.
Table 2 The final indexes and their weights

Category (weight)               Index (weight)
Function Effectiveness (0.22)   1.1 Service Personalization (0.10), 1.2 Website Credibility (0.12)
Business Information (0.18)     2.1 Information Inclusiveness (0.10), 2.2 Version Internationalization (0.08)
Website Design (0.28)           3.1 Columns Originality (0.09), 3.2 Website Structure Clarity (0.10), 3.3 Harmonization (0.09)
System Usability (0.22)         4.1 System Stableness (0.10), 4.2 System Security (0.12)
System Efficiency (0.10)        5.1 System Speediness (0.10)
CONCLUSION
In this paper, we have proposed an index system for quality synthesis evaluation and diagnosis of BtoC websites following the `grey box' evaluation principle, and have scientifically determined and simplified the index items. Usually, factor analysis or principal component analysis is used to solve the problem of common factors and multiple indexes. However, these methods are only suitable for quantitative indexes, and they do not truly simplify the evaluation process: because each new index is a linear function of some of the original ones, computing the value of the new indexes still requires collecting all the values of the original ones.
In our index system, most of the indexes are descriptive, so we have finalized the indexes using the R-Hierarchical clustering method. It has reduced the number of evaluation indexes without losing the major information contained in the original indexes. Furthermore, it has effectively avoided the impact of common factors on the evaluation result. Only the system efficiency index can be measured through quantitative sub-indexes such as response time. Most of the descriptive indexes are subjective and fuzzy. In view of this, we should use the fuzzy comprehensive evaluation method to obtain more effective results.
In future work we intend to propose an evaluation model and evaluate some well-known domestic BtoC websites to verify whether this index system is scientific and feasible. Moreover, we will improve the index system, including the evaluation model, to make the whole set more practical.
REFERENCES
[1] F.J. Miranda Gonzalez, T.M. Banegil Palacios,
Quantitative evaluation of commercial web sites: an
empirical study of Spanish firms, International Journal of
Information Management, 24(2004)313-328
[2] Chang Liu, Kirk P. Amett, Exploring the factors associated
with Web site success in the context of electronic
commerce, Information & Management , 38 (2000)
23-33
[3] Evans, J. R., & King, V. E.. Business-to-business
marketing and the World Wide Web: Planning, managing
and assessing web sites. Industrial Marketing Management,
28(1999)343358
[4] Zhonghai Li, Jianqiao Liao, Hui Xiao; the analyze of work
efficiency on webshop design in China; human work
efficiency; 4(2002) 43-45
[5] Research on an index system for the evaluation of enterprise websites, http://www.365un.com/xmb/viewthread.php?tid=1998
| Business website;B2C websites;System performance;representitive indexes;fuzzy analysis;R-Hierarchical clustering;Evaluation system;quality synthesis evaluation;quality evaluation index;R-Hierarchical clustering method;index optimization;e-commerce;clustering;correlation index |
34 | An Integrated Environment to Visually Construct 3D Animations | In this paper, we present an expressive 3D animation environment that enables users to rapidly and visually prototype animated worlds with a fully 3D user-interface. A 3D device allows the specification of complex 3D motion, while virtual tools are visible mediators that live in the same 3D space as application objects and supply the interaction metaphors to control them. In our environment, there is no intrinsic difference between user interface and application objects. Multi-way constraints provide the necessary tight coupling among components that makes it possible to seamlessly compose animated and interactive behaviors. By recording the effects of manipulations, all the expressive power of the 3D user interface is exploited to define animations. Effective editing of recorded manipulations is made possible by compacting all continuous parameter evolutions with an incremental data-reduction algorithm, designed to preserve both geometry and timing. The automatic generation of editable representations of interactive performances overcomes one of the major limitations of current performance animation systems. Novel interactive solutions to animation problems are made possible by the tight integration of all system components. In particular, animations can be synchronized by using constrained manipulation during playback. The accompanying video-tape illustrates our approach with interactive sequences showing the visual construction of 3D animated worlds. All the demonstrations in the video were recorded live and were not edited. | INTRODUCTION
Modern 3D graphics systems allow a rapidly growing user
community to create and animate increasingly sophisticated
worlds. Despite their inherent three-dimensionality, these systems
are still largely controlled by 2D WIMP user-interfaces. The lack
of correlation between manipulation and effect and the high
cognitive distance from users to edited models are the major
drawbacks of this solution [13]. The inadequacy of user-interfaces
based on 2D input devices and mindsets becomes particularly
evident in the realm of interactive 3D animation. In this case, the
low-bandwidth communication between user-interface and
application and the restrictions in interactive 3D motion
specification capabilities make it extremely difficult to define
animations with straight-ahead actions. This inability to
interactively specify the animation timing is a major obstacle in all
cases where the spontaneity of the animated object's behavior is
important [21; 35; 4].
In this paper, we present an expressive 3D animation
environment that enables users to rapidly and visually prototype
animated worlds with a fully 3D user-interface. A 3D device
allows the specification of complex 3D motion, while virtual tools
supply the interaction metaphors to control application objects. In
our environment, there is no intrinsic difference between user interface
and application objects. Multi-way constraints provide
the necessary tight coupling among components that makes it
possible to compose animated and interactive behaviors. By
recording the effects of manipulations, all the expressive power of
the 3D user interface is exploited to define animations. Effective
editing of recorded manipulations is made possible by compacting
all continuous parameter evolutions with our data-reduction
algorithm, designed to preserve both geometry and timing. Novel
interactive solutions to animation problems are made possible by
the tight integration of all system components. In particular,
animations can be synchronized using constrained manipulation
during playback.
In the following sections, we present an overview of the
system, we make comparisons with related work, and we conclude
with a view of future directions. The accompanying video-tape
illustrates our approach with interactive sequences showing the
visual construction of 3D animated worlds. All demonstrations in
the video were recorded live and were not edited.
SYSTEM OVERVIEW
Our animation environment is built on top of VB2 [17; 18], a
graphics architecture based on objects and constraints. During
interaction, the user is the source of a flow of information
propagating from input device sensors to manipulated models.
VB2 applications are represented by a network of interrelated
objects, and the maintenance of relationships is delegated to a
constraint-based change propagation mechanism. Different
primitive elements represent the various aspects of the system's
state and behavior: active variables store the system's state,
domain-independent hierarchical constraints [9] maintain multi-way relations between active variables, daemons provide support for discrete simulation tasks, and indirect expressions allow constraints and daemons to dynamically locate their variables. Constraints are maintained using an efficient local propagation algorithm based on Skyblue [27; 17; 18]. The solver is domain-independent and can maintain a hierarchy of multi-way, multi-output dataflow constraints. The fact that constraint solving
consists in performing method selection on the basis of constraint
priorities and graph structure, without considering the variables'
values, allows an effective application of a lazy evaluation
strategy [17; 18]. The main drawback of such a local propagation
algorithm is the limitation to acyclic constraint graphs. However,
as noted by Sannella et al. [28], cyclic constraint networks are
seldom encountered in the construction of user interfaces, and
limiting the constraint solver to graphs without cycles gives
enough efficiency and flexibility to create highly responsive
complex interactive systems. In VB2 , the objects' internal
constraint networks are designed so as to reduce the possibility of
creating cyclic constraint graphs. Runtime introduction of a
constraint that would create a cyclic graph causes an exception
that can be handled to remove the offending constraint
1.
The state manager behavior and the constraint solving
techniques are detailed in [17; 18].
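To make the notion of a multi-way dataflow constraint concrete, the sketch below shows one possible representation: a set of alternative methods, each recomputing one output variable from the others, with a crude selection rule standing in for real priority-based planning. The types and the selection heuristic are illustrative only; they are not VB2's or Skyblue's data structures.

    typedef struct variable {
        double value;
        int    stay_strength;      /* how strongly it resists being changed */
    } variable;

    /* One method of a multi-way constraint: recomputes a single output
     * variable from the remaining ones. */
    typedef struct method {
        variable *output;
        void    (*execute)(variable *vars[]);
    } method;

    typedef struct constraint {
        variable *vars[3];
        method    methods[3];
        int       n_methods;
        int       priority;
    } constraint;

    /* Example: maintain c = a + b in whichever direction disturbs the
     * weakest variable (a crude stand-in for real method selection). */
    static void out_c(variable *v[]) { v[2]->value = v[0]->value + v[1]->value; }
    static void out_b(variable *v[]) { v[1]->value = v[2]->value - v[0]->value; }
    static void out_a(variable *v[]) { v[0]->value = v[2]->value - v[1]->value; }

    static constraint make_adder(variable *a, variable *b, variable *c)
    {
        constraint k = { { a, b, c },
                         { { c, out_c }, { b, out_b }, { a, out_a } },
                         3, /* priority */ 1 };
        return k;
    }

    static void satisfy(constraint *c)
    {
        int best = 0;
        for (int i = 1; i < c->n_methods; i++)
            if (c->methods[i].output->stay_strength <
                c->methods[best].output->stay_strength)
                best = i;
        c->methods[best].execute(c->vars);
    }

A real solver such as Skyblue additionally walks the constraint graph to choose a globally consistent set of methods; the point of the sketch is only that the same constraint can drive information in either direction.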
2.2 Interaction
The system's desktop configuration uses keyboard commands to
trigger mode changes and animation playback, a Spaceball for
continuous specification of spatial transformations, and a mouse
for picking. Both hands are thus used simultaneously to input
information. LCD shutter glasses provide binocular perception of
the synthetic world. Since our main research goal is to explore the potentialities of 3D interaction, we do not provide a two-dimensional graphical user interface. A 3D cursor, controlled by
the Spaceball , is used to select and manipulate objects of the
synthetic world.
Direct manipulation and virtual tools are the two techniques
used to input information. Both techniques involve using mediator
objects that transform the cursor's movements into modifications
of manipulated objects. Virtual tools are visible first class objects
that live in the same 3D space as application objects and offer the
interaction metaphor to control them. Their visual appearance is
determined by a modeling hierarchy, while their behavior is
controlled by an internal constraint network [18].
As in the real world, users configure their workspaces by
selecting tools, positioning and orienting them in space, and
binding them to application objects. At the moment of binding, the
tool decides whether to accept the connection by checking if the
application object contains all the needed information and by
verifying that the constraint graph obtained by connecting the tool
to the model can be handled by the underlying solver (i.e. it is
acyclic). The binding mechanism is defined in a declarative way
by using indirect constraints [18].
1
VB2's current constraint solver [17; 28] is unable to find acyclic
solutions of potentially cyclic constraint graphs. An algorithm that
removes this limitation is presented in [36].
Information control
Information display
MODEL
TOOL
v1
v2
c1
c2
bound
bound.v1
bound.v2
v1
v2
out_variable
in_variable
constraint
in_out_variable
direct reference
indirect reference
Instance
(a)
(b)
Figure 1a.
Design notation
Figure 1b.
Model and virtual tool
When bound, the tool changes its visual appearance to a shape that
provides information about its behavior and offers semantic
feedback. During manipulation, the tool's and the application
object's constraint networks remain continuously connected, so as
to ensure information propagation. Multiple tools can be active
simultaneously in the same 3D environment in order to control all
its aspects. The environment's consistency is continuously ensured
by the underlying constraint solver. The bi-directionality of the
relationships between user-interface and application objects makes
it possible to use virtual tools to interact with a dynamic
environment, opening the door to the integration of animation and
interaction techniques.
2.3 Animation
By recording the effects of manipulations, animations can be
sketched. In order to be able to edit the captured performance, a
compact representation of continuous parameter evolution must be
obtained. This representation must not only precisely approximate
the shape of the initial parameter curves but also their timing. The
data reduction algorithm must therefore treat the geometry and
time components simultaneously in order to avoid the introduction
of errors that would be difficult to control. We have developed an
algorithm that incrementally builds, from the input sequence, a
parametric B-spline preserving value and time of each input
sample within a given tolerance. It is an incremental version of the
Lyche and Mrken algorithm [22] that works in parallel with the
interactive specification by considering only a small portion of the
input curve at any time. Latency time and memory requirements
for handling each portion of the curve are constant. Data reduction
may therefore be performed concurrently with interactive
parameter input, and the responsiveness of the application can be
ensured when handling animations defined by any number of
samples. The algorithm is presented in detail in [2; 4]. This
performance-based approach complements key-framing by
providing the ability to create animations with straight-ahead
actions. It provides complete control over the animation shape and
timing, while key-framing offers control only at a limited number
of points.
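To make the incremental flavour of this processing concrete, the sketch below shows a greatly simplified reducer in Java. It is not the Lyche and Mørken knot-removal scheme used by the system: it merely drops timed samples while a straight segment from the last kept sample still reproduces the skipped ones within a value tolerance, so that only a small, bounded portion of the input is examined per sample. All class and method names are invented for illustration.

import java.util.ArrayList;
import java.util.List;

// A deliberately simplified, hypothetical stand-in for incremental data
// reduction: timed samples are dropped while a single segment from the last
// kept sample to the newest one still reproduces every skipped sample within
// a tolerance. The real system fits parametric B-splines; this sketch only
// illustrates the constant-latency, portion-by-portion style of processing.
public final class OnlineReducer {

    public record Sample(double t, double v) {}

    private final double tol;
    private final List<Sample> kept = new ArrayList<>();
    private final List<Sample> pending = new ArrayList<>();

    public OnlineReducer(double tol) { this.tol = tol; }

    public void add(Sample s) {
        if (kept.isEmpty()) { kept.add(s); return; }
        Sample last = kept.get(kept.size() - 1);
        // Does the segment last -> s still explain all pending samples?
        boolean ok = true;
        for (Sample p : pending) {
            double dt = s.t() - last.t();
            double u = dt == 0 ? 0 : (p.t() - last.t()) / dt;
            double predicted = last.v() + u * (s.v() - last.v());
            if (Math.abs(predicted - p.v()) > tol) { ok = false; break; }
        }
        if (ok) {
            pending.add(s);                                  // s becomes the provisional endpoint
        } else {
            kept.add(pending.remove(pending.size() - 1));    // commit the previous endpoint
            pending.clear();
            pending.add(s);                                  // start a new segment with s
        }
    }

    public List<Sample> result() {                           // kept samples plus the open endpoint
        List<Sample> out = new ArrayList<>(kept);
        if (!pending.isEmpty()) out.add(pending.get(pending.size() - 1));
        return out;
    }
}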
The mediation of virtual tools makes it possible to sketch the
evolution of non-geometric attributes, while constrained or free
motion can be specified with 3D devices. Since these devices offer
continuous control of spatial transformations, subtle
synchronizations between position and orientation components
can be directly specified. In our environment, straight-ahead
animations are defined by expressing the desire to record
parameter evolution during interaction. This is done simply by
pressing a different mouse button when starting an interaction
task. A controller object is connected to each animatable model
and is responsible for monitoring model state changes. While
recording, all changes are handled by the controller to feed the
animation tracks. Continuous tracks apply the data reduction
algorithm to the incoming information, while discrete tracks
simply store a change value event. During playback, information
propagates from the animation tracks through the controllers and
down to the models. All connections are realized by bi-directional
constraints. Since playback constraints are weaker than interaction
constraints, the user can take control over animated models during
playback.
Animations involving synchronizations with the
environment's evolution can thus be specified by interacting
during playback [5].
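The routing of monitored changes into tracks can be pictured with the hedged Java sketch below. The controller and track classes and their method names are hypothetical; the sketch only illustrates how a recording controller might forward continuous parameter samples and discrete change events to separate tracks, as described above.

import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.TreeMap;

// Hypothetical sketch of the recording path: while recording, every observed
// model change is routed either to a continuous track, which would normally
// feed the incremental data-reduction algorithm, or to a discrete track,
// which simply stores the change event keyed by time.
public final class AnimationController {

    public static final class ContinuousTrack {
        final List<double[]> samples = new ArrayList<>();   // (time, value) pairs
        void addSample(double time, double value) { samples.add(new double[] { time, value }); }
    }

    public static final class DiscreteTrack {
        final Map<Double, Object> events = new TreeMap<>();
        void addEvent(double time, Object value) { events.put(time, value); }
    }

    private final Map<String, ContinuousTrack> continuous = new HashMap<>();
    private final Map<String, DiscreteTrack> discrete = new HashMap<>();
    private boolean recording;

    public void startRecording() { recording = true; }
    public void stopRecording()  { recording = false; }

    // Called by the monitored model when a continuously varying parameter changes.
    public void parameterChanged(String name, double time, double value) {
        if (!recording) return;
        continuous.computeIfAbsent(name, n -> new ContinuousTrack()).addSample(time, value);
    }

    // Called by the monitored model when a discrete attribute changes.
    public void stateChanged(String name, double time, Object value) {
        if (!recording) return;
        discrete.computeIfAbsent(name, n -> new DiscreteTrack()).addEvent(time, value);
    }
}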
Figure 2. Interactive animation and playback
RELATED WORK
Constraint-based architectures have long been used for 2D
graphics systems (see [28] for a survey). In the 3D graphics world,
one-way constraints are commonly employed to maintain
dependencies between components [20; 34; 37; 38]. This type of
constraint cannot easily model mutual relations between objects,
thus hindering the tight coupling between user-interface and
application objects [28]. Our system uses instead multi-way local
propagation constraints, which offer support for two-way
communication between objects while remaining efficient enough
to ensure the responsiveness of the system [17; 18; 27]. TBAG
[14] also uses multi-way constraints maintained by Skyblue [27],
but its functional approach concentrates more on modeling time-varying
behaviors than on creating interactive systems. Much
effort has been spent in developing powerful numerical solvers for
computer graphics (e.g. [7; 15; 16]). This work is complementary
to ours, which focuses more on providing ways to interact with
constrained environments. Such advanced solvers could replace
local propagation in our system for the maintenance of numerical
relationships.
3.2 Three-dimensional User Interfaces
Much recent research has focused on obtaining rich interaction
with 3D environments by means of advanced devices and 3D
interaction metaphors [8; 10; 11; 13; 16; 19; 26; 30; 32]. 3D
widgets or manipulators, similar to our virtual tools, are presented
in [13; 32]. These works focused on providing support for 3D
widget construction, while we concentrate more on the integration
of multiple tools in a single dynamic environment. We are not
aware of any attempts to apply the results of 3D interaction
research to enhance animation capabilities.
3.3 Performance Animation
A number of authors have proposed using live performances to
drive computer animations (e.g. [1; 23; 33; 35]). We strive to
bring the expressiveness of these approaches to general purpose
animation systems running on graphics workstations. Instead of
relying on advanced motion capture devices, we exploit our fully
3D user-interface to control the animated environment at a higher
level of abstraction. The guiding approach proposed in [23] also
seeks to provide better control of synthetic objects by raising the
abstraction level of user interaction. That work concentrates on
modeling complex behaviors in a discrete simulation framework,
while we focus on providing intuitive user interfaces. A major
limitation of current performance animation systems is the
inability to build editable representations out of captured
performances [35].
3.4 Data Reduction
Data reduction or curve fitting techniques have been successfully
applied for the interactive specification of 2D or 3D curves or
surfaces (e.g. [12; 24; 25; 29]). These techniques cannot be easily
adapted to sketching animations of multi-dimensional parameters
because they all exhibit one or more of the following problems: (i)
restriction to 2D or 3D geometric constructions, (ii) lack of control
on parameterization errors, and (iii) need to consider the entire
input curve before reduction. An early attempt to use data reduction
for animation is described in [29]. In that system, path
geometry and path timing specifications were decoupled, thus
losing the advantages of performance approaches. Banks and Cohen
[6] proposed for their drafting tool an incremental version of the
Lyche and Mørken algorithm [22] that does not have the
aforementioned drawbacks and could be used in a performance
animation context. Their method shares with ours the idea of
processing successive portions of the input curve which are then
spliced together, but is unable to ensure constant latency times and
memory needs [4].
CONCLUSIONS AND FUTURE WORK
In this video-paper, we have presented an integrated environment
for the rapid and visual prototyping of 3D animated worlds. Using
our fully 3D user-interface, non-professional users can swiftly
create complex animations with pose-to-pose and straight-ahead
techniques. Thanks to automatic data-reduction, animations
created by interactive performances can then be effectively edited.
In our future work, we intend to develop new virtual tools and
visualizations that will improve our 3D user interface for discrete
and continuous track manipulation. To allow the system to adhere
to timing requirements, we are developing time-critical techniques
for controlling rendering complexity and constraint evaluation.
ACKNOWLEDGMENTS
The authors would like to thank Ronan Boulic for providing the
walking engine used in the interactive sequences, Sally Kleinfeldt
as well as Dean Allaman for helpful comments and suggestions,
Angelo Mangili for technical help, and Michele Müller for doing
the voice on the video.
This research was conducted by the authors while at the Swiss
Federal Institute of Technology in Lausanne.
REFERENCES
[1]
Baecker RM (1969) Picture-driven Animation. Proc. Spring
Joint Computer Conference 34: 273-288.
[2]
Balaguer JF (1993) Virtual Studio: Un système d'animation
en environnement virtuel. PhD Thesis, Swiss Federal
Institute of Technology in Lausanne.
[3]
Balaguer JF, Gobbetti E (1995) Animating Spaceland. To
appear in IEEE Computer Special Issue on Real-world
Virtual Environments 28(7).
[4]
Balaguer JF, Gobbetti E (1995) Sketching 3D Animations.
To appear in Proc. EUROGRAPHICS.
[5]
Balaguer JF, Gobbetti E (1995) Supporting Interactive
Animation using Multi-way Constraints. Submitted for
publication.
[6]
Banks M, Cohen E (1990) Real-time Spline Curves from
Interactively Sketched Data. Proc. SIGGRAPH Symposium
on Interactive 3D Graphics: 99-107
[7]
Barzel R, Barr A (1988) A Modeling System Based on
Dynamic Constraints. Proc. SIGGRAPH: 179-188.
[8]
Bier EA (1990) Snap-Dragging in Three Dimensions. Proc.
SIGGRAPH Symposium on Interactive 3D Graphics: 193-204.
[9]
Borning A, Freeman-Benson B, Wilson M (1992) Constraint
Hierarchies. Lisp and Symbolic Computation 5(3): 221-268.
[10] Butterworth J, Davidson A, Hench S, Olano TM (1992)
3DM: A Three Dimensional Modeler Using a Head-Mounted
Display. Proc. SIGGRAPH Symposium on Interactive 3D
Graphics: 135-138.
[11] Card SK, Robertson GG, Mackinlay JD (1991) The
Information Visualizer: An Information Workspace. Proc.
SIGCHI : 181-188.
[12] Chou JJ, Piegl LA (1992) Data Reduction Using Cubic
Rational Splines. IEEE Computer Graphics and Applications
12(3): 60-68.
[13] Conner DB, Snibbe SS, Herndon KP, Robbins DC, Zeleznik
RC, Van Dam A (1992) Three-Dimensional Widgets.
SIGGRAPH Symposium on Interactive 3D Graphics: 183-188.
[14] Elliott C, Schechter G, Yeung R, Abi-Ezzi S (1994) TBAG:
A High Level Framework for Interactive, Animated 3D
Graphics Applications. Proc. SIGGRAPH: 421-434.
[15] Gleicher M (1993) A Graphics Toolkit Based on Differential
Constraints. Proc. UIST : 109-120.
[16] Gleicher M, Witkin A (1992) Through-the-Lens Camera
Control. Proc. SIGGRAPH: 331-340.
[17] Gobbetti E (1993) Virtuality Builder II: Vers une
architecture pour l'interaction avec des mondes synthétiques.
PhD Thesis, Swiss Federal Institute of Technology in
Lausanne.
[18] Gobbetti E, Balaguer JF (1993) VB2: A Framework for
Interaction in Synthetic Worlds. Proc. UIST: 167-178.
[19] Herndon KP, van Dam A, Gleicher M (1994) Report:
Workshop on the Challenges of 3D Interaction, CHI
Bulletin, October.
[20] Kass M (1992) CONDOR: Constraint-based Dataflow. Proc.
SIGGRAPH: 321-330.
[21] Lasseter J (1987) Principles of Traditional Animation
Applied to 3D Computer Animation. Proc. SIGGRAPH: 35-44.
[22] Lyche T, Mørken K (1987) Knot Removal for Parametric B-spline
Curves and Surfaces. Computer Aided Geometric
Design 4: 217-230.
[23] McKenna M, Pieper S, Zeltzer D (1990) Control of a Virtual
Actor: The Roach. Proc. SIGGRAPH Symposium on
Interactive 3D Graphics: 165-174.
[24] Plass M, Stone M (1983) Curve Fitting with Piecewise
Parametric Cubics. Proc. SIGGRAPH: 229-239.
[25] Pudet T (1994) Real Time Fitting of Hand Sketched Pressure
Brushstrokes. Proc. EUROGRAPHICS: 205-220.
[26] Sachs E, Roberts A, Stoops D (1990) 3-Draw: A Tool for
Designing 3D Shapes. IEEE Computer Graphics and
Applications 11(6): 18-26.
[27] Sannella M (1994) Skyblue: A Multi-Way Local Propagation
Constraint Solver for User Interface Construction. Proc.
UIST : 137-146.
[28] Sannella M, Maloney J, Freeman-Benson B, Borning A
(1992) Multi-way versus One-way Constraints in User Interfaces.
Software Practice and Experience 23(5): 529-566.
[29] Schneider PJ (1988) Phoenix: An Interactive Curve Design
System Based on the Automatic Fitting of Hand-Sketched
Curves. Master's Thesis, University of Washington.
[30] Shaw C, Green M (1994) Two-Handed Polygonal Surface
Design. Proc. UIST : 212-215.
[31] Shelley KL, Greenberg DP (1982) Path Specification and
Path Coherence. Proc. SIGGRAPH: 157-166.
[32] Strauss PS, Carey R (1992) An Object-Oriented 3D Graphics
Toolkit. Proc. SIGGRAPH: 341-347.
[33] Tice S (1993) VActor Animation Creation System.
SIGGRAPH Tutorial 1.
[34] Upson C, Fulhauber T, Kamins D, Laidlaw D, Schlegel D,
Vroom J, Gurwitz R, van Dam A (1989) The Application
Visualization System: A Computational Environment for
Scientific Visualization. IEEE CG&A 9(4): 30-42.
[35] Walters G (1993) Performance Animation at PDI.
SIGGRAPH Tutorial 1.
[36] Vander Zanden B (1995) An Incremental Algorithm for
Satisfying Hierarchies of Multi-way, Dataflow Constraints.
Technical Report, University of Tennessee, Knoxville.
[37] Zeleznik RC, Conner DB, Wlocka MM, Aliaga DG, Wang
NT, Hubbard PM, Knepp B, Kaufman H, Hughes JF, van
Dam A (1991) An Object-Oriented Framework for the
Integration of Interactive Animation Techniques. Proc.
SIGGRAPH: 105-112.
[38] Zeltzer D, Pieper S, Sturman DJ (1989) An Integrated
Graphical Simulation Platform. Proc. Graphics Interface:
266-274.
398 | animation synchronization;computer graphics;Object-Oriented Graphics;3d animation environment;data reduction;visualization;multi-way constrained architecture;human interaction;Data Reduction;3D Animation;Local Propagation Constraints;recording 3d manipulation;Virtual Tools;3D Widgets;3d user interface;3D Interaction;dynamic model |
35 | An Intensional Approach to the Specification of Test Cases for Database Applications | When testing database applications, in addition to creating in-memory fixtures it is also necessary to create an initial database state that is appropriate for each test case. Current approaches either require exact database states to be specified in advance, or else generate a single initial state (under guidance from the user) that is intended to be suitable for execution of all test cases. The first method allows large test suites to be executed in batch, but requires considerable programmer effort to create the test cases (and to maintain them). The second method requires less programmer effort, but increases the likelihood that test cases will fail in non-fault situations, due to unexpected changes to the content of the database. In this paper, we propose a new approach in which the database states required for testing are specified intensionally, as constrained queries, that can be used to prepare the database for testing automatically . This technique overcomes the limitations of the other approaches, and does not appear to impose significant performance overheads. | INTRODUCTION
Modern information systems are typically organised as
collections of independent application programs that communicate
with one another by means of a central database.
The database records the state of the organisation that the
information system supports, while the application programs
implement the business processes that manipulate the state.
To take a simple but ubiquitous example, a database system
might record details of customers, products and sales,
while the application programs associated with it handle operations
such as new product purchases and update of the
product catalogue, as well as supporting decision making
by generating reports regarding the most profitable product
lines, names and addresses of loss-making customers, etc.
In order to test such application programs, it is necessary
to create test fixtures that simulate the presence of the rest
of the information system. Fixtures for traditional test cases
typically consist of in-memory objects and data structures
that provide the inputs to the program being tested. This
kind of fixture is also needed when testing database applications
(especially when performing unit testing); however,
since it is unrealistic (and often incorrect) to execute test
cases against an empty database, we need to create additional
fixture elements within the database itself.
Current practice in the software industry is to maintain
one or more test databases that can be used for testing individual
programs. These databases can be artificially generated
(e.g., using tools such as DBMonster
1
and DataFac-tory
2
) or they may be subsets of the live database, taken
as a snapshot at some recent point in time. Copies of the
live data sets have the advantage that they are more likely
to be representative of the patterns of data encountered in
practice, while artificial data sets have the advantage that
they can be made to embody specific characteristics (such
as particular data skew patterns or volumes), which may be
useful for load and stress testing.
Both approaches, however, suffer from several disadvantages
. The most significant problem occurs when none of
the available test databases are suitable starting points for a
particular test case. For example, suppose a particular test
case executes a program which purges inactive customers,
with the aim of verifying that the business rule forbidding
deletion of customers with negative balances is correctly enforced.
If none of the test databases contains any inactive
customers with negative balances, then the test case cannot
be executed successfully. For a one-off test run, testing
personnel can choose a database that is close to what is required
, and manually update it so that it is suitable for use
with the test case. But if a complete test suite is to be executed
(possibly including test cases which themselves make
modifications to the database state) then in the worst case
this manual intervention will be required in between every
test case execution. This is clearly undesirable if test suites
are large or time-consuming to execute, or if the test suite
is to be run in batch (as in the case of overnight regression
testing, for example).

¹ http://DBMonster.kernelpanic.pl
² http://www.quest.com/datafactory
Current research in testing for database systems proposes
two approaches to this problem. One of these is to include
within the test case description a full (extensional) specification
of the database state against which it is to be run (and
of the database state that should be produced if the test has
executed successfully) [13, 14]. This solution is exemplified
by DBUnit³, an extension of the JUnit testing framework⁴
that is designed for testing database applications written in
Java. Each DBUnit test case is accompanied by an XML
file describing the data set required for the test. Before each
test run, DBUnit clears the database state and inserts the
data described by the XML file.
This approach has the advantage of simplicity, but it places
a considerable burden on testing personnel, especially when
complex database states are required. It is also inefficient,
since the database must be continually destroyed and recreated
between tests, even when significant parts of the database
might have been reused by the succeeding tests. Moreover,
maintenance of a large suite of such tests is extremely challenging
, since any small change to the database schema may
require corresponding changes to many test cases.
The second approach that has been explored in the literature
is more efficient in that it requires the creation of only
one database state per test suite (rather than one per test
case). It is exemplified by the AGENDA database testing
toolkit [6, 7], which can automatically generate a database
state given information about the schema, some data generation
functions for individual attributes and some user-selected
heuristics describing the kind of database state required
. The AGENDA tool also generates test cases from a
simple analysis of the program being verified. The user must
then add preconditions to each test case that are checked
just before it is executed and that will prevent a case from
being executed against an inappropriate database state. This
approach successfully relieves the user of the need to specify
complete database states in full detail, but at a cost. The
user must accept that some of the test cases may not be
executed because the database state fails the precondition,
even when it would require only a small change to bring the
database into a suitable state for the test. Since only one
database state is created per test suite, this problem of failed
tests is likely to become more severe as the size of the test
suite grows. There is also a potential inefficiency involved
in generating test descriptions and inputs, and in creating
the additional log tables and constraints/triggers needed by
the AGENDA tool, for test cases that are not in fact going
to be executed.
Ideally, we would prefer to be able to combine the advantages
of both these approaches, to give a form of database
test case that is quick and natural to specify, and which
maximises the number of cases within the suite that can be
executed while minimising the number of full test databases
that need to be maintained.
Our thesis is that this can
be achieved by allowing testing personnel to describe the
database states involved in their test cases intensionally, in
the form of declarative conditions that the input database
must satisfy, and by providing a testing harness that can
automatically adjust the input database so that the test
conditions are satisfied [19].

³ http://www.dbunit.org
⁴ http://www.junit.org
In this paper, we present a language for specifying such
intensional database tests, and describe its semantics and
operational behaviour (Section 2). We present an algorithm
for automatically modifying database states so that test preconditions
are satisfied (Section 3), thus ensuring that all
test cases can be executed without requiring any human
intervention. We further describe how we have extended the
JUnit testing framework to allow intensional database tests
to be specified and executed in practice (Section 4). Finally,
we present the results of an evaluation of the performance
of the techniques (Section 5) and conclude (Section 6).
SPECIFYING INTENSIONAL TESTS
A conventional test case is typically modelled as a triple
< p, i, o >, which denotes a test that executes program p
with inputs (e.g., parameters) denoted by i. If no faults are
encountered during the test execution, the output that will
be produced is o. In the case of test cases for database applications
, we must add two further elements--the specification
of the database state against which p is to be executed,
and some statement of the database state that should result
from the execution of p if it is operating correctly according
to its specification.
For example, consider the example program mentioned
in Section 1 that prunes inactive customer details from the
database. For this test case, we require a database state that
contains at least one inactive customer. This could easily
be stated as a predicate logic condition over the database,
assuming the obvious mapping between stored relations and
predicates, e.g.:
∃ (custNo, lastOrderOn, a, b, c) :
    customer(custNo, a, b, c, lastOrderOn)
    ∧ lastOrderOn < today - 90
The program in question does not access any parts of the
database other than the customer table. Therefore, we do
not care what values the other tables contain and need not
mention them in the intensional specification of the test.
This approach works equally well for observing the results
of the test. For example, when testing the customer pruning
behaviour, we might require that no inactive customer with
a non-negative balance should exist in the database after
the test:
¬ (∃ (custNum, lastOrderDate, a, bal, c) :
    customer(custNum, a, bal, c, lastOrderDate)
    ∧ lastOrderDate < today - 90 ∧ bal > 0)
Effectively, the test case describes a set of valid (i.e., fault-free)
state transitions for the database, as a classic pre/post-condition
pair.
This first-order-logic style of database specification does
not work so well when we consider the testing problem in
more depth, however. The problem is that we need to do
more than test the input database for compliance with the
requirements of the test case; we also need to extract information
from it to be used to instantiate other elements
of the test case. For example, suppose we wish to test a
program that deletes details of individual customers. Such
programs typically require some input from the user, identifying
the specific customer record that is to be deleted (e.g.,
by supplying the relevant customer code as a parameter).
This could be achieved by requiring the tester to embed the
customer code into the test case elements, as literal values.
Alternatively, we could search for a suitable customer that
already exists in the database, using a standard database
query, and use the values from that in specifying the inputs
for the test case. This would minimise the amount of work
required to prepare the database for test execution (since we
would be using data already present in the database), and it
would also mean that test cases can be written very quickly,
since the user does not need to specify every last detail of
the data to be used.
Under this approach, the specification of the input database
state now has a dual role: it must state the condition that
determines whether the database state is suitable for execution
of the test case and it must also return bindings for the
free variables that appear in the remaining components of
the test case. For the latter purpose, we would prefer to use
a straightforward query language, while for the former we
require the ability to place conditions on the data. With a
simple extension of a standard query language such as SQL,
we can combine both these purposes in a single statement.
For example, the following statement:
ANY :cn GENERATED BY
SELECT custNo FROM customer
WHERE lastOrderDate < today() - 90
AND balance < 0
retrieves the customer code of some record that meets the
given conditions (an inactive customer with negative balance
) from the database, and binds it to the variable :cn.
It also places a cardinality constraint on the result of the
query, that at least one such binding must exist (implied by
the use of the keyword ANY).
The variable :cn can then be used to specify other elements
of the test case. The obvious usage in this example is
in specifying the inputs to the program being tested, but it
can also be used in describing the expected outputs of the
program. In this example test case, the correct behaviour
of the DeleteCustomer program is to reject the deletion
of :cn, since customers with a negative balance cannot be
purged from the database. We might therefore give the following
specification of the desired output database state:
AT LEAST 1 :cn2 GENERATED BY
SELECT custNo FROM customer
WHERE custNo = :cn
Of course, not all test cases are best specified in terms of
values retrieved from the database. For example, suppose
that we wish to write test cases for a program that adds new
customers to the database. The inputs to this program are
the details of the new customer, and the precondition for one
particular test case states that no customer should exist that
has the same customer code as that of the customer being
created. We cannot retrieve the customer details from the
database in this case, as they have not yet been stored in it.
Again, we could force the user to include the required values
as literals in the test case, but ideally we would like to give
more support to the process of test case generation. One
way to achieve this is to allow user-defined data generator
functions to be incorporated within queries as though they
were relations. For example, the following expression states
our requirements for this test case, while also binding the
variables needed for input to the program:
ANY :cn, :name, :addr, :bal GENERATED BY
SELECT gc.custno, gc.name, gc.addr, 0
FROM genCustomerDetails() AS gc
WHERE gc.custno NOT IN (
SELECT custno
FROM customer
WHERE balance > 0)
Here, the data generator function genCustomerDetails()
is used as if it were a normal relation, whereas in fact the
results it returns are computed on the fly. In fact, several
of the main commercial database management systems already
allow user-defined functions to be embedded in queries
in this way, so this does not require a further extension of
SQL. Figure 1 shows the minimal extensions that are needed
to support all the kinds of constrained query shown above
using the SQL99 standard [17].

<CONDITION>   ::= <TYPE> <BINDINGLIST> GENERATED BY <SELECT>
<TYPE>        ::= ANY | NO | AT LEAST <i> | AT MOST <i> |
                  EXACTLY <i> | ALL | FIRST
<i>           ::= {0-9}
<BINDINGLIST> ::= <BINDING> { ',' <BINDINGLIST> }
<BINDING>     ::= {A-Z | a-z}
<SELECT>      ::= ...

Figure 1: Simplified BNF Grammar for SQL Extensions
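As an illustration only, a testing harness might hold such constrained queries in a small in-memory structure like the hypothetical Java record below; the crude regular-expression parser handles just the simple forms used in the examples above and is not intended as a full implementation of the grammar of Figure 1.

import java.util.List;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Hypothetical in-memory form of a constrained query: a cardinality type,
// the variables it binds, and the SELECT that produces candidate bindings.
public record ConstrainedQuery(String type, int count, List<String> bindings, String select) {

    private static final Pattern FORM = Pattern.compile(
        "(ANY|NO|ALL|FIRST|AT LEAST \\d+|AT MOST \\d+|EXACTLY \\d+)\\s+(.*?)\\s+GENERATED BY\\s+(.*)",
        Pattern.DOTALL);

    public static ConstrainedQuery parse(String text) {
        Matcher m = FORM.matcher(text.trim());
        if (!m.matches()) throw new IllegalArgumentException("not a constrained query: " + text);
        String head = m.group(1);
        // Numbered forms carry an explicit bound; ANY and friends default to 1 here.
        int n = head.matches(".*\\d+") ? Integer.parseInt(head.replaceAll("\\D+", "")) : 1;
        List<String> vars = List.of(m.group(2).split("\\s*,\\s*"));
        return new ConstrainedQuery(head.replaceAll("\\s*\\d+", ""), n, vars, m.group(3).trim());
    }
}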
2.1 Test Case Semantics
Clearly, the semantics of these intensional database test
cases is more complex than for traditional extensional tests.
However, we can define their semantics formally in terms
of a mapping from intensional tests to sets of equivalent
extensional database test cases. We first present a formal
definition of the structure of our intensional test cases:
Definition 1. An intensional database test case is a quintuple
<p, i, DB_i, o, DB_o>, where:
- p is the program to be executed in the test,
- i is a tuple of n variables and literals that describes the
  inputs to be given to program p, where n is the number
  of parameters expected by p,
- DB_i is a set of constrained queries that together specify
  the initial database state,
- o is a tuple of m variables and literals that describes the
  expected outputs from the program p,
- DB_o is a set of constrained queries that together specify
  the conditions that must hold in the database state after
  execution of p if no fault has been encountered.
A constrained query has the form < Q, min, max , vars >,
where Q is a standard relational algebra query, min and
max describe the constraints on the cardinality of the query
result set, and vars is the list of variables bound by the
query result.
A database test case is well-formed for use with a particular
database schema Σ iff:
- for every variable v that occurs free in i, DB_i, o and
  DB_o, there exists a query in DB_i that provides a binding
  for v,
- for every query <q, n, m, vs> in DB_i ∪ DB_o, q is a
  well-formed query over Σ that returns k-tuples, where
  |vs| = k, and
- there are no circular variable dependencies amongst
  the queries in DB_i (see the sketch below).
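The third condition can be checked mechanically. The following Java sketch is one possible, purely hypothetical implementation: it treats a query as depending on every query that binds a variable mentioned in its SELECT text and then verifies that the dependency relation can be scheduled without cycles.

import java.util.ArrayList;
import java.util.HashMap;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

// Hypothetical acyclicity check for the queries in DB_i: query A depends on
// query B if A's SELECT text references a variable that B binds. The set is
// acceptable only if this relation is acyclic, tested by repeatedly scheduling
// queries whose dependencies are already satisfied (a topological sort).
public final class DependencyCheck {

    public record Query(List<String> binds, String selectText) {}

    public static boolean isAcyclic(List<Query> queries) {
        // Which query binds each variable.
        Map<String, Integer> binder = new HashMap<>();
        for (int i = 0; i < queries.size(); i++)
            for (String v : queries.get(i).binds()) binder.put(v, i);

        // deps.get(i) = indices of queries whose bindings query i reads.
        List<Set<Integer>> deps = new ArrayList<>();
        for (int i = 0; i < queries.size(); i++) {
            Set<Integer> d = new HashSet<>();
            for (Map.Entry<String, Integer> e : binder.entrySet())
                if (e.getValue() != i && queries.get(i).selectText().contains(e.getKey()))
                    d.add(e.getValue());
            deps.add(d);
        }

        // Schedule queries whose remaining dependencies are already satisfied.
        Set<Integer> done = new HashSet<>();
        boolean progress = true;
        while (progress) {
            progress = false;
            for (int i = 0; i < queries.size(); i++) {
                if (!done.contains(i) && done.containsAll(deps.get(i))) {
                    done.add(i);
                    progress = true;
                }
            }
        }
        return done.size() == queries.size();   // everything scheduled => no cycle
    }
}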
We can now define a semantics for the intensional database
test cases as follows. Every intensional test case is equivalent
to a set of extensional test cases. An extensional test case
defines a specific test run, in terms of actual inputs and
outputs, rather than expressions denoting sets of inputs and
outputs.
The set of all possible extensional test cases is
given by:
P × L^n × DB × L × DB
where P is the set of all programs, L is the set of all literals,
L^n is the set of all n-tuples formed from L, and DB is the set
of all database states (relative to all schemas)⁵.
The components of each extensional test are the program
to be tested, the input values, the initial database state,
the expected output and the expected final database state,
respectively.
An intensional test case is effectively a shorthand expression
for a set of extensional test cases that are all derived
from the same equivalence partition of the test case inputs.
An intensional database test <p, i, DB_i, o, DB_o>, where
DB_i = {<q_i, n_i, m_i, v_i>} and DB_o = {<q_o, n_o, m_o, v_o>},
is equivalent to the following set of extensional tests:

{ <p, i[v_i/v], db_i, o[v_i/v], db_o> |
    db_i ∈ DB ∧ (n_i ≤ |q_i(db_i)| ≤ m_i)
    ∧ v ∈ q_i(db_i)
    ∧ db_o ∈ DB ∧ (n_o ≤ |(q_o[v_i/v])(db_o)| ≤ m_o) }
We use the notation exp[v₁/v₂] to express the substitution of
the values in v₁ by the corresponding values in v₂ wherever
they occur in exp. Therefore, this expression denotes the set
of extensional tests where the input database satisfies the
constraints imposed by the initial constrained query, and
where the bindings from execution of that query (here expressed
as the tuple of variables v) are substituted into the
expressions defining the inputs, expected output and expected
final database state before they too are evaluated⁶.

⁵ For simplicity of presentation, we assume that all programs
require the same number of inputs (n). In practice, n can
be the largest number of inputs required by any program,
and the unused values can be filled with nulls.
The idea underlying this notion of an intensional test is
that when any of its corresponding extensional tests is executed,
the intensional test is itself deemed to have been
executed. Thus, the use of intensional tests allows much
greater freedom at test execution time, since we may choose
any of the possible extensional tests, depending on which is
closest to our starting environment. In the next section, we
will consider the practical ramifications of this approach to
testing, and describe how the semantics just described can
be implemented in practice.
DATABASE PREPARATION
The execution of an intensional database test case consists
of three distinct phases: 1) preparation of the environment
for test execution; 2) execution of the test with the
prepared inputs; and 3) capture and storage of the results,
for later analysis.
Since all the work of finding bindings
for the variables in the test case specification is done in the
preparation phase, the final two phases are straightforward
and differ little from standard testing procedures. When
program execution is complete, the constrained query that
determines whether the test has been successful or not is
evaluated against the database, and the output from the
program is checked against what is expected. In the case
of test failure, the details of the actual extensional test that
was executed are recorded, for diagnosis purposes.
The first phase, however, is more complex. If we were
content to execute only those test cases which happen to
be suitable for use with the initial database state, then the
preparation phase would simply be a matter of executing
the input constrained queries against the database and, if
they are all successful, using the bindings thus produced
to instantiate the remaining components of the test case.
However, thanks to the declarative nature of our test case
specifications, the testing framework can be pro-active in
cases where the given database is not suitable for use by
the test case, and can automatically generate a sequence of
updates that will cause the constrained queries to produce
the required number of bindings.
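The overall cycle can be summarised by the hedged sketch below; the Preparer, ProgramUnderTest and Condition interfaces are placeholders standing in for the real preparation and query machinery, not an actual API.

import java.util.Map;

// Hypothetical driver for one intensional test: (1) prepare the database and
// obtain bindings, (2) run the program under test with instantiated inputs,
// (3) evaluate the post-condition and compare outputs, recording failures.
public final class IntensionalTestRunner {

    public interface Preparer { Map<String, String> satisfy(String constrainedQuery); }
    public interface ProgramUnderTest { String run(Map<String, String> inputs); }
    public interface Condition { boolean holds(Map<String, String> bindings); }

    public static boolean execute(Preparer preparer,
                                  String preCondition,
                                  ProgramUnderTest program,
                                  Condition postCondition,
                                  String expectedOutput) {
        // Phase 1: adjust the database until the pre-condition holds, get bindings.
        Map<String, String> bindings = preparer.satisfy(preCondition);
        // Phase 2: run the program with inputs instantiated from the bindings.
        String output = program.run(bindings);
        // Phase 3: check the program output and the database post-condition.
        boolean ok = expectedOutput.equals(output) && postCondition.holds(bindings);
        if (!ok) System.err.println("Test failed with bindings " + bindings);
        return ok;
    }
}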
In fact, this problem is similar (though not identical) to
one that has been studied by the database and artificial intelligence
communities for many years. It is known variously
as the view update problem [9], the knowledge base update
problem [12], and the transaction repair problem [10]. Many
database systems have the capability to define views on top
of the basic database. A view is a kind of virtual relation.
To the user, it appears to be a normal relation, but it contains
no stored data. Instead, the contents of the view are
defined by an expression over other relations, and attempts
to retrieve data from the view are converted into queries
over these relations. To take a simple example for illustration
, we might create a view called Debtors which appears
to be a relation of the same name containing all customers
with a negative balance. Attempts to retrieve Debtors are
converted into a query against the customer table with an
added constraint on the balance.

⁶ For simplicity of presentation, we assume here that there
is only one query in each of DB_i and DB_o. In practice,
it may be necessary to include several queries, each producing
different bindings and imposing different cardinality
constraints. In this case, the constraints must be conjoined,
and the full set of bindings can be retrieved by performing
a natural join of all the queries, with join condition true.
If views are truly to act as normal relations then it should
be possible to update them as well as query them. But what
does it mean to update a virtual relation? In this case, the
view update must be converted into a sequence of updates
on the stored relations that will cause the desired change in
the contents of the view itself. This is a non-trivial problem
for realistic view languages, and becomes even more difficult
when we move into the context of knowledge bases, where
virtual relations can be defined using rules over other relations
, and when we add integrity constraints that must be
maintained by all updates [1, 2, 3, 4, 5, 8, 11].
Only in very narrow circumstances does a view update
have a single translation into real updates [15, 18]. Various
heuristics for selecting from amongst the possible translations
have been proposed (of which the most common is to
choose the update that results in the smallest change to the
existing data set [2]), but in real applications user input is
needed in order to identify the translation that corresponds
most closely to the real world state that the database should
reflect [10].
In the case of intensional database tests, we have a query
(the constrained query that describes our requirements for
the test) that does not produce the correct number of answers
when executed against the test database. We need to
find a sequence of updates to the base data that will cause
our query to produce the number of answers we need. However
, in this case, there is no requirement to find the set of
updates that matches the state of reality -- any sensible update
that satisfies the query conditions will be acceptable.
This simplifies the problem considerably, removing the need
for complex search procedures and for any user input.
3.1 The Preparation Algorithm
One of the advantages of using a query-based language
for test specification (as opposed to a predicate calculus-based
language) is that we can make use of a very common
and easy-to-analyse internal form for (relational) database
queries, called relational algebra. This form provides a small
number of operations on relations that can be combined to
form complex queries. For example, the three most basic
(and useful) relational algebra operators are:
- The projection operator, π_Atts R, which creates a relation
  from R by deleting all attributes not in Atts. For example,
  π_[Country] Customer produces a relation that contains just
  the countries that appear in the Customer relation.
- The selection operator, σ_c R, which creates a relation
  that contains all the rows from relation R that satisfy
  the condition c. For example, σ_bal<0 Customer returns
  a relation containing details of all customers with negative
  balances.
- The join operator, R ⋈_c S, which creates a relation
  containing rows from the cross product of R and S that
  satisfy the join condition c. The query Debtor ⋈_dNo=iNo
  Inactive returns details of all debtors who are also inactive.
Since the result of each relational algebra operator is itself
a relation, together they form a closed algebra. This means
that we can form arbitrarily complex queries by applying
operators to the results of other operators. For example, a
query which retrieves the customer number of all customers
with a negative balance would be written as:
π_[custNo] (σ_balance<0 Customer)
A common way to visualise such expressions is as a tree of
operators. The tree for the above query is shown in Figure 2.
Figure 2: Relational Algebra Tree for Negative Balance Query.
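Such trees are straightforward to represent as objects. The following Java sketch (illustrative names, not the paper's implementation) models the four node kinds used in this paper and builds the tree of Figure 2.

import java.util.List;

// Illustrative node types for relational algebra trees: a base relation plus
// projection, selection and join operators applied to sub-trees.
public sealed interface RaNode {

    record Relation(String name, String alias) implements RaNode {}
    record Project(List<String> attributes, RaNode input) implements RaNode {}
    record Select(String condition, RaNode input) implements RaNode {}
    record Join(String condition, RaNode left, RaNode right) implements RaNode {}

    // The negative-balance example from Figure 2:
    //   pi_[custNo] ( sigma_[balance < 0] Customer )
    static RaNode negativeBalanceExample() {
        return new Project(List.of("custNo"),
                 new Select("balance < 0",
                   new Relation("Customer", "c")));
    }
}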
Our algorithm for preparing a database for testing is based
around this notion of a relational algebra tree. We take the
cardinality constraints from the test specification, and push
them down through the nodes of the input database query
tree, collecting up additional conditions as we go. When we
reach a leaf node (i.e. a base relation), we make updates
to the database so that the pushed-down constraints are
satisfied for that relation.
At each stage, we collect up the different kinds of constraint
and push them further down into the tree. These
constraint types are:
Min and Max, the upper and lower bounds on the desired
cardinality of the result set.
SelC, the selection conditions on the relations that we
are interested in.
UAtts, the collection of attributes that are used in the
constrained query, and that must be populated in any
new data that we insert.
We also build up a collection of queries that describe the
data that has been prepared for testing so far, as we progress
through the tree. We call these queries "bindings" (Bgs),
since they give us values for the variables that occur within
the selection and join conditions. At each stage, the bindings
should contain one query for each leaf node that has so far
been prepared.
It is easiest to see how this works by considering a simple
example, such as that shown in Figure 2. Let us assume we
have a constrained query that requires at least one customer
with negative balance to exist, and that our database does
not currently contain any such customers. We begin at the
root node of the tree, with only the cardinality constraints
extracted from the test specification:
Min = 1, Max = null, SelC = true,
UAtts = ∅, Bgs = ∅
The top node is a projection operator. Projection does not
affect the cardinality of the result set, nor impose any conditions
, but it does tell us something about the attributes used
by the query. We therefore add the projection attributes to
UAtts and push the constraints down to the next node:
Min = 1, Max = null, SelC = true,
UAtts = {custNo}, Bgs = ∅
Next we must deal with the selection node. Selection nodes
reduce the cardinality of their input, so we need to push
down the selection conditions to ensure that any updates
we may make affect the correct tuples. We also need to add
any attributes appearing in the selection condition to UAtts:
Min = 1, Max = null, SelC = balance < 0,
UAtts = {custNo, balance}, Bgs = ∅
The final node is the leaf node, representing the Customer
relation. We construct a query from the conditions on that
relation and execute it, to find out how many answers are
currently in the database. In this case, there are none, so
we need to insert a new Customer record with at least
the custNo and balance attributes populated, and with
a negative balance. If there are any integrity constraints
on this relation, then we need to make sure they are also
satisfied by the new data.
We use the DBMonster data generator mentioned earlier
to create the new data. It allows generation functions to
be specified for attributes, and additional constraints to be
placed on them. It will also maintain primary key, foreign
key, non-null and domain constraints if configured appropriately
using the information present in the pushed-down
constraints.
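A leaf-node step of this kind could be coded along the lines of the hedged JDBC sketch below, in which a caller-supplied row generator stands in for DBMonster, the SQL strings are assumed to have been assembled from the pushed-down constraints, and the removal of surplus tuples is simplified to a parameterised LIMIT clause.

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.util.function.Consumer;

// Simplified, hypothetical leaf-node preparation: count the rows that already
// satisfy the pushed-down conditions, then either ask a row generator (standing
// in for DBMonster) to create the shortfall, or delete the surplus.
public final class LeafPreparer {

    public static void prepare(Connection conn,
                               String countSql,    // e.g. SELECT COUNT(*) FROM customer WHERE balance < 0
                               String deleteSql,   // e.g. DELETE FROM customer WHERE balance < 0 LIMIT ?
                               int min, Integer max,
                               Consumer<Integer> generateRows) throws SQLException {
        int existing = 0;
        try (PreparedStatement ps = conn.prepareStatement(countSql);
             ResultSet rs = ps.executeQuery()) {
            if (rs.next()) existing = rs.getInt(1);
        }
        if (existing < min) {
            generateRows.accept(min - existing);           // create the missing tuples
        } else if (max != null && existing > max) {
            try (PreparedStatement ps = conn.prepareStatement(deleteSql)) {
                ps.setInt(1, existing - max);              // trim the surplus
                ps.executeUpdate();
            }
        }                                                  // otherwise: nothing to do
    }
}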
Of course, this is a very simple example. In general, we
can expect to have to deal with more complicated queries
involving several joins, such as that shown in Figure 3. This
relational algebra tree is equivalent to the following constrained
query:
ANY :orderNo, :productNo GENERATED BY
SELECT o.orderno, p.productno
FROM Order o, Orderdetail d, Product p
WHERE o.orderno = d.orderno AND
d.productno = p.productno AND
p.price > 50
which requires that at least one order must exist that involves
the purchase of at least one product that costs more
than 50.

Figure 3: Relational Algebra Tree Showing Multiple Joins

Joins complicate the process of preparing the
database, because they introduce dependencies between the
updates that take place at different leaf nodes. For example,
imagine that we have processed the tree shown in Figure 3 as
far as the leaf node representing the OrderDetail relation.
Join operators further constrain the selection condition (by
conjoining in their join condition), but add no other constraints
. So, by the time we reach this leaf node, SelC will
have been set to:
o.orderno = d.orderno ∧ d.productno = p.productno
We need to find out whether a suitable OrderDetail record
exists within the database. However, in order to do this,
we need to know something about what preparation actions
were performed when the Product leaf node was processed.
Maybe there were already plenty of 50-plus products in
the catalogue, or maybe there were none and one had to
be created. How is this information passed through to the
OrderDetail
node so that the correct tuple can be identified
or created?
In the current version of our algorithm, we have chosen
to use the database itself to communicate these values. If
there are many suitable Product records, then we can find
one by querying the database directly once again. If a new
product had to be created, then it will now be present in
the database, so we can still retrieve it by querying. The
information needed to construct these queries is present in
the selection conditions that have been considered during
the processing of the relational algebra tree up to this point.
For example, in order to search for an OrderDetail tuple
that is connected to a suitable Product, we need to issue
the following query:
SELECT d.* FROM OrderDetail d, Product p
WHERE d.productno = p.productno AND
p.price > 50
This query cannot be constructed from only the constraints
pushed-down from the parent nodes of the leaf node; instead,
we need to collect up the constraints imposed by all nodes
visited before the current node, so that they are available for
query formation. This is done using the Bgs data structure
mentioned earlier.
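Assembling such a binding query is largely a matter of string construction, as in the hypothetical sketch below: the FROM list contains the leaf relation plus the relations already prepared, and the WHERE clause conjoins the conditions collected so far that mention only those relations.

import java.util.ArrayList;
import java.util.List;

// Hypothetical assembly of the binding query used at a leaf node. For the
// OrderDetail example this reproduces
//   SELECT d.* FROM OrderDetail d, Product p
//   WHERE d.productno = p.productno AND p.price > 50
public final class BindingQueryBuilder {

    public static String bindingQuery(String leafRelation, String leafAlias,
                                      List<String> preparedRelations,   // e.g. "Product p", from Bgs
                                      List<String> conditions) {        // applicable SelC terms
        List<String> from = new ArrayList<>();
        from.add(leafRelation + " " + leafAlias);
        from.addAll(preparedRelations);
        String where = conditions.isEmpty() ? "1=1" : String.join(" AND ", conditions);
        return "SELECT " + leafAlias + ".* FROM " + String.join(", ", from)
             + " WHERE " + where;
    }
}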
Figure 4 presents the complete algorithm, showing the behaviour
required for each different type of operator. The algorithm
is presented as a side-effecting function which takes
the constrained query that is to be satisfied by the database,
and a set of initial conditions that state the required cardinality
bounds and initialise SelC to true, UAtts to and Bgs
to . The function returns a set of bindings, but these are
discarded. The main task of the algorithm is carried out
by the side-effecting updates that occur when leaf nodes are
processed.
Projection operator
    prepare(π_Atts Q, Min, Max, UAtts, SelC, Bgs)
        = prepare(Q, Min, Max, UAtts ∪ Atts, SelC, Bgs)

Selection operator
    prepare(σ_c Q, Min, Max, UAtts, SelC, Bgs)
        = prepare(Q, Min, Max, UAtts, SelC ∧ c, Bgs)

Join operator
    prepare(Q1 ⋈_jc Q2, Min, Max, UAtts, SelC, Bgs)
        = prepare(Q2, Min, Max, UAtts, SelC ∧ jc,
                  prepare(Q1, Min, Max, UAtts, SelC, Bgs))

Relation (leaf node)
    prepare(R as v, Min, Max, UAtts, SelC, Bgs)
        Q = bindingQuery(v, SelC, Bgs)
        Execute Q to produce result set RS
        if |RS| < Min then
            Invoke DBMonster to create (Min - |RS|) more
            instances of R that satisfy the conditions in Q
        else if |RS| > Max then
            Delete the first (|RS| - Max) tuples in RS
        else
            No preparation updates needed
        return (Bgs ∪ binding(v, Q))

Figure 4: The Database Preparation Algorithm

DOT-UNIT TESTING FRAMEWORK
The intensional database test language and accompanying
preparation algorithm have been implemented within a testing
tool, called DOT-Unit. This tool is part of a larger Data-Oriented
Testing⁷ framework that is under development at the University
of Manchester [20]. DOT-Unit has been implemented as an
extension to the JUnit testing framework for the unit testing
of Java applications [16].

⁷ http://www.cs.man.ac.uk/willmord/dot/

We have subclassed
the standard JUnit TestCase class, to create a dedicated
DatabaseTestCase class for specifying and managing
intensional database tests. DatabaseTestCase provides
facilities for specifying pre-conditions on database state,
generating and manipulating the bindings that are produced
by such pre-conditions, and evaluating post-conditions on
the database state after the test has been completed. The
standard JUnit methods for determining the results of test
execution on the in-memory fixture can also be used.
Figure 5 shows an example DatabaseTestCase that includes
two individual tests. The first verifies that when a
customer with a non-negative balance is deleted, all customers
with that customer number really do disappear from
the database. The second uses a data generation function to
propose attribute values for a new customer record (including
a unique customer number), and checks that after the
program has executed only one customer with the generated
customer number exists.
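The listing below is a minimal sketch, not the DOT-Unit source, of how such a base class might expose these facilities on top of JUnit 3's TestCase; the ConditionEvaluator that actually prepares the database and evaluates constrained queries is left abstract.

import java.util.HashMap;
import java.util.Map;
import junit.framework.TestCase;

// Hypothetical skeleton of a DatabaseTestCase base class: preCondition()
// prepares the database and stores the resulting bindings, binding() looks a
// variable up, and postCondition() re-evaluates a constrained query and fails
// the test if its cardinality bounds are violated.
public abstract class DatabaseTestCase extends TestCase {

    public interface ConditionEvaluator {
        Map<String, String> prepareAndBind(String constrainedQuery);   // may update the DB
        boolean holds(String constrainedQuery, Map<String, String> bindings);
    }

    private final Map<String, String> bindings = new HashMap<>();

    protected abstract ConditionEvaluator evaluator();

    protected void preCondition(String constrainedQuery) {
        bindings.putAll(evaluator().prepareAndBind(constrainedQuery));
    }

    protected String binding(String variable) {
        return bindings.get(variable);
    }

    protected void postCondition(String constrainedQuery) {
        assertTrue("post-condition failed: " + constrainedQuery,
                   evaluator().holds(constrainedQuery, bindings));
    }
}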
We use a prefixed colon to indicate variables that are
shared amongst the test components -- a notation that will
be familiar to many database programmers, since it is commonly
used in various forms of embedded SQL. The shared
variables acquire their values when the test harness evaluates
the precondition (and performs any necessary database
preparation steps). These values can then be accessed using
the binding method, and can be used in arbitrarily
complex assert conditions, as well as in instantiating the
post-condition query.
One of the main advantages of using the JUnit framework
as the basis for the implementation of DOT-Unit is that it
allows us to integrate our tool seamlessly into existing development
environments, such as Eclipse⁸. Thus, DOT-Unit
tests are executed in exactly the same way as a standard JUnit
test case, and the results are displayed using the same
interface components. This allows testing of database and
non-database components to be interleaved in a convenient
and natural manner.
⁸ http://www.eclipse.org
EVALUATION
The practicality of this intensional test case approach depends
largely on the performance overhead imposed by the
database preparation algorithm. If the time required to execute
each individual test case is significantly higher using
our approach than with DBUnit, say, then fewer tests will
be able to be executed in the time available and the benefits
of faster test development and fewer spurious test failures
will be negated.
To gain a handle on the degree of performance overhead
to be expected from DOT-Unit, we made use of an existing
extensional DB test suite that we created for earlier
work [20]. This suite was designed for mp3cd browser⁹, an
open-source Java/JDBC program that stores information
about mp3 files in a MySQL 5.0 database¹⁰. The schema
of the database consists of 6 relations with 22 attributes, 7
primary key constraints and 6 foreign key constraints. We
created an equivalent intensional test suite, consisting of 20
test cases, from the extensional suite by converting each test
case into DOT-Unit pre- and post-conditions. We also re-placed
each hard-coded test parameter in the original tests
into constrained query bindings.
We wanted to investigate two specific aspects of the performance
of DOT-Unit. First, we wanted to compare its
performance with that of DBUnit over the equivalent test
cases as the database size grows. Second, we wanted to gain
some idea of what aspects of DB preparation and testing
were dominating the performance of DOT-Unit. The results
of the experiments we performed are presented below.
All experiments were run on a Pentium-M 2.0GHz machine,
with 1Gb RAM, running Ubuntu Linux.
5.1 Comparison with DBUnit
At first sight, the extensional approach, as exemplified
by DBUnit, would seem to be the more efficient method
of the two, as the testing harness does not need to spend
any time figuring out what updates need to be made prior
to each test--it only needs to execute them.
⁹ http://mp3cdbrowser.sourceforge.net/mp3cd/
¹⁰ http://www.mysql.com

public class ProgramTest extends DatabaseTestCase {

    public void testDeleteCustomer() {
        preCondition("ANY :cn GENERATED BY SELECT custNo FROM customer WHERE balance > 0;");
        Program p = new Program();
        p.deleteCustomer(binding(":cn"));
        postCondition("NO :cn2 GENERATED BY SELECT custno FROM customer WHERE custNo = :cn;");
    }

    public void testNewCustomer() {
        preCondition("ANY :cn, :name, :addr GENERATED BY SELECT gc.custNo, gc.name, gc.addr FROM "
            + "genCustomerDetails() AS gc WHERE gc.custNo NOT IN (SELECT custNo FROM customer);");
        Program p = new Program();
        boolean b = p.newCustomer(binding(":cn"), binding(":name"), binding(":addr"));
        assertTrue(b);
        postCondition("EXACTLY 1 :cn, :name, :addr GENERATED BY SELECT custno, name, addr "
            + "FROM customer;");
    }
}

Figure 5: Example DOT-Unit Test Case

This does
not happen by accident, but because a human programmer
has spent time earlier, deciding exactly what the database
should look like for each test case. However, when writing
DBUnit tests, it is common to try to reuse database descriptions
for multiple test cases where possible, to reduce
the amount of programming and maintenance time. In this
case, some redundant updates will be made before each test
case - updates that our extensional approach will not bother
to make. It is also the case that DBUnit makes its updates
blindly, whether they are needed or not, whereas the intensional
approach will be able to reuse much of the existing
database state for each new test case.
Given this, it seems likely that the performance of DBUnit
will be better when the database state required for each
test case is relatively small, but that the situation will be
reversed when the database state grows much larger.
In
order to gauge the point at which this change occurs, we
ran our two test suites (extensional and intensional) with
databases of varying sizes, and measured the execution time
taken to execute the whole test suite.
In each case, we generated initial database states of varying
sizes at random - either populating the database directly
(for the intensional test cases) or generating XML descriptions
of the required state (for the extensional test cases).
The results are shown in Figure 6.
Figure 6: Comparison of Approaches as DB Size Increases
To our surprise, although the performance of DOT-Unit was
initially worse than that of DBUnit, it overtook its competitor
at a comparatively small database size of around 20
tuples per relation. Obviously, this experiment is a little
unfair to DBUnit, since programmers are unlikely to create
database descriptions consisting of 1000s of tuples per relation.
However, tests of this scale will be needed at some
point in the development cycle, in order to verify the behaviour
of the system on more realistic data sets.
In order to assess the behaviour of DOT-Unit more precisely,
consider the graph in Figure 7, which shows the results
at small database sizes in more detail. It can be observed
that the performance of DOT-Unit first improves and
then begins to degrade again at a database size of around
50 tuples per relation.
Figure 7: Detailed Comparison of Approaches
One possible explanation for this initial improvement in performance
is that, as the database size rises, so does the
probability that the data needed for the test case is already
present in the database. For the very small states,
a lot of preparation work is required to create the needed
data, whereas less work is needed for a more fully populated
database. As the database size increases further, however,
the costs of making the queries needed to test the preconditions
and formulate the preparation updates rises, pushing
up the time required for the entire preparation step. This
behaviour may be a peculiarity of the particular test suite
used, of course, and further, more extensive studies will be
required in order to completely characterise the performance
of the DOT-Unit test harness.
From these initial results, however, DOT-Unit appears to
scale well relative to database size, and the execution times
are of the same order of magnitude as those resulting from
DBUnit. This suggests that the intensional approach may
provide a good compromise between saving expensive programmer
time in developing new test cases and expenditure
of cheaper processing time in executing the test cases.
5.2 Effect of Constraint Complexity
A further concern was the effect of increasing constraint
complexity on the performance of DOT-Unit test cases. How
much additional overhead is added for conditions involving
a higher number of selection conditions and (most importantly)
joins? In order to assess this, we grouped the test
cases into three groups, according to their complexity:
A: queries with one or more selections and no joins,
B: queries with one or more selections and a join between
two relations,
C: queries with one or more selections and joins between
three relations.
This gave a test suite with 5 test cases in each of these
categories, which we executed against a randomly generated
database state with 500 tuples per relation that does not
satisfy any of the test case pre-conditions. Figure 8 shows
the results obtained for the three complexity categories. We
measured the average time taken to execute the test cases
in each category, including a breakdown of where the time
is spent in each case:
Test: the time required to execute the procedural aspects
of the test case;
Query: the time required to execute the query aspect
of the test case condition;
Prepare: the time required to execute the preparation
aspect of the test case condition.
While the overall time required to execute the test cases rises
as the complexity rises (unsurprisingly), the relative proportions
of time spent in the various phases remains roughly the
same. The preparation phase seems to account for slightly
more than half of the time in each case, indicating that significant
improvements could be achieved with a less-naive
preparation algorithm.

Figure 8: The Effect of Changing Constraint Complexity
CONCLUSIONS
We have presented a new approach to the specification
of test cases for database systems that attempts to reduce
the amount of manual intervention required in between test
case runs while also minimising the number of spurious test
failures due to inappropriate input database states. The approach
has the further advantage that it sits naturally on top
of test data sets taken from live databases, and this allows
testing to be carried out using realistic data sets without requiring
significant programmer effort to tailor the data set to
the test cases. In effect, the intensional approach we have
described allows software developers to trade programmer
time for test execution time.
Our experience has indicated that intensional test cases
are quick and natural to write for anyone who is familiar
with SQL and database programming, although a study
with an independent testing team would be necessary before
we can make any strong claims in this regard. However
, compared with what is involved in writing pure JDBC
database test cases and DBUnit test cases, we found that
the self-contained nature of the intensional test cases was a
definite advantage. Writing DBUnit test cases requires the
programmer to continually check that the test case is compatible
with the database description. Moreover, since it is
common to try to reuse database descriptions for multiple
test cases by combining their requirements into one database
state, it becomes very easy to break one test case by changing
the database description in order to ready it for another.
These problems do not arise with intensional testing, since
all the information about the test case is present in a single
file (the Java class file).
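As a rough illustration of this self-contained style, a hypothetical intensional test case might look like the sketch below. The harness API shown here (IntensionalTestCase, precondition(), postcondition()) and the schema and service names are invented for illustration only and do not reproduce the actual DOT-Unit interface; the embedded queries correspond to a category B condition (selections plus one join).

// Hypothetical sketch only: the base class and all helper names are assumptions.
public class RegisterOrderTest extends IntensionalTestCase {

    // Intensional precondition: the harness evaluates this query and, when it
    // returns no rows, generates the preparation updates needed to make it hold.
    public String precondition() {
        return "SELECT c.id FROM customer c JOIN account a ON a.owner = c.id "
             + "WHERE c.status = 'ACTIVE' AND a.balance > 0";
    }

    // Procedural part of the test case: exercise the program under test.
    public void runTest() {
        new OrderService().registerOrder("C-1", "ITEM-42", 1);
    }

    // Intensional postcondition: the condition the resulting database state
    // must satisfy for the test to pass.
    public String postcondition() {
        return "SELECT o.id FROM orders o JOIN customer c ON o.customer = c.id "
             + "WHERE c.id = 'C-1' AND o.item = 'ITEM-42'";
    }
}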
We designed this first version of the preparation algorithm
for simplicity and correctness rather than efficiency, and as
such it performs rather naively in many cases. We are currently
exploring options for improving the algorithm, including
more intelligent selection of the order in which the relational
algebra tree is traversed, alternating between passing
query bindings and passing literal value bindings as is most
efficient, and making use of modifications to existing tuples
as well as simply adding and deleting tuples (both of which
are comparatively expensive operations). The complexity of
the conditions we can handle is at present limited by the
capabilities of DBMonster, and can be expanded by development
of a custom data generation facility. We also need
to expand the range of queries that can be handled, beyond
simple select-project-join queries.
For example, standard SQL also allows aggregation and ordering within queries,
both of which offer challenges in terms of automatic preparation.
A further problem with our current algorithm is that it
may sometimes fail to find a solution to the database preparation
problem, even though one exists. This is due to the
fact that updates are made at leaf nodes before the full set of
constraints on those nodes has been encountered. It should
110
be possible to address the problem with more sophisticated
querying techniques (this is an example of a fairly standard
constrained search problem, after all), although this will add
to the performance overhead. A thorough study of the trade-offs
between spurious failures and more intelligent searching
will need to be carried out before any concrete recommendations
can be made.
Finally, we note that where it is important to test large
numbers of frame constraints (i.e. aspects of the original
database state that are not affected by the execution of the
program under test), it may be easier to express the test case
using DBUnit, rather than cluttering up the intensional test
with many such constraints.
Our work presents a number of possible avenues for future
work beyond the improvements mentioned above, of which
the most urgent is the question of ordering of test cases
within suites. This ordering can be in terms of reducing the
cost of the modifications to database state or to maximise
fault coverage. There is also the question of whether the
modifications to database state should always persist between
test cases or should, under certain conditions, be discarded. For
example, a test case may specify that a relation be empty,
and to satisfy this condition its content is discarded. However,
this relation may be required by later test cases, and so
by discarding its contents we increase the divide between the
test state and the real world. This could be accomplished
either by embedding the modifications inside a transaction
which can then be aborted or by using a hypothetical
database engine.
ACKNOWLEDGMENTS
We thank Leonardo Mariani and the anonymous reviewers
for comments on earlier drafts of this paper. David Willmor
is supported by a research studentship from the UK Engineering
and Physical Sciences Research Council.
| database testing;Efficient Testing;software testing;Seamless Integration;Query Based Language;Improvement for the Intensional Test Cases;DOT-UNIT;databases;Lesser Programmer Effort for Test Cases;Intensional Test Cases;Testing Framework;Testing for Database Systems;Performance Testing |
36 | Analysis of Soft Handover Measurements in 3G Network | A neural network based clustering method for the analysis of soft handovers in 3G network is introduced. The method is highly visual and it could be utilized in explorative analysis of mobile networks. In this paper, the method is used to find groups of similar mobile cell pairs in the sense of handover measurements. The groups or clusters found by the method are characterized by the rate of successful handovers as well as the causes of failing handover attempts. The most interesting clusters are those which represent certain type of problems in handover attempts. By comparing variable histograms of a selected cluster to histograms of the whole data set an application domain expert may find some explanations on problems. Two clusters are investigated further and causes of failing handover attempts are discussed. | INTRODUCTION
Mobility management is a great challenge in current and
future radio access networks. In third generation (3G) networks
user experienced quality of service (QoS) under a
move of mobile station (MS) from one mobile cell to another
cell has been improved by implementing soft handover
(SHO). Soft handover makes it possible to have connections
on several base stations (BS) simultaneously.
In this paper, a set of measurements which can be used for
soft handover decision making are analyzed and compared
with other measurements in which statistics of the successfulness
of handover attempts have been collected. We do not
know the exact parameters of the SHO algorithm used. SHOs
are investigated only on the basis of the data set and some general
knowledge of 3G systems. Mobile cell pairs with handovers
(HO) are divided in groups using clustering algorithm. Cell
pairs in which SHOs are similar with each other fall in same
group. Different types of SHO failures are analyzed using
clustering information and distributions of measurements in
each cluster.
In Section 2 the soft handover concept, the measurements
and used neural network algorithm are shortly introduced.
Analysis methods which have been used are described in
Section 3. Preliminary results are shown and discussed in
Section 4. Finally, some conclusions are drawn in the last
section.
BACKGROUND
In this section, the basics of soft handover in 3G network
is explained and the available data set is introduced. Neural
network algorithm used in data clustering is also presented.
2.1 Soft handover
Soft handover is a state of MS being connected to several
BSs simultaneously. In GSM networks, a fixed threshold for
handover from one cell to another is used. In 3G networks,
each MS is connected to a network via a set of BSs called
active set. Members of active set are updated on basis of
measurements made by MS. The advantage of having connections
on several BSs simultaneously is realized when the MS
is moving towards another BS: the MS should have a connection
to at least one BS all the time. In the GSM system, the
older connection has to be terminated before the new one
can be setup. The connection setup phases are the most
vulnerable steps in a call. The connection between MS and
BS is setup in a beginning of a call or later when handover
occurs. If the setup is not successful, it is useful to have an
existing connection to another BS or otherwise the call will
be abnormally terminated.
Handover can occur due to signal quality reasons or when
the traffic capacity in a cell has reached its maximum or is
approaching it. In the latter case, traffic load in the network
can be distributed more uniformly by handing over some
users from the most crowded cells. The above method is
called cell breathing. Use of cell breathing without giving
the information to the analyzer increases the complexity of
the analysis and can considerably confuse the analysis process.
For a user soft handover means power saving (in uplink)
and less abnormally terminated calls. For an operator lower
MS transmitting powers mean less interference. When MS
is in SHO, several BSs listen to the same uplink channel, but
all BSs have their own downlink channel. The offered diversity
is resource consuming in downlink direction. There is
a tradeoff between better QoS in mobility management and
consumption of resources.
The decision on soft handover is made in the mobile station by
comparing the signal-to-noise ratios of the active and candidate
BSs' Common Pilot Channel (CPICH) [2]. Members of the active
set are selected on the basis of the powers of this pilot signal [5,
12, 16].
BSs which are not in the active set but are next to it in the
sense of the measured quantity are in the candidate set. Candidate
set BSs are constantly monitored to see whether they offer a better
connection than the cells in the active set. Cells not in the active or
candidate set are monitored less frequently to see whether they
can enter the candidate set. A cell is either added to the
active set, if the maximum number of cells in the active set
has not been reached, or it replaces the cell which offers the lowest
quality connection. Cells which are no longer able to offer
a good enough connection are removed from the
active set.
Thresholds are used when adding, replacing and removing
BSs from the active set with BSs from the candidate set, to avoid the
ping-pong effect. This means that the value of the measured quantity
should be better than the old one by a certain threshold
before cells in the active set are changed. If a measurement which is only
slightly better (i.e. with a zero threshold) were enough for changing
cells in the sets, it is quite possible that the same change is
performed in opposite direction very soon. Thus, the original
update of the set was useless and resource consuming in
the sense of all required signaling.
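Since the exact SHO algorithm and its parameters are not known for the analyzed network, the following sketch only illustrates the kind of threshold-based active set update described above; the set size, the thresholds and all names are arbitrary example values, not the operator's settings.

import java.util.Collections;
import java.util.Comparator;
import java.util.Map;
import java.util.Set;

// Illustrative only: thresholds, set size and cell identifiers are assumptions.
public class ActiveSetUpdater {
    private static final int    MAX_ACTIVE  = 3;    // assumed maximum active set size
    private static final double ADD_THRESH  = 2.0;  // dB: add a cell within this margin of the best one
    private static final double DROP_THRESH = 4.0;  // dB: drop a cell this far below the best one
    private static final double REPL_THRESH = 1.0;  // dB: replace the worst cell only with a clear margin

    /** One update step; ecno maps each monitored cell id to its measured CPICH Ec/N0 in dB. */
    public void update(Set<String> activeSet, Map<String, Double> ecno) {
        if (ecno.isEmpty()) return;
        double best = Collections.max(ecno.values());

        // Removal: cells that have fallen too far below the best measured cell leave the active set.
        activeSet.removeIf(cell ->
                ecno.getOrDefault(cell, Double.NEGATIVE_INFINITY) < best - DROP_THRESH);

        for (Map.Entry<String, Double> e : ecno.entrySet()) {
            String cell = e.getKey();
            double value = e.getValue();
            if (activeSet.contains(cell)) continue;

            if (activeSet.size() < MAX_ACTIVE && value > best - ADD_THRESH) {
                activeSet.add(cell);                  // addition: room left and signal strong enough
            } else if (activeSet.size() == MAX_ACTIVE) {
                String worst = Collections.min(activeSet, Comparator.comparingDouble(
                        (String c) -> ecno.getOrDefault(c, Double.NEGATIVE_INFINITY)));
                if (value > ecno.getOrDefault(worst, Double.NEGATIVE_INFINITY) + REPL_THRESH) {
                    activeSet.remove(worst);          // replacement: hysteresis avoids ping-pong
                    activeSet.add(cell);
                }
            }
        }
    }
}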
2.2 Data
Three data sets of Key Performance Indicator (KPI) level
measurements related on handover events are saved. Each
set consists of measurements collected during one hour. KPI
is considered as an important measure to be followed. It can
be a measurement by itself or it has been computed from a
group of raw counters [10]. One data vector consists of probabilities
, means, sums and counters computed over one hour
of one source target cell pair. Here, source refers on cell
in active set and target on another cell which is measured
and possibly added in active or candidate set.
Measurements
of target cell are compared with those of source cell.
Handover decisions are made in the MS on the basis of measured
and computed base station received signal-to-noise
ratios (E_c/N_0). For each source and target cell pair, the mean
of the signal-to-noise ratio differences is computed using
EcnoDiffMean = mean{ [E_c/N_0]_target - [E_c/N_0]_source }.
Mean value and number of made comparisons (EcnoDiffNum)
are saved. Four bin pdfs of these measurements are also
stored with bin centers at -6, -3, 0 and 3 dB, correspondingly.
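As an illustration of how these KPIs relate to the raw comparisons, the sketch below computes EcnoDiffNum, EcnoDiffMean and the four-bin pdf from a list of per-comparison differences; the nearest-bin rule and all class and method names are assumptions made for this example, since the actual counter implementation is not described.

import java.util.List;

// Illustrative only: the binning rule and names are assumptions.
public class EcnoDiffAggregator {
    // Bin centers at -6, -3, 0 and 3 dB; each sample is counted in the nearest bin.
    private static final double[] BIN_CENTERS = { -6.0, -3.0, 0.0, 3.0 };

    /** diffs holds [Ec/N0]_target - [Ec/N0]_source for every comparison of one cell pair. */
    public static Result aggregate(List<Double> diffs) {
        int ecnoDiffNum = diffs.size();
        double sum = 0.0;
        int[] counts = new int[BIN_CENTERS.length];
        for (double d : diffs) {
            sum += d;
            int nearest = 0;
            for (int b = 1; b < BIN_CENTERS.length; b++) {
                if (Math.abs(d - BIN_CENTERS[b]) < Math.abs(d - BIN_CENTERS[nearest])) {
                    nearest = b;
                }
            }
            counts[nearest]++;
        }
        double ecnoDiffMean = ecnoDiffNum == 0 ? 0.0 : sum / ecnoDiffNum;
        double[] ecnoDiffPdf = new double[BIN_CENTERS.length];
        for (int b = 0; b < ecnoDiffPdf.length; b++) {
            ecnoDiffPdf[b] = ecnoDiffNum == 0 ? 0.0 : (double) counts[b] / ecnoDiffNum;
        }
        return new Result(ecnoDiffNum, ecnoDiffMean, ecnoDiffPdf);
    }

    public record Result(int ecnoDiffNum, double ecnoDiffMean, double[] ecnoDiffPdf) {}
}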
In addition to the E_c/N_0
measurements, averages of received
pilot signal power ratios between BS pairs (av rscp ratio)
have been computed and stored in database. The time and
probability of being in SHO with each other have also been
measured. Time of target and source cell being in SHO with
each other simultaneously is counted in variable t act. Then,
at least one MS is in SHO having both source and target cell
in its active set. The measurement is symmetric for a switch
of source and target cells. Time of target cell being in SHO
with source cell is stored in t act dir. Cell total time in
SHO is saved in tot time sho. It has been counted over all
the targets of fixed source cell. Probability of target and
source being in same active set is stored in variable p act.
Total number of SHO attempts to add target to active
set is stored in SHO total att. Ratio of successful SHO attempts
which lead to addition of target cell in active set is
saved in add ratio. In addition to those above, the number
of SHO failures is stored in pfail total and ratios of four
different failure causes are saved. Failure occurs in setup
or active time phase of SHO and it is either radio channel
problem or not. Probability of cell being in monitored state
is also measured (p4th 5th). All the measurements used in
the analysis are shortly described in Table 1.
A lot of data has been saved in data sets, but also some
very important information is missing. Due to missing information
on cell capacities, their locations and performed
manual and automatic tuning operations on network configuration
between successive data set saves, only preliminary
analysis can be performed. The rest of the analysis process
is described on theoretical level.
2.3 Self-Organizing Map
Self-Organizing Map (SOM) [8] is an unsupervised neural
network algorithm which adapts the codebook vectors
of neurons so that they approximate the input data distribution
. When the training has converged topological areas
or domains corresponding to certain types of inputs can be
found from the map. The topology and the size of the network
is fixed before adaptation.
In the SOM algorithm, the codebook vectors w_j of the
SOM are first initialized. Then, the following steps are
repeated for each input vector x: find the index of the best-matching
or nearest codebook vector using
i(x) = argmin_j ||x - w_j||,
in which j goes through all the neurons in the map. Next,
the codebook vectors of the winner neuron and its neighbors are
updated using
w_j(t + 1) = w_j(t) + α(t) h_ij(x) (x(t) - w_j(t)).
Here, α is the learning rate and h_ij(x) is the neighborhood
function centered around the winner neuron. Input sample
x defines the winner neuron and the topological distance
between indexes i and j defines how much the neuron is
updated. The neighborhood function is typically a Gaussian or
bubble function, i.e. a function which decreases monotonically
and may even go to zero when the distance increases.
In this paper, a batch version of the SOM algorithm is
used. In batch SOM, all codebook vectors of the SOM are
computed after the best-matching units of all input data
vectors have been found. The same data set is used several
times.
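A minimal sketch of these two steps, the best-matching-unit search and one batch update epoch, is given below; the Gaussian neighborhood, the rectangular grid indexing and the handling of missing values are simplifications and not the exact implementation used in the study.

// Minimal batch-SOM sketch; simplified compared to the implementation used in the study.
public class BatchSom {
    private final double[][] codebook;  // one codebook vector w_j per neuron
    private final int gridWidth;        // neurons arranged on a gridWidth x gridHeight grid

    public BatchSom(double[][] initialCodebook, int gridWidth) {
        this.codebook = initialCodebook;
        this.gridWidth = gridWidth;
    }

    /** i(x) = argmin_j ||x - w_j|| */
    public int bestMatchingUnit(double[] x) {
        int best = 0;
        double bestDist = Double.MAX_VALUE;
        for (int j = 0; j < codebook.length; j++) {
            double d = 0.0;
            for (int k = 0; k < x.length; k++) {
                double diff = x[k] - codebook[j][k];
                d += diff * diff;
            }
            if (d < bestDist) { bestDist = d; best = j; }
        }
        return best;
    }

    /** One batch epoch: all BMUs are found first, then every codebook vector is
        recomputed as a neighborhood-weighted average of the data vectors. */
    public void batchEpoch(double[][] data, double sigma) {
        int[] bmu = new int[data.length];
        for (int n = 0; n < data.length; n++) bmu[n] = bestMatchingUnit(data[n]);

        for (int j = 0; j < codebook.length; j++) {
            double[] numerator = new double[codebook[j].length];
            double denominator = 0.0;
            for (int n = 0; n < data.length; n++) {
                double h = neighborhood(bmu[n], j, sigma);  // Gaussian h_ij centered on the winner
                denominator += h;
                for (int k = 0; k < numerator.length; k++) numerator[k] += h * data[n][k];
            }
            if (denominator > 0.0) {
                for (int k = 0; k < codebook[j].length; k++) codebook[j][k] = numerator[k] / denominator;
            }
        }
    }

    private double neighborhood(int i, int j, double sigma) {
        int dx = (i % gridWidth) - (j % gridWidth);
        int dy = (i / gridWidth) - (j / gridWidth);
        return Math.exp(-(dx * dx + dy * dy) / (2.0 * sigma * sigma));
    }
}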
METHODS
Handover related measurement from 3G network can be
analyzed using standard data mining methods [1]. In this
Table 1: Measurements in the analysis. Data set has one sample vector for each source target cell pair.
Variable | Explanation | Type
EcnoDiffNum | Computed E_c/N_0 differences | number
EcnoDiffMean | Computed E_c/N_0 differences | mean
EcnoDiffPdf-6.0 | -6 dB bin of E_c/N_0 difference pdf | ratio
EcnoDiffPdf-3.0 | -3 dB bin of E_c/N_0 difference pdf | ratio
EcnoDiffPdf0.0 | 0 dB bin of E_c/N_0 difference pdf | ratio
EcnoDiffPdf3.0 | 3 dB bin of E_c/N_0 difference pdf | ratio
t act | Target and source simultaneously in SHO | mean
t act dir | Time of target being in SHO with source | mean
tot time sho | Cell total time in SHO | sum
p act | Target in active set of source | ratio
SHO total att | SHO attempts to add target to active set | number
add ratio | Successful attempts leading to addition | ratio
pfail total | Failures | number
pfail ini | Setup phase failures due to non-radio | ratio
pfail ini radio | Setup phase failures due to radio | ratio
pfail act | Active time failures due to non-radio | ratio
pfail act radio | Active time failures due to radio | ratio
p4th 5th | Cell is in monitored state (=4th or 5th) | ratio
av rscp ratio | Target / Source received power ratio | mean
r fail (*) | Ratio pfail total / SHO total att | ratio
r EcnoDNum (*) | Ratio EcnoDiffNum / SHO total att | ratio
(*) Variable defined in the analysis.
study, methods presented in Figure 1 are used. At first,
the miner have to decide what could be interesting in this
data. The analysis task has to be defined. On basis of that
the first choice of variables will be done. Next, the selected
variables are preprocessed, in order to be able to use them
in later analysis.
In data mining tasks, variable selection and preprocessing
are the most critical phases, because in this step the miner
decides which variables are important and how should they
be processed. The whole data mining process consists of several
cycles performed repeatedly. The cycles include testing
how different variable selections and preprocessing methods
effect on final results. The process has inner loops in which
some tasks or parameters are fixed on basis of selections
made in outer loop. The inner loops are performed more
frequently. Loops with more general task like the definition
of mining task are repeated less frequently. When the
mining task is defined the analyzer should be able to decide
what (s)he is looking for.
Now, the analysis task is defined as finding groups of sim-ilarly
behaving cell pairs in SHO situations. Importance of
measurements can also be highlighted using proper weighting
of variables. In addition to clustering, also other tasks
for data analysis can be defined. One possibility is to try to
find cells or cell pairs with anomalous behavior. Anomalies
can also be found by clustering, but expert knowledge in
variable selection and preprocessing steps is very important.
Using different variables, preprocessing methods and weighting
of variables, different clustering results can be found. To
find out which of them is useful, interpretation of clusters is
needed. This can be done using histograms or rules defined
by data samples falling in clusters. The results which have
been found using clustering methods should be visualized
together with spatial locations to be able to understand the
usefulness of results. Methods should be performed repeatedly
to analyze successive data sets under the knowledge of
performed tuning operations. Thus, there is a possibility to
find explanations to changing results. In this study, results
of only one data set are shown, because more information
on application domain is needed to be able to combine and
compare successive clustering results.
3.1 Preprocessing
Different preprocessing methods have been tested. The
final method was selected on basis of histograms and the
clusters which were found using the selected method. At
the first step, the distributions are truncated. Outliers in
the selected variables were replaced by their maximum permitted
values. Two variables, pfail total and EcnoDiffNum,
were scaled using the number of performed soft handover
attempts (see Table 1). Logarithms of some of the variables
were taken, but finally only scaled EcnoDiffNum was preprocessed
with logarithmic function. Sample vectors with high
amount of undefined measurements were canceled.
The clustering method used (see section 3.2) allows using sample vectors
in which some variables are undefined. However, they
are not so useful when the rate of undefined values increases.
Here, sample vectors with 15 or more missing values in 20
variables are canceled.
In Figure 2 the histograms of the most interesting variables
preprocessed using selected methods are visualized.
Some of the variables have quite high peaks in distributions,
but due to the origin of variables no other preprocessing
have been performed. For example, handover failure reasons
pfail ini, pfail ini radio, pfail act radio and pfail act sum up
to unity. However, pfail act is not analyzed because it is zero
all the time in the first data set.
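The sketch below restates these preprocessing steps in code: truncating outliers to their maximum permitted values, forming the ratios r_fail and r_EcnoDNum by scaling with the number of SHO attempts, taking a logarithm of the scaled EcnoDiffNum, and discarding vectors with 15 or more missing values out of 20. The column indices, the cap values and the use of log1p are assumptions made for illustration.

// Illustrative only: indices, caps and the exact logarithm used are assumptions.
public class Preprocessor {

    /** Returns the preprocessed sample, or null when 15 or more of its 20 values are missing (NaN). */
    public static double[] preprocess(double[] sample, double[] maxAllowed,
                                      int pfailTotalIdx, int ecnoDiffNumIdx, int shoAttemptsIdx) {
        int missing = 0;
        for (double v : sample) if (Double.isNaN(v)) missing++;
        if (missing >= 15) return null;                      // too many undefined measurements

        double[] out = new double[sample.length];
        for (int i = 0; i < sample.length; i++) {
            // Truncate the distribution: replace outliers by their maximum permitted value.
            out[i] = Double.isNaN(sample[i]) ? Double.NaN : Math.min(sample[i], maxAllowed[i]);
        }

        double attempts = out[shoAttemptsIdx];
        if (!Double.isNaN(attempts) && attempts > 0.0) {
            out[pfailTotalIdx]  = out[pfailTotalIdx] / attempts;               // r_fail
            out[ecnoDiffNumIdx] = Math.log1p(out[ecnoDiffNumIdx] / attempts);  // log of r_EcnoDNum
        }
        return out;
    }
}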
Figure 1: Used data analysis method (the steps shown are task definition,
variable selection, preprocessing, clustering, interpretation of clusters,
visualization with locations, and parameter tuning). Steps connected
with solid arrows have been performed.
3.2 Clustering
Cluster analysis is used to divide data vectors in groups.
Data vectors falling in same cluster are similar with each
other.
Here, clustering is performed using a two-phase
method [15]. In this method, data vectors are at first used
to train a Self-Organizing Map. Neurons of the SOM adapt
to incoming data so that the input data can in later analysis
be represented by the codebook vectors of neurons. Number
of these codebook vectors is much smaller than the number
of original data vectors. Thus, computational complexity of
the final crisp clustering algorithm is decreased. Another
advantage of using a SOM based two-phase method instead
of direct clustering of data vectors is the visualization capability
of SOM.
In addition to preprocessing, SOM algorithm provides another
possibility to emphasize important properties of data.
Larger weights in distance computation are given to the
most important properties defined by the analyzer. Smaller
or even zero weight can be given to those variables which
are not used in organization of the SOM i.e. in building clusters
. However, values of them can be compared to those with
larger weights using various visualization methods. Weighting
by variable importance can also be built into SOM training
algorithm by utilizing learning distance metrics [7].
Figure 2: Logarithmic histograms after distribution
cuts, logarithmic preprocessing of r EcnoDNum and
scaling of all variables between [0,1]
The codebook vectors are further clustered using k-means
or some hierarchical clustering method. In this paper, Ward
agglomerative clustering method has been used [4]. In the
beginning of hierarchical clustering, each codebook vector
is a cluster of its own. In the next step, the most similar
clusters are combined and this is continued until all vectors
are in same cluster. The clustering results form a tree structure
called dendrogram. In visualization of a dendrogram,
the clusters combined in each step and the distance between
them are shown. Final clustering is selected by cutting this
tree at certain level. The number of clusters can be selected
manually or some cluster validation index can be utilized
to find the optimum. In this paper, Davies-Bouldin validation
index has been used [3]. Similar clustering methods
have earlier been used in the analysis of both GSM and 3G
network BTSs [9, 11, 13].
As a result of clustering, each data vector is represented
by index of one neuron or by the codebook vector stored in
that neuron. Furthermore, the neuron and the data vectors
the neuron represents belong to same cluster. On basis of
the clustering result, some clusters can be selected for more
specific analysis. Cluster selection is usually done on the basis
of higher values found for some critical variables. It is possible
to build a system in which rules are found for clusters
[14, 9] and these are used to select interesting clusters automatically.
Here, interesting clusters are selected manually
on basis of clusterwise variable mean values and histograms.
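For reference, the Davies-Bouldin index used to cut the dendrogram can be computed as sketched below (lower values indicate more compact and better separated clusters); clusters are passed in as lists of codebook vectors, and the Euclidean metric is assumed.

import java.util.List;

// Davies-Bouldin index sketch; Euclidean distances and centroid-based scatter are assumed.
public class DaviesBouldin {

    public static double index(List<List<double[]>> clusters) {
        int k = clusters.size();
        double[][] centroids = new double[k][];
        double[] scatter = new double[k];
        for (int i = 0; i < k; i++) {
            centroids[i] = centroid(clusters.get(i));
            for (double[] v : clusters.get(i)) scatter[i] += distance(v, centroids[i]);
            scatter[i] /= clusters.get(i).size();            // average distance to the centroid
        }
        double sum = 0.0;
        for (int i = 0; i < k; i++) {
            double worst = 0.0;
            for (int j = 0; j < k; j++) {
                if (i == j) continue;
                double r = (scatter[i] + scatter[j]) / distance(centroids[i], centroids[j]);
                worst = Math.max(worst, r);                  // most similar (worst separated) pair
            }
            sum += worst;
        }
        return sum / k;                                      // average over all clusters
    }

    private static double[] centroid(List<double[]> members) {
        double[] c = new double[members.get(0).length];
        for (double[] v : members)
            for (int d = 0; d < c.length; d++) c[d] += v[d];
        for (int d = 0; d < c.length; d++) c[d] /= members.size();
        return c;
    }

    private static double distance(double[] a, double[] b) {
        double s = 0.0;
        for (int d = 0; d < a.length; d++) s += (a[d] - b[d]) * (a[d] - b[d]);
        return Math.sqrt(s);
    }
}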
RESULTS
In this section, handover measurement data is used to
train a Self-Organizing Map of size 17 x 12. Then, the codebook
vectors of the SOM are clustered using hierarchical
Ward method. Results of clustering are described and two
clusters are then selected for more specific analysis. Characteristics
of sample vectors falling in those clusters are studied
using histograms.
Only the most interesting variables are used to find the
nearest neuron of input data vector. These variables have
nonzero mask which can also be considered as a weighting
factor in a search for the best-matching neuron. Rest of
the variables have zero mask, which means that they can
be visualized and updated using SOM algorithm, but they
do not have an effect on organization of the SOM and on
selection of the cluster in which the sample belongs to.
In Figure 3 all other component planes of SOM with positive
mask are shown, except the E_c/N_0 difference distributions,
which are shown in Figure 6. In component plane visualization
, distributions of components (or variables) of SOM
codebook vectors are shown. Component values of one codebook
vector are visualized using grayscaling and they are located
in the same position in each plane. For example, values of
one codebook vector are shown at upper right corner in each
plane.
Figure 3: Component planes of SOM with denormalized
scales. Shown variables have nonzero mask
and they are not describing E_c/N_0 difference distributions.
Some component values which were not used in SOM
training (i.e. they were masked out) are shown in Figure
4. Although they have no effect on SOM organization, they
are adapted so that their distributions can be compared even
with those used in organizing the SOM.
By visual comparison of the variables in Figures 3 and 4, it can
be seen that the total number of SHO attempts (SHO total att)
and of E_c/N_0 difference measurements
(EcnoDiffNum) is higher in the upper part of the SOM. However,
when the latter is scaled by the total number of attempts,
the higher rate of measurements (r EcnoDNum) is in the lower part
of the map. Also, the total number of failing SHO attempts
(pfail total) is high in the upper right corner, but scaling
this by the number of attempts tells us that the failure rate
(r fail) in the upper right corner is quite moderate. Instead,
higher failure rates exist in both lower corners, i.e. in clusters
5 and 8 (see Figure 5).
Trained SOM codebook vectors are clustered using hierarchical
Ward algorithm. The clustering result selected by
Davies-Bouldin index is shown in Figure 5. Four-bin E_c/N_0
difference histograms are visualized on top of the clustered SOM
Figure 4: Denormalized component planes of variables
which were not used in SOM training.
in Figure 6. When component values of SOM (see Figures
3, 4 and 6) are compared with clustering result (see Figure
5) several types of source target pairs can be found. Most of
them are behaving as expected, but some of them represent
handover attempts with certain type of problems.
Figure 5: SOM which is clustered using hierarchical
Ward method and Davies-Bouldin validation index.
To find out the most interesting clusters of the SOM for
further investigations, distribution of data samples on SOM
is visualized. In Figure 7a hits of all samples on SOM nodes
are visualized and in Figure 7b hits of samples with SHO
failure rate (r fail) larger than 22% are shown. Samples are
distributed all over the map; only some edge nodes have a
slightly larger hit rate. The lower part of the map has more hits
when samples with increased failure rate are considered.
In Figure 8 hits of samples which represent two different
Figure 6: EcnoDiff distributions on top of clustered
SOM. In each SOM node a four-bin E_c/N_0 histogram
is shown.
types of SHO failures are shown. Samples are from cell
pairs in which the rate of selected type of failures is larger
than 75%. However, handover initialization failures due to
some other reason than radio channel resources (i.e. pfail ini
type failures) are obviously more frequent than failures due
to radio channel initialization problems (pfail ini radio type
failures). Cell pairs with SHO failures originating mainly
from these two reasons are mapped on separate clusters.
All SHO failures due to radio channel initialization are in
cluster 9 (see Figures 5 and 8b) and most of all other initialization
failures are in cluster 5 (see Figures 5 and 8a). In
the following, these two clusters are studied in more detail.
In Figures 9 and 10 histograms of samples which belong
to clusters 5 and 9 are shown.
These histograms should
be compared with histograms of whole data set which were
shown in Figure 2. In histograms of cluster 5 (see Figure
9), the average received signal power ratio (av rscp ratio) is
slightly lower than in general. The distributions of the three largest
E_c/N_0 difference measurement bins are completely different
from the corresponding distributions of the whole data set.
In cluster 5, most of the samples have about a 3 dB E_c/N_0 difference
(EcnoDiffPdf3.0), which means that at least this measurement
makes successful SHOs possible and SHO should
be performed. The exceptional E_c/N_0 difference measurements
(a) All
(b) SHO failure rate > 22%
Figure 7: Sample vector hits on SOM nodes. Size of
the black hexagon on a SOM node denotes the number
of hits. Maximum number of hits per node is shown
above the plot.
(a) pfail ini
(b) pfail ini radio
Figure 8: Hits of samples of two failure types. Samples
of which more than 75% are failing due to the
selected cause are counted.
of this cluster can also be seen in Figure 6. All the failing
cell pairs fail in initialization due to other than radio channel
reasons (pfail ini). Total rate of failures is very high
(r fail). One reason for high rate of failures can be that all
the capacity is in use.
In histograms of cluster 9 (see Figure 10), the average received
power ratios are a bit higher than usual, but there are
no samples with a high rate of 3 dB E_c/N_0 differences
(EcnoDiffPdf3.0). However, in such a situation it should be possible
to perform successful SHOs. The rate of initialization failures
in radio channels (pfail ini radio) is higher than usual,
but because only a small part of the samples in this cluster have
the above-mentioned problems, the total SHO failure rate is not
higher than usual. The total number of samples or cell
pairs with high rate of initialization failures in SHO is so
small, that it is impossible to make any further inferences
from these clusters. It is possible to check histograms of
only those samples which fulfill the failure rate criteria, but
the number of samples is anyway quite low.
Figure 9: Histograms of data vectors of cluster 5.
Cell pairs with high rate of radio channel initialization
failures in SHO attempts vary from one data set to another,
but without any information on network topology and with
incomplete information on performed tuning operations, it
is impossible to make any further inferences.
Figure 10: Histograms of data vectors of cluster 9.
CONCLUSIONS
In this paper, a data analysis method based on a neural
network has been presented. The method is utilized in
data visualization and clustering. The presented method is
only one possibility for finding data clusters. However, the
benefits of the proposed method are the decrease in computational
complexity due to used two-phase clustering algorithm
and the visualization capability of the method. Thus,
it is well suitable for this kind of explorative data analysis.
It is desirable to find clusters with characteristics which
differ from one cluster to another. In the presented method,
selection of variables and variable weighting factors have
been used to find interesting clusters. In the preprocessing
phase, also the number of permitted undefined measurement
values in sample vector has an effect on found clusters.
Sample vectors with high rate of missing values are not so
usable and describable as samples without them. Vectors
with missing values can be used in the SOM training but
the benefit of using them decreases when the rate of undefined
values increases.
In this study, histograms are used both when preprocessing
methods are decided and when an interpretation for the
found clusters are looked for. However, clusters can also be
compared using other visual methods, finding limiting rules
for variable values in clusters or comparing distributions of
variable values in clusters using more sophisticated distribution
comparison measures like Kullback-Leibler divergences
[6].
The results which have been obtained using all available
data sets differ slightly from each other, but due to incomplete
information on network configuration and parameter
tuning, further inferences cannot be made. However, adding
this information would offer interesting possibilities to continue
this study.
REFERENCES
[1] P. Chapman, J. Clinton, T. Khabaza, T. Reinartz,
and R. Wirth. CRISP-DM 1.0 step-by-step data
mining guide. Technical report, CRISM-DM
consortium, 2000. http://www.crisp-dm.org.
[2] Y. Chen. Soft Handover Issues in Radio Resource
Management for 3G WCDMA Networks. PhD thesis,
Queen Mary, University of London, 2003.
[3] D. Davies and D. Bouldin. A cluster separation
measure. IEEE Transactions on Pattern Analysis and
Machine Intelligence, 1(2):224-227, April 1979.
[4] B. Everitt. Cluster Analysis. Arnold, 1993.
[5] V. K. Garg. Wireless Network Evolution: 2G to 3G.
Prentice-Hall, Inc., 2002.
[6] S. Haykin. Neural Networks, a Comprehensive
Foundation. Macmillan, 1999.
[7] S. Kaski and J. Sinkkonen. Metrics that learn
relevance. In Proceedings of the International Joint
Conference on Neural Networks, volume 5, pages
547-552, 2000.
[8] T. Kohonen. Self-Organizing Maps. Springer-Verlag,
Berlin, 1995.
[9] J. Laiho, K. Raivio, P. Lehtimäki, K. Hätönen, and
O. Simula. Advanced analysis methods for 3G cellular
networks. IEEE Transactions on Wireless
Communications, 4(3):930-942, May 2005.
[10] J. Laiho, A. Wacker, and T. Novosad, editors. Radio
Network Planning and Optimisation for UMTS. John
Wiley & Sons Ltd., 2001.
[11] P. Lehtimäki and K. Raivio. A SOM based approach
for visualization of GSM network performance data.
In IEA/AIE, pages 588-598, 2005.
[12] R. Prakash and V. Veeravalli. Locally optimal soft
handoff algorithms. IEEE Transactions on Vehicular
Technology, 52(2):347-356, March 2003.
[13] K. Raivio, O. Simula, and J. Laiho. Neural analysis of
mobile radio access network. In IEEE International
Conference on Data Mining, pages 457-464, San Jose,
California, USA, November 29 - December 2 2001.
[14] M. Siponen, J. Vesanto, O. Simula, and P. Vasara. An
approach to automated interpretation of SOM. In
N. Allinson, H. Yin, L. Allinson, and J. Slack, editors,
Advances in Self-Organizing Maps, pages 89-94.
Springer, 2001.
[15] J. Vesanto and E. Alhoniemi. Clustering of the
self-organizing map. IEEE Transactions on Neural
Networks, 11(3):586-600, May 2000.
[16] J. Zander. Radio Resource Management for Wireless
Networks. Artech House, Inc., 2001.
| Two-Phase Clustering Algorithm;data mining;mobility management;Key Performance Indicator of Handover;soft handover;Data Mining;Soft Handover;Visualization Capability;Neural Network Algorithm;neural networks;hierarchical clustering;Self-Organizing Map;Cluster Analysis;Histograms;3G network;Decrease in Computational Complexity |
37 | Aspect Oriented Programming for a component-based real life application: A case study | Aspect Oriented Programming, a relatively new programming paradigm, earned the scientific community's attention . The paradigm is already evaluated for traditional OOP and component-based software development with remarkable results. However, most of the published work, while of excellent quality, is mostly theoretical or involves evaluation of AOP for research oriented and experimental software . Unlike the previous work, this study considers the AOP paradigm for solving real-life problems, which can be faced in any commercial software. We evaluate AOP in the development of a high-performance component-based web-crawling system, and compare the process with the development of the same system without AOP. The results of the case study mostly favor the aspect oriented paradigm. | INTRODUCTION
Aspect Oriented Programming, a relatively new programming
paradigm introduced by Kiczales ([2]), recently earned
the scientific community's attention.
Having around six
years of life, the paradigm was already presented in important
conferences, and recently triggered the creation of
several conferences and workshops to deal with it.
The paradigm is already evaluated for traditional OOP
and component-based software development and is found
very promising. Several evaluations consider it to be the
continuation of the OOP paradigm. However, most of the
published work while of excellent quality is mostly theoretical
or involves evaluation of AOP for research oriented and
experimental software. Unlike previous works, this study
considers the AOP paradigm for solving real-life problems,
which need to be faced in any commercial software. We evaluated
Aspect Oriented Programming in the development of
a high-performance component-based web-crawling system,
and compared the process with the development of the same
system without AOP. The results of the case study, mostly
favoring the aspect oriented paradigm, are reported in this
work.
This introduction is followed by an introduction to the
AOP approach. We then describe the application that was
used for our evaluation and proceed with a description of
our evaluation scenario. We then present and comment our
evaluation results. We continue with references to similar
evaluation attempts, and, finally, we summarize the conclusions
from our evaluation, and report on future work.
ASPECT ORIENTED PROGRAMMING
Aspect Oriented Programming, as proposed by Kiczales
([2]), is based on the aspectual decomposition. Aspectual
decomposition is somewhat complementary to functional decomposition
, and tries to overcome the limitation of functional
decomposition to capture and represent crosscutting
functionality. After separating the system to functional constructs
, aspectual decomposition is applied to the design in
order to catch the crosscutting concerns. Crosscutting functionality
usually includes extra-functional requirements (e.g.
timing constraints or logging facility to all the system components
). This functionality is usually replicated a number
of times, spread over the whole system. There is no single
point of reference where the developer can say that the
aspectual functionality belongs and should be implemented.
The main purpose of AOP is to capture the crosscutting
concerns throughout the system, and promote them as first-class
citizens, in order to enable the modeling and reusing of
them. The high level goals of such an approach, as reported
in various publications, follow:
1. AOP makes programming easier and faster, closer to
the human perception ([2, 3, 7]). Developers understand
the concept of crosscutting concerns and crosscutting
functionality, and they use it in understanding
the whole system. However, apart from AOP, there is
no easy way to implement such a crosscutting concern
.
With AOP, aspects are closer to the human
perception for crosscutting concerns and simplify the
design and implementation of systems with such requirements
. Aspects can even allow code reuse for the
extra-functional requirements they implement, which
usually crosscut the whole system. Thus, they make
system implementation easier and faster.
2. AOP makes programming less error-prone and easier
to debug and maintain ([2, 3, 7, 6]). Not only the code
becomes more modular, thus, easier to maintain and
enhance, but also debugging becomes easier (thanks to AOP's
inherent ability of automatic aspect invocation). Furthermore, AOP favors
reusability and modular representation of crosscutting
concerns, which make the code more readable and prevent
tangling code.
The AOP approach is already used in the implementation
of several academic-oriented systems such as [4], but there
is not much work reported on AOP relating with commercial
environment. However, we strongly believe that AOP can
enter the industrial environment, and that it has much to
offer. We expect to witness that in the near future.
THE HIGH PERFORMANCE COMPONENT-BASED WEB CRAWLER
To evaluate the AOP paradigm, we chose a high performance
component-based web crawler, which would serve the
needs of our laboratory. However, it was important for us to
make the crawler easily extensible and changeable in order
to be able to reuse it in different projects. Furthermore, the
crawler should not be characterized as experimental (e.g.
unstable or with extremely complicated configuration) since
it should be reusable in a number of different projects, and
without needing to know the complete infrastructure. We
also needed the crawler to be easily adjustable to different
configurations, hardware, and network situations, because
of the variety of our hardware, as this would be desired in a
real-life application.
This application was found suitable for our AOP evaluation
, since it was of respectable size, which would give
us the opportunity for better results.
Furthermore, the
non-experimental characterization of the current application
, which is rarely the outcome in the academic environment
, would ensure a more practical approach of our evaluation
. For the same reason, the extra-functional requirements
implemented for the evaluation, were carefully selected. It
was important for us to keep the whole implementation and,
consequently, the AOP evaluation not far from the commercial
field, which we feel to be the important end-user of the
programming paradigms.
Having these points in mind, we decided to use the following
design, comprising three basic multi-threaded components
: (i) the database component, (ii) the crawling component
, and (iii) the processing component.
Figure 1: The architecture of a high-performance
component-based web-crawling system.
The database component was responsible for two tasks:
(a) updating the database with the processed information,
received from the processing component, and (b) feeding the
crawling component with the necessary URLs to be crawled.
Furthermore, as in all the components, a number of threads
were running in parallel in each component, so that the fast
devices like CPU and memory (as opposed to the usually
slow devices like I/O and network) would be more efficiently
utilized. The number of threads running in parallel in each
component could be selected from the user, and also adjusted
dynamically from each component for optimal performance
. Selecting a very small number of threads, the
user would let fast resources like processor and the memory
rather unutilized, while selecting an overly large number of
threads would result to large context switching overhead.
The crawling component's responsibility was to download
the URLs from the web and provide the processing component
with the page information for further processing.
Page information included the page's URL, IP address, and
page text. Again, the crawling component ran a number of
threads to maximize resource utilization.
Finally, the processing component was responsible for receiving
the page information from the crawling component
and processing it, and passing the results to the database
component for permanent storage. As in the other components
, this component was also multi-threaded, thus utilizing
the resources better.
EVALUATION SCENARIO
To evaluate AOP in the crawling project, we ran the following
scenario: First, we set our metrics for the evaluation
of AOP, trying to keep them as objective as possible; then,
we designed the component-based web crawler and located
the different functionalities that could be modeled as aspects
. Following that, we implemented and tested the three
components independently. The implementation up to that
point did not include any of the functionalities identified as
aspects in the earlier step. Finally, we tried to integrate the
three components, and also include the extra functionality,
implemented with and without AOP.
Our selection for the metrics was mostly to favor (as much
as possible) objective results. Our goal, as Murphy in [5]
suggests, was to answer two important questions: (a) if AOP
makes it easier to develop and change a certain category of
software (usefulness), and (b) what is the effect of AOP in
the software development (usability). For these reasons, we
selected the following metrics:
1. We measure effectiveness of AOP for implementing the
extra functionality, compared to traditional OOP.
2. We measure the learning curve of AOP methodology.
3. We measure time that took to complete the project
with the two approaches, AOP and traditional OOP.
4. We measure complete lines of code for the added functionality
with AOP and with traditional OOP.
5. We compare code tangling in the AOP and the traditional
OOP model.
6. We report on the stability of the AOP model for creating
component-based software.
The types of functionality that we identified as being best
modeled as aspects were the following ones:
Logging : This functionality requires saving extended program
execution trace to a file or printing it to the
screen.
The trace should include entrance and exit
messages from the methods, exceptions thrown, and
time of each event.
Overloading checks : Since the crawling function is expensive
in resources, we must constantly check for overloading
in any of the resources, in order to avoid driving
the machines to collapse. The two resources we
had to monitor were the DNS server that was serving
our crawler and the machine that was hosting our
crawling database.
Database optimizer : Even with the combination of the
expensive high performance hardware and software that
was used for the database server, we still needed to
follow some optimization techniques to minimize the
need for database connectivity. This was due to the
heavy load that our database server experienced from
the crawling function.
The Logging aspect, the most common aspect in AOP,
was mostly to help debugging during the developing stage
of the application, but it would also be used for identifying
bottlenecks (profiling) and performing optimizations to
the components in a later stage. When the logging aspect
was enabled, entering or exiting a method would print (to
stderr) the method's name, the exact time, and some other
useful information. Moreover, a method throwing an exception
would result in invoking the logging aspect to print the
exception with the method's name in stderr.
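For illustration, such a logging aspect could be written roughly as below in AspectJ-style syntax for Java; the crawler package name and the exact pointcut are assumptions, since the paper does not reproduce the aspect code itself.

// Illustrative AspectJ-style sketch; package name and pointcut are assumptions.
public aspect LoggingAspect {

    // Every method execution in the crawler's own packages, excluding the aspect itself.
    pointcut traced(): execution(* crawler..*.*(..)) && !within(LoggingAspect);

    before(): traced() {
        System.err.println("ENTER " + thisJoinPointStaticPart.getSignature()
                + " at " + System.currentTimeMillis());
    }

    after() returning: traced() {
        System.err.println("EXIT  " + thisJoinPointStaticPart.getSignature()
                + " at " + System.currentTimeMillis());
    }

    after() throwing(Throwable t): traced() {
        System.err.println("THROW " + thisJoinPointStaticPart.getSignature() + ": " + t);
    }
}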
The overloading checks were broken in two aspects, the
DNS monitoring aspect and the database monitoring aspect
. The DNS monitoring aspect was trying to adjust the
number of active downloading threads according to the DNS
server status. More to the point, the problem we faced was
that the DNS server that was serving our crawler was shared
with other machines, some of them running experimental
software, making extensive use of the DNS server for DNS
resolution. This practically meant that the efficiency of the
DNS server was dependent of the number of software clients
using it in parallel. Running more than the appropriate (for
each moment) downloading threads in our crawler (that were
doing the DNS resolution) resulted in more DNS resolution
requests than our DNS server could handle and, eventually,
the collapse of our DNS server. On the other hand, underestimating
our DNS server's abilities in low-usage hours would
result in significantly lower crawling speed. For these reasons
we constructed and used the DNS monitoring aspect,
which would adjust the number of the downloading threads
according to the running DNS load. Each DNS resolution
was timed, and when discovering latency higher than expected
, we were temporarily pausing some of the downloading
threads (the pause time and the number of the threads
that we were pausing were analogous to the latency), thus,
causing fewer DNS lookups in a specific time.
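A corresponding sketch of the DNS monitoring aspect is shown below, again in AspectJ-style syntax; the method being intercepted, the latency threshold and the thread-pausing helper are all invented names and values standing in for the crawler's actual code.

// Illustrative only: Downloader.resolveAddress, DownloadThreads.pause and the
// threshold are placeholder names and values, not the crawler's real interface.
public aspect DnsMonitoringAspect {
    private static final long EXPECTED_MS = 200;    // assumed normal DNS resolution latency

    pointcut dnsLookup(): call(* Downloader.resolveAddress(..));

    Object around(): dnsLookup() {
        long start = System.currentTimeMillis();
        Object result = proceed();                  // perform the actual DNS resolution
        long elapsed = System.currentTimeMillis() - start;
        if (elapsed > EXPECTED_MS) {
            // Pause a number of downloading threads, and for a time, proportional to the latency.
            int threadsToPause = (int) (elapsed / EXPECTED_MS);
            DownloadThreads.pause(threadsToPause, elapsed);
        }
        return result;
    }
}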
The database monitoring aspect's goal was to disable overloading
in the database machine.
A similar approach to
the DNS monitoring aspect was used. We were monitoring
the responses from our database server and when we
were detecting overloading of the database we would pause
some of the downloading threads. The reason that we could
not predict the ideal number for the database component
threads from the beginning was because of the variety of
the web-pages. For example, a web-page with many new
words (words that are for first time parsed from the crawler)
would result in much database load, while words already
seen by the crawler would result in much less (due
to some optimizations, similar to those proposed in
[1]).
For these reasons, we constructed the database monitoring
aspect to monitor database queries. The aspect would time
every interaction with the database server and try to detect
overloading. When the time demanded for the query was
longer than a threshold (all the queries we were executing
had the same average execution time in normal
circumstances), we would pause some of the downloading
threads for some time, in order to allow the database server
to complete its work without extra work added at the same
time. Later on, the downloading threads would resume their
work.
These two last aspects would not contradict each other,
since they were both doing the same action, pausing some of
the downloading threads. However, the pause time and the
number of the downloading threads to pause were not the
same in the two cases. Each of the aspects was calculating
the time and the number of threads to pause with a different
algorithm.
Finally, we also constructed the database optimizer aspect
which acted as a database cache and released some of
the database load. More specifically, for the parsing function
we were making heavy use of the crawling dictionary
table from the database. That dictionary mapped every
word found so far to its id number.
The choice was to avoid needless and costly replication of
data and to enable saving the page text as numbers (smaller
in storage size and faster to seek). By keeping a memory
cache of that table as in Brin's implementation ([1]), we
would manage to get important workload off the database
server and speed things up. More to the point, prior addressing
the database for a word's serial number, we were
querying an indexed structure in the local memory. If the
query failed, we were then inserting the word in the database
and in the RAM dictionary and continuing our work. This
minimized the database interactions and boosted the complete
process, since RAM access was enormously faster than
access to the database. Processing English language pages
with an average-size dictionary of 1 million words would result
in around a 99.9% hit rate from the RAM table, thus
preventing needless database queries very efficiently.
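The following sketch shows the essence of this optimization: the in-memory word-to-id map is consulted first and the database is touched only on a miss. The WordDao interface and its method are placeholders for whatever JDBC code performed the actual INSERT and SELECT in the crawler.

import java.util.concurrent.ConcurrentHashMap;

// Illustrative only: WordDao and its method are placeholder names.
public class DictionaryCache {
    private final ConcurrentHashMap<String, Long> wordIds = new ConcurrentHashMap<>();
    private final WordDao dao;

    public DictionaryCache(WordDao dao) {
        this.dao = dao;
    }

    /** Returns the id of a word, inserting it into the database only when it has not been seen before. */
    public long idFor(String word) {
        Long cached = wordIds.get(word);
        if (cached != null) return cached;          // the common case: served from RAM

        synchronized (this) {                       // avoid duplicate inserts of the same new word
            cached = wordIds.get(word);
            if (cached != null) return cached;
            long id = dao.insertWordAndGetId(word); // the database is queried only on a miss
            wordIds.put(word, id);
            return id;
        }
    }

    public interface WordDao {
        long insertWordAndGetId(String word);
    }
}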
EVALUATION RESULTS
As already mentioned, these four aspects were implemented
in two distinct ways: (a) injected in the program code, using
standard OOP approach, and (b) modeled and implemented
as aspects. The two versions were then compared and evaluated
in the described metrics. The results from the evaluation
were mostly in favor of the AOP methodology. While
the developers were not long experienced in AOP, the new
model boosted the implementation speed and helped produce more
modular software.
Regarding effectiveness of the AOP approach compared
to the traditional OOP approach, the two approaches were
the same. We managed to add the extra functionality in
both versions of the software (however, it was not always
trivial to do so). In short, for the presented aspects there
was always an AOP-oriented and an OOP-oriented solution
available, and there was not a noticeable performance difference
between the two.
Regarding the time demanded to learn the AOP methodology
, this was not significant. Both the developers that
were working on the project were very experienced with
OOP, but did not have previous practical experience with
AOP. Fortunately for the project, they were able to learn
AOP sufficiently without tutoring using only publicly available
online sources in a single week. There was also another
short overhead of one day for installing and getting familiar
to an AOP-aware IDE (we used Eclipse with the AOP
modules).
The complete time that was required to finish the crawler
was shorter in the AOP version (this time did not include the
time spent for learning AOP however). Both the versions
used the same core already developed (the three components
demonstrated earlier) but they were continued com-pletely
independently, without reusing knowledge or code
from one version to the other (the nature of the two versions
prohibited reusing knowledge or code anyway). The
time demanded for completing the crawler with the aspects
in the AOP version was 7 man-hours, while the OOP version
demanded 10 man-hours in order to design and develop
the code. Most of this time, in the case of the OOP
version, was needed for locating the methods and putting
the necessary code to them. For implementing the logging
functionality for instance, in the OOP version there were
73 such methods counted, while AOP did not demand this
task since the pointcuts were found automatically from the
aspect definition. It was the developers' feeling that most of
the man-hours spent in the OOP version of the crawler were
wasted, because they were repeating trivial code in the application
. Furthermore, as they said, the result in the OOP
case was not satisfactory for them since, if they needed to
change something in an aspect, they should relocate the aspect
code from the beginning and this would be difficult to
be done.
We also measured the number of lines we needed to add in the two approaches to implement the extra functionality. For the logging aspect with the AOP approach, we needed fewer than 20 lines in one single file, while the same functionality in the traditional OOP version required 126 lines of code spread over eight different files (the line count for the AOP code also includes the pointcut definitions and the Java import directives). Apart from being time-demanding, this also reveals significant code tangling, since we had to modify eight classes for a simple logging requirement.
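To illustrate why the AOP version of the logging concern stays this small, the sketch below shows what such an aspect can look like in AspectJ's annotation style; the package name gr.crawler, the class name, and the pointcut pattern are illustrative assumptions, not the project's actual code.

```java
import org.aspectj.lang.JoinPoint;
import org.aspectj.lang.annotation.Aspect;
import org.aspectj.lang.annotation.Before;

import java.util.logging.Logger;

// A single aspect replaces the manual log statements that the OOP version
// scattered over 73 methods in eight files.
@Aspect
public class CrawlerLogging {

    private static final Logger LOG = Logger.getLogger("crawler");

    // Matches every public method of the (hypothetical) crawler packages.
    @Before("execution(public * gr.crawler..*.*(..))")
    public void logEntry(JoinPoint jp) {
        LOG.info("entering " + jp.getSignature().toShortString());
    }
}
```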
The other two aspects, DNS monitoring and database monitoring, needed roughly the same number of lines in the two versions. To implement both the DNS monitoring and the database monitoring functionality, we needed around 30 lines for the pure OOP solution: (a) four lines for timing the DNS or the database query, (b) ten lines for checking for overloading and proceeding with the alternate behavior if overloading occurs, and finally (c) one line for invoking the check wherever needed. In the AOP solution, we were able to join the two concerns in a single aspect - something we were unable to do in the OOP version - and reuse some of the code. The AOP version demanded roughly the same number of lines, around forty for both concerns (the additional code was due to the aspect and advice headers and the pointcut definitions).
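A rough sketch of how the two monitoring concerns can share one aspect is given below; the pointcut expressions, the 2-second threshold, and the throttle() hook are illustrative assumptions rather than the crawler's real logic.

```java
import org.aspectj.lang.ProceedingJoinPoint;
import org.aspectj.lang.annotation.Around;
import org.aspectj.lang.annotation.Aspect;

// One aspect covering both monitoring concerns: it times the wrapped call
// and reacts when the call is judged to be overloaded.
@Aspect
public class OverloadMonitoring {

    private static final long LIMIT_MS = 2000;   // illustrative overload threshold

    // Hypothetical pointcut: DNS resolution and database query methods.
    @Around("execution(* gr.crawler.Downloader.resolve(..)) || " +
            "execution(* gr.crawler.Database.query(..))")
    public Object timeAndCheck(ProceedingJoinPoint pjp) throws Throwable {
        long start = System.currentTimeMillis();
        Object result = pjp.proceed();           // run the original DNS/DB call
        long elapsed = System.currentTimeMillis() - start;
        if (elapsed > LIMIT_MS) {
            throttle();                          // alternate behavior on overload
        }
        return result;
    }

    private void throttle() {
        // e.g., pause some downloading threads for a while (details omitted)
    }
}
```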
Finally, the database optimizer needed the same number of lines in the two versions, that is, forty lines. In the OOP version these lines were split across three different places in the original database component file, while in the AOP approach the original file was kept intact and all the new code was placed in a single aspect-definition file.
We also tried to capture the code tangling that occurred in the two versions after the extra functionality was added. To do that, we examined the distribution of the added code across the affected files. The OOP version of the logging aspect, as expected, was spread over all eight files in seventy-three different places. The OOP version of the DNS monitoring aspect resulted in code being added to one file only, the downloader component file, in three different places. Similarly, the OOP version of the database monitoring aspect resulted in code being added to the database component file, in three different places. Finally, the database optimizer implementation without AOP also resulted in code being added in three different places (again, in the database component file).
On the other hand, implementation of the four aspects
with the AOP approach, as expected, created no code tangling
. The complete code for the extra functionalities was
included in the three (instead of four, since the DNS and the
database monitoring concerns were implemented as a single
aspect) aspect files. For the case of the database optimizer, this offered us another important advantage, since we often needed to disable the optimizer due to hardware (memory) limitations on weaker machines. Although we had made no provision for that, the AOP version, unlike the OOP version, allowed the optimizer to be removed without changing any code; in the OOP version, the developer had to remove or modify some of the original code.
Table 1 summarizes the code size and code tangling results for the four aspects.
Finally, the AOP implementation we used, combined with the IDE tool, was stable and did not cause any unexpected problems (such as compiler bugs). While the crawling system was not extremely big, it made extensive use of the machines' resources, and the AspectJ-compiled files had no trouble with that. They proved to work fine under pressure on the standard virtual machine, and the introduction of aspects did not cause noticeable overhead on the machines.
RELATED WORK
Several publications try to evaluate AOP. Almost all of them report results similar to ours. However, while of excellent quality, most of the previous work we are aware of follows a theoretical approach or limits its hands-on evaluation to academic or experimental software. We will now briefly comment on some of it.
Walker et al. [8] construct several experiments and a case study to evaluate AOP. The outcome of the evaluation is that AOP can speed up software development (programming, debugging, etc.) under certain conditions, while in other cases it makes AOP-based development less attractive. While of superb quality and significant importance, this work is limited to the evaluation of AOP based on a preliminary version of AspectJ, version 0.1. Since then, AOP, and especially AspectJ, has changed significantly, addressing most of the limitations detected in that evaluation and also providing users with more functionality. Furthermore, CASE tools and powerful IDE environments have been developed to assist the
developers in the process.

Aspect                  # Lines of code    # Places to add    # Files to add
                          OOP     AOP        OOP     AOP        OOP     AOP
Logging                   126      19         73       1          8       1
DNS Monitoring             15      40          3       1          1       1
Database Monitoring        15       *          3       *          1       *
Database optimizer         45      45          3       1          1       1

Table 1: Size of added code and code tangling for implementation of the aspects. The two aspects, DNS and Database monitoring, were easily joined into a single aspect in the AOP version; the AOP cells marked * are shared with the DNS Monitoring row.
Mendhekar et al. [4] also present a case study, evaluating AOP in an image processing application. Although AOP was then still in its infancy, this case study presents results very similar to ours. However, the authors, being at the Xerox labs where AOP was born, follow a more research-oriented approach in their evaluation. The evaluation uses an AOP implementation that cannot easily be used by people outside the Xerox environment. Also, being interested mainly in performance, this work does not elaborate on various other important measures, such as the learning curve and the time the developer needed to complete the work.
Several other important publications [2, 3, 7, 6] evaluate AOP from a mostly theoretical standpoint. Most of them also report results that favor AOP programming. Some of their results are reported in section 3 of this report.
CONCLUSIONS
During the construction of the component-based high-performance
web crawler, we had the opportunity to evaluate
the relatively new aspect oriented paradigm for building
component-based systems. Having defined our extra functionality
, we implemented and compared the two versions of
the web crawler, the AOP and the OOP one. For the required
extra functionality, both paradigms proved able to deliver a correct solution. The quantity of code (number of lines) the developer needed to write did not differ much between the two versions, with the sole exception of the logging aspect, where the OOP implementation was much larger than the AOP one. Furthermore, there was no apparent performance difference between the two versions of the application. Both versions were stable, even when working under high load and in varying system environments. The significant difference between the two implementations, however, was in the time required to develop and debug each of them and in the quality of the produced code. The AOP approach not only completed the system faster, but also produced modular, high-quality code, while the traditional approach produced the well-known spaghetti code. More specifically, the AOP version kept all the extra functionality apart from the code implementing the standard functional requirements. This not only kept the original components reusable in different implementations, but also prevented code tangling, making future maintenance easier. Furthermore, it allowed us to easily enable and disable the extra functionality, depending on the hardware resources available and on our requirements.
Concluding, we have to report that the AOP model in general appears to favor the development of quality component-based software. The AOP model itself is able to boost implementation speed without negatively affecting the quality of the software. Moreover, judging from our experience, the learning time of the model is not long: despite having little experience with AOP implementation languages, we were able to produce AOP-based code in very little time. Finally, while AOP cannot solve problems that are unsolvable with traditional approaches, and while it does not always lead to less code, it can offer better and easier solutions for programs that are otherwise difficult to implement. Therefore, we can safely conclude that AOP has much to offer in component-based software development. We strongly believe that the integration of AOP with component-based software will be the target of important research efforts in the near future and can produce some very interesting results, and we await the introduction of AOP techniques in commercial component-based software products.
REFERENCES
[1] S. Brin and L. Page. The anatomy of a large-scale
hypertextual Web search engine. Computer Networks
and ISDN Systems, 30(1-7):107-117, 1998.
[2] G. Kiczales, J. Lamping, A. Menhdhekar, C. Maeda,
C. Lopes, J.-M. Loingtier, and J. Irwin.
Aspect-oriented programming. In Proceedings of the
European Conference on Object-Oriented Programming
(ECOOP), LNCS 1241, pages 220-242,
Springer-Verlag, 1997.
[3] C. Lopes. D: A Language Framework for Distributed
Programming. PhD thesis, College of Computer
Science, Northeastern University, November 1997.
[4] A. Mendhekar, G. Kiczales, and J. Lamping. RG: A
case-study for aspect-oriented programming. Technical
Report SPL97-009 P9710044, Xerox Palo Alto Research
Center, Palo Alto, CA, USA, February 1997.
[5] G. C. Murphy, R. J. Walker, and E. L. Baniassad.
Evaluating emerging software development
technologies: Lessons learned from assessing
aspect-oriented programming. Technical Report
TR-98-10, Department of Computer Science, University
of British Columbia, 1998.
[6] A. Navasa, M. A. Perez, J. Murillo, and J. Hernandez.
Aspect oriented software architecture: a structural
perspective. In Proceedings of the Aspect-Oriented
Software Development, 2002, The Netherlands.
[7] D. Shukla, S. Fell, and C. Sells. Aspect-oriented
programming enables better code encapsulation and
reuse. MSDN Magazine,
http://msdn.microsoft.com/msdnmag/, March 2002.
[8] R. J. Walker, E. L. A. Baniassad, and G. C. Murphy.
An initial assessment of aspect-oriented programming.
Technical Report TR-98-12, Department of Computer
Science, University of British Columbia, Sept. 1998.
| case study;evaluation;Web Crawler Implementation;AOP;component-based application;Aspect Oriented Programming;development process experiment metrics;Software development process experiment;programming paradigm comparison;OOP;Object Oriented Programming;programming paradigms;Aspect Oriented Programming application |
38 | Attack-Resilient Hierarchical Data Aggregation in Sensor Networks | In a large sensor network, in-network data aggregation, i.e., combining partial results at intermediate nodes during message routing, significantly reduces the amount of communication and hence the energy consumed. Recently several researchers have proposed robust aggregation frameworks, which combine multi-path routing schemes with duplicate-insensitive algorithms, to accurately compute aggregates (e.g., Sum, Count, Average) in spite of message losses resulting from node and transmission failures. However, these aggregation frameworks have been designed without security in mind. Given the lack of hardware support for tamper-resistance and the unattended nature of sensor nodes, sensor networks are highly vulnerable to node compromises. We show that even if a few compromised nodes contribute false sub-aggregate values, this results in large errors in the aggregate computed at the root of the hierarchy. We present modifications to the aggregation algorithms that guard against such attacks, i.e., we present algorithms for resilient hierarchical data aggregation despite the presence of compromised nodes in the aggregation hierarchy. We evaluate the performance and costs of our approach via both analysis and simulation . Our results show that our approach is scalable and efficient. | INTRODUCTION
In large sensor networks, computing aggregates in-network, i.e.,
combining partial results at intermediate nodes during message routing
, significantly reduces the amount of communication and hence
the energy consumed [11, 23]. An approach used by several data
acquisition systems for sensor networks is to construct a spanning
tree rooted at the querying node, and then perform in-network aggregation
along the tree. Partial results propagate level-by-level up
the tree, with each node awaiting messages from all its children
before sending a new partial result to its parent.
Tree-based aggregation approaches, however, are not resilient to
communication losses resulting from node and transmission failures
, which are relatively common in sensor networks [11, 22,
23]. Because each communication failure loses an entire subtree
of readings, a large fraction of sensor readings are potentially unaccounted
for at the querying node, leading to a significant error
in the query answer. To address this problem, researchers have
proposed the use of multi-path routing techniques for forwarding
sub-aggregates [11]. For aggregates such as Min and Max which
are duplicate-insensitive, this approach provides a fault-tolerant solution
. For duplicate-sensitive aggregates such as Count and Sum,
however, multi-path routing leads to double-counting of sensor readings
, resulting in an incorrect aggregate being computed.
Recently researchers [3, 12, 14] have presented clever algorithms
to solve the double-counting problem associated with multi-path
approaches. A robust and scalable aggregation framework called
Synopsis Diffusion
has been proposed for computing duplicate-sensitive
aggregates such as Count and Sum. There are two primary
elements of this approach - the use of a ring-based topology instead
of a tree-based topology for organizing the nodes in the aggregation
hierarchy, and the use of duplicate-insensitive algorithms for
computing aggregates based on Flajolet and Martin's algorithm for
counting distinct elements in a multi-set [5].
As presented, the Synopsis Diffusion aggregation framework does
not include any provisions for security. Although we can easily prevent
unauthorized nodes from launching attacks by augmenting the
aggregation framework with authentication and encryption protocols
[15, 24], compromised nodes present an entirely new set of security
challenges. The lack of tamper-resistance and the unattended
nature of many networks renders sensor nodes highly vulnerable to
compromise. Standard authentication mechanisms cannot prevent
a compromised node from launching attacks since all its keys are
also compromised. In this paper, we present novel mechanisms for
making the synopsis diffusion aggregation framework resilient to
attacks launched by compromised nodes.
We present counter-measures against attacks in which a compromised
node attempts to change the aggregate value computed at the
root of the hierarchy. In particular, we focus on an attack in which
a sensor node that is not a leaf node in the aggregation hierarchy
relays a false sub-aggregate value to its parents. We refer to this
attack as the falsified sub-aggregate attack.
We show that if the synopsis diffusion approach is used to compute
aggregates such as Count and Sum, an adversary can use the
falsified sub-aggregate attack to cause the answer computed at the
base station in response to a query to differ from the true value by
an arbitrary amount. Moreover, we show that this attack can be
launched with a high rate of success, even if only one or a small
number of nodes are compromised.
We present an approach in which the synopsis diffusion aggregation
framework is augmented with a set of countermeasures that
mitigate the effect of the falsified sub-aggregate attack. In our approach
, a subset of the total number of nodes in the network include
an authentication code (MAC) along with their response to a query.
These MACs are propagated to the base station along with the partial
results that are computed at each level in the hierarchy. By verifying
these MACs, the base station can estimate the accuracy of
the final aggregate value it computes, and can filter out the effect of
any false sub-aggregates contributed by compromised nodes. Thus,
our approach can be used in conjunction with synopsis diffusion to
compute basic aggregates such as Count and Sum despite the presence
of compromised nodes in the aggregation hierarchy.
The communication overhead of our approach depends upon the
number of contributing nodes which send a MAC to the base station
. We evaluate the performance and costs of our approach via
both analysis and simulation. We show that our approach is scalable
since the number of contributing nodes (and hence the average
communication overhead) do not increase with network size. To
further reduce the communication overhead, we describe a variation
of our basic approach that trades communication costs for
latency.
BACKGROUND: SYNOPSIS DIFFUSION FOR ROBUST AGGREGATION
In this section, we provide a brief overview of the synopsis diffusion
approach for robust aggregation [3, 14]. Figure 1 illustrates
how the synopsis diffusion approach uses a rings topology for aggregation
.
Figure 1: Synopsis Diffusion over a rings topology
In the query distribution phase, nodes form a set of rings around
the querying node q based on their distance in hops from q. During
the subsequent query aggregation period, starting in the outermost
ring, each node generates a local synopsis s = SG(v), where v is the sensor reading relevant to the query, and broadcasts it (SG() is the synopsis generation function). A node in ring R_i will receive broadcasts from all the nodes in its range in ring R_{i+1}. It will then combine its own local synopsis with the synopses received from its children using a synopsis fusion function SF(), and then broadcast the updated synopsis. Thus, the fused synopses propagate level-by-level until they reach the querying node, which first combines the received synopses with its local synopsis using SF() and then uses the synopsis evaluation function SE() to translate the final synopsis into the answer to the query.
The functions SG(), SF(), and SE() depend upon the target aggregation function, e.g., Count, Sum, etc. We now describe the duplicate-insensitive synopsis diffusion algorithms for the Count aggregate, i.e., the total number of nodes in the sensor network, and the Sum aggregate, i.e., the sum of the sensor readings of the nodes in the network. These algorithms are based on Flajolet and Martin's well-known probabilistic algorithm for counting the number of distinct elements in a multi-set [5].
2.1 COUNT
In this algorithm, each node generates a local synopsis which is a bit vector ls of length k > log n, where n is an upper bound on the number of nodes in the network. To generate its local synopsis, each node executes the function CT(X, k) given below, where X is the node's identifier and k is the length of ls in bits. CT() can be interpreted as a coin-tossing experiment (with a cryptographic hash function h(), modeled as a random oracle whose output is 0 or 1, simulating a fair coin toss), which returns the number of coin tosses until the first heads occurs, or k + 1 if k tosses have occurred with no heads occurring. In the local synopsis ls of node X, a single bit i is set to 1, where i is the output of CT(X, k). Thus ls is a bitmap of the form 0^(i-1)1..., where bit i is set with probability 2^(-i).
Algorithm 1 CT(X, k)
  i = 1;
  while i < k + 1 AND h(X, i) = 0 do
    i = i + 1;
  end while
  return i;
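A Java sketch of CT() and of the local synopsis generation for Count follows; using SHA-256 as the random-oracle hash h() and a string encoding of the node id are assumptions of this sketch, not choices prescribed by the algorithm.

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;

class CountSynopsis {

    // CT(X, k): count fair coin tosses until the first heads, capped at k + 1.
    // h(X, i) is simulated by the lowest bit of SHA-256(X || i).
    static int ct(String nodeId, int k) throws Exception {
        MessageDigest sha = MessageDigest.getInstance("SHA-256");
        int i = 1;
        while (i < k + 1 && coin(sha, nodeId, i) == 0) {
            i = i + 1;
        }
        return i;
    }

    private static int coin(MessageDigest sha, String nodeId, int i) {
        byte[] d = sha.digest((nodeId + "|" + i).getBytes(StandardCharsets.UTF_8));
        return d[0] & 1;                       // 0 or 1, a simulated fair coin toss
    }

    // Local synopsis for Count: a k-bit map with a single 1 at position ct(X, k).
    static boolean[] localSynopsis(String nodeId, int k) throws Exception {
        boolean[] ls = new boolean[k + 1];     // positions 1..k (index 0 unused)
        int i = ct(nodeId, k);
        if (i <= k) {
            ls[i] = true;
        }
        return ls;
    }
}
```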
The synopsis fusion function SF() is simply the bitwise Boolean OR of the synopses being combined. Each node fuses its local synopsis ls with the synopses it receives from its children by computing the bit-wise OR of all the synopses. Let S denote the final synopsis computed by the querying node by combining all the synopses received from its children and its local synopsis. We observe that S will be a bitmap of length k of the form 1^(r-1)0... . The querying node can estimate Count from S via the synopsis evaluation function SE(): if r is the lowest-order bit in S that is 0, the count of nodes in the network is 2^(r-1)/0.7735. The synopsis evaluation function SE() is based on Property 2 below. Intuitively, the number of sensor nodes is proportional to 2^(r-1) since no node has set the rth bit while computing CT(X, k).
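Continuing the sketch above, SF() and SE() can be written as follows (the boolean-array representation of a synopsis is an assumption of these sketches):

```java
class CountEvaluation {

    // SF(): bitwise OR of two synopses of equal length.
    static boolean[] fuse(boolean[] a, boolean[] b) {
        boolean[] out = new boolean[a.length];
        for (int i = 1; i < a.length; i++) {
            out[i] = a[i] || b[i];
        }
        return out;
    }

    // SE(): find r, the lowest-order 0 bit, and return 2^(r-1) / 0.7735.
    static double evaluate(boolean[] s) {
        int r = s.length;                      // default if no 0 bit is found
        for (int i = 1; i < s.length; i++) {
            if (!s[i]) { r = i; break; }
        }
        return Math.pow(2, r - 1) / 0.7735;
    }
}
```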
We now present a few important properties of the final synopsis S
computed at the querying node that have been derived in [5, 3], and
that we will find useful in the rest of this paper. Let S[i], 1 ≤ i ≤ k, denote the ith bit of S, where bits are numbered starting at the left.
Property 1. For i < log_2 n - 2·log_2 log_2 n, S[i] = 1 with probability 1. For i ≥ (3/2)·log_2 n, S[i] = 0 with probability 1.
This result implies that for a network of n nodes, we expect that S has an initial prefix of all ones and a suffix of all zeros, while only the bits around S[log_2 n] exhibit much variation. This provides an estimate of the number of bits, k, required for a node's local synopsis. In practice, k = log_2 n + 4 bits are sufficient to represent S with high probability [5]. This result also indicates that the length of the prefix of all ones in S can be used to estimate n. Let r = min{i | S[i] = 0}, i.e., r is the location of the leftmost zero in S. Then R = r - 1 is a random variable representing the length of the prefix of all ones in the sketch. The following results hold for R.
Property 2. The expected value of R, E(R) ≈ log_2(φ·n), where the constant φ is approximately 0.7735.
This result implies that R can be used as an unbiased estimator of log_2(φ·n), and it is the basis for the synopsis evaluation function SE(), which estimates n as 2^R / φ.
Property 3. The variance of R, denoted σ²(R_n), satisfies σ²(R_n) = σ²_R + Q(log_2 n) + o(1), where the constant σ_R is approximately 1.1213 and Q(x) is a periodic function with mean value 0 and period 1.
This property implies that the standard deviation of R is approximately 1.1213, i.e., the estimates of n derived from R will often be off by a factor of two or more in either direction. To reduce the standard deviation of R, Flajolet et al. [5] proposed an algorithm named PCSA, where m synopses are computed in parallel and the new estimator (R_avg) is the average of the individual R's of these synopses. For PCSA, the standard error in the estimate of n, i.e., σ_n / n, is equal to 0.78 / sqrt(m) [5].
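For reference, a minimal sketch of the PCSA combination step, assuming the per-synopsis prefix lengths R_1, ..., R_m have already been extracted; the estimator (m/φ)·2^(R_avg) is the standard PCSA form from [5]:

```java
class PcsaEstimate {

    // Combine the per-synopsis prefix lengths into one estimate of n.
    // Standard PCSA estimator: n ≈ (m / φ) * 2^(average of the R_i), φ ≈ 0.7735.
    static double estimateCount(int[] r) {
        double avg = 0;
        for (int ri : r) {
            avg += ri;
        }
        avg /= r.length;
        return (r.length / 0.7735) * Math.pow(2, avg);
    }
}
```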
Property 4. In a network of n nodes, the expected number of nodes that will have the ith bit of their local synopsis set, ls[i] = 1, is n / 2^i. This result implies that the expected number of nodes that contribute a 1 to the ith bit of S or to the bits to the right of the ith bit in S (i.e., bits j, where i ≤ j ≤ k) is n / 2^(i-1).
2.2 SUM
Considine et al. [3] extended the Count algorithm described above
for computing the Sum aggregate. The synopsis generation function SG() for Sum is a modification of that for Count, while the fusion function SF() and the evaluation function SE() for Sum are identical to those for Count.
To generate its local synopsis for a sensor reading v (each sensor reading is assumed to be an integer), a node X invokes the function CT() v times and ORs the results. As a result, the local synopsis of a node is a bitmap of length k = log_2(u_s) + 4, where u_s is an upper bound on the value of the Sum aggregate. Unlike the local synopsis of a node for Count, more than one bit in the local synopsis of a node for Sum will be equal to 1. Count can be considered as a special case of Sum where each node's sensor reading is equal to one unit.
Considine et al. [3] proposed an optimized version of SG() for Sum to make it suitable for a low-end sensor node, even if the sensed value v is high. Moreover, they showed that Properties 1-4 described above for Count also hold for Sum (with appropriate modifications). Similarly, as in the case of Count, the PCSA algorithm can be used to reduce the standard deviation of the estimate for Sum.
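Reusing the ct() helper from the Count sketch above, the unoptimized SG() for Sum can be sketched as follows; appending a per-draw index to the hash input so that the v experiments are independent is an assumption of this sketch:

```java
class SumSynopsis {

    // Naive SG() for Sum: run v independent coin-toss experiments and OR
    // the resulting one-bit synopses (v is the node's integer sensor reading).
    static boolean[] localSynopsis(String nodeId, int v, int k) throws Exception {
        boolean[] ls = new boolean[k + 1];     // positions 1..k
        for (int draw = 1; draw <= v; draw++) {
            // Each draw uses a distinct hash input so the experiments are independent.
            int i = CountSynopsis.ct(nodeId + "#" + draw, k);
            if (i <= k) {
                ls[i] = true;
            }
        }
        return ls;
    }
}
```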
ATTACKS ON SYNOPSIS DIFFUSION
The Synopsis Diffusion aggregation framework does not include
any provisions for security; as a result, it is vulnerable to many attacks
that can be launched by unauthorized or compromised nodes.
To prevent unauthorized nodes from eavesdropping on or participating
in communications between legitimate nodes, we can augment
the aggregation framework with any one of several recently
proposed authentication and encryption protocols [15, 24]. However
, compromised nodes pose an entirely new set of security challenges
.
Sensor nodes are often deployed in unattended environments, so
they are vulnerable to physical tampering. Current sensor nodes
lack hardware support for tamper-resistance. Consequently, it is
relatively easy for an adversary to compromise a node without being
detected. The adversary can obtain confidential information
(e.g., cryptographic keys) from the compromised sensor and reprogram it with malicious code.
A compromised node can be used to launch multiple attacks
against the sensor application. These attacks include jamming at
physical or link layer, other denial of service attacks like flooding,
route disruption, message dropping, message modification, false
data injection and many others. Standard authentication mechanisms
cannot prevent these insider attacks since the adversary knows
all the keying material possessed by the compromised nodes.
In this paper, we focus on defending against an important subclass
of these insider attacks which can potentially corrupt the final
result of the aggregation query. Below we describe these attacks in
the context of the Count and Sum aggregates.
A compromised node M can corrupt the aggregate value computed
at the root (i.e., the sink) of the hierarchical aggregation
framework in three ways. First, M can simply drop aggregation
messages that it is supposed to relay towards the sink. If M is located
at a relatively high position in the aggregation hierarchy, this
has the effect of omitting a large fraction of the set of sensor readings
being aggregated. Second, M can falsify its own sensor reading
with the goal of influencing the aggregate value. Third, M can
falsify the sub-aggregate which M is supposed to compute based
on the messages received from M's child nodes.
The effect of the first attack in which a node intentionally drops
aggregation messages is no different from the effect of transmission
and node failures, which are common in sensor networks [7].
The synopsis diffusion approach employs multi-path routing for addressing
these failures, and thus it also addresses message losses
due to compromised nodes [3, 12, 14]. We refer to the second attack
in which a sensor intentionally falsifies its own reading as the
falsified local value attack
. This attack is similar to the behavior of
nodes with faulty sensors and can be addressed by well-studied approaches
for fault tolerance such as majority voting and reputation-based
frameworks [10, 6]. The third attack, however, in which a
node falsifies the aggregate value it is relaying to its parents in the
hierarchy is much more difficult to address, and is the main focus
of this paper. We refer to this attack as the falsified sub-aggregate
attack
.
The Falsified Sub-Aggregate Attack
Since the sink estimates the
aggregate based on the lowest-order bit r that is 0 in the final fused
synopsis, a compromised node would need to falsify its own fused
synopsis such that it would affect the value of r. It can accomplish
this quite easily by simply setting to 1 one or more bits in positions j, where r ≤ j ≤ k, in its own fused synopsis, which it broadcasts to its parents. Note that the compromised node does not need to know the true value of r; it can simply set some higher-order bits to 1 in the hope that this will affect the value of r computed by the sink. Since the synopsis fusion function is a bitwise Boolean OR, the resulting synopsis computed at the sink will reflect the contributions of the compromised node.
Let r' be the lowest-order bit that is 0 in the corrupted synopsis, whereas r is the lowest-order bit that is 0 in the correct synopsis. Then the sink's estimate of the aggregate will be larger than the correct estimate by a factor of 2^(r'-r). It is easy to see that, with the above technique, the compromised node can inject a large amount of error into the final estimate of the sink.
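A toy illustration of the attack, reusing the boolean-array representation and the SE() logic of the earlier sketches: ORing 1s into positions at or above r moves the lowest 0 bit from r to some r' and inflates the evaluated Count by a factor of 2^(r'-r).

```java
class FalsifiedSubAggregateDemo {

    public static void main(String[] args) {
        // Honest fused synopsis: bits 1..4 are 1, so r = 5 and Count ≈ 2^4 / 0.7735.
        boolean[] honest = new boolean[17];
        for (int i = 1; i <= 4; i++) honest[i] = true;

        // A compromised node blindly ORs 1s into some higher-order positions.
        boolean[] corrupted = honest.clone();
        for (int j = 5; j <= 9; j++) corrupted[j] = true;   // now r' = 10

        System.out.println("honest estimate:    " + evaluate(honest));
        System.out.println("corrupted estimate: " + evaluate(corrupted)); // 2^5 = 32x larger
    }

    // Same SE() as before: 2^(r-1) / 0.7735 with r the lowest-order 0 bit.
    static double evaluate(boolean[] s) {
        int r = s.length;
        for (int i = 1; i < s.length; i++) {
            if (!s[i]) { r = i; break; }
        }
        return Math.pow(2, r - 1) / 0.7735;
    }
}
```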
We also observe that even a single node can launch this attack
with a high rate of success because the use of multi-path routing
in the synopsis diffusion approach makes it highly likely that the
falsified synopsis will be propagated to the base station. If p is the packet loss rate and each node has γ parents in the aggregation hierarchy, then the probability of success for this attack is (1 - p^γ)^h, where the compromised node is h hops away from the sink. As an example, if p = 0.2, γ = 3, and h = 5, then the probability that the attack will succeed is 96%.
On the other hand, it is very hard to launch an attack which results
in the aggregate estimated at the sink being lower than the true
estimate. This is because setting a bit in the falsified synopsis to 0
has no effect if there is another node X that contributes a 1 to the
same position in the fused synopsis. To make this attack a success
the attacker has to compromise all the possible paths from node X
to the sink so that X 's 1 cannot reach the sink, which is hard to
achieve. If there is more than one node which contributes to the
same bit, then it is even harder. As an example, in the Count algorithm, half of the nodes are likely to contribute to the leftmost bit of the synopsis, one-fourth of the nodes contribute to the second bit, and so on. There are bits in the synopsis to which only one or two nodes contribute, but it is very hard to predict in advance which nodes will contribute to these particular bits if the sink broadcasts, along with the query request, a random seed to be used with the hash function in the synopsis generation phase. Hence, we can safely assume that
this attack is extremely difficult to launch. In the rest of this paper,
we restrict our discussion to the previous attack where the goal of
the attacker is only to increase the estimate.
PROBLEM DESCRIPTION & ASSUMPTIONS
4.1 Problem Description
In a sensor network where some fraction of the nodes are potentially
compromised, there are three sources that contribute to the
error in the sink's estimate of the aggregate being computed: (i)
error due to packet losses, (ii) error due to the approximation algorithm
used, e.g., Flajolet and Martin's probabilistic algorithm [5],
and (iii) error injected by compromised nodes.
The first two types of error are already addressed by the synopsis
diffusion aggregation framework. Our paper is complementary
to this previous work; our objective is to filter out the third
type of error. In particular, we aim to make the synopsis diffusion
approach resilient to the falsified local value attack and the falsified
sub-aggregate attack
, i.e., to enable the sink to get the "true"
estimate of the aggregate being computed despite the presence of
compromised nodes in the aggregation hierarchy. By "true" estimate
we mean the estimate of the aggregate which the sink would
compute if there were no compromised nodes.
4.2 Assumptions
We now discuss our assumptions with respect to the sensor network
and the adversary.
System Assumptions
We assume that the base station is located at
the center of the sensor network, and nodes are deployed around
the base station. However, our approach for attack-resilient aggregation
does not depend upon this assumption. We assume that
sensor nodes are similar to the current generation of sensor nodes,
e.g., Mica2 motes [13], in their computational and communication
capabilities and power resources, while the sink is a laptop class
device supplied with long-lasting power.
We assume that the sink has an estimate of the upper bound on
the value of the Count aggregate. If the sink does not have any
further knowledge, the upper bound of Count can be set to the total
number of nodes deployed. We also assume that there exists an
upper bound on the value of a sensor reading. The upper bound of
Sum can be conservatively set to be equal to product of the upper
bound of Count and the upper bound of a sensor reading. Previous
works on the synopsis diffusion approach [3, 14] have made the
same assumptions regarding the upper bounds for Count and Sum;
these bounds provide an estimate of the length of the synopsis.
Security Assumptions
We assume that the sink cannot be compromised
and that it uses a protocol such as Tesla [15] to authenticate its
broadcast messages. We also assume that each node shares a pairwise
key with the sink, which is used to authenticate the messages
it sends to the sink.
We assume that the adversary can compromise sensor nodes without
being detected. If a node is compromised, all the information it
holds will also be compromised. We use a Byzantine fault model,
where the adversary can inject malicious messages into the network
through the compromised nodes. We conservatively assume that all
compromised nodes can collude, or are under the control of a single
attacker.
Notations
The following notations are used in the description of
our attack-resilient aggregation algorithms.
BS refers to the base station, i.e., the sink. X is the identifier of a sensor node, whereas M represents a compromised node.
K_X is the pairwise key X shares with the sink.
m1|m2 denotes the concatenation of two message fields m1 and m2.
MAC(K, m) is the message authentication code (MAC) of the message m generated using the key K.
X -> Y : m denotes a one-hop delivery of message m from X to Y, X -> * : m denotes that X broadcasts message m to all of its one-hop neighbors, and X => * : m denotes that X broadcasts message m to all nodes in the network.
ATTACK-RESILIENT AGGREGATION: THE BASIC APPROACH
In this section, we present an attack-resilient approach for computing
the Count and Sum aggregates. In this approach we assume
that the BS has an estimate of the lower bound and the upper bound
of the aggregates. We will see that this approach is scalable only if
the ratio of the upper bound to the lower bound is small. Despite
this limitation, we discuss this approach in detail because it provides
the background and motivation for our extended approach,
which is discussed in Section 6. We first present the main idea underlying
the basic approach and then present the detailed protocol
for securing Count and Sum.
5.1 The Main Idea
In our approach, nodes execute the synopsis diffusion aggregation
algorithm as specified in [3, 14]. However, a subset of the
nodes include along with their synopses a message authentication
code (MAC) that can be used by the sink to verify the validity of
their contribution to the aggregate function.
The key observations behind the design of our approach are the following. First, in order to derive the correct estimate from the final synopsis (say S) computed at the sink, we need only to figure out the correct lowest-order bit (say r) in S that is 0. Second, the number of nodes contributing a 1 to bit j decreases exponentially as we move from the lowest-order bit (j = 1) to higher-order bits of the synopsis. For example, in the case of Count, on average, half the nodes in the network will contribute to the leftmost bit of the synopsis (for convenience, henceforth we say that a node "contributes" to a position j in the synopsis S if bit j in its local synopsis is 1), one-fourth of the nodes contribute to the second bit of the synopsis, and so on.
Thus, we expect that only a small number of nodes will contribute
to the bits around the expected value of r. Each such node includes
along with its response to an aggregation query a MAC computed
using a pairwise key shared exclusively with the sink. We demonstrate that these MACs enable the sink to filter out the contributions of the falsified sub-aggregates injected by the compromised nodes into the final aggregate.
For our scheme to work, two issues need to be addressed. First,
since the value of r is not known before the execution of the
query, we need to specify a criterion whereby a node can determine
if it needs to include a MAC along with its synopsis. Second, this
criterion should be designed so that the number of such nodes who
include a MAC is minimized.
In our basic approach, we assume that the BS has an estimate
of the lower bound and the upper bound of Count, which are denoted by l_c and u_c respectively. Based upon these bounds, the BS knows that bit r will lie between a and b, which are the bit positions in the synopsis S corresponding to l_c and u_c respectively, i.e., a = log_2(φ·l_c) and b = log_2(φ·u_c) (by Property 2 in Section 2). Thus, there is no need for the BS to verify the bits to the left of a; only nodes contributing to bits in the range a to b need to prove to the BS that their contribution to the synopsis S is valid. We refer to the collection of bits in the range a to b in synopsis S as the synopsis-edge, as shown in Figure 2. It is easy to see that the length of the synopsis-edge is (log_2(u_c / l_c) + 1) bits. If we denote the number of nodes contributing to the synopsis-edge by μ, then, by Property 4 in Section 2, μ ≤ (u_c / 2^a + ... + u_c / 2^b) = (1/φ)(2·u_c / l_c - 1).
The upper bound for Count (u_c) can be set to the total number of nodes deployed. The lower bound for Count (l_c) can be guessed depending on the energy reserve of the sensor nodes and the rate of energy expenditure. As an example, if 2000 nodes are deployed, then u_c = 2000 and l_c = 1000 may be a safe estimate at the time of the Count query's execution. For this example, u_c / l_c = 2, the length of the synopsis-edge is 2 bits, and the expected number of nodes contributing to the synopsis-edge is less than 3.87.
Figure 2: Securing the Count synopsis. To securely compute the Count synopsis, the base station needs to verify only the bits in the synopsis-edge, i.e., the bits between the positions corresponding to the lower bound and the upper bound of Count.
For the ease of presentation, we present the basic approach assuming
that only one synopsis is computed. We can easily extend
this approach to compute m synopses in parallel as in algorithm
PCSA.
5.2 Securing Count
To compute the Count aggregate securely, we extend the original
Count algorithm discussed in Section 2 as follows. For the sake
of completeness, we first briefly describe the query dissemination
phase, and then we present the aggregation procedure in detail.
In the query dissemination phase, the BS broadcasts the name
of the aggregation function, a random number (Seed) and the bit
positions of the start and the end of the synopsis-edge, which are
specified by a and b respectively. Each node will use the random
number, Seed, as an input to the hash function in the synopsis generation
procedure. In more concrete terms, the query packet that the BS broadcasts is as follows:
BS => * : F_agg, Seed, a, b, s, t, h,
where F_agg is the name of the aggregation function (i.e., 'Count'), s denotes the time when the aggregation phase will start, and t represents the duration of one round, i.e., t = T/h, where h is the total number of hops and T is the duration of the aggregation phase (also called the epoch). Note that, as in the original Count algorithm discussed in Section 2, the epoch is sub-divided into a series of rounds, one for each hop, starting from the farthest hop. Tesla [15] can be used for authenticating the broadcast packet.
In the aggregation phase, each node executes the synopsis generation function SG() and the synopsis fusion function SF() for Count as discussed in Section 2. In addition, each node checks whether it contributes to the synopsis-edge, and if so, it generates a MAC and forwards the MAC along with its fused synopsis. Specifically, if node X contributes to bit i in the synopsis-edge, it generates a MAC, M = MAC(K_X, m), over the message m whose format is [X|i|Seed], where Seed is the random number which was disseminated in the query distribution phase. Each node X forwards to its parents its fused synopsis along with the set of MACs (ℳ) it received from its child nodes and its own MAC if it generated one. The format of the message a node X forwards to its parents is as follows:
X -> * : S_l | ℳ,
where S_l is the fused synopsis computed by X. If the message does not fit into one packet, node X breaks it into several packets and forwards them. In Appendix A, we formally describe the algorithm (SecureCount) executed by each node in response to an aggregation query.
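A sketch of the contributing node's MAC computation, assuming HMAC-SHA256 as the MAC and a simple string encoding of [X|i|Seed] (both are assumptions of this sketch; the scheme only requires a MAC keyed with K_X):

```java
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.nio.charset.StandardCharsets;

class ContributorMac {

    // MAC(K_X, [X|i|Seed]) for a node that contributes to bit i of the synopsis-edge.
    static byte[] macForBit(byte[] pairwiseKey, String nodeId, int bit, long seed)
            throws Exception {
        Mac hmac = Mac.getInstance("HmacSHA256");
        hmac.init(new SecretKeySpec(pairwiseKey, "HmacSHA256"));
        String message = nodeId + "|" + bit + "|" + seed;
        return hmac.doFinal(message.getBytes(StandardCharsets.UTF_8));
    }
}
```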
After the BS receives the MACs, it checks their validity. In particular, for each message and MAC pair [m | MAC(K_X, m)], where m is [X|i|Seed], the BS executes the synopsis generation function SG() of X and verifies whether node X really contributes to bit i in the synopsis-edge, and then checks whether the attached MAC is valid. If any of these tests fail, the corresponding MAC is discarded.
After this verification step, the BS checks whether it has received
at least one valid MAC for each bit in the synopsis-edge. The bits in
the synopsis-edge for which the BS has not received a valid MAC
are reset to 0. The bits at positions to the left of the synopsis-edge
are set to 1. Finally, the BS computes the Count using the synopsis evaluation function SE().
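The base station's verification and filtering step can be sketched as follows; the MacReport record, the KeyLookup interface, and the reuse of the CountSynopsis and ContributorMac helpers from the earlier sketches are illustrative assumptions, not the paper's exact data structures.

```java
import java.security.MessageDigest;
import java.util.List;

class BaseStationFilter {

    // One received (message, MAC) pair: node id X, the claimed bit i, and the MAC bytes.
    record MacReport(String nodeId, int bit, byte[] mac) {}

    interface KeyLookup { byte[] keyOf(String nodeId); }

    // Rebuild the trusted synopsis from verified MACs only: bits left of the
    // synopsis-edge [a..b] are set to 1, unverified edge bits stay 0.
    static boolean[] filteredSynopsis(int k, int a, int b, long seed,
                                      List<MacReport> reports,
                                      KeyLookup keys) throws Exception {
        boolean[] s = new boolean[k + 1];
        for (int i = 1; i < a; i++) s[i] = true;
        for (MacReport rep : reports) {
            if (rep.bit() < a || rep.bit() > b) continue;   // outside the synopsis-edge
            // Re-run SG() for X: does X really contribute to the claimed bit?
            // (A real deployment would also fold the query Seed into the hash input.)
            if (CountSynopsis.ct(rep.nodeId(), k) != rep.bit()) continue;
            // Verify the MAC over [X|i|Seed] with X's pairwise key.
            byte[] expected = ContributorMac.macForBit(keys.keyOf(rep.nodeId()),
                                                       rep.nodeId(), rep.bit(), seed);
            if (MessageDigest.isEqual(expected, rep.mac())) {
                s[rep.bit()] = true;                        // this edge bit is trusted
            }
        }
        return s;
    }
}
```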
Security Analysis
The security of our approach follows from two facts. First, the sink can independently verify the output of SG() for a particular node X. This is because the output of SG() depends only upon the node id X and the random seed included in the query message. Second, each bit that is set to 1 in the synopsis-edge has an associated MAC that can be verified by the sink. This MAC is computed using a pairwise key that is known only to the contributing node and the sink; thus the MAC cannot be fabricated by an attacker (as long as it is reasonably long).
Although a compromised node can falsely set some bits in its
fused synopsis and forward false MACs corresponding to those
bits, the sink will be able to discard any false MACs. This implies
that the attacker cannot falsely increase the Count. On the other
hand, the attacker may attempt to decrease the Count by dropping
a genuine MAC (or by corrupting a genuine MAC) sent by a contributing
node, but the genuine MAC is likely to reach BS via an
alternate path. If BS receives at least one valid MAC for each 1 bit
in the synopsis-edge, then BS obtains the true estimate of Count as
discussed below.
As discussed in Section 2, the synopsis diffusion approach uses
a multi-path routing scheme to reduce the aggregation error due to
packet losses resulting from node and link failures. The effect of
packets being dropped by compromised nodes is simply to increase
the overall packet loss rate, and this can be countered by an appropriate choice of γ, the number of parents of a node in the synopsis diffusion ring-based aggregation hierarchy. Specifically, if each node has more than γ parents, the total number of rings in the rings topology is h, and the probability of a node being compromised is p, then, on average, a contributing node's MAC will reach the BS with probability q, where
q ≥ (1/h) · Σ_{j=1}^{h} (1 - p^γ)^j.
Here we have assumed that the contributing nodes are uniformly distributed over the rings in the hierarchy. As an example, if p = 0.05, γ = 3, and h = 10, then q is greater than 0.999, i.e., the impact of the compromised nodes on the communication error is negligible.
We also note that while deriving q we assumed that there is only
one node which contributes to a particular bit in the synopsis. In
reality, the expected number of nodes contributing to a bit increases
exponentially as we move from the Rth bit, where R is the
length of the prefix of all ones in the synopsis S, to the lower-order
bits, thereby increasing the probability that at least one MAC corresponding
to a bit position reaches the sink.
Computation and Communication Overhead
Each contributing node computes one MAC. The expected number of contributing nodes is μ = (1/φ)(2·u_c / l_c - 1), which is independent of network size. Thus, only a subset of nodes will incur any computational overhead. With respect to communication overhead, the maximum number of MACs that any node will need to forward is μ. Thus this approach is scalable, and can be used in large-scale sensor networks as long as the ratio u_c / l_c is reasonably small.
5.3 Securing SUM
We can extend the approach used for making the Count aggregate
resilient to compromised nodes to the Sum aggregate. To derive
the synopsis-edge for Sum we need to assume upper and lower
bounds for the value of a sensor reading in addition to the upper
and lower bounds for the number of sensor nodes.
A node X sends to the BS a MAC, M = MAC(K_X, m), only if it contributes to the synopsis-edge, as in SecureCount. The format of the message m sent by a node is [X|A|Seed|v], where X is the node id, Seed is the random seed included in the broadcast query, A represents the collection of bits in the synopsis to which X contributes, and v is X's sensed value.
Security Analysis
In the case of the Sum aggregate, the attacker could falsely set some bits in its synopsis not only by using a false node id but also by using a false sensor reading. Although MACs from the contributing nodes enable the BS to verify the node ids, the BS cannot verify the sensed value of a node. A compromised node can claim to have a large sensed value close to the upper bound u_v to increase its chance of being able to contribute to the synopsis-edge.
The following theorem (whose proof can be found in Appendix
B) shows that this attack's impact is limited.
Theorem 1. Let τ be the number of compromised nodes in a network of n nodes. Let u_v and a_v denote the upper bound and the average value of the sensor reading, respectively. Let S be the final synopsis computed at the sink and let R be the length of the prefix of all ones in S. Let s denote the value of the Sum aggregate. If each compromised node claims that its sensed value is equal to the upper bound u_v, and if (τ·u_v) < s, then the probability Pr[S[R + 1] = 1] is proportional to the product of the fraction of compromised nodes in the network, τ/n, and the ratio u_v / a_v.
Note that if the compromised node contributes to the (R + 1)th bit, the BS's estimate of Sum doubles. Thus, the theorem shows that, for a large network, as long as the fraction of compromised nodes grows sub-linearly, the probability of this attack succeeding is small. For smaller networks, the probability of this attack succeeding depends upon the ratio τ/n and on the ratio u_v / a_v. As an example, if n = 1000, τ = 25, and u_v / a_v = 4, then Pr[S[R + 1] = 1] = 0.064.
The impact of the attack is further reduced if we employ the PCSA algorithm, in which m independent synopses are computed and the final estimator R_avg is calculated by averaging these m estimators. As an example, to add an error of 40% to the final Sum, the attacker needs to set the (R + 1)th bit in at least m/2 synopses. In the example above, where Pr[S[R + 1] = 1] is 0.064, this probability is close to zero when m is 20. This example illustrates that this attack's impact is limited when (τ·u_v)/(n·a_v) is small.
On the other hand, when (τ·u_v)/(n·a_v) is large, we cannot neglect the possibility that the attacker will succeed in injecting a significant error into the Sum computed at the sink. To address this scenario, we can use a scheme in which a node that contributes to the synopsis-edge needs an endorsement from at least λ neighbors attesting to the validity of its sensed value. We assume that the sensed values of one-hop neighbors are correlated, so that one node can verify the reading of its neighbors. We also assume that there are fewer than λ compromised nodes among the one-hop neighbors of any node. Each contributing node X collects at least λ endorsements from its one-hop neighbors in the form of a MAC computed over the sensor reading using the pairwise key that the neighbor shares with the sink. Then X computes an XMAC [1] by XORing the collected MACs and X's own MAC, and sends the XMAC to the BS. (Zhu et al. [25] use an identical scheme to reduce the total size of the MACs.) We also assume that the BS has the knowledge to verify whether a set of nodes are one-hop neighbors, which prevents the collusion attack. (We refer to this scheme as the XMAC-based scheme.)
Computation and Communication Overhead
The number of contributing nodes μ is less than (1/φ)(2·u_s / l_s - 1), where u_s and l_s are the upper bound and lower bound of Sum. As in the case of Count, μ is independent of the network size and thus this approach is scalable. With respect to worst-case communication overhead, a node will need to forward at most μ MACs.
THE EXTENDED APPROACH: TRADING LATENCY FOR COMMUNICATION OVERHEAD
When the ratio of the upper bound of the aggregate to the lower bound is high, the basic approach described in the previous section is not scalable, because the worst-case communication cost incurred by a node is proportional to this ratio. In this section, we describe
an approach which has lower communication costs in comparison
to the basic approach at the expense of greater latency.
6.1 Protocol Overview
Our extended approach is based on the observation that the expected number of nodes that contribute to bits i, where R < i ≤ k, in the synopsis (k is the length of the synopsis) is very small. In fact, using Property 2 and Property 4 from Section 2, we can show that the expected number of nodes contributing to the Rth and higher-order bits of S is less than 2/φ ≈ 2.58.
We use a sliding-window based approach in which the aggregation phase is divided into multiple epochs (in contrast, the original synopsis diffusion algorithm [3, 14] takes one epoch to complete). Starting at the rightmost
bit k, we proceed from right to left within the synopsis S using
a bit window of w bits. In each epoch, only the nodes that contribute
a 1 to the bits in S corresponding to the current position of
the window, send MACs to the sink that can be used to verify their
contribution to S. In other words, in the first epoch, only nodes that
contribute a 1 to bits k to k
-w+1 respond to the query. In epoch
two, nodes that contribute to bits between k
- w and k - 2w + 1
respond, and so on.
The algorithm terminates when the querying node has determined
that the remaining bits of S to the left of the current window are
likely to be 1 with high probability. The design of this termination
criterion is the main challenge in using this approach; we discuss
the termination criterion and its analytical underpinnings in detail.
Once the querying node determines that the algorithm can terminate
it broadcasts a STOP message to the network to announce the
end of the iterative aggregation procedure.
6.2 Protocol Operation
The operation of the protocol is similar to that of the protocol
used in the basic approach with some minor differences as follows.
The query message broadcast to the network includes the window
size w in addition to the other parameters. As in the original synopsis
diffusion algorithm [3, 14], we assume that time is synchronized between the BS and the sensor nodes. Each node computes
the start and end time of the current epoch, based on the window w.
Further, although the MACs generated by nodes are sent to the
BS over the course of multiple epochs, the fused synopsis computed
by each node is forwarded to its parent in the first epoch.
Thus, the BS can compute the aggregate at the end of the first epoch
itself, although this aggregate may be erroneous in the presence of
compromised nodes.
6.3 Termination Criterion
The goal of our algorithm is to find r, the lowest-order bit in S
that is 0. Further, recall that S is of the form 1^(r-1)0..., where the bits at positions i > r are highly likely to be 0. Thus, the intuition behind our termination criterion is simple: as we examine the bits of S moving from right to left, if we observe two consecutive 1's, i.e., if we observe the string "110", it is highly likely that the 0 is at the rth position. In fact, we can show analytically that the probability of this event is greater than 90%, which follows from the following theorem.
Theorem 2. Let F denote the event that the string "0 s_l 11", where s_l represents any string of length l, l ≥ 0, appears in a synopsis S. The probability of the event F is less than 10%. (The proof is given in the appendix.)
Further, we can take advantage of the fact that most applications
will use the PCSA algorithm to reduce the approximation error in
estimating R = r - 1. Recall that in the PCSA algorithm m synopses are computed in parallel. Let R_i denote the value of R estimated from the ith synopsis. Then, according to the PCSA algorithm, the expected value of the random variable R is estimated by averaging the individual values of R over the synopses, i.e., E[R] = R_avg = (1/m) · Σ_{i=1}^{m} R_i.
Although there is likely to be some variation among the R_i, we know from Property 3 in Section 2 that the variation is expected to correspond to two bit positions both to the left and the right of the true value of R. This suggests that there is a high degree of correlation between the R_i for different synopses. Thus, in our window-based approach, we can increase our confidence that we have found the correct position of R if we observe the bit pattern "11" in multiple synopses among the m that are being computed in parallel. Based on this intuition, our termination criterion consists of checking whether we have observed the string "11" in at least m' out of the m synopses.
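A sketch of this termination test, assuming the m synopses are held as boolean arrays and the current window covers bit positions lo and lo + 1 (window width w = 2):

```java
class TerminationTest {

    // Returns true if at least mPrime of the m synopses have both window bits set,
    // i.e., the pattern "11" has been observed often enough at this window position.
    static boolean shouldStop(boolean[][] synopses, int lo, int mPrime) {
        int hits = 0;
        for (boolean[] s : synopses) {
            if (s[lo] && s[lo + 1]) {
                hits++;
            }
        }
        return hits >= mPrime;
    }
}
```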
Figure 3: Each synopsis is divided into several windows of width w = 2 bits. After the termination criterion is satisfied, the base station broadcasts a STOP message and the aggregation phase stops after the next epoch. In each epoch, nodes which contribute to the corresponding window send a MAC to the base station. The MACs which correspond to the crossed bits are never sent.
Our goal in selecting the threshold m' is to reduce the likelihood of both a false positive, which means that the algorithm terminated too early, and a false negative, which means that the algorithm terminated too late, i.e., after the sliding window had already crossed the true position of R. A false positive results in an over-estimate of R, whereas a false negative results in additional communication overhead. We now show that it is possible to find a suitable value for m' such that the probability of a false positive and the probability of a false negative are both low.
Theorem 3. Let G_i denote the event that both bit i and bit (i + 1) in a synopsis S are 1. Let ρ denote the expected value of the estimator R_avg. Then, Pr[G_ρ] = 0.3454, Pr[G_{ρ+1}] = 0.1315, and Pr[G_{ρ+2}] = 0.0412.
Because of space limitations, the proof of this theorem can be
found in the appendix.
If the sliding window in our algorithm is two bits wide, i.e., w = 2, then from the definition of a false positive (FP), the probability Pr[FP] is the probability that the event G_{ρ+2} occurs in m' or more synopses. Similarly, the probability of a false negative, Pr[FN], is the probability that the event G_ρ occurs in fewer than m' synopses. For m = 20 (which is the typical value used in previous work [3, 14]), we find that the best value of m' is 4, in which case Pr[FP] = 0.0082 and Pr[FN] = 0.0484. The same approach illustrated here can be used to derive the appropriate threshold m' for other window sizes.
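The false-positive and false-negative figures above can be reproduced with a small binomial-tail computation; the sketch below treats the m = 20 synopses as independent trials with the per-synopsis probabilities of Theorem 3, which is the modeling assumption behind these numbers.

```java
class ThresholdCheck {

    // P[X >= t] for X ~ Binomial(m, p).
    static double upperTail(int m, double p, int t) {
        double sum = 0;
        for (int x = t; x <= m; x++) {
            sum += choose(m, x) * Math.pow(p, x) * Math.pow(1 - p, m - x);
        }
        return sum;
    }

    static double choose(int n, int k) {
        double c = 1;
        for (int i = 1; i <= k; i++) {
            c = c * (n - k + i) / i;
        }
        return c;
    }

    public static void main(String[] args) {
        int m = 20, mPrime = 4;
        double pFP = upperTail(m, 0.0412, mPrime);      // ~0.0082
        double pFN = 1 - upperTail(m, 0.3454, mPrime);  // P[X < 4] ~ 0.0484
        System.out.printf("Pr[FP] = %.4f, Pr[FN] = %.4f%n", pFP, pFN);
    }
}
```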
Figure 3 illustrates the operation of our algorithm for w = 2. Assume that the termination criterion is satisfied in epoch e. The BS broadcasts a STOP message which directs all nodes to terminate the aggregation phase. Note that by the time each node in the network receives the broadcast STOP message, many of the nodes will have already sent MACs corresponding to their contributions for the next epoch e + 1 of the algorithm. Thus, the effect of the termination criterion being satisfied in epoch e is to terminate the aggregation after epoch e + 1.
We can take advantage of this extra epoch to further increase our level of confidence in the estimated value of R. Let the bit positions of the sliding window in epoch e correspond to bits l and l+1. Instead of estimating R = l+1 because m* out of m synopses had both bits l and l+1 equal to 1, we can now estimate R based on the observed value of R_i for all m synopses. Our simulations show that the estimate of the aggregate computed using our extended approach is close to the estimate computed using the original synopsis diffusion algorithm [3, 14].
Latency. The number of epochs taken by our sliding-window approach depends on the ratio (u/mu) of the upper bound of the aggregate to its actual value. If the upper bound is u and the actual value is mu, then for a window of width w the number of epochs is

(log_2(u/m) - log_2(mu/m))/w + 2 = (log_2(u/mu))/w + 2.
Communication Overhead. Theorem 3 implies that it is highly likely that the sliding window contains the position R when the termination criterion is satisfied. As discussed above, if the termination criterion is satisfied in epoch e, the aggregation completes after epoch e + 1. Thus, by Property 2 and Property 4 in Section 2, if m synopses are computed in parallel, the expected number of nodes which send a MAC varies in the range of 2.58m to 5.16m. Even if a sensor node contributes to more than one bit, it sends just one MAC validating all the bits. Note that the number of contributing nodes does not exceed this range even if the network size is increased. Our simulation results show that 85 MACs are sent on average when m = 20.
We observe that the width of the window w determines a tradeoff
between the communication overhead and the latency. If we divide
the synopses into wider windows, the number of MACs sent and
hence the communication overhead will increase while the latency
of the aggregation process will decrease, and vice versa.
6.4 Discussion
An alternative approach to the sliding window-based approach described above is one in which the base station computes the aggregate of interest in the first epoch using the original Synopsis Diffusion algorithm. It then broadcasts a message requesting only the nodes that contribute to the bit window that contains R to send the MACs authenticating their local synopses. If the BS successfully verifies all the MACs it receives, then the protocol terminates at the end of the second epoch. However, if it does not receive the requested MACs or if one or more MACs are invalid, the BS executes the sliding-window protocol described above to compute the correct value of R. If the probability of compromised nodes being present in the network is low, then this alternative approach is preferable to the extended approach since it will have much lower latency on average.
SIMULATION RESULTS
In this section, we report on a detailed simulation study that examined
the performance of our attack-resilient aggregation algorithms
discussed in Sections 5 and 6. Our simulations were written
using the TAG simulator developed by Madden et al. [11]. We
added the attack-resilient functionality to the source code provided
by Considine et al. [3] which simulates their multipath aggregation
algorithms in the TAG simulator environment.
7.1 Simulation Environment
For our basic experimental network topology, we used a regular 30 x 30 grid with 900 sensor nodes, where one sensor is placed at each grid point and the base station is at the center of the grid, as in [3]. The communication radius of each node is sqrt(2) units, allowing the nearest eight grid neighbors to be reached.
The goal of our simulation experiments is to examine the communication
overhead and accuracy of our scheme in the presence
of packet losses, which are relatively frequent in sensor networks.
We use a simple packet loss model in which packets are dropped
with a fixed probability; this packet loss rate is assumed to include
packets that are lost due to being dropped by compromised nodes.
We do not model any additional attacks by compromised nodes,
specifically the falsified subaggregate and the falsified local value
attacks, in our simulation. This is because we have already shown
that these attacks cannot affect the estimate of the aggregate computed
at the sink. Consequently, these attacks simply have the effect
of increasing the communication and computation overhead; in effect
, they become a form of DOS or resource consumption attacks.
We assign a unique id to each sensor, and we assume that the sensor
reading is a random integer uniformly distributed in the range
of 0 to 250 units. We compute 20 synopses in parallel using the
PCSA algorithm as in the experiments reported in [3, 14]. We use
the method of independent replications as our simulation methodology
. Each simulation experiment was repeated 200 times with a
different seed. The plots below show the 95% confidence intervals
of the reported metric.
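For concreteness, a minimal sketch of this simulation environment follows; the names, data structures, and the independent-loss model are our own modeling choices, not the TAG-based implementation used in the experiments.

```python
import random

GRID, LOSS_RATE = 30, 0.2
nodes = [(x, y) for x in range(GRID) for y in range(GRID)]      # 900 sensors on a 30 x 30 grid
base_station = (GRID // 2, GRID // 2)                           # base station at the centre
readings = {n: random.randint(0, 250) for n in nodes}           # uniform sensor readings in [0, 250]

def neighbors(node):
    """The (up to) eight grid neighbours reachable within the communication radius."""
    x, y = node
    return [(x + dx, y + dy) for dx in (-1, 0, 1) for dy in (-1, 0, 1)
            if (dx, dy) != (0, 0) and 0 <= x + dx < GRID and 0 <= y + dy < GRID]

def delivered():
    """Model one transmission: the packet is dropped with a fixed probability."""
    return random.random() >= LOSS_RATE
```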
7.2 Results and Discussion
Due to space constraints, we will only present the results of our extended approach for computing the Sum aggregate.
Accuracy of our estimate. In the first set of experiments, we validate our claim that our attack-resilient approach has the same accuracy in computing the true value of the aggregate as the original synopsis diffusion approach. Figure 4(a) plots the estimates of our approach and of the synopsis diffusion approach as a function of the packet loss rate. We observe that the two estimates are indeed very close under all loss rate conditions. Note that the average value of the sensor readings is approximately 125, i.e., the accurate Sum is 900 x 125 = 112,500.
Communication overhead
We now compare the communication
overhead of our approach to that of the original synopsis diffusion
approach. Figure 4(b) plots the total number of bytes transmitted
for computing the Sum aggregate. As discussed in Section 5.3,
for preventing a node from using a false reading to generate its
own local synopsis, we can adopt two approaches. In the first approach
, we ignore the impact of the falsified local value attack; in
the figure, this approach is labeled as ARSD (attack-resilient synopsis
diffusion). The second approach requires the contributing
node to include a XMAC, which corresponds to an endorsement
from its neighbors, in the message; in the figure, this approach is
labeled ARSD+XMAC.
For ARSD+XMAC, each contributing node sends an authentication message which has two parts: the first part contains the ID (2 bytes) of the contributing node and its sensed value (3 bytes), and the second part includes the IDs of the k neighbors and a XMAC (4 bytes). If the value of k is not more than 4, then a node needs 8 bytes to specify the identities of the neighbors whose MACs are used to generate the XMAC. Thus, the size of one authentication message is 17 bytes. For ARSD, the contributing node just needs to send its own MAC; no neighbor endorsement is needed, which reduces the authentication message size to 9 bytes.

Figure 4: Experimental Results. (a) Accuracy of Sum: estimated Sum versus link loss rate for ARSD, SD, and the actual Sum. (b) Byte overhead: total bytes transmitted versus link loss rate for ARSD, ARSD+XMAC, and SD. (c) Latency: number of epochs versus log(upper bound / actual Sum) for ARSD with window width 2. (d) Varying the network size: number of contributing nodes versus number of sensor nodes for ARSD.
Figure 4(b) shows that the byte overhead of the ARSD+XMAC scheme is roughly 5 times larger than that of the original approach, whereas ARSD is 2.5 times larger than the original approach. One might expect that if the loss rate is high, our extended approach may take more time to stop because some MACs could be lost en route, and, as a result, the communication overhead could increase. However, Figure 4(b) demonstrates that the overhead of the extended approach does not increase with the loss rate.
Latency. As discussed in Section 6, the latency of the extended approach depends on the looseness of the base station's estimate of the upper bound of Sum. Figure 4(c) plots the number of epochs taken by our extended approach as a function of the ratio of the upper bound to the actual value of the aggregate. The figure shows that the number of epochs increases logarithmically with the ratio of the upper bound to the actual Sum. We note, however, that the byte overhead of our scheme is independent of this ratio.
Effect of network size
In this experiment, we study the impact of
the network size on the communication overhead of the extended
approach. The communication overhead depends upon the number
of contributing nodes that send a MAC to the base station, authenticating
their synopsis. Recall from Section 6 that the expected
number of contributing nodes is independent of the network size.
Figure 4(d) confirms our analysis; we observe that the number of contributing nodes is more or less constant as the network size increases (in this set of experiments, the link loss rate is held at 20%). This figure thus illustrates the scalability of our approach for attack-resilient aggregation.
RELATED WORK
Several data aggregation protocols [11, 19, 23] have been proposed
in the literature which efficiently fuse the sensed information
en-route to the base station to reduce the communication overhead.
Since packet losses and node failures are relatively common in sensor
networks, several studies have investigated the design of robust
aggregation algorithms. Considine et al. [3] and Nath et al. [14,
12] have presented robust aggregation approaches that combine the
use of multi-path routing with clever algorithms that avoid double-counting
of sensor readings. Jelasity et al. [9] proposed a robust
gossip-based protocol for computing aggregates over network components
in a fully decentralized fashion. They assume that nodes
form an overlay network where any pair of nodes are considered to
be neighbors, which makes this protocol impractical for sensor networks
. We note that none of the above algorithms were designed
with security in mind.
Recently several researchers have examined security issues in
aggregation. Wagner [17] examined the problem of resilient data
aggregation in presence of malicious nodes, and provided guidelines
for selecting aggregation functions in a sensor network. Buttyan
et al. [2] proposed a model of resilient aggregation and analyzed
the maximum deviation from the true value of the aggregate that an
adversary could introduce while remaining undetected. The models
used by both Buttyan et al. and Wagner assume that there is no in-network aggregation, that is, the aggregation is performed at the sink. Przydatek et al. [16] present protocols that can be used by a trusted remote user to query a sensor network in which the base station may be compromised and the base station is the only aggregator
. One of the protocols described by Przydatek et al is a robust
approach for counting distinct elements in a data stream that can
be used for estimating the size of the network, i.e., the Count aggregate
. Their approach for counting distinct elements is similar to
our scheme for Count in the sense that in both cases only a subset
of elements need to be verified.
The first secure in-network data aggregation protocol was designed
by Hu and Evans [8]. Their protocol is effective only if no
more than one node is compromised. Recently, Yang et al. [18] proposed
SDAP, a secure hop-by-hop data aggregation protocol which
can tolerate more than one compromised node. SDAP is a tree-based
aggregation protocol with communication cost comparable
with that of the ordinary aggregation protocols while it provides
certain level of assurance on the trustworthiness of the aggregation
result. As SDAP is a tree-based protocol, it is vulnerable to link
loss and node failures which are relatively common in sensor networks
, whereas our protocol is robust to this communication loss
and, at the same time, secure against compromised nodes.
We note that our work is related to the general problem of preventing
false data injection. Du et al. [4] proposed a mechanism
that allows the base station to check the aggregated values submit-ted
by several designated aggregators, based on the endorsements
provided by a certain number of witness nodes around the aggregators
. Their scheme does not provide per-hop aggregation. Several
other works [20, 21, 25] have also proposed solutions to prevent
false data injection attacks in sensor networks, but they do not involve
data aggregation.
CONCLUSION
In this paper, we investigated the security issues of the synopsis diffusion framework in the presence of compromised nodes. We showed that a compromised node can launch several simple attacks on the existing aggregation algorithms, which could significantly distort the estimate of the aggregate. We also proposed modifications to the aggregation algorithms that guard against these attacks. Our analytical and simulation results show that our approach is effective and that it incurs minimal computation and communication overhead.
In this paper, we assume that a sensor node has a security association
only with the base station, and, as a result, the authentication
messages cannot be processed in-network in our approach. To further
reduce the communication overhead, we plan to exploit other
security settings, e.g., local pairwise keys among nodes, as a part
of our future work.
REFERENCES
[1] M. Bellare, R. Guerin, and P. Rogaway. XOR MACs: New methods for message authentication using finite pseudorandom functions. In Proc. of the 15th Annual International Cryptology Conference on Advances in Cryptology - CRYPTO'95, pages 15-28, 1995.
[2] L. Buttyan, P. Schaffer, and I. Vajda. Resilient aggregation
with attack detection in sensor networks. In Proc. of 2nd
IEEE Workshop on Sensor Networks and Systems for
Pervasive Computing
, 2006.
[3] J. Considine, F. Li, G. Kollios, and J. Byers. Approximate
aggregation techniques for sensor databases. In Proc. of
IEEE Int'l Conf. on Data Engineering (ICDE)
, 2004.
[4] W. Du, J. Deng, Y. S. Han, and P. Varshney. A pairwise key
pre-distribution scheme for wireless sensor networks. In
Proc. of the 10th ACM Conference on Computer and
Communications Security (CCS '03).
, 2003.
[5] P. Flajolet and G. N. Martin. Probabilistic counting algorithms for data base applications. Journal of Computer and System Sciences, 31(2):182-209, 1985.
[6] S. Ganeriwal and M. B. Srivastava. Reputation-based framework for high integrity sensor networks. In Proc. of ACM Workshop on Security of Sensor and Adhoc Networks (SASN), Washington, DC, 2004.
[7] D. Ganesan, R. Govindan, S. Shenker, and D. Estrin. Highly-resilient, energy-efficient multipath routing in wireless sensor networks. Mobile Computing and Communications Review, 4(5):11-25, 2001.
[8] L. Hu and D. Evans. Secure aggregation for wireless
networks. In Proc. of Workshop on Security and Assurance in
Ad hoc Networks.
, 2003.
[9] M. Jelasity, A. Montresor, and O. Babaoglu. Gossip-based aggregation in large dynamic networks. ACM Transactions on Computer Systems, 23(3):219-252, 2005.
[10] F. Koushanfar, M. Potkonjak, and A. Sangiovanni-Vincentelli. Fault tolerance techniques in wireless ad-hoc sensor networks. In Sensors 2002, Proceedings of IEEE, pages 1491-1496.
[11] S. Madden, M. J. Franklin, J.M. Hellerstein, and W. Hong.
TAG: A tiny aggregation service for ad hoc sensor networks.
In Proc. of 5th USENIX Symposium on Operating Systems
Design and Implementation
, 2002.
[12] A. Manjhi, S. Nath, and P. Gibbons. Tributaries and deltas: Efficient and robust aggregation in sensor network streams. In Proc. of ACM International Conference on Management of Data (SIGMOD), 2005.
[13] Mica Motes. http://www.xbow.com.
[14] S. Nath, P. B. Gibbons, S. Seshan, and Z. Anderson.
Synopsis diffusion for robust aggregation in sensor networks.
In Proc. of the 2nd international conference on Embedded
networked sensor systems (SenSys)
, 2004.
[15] A. Perrig, R. Szewczyk, V. Wen, D. Culler, and J. D. Tygar.
SPINS: Security protocols for sensor networks. In Seventh
Annual International Conference on Mobile Computing and
Networks (MobiCOM)
, 2001.
[16] B. Przydatek, D. Song, and A. Perrig. SIA: Secure
information aggregation in sensor networks. In Proc. of the
1st international conference on Embedded networked sensor
systems (SenSys)
, 2003.
[17] D. Wagner. Resilient aggregation in sensor networks. In
Proc. of ACM Workshop on Security of Sensor and Adhoc
Networks (SASN)
, 2004.
[18] Y. Yang, X. Wang, S. Zhu, and G. Cao. SDAP: A secure
hop-by-hop data aggregation protocol for sensor networks.
In Proc. of ACM MOBIHOC, 2006.
[19] Y. Yao and J. E. Gehrke. The cougar approach to in-network query processing in sensor networks. ACM SIGMOD Record, 31(2):9-18, September 2002.
[20] Fan Ye, Haiyun Luo, Songwu Lu, and Lixia Zhang.
Statistical en-route filtering of injected false data in sensor
networks. In Proc. of IEEE Infocom, 2004.
[21] W. Zhang and G. Cao. Group rekeying for filtering false data
in sensor networks: A predistribution and local
collaboration-based approach. Proc. of IEEE Infocom, 2005.
[22] J. Zhao and R. Govindan. Understanding packet delivery
performance in dense wireless sensor networks. In Proc. of
80
the 1st international conference on Embedded networked
sensor systems (SenSys)
, 2003.
[23] J. Zhao, R. Govindan, and D. Estrin. Computing aggregates
for monitoring sensor networks. In Proc. of the 2nd IEEE
International Workshop on Sensor Network Protocols and
Applications
, 2003.
[24] S. Zhu, S. Setia, and S. Jajodia. LEAP: Efficient security
mechanisms for large-scale distributed sensor networks. In
Proc. of the 10th ACM Conference on Computer and
Communications Security (CCS '03).
, 2003.
[25] S. Zhu, S. Setia, S. Jajodia, and P. Ning. An interleaved
hop-by-hop authentication scheme for filtering injected false
data in sensor networks. In Proc. of IEEE Symposium on
Security and Privacy
, 2004.
Appendix
A. Below we describe the algorithm (SecureCount) executed by each node in response to a Count query. X represents the node Id.

Algorithm 2 SecureCount(X, Seed, a, b)
1: M := {};  // M is initialized as an empty set of MACs
2: i := SG(X, Seed);  // X contributes to bit i
3: if (a <= i <= b) then
4:   m := [X | i | Seed];
5:   M_X := MAC(K_X, m);
6:   M := M union {M_X};
7: end if
8: S_l := SF();  // S_l is the fused synopsis at X
9: M := M union C;  // C represents the set of MACs X received from its child nodes
10: X sends S_l | M;  // X forwards the fused synopsis and the collected MACs toward the base station
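To make the per-node logic concrete, the sketch below instantiates Algorithm 2 in Python under our own assumptions about the primitives: SG is modeled as a Flajolet-Martin style coin flip over a hash of X|Seed, MAC is HMAC-SHA1 with the key K_X shared with the base station, and the fusion function SF is a bitwise OR of the local and child synopses. None of these concrete instantiations are mandated by the paper.

```python
import hashlib, hmac

SYNOPSIS_LEN = 32  # k, length of a synopsis in bits

def sg(node_id: str, seed: str) -> int:
    """Flajolet-Martin style coin flip: index of the least-significant 1-bit of a hash."""
    digest = int(hashlib.sha1(f"{node_id}|{seed}".encode()).hexdigest(), 16)
    i = 0
    while not (digest >> i) & 1 and i < SYNOPSIS_LEN - 1:
        i += 1
    return i

def secure_count(node_id, key: bytes, seed, a, b, child_synopses, child_macs):
    """One node's step of SecureCount: contribute, authenticate if inside [a, b], fuse, forward."""
    macs = set(child_macs)                      # MACs received from child nodes
    i = sg(node_id, seed)                       # the bit this node contributes to
    local = [0] * SYNOPSIS_LEN
    local[i] = 1
    if a <= i <= b:                             # bit falls inside the window requested by the BS
        msg = f"{node_id}|{i}|{seed}".encode()
        macs.add(hmac.new(key, msg, hashlib.sha1).hexdigest())
    fused = local
    for s in child_synopses:                    # SF(): bitwise OR of local and child synopses
        fused = [x | y for x, y in zip(fused, s)]
    return fused, macs                          # forwarded toward the base station
```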
B. Proofs of Theorems
We provide the proofs for the theorems presented in the paper.

Theorem 1. Let there be n nodes in the sensor network, among which t nodes are compromised. Let u_v and a_v denote the upper bound and the average value of the sensor reading, respectively. Let S be the final synopsis computed at the sink and let R be the length of the prefix of all ones in S. Let s denote the value of the Sum aggregate. If each compromised node claims that its sensed value is equal to the upper bound u_v, and if t u_v < s, then the probability Pr[S[R+1] = 1] is proportional to the product of the fraction of compromised nodes in the network, t/n, and the ratio u_v/a_v.

PROOF. By Property 2 in Section 2, the expected value of the estimator R, for the Sum synopsis S, is log_2(s), where s denotes the Sum. As a node X with sensed value v invokes the function CT() v times (in the synopsis generation phase), the probability that a compromised node X, which claims the sensed value u_v, does not contribute to bit i in S is (1 - 1/2^i)^{u_v}. So, the probability p that such a node contributes to the (R+1)th bit is p = 1 - (1 - 1/2^{R+1})^{u_v}. Using 2^{R+1} ~ 2s and simplifying, we get

p = 1 - (1 - 1/(2s))^{u_v} ~ 1 - (1 - u_v/(2s)) = u_v/(2s).

The above approximation is valid as u_v is much smaller than s. If there are t compromised nodes, then Pr(S[R+1] = 1) is

q = 1 - (1 - p)^t ~ t p = t u_v/(2s) = (1/2)(t/n)(u_v/a_v),

since s = n a_v.
To prove Theorem 2, we first prove the following results.
Lemma 1. Let E_i, 1 <= i <= k-2, denote the event that the string "011" appears in a synopsis S from bit i to bit (i+2) (i.e., S[i] = 0, S[i+1] = 1, and S[i+2] = 1), where k is the length of S. The maximum value of the probability (p_i) of the event E_i is 0.037, for any value of i and for any value of Count (or Sum) shared by S.

PROOF. If function CT() is invoked once (ref. Section 2), then Pr[S[j] = 1] = q_j = 1/2^j, 1 <= j <= k. This probability increases if it is given that some bit j', 1 <= j' <= k, will remain 0. Specifically,

Pr[S[j] = 1 | S[j'] = 0] = q_j / (1 - q_{j'}) = 1 / (2^j (1 - 1/2^{j'})).

If phi is the total Count (or Sum) shared by synopsis S, then Pr[S[i] = 0] = (1 - q_i)^phi, and

Pr[S[i+1] = 1, S[i+2] = 1]
= 1 - Pr[S[i+1] = 0] - Pr[S[i+2] = 0] + Pr[S[i+1] = 0, S[i+2] = 0]    (1)
= 1 - (1 - q_{i+1})^phi - (1 - q_{i+2})^phi + (1 - q_{i+1} - q_{i+2})^phi
= 1 - (1 - 1/2^{i+1})^phi - (1 - 1/2^{i+2})^phi + (1 - 1/2^{i+1} - 1/2^{i+2})^phi.

So, the probability of the event E_i is

p_i = Pr[S[i] = 0] * Pr[S[i+1] = 1, S[i+2] = 1 | S[i] = 0]
    = (1 - 1/2^i)^phi [1 - (1 - 1/(2^{i+1}(1 - 1/2^i)))^phi - (1 - 1/(2^{i+2}(1 - 1/2^i)))^phi + (1 - 1/(2^{i+1}(1 - 1/2^i)) - 1/(2^{i+2}(1 - 1/2^i)))^phi].    (2)

Note that if i << log_2(phi), the first factor is close to 0 and the second factor is close to 1, making p_i close to 0. On the other hand, if i >> log_2(phi), the first factor is close to 1 but the second factor is close to 0, again making p_i close to 0. p_i attains its highest value when i is close to log_2(phi). We have numerically found that the maximum value of p_i is 0.037, for any value of i or phi.
Lemma 2. Let E denote the event that the string "011" appears in a synopsis S at any position. The probability of the event E is less than 0.099.

PROOF. E_i denotes the event that "011" appears in a synopsis S where the 0 is at the ith bit. We observe that the events E_i, E_{i+1}, E_{i+2} are mutually exclusive, for any value of i. Following the same direction as in Lemma 1, we can show that the probability that "011 s_l 011" appears in synopsis S is close to zero, where s_l represents any string of length l, l >= 0. So, the probability that two events E_i and E_j with j >= (i+3) occur together is negligible, for any value of i. As a result, we can approximate the events E_i as mutually exclusive, and hence the probability of event E is

p = Sum_{i=1}^{k} p_i,

where p_i is given by expression (2) and k is the length of S. We have numerically found that the maximum value of p is 0.099.
Lemma 3. Let F_i denote the event that a string "0 s_i 11" appears in a synopsis S, and let F denote the general event that a string "0 s_l 11", l >= 0, appears in S, where s_l represents any string of length l. Then Pr[F] = Pr[F_0].

PROOF. As the string "011" is the special case of the string "0 s_l 11" with l = 0, Pr[F] >= Pr[F_0]. On the other hand, if a string s' = "0 s_l 11", l >= 0, appears in S, then the string "011" must also appear as a substring of s'. As an example, if s' = "01011", where s_l = "10", we can see "011" as a substring of s'. Hence, Pr[F] <= Pr[F_0]. So, we get that Pr[F] = Pr[F_0].
Theorem 2. Let F denote the event that the string "0 s_l 11", where s_l represents any string of length l, l >= 0, appears in a synopsis S. The probability of the event F is less than 10%.

PROOF. As the event F_0 in Lemma 3 is the same as the event E in Lemma 2, we get that the probability of event F is less than 10%.
Theorem 3. Let G_i denote the event that both bit i and bit (i+1) in a synopsis S are 1. Let rho denote the expected value of the estimator R. Then, Pr[G_rho] = 0.3454, Pr[G_{rho+1}] = 0.1315, and Pr[G_{rho+2}] = 0.0412.

PROOF. The expected value of R is log_2(phi/m), which we denote by rho, where phi is the total Count (or Sum) shared by the m synopses following the algorithm PCSA. If function CT() is invoked once (ref. Section 2),

Pr[S[i] = 1] = q_i = (1/m)(1/2^i), 1 <= i <= k,

because synopsis S is selected with probability 1/m among the m synopses. As phi is the total Count (or Sum) shared by all the synopses, we get, using an expression similar to (1) in Lemma 1, that

Pr[G_i] = 1 - (1 - 1/(m 2^i))^phi - (1 - 1/(m 2^{i+1}))^phi + (1 - 1/(m 2^i) - 1/(m 2^{i+1}))^phi.

Evaluating this expression at i = rho, i = rho+1, and i = rho+2, and approximating (1 - x)^phi by e^{-phi x}, we obtain Pr[G_rho] = 0.3454, Pr[G_{rho+1}] = 0.1315, and Pr[G_{rho+2}] = 0.0412.
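As a numerical cross-check (not part of the original proof), the stated values follow from the expression for Pr[G_i] above if the expected prefix position is taken to include the usual Flajolet-Martin bias constant, i.e., m 2^rho ~ kappa phi with kappa ~ 0.7735; treating rho this way is our assumption, made only so that the sketch below is self-contained and reproduces the numbers.

```python
from math import exp

KAPPA = 0.7735   # assumed Flajolet-Martin bias constant: m * 2**rho ~ KAPPA * phi

def pr_g(offset: int) -> float:
    """Pr[G_{rho+offset}] using (1 - x)**phi ~ exp(-phi*x) and m * 2**rho = KAPPA * phi."""
    x1 = 1.0 / (KAPPA * 2**offset)   # phi / (m * 2**(rho + offset))
    x2 = x1 / 2.0                    # phi / (m * 2**(rho + offset + 1))
    return 1 - exp(-x1) - exp(-x2) + exp(-(x1 + x2))

print([round(pr_g(d), 4) for d in range(3)])   # -> [0.3454, 0.1315, 0.0412]
```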
| sensor networks;node compromise prevention;falsified local value attack;in-network data aggregation;Attack resilient hierarchical data aggregation;Sum aggregate;falsified sub-aggregate attack;Attack-Resilient;Count aggregate;Sensor Network Security;robust aggregation;Synopsis Diffusion;Data Aggregation;synopsis diffusion aggregation framework;network aggregation algorithms;Hierarchical Aggregation |
39 | Automated Rich Presentation of a Semantic Topic | To richly present a topic, it is not only expected that much relevant multimodal information, including images, text, audio and video, can be extracted; it is also important to organize and summarize the related information, and to provide users with a concise and informative storyboard about the target topic. This enables users to quickly grasp and better understand the content of a topic. In this paper, we present a novel approach to automatically generating a rich presentation of a given semantic topic. In our proposed approach, the related multimodal information of a given topic is first extracted from available multimedia databases or websites. Since each topic usually contains multiple events, a text-based event clustering algorithm is then performed with a generative model. Other media information, such as representative images and possibly available video clips and flashes (interactive animations), is associated with each related event. A storyboard of the target topic is thus generated by integrating each event and its corresponding multimodal information. Finally, to make the storyboard more expressive and attractive, a piece of incidental music is chosen as the background and is aligned with the storyboard. A user study indicates that the presented system works quite well on our testing examples. | INTRODUCTION
In the multimedia field, a major objective of content analysis is to
discover the high-level semantics and structures from the low-level
features, and thus to facilitate indexing, browsing, searching,
and managing the multimedia database. In recent years, a lot of
technologies have been developed for various media types,
including images, video, audio and etc. For example, various
approaches and systems have been proposed in image content
analysis, such as semantic classification [1], content-based image
retrieval [2] and photo album management [3]. There are also a lot
of research focuses on video analysis, such as video segmentation
[4], highlight detection [5], video summarization [6][7], and video
structure analysis [8], applied in various data including news
video, movie and sports video. Since audio information is very
helpful for video analysis, many research works on audio are also
developed to enhance multimedia analysis, such as audio
classification [9], and audio effect detection in different audio
streams [10]. Most recently, there are more and more approaches
and systems integrating multimodal information in order to
improve analysis performance [11][12].
The main efforts of the above mentioned research have focused on
understanding the semantics (including a topic, an event or the
similarity) from the multimodal information. That is, after the
multimedia data is given, we want to detect the semantics implied
in these data. In this paper, we propose a new task, Rich
Presentation, which is an inverse problem of the traditional
multimedia content analysis. That is, if we have a semantic topic,
how can we integrate its relevant multimodal information,
including image, text, audio and video, to richly present the target
topic and to provide users a concise and informative storyboard?
In this paper, the so-called "semantic topic" is a generic concept.
It could be any keyword representing an event or events, a
person's name, or anything else. For example, "World Cup 2002"
and "US election" could be topics, as well as "Halloween" and
"Harry Potter". In this paper, our task is to find sufficient
information on these topics, extract the key points, fuse the
information from different modalities, and then generate an
expressive storyboard.
Rich presentation can be very helpful to facilitate quickly
grasping and better understanding the corresponding topic.
People usually search information from (multimedia) database or
the Internet. However, what they get is usually a bulk of
unorganized information, with many duplicates and noise. It is
tedious and costs a long time to get what they want by browsing
the search results. If there is a tool to help summarize and
integrate the multimodal information, and then produce a concise
and informative storyboard, it will enable users to quickly figure
out the overview contents of a topic that they want to understand.
Rich presentation provides such a tool, and thus it could have
many potential applications, such as education and learning,
multimedia authoring, multimedia retrieval, documentary movie
production, and information personalization.
In this paper, we will present the approach to rich presentation. In
order to produce a concise and informative storyboard to richly
present a target topic, we need to answer the following questions.
1) How to extract the relevant information regarding the target
topic? 2) How to extract the key points from the relevant
information and build a concise and informative storyboard? 3)
How to fuse all the information from the different modalities? And 4)
how to design the corresponding rendering interface?
Fig. 1 The system framework of rich presentation of a target semantic topic. It is mainly composed of three steps: relevant multimodal information retrieval (text, relevant media and music for the target topic, with user interaction); media analysis (multiple event clustering, event summarization with the 4W elements plus time and geographic information, association of representative images and relevant video clips, and music rhythm analysis yielding an onset/beat sequence with strength confidence); and rich presentation generation (storyboard generation with event presentation, multimodal information fusion and layout design, plus music and storyboard synchronization).
In this paper, we propose a number of novel approaches to deal
with the above issues and also present an example system. Fig. 1
illustrates the proposed system framework of rich presentation. It
is mainly composed of three steps, relevant multimodal information
extraction, media analysis including multiple events clustering
, representative media detection and music rhythm analysis;
and the final storyboard generation and music synchronization.
In the proposed system, given the semantic topic, the relevant
information, including text, image, video and music, is first
extracted from the available multimedia database or the web database
. User interaction is also allowed to provide extra relevant
material or give relevant feedback. Then, the information is
summarized, with an event clustering algorithm, to give a concise
representation of the topic and figure out the overview of the
contents. Other multimedia materials, such as representative
images (or image sequences) and geographic information, are
subsequently associated with each event. In the next step, all the
above information is integrated to generate a storyboard, in which each event is presented as one or multiple slides. A piece of incidental music, which is also possibly relevant to the topic, is finally synchronized with the storyboard to improve its expressiveness and attractiveness. Thus, with these steps, a concise and informative rich presentation of the target topic is generated.
The rest of the paper is organized as follows. Section 2 discusses
the relevant information extraction corresponding to the target
topic. Section 3 presents our approach to the topic representation,
including multiple events clustering, event description, and
representative media selection. Section 4 describes the approach
to rich presentation generation, including storyboard generation,
incidental music analysis and synchronization. Experiments and evaluations are presented in Section 5. Conclusions are given in Section 6.
OBTAINING RELEVANT INFORMATION
To obtain the multimodal information relevant to the input topic (keyword), we could generally search various databases that have been indexed with state-of-the-art multimedia analysis techniques. However, at the current stage, there is a lack of such publicly available multimedia databases. Public search engines like MSN or Google index all the Internet web-pages and can return a lot of relevant information, but the search results usually contain much noise. We could also build a private database for this system to provide more relevant and cleaner results, but it would be too expensive to collect and annotate sufficient multimedia data for various topics. In order to obtain relatively accurate and sufficient data for an arbitrary topic, in our system we chose to collect the relevant multimodal information of the given topic from news websites such as MSNBC, BBC and CNN, instead of building a database from scratch. These news websites are usually well organized and managed, and contain various kinds of high-quality information including text, images and news video clips. Although news websites are used as the information sources in our system, other multimedia databases can also easily be incorporated into the system if they are available.
Instead of directly submitting the topic as a query and getting the returned results via the search function provided by the websites, in our system we crawled the news documents from these websites in advance and then built a full-text index. This enables us to quickly obtain the relevant documents, and also enables us to use traditional information retrieval technologies, such as query expansion [13], to remove query ambiguity and obtain more relevant documents.
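As a toy illustration of this step, the sketch below builds a simple inverted index over crawled documents and queries it with a manually supplied expansion list; the data, helper names, and the naive expansion mechanism are ours and are much simpler than a production full-text index.

```python
from collections import defaultdict

def build_index(docs):
    """docs: {doc_id: text}. Returns a simple inverted index {term: set(doc_id)}."""
    index = defaultdict(set)
    for doc_id, text in docs.items():
        for term in text.lower().split():
            index[term].add(doc_id)
    return index

def search(index, topic, expansions=()):
    """Return documents matching the topic keyword or any expanded term."""
    terms = [topic.lower()] + [t.lower() for t in expansions]
    hits = set()
    for term in terms:
        hits |= index.get(term, set())
    return hits

docs = {1: "Harry Potter book release", 2: "World Cup match report", 3: "Potter movie premiere"}
index = build_index(docs)
print(search(index, "Potter", expansions=["Harry"]))   # {1, 3}
```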
In our approach, user interaction is also allowed, to provide more materials relevant to the topic or to give relevance feedback on the returned results. For example, from the above websites we can seldom find a music clip relevant to the target topic. In this case, users can provide the system with a preferred piece of music, which will then be used as the incidental music to accompany the storyboard presentation. Users can also give feedback on the obtained documents. For example, if a user gives a thumb-up to a document, the relevant information of that document is required to appear in the final storyboard. On the other hand, users can also thumb-down a document to remove the related information.
TOPIC REPRESENTATION
A semantic topic is usually a quite broad concept and it usually
contains multiple events. For example, in the topic "Harry Potter",
the publication of each book and the release of each movie could
be considered as an event; while in the topic "World Cup 2002",
each match could also be taken as an event. For each event, there
are usually many documents reporting it. Therefore, in order to
generate an informative and expressive storyboard to present the
topic, it would be better to decompose the obtained information
and cluster the documents into different events.
However, event definition is usually subjective, and different individuals may have different opinions. It is also unclear at which scale an event should be defined. Again taking "World Cup" as an example: at a larger scale, "World Cup 2002" and "World Cup 2006" could themselves be considered as big events. Therefore, due to this vagueness, in this paper we do not strictly define each event of the target topic. Following our previous work on news event detection [14], an event is assumed to be a body of similar information describing similar persons, similar keywords, similar places, and a similar time duration. Therefore, in our system, an event is represented by four primary elements: who (persons), when (time), where (locations) and what (keywords); and event clustering is to group the documents reporting similar primary elements. As for the scale of an event, in this paper it can be adaptively determined by the time range of the obtained documents or by the required event number.
In this section, we present a novel clustering approach based on a
generative model proposed in [14], instead of using traditional
clustering methods such as K-means. After event clusters are
obtained, the corresponding event summary is then extracted and
other representative media is associated with each event.
3.1 Multiple Event Clustering
To group the documents into different events, essentially, we need to calculate p(e_j | x_i), which represents the probability that a document x_i belongs to an event e_j. Here, as mentioned above, an event e_j (and thus a document x_i describing the event) is represented by four primary elements: who (persons), when (time), where (locations) and what (keywords). That is,

Event / Document = {persons, locations, keywords, time}

Assuming that a document is always caused by an event [14] and that the four primary elements are independent, to calculate the probability p(e_j | x_i), in our approach we first determine the likelihood that the document x_i is generated from event e_j, p(x_i | e_j), which can be represented by the following generative model,

p(x_i | e_j) = p(name_i | e_j) p(loc_i | e_j) p(key_i | e_j) p(time_i | e_j)    (1)

where name_i, loc_i, key_i, and time_i are the feature vectors representing persons, locations, keywords and time in the document x_i, respectively. In our approach, the above entities are extracted by the BBN NLP tools [15]. The tool can extract seven types of entities, including persons, organizations, locations, date, time, money and percent. In our approach, the obtained organization entities are also considered as person entities, and all the words except persons, locations, and other stop-words are taken as keywords.
In more detail, name_i (similarly, loc_i and key_i) is a vector <c_{i1}, c_{i2}, ..., c_{iN_p}>, where c_{in} is the occurrence frequency of person_n in the document x_i, and person_n is the nth person in the person vocabulary, which is composed of all the persons appearing in all the obtained documents (similarly, we can define the keyword vocabulary and the location vocabulary). Assuming N_p is the size of the person vocabulary, p(name_i | e_j) can be further expressed as

p(name_i | e_j) = Prod_{n=1}^{N_p} p(person_n | e_j)^{c_{in}}    (2)
Since the person, location and keyword are discrete variables represented by words, and the probabilities of the location and keyword can be defined similarly to that of the person in (2), in the following sections we will not discriminate between them and will uniformly represent the probability p(person_n | e_j) (correspondingly, p(location_n | e_j) and p(keyword_n | e_j)) as p(w_n | e_j), which denotes the probability that the word w_n appears in the event e_j.
On the other hand, the time of an event usually lasts a continuous duration. It is also observed, especially in the news domain, that the number of documents about an event usually increases at the beginning stage of the event and then decreases at the end. Therefore, in our approach, a Gaussian model N(u_j, sigma_j) is utilized to roughly represent the probability p(time_i | e_j), where u_j and sigma_j are the mean and the standard deviation, respectively.
To this end, in order to estimate the probability p(e_j | x_i), we need to estimate the parameters Theta = {p(w_n | e_j), u_j, sigma_j, 1 <= j <= K}, assuming K is the number of events (the selection of K is discussed in Section 3.2). In our approach, Maximum Likelihood is used to estimate the model parameters, as

Theta* = argmax_Theta log(p(X | Theta)) = argmax_Theta Sum_{i=1}^{M} log(p(x_i | Theta)) = argmax_Theta Sum_{i=1}^{M} log( Sum_{j=1}^{K} p(e_j) p(x_i | e_j, Theta) )    (3)
where X represents the corpus of the obtained documents, and M and K are the numbers of documents and events, respectively.
Since it is difficult to derive a closed-form solution for the parameters, in our approach an Expectation-Maximization (EM) algorithm is applied to maximize the likelihood, by running the E-step and the M-step iteratively. A brief summary of these two steps is listed as follows, and more details can be found in [14].
In the E-step, the posterior probability p(e_j | x_i) is estimated as

p^{(t+1)}(e_j | x_i) = p^{(t)}(x_i | e_j) p^{(t)}(e_j) / p^{(t)}(x_i)    (4)

where the superscript (t) indicates the tth iteration.
In the M-step, the model parameters are updated as

p^{(t+1)}(w_n | e_j) = (1 + Sum_{i=1}^{M} p^{(t+1)}(e_j | x_i) tf(i,n)) / (N + Sum_{i=1}^{M} Sum_{s=1}^{N} p^{(t+1)}(e_j | x_i) tf(i,s))    (5)

u_j^{(t+1)} = (Sum_{i=1}^{M} p^{(t+1)}(e_j | x_i) time_i) / (Sum_{i=1}^{M} p^{(t+1)}(e_j | x_i))    (6)

(sigma_j^{(t+1)})^2 = (Sum_{i=1}^{M} p^{(t+1)}(e_j | x_i) (time_i - u_j^{(t+1)})^2) / (Sum_{i=1}^{M} p^{(t+1)}(e_j | x_i))    (7)
where tf(i,n) is the term frequency of the word w_n in the document x_i and N is the corresponding vocabulary size. It is noted that, in (5), Laplace smoothing [16] is applied to prevent zero probabilities for infrequently occurring words.
Finally, the prior of each event is updated as

p^{(t+1)}(e_j) = (Sum_{i=1}^{M} p^{(t+1)}(e_j | x_i)) / M    (8)

The algorithm increases the log-likelihood consistently over the iterations and then converges to a local maximum. Once the parameters are estimated, we can simply assign each document to an event, as follows:

y_i = argmax_j (p(e_j | x_i))    (9)

where y_i is the event label of the document x_i.
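A compact sketch of the E-step and M-step updates (4)-(8) and the final assignment (9) is given below, using a single unified word distribution p(w_n | e_j) as in the text and a Gaussian for time; the matrix representation, random initialization and fixed iteration count are our own simplifications of the procedure in [14].

```python
import numpy as np

def em_event_clustering(tf, times, K, iters=50, eps=1e-9):
    """tf: (M, N) term-frequency matrix; times: (M,) timestamps; K: number of events."""
    M, N = tf.shape
    prior = np.full(K, 1.0 / K)
    word_p = np.random.dirichlet(np.ones(N), size=K)           # p(w_n | e_j)
    mu = np.random.choice(times, K).astype(float)              # u_j
    sigma = np.full(K, times.std() + eps)                      # sigma_j
    for _ in range(iters):
        # E-step (4): posterior p(e_j | x_i) from the word part, the time Gaussian, and the prior
        log_lik = tf @ np.log(word_p.T + eps)
        log_lik += (-0.5 * ((times[:, None] - mu) / sigma) ** 2
                    - np.log(sigma)) + np.log(prior + eps)
        log_lik -= log_lik.max(axis=1, keepdims=True)
        post = np.exp(log_lik)
        post /= post.sum(axis=1, keepdims=True)                # p(e_j | x_i)
        # M-step (5)-(8): Laplace-smoothed word probabilities, time Gaussian, priors
        counts = post.T @ tf                                    # (K, N)
        word_p = (1.0 + counts) / (N + counts.sum(axis=1, keepdims=True))
        weight = post.sum(axis=0) + eps
        mu = (post * times[:, None]).sum(axis=0) / weight
        sigma = np.sqrt((post * (times[:, None] - mu) ** 2).sum(axis=0) / weight) + eps
        prior = weight / M
    return post.argmax(axis=1), (prior, word_p, mu, sigma)      # event labels (9) and parameters
```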
The advantage of this generative approach is that it not only considers the temporal continuity of an event, but it can also deal with the issue that some events overlap in certain time durations. In this case, the Gaussian models of the event times can also overlap through this data-driven parameter estimation. From this point of view, the event clustering resembles a Gaussian mixture model (GMM) estimation along the timeline.
3.2 Determining the Number of Events
In the above approach to event clustering, the event number K is assumed to be known (as shown in (3)-(8)). However, the event number is usually very difficult to determine a priori. In our approach, an intuitive way is adopted to roughly estimate the event number based on the document distribution along the timeline.
As mentioned above, it is assumed that each document is caused by an event, and the document number of an event changes with the development of the event. According to this property, each peak (or the corresponding contour) of the document distribution curve might indicate one event [14], as Fig. 2 shows. Thus, we can roughly estimate the event number by simply counting the number of peaks. However, the curve is quite noisy and there inevitably exist some noisy peaks. In order to avoid the noisy peaks, in our approach only the salient peaks are assumed to be relevant to the event number.
To detect the salient peaks, we first smooth the document curve with a half-Hamming (raised-cosine) window, and then remove the very small peaks with a threshold. Fig. 2 illustrates a smoothed document distribution with the corresponding threshold, collected on the topic "US Election" over four months. In the experiments, the threshold is adaptively set as mu_d + sigma_d/2, where mu_d and sigma_d are the mean and the standard deviation of the curve, respectively.
After the smoothing and tiny-peak removal, we further detect the valleys between every two contiguous peaks. Thus, the range of an event (which is correlated to the corresponding peak) can be considered as the envelope between the two valleys. As shown in Fig. 2, the duration denoted by L_i + R_i is a rough range of the event correlated to the peak P_i. Assuming an important event usually has more documents and has effects over a longer duration, the saliency of each peak is defined as

S_i = (P_i (L_i + R_i)) / (P_avr D_avr)    (10)

where P_i is the ith peak value, L_i and R_i are the durations from the ith peak to the previous and the next valley, P_avr is the average peak value, and D_avr is the average duration between two valleys in the curve. S_i is the saliency value of the peak P_i. It can also be considered as the normalized area under peak P_i, and thus it roughly represents the document number of the corresponding event.
In our approach, the top K salient peaks are selected to determine the event number; K is chosen as the smallest number of top-ranked peaks whose cumulative saliency exceeds a fraction theta of the total saliency:

Sum_{i=1}^{K} S'_i / Sum_{i=1}^{N} S'_i >= theta    (11)

where the S'_i are the saliency values sorted from large to small, N is the total number of detected peaks and theta is a threshold. In our experiments, theta is set to 0.9, which roughly means that at least 90% of the documents will be kept in the subsequent initialization of event clustering. This selection scheme is designed to guarantee that no important information is missed in the presentation. After the event number and the initial clusters (the most salient peaks with their corresponding ranges) are selected, the event parameters can be initialized and then updated iteratively.
Fig. 2 Peak saliency definition. The figure illustrates the smoothed document distribution (document number per day) with the corresponding threshold for tiny-peak removal; each peak P_i is assumed to be correlated with an event, and L_i and R_i denote the durations from P_i to the previous and next valleys.
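A sketch of this peak-based estimate of the event number is given below, combining the half-Hamming smoothing, the adaptive threshold, the saliency of (10) and the cumulative-saliency cut-off of (11); the smoothing window length and the simple valley search are our own choices.

```python
import numpy as np

def estimate_event_number(doc_per_day, win=7, theta=0.9):
    """doc_per_day: 1-D array of document counts along the timeline."""
    half_hamming = np.hamming(2 * win)[win:]                     # raised-cosine, decaying half
    curve = np.convolve(doc_per_day, half_hamming / half_hamming.sum(), mode="same")
    thresh = curve.mean() + curve.std() / 2                      # adaptive threshold (assumed form)
    peaks = [i for i in range(1, len(curve) - 1)
             if curve[i] > curve[i - 1] and curve[i] >= curve[i + 1] and curve[i] > thresh]
    saliency = []
    for p in peaks:                                              # valleys bound the event range L_i + R_i
        left = right = p
        while left > 0 and curve[left - 1] < curve[left]:
            left -= 1
        while right < len(curve) - 1 and curve[right + 1] < curve[right]:
            right += 1
        saliency.append(curve[p] * (right - left))               # proportional to P_i * (L_i + R_i)
    s = np.sort(saliency)[::-1]
    cum = np.cumsum(s) / (s.sum() + 1e-9)
    return int(np.searchsorted(cum, theta) + 1)                  # smallest K with >= theta coverage
```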
It is noted that techniques such as the Bayesian Information Criterion (BIC) or the minimum description length (MDL) [17] could be used to estimate the optimal event number, by searching through a reasonable range of event numbers to find the one which maximizes the likelihood in (3). However, these algorithms take a long time, and it is usually not necessary to estimate the exact event number in our scenario of rich presentation. Actually, in our system, the most important point of event clustering is that the clustered documents really represent the same event, rather than the exact event number, as observed in the experiments. Moreover, in the step of synchronization between the music and the storyboard (Section 4.2), the number of presented events may be further refined, based on the user's preference, in order to match the presentation duration with the music duration.
3.3 Event Description
After obtaining the events and the corresponding documents, we
not only need a concise event summary, but also need to extract
some representative media to describe each event.
3.3.1 Event Summary
A simple way to summarize an event is to choose some representative words for the persons, locations and keywords of the event. For example, for the event e_j, the 'leading actor' can be chosen as the person with the maximum p(person_n | e_j), while the major location can be selected based on p(location_n | e_j). However, such a brief description might have poor readability. Therefore, in order to increase the readability of the summary, our system also provides an alternative way: we choose a candidate document to represent an event. For example, the document with the highest p(x_i | e_j) is a good candidate representative of the event e_j. However, a document might be too long to be shown on the storyboard. Therefore, in our system, only the "title-brow" (the text between the news title and the news body) of the document, which usually exists and is usually a good overview (summary) of the document based on our observation (this is especially true in our case of news documents), is selected to describe the event.
Fig. 3 The event template of the storyboard, which illustrates (I) the representative media, (II) geographic information, (III) the event summary, and (IV) a film strip giving an overview of the events in temporal order.
3.3.2 Extracting Representative Media
In the obtained documents describing an event, there are usually many illustrative images, and possibly flashes and video clips. This media information is also a good representative of the corresponding event. However, since the obtained documents are directly crawled from the news websites, they usually contain many noisy multimedia resources, such as advertisements. Moreover, there may also exist duplicate images in different documents describing the same event. Therefore, to extract the representative media from the documents, we need to remove noisy media and possible duplicate images. Before this, we also perform a pre-filtering step to remove all images smaller than 50 pixels in height or width.
Noisy Media Detection. In our approach, a simple but efficient rule is used to remove the noisy media resources. We find that almost all advertisements are provided by other agencies rather than by the news websites themselves; that is, the hosts of advertisement resources are from different websites. Thus, in our approach, we extract the host names from the URLs of all multimedia resources and remove those resources whose host name differs from that of the news website.
Duplicate Detection. A number of image signature schemes can be adopted here to accomplish duplicate detection. In our implementation, each image is converted to grayscale and down-sampled to 8x8; that is, a 64-byte signature is obtained for each image. The Euclidean distance between the 64-byte signatures is then taken as the dissimilarity measure, and images with a sufficiently small distance are considered duplicates.
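A sketch of this signature scheme using Pillow and NumPy follows; the library choice and the concrete distance threshold are ours.

```python
import numpy as np
from PIL import Image

def signature(path):
    """64-byte signature: the image converted to grayscale and down-sampled to 8x8."""
    img = Image.open(path).convert("L").resize((8, 8))
    return np.asarray(img, dtype=np.float32).flatten()

def is_duplicate(path_a, path_b, threshold=80.0):
    """Images with a sufficiently small signature distance are considered duplicates."""
    dist = np.linalg.norm(signature(path_a) - signature(path_b))
    return dist < threshold   # the threshold value here is an illustrative choice
```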
After removing the noisy resources and duplicate images, we simply select the 1-4 largest images from the top representative documents (those with the largest p(x_i | e_j)) and take them as the representative media of the corresponding event. The exact number of selected images depends on the document number (i.e., the importance) of the event and on the total number of images the event has. It is noted that, in our current system, we only associate images with each event. However, other media such as video and flashes can be chosen in a similar way.
RICH PRESENTATION GENERATION
In the proposed system, the information obtained above, including the event summaries and representative media, is fused to generate a concise and informative storyboard, in order to richly present the target topic. In this section, we first describe the storyboard generation for the target topic, in which each event is presented with its multimodal information. Then, we present the approach to synchronizing the storyboard with a piece of incidental music.
4.1 Storyboard Generation
In our approach, a storyboard of a target topic is generated by presenting each event of the topic slide by slide. To describe an event, we have obtained the corresponding information including the persons, time, locations, event summary and other relevant images. Therefore, to informatively present each event, we first need to design an event template (i.e., an interface) to integrate all this information.
Fig. 3 illustrates the event template used in our proposed system, with an example event from the topic "US Election". First, the template presents the representative images in the largest area (part I), since pictures are more vivid than words. For each representative picture, the title and date of the document from which it was extracted are also shown. In Fig. 3, there are 4 pictures extracted from 3 documents. Then, the corresponding event summaries of these three documents are presented (part III), where each paragraph refers to the summary of one document. If a user is interested in one document, he can click on the corresponding title to read more details. Moreover, the geographic information of the event is shown with a map in the top-left corner (part II), to give users a view of the event location. The map is obtained from the "MapPoint Location" service [18], which can return a corresponding map based on the user's location query. However, the mapping is sometimes difficult, especially when the event location is ambiguous and the representative location is not accurately detected. For example, the event shown in Fig. 3 is mapped to Washington D.C. rather than New York, where the Republican convention was held, since Washington is the most frequently mentioned place in the documents. Finally, a film strip (part IV) is also presented, arranging the events in temporal order, where each event is simply represented by a cluster of images, with the current event highlighted. This enables users to have a quick overview of the past and future events in the sequence.
By connecting the various events slide by slide, we obtain an informative storyboard for the target topic. In order to capture the development process of the topic, the events are ordered by their timestamps in the generated storyboard.
4.2 Synchronizing with Music
To make the storyboard more expressive and attractive, and to provide a more relaxing way to read the information, the proposed system accompanies the storyboard with a piece of incidental music and aligns the transitions between event slides with the music beats, following the idea of music video generation [19][20]. Sometimes, music can also provide extra information about the target topic; for example, when the target topic is a movie, the corresponding theme song can be chosen for the rich presentation. In this sub-section, we present our approach to music analysis and to synchronization with the storyboard.
4.2.1 Music Rhythm Analysis
In the proposed system, we detect onset sequences instead of the exact beat series to represent the music rhythm. This is because the beat information is sometimes not obvious, especially in light music, which is often selected as incidental music. The strongest onset in a time window can be assumed to be a "beat". This is reasonable since there are several beat positions within a time window (for example, 5 seconds); thus, the most likely position of a beat is the position of the strongest onset.
The process of onset estimation is illustrated in Fig. 4. After an FFT is performed on each frame of 16 ms length, an octave-scale filter-bank is used to divide the frequency domain into six sub-bands, namely [0, w_0/2^6), [w_0/2^6, w_0/2^5), ..., [w_0/2^2, w_0/2], where w_0 refers to the sampling rate.
Fig. 4 The process of onset sequence estimation: the acoustic music data is transformed by FFT and divided into sub-bands (Sub-Band 1, ..., Sub-Band N); an envelope extractor and a difference curve are computed for each sub-band, and the difference curves are summed to form the onset curve.
After the amplitude envelope of each sub-band is extracted by using a half-Hamming window, a Canny operator is used for onset sequence detection by estimating the difference function,

D_i(n) = A_i(n) (*) C(n)    (12)

where D_i(n) is the difference function of the ith sub-band, A_i(n) is the amplitude envelope of the ith sub-band, (*) denotes convolution, and C(n) is the Canny operator with a Gaussian kernel,

C(n) = (n / sigma^2) e^{-n^2/(2 sigma^2)},  n in [-L_c, L_c]    (13)

where L_c is the length of the Canny operator and sigma is used to control the operator's shape; these are set to 12 and 4 in our implementation, respectively.
Finally, the sum of the difference curves of these six sub-bands is
used to extract onset sequence. Each peak is considered as an
onset, and the peak value is considered as the onset strength.
Based on the obtained onsets, an incidental music is further
segmented into music sub-clips, where a strong onset is taken as
the boundary of a music sub-clip. These music sub-clips are then
used as the basic timeline for the synchronization in the next step.
Thus, to satisfy the requirement that the event slide transitions of
the storyboard should occur at the music beats, we just need to
align the event slide boundaries and music sub-clip boundaries.
To give a more pleasant perception, a music sub-clip should be
neither too short nor too long, and preferably not always of the
same length. In our implementation, the length of each music
sub-clip is randomly selected from a range of [t_min, t_max] seconds.
Thus, the music sub-clips are extracted in the following way: given
the previous boundary, the next boundary is selected as the strongest
onset in the window lying [t_min, t_max] seconds away from the
previous boundary. In the proposed system, users can manually
specify the range of the music sub-clip length. The default range is
set to [12, 18] seconds, in order to give users enough time to read
all the information on each event slide.
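A sketch of this boundary selection: starting from the previous boundary,
the next boundary is the strongest onset lying between t_min and t_max
seconds later. The input format (a list of (time, strength) pairs) and the
fallback used when a window contains no onset are assumptions made only
for illustration.

def segment_subclips(onsets, duration, t_min=12.0, t_max=18.0):
    """onsets: list of (time_in_seconds, strength) pairs; duration: music length."""
    boundaries = [0.0]
    while boundaries[-1] + t_max < duration:
        lo, hi = boundaries[-1] + t_min, boundaries[-1] + t_max
        window = [(t, s) for t, s in onsets if lo <= t <= hi]
        if window:
            boundaries.append(max(window, key=lambda ts: ts[1])[0])
        else:
            boundaries.append(hi)  # no onset in the window: assume a plain cut at t_max
    boundaries.append(duration)
    return boundaries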
4.2.2 Alignment Scheme
To synchronize the transitions between different event slides and
the beats of the incidental music, as mentioned above, we actually
need to align the slide boundaries and music sub-clip boundaries.
To satisfy this requirement, a straightforward way is to set the
length of each event slide equal to the length of the corresponding
music sub-clip.
However, as Fig. 5 illustrates, the number of event slides is
usually not equal to the number of music sub-clips. In this case,
our proposed system provides two schemes to solve the problem.
1) Music Sub-clip Based. In this scheme, only the top N important
events of the target topic are adaptively chosen and used in the
rich presentation, where N is the number of music sub-clips in the
corresponding incidental music, as Fig. 5 shows. Although a formal
definition of event importance is usually hard and subjective, in
our approach the importance score of an event is simply measured
by the number of documents reporting it, assuming that the more
important the event, the more documents it has. This assumption is
quite similar to that used in the definition of (10).
2) Specified Event Number Based. In this scheme, users can
specify the number of events they want to see. For example, a
user could choose to show the top 30 important events or all the
events. Thus, to accommodate all the events within the music
duration, we repeat the incidental music if needed and then fade
out the music at the end.
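The two schemes can be summarized by the following sketch, in which each
event is assumed to carry a document count (its importance score) and a
timestamp; the dictionary keys and the wrap-around treatment of repeated
music in the second scheme are illustrative assumptions.

def align_events(events, n_subclips, scheme="subclip"):
    if scheme == "subclip":
        # Scheme 1: keep only the top-N important events, N = number of music sub-clips.
        chosen = sorted(events, key=lambda e: e["n_docs"], reverse=True)[:n_subclips]
        chosen.sort(key=lambda e: e["timestamp"])      # storyboard stays in time order
        return list(zip(chosen, range(len(chosen))))
    # Scheme 2: keep every event; the music is repeated, so sub-clip indices wrap around.
    ordered = sorted(events, key=lambda e: e["timestamp"])
    return [(e, i % n_subclips) for i, e in enumerate(ordered)]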
Fig. 5 Music and storyboard synchronization with the music
sub-clip based scheme, that is, only the top important events are
presented to match the number of music sub-clips.
4.2.3 Rendering
After the alignment between the storyboard and the incidental
music, fifteen common transition effects, such as cross-fade, wipe
and dissolve, are randomly selected in our system to connect the
event slides, producing a better rich presentation in the final
rendering.
EVALUATIONS
In this section, we evaluate the performance of the proposed
approach to rich presentation and its key component, event
clustering. In the experiments, we randomly select 8 topics of
different types, including Earthquake, Halloween, Air Disaster,
US Election, Nobel Prize, Britney Spears, David Beckham, and
Harry Potter, from hot news topics at the end of 2004 and the
beginning of 2005. Once a topic is selected, the topic name is
used as a query and the relevant documents are collected from
CNN, MSNBC and BBC. More details about the selected topics
and the corresponding documents are shown in Table 1, which
lists the topic name, the time range of the collected documents,
and the numbers of documents and labeled events.
Table 1. A list of testing topics in the rich presentation evaluations
No.   Topic            Time        #doc   #event
1     Earthquake       1995-2004    976     17
2     Halloween        1995-2004    762      9
3     Air Disaster     1995-2004    210     13
4     US Election      1995-2004   2486      -
5     Britney Spears   2000-2004   1311      -
6     Nobel Prize      1995-2004    186      -
7     David Beckham    1995-2004    877      -
8     Harry Potter     2000-2004    841      -
Total                      -       7649      -
It is noted that, in the table, only 3 topics have labeled events,
while the other 5 topics do not. This is because labeling a topic is
very subjective, and it is usually hard for individuals to manually
decide the event number of a given topic. Therefore, we only label
the topics which are easy to annotate based on the criterion in the
Topic Detection and Tracking (TDT) project [21]. For example,
Halloween is a topic which is reported once a year; thus, each
year's documents can be regarded as an event. As for Earthquake
and Air Disaster, their event lists could be found on the
corresponding official websites. In the annotation, we remove the
events which have no or few (less than 4) relevant documents, and
also remove the documents not belonging to any event.
After parsing the obtained documents, for each topic we obtain
3.8 images per document on average. After further duplicate
detection, only 1.6 images per document remain. Moreover, from
each document we also obtain about 3.0 unique location entities
and 2.8 unique name entities. Words other than these entities are
taken as keywords. Fig. 6 shows the representation of an example
document with extracted entities in the XML format, from which
the event clustering is performed.
Fig. 6. XML representation of a document on "US Election" with
extracted entities
5.1 Event Clustering
As mentioned above, the approach to event clustering is evaluated
on three topics, Earthquake, Halloween, and Air Disaster, for
which the corresponding event numbers are determined and the
documents are labeled using a method similar to that of the TDT
project. However, the proposed approach does not actually
estimate the optimal event number, but uses a much larger one.
Therefore, in order to better evaluate the performance of the event
clustering algorithm and compare it with its counterpart, we use
the event number in the ground truth to initialize the cluster
number in the proposed clustering algorithm.
.....
<URL>http://news.bbc.co.uk/1/hi/world/americas/4071845.stm </URL>
<Abstract>The US battleground state of Ohio has certified the victory
of President George W Bush's in last month's poll. </Abstract>
<Date> 2004/12/6 </Date>
<NLPRESULT>
<LOCATION>
<entity> Ohio </entity> <freq>4</freq>
<entity> US </entity> <freq> 2 </freq>
</LOCATION>
<PERSON>
<entity> Bush </entity> <freq> 3 </freq>
<entity>David Cobb</entity> <freq>1</freq>
...
</PERSON>
...
<DATE>
<entity> 6 December, 2004 </entity> <freq> 1 </freq>
<entity> Friday </entity> <freq> 2 </freq>
...
</DATE>
<KEYWORDS>
...
<entity> recount </entity> <freq>7</freq>
<entity> elect </entity> <freq>3</freq>
<entity> America </entity> <freq>3</freq>
<entity> poll </entity> <freq>3</freq>
...
</KEYWORDS>
</NLPRESULT>
In the experiments, K-means, another frequently used clustering
algorithm (also used in TDT [22]), is adopted for comparison with
the proposed approach. The comparison results of the two
clustering approaches, with precision and recall for each topic, are
listed in Table 2.
Table 2. The performance comparison between our approach and
K-means on the event clustering
Topic          Precision           Recall
               K-means   Ours     K-means   Ours
Earthquake     0.74      0.87     0.63      0.74
Halloween      0.88      0.93     0.72      0.81
Air Disaster   0.57      0.68     0.55      0.61
Average        0.73      0.83     0.63      0.72
From Table 2, it can be seen that the results of our approach are
significantly better than those of K-means, in both precision and
recall. On the three testing topics, the average precision of our
approach is up to 0.83 and the average recall reaches 0.72, which
are 10% and 9% higher than those of K-means, respectively. By
tracing the process of K-means, we find that K-means often
assigns documents far away from each other on the timeline to the
same cluster, since the time information has little effect in
K-means. This also indicates the advantage of our approach with
time modeling.
The algorithms also show different performance on different
kinds of topics. For "Air Disaster", the performance is not as good
as that of the other two, since the features (words and time) of its
events are more complicated and intertwined in the feature space.
As for the topics (4-8 in Table 1) for which an objective evaluation
is not available, the clustering performance can be indirectly
reflected by the subjective evaluation of the rich presentation
reported in Section 5.2. This is because users are more satisfied
when the grouped documents shown in each event slide really
belong to the same event, and less satisfied when documents from
different events are mixed in one event slide.
5.2 Rich Presentation
It is usually difficult to find a quantitative measure for rich
presentation, since assessing the goodness of a rich presentation is
a strongly subjective task. In this paper, we carry out a preliminary
user study to evaluate the performance of the proposed rich
presentation scheme.
To indicate the performance of rich presentation, we design two
measures in the experiments, `informativeness' and `enjoyability',
following the criteria used in [7]. Here, informativeness measures
whether the subjects are satisfied with the information obtained
from the rich presentation, while enjoyability indicates whether
users feel comfortable and enjoy themselves when reading the rich
presentation. In evaluating informativeness, we also provide the
documents from which the rich presentation is generated. They
are used as a baseline, with which the subjects can more easily
judge whether the important overview information contained in
the documents is conveyed by the rich presentation. Moreover, in
order to reveal the subjects' opinion on the design of the
storyboard template, like the one shown in Fig. 3, we also ask the
subjects to evaluate the `interface design'.
In the user study, 10 volunteer subjects, including 8 males and 2
females, are invited. The subjects are around 20-35 years old, have
much experience with computers, and usually read news on the
web in their leisure time. We ask them to give a subjective score
between 1 and 5 for each measure of the rich presentation of each
testing topic (an exception is `interface design', which is the same
for every rich presentation). Here, the scores `1' to `5' stand for
unsatisfied (1), somewhat unsatisfied (2), acceptable (3), satisfied
(4) and very satisfied (5), respectively.
In the experiments, we first check the `interface design' measure.
We find that 7 out of 10 subjects are satisfied with the event
template design and the remaining three also consider it
acceptable. The average score is up to 3.9. An interesting
observation is that some subjects like the template design very
much at first glance, but feel a little bored after finishing the whole
user study, since every slide in the rich presentation of each topic
has the same appearance. This suggests that we should design
different templates for different topics to make the rich
presentation more attractive.
As for the other two measures, we average the scores across all
the subjects to represent the performance for each topic, and list
the detailed results in Table 3. It can be seen that the average
scores of both enjoyability and informativeness reach 3.7, which
indicates that most subjects are satisfied with the provided
overview information of the target topic and enjoy themselves
when reading these rich presentations.
Table 3. The evaluation results of rich presentation on each topic
No.   Topic            Informative   Enjoyable
1     Earthquake       4.3           3.2
2     Halloween        3.6           4.0
3     Air Disaster     4.0           3.4
4     US Election      4.1           4.0
5     Britney Spears   3.6           4.1
6     Nobel Prize      3.3           3.4
7     David Beckham    3.4           4.0
8     Harry Potter     3.3           3.4
Average                3.7           3.7
In the experiments, we find that informativeness depends highly
on the correlation between the presented documents and the target
topic. If the presented information is consistent with the topic,
subjects usually give a high informativeness score, as for
Earthquake and US Election; otherwise, they give a low score, as
for David Beckham and Nobel Prize. This indicates that it is quite
important to provide users with clean information about the target
topic and little noise. However, in the current system, the
documents are crawled from the web and inevitably contain much
noise, which considerably affects informativeness. We need to
consider how to prune the information of the target topic in future
work.
We also find that the enjoyability score is usually related to
informativeness. If the subjects do not get enough information
from the rich presentation, they do not enjoy it either, as for the
topics Nobel Prize and Harry Potter. Enjoyability is also
topic-related: the subjects usually feel uncomfortable when facing
miserable topics such as Earthquake and Air Disaster, although
their informativeness is quite high. On the contrary, users give a
high enjoyability score to interesting topics such as Britney Spears
and David Beckham, although their informativeness score is not
high. This is because there are usually many funny and interesting
pictures in the presentations of these topics. Another finding is
that users usually find the presentation less enjoyable if the images
and summaries in one event slide are not consistent with each
other. From this view, the high enjoyability scores in our
experiments also indicate that our event clustering algorithm
works promisingly.
CONCLUSIONS
To help users quickly grasp and go through the content of a
semantic topic, in this paper we have proposed a novel rich
presentation approach that generates a concise and informative
storyboard for the target topic, enriched with relevant multimodal
information including images, text, audio and video. In this
approach, the related multimodal information of a given topic is
first extracted from news databases. Then, the events are clustered,
and the corresponding information, such as representative images,
geographic information, and event summary, is obtained. The
information is composed into an attractive storyboard which is
finally synchronized with incidental music. A user study indicates
that the presented system works well on our testing examples.
There is still room for improving the proposed approach. First,
the proposed approach could be extended to other multimedia
databases or more general websites. For example, a standard
multimedia database like NIST TRECVID could provide a nice
platform for the implementation and evaluation of event detection
and rich presentation. Second, integrating more relevant
multimedia information (such as video clips and flashes) and more
accurate information regarding the target topic is highly desired
by users. Thus, more advanced information retrieval/extraction
techniques and other multimedia analysis techniques need to be
exploited and integrated, such as relevance ranking, mapping
schemes, detection of important or representative video clips, and
video clip summarization. We also need to design a more natural
way to incorporate video clips into the event template. Third, we
also consider designing various storyboard templates for different
kinds of topics. For example, each topic may belong to a cluster
such as politics, sports or entertainment, each of which can have a
representative template. Fourth, appropriate user interaction will
be added to make the storyboard more interactive and easier to
control. Finally, a thorough evaluation will be carried out to
evaluate the effect of each component in the framework and the
storyboard template.
REFERENCES
[1] A. Vailaya, M.A.T. Figueiredo, A. K. Jain, and H.-J. Zhang.
"Image classification for content-based indexing". IEEE
Transactions on Image Processing, Vol.10, Iss.1, 2001
[2] F. J., M.-J. Li, H.-J. Zhang, and B. Zhang. "An effective
region-based image retrieval framework". Proc. ACM
Multimedia'02, pp. 456-465, 2002
[3] J. Platt "AutoAlbum: Clustering Digital Photographs using
Probabilistic Model Merging" Proc. IEEE Workshop on
Content-Based Access of Image and Video Libraries, pp. 96-100,
2000.
[4] A. Hanjalic, R. L. Lagendijk, J. Biemond, "Automated high-level
movie segmentation for advanced video-retrieval
systems", IEEE Trans on Circuits and Systems For Video
Technology, Vol. 9, No. 4, pp. 580-588, 1999.
[5] J. Assfalg et al., "Semantic annotation of soccer videos:
automatic highlights identification," CVIU'03, vol. 92, pp.
285-305, 2003.
[6] A. Ekin, A. M. Tekalp, and R. Mehrotra, "Automatic soccer
video analysis and summarization," IEEE Trans. on Image
Processing, 12(7), pp. 796-807, 2003.
[7] Y. -F. Ma, L. Lu, H. -J. Zhang, and M.-J Li. "A User
Attention Model for Video Summarization". ACM
Multimeida'02, pp. 533-542, 2002.
[8] L. Xie, P. Xu, S.F. Chang, A. Divakaran, and H. Sun,
"Structure analysis of soccer video with domain knowledge
and hidden markov models," Pattern Recognition Letters,
vol. 25(7), pp. 767-775, 2004.
[9] L. Lu, H. Jiang, H. J. Zhang, "A Robust Audio Classification
and Segmentation Method," Proc. ACM Multimedia'01, pp.
203-211, 2001
[10] R. Cai, L. Lu, H.-J. Zhang, and L.-H. Cai, "Highlight Sound
Effects Detection in Audio Stream," Proc. ICME'03 Vol.3,
pp.37-40, 2003.
[11] Y. Rui, A. Gupta, and A. Acero, "Automatically Extracting
Highlights for TV Baseball Programs", Proc. ACM Multi-media'00
, pp.105-115, 2000.
[12] C. Snoek, and M. Worring. "Multimodal Video Indexing: A
Review of the State-of-the-art". Multimedia Tools and
Applications, Vol. 25, No. 1, pp. 5-35, 2005
[13] E.M. Voorhees, "Query expansion using lexical-semantic
relations" Proc. ACM SIGIR Conference on Research and
Development in Information Retrieval , pp 61 - 69, 1994
[14] Z.-W. Li, M.-J. Li, and W.-Y. Ma. "A Probabilistic Model for
Retrospective News Event Detection", Proc. SIGIR
Conference on Research and Development in Information
Retrieval, 2005
[15] D. M. Bikel, R. L. Schwartz, and R. M. Weischedel. "An
Algorithm That Learns What's in a Name". Machine
Learning, 34(1-3), 1999
[16] K. Nigam, A. McCallum, S. Thrun, and T. Mitchell. "Text
Classification from Labeled and Unlabeled Documents using
EM". Machine Learning, 39(2-3), 2000
[17] T. Hastie, R. Tibshirani, and J. Friedman. "The Elements of
Statistical Learning: Data Mining, Inference and Prediction".
Springer-Verlag, 2001
[18] MapPoint Web Service http://www.microsoft.com/mappoint/
products/ webservice/default.mspx
[19] X.-S. Hua, L. Lu, H.-J. Zhang. "Automated Home Video
Editing", Proc. ACM Multimedia'03, pp. 490-497, 2003
[20] J. Foote, M. Cooper, and A. Girgensohn. "Creating Music
Videos Using Automatic Media Analysis". ACM
Multimedia'02, pp.553-560, 2002.
[21] Topic Detection and Tracking (TDT) Project: http://www.
nist.gov/speech/tests/tdt/
[22] J. Allan, R. Papka, and V. Lavrenko. "On-line New Event
Detection and Tracking". Proc. SIGIR Conference on
Research and Development in Information Retrieval 98,
pp.37-45, 1998
753 | documentary and movie;Rich presentation;events clustering;Communication and Multimedia;Representative Media;Images, videos and Audio Technologies;Rich video clips and flashes;Multi-modal information;Generate storyboard;storyboard;Subjective multiple events;multimedia fusion;High-level semantics;Event clustering;multimodality;multimedia authoring |
4 | A Database Security Course on a Shoestring | Database security has paramount importance in industrial, civilian and government domains. Despite its importance, our search reveals that only a small number of database security courses are being offered. In this paper, we share our experience in developing and offering an undergraduate elective course on database security with limited resources. We believe that database security should be considered in its entirety rather than being component specific. Therefore , we emphasize that students develop and implement a database security plan for a typical real world application . In addition to the key theoretical concepts, students obtain hands-on experience with two popular database systems . We encourage students to learn independently making use of the documentation and technical resources freely available on the Internet. This way, our hope is that they will be able to adapt to emerging systems and application scenarios. | INTRODUCTION
Database systems are designed to provide efficient access
to large volumes of data. However, many application
domains require that the data access be restricted for security
reasons.
For example, an unauthorized access to
a bank database can potentially cost millions of dollars.
The federal Health Insurance Portability and Accountability
Act (HIPAA) regulates the disclosure of information from a
patient database, allowing access to health care providers,
health plans, and health care clearinghouses, simultaneously
protecting the privacy of patients. For obvious reasons, a
Department of Defense (DoD) database needs to be protected
from unauthorized access. Since many organizations
increasingly entrust their information resources with database
systems, in today's highly networked environment, the
sensitive information can be at high risk unless there are security
mechanisms in place to protect the data at the source
itself. However, a large number of databases are incorrectly
installed, configured, and maintained. This, in part, may be
attributed to the lack of database security education in our
computer science programs. We feel that a new undergraduate
course on database security will help our students face
the ever increasing challenges in this field.
Our search shows that, despite the importance, only a handful of
database security courses are being offered. Most of the courses
we found are graduate courses and are highly theoretical. We also
found a few extension program courses, which are product
specific. Although a large number of database courses exist at
both the undergraduate and graduate levels, we feel that one
reason for not offering database security courses may be the
scarcity of textbooks, reference materials, and other resources.
Realizing the importance of database security in the computer
science curriculum, [8] proposes adding a new module to the
basic database course. Since the basic database course already has
many topics to cover, we feel that the addition of new material
will not completely serve the purpose. Further, we find it difficult
to incorporate a hands-on component into such a course. Similarly,
a computer security course is too broad in scope and rarely
includes database security topics.
Therefore, we decided to develop a new undergraduate level
elective on database security. This paper is based on our
experience of offering a database security course in Spring
2005. We have adjusted the contents and assignments in
response to the feedback and course outcome. The modified
version is presented here.
Since many of our students seek industrial positions after
graduation, we have designed our course to meet their needs
with the right blend of theory and practice. The course objective
is to develop an understanding of security aspects
of databases, database administration, and database supported
applications. We collected information from alumni
as well as potential employers before finalizing the contents.
Although students were expected to gain hands-on experience
with some popular databases in our course, we tried to
focus on concepts rather than just syntax or product specific
features. Often, students were asked to learn software packages
on their own by reading the product documentation.
We also offered an online feedback page for receiving anonymous
student comments, which helped us know if students
needed additional assistance. By encouraging students to
learn and experiment on their own, we hope that they can
easily apply the learned concepts to emerging application
scenarios.
This is particularly needed since today's work environment
expects agility from employees to quickly master and develop
software systems. To facilitate participation further, we asked
students to research and make presentations on a set of specified
topics. Most importantly, since many of our educational
institutions are cash-strapped, we designed our course to execute
on a small budget.
In the next section, we detail topics, which may be included
in a database security course, with references that,
we hope, will be useful for other instructors. We also discuss
labs and assignments in detail. Finally, we conclude the
paper with an account of lessons learned and future possibilities.
DATABASE SECURITY TOPICS
Although a large number of topics can be included, we try
to focus on a few important ones that, in our judgment, are
likely to be immediately useful after graduation. We also include
topics on securing the data within a database, as well
as the security of database systems and operating systems
as suggested in [8]. Our position is that database security
should be considered as a whole rather than through a piecemeal
approach. However, we recognize that, in practice, it
is often easy to overlook some aspects of database security.
Therefore, we recommend that students develop a database
security plan. We also include other relevant topics such as
statistical database security, and security and privacy issues
of data mining. Table 1 shows the schedule of topics for a
typical sixteen week semester course on database security.
Major labs and assignments are given in Table 2.
The course begins with an "Introduction to database security",
where the objective is to highlight the importance of database
security and to motivate students to learn the rest of the topics.
2.1
Introducing Database Security
One way to emphasize the importance of database security
would be to reflect on the impact of not having security at
all in application domains such as military, medical, financial
, credit card, credit file, driving records, and insurance
databases. Students may survey the incidents of database
security breaches and evaluate the efforts to ensure database
security by industry and government.
Since database security is a combination of database technology
and computer security, basics of both will be helpful
. A discussion of security properties such as confidentiality,
integrity, availability and non-repudiation should be included.
Although an in-depth study of cryptography is not within
the scope of this course, basics of secret key cryptography
and public key cryptography will benefit students. A good
reference book we found is "Data Security and Cryptography" by
Dorothy Denning [5].

Week  Topic
1     Course Overview and Introduction to Database Security,
      Basics of Data Security and Cryptography
2     Overview of Security Models
3     Access Control Models, Covert Channels and Inference Channels
4     MySQL Security
5     Oracle Security
6     Oracle Label Security
7     Developing a Database Security Plan
8     Spring Break
9     SQL Server Security
10    Security of Statistical Databases
11    Security and Privacy Issues of Data Mining
12    Database Applications Security, SQL Injection, Defensive Programming
13    Database Intrusion Prevention, Audit, Fault Tolerance and Recovery
14    Hippocratic Databases, XML Security
15    Network Security, Biometrics
16    Final Examination Week
Table 1: Course Schedule

Digital signatures, digital certificates
and Public Key Infrastructure (PKI) [21] are other
topics to consider.
An overview of security and integrity models [4] will also
be helpful at this point.
This is the best time to introduce computer security lingo such as
subjects and objects. The differences between widely used access
control techniques may also be highlighted.
2.2
Access Control
Discretionary Access Control (DAC) mechanisms such as
capabilities, profiles, access control lists, passwords, and
permission bits may be discussed. Here we also introduce
operating system security aspects (using Windows and Linux
environments) and how they impact database security in general.
Although details are not required until we introduce Oracle
security, an overview of Role-Based Access Control (RBAC)
[6, 18] may be discussed.
Unlike the above access control techniques, in Mandatory
Access Control (MAC) the security is enforced by the system
as dictated in the security policy, not by the owner of
an object. Although there are many security models suggested
for providing Mandatory security, Bell-LaPadula [2]
model is probably the simplest to learn. Even when a system
enforces Mandatory Access Control, information leakage
through covert channels [11] and inference channels [13]
may still be possible. A few examples will help students understand
how the information leakage can take place through
such means.
Databases enforcing MAC often assign security classification
levels for objects and security clearance levels for
subjects. Access control is performed by the system based
on these levels. A lab may be developed, where students
simulate a multilevel database on an ordinary database system
. This means students will have to modify the schema to
add additional fields for storing security classification levels.
They also develop views for users having different clearance
levels. Further, to support poly-instantiation, the primary
key will have to be redefined to include security level to accommodate
the possibility of the same key values existing
at multiple security levels.
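A minimal sketch of what such a lab can look like, using SQLite purely as a
stand-in "ordinary database system" (the choice of SQLite, the schema, and
the numeric level encoding are our assumptions): the classification level
becomes part of the schema and of the primary key, so the same logical key
can exist at several levels, and each clearance level reads the data through
its own view.

import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE mission (
                   mission_id INTEGER,
                   level      INTEGER,   -- 0 = unclassified, 1 = secret, 2 = top secret
                   target     TEXT,
                   PRIMARY KEY (mission_id, level))""")   # key widened with the level
con.executemany("INSERT INTO mission VALUES (?, ?, ?)",
                [(1, 0, "training exercise"),   # same mission_id at two levels:
                 (1, 2, "actual objective")])   # poly-instantiation
# One view per clearance; a SECRET user (clearance 1) sees only rows at level <= 1.
con.execute("CREATE VIEW mission_secret AS SELECT * FROM mission WHERE level <= 1")
print(con.execute("SELECT * FROM mission_secret").fetchall())
# -> [(1, 0, 'training exercise')]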
Another topic of interest would be to explore how the Discretionary
Access Control and the Mandatory Access Control
can be combined and applied in some scenarios.
2.3
Securing Real Life Databases
The candidate database systems we chose for hands-on
experience were MySQL, Oracle, and Microsoft SQL Server.
Because of time constraints, students were able to focus only on
the first two databases, but an overview of SQL Server security
was also provided.
2.3.1
MySQL Security
With more than six million installations worldwide [14], the
simplicity and open source architecture make MySQL probably
the first database to study. The primary source of information
would be the MySQL manual itself (available from the MySQL
site [14]), particularly the section on the "MySQL Access
Privilege System". Another source, the MySQL Security
Handbook [22], explains the MySQL security system and
provides a few practical examples.
Labs/Assignments
1  Multilevel Security - Poly-instantiation
2  MySQL Grant Privilege System
3  SQL Injection
4  Oracle Security - Basic Lab
5  Database Security Plan Development
6  Backend Development for B2C Application
7  Probability Distributions, Sampling
8  Statistical Databases - Breach of Security
9  Statistical Databases - Inference Protection Techniques
10 Data Mining Security - Reading and Presentation
Table 2: Major Labs/Assignments
The MySQL Access Privilege System authenticates a user based
on user name, host name, and password. Further, it ensures that
users perform only permitted operations, based on the privileges
specified in the grant tables (namely, user, db, and host). The
format and contents of these tables are therefore of particular
importance. Since most of the critical information, including the
grant tables, is stored in a default database named mysql, the
security of the mysql database is also crucial. Students should
learn to apply the "principle of least privilege", granting only the
privileges needed to perform the task at hand.
Each student was given a MySQL instance with root level access.
Students were asked to create users and assign privileges while
monitoring the privilege tables for changes. Students also
experimented with the privilege system by manually modifying
the privilege tables.
We created two-person administrator-user teams to enable the
students to experience the system from both perspectives. Users
were assigned certain tasks to perform. Some of the tasks were
specifically designed to expose the limitations of the MySQL
privilege system. The role of the administrators was to grant
privileges just sufficient for users to perform the task. Users could
access the system in any manner they wished; in fact, users were
encouraged to expose weaknesses in the privilege assignments.
The administrators, on the other hand, controlled access based on
need-to-know, while trying not to be so restrictive that users could
not perform the required tasks. We found the users very excited to
expose security weaknesses in the privilege assignment. Although
administrators were a little embarrassed, they too were motivated
by the exercise. For the next lab session, students switched roles,
i.e., those who were administrators became users and vice versa.
MySQL supports data security by providing functions such as
ENCRYPT, DES_ENCRYPT, AES_ENCRYPT, PASSWORD,
OLD_PASSWORD and ENCODE. Since these functions may not
be safe under all circumstances, it is useful to highlight the unsafe
scenarios.
Students may also learn how to use SSL for security while
making sure that system performance is not significantly
impacted. It is also useful to study how the authentication
requirements vary when using options such as REQUIRE SSL,
REQUIRE ISSUER and REQUIRE X509. Even when using SSL,
data security can depend on the type of cipher and the key lengths
used. Therefore, students may learn how to specify these
parameters using the REQUIRE CIPHER option.
Some privileges in MySQL, if not carefully used, can expose the
system to high security risk. For example, the FILE privilege may
be misused to gain access to the system. Hence, a comprehensive
study of unsafe privileges is extremely useful.
Even when the privilege system is correctly set up and maintained,
the entire privilege system can be circumvented using a MySQL
startup option like --skip-grant-tables. On the other hand, some
startup options make the server safer. Therefore, MySQL startup
options and their security consequences must be discussed.
Many web applications have a MySQL database server deployed
as the backend, with an HTML-based form acting as the front end.
Since user input is used to generate SQL queries that interact with
the database, malicious users or programs can inject unsafe SQL
if the input is left unchecked. Basic concepts of preventing SQL
injection may also be discussed. Students may be asked to analyze
a number of SQL queries for potential vulnerabilities.
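The contrast students are asked to see can be demonstrated in a few lines.
The sketch below uses the standard-library sqlite3 module for brevity (the
MySQL connectors expose the same placeholder mechanism, with %s instead of
?); the table and the sample credentials are made up for illustration.

import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE users (name TEXT, password TEXT)")
con.execute("INSERT INTO users VALUES ('alice', 'wonderland')")

def login_unsafe(name, password):
    # String concatenation lets input such as  ' OR '1'='1  rewrite the query itself.
    query = ("SELECT * FROM users WHERE name = '%s' AND password = '%s'"
             % (name, password))
    return con.execute(query).fetchall()

def login_safe(name, password):
    # Placeholders keep user input as data, never as SQL.
    return con.execute("SELECT * FROM users WHERE name = ? AND password = ?",
                       (name, password)).fetchall()

print(login_unsafe("alice", "' OR '1'='1"))  # row returned: authentication bypassed
print(login_safe("alice", "' OR '1'='1"))    # [] : the injection attempt fails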
Other topics, which can be included are: using MySQL
network scanner to detect MySQL servers on the network
with default passwords, MySQL resource control, data backup
and recovery, auditing, and firewalls.
2.3.2
Planning Database Security
Since enforcing database security is an extremely complex task,
with a large number of factors affecting the security of a database,
the best way to approach the problem is to systematically develop
and implement a comprehensive database security plan. Therefore,
in our course, we required that students develop a database
security plan for a small Business-to-Consumer (B2C)
E-Commerce application. See [19], Ch. 7 (available online) for a
detailed exposition of database security planning. Although the
text is on Oracle security, the concepts can be applied to any
database.
2.3.3
Oracle Security
For security reasons, the computer science department was
reluctant to grant administrative privileges to students on our
Oracle server. Therefore, we ended up creating a separate Oracle
instance for the course. For each student, we created one
administrative account with DBA privileges, and the students
were then allowed to create user accounts as needed, provided
they followed a naming convention to avoid conflicting names. In
addition to the Oracle Security Handbook [12], we found the
Oracle Database Administrator's Guide [15] also useful. The
guide is available online from the Oracle Database Documentation
Library.
First, we had an Oracle Security Basics Lab. Students
were introduced to the Oracle security system through a
series of tasks. The next lab was more advanced and built upon
the database security plan developed in a previous assignment.
The task was to develop the backend for a
small B2C E-Commerce application. Students were asked
to create user accounts, roles, tables, views and triggers as
required. The privileges were to be assigned by observing
the "principle of least privilege", as per the security plan.
Further, students may also be trained to perform some standard
security checks, such as checking for default user accounts,
default passwords, users having excessive privileges (e.g., DBA,
ALTER SYSTEM, CREATE LIBRARY, CREATE ANY
TRIGGER), the security impact of the WITH ADMIN and WITH
GRANT options on privileges, EXTERNALLY authenticated
users, and the existence of database links. Students may also learn
how to display information on items such as triggers, views and
externally authenticated users. A section on the security issues of
using default Oracle-supplied roles will be useful.
Other topics to include are: Transparent Network Substrate
(TNS) security and listener management from remote
machines and setting up listener passwords, buffer overflow
attacks and prevention, auditing, and undocumented Oracle
features. Students may also be introduced to reading security
advisories and obtaining Oracle Critical Patch Updates
(CPU).
Recently, a large number of security breaches have been reported.
Interestingly, however, many of these breaches were incidents of
missing or stolen backup storage devices. Therefore, we feel it
appropriate to include a session on the security and protection
needs of exports, cold backups, hot backups, and disaster recovery
sites.
2.3.4
Oracle Label Security
Oracle Label Security provides built-in row-level access control
for high security applications. Essentially, Oracle adds a new field
to each row for storing the row's sensitivity label. Row access is
granted or denied by comparing the user's identity and security
clearance label with the row's sensitivity label. Earlier, in
Assignment 1, students simulated a multilevel database, so the
above concepts should be easy to learn at this point.
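A much-simplified sketch of the row-access decision, useful for class
discussion: a user may read a row when the clearance level dominates the row
level, the user's compartments cover the row's compartments, and, if the row
names any groups, at least one group is shared. The level names and the
tuple encoding of labels are assumptions; the real Oracle algorithm
distinguishes read and write access and has further cases.

LEVELS = {"PUBLIC": 0, "CONFIDENTIAL": 1, "SENSITIVE": 2}

def can_read(user_label, row_label):
    u_level, u_comps, u_groups = user_label
    r_level, r_comps, r_groups = row_label
    if LEVELS[u_level] < LEVELS[r_level]:          # level must dominate
        return False
    if not set(r_comps) <= set(u_comps):           # all row compartments required
        return False
    return not r_groups or bool(set(r_groups) & set(u_groups))  # some shared group

print(can_read(("SENSITIVE", {"FIN"}, {"EU"}),
               ("CONFIDENTIAL", {"FIN"}, {"EU", "US"})))   # True
print(can_read(("CONFIDENTIAL", set(), set()),
               ("SENSITIVE", set(), set())))               # False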
As a source of information on the Oracle Label Security
architecture, we used the Oracle Label Security Administrator's
Guide [16]. We covered levels, compartments, groups, session
and row labels, the label security algorithm, and management of
label security using Oracle Internet Directory.
2.3.5
Microsoft SQL Server Security
As mentioned at the beginning of this section, due to time
constraints we could not provide extensive coverage of Microsoft
SQL Server. We briefly discussed the SQL Server security model,
authentication mechanisms, authentication modes, and good
security practices for SQL servers. Students presented information
they gathered on SQL Server vulnerabilities, security breaches,
and prevention techniques. We found a few excellent articles on
SQLServerCentral.com, an online community of DBAs,
developers, and SQL Server users. We also found the SQL Server
Developer Center (http://msdn.microsoft.com/sql) useful, as it
provides a large number of resources in this area.
2.4
Statistical Security
As for the rest of the course, this section is application oriented,
giving students the gist of the concepts they need to know and
then putting them to work in the context of a real database. Thus,
the first lab is a simulation-based assignment designed as an
introduction to probability distributions, expectation, spread,
sampling methods, and sampling distributions of relevant
statistics. We find that, even for students with prior coursework in
probability and statistics, an assignment of this type is very
beneficial. The second lab presents the task of setting up a
sequence of queries so that students can extract from a database
what should have been secure information. At this point we
introduce the main conceptual techniques for inference protection,
such as the lattice model and partitioning the database entities into
populations. See [4], Ch. 5 for details. The third major assignment
aims at teaching inference protection techniques. Given a
database, the students are asked to answer queries without
disclosing sensitive information by applying restriction,
perturbation, and combined techniques.
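The flavour of both labs can be conveyed with a toy example (the salary
table, the threshold k and the noise bound are arbitrary illustrative
choices): two individually harmless SUM queries disclose one employee's
salary, and a protected query interface combines query-set-size restriction
with perturbation to block that difference attack.

import random

salaries = {"alice": 91000, "bob": 55000, "carol": 62000, "dave": 58000}

def sum_query(predicate):
    # Unprotected statistical query: sum of salaries of the matching employees.
    return sum(s for name, s in salaries.items() if predicate(name))

# Difference attack: two legitimate-looking aggregates reveal Alice's exact salary.
print(sum_query(lambda n: True) - sum_query(lambda n: n != "alice"))  # 91000

def sum_query_protected(predicate, k=2, noise=2000):
    matched = [s for name, s in salaries.items() if predicate(name)]
    # Restriction: refuse query sets that are too small or too close to the whole table.
    if len(matched) < k or len(matched) > len(salaries) - k:
        return None
    # Perturbation: release the statistic with bounded random noise added.
    return sum(matched) + random.randint(-noise, noise)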
2.5
Security Issues of Data Mining
Data mining may be misused to obtain confidential information
from a database, so we believe a course on database security
should include an overview of the security and privacy concerns
of data mining. Organizations would like to share data for
operational convenience while preventing the mining of that data
for information they do not want to disclose. Likewise, private
individuals would like to submit their personal information for
data mining without compromising their privacy, while keeping
the key association rules intact. Secure data mining techniques
appear similar to statistical security methods; however, their
computational efficiency is a major concern. We found a number
of interesting papers [3, 17, 20] that can be used for reading
assignments and group discussions.
2.6
Other Topics
Malicious users may bypass the security mechanisms provided by
an application by directly connecting to the database. Therefore,
whenever possible, stored procedures and views should be used
for providing data access. Database application security and
defensive programming were briefly covered. The semi-structured
nature of Extensible Markup Language (XML) documents makes
them ideal candidates for use in many applications, including
E-Business. Therefore, XML security [7] was also discussed.
Other topics of interest are: database intrusion detection
and prevention [17], database fault tolerance and recovery,
Hippocratic databases [1], network security [10], and biometrics
[9].
RELATED COURSES
The Department of Computer Science at the University of Alberta
has offered an independent study on database security with topics
such as security models, security mechanisms, intrusion detection
systems, and statistical database protection. The University of
Maryland University College has a graduate level course on
database security covering theory and applications, including
frameworks for discretionary and mandatory access control, data
integrity, availability and performance, secure database design,
data aggregation, data inference, secure concurrency control, and
secure transaction processing. The University of South Carolina
and George Mason University offer graduate level elective courses
on database security. Similar courses are offered at a few other
institutions, but we do not discuss them here due to space
constraints. Among the undergraduate courses we found, the
closest to the one we offered is taught at the University of
Arkansas at Little Rock. It provides database security theory and
background on the Oracle security environment.
CONCLUSIONS
Our new undergraduate elective course on database security
covers basic concepts and provides practical experience
on two popular databases. We emphasized that students develop
a database security plan that, we hope, will encourage
them to view the problem of ensuring database security as a
task that needs to be carefully planned in whole rather than
something that can be addressed in parts.
The initial offering of the course had "Data Structures" as the
only pre-requisite, because we wanted to keep the course open to a
larger audience. Students, in general, were found to be more
motivated to follow through on course work than in other courses
we have taught. We received excellent numerical scores as well as
comments from students in the departmental student evaluations.
We had a good mixture of students. All were computer science
majors, and forty-seven percent were honors students. Sixty-seven
percent of the class completed the course with an overall score of
80% or higher, with all honors students falling into this category.
However, it was felt that a few students lacked the basics to fully
grasp the material. Therefore, having a basic database technology
course as the pre-requisite would help cover more of the suggested
topics in depth.
In closing, we hope that our experience shared herein will
help other instructors develop and offer a similar course on
database security with limited resources.
REFERENCES
[1] R. Agrawal, J. Kiernan, R. Srikant, and Y. Xu.
Hippocratic Databases. In Proc. of the Very Large
Data Bases (VLDB) Conference, Hong Kong, China,
August 2002.
[2] D. Bell and L. LaPadula. Secure Computer Systems:
Mathematical Foundations. Technical Report
ESD-TR-73-278, MITRE Corporation, 1973.
[3] C. Clifton and D. Marks. Security and Privacy
Implications of Data Mining. In Workshop on Data
Mining and Knowledge Discovery, Montreal, Canada,
February 1996.
[4] S. Castano, M. G. Fugini, G. Martella, and
P. Samarati. Database Security. Addison-Wesley &
ACM Press, 1995.
[5] D. E. Denning. Cryptography and Data Security.
Addison-Wesley, 1982.
[6] D. Ferraiolo and R. Kuhn. Role-Based Access
Controls. In Proc. 15th NIST-NCSC National
Computer Security Conference, Baltimore, MD,
October 1992.
[7] B. Dournaee. XML Security. RSA Press, Berkeley,
CA, USA, 2002.
[8] M. Guimaraes, H. Mattord, and R. Austin.
Incorporating Security Components into Database
Courses. In Proc. of the InfoSecCD Conference'04,
Kennesaw, GA, September 2004.
[9] A. Jain, L. Hong, and S. Pankanti. Biometric
Identification. Commun. ACM, 43(2), 2000.
[10] C. Kaufman, R. Perlman, and M. Speciner. Network
Security: Private Communication in a Public World,
Second Edition. Prentice-Hall, 2002.
[11] B. W. Lampson. A Note on the Confinement Problem.
Commun. ACM, 16(10), October 1973.
[12] M. Theriault and A. Newman. Oracle Security
Handbook : Implement a Sound Security Plan in Your
Oracle Environment. Osborne McGraw-Hill, 2001.
[13] M. Morgenstern. Security and Inference in Multi-Level
Database and Knowledge-Base Systems. In ACM
SIGMOD Conf. on the Management of Data, San
Francisco, CA, May 1987.
[14] http://www.mysql.com.
[15] Oracle Database Administrator's Guide. Oracle
Corporation, 2001.
[16] Oracle Label Security Administrator's Guide. Oracle
Corporation, 2003.
[17] R. Agrawal and R. Srikant. Privacy-preserving Data
Mining. In Proc. of the ACM SIGMOD Conference on
Management of Data, Dallas, TX, May 2000.
[18] R. Sandhu and Q. Munawer. How to do Discretionary
Access Control Using Roles. In RBAC '98:
Proceedings of the third ACM workshop on Role-based
access control, Fairfax, VA, 1998.
[19] M. Theriault and W. Heney. Oracle Security. O'Reilly
& Associates, Inc., 1998.
[20] V. Verykios, E. Bertino, I. Fovino, L. Provenza, Y.
Saygin and Y. Theodoridis. State-of-the-art in Privacy
Preserving Data Mining. SIGMOD Record, 33(1),
2004.
[21] W. Ford and M. S. Baum. Secure Electronic
Commerce: Building the Infrastructure for Digital
Signatures and Encryption. Prentice Hall, 2000.
[22] Wrox Author Team. MySQL Security Handbook. Wrox
Press, 2003.
| Database security course;Statistical Database Security;Statistical Security;Database security;High Risk of Sensitive Information;Database security plan;Security breaches;Labs;Database system;Undergraduate Database Security Course;Real Life Database Security;Database Security Education;XML Security;Undergraduate students;Cryptography;Secure information;Hands-on experience;Database Security Plan;Data access;Security Plan;Database Privacy;Administrators;Hands-on;Database Security Course;Undergraduate course;Access Privilege System;Database Security;Privacy Issues;Laboratory/Active Learning;Right Blend of Theory and Practice;Assignments;Real Life Databases Hands-on;MySQL Security;Topics;Importance of Database Security;Few Database Security Courses;Oracle Security |
40 | Automatic Extraction of Titles from General Documents using Machine Learning | In this paper, we propose a machine learning approach to title extraction from general documents. By general documents, we mean documents that can belong to any one of a number of specific genres, including presentations, book chapters, technical papers, brochures, reports, and letters. Previously, methods have been proposed mainly for title extraction from research papers. It has not been clear whether it could be possible to conduct automatic title extraction from general documents. As a case study, we consider extraction from Office including Word and PowerPoint. In our approach, we annotate titles in sample documents (for Word and PowerPoint respectively) and take them as training data, train machine learning models, and perform title extraction using the trained models. Our method is unique in that we mainly utilize formatting information such as font size as features in the models. It turns out that the use of formatting information can lead to quite accurate extraction from general documents. Precision and recall for title extraction from Word is 0.810 and 0.837 respectively, and precision and recall for title extraction from PowerPoint is 0.875 and 0.895 respectively in an experiment on intranet data. Other important new findings in this work include that we can train models in one domain and apply them to another domain, and more surprisingly we can even train models in one language and apply them to another language. Moreover, we can significantly improve search ranking results in document retrieval by using the extracted titles. | INTRODUCTION
Metadata of documents is useful for many kinds of document
processing such as search, browsing, and filtering. Ideally,
metadata is defined by the authors of documents and is then used
by various systems. However, people seldom define document
metadata by themselves, even when they have convenient
metadata definition tools [26]. Thus, how to automatically extract
metadata from the bodies of documents turns out to be an
important research issue.
Methods for performing the task have been proposed. However,
the focus was mainly on extraction from research papers. For
instance, Han et al. [10] proposed a machine learning based
method to conduct extraction from research papers. They
formalized the problem as that of classification and employed
Support Vector Machines as the classifier. They mainly used
linguistic features in the model.
In this paper, we consider metadata extraction from general
documents. By general documents, we mean documents that may
belong to any one of a number of specific genres. General
documents are more widely available in digital libraries, intranets
and the internet, and thus investigation on extraction from them is
sorely needed.
1 The work was conducted when the first author was visiting
Microsoft Research Asia.
Research papers usually have well-formed styles
and noticeable characteristics. In contrast, the styles of general
documents can vary greatly. It has not been clarified whether a
machine learning based approach can work well for this task.
There are many types of metadata: title, author, date of creation,
etc. As a case study, we consider title extraction in this paper.
General documents can be in many different file formats:
Microsoft Office, PDF (PS), etc. As a case study, we consider
extraction from Office including Word and PowerPoint.
We take a machine learning approach. We annotate titles in
sample documents (for Word and PowerPoint respectively) and
take them as training data to train several types of models, and
perform title extraction using any one type of the trained models.
In the models, we mainly utilize formatting information such as
font size as features. We employ the following models: Maximum
Entropy Model, Perceptron with Uneven Margins, Maximum
Entropy Markov Model, and Voted Perceptron.
In this paper, we also investigate the following three problems,
which did not seem to have been examined previously.
(1) Comparison between models: among the models above, which
model performs best for title extraction;
(2) Generality of model: whether it is possible to train a model on
one domain and apply it to another domain, and whether it is
possible to train a model in one language and apply it to another
language;
(3) Usefulness of extracted titles: whether extracted titles can
improve document processing such as search.
Experimental results indicate that our approach works well for
title extraction from general documents. Our method can
significantly outperform the baselines: one that always uses the
first lines as titles and the other that always uses the lines in the
largest font sizes as titles. Precision and recall for title extraction
from Word are 0.810 and 0.837 respectively, and precision and
recall for title extraction from PowerPoint are 0.875 and 0.895
respectively. It turns out that the use of format features is the key
to successful title extraction.
(1) We have observed that Perceptron based models perform
better in terms of extraction accuracies. (2) We have empirically
verified that the models trained with our approach are generic in
the sense that they can be trained on one domain and applied to
another, and they can be trained in one language and applied to
another. (3) We have found that using the extracted titles we can
significantly improve precision of document retrieval (by 10%).
We conclude that we can indeed conduct reliable title extraction
from general documents and use the extracted results to improve
real applications.
The rest of the paper is organized as follows. In section 2, we
introduce related work, and in section 3, we explain the
motivation and problem setting of our work. In section 4, we
describe our method of title extraction, and in section 5, we
describe our method of document retrieval using extracted titles.
Section 6 gives our experimental results. We make concluding
remarks in section 7.
RELATED WORK
Methods have been proposed for performing automatic metadata
extraction from documents; however, the main focus was on
extraction from research papers.
The proposed methods fall into two categories: the rule based
approach and the machine learning based approach.
Giuffrida et al. [9], for instance, developed a rule-based system for
automatically extracting metadata from research papers in
Postscript. They used rules like "titles are usually located on the
upper portions of the first pages and they are usually in the largest
font sizes". Liddy et al. [14] and Yilmazel el al. [23] performed
metadata extraction from educational materials using rule-based
natural language processing technologies. Mao et al. [16] also
conducted automatic metadata extraction from research papers
using rules on formatting information.
The rule-based approach can achieve high performance. However,
it also has disadvantages. It is less adaptive and robust when
compared with the machine learning approach.
Han et al. [10], for instance, conducted metadata extraction with
the machine learning approach. They viewed the problem as that
of classifying the lines in a document into the categories of
metadata and proposed using Support Vector Machines as the
classifier. They mainly used linguistic information as features.
They reported high extraction accuracy from research papers in
terms of precision and recall.
2.2 Information Extraction
Metadata extraction can be viewed as an application of
information extraction, in which given a sequence of instances, we
identify a subsequence that represents information in which we
are interested. Hidden Markov Model [6], Maximum Entropy
Model [1, 4], Maximum Entropy Markov Model [17], Support
Vector Machines [3], Conditional Random Field [12], and Voted
Perceptron [2] are widely used information extraction models.
Information extraction has been applied, for instance, to part-of-speech
tagging [20], named entity recognition [25] and table
extraction [19].
2.3 Search Using Title Information
Title information is useful for document retrieval.
In the system Citeseer, for instance, Giles et al. managed to
extract titles from research papers and make use of the extracted
titles in metadata search of papers [8].
In web search, the title fields (i.e., file properties) and anchor texts
of web pages (HTML documents) can be viewed as `titles' of the
pages [5]. Many search engines seem to utilize them for web page
retrieval [7, 11, 18, 22]. Zhang et al., found that web pages with
well-defined metadata are more easily retrieved than those without
well-defined metadata [24].
To the best of our knowledge, no research has been conducted on
using extracted titles from general documents (e.g., Office
documents) for search of the documents.
MOTIVATION AND PROBLEM SETTING
We consider the issue of automatically extracting titles from
general documents.
By general documents, we mean documents that belong to one of
any number of specific genres. The documents can be
presentations, books, book chapters, technical papers, brochures,
reports, memos, specifications, letters, announcements, or resumes.
General documents are more widely available in digital libraries,
intranets, and the internet, and thus investigation of title extraction
from them is sorely needed.
Figure 1 shows an estimate on distributions of file formats on
intranet and internet [15]. Office and PDF are the main file
formats on the intranet. Even on the internet, the number of documents in these
formats is still not negligible, given the extremely large size of the internet. In
this paper, without loss of generality, we take Office documents as
an example.
Figure 1. Distributions of file formats in internet and intranet.
For Office documents, users can define titles as file properties
using a feature provided by Office. We found in an experiment,
however, that users seldom use the feature and thus titles in file
properties are usually very inaccurate. That is to say, titles in file
properties are usually inconsistent with the `true' titles in the file
bodies that are created by the authors and are visible to readers.
We collected 6,000 Word and 6,000 PowerPoint documents from
an intranet and the internet and examined how many titles in the
file properties are correct. We found that surprisingly the accuracy
was only 0.265 (cf., Section 6.3 for details). A number of reasons
can be considered. For example, if one creates a new file by
copying an old file, then the file property of the new file will also
be copied from the old file.
In another experiment, we found that Google uses the titles in file
properties of Office documents in search and browsing, but the
titles are not very accurate. We created 50 queries to search Word
and PowerPoint documents and examined the top 15 results of
each query returned by Google. We found that nearly all the titles
presented in the search results were from the file properties of the
documents. However, only 0.272 of them were correct.
Actually, `true' titles usually exist at the beginnings of the bodies
of documents. If we can accurately extract the titles from the
bodies of documents, then we can exploit reliable title information
in document processing. This is exactly the problem we address in
this paper.
More specifically, given a Word document, we are to extract the
title from the top region of the first page. Given a PowerPoint
document, we are to extract the title from the first slide. A title
sometimes consists of a main title and one or two subtitles. We
only consider extraction of the main title.
As baselines for title extraction, we use that of always using the
first lines as titles and that of always using the lines with largest
font sizes as titles.
Figure 2. Title extraction from Word document.
Figure 3. Title extraction from PowerPoint document.
Next, we define a `specification' for human judgments in title data
annotation. The annotated data will be used in training and testing
of the title extraction methods.
Summary of the specification: The title of a document should be
identified on the basis of common sense, if there is no difficulty in
the identification. However, there are many cases in which the
identification is not easy. There are some rules defined in the
specification that guide identification for such cases. The rules
include "a title is usually in consecutive lines in the same format",
"a document can have no title", "titles in images are not
considered", "a title should not contain words like `draft',
`whitepaper', etc", "if it is difficult to determine which is the title,
select the one in the largest font size", and "if it is still difficult to
determine which is the title, select the first candidate". (The
specification covers all the cases we have encountered in data
annotation.)
Figures 2 and 3 show examples of Office documents from which
we conduct title extraction. In Figure 2, `Differences in Win32
API Implementations among Windows Operating Systems' is the
title of the Word document. `Microsoft Windows' on the top of
this page is a picture and thus is ignored. In Figure 3, `Building
Competitive Advantages through an Agile Infrastructure' is the
title of the PowerPoint document.
We have developed a tool for annotation of titles by human
annotators. Figure 4 shows a snapshot of the tool.
Figure 4. Title annotation tool.
TITLE EXTRACTION METHOD
Title extraction based on machine learning consists of training and
extraction. The same pre-processing step occurs before training
and extraction.
During pre-processing, from the top region of the first page of a
Word document or the first slide of a PowerPoint document a
number of units for processing are extracted. If a line (lines are
separated by `return' symbols) only has a single format, then the
line will become a unit. If a line has several parts and each of
them has its own format, then each part will become a unit.
Each
unit will be treated as an instance in learning. A unit contains not
only content information (linguistic information) but also
formatting information. The input to pre-processing is a document
and the output of pre-processing is a sequence of units (instances).
Figure 5 shows the units obtained from the document in Figure 2.
Figure 5. Example of units.
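As a rough illustration of this pre-processing step, the following Python sketch merges consecutive text spans of a line that share the same format into a single unit. The Unit fields and the span representation are illustrative assumptions, not the paper's actual implementation.

```python
from dataclasses import dataclass

@dataclass
class Unit:
    text: str
    font_size: float
    bold: bool
    alignment: str

def line_to_units(spans):
    """Split one line (a list of (text, font_size, bold, alignment) spans)
    into processing units: spans with identical formatting are merged,
    and a new unit starts whenever the formatting changes."""
    units = []
    for text, font_size, bold, alignment in spans:
        if units and (units[-1].font_size, units[-1].bold, units[-1].alignment) == (font_size, bold, alignment):
            units[-1].text += text          # same format, extend the current unit
        else:
            units.append(Unit(text, font_size, bold, alignment))
    return units
```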
In learning, the input is sequences of units where each sequence
corresponds to a document. We take labeled units (labeled as
title_begin, title_end, or other) in the sequences as training data
and construct models for identifying whether a unit is title_begin
title_end, or other. We employ four types of models: Perceptron,
Maximum Entropy (ME), Perceptron Markov Model (PMM), and
Maximum Entropy Markov Model (MEMM).
In extraction, the input is a sequence of units from one document.
We employ one type of model to identify whether a unit is
title_begin, title_end, or other. We then extract units from the unit
labeled with `title_begin' to the unit labeled with `title_end'. The
result is the extracted title of the document.
The unique characteristic of our approach is that we mainly utilize
formatting information for title extraction. Our assumption is that
although general documents vary in styles, their formats have
certain patterns and we can learn and utilize the patterns for title
extraction. This is in contrast to the work by Han et al., in which
only linguistic features are used for extraction from research
papers.
4.2 Models
The four models actually can be considered in the same metadata
extraction framework. That is why we apply them together to our
current problem.
Each input is a sequence of instances $x_1 x_2 \cdots x_k$ together with a
sequence of labels $y_1 y_2 \cdots y_k$, where $x_i$ and $y_i$ represent an instance
and its label, respectively ($i = 1, 2, \ldots, k$). Recall that an instance
here represents a unit. A label represents title_begin, title_end, or
other. Here, $k$ is the number of units in a document.
In learning, we train a model which can be generally denoted as a
conditional probability distribution $P(Y_1 \cdots Y_k \mid X_1 \cdots X_k)$, where
$X_i$ and $Y_i$ denote random variables taking instance $x_i$ and label $y_i$
as values, respectively ($i = 1, 2, \ldots, k$).
[Figure 6 depicts the two components of the framework: a Learning Tool that estimates the conditional distribution $P(Y_1 \cdots Y_k \mid X_1 \cdots X_k)$ from labeled training sequences, and an Extraction Tool that outputs $\arg\max P(y_{m1} \cdots y_{mk} \mid x_{m1} \cdots x_{mk})$ for a new sequence of instances $x_{m1} x_{m2} \cdots x_{mk}$.]
Figure 6. Metadata extraction model.
We can make assumptions about the general model in order to
make it simple enough for training.
For example, we can assume that $Y_1, \ldots, Y_k$ are independent of each other given $X_1, \ldots, X_k$. Thus, we have
$$P(Y_1 \cdots Y_k \mid X_1 \cdots X_k) = P(Y_1 \mid X_1) \cdots P(Y_k \mid X_k)$$
In this way, we decompose the model into a number of classifiers.
We train the classifiers locally using the labeled data. As the
classifier, we employ the Perceptron or Maximum Entropy model.
We can also assume that the first order Markov property holds for $Y_1, \ldots, Y_k$ given $X_1, \ldots, X_k$. Thus, we have
$$P(Y_1 \cdots Y_k \mid X_1 \cdots X_k) = P(Y_1 \mid X_1) \cdot P(Y_2 \mid Y_1, X_2) \cdots P(Y_k \mid Y_{k-1}, X_k)$$
Again, we obtain a number of classifiers. However, the classifiers
are conditioned on the previous label. When we employ the
Perceptron or Maximum Entropy model as a classifier, the models
become a Perceptron Markov Model or a Maximum Entropy Markov
Model, respectively. That is to say, the two Markovian models are more
precise.
In extraction, given a new sequence of instances, we resort to one
of the constructed models to assign a sequence of labels to the
sequence of instances, i.e., perform extraction.
For Perceptron and ME, we assign labels locally and combine the
results globally later using heuristics. Specifically, we first
identify the most likely title_begin. Then we find the most likely
title_end within three units after the title_begin. Finally, we
extract as a title the units between the title_begin and the title_end.
For PMM and MEMM, we employ the Viterbi algorithm to find
the globally optimal label sequence.
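For Perceptron and ME, the heuristic combination of the local decisions can be sketched as follows. The per-unit score lists p_begin and p_end stand for the classifiers' outputs (e.g., margins or probabilities) and are assumptions made for illustration only.

```python
def extract_title(units, p_begin, p_end):
    """Pick the most likely title_begin, then the most likely title_end
    within three units after it, and return the units in between."""
    begin = max(range(len(units)), key=lambda i: p_begin[i])
    window = range(begin, min(begin + 4, len(units)))   # begin .. begin+3
    end = max(window, key=lambda i: p_end[i])
    return units[begin:end + 1]
```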
In this paper, for Perceptron, we actually employ an improved
variant of it, called Perceptron with Uneven Margin [13]. This
version of Perceptron can work well especially when the number
of positive instances and the number of negative instances differ
greatly, which is exactly the case in our problem.
We also employ an improved version of Perceptron Markov
Model in which the Perceptron model is the so-called Voted
Perceptron [2]. In addition, in training, the parameters of the
model are updated globally rather than locally.
4.3 Features
There are two types of features: format features and linguistic
features. We mainly use the former. The features are used for both
the title-begin and the title-end classifiers.
4.3.1 Format Features
Font Size: There are four binary features that represent the
normalized font size of the unit (recall that a unit has only one
type of font).
If the font size of the unit is the largest in the document, then the
first feature will be 1, otherwise 0. If the font size is the smallest
in the document, then the fourth feature will be 1, otherwise 0. If
the font size is above the average font size and not the largest in
the document, then the second feature will be 1, otherwise 0. If the
font size is below the average font size and not the smallest, the
third feature will be 1, otherwise 0.
It is necessary to conduct normalization on font sizes. For
example, in one document the largest font size might be `12pt',
while in another the smallest one might be `18pt'.
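A minimal sketch of these four binary features, assuming unit_size is the font size of the current unit and all_sizes contains the font sizes of all units in the document (normalization is implicit in comparing within the document):

```python
def font_size_features(unit_size, all_sizes):
    """Four binary font-size features: largest, above average (not largest),
    below average (not smallest), and smallest, as described above."""
    largest, smallest = max(all_sizes), min(all_sizes)
    average = sum(all_sizes) / len(all_sizes)
    return [
        int(unit_size == largest),
        int(unit_size > average and unit_size != largest),
        int(unit_size < average and unit_size != smallest),
        int(unit_size == smallest),
    ]
```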
Boldface: This binary feature represents whether or not the
current unit is in boldface.
Alignment: There are four binary features that respectively
represent the location of the current unit: `left', `center', `right',
and `unknown alignment'.
The following format features with respect to `context' play an
important role in title extraction.
Empty Neighboring Unit: There are two binary features that
represent, respectively, whether or not the previous unit and the
current unit are blank lines.
Font Size Change: There are two binary features that represent,
respectively, whether or not the font size of the previous unit and
the font size of the next unit differ from that of the current unit.
Alignment Change: There are two binary features that represent,
respectively, whether or not the alignment of the previous unit and
the alignment of the next unit differ from that of the current one.
Same Paragraph: There are two binary features that represent,
respectively, whether or not the previous unit and the next unit are
in the same paragraph as the current unit.
4.3.2 Linguistic Features
The linguistic features are based on key words.
Positive Word: This binary feature represents whether or not the
current unit begins with one of the positive words. The positive
words include `title:', `subject:', `subject line:' For example, in
some documents the lines of titles and authors have the same
formats. However, if lines begin with one of the positive words,
then it is likely that they are title lines.
Negative Word: This binary feature represents whether or not the
current unit begins with one of the negative words. The negative
words include `To', `By', `created by', `updated by', etc.
There are more negative words than positive words. The above
linguistic features are language dependent.
Word Count: A title should not be too long. We heuristically
create four intervals: [1, 2], [3, 6], [7, 9] and [9, ∞) and define one
feature for each interval. If the number of words in a title falls into
an interval, then the corresponding feature will be 1; otherwise 0.
Ending Character: This feature represents whether the unit ends
with `:', `-', or other special characters. A title usually does not
end with such a character.
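The linguistic and length features can be sketched as follows; the word lists are abbreviated and the exact matching rules are illustrative assumptions rather than the paper's implementation.

```python
POSITIVE_WORDS = ("title:", "subject:", "subject line:")
NEGATIVE_WORDS = ("to", "by", "created by", "updated by")

def linguistic_features(unit_text):
    """Positive word, negative word, word-count interval, and ending-character
    features for one unit, each encoded as a binary value."""
    text = unit_text.strip().lower()
    n_words = len(text.split())
    intervals = [(1, 2), (3, 6), (7, 9), (9, float("inf"))]
    return (
        [int(text.startswith(POSITIVE_WORDS)),
         int(text.startswith(NEGATIVE_WORDS))]
        + [int(lo <= n_words <= hi) for lo, hi in intervals]
        + [int(text.endswith((":", "-")))]
    )
```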
DOCUMENT RETRIEVAL METHOD
We describe our method of document retrieval using extracted
titles.
Typically, in information retrieval a document is split into a
number of fields including body, title, and anchor text. A ranking
function in search can use different weights for different fields of
the document. Also, titles are typically assigned high weights,
indicating that they are important for document retrieval. As
explained previously, our experiment has shown that a significant
number of documents actually have incorrect titles in the file
properties, and thus in addition to using them we use the extracted
titles as one more field of the document. By doing this, we attempt
to improve the overall precision.
In this paper, we employ a modification of BM25 that allows field
weighting [21]. As fields, we make use of body, title, extracted
title and anchor. First, for each term in the query we count the
term frequency in each field of the document; each field
frequency is then weighted according to the corresponding weight
parameter:
$$wtf_t = \sum_{f} w_f \cdot tf_{t,f}$$
Similarly, we compute the document length as a weighted sum of
lengths of each field. Average document length in the corpus
becomes the average of all weighted document lengths.
$$wdl = \sum_{f} w_f \cdot dl_f$$
In our experiments we used $k_1 = 1.8$ and $b = 0.75$. The weight for content was 1.0, title was 10.0, anchor was 10.0, and extracted title was 5.0.
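A small sketch of this field-weighted scoring, assuming a simple dictionary layout for per-field term frequencies (doc_fields), field weights, and document frequencies (df); this illustrates the formulas above rather than the actual retrieval system.

```python
import math

def bm25f_score(query_terms, doc_fields, field_weights, df, n_docs, avwdl,
                k1=1.8, b=0.75):
    """Weighted term frequencies and weighted document length as defined above,
    plugged into the BM25 formula with field weighting."""
    wdl = sum(field_weights[f] * sum(tf.values()) for f, tf in doc_fields.items())
    score = 0.0
    for t in query_terms:
        wtf = sum(field_weights[f] * doc_fields[f].get(t, 0) for f in doc_fields)
        if wtf == 0 or df.get(t, 0) == 0:
            continue                        # term absent from document or corpus
        norm = k1 * ((1 - b) + b * wdl / avwdl) + wtf
        score += (k1 + 1) * wtf / norm * math.log(n_docs / df[t])
    return score
```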
EXPERIMENTAL RESULTS
We used two data sets in our experiments.
First, we downloaded and randomly selected 5,000 Word
documents and 5,000 PowerPoint documents from an intranet of
Microsoft. We call it MS hereafter.
Second, we downloaded and randomly selected 500 Word and 500
PowerPoint documents from the DotGov and DotCom domains on
the internet, respectively.
Figure 7 shows the distributions of the genres of the documents.
We see that the documents are indeed `general documents' as we
define them.
Figure 7. Distributions of document genres.
Third, a data set in Chinese was also downloaded from the internet.
It includes 500 Word documents and 500 PowerPoint documents
in Chinese.
We manually labeled the titles of all the documents, on the basis
of our specification.
Not all the documents in the two data sets have titles. Table 1
shows the percentages of the documents having titles. We see that
DotCom and DotGov have more PowerPoint documents with titles
than MS. This might be because PowerPoint documents published
on the internet are more formal than those on the intranet.
Table 1. The portion of documents with titles
Type          MS       DotCom   DotGov
Word          75.7%    77.8%    75.6%
PowerPoint    82.1%    93.4%    96.4%
In our experiments, we conducted evaluations on title extraction in
terms of precision, recall, and F-measure. The evaluation
measures are defined as follows:
Precision:
P = A / ( A + B )
Recall:
R = A / ( A + C )
F-measure:
F1 = 2PR / ( P + R )
Here, A, B, C, and D are numbers of documents as those defined
in Table 2.
Table 2. Contingency table with regard to title extraction
                 Is title   Is not title
Extracted        A          B
Not extracted    C          D
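For reference, the three measures computed from the counts of Table 2 can be written as the following small sketch:

```python
def evaluation_measures(a, b, c):
    """Precision, recall and F1 from the contingency counts of Table 2
    (a: correctly extracted, b: wrongly extracted, c: missed titles)."""
    precision = a / (a + b)
    recall = a / (a + c)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1
```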
6.2 Baselines
We test the accuracies of the two baselines described in section 3.
They are denoted as `largest font size' and `first line'
respectively.
6.3 Accuracy of Titles in File Properties
We investigate how many titles in the file properties of the
documents are reliable. We view the titles annotated by humans as
true titles and test how many titles in the file properties can
approximately match with the true titles. We use Edit Distance to
conduct the approximate match. (Approximate match is only used
in this evaluation, because human annotated titles can sometimes
differ slightly on the surface from the titles in file properties,
e.g., contain extra spaces.)
Given string A and string B:
if ( (D == 0) or ( D / ( La + Lb ) < λ ) ) then string A = string B
D: Edit Distance between string A and string B
La: length of string A
Lb: length of string B
λ: threshold, set to 0.1
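A small sketch of this approximate matching test, using a standard dynamic-programming edit distance (the helper name and threshold default follow the criterion above):

```python
def edit_distance(a, b):
    """Standard Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,               # deletion
                            curr[j - 1] + 1,           # insertion
                            prev[j - 1] + (ca != cb))) # substitution
        prev = curr
    return prev[-1]

def approx_match(a, b, threshold=0.1):
    """Strings count as equal if identical or if the edit distance is small
    relative to their combined length."""
    d = edit_distance(a, b)
    return d == 0 or d / (len(a) + len(b)) < threshold
```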
The complete field-weighted ranking function is
$$BM25F = \sum_{t}\frac{(k_1 + 1)\cdot wtf_t}{k_1\cdot\left((1-b) + b\cdot\frac{wdl}{avwdl}\right) + wtf_t}\cdot\log\frac{N}{n_t}$$
where $N$ is the total number of documents and $n_t$ is the number of documents containing term $t$.
Table 3. Accuracies of titles in file properties
File Type    Domain    Precision   Recall   F1
Word         MS        0.299       0.311    0.305
Word         DotCom    0.210       0.214    0.212
Word         DotGov    0.182       0.177    0.180
PowerPoint   MS        0.229       0.245    0.237
PowerPoint   DotCom    0.185       0.186    0.186
PowerPoint   DotGov    0.180       0.182    0.181
6.4 Comparison with Baselines
We conducted title extraction from the first data set (Word and
PowerPoint in MS). As the model, we used Perceptron.
We conduct 4-fold cross validation. Thus, all the results reported
here are those averaged over 4 trials. Tables 4 and 5 show the
results. We see that Perceptron significantly outperforms the
baselines. In the evaluation, we use exact matching between the
true titles annotated by humans and the extracted titles.
Table 4. Accuracies of title extraction with Word
                               Precision   Recall   F1
Model        Perceptron        0.810       0.837    0.823
Baselines    Largest font size 0.700       0.758    0.727
             First line        0.707       0.767    0.736
Table 5. Accuracies of title extraction with PowerPoint
                               Precision   Recall   F1
Model        Perceptron        0.875       0.895    0.885
Baselines    Largest font size 0.844       0.887    0.865
             First line        0.639       0.671    0.655
We see that the machine learning approach can achieve good
performance in title extraction. For Word documents both
precision and recall of the approach are 8 percent higher than
those of the baselines. For PowerPoint both precision and recall of
the approach are 2 percent higher than those of the baselines.
We conduct significance tests. The results are shown in Table 6.
Here, `Largest' denotes the baseline of using the largest font size,
`First' denotes the baseline of using the first line. The results
indicate that the improvements of machine learning over baselines
are statistically significant (in the sense that the p-value < 0.05).
Table 6. Sign test results
Document Type   Sign test between        p-value
Word            Perceptron vs. Largest   3.59e-26
                Perceptron vs. First     7.12e-10
PowerPoint      Perceptron vs. Largest   0.010
                Perceptron vs. First     5.13e-40
We see, from the results, that the two baselines can work well for
title extraction, suggesting that font size and position information
are most useful features for title extraction. However, it is also
obvious that using only these two features is not enough. There
are cases in which all the lines have the same font size (i.e., the
largest font size), or cases in which the lines with the largest font
size only contain general descriptions like `Confidential', `White
paper', etc. For those cases, the `largest font size' method cannot
work well. For similar reasons, the `first line' method alone
cannot work well, either. With the combination of different
features (evidence in title judgment), Perceptron can outperform
Largest and First.
We also investigated the performance of using linguistic features alone
and found that it does not work well. It seems that the format
features play the major role and the linguistic features serve as
supplements.
Figure 8. An example Word document.
Figure 9. An example PowerPoint document.
We conducted an error analysis on the results of Perceptron. We
found that the errors fell into three categories. (1) About one third
of the errors were related to `hard cases'. In these documents, the
layouts of the first pages were difficult to understand, even for
humans. Figures 8 and 9 show examples. (2) Nearly one fourth of
the errors were from the documents which do not have true titles
but only contain bullets. Since we conduct extraction from the top
regions, it is difficult to get rid of these errors with the current
approach. (3) Confusions between main titles and subtitles were
another type of error. Since we only labeled the main titles as
titles, the extractions of both titles were considered incorrect. This
type of error does little harm to document processing like search,
however.
6.5 Comparison between Models
To compare the performance of different machine learning models,
we conducted another experiment. Again, we perform 4-fold cross
validation on the first data set (MS). Table 7, 8 shows the results
of all the four models.
It turns out that Perceptron and PMM perform the best, followed
by MEMM, and ME performs the worst. In general, the
Markovian models perform better than or as well as their classifier
counterparts. This seems to be because the Markovian models are
trained globally, while the classifiers are trained locally. The
Perceptron based models perform better than the ME based
counterparts. This seems to be because the Perceptron based
models are created to make better classifications, while ME
models are constructed for better prediction.
Table 7. Comparison between different learning models for
title extraction with Word
Model        Precision   Recall   F1
Perceptron   0.810       0.837    0.823
MEMM         0.797       0.824    0.810
PMM          0.827       0.823    0.825
ME           0.801       0.621    0.699
Table 8. Comparison between different learning models for
title extraction with PowerPoint
Model        Precision   Recall   F1
Perceptron   0.875       0.895    0.885
MEMM         0.841       0.861    0.851
PMM          0.873       0.896    0.885
ME           0.753       0.766    0.759
6.6 Domain Adaptation
We apply the model trained with the first data set (MS) to the
second data set (DotCom and DotGov). Tables 9-12 show the
results.
Table 9. Accuracies of title extraction with Word in DotGov
                               Precision   Recall   F1
Model        Perceptron        0.716       0.759    0.737
Baselines    Largest font size 0.549       0.619    0.582
             First line        0.462       0.521    0.490
Table 10. Accuracies of title extraction with PowerPoint in
DotGov
                               Precision   Recall   F1
Model        Perceptron        0.900       0.906    0.903
Baselines    Largest font size 0.871       0.888    0.879
             First line        0.554       0.564    0.559
Table 11. Accuracies of title extraction with Word in DotCom
                               Precision   Recall   F1
Model        Perceptron        0.832       0.880    0.855
Baselines    Largest font size 0.676       0.753    0.712
             First line        0.577       0.643    0.608
Table 12. Performance of PowerPoint document title
extraction in DotCom
                               Precision   Recall   F1
Model        Perceptron        0.910       0.903    0.907
Baselines    Largest font size 0.864       0.886    0.875
             First line        0.570       0.585    0.577
From the results, we see that the models can be adapted to
different domains well. There is almost no drop in accuracy. The
results indicate that the patterns of title formats exist across
different domains, and it is possible to construct a domain
independent model by mainly using formatting information.
6.7 Language Adaptation
We apply the model trained with the data in English (MS) to the
data set in Chinese.
Tables 13-14 show the results.
Table 13. Accuracies of title extraction with Word in Chinese
                               Precision   Recall   F1
Model        Perceptron        0.817       0.805    0.811
Baselines    Largest font size 0.722       0.755    0.738
             First line        0.743       0.777    0.760
Table 14. Accuracies of title extraction with PowerPoint in
Chinese
                               Precision   Recall   F1
Model        Perceptron        0.766       0.812    0.789
Baselines    Largest font size 0.753       0.813    0.782
             First line        0.627       0.676    0.650
We see that the models can be adapted to a different language.
There are only small drops in accuracy. Obviously, the linguistic
features do not work for Chinese, but the effect of not using them
is negligible. The results indicate that the patterns of title formats
exist across different languages.
From the domain adaptation and language adaptation results, we
conclude that the use of formatting information is the key to a
successful extraction from general documents.
6.8 Search with Extracted Titles
We performed experiments on using title extraction for document
retrieval. As a baseline, we employed BM25 without using
extracted titles. The ranking mechanism was as described in
Section 5. The weights were heuristically set. We did not conduct
optimization on the weights.
The evaluation was conducted on a corpus of 1.3 M documents
crawled from the intranet of Microsoft using 100 evaluation
queries obtained from this intranet's search engine query logs. 50
queries were from the most popular set, while the other 50 queries
were chosen randomly. Users were asked to provide judgments of
the degree of document relevance on a scale of 1 to 5 (1
meaning detrimental, 2 bad, 3 fair, 4 good and 5 excellent).
Figure 10 shows the results. In the chart two sets of precision
results were obtained by either considering good or excellent
documents as relevant (left 3 bars with relevance threshold 0.5), or
by considering only excellent documents as relevant (right 3 bars
with relevance threshold 1.0).
Figure 10. Search ranking results.
Figure 10 shows different document retrieval results with different
ranking functions in terms of precision @10, precision @5 and
reciprocal rank:
Blue bar BM25 including the fields body, title (file
property), and anchor text.
Purple bar BM25 including the fields body, title (file
property), anchor text, and extracted title.
With the additional field of extracted title included in BM25 the
precision @10 increased from 0.132 to 0.145, or by ~10%. Thus,
it is safe to say that the use of extracted title can indeed improve
the precision of document retrieval.
CONCLUSION
In this paper, we have investigated the problem of automatically
extracting titles from general documents. We have tried using a
machine learning approach to address the problem.
Previous work showed that the machine learning approach can
work well for metadata extraction from research papers. In this
paper, we showed that the approach can work for extraction from
general documents as well. Our experimental results indicated that
the machine learning approach can work significantly better than
the baselines in title extraction from Office documents. Previous
work on metadata extraction mainly used linguistic features in
documents, while we mainly used formatting information. It
appeared that using formatting information is a key for
successfully conducting title extraction from general documents.
We tried different machine learning models including Perceptron,
Maximum Entropy, Maximum Entropy Markov Model, and Voted
Perceptron. We found that the performance of the Perceptron
models was the best. We applied models constructed in one
domain to another domain and applied models trained in one
language to another language. We found that the accuracies did
not drop substantially across different domains and across
different languages, indicating that the models were generic. We
also attempted to use the extracted titles in document retrieval. We
observed a significant improvement in document ranking
performance for search when using extracted title information. All
the above investigations were not conducted in previous work, and
through our investigations we verified the generality and the
significance of the title extraction approach.
ACKNOWLEDGEMENTS
We thank Chunyu Wei and Bojuan Zhao for their work on data
annotation. We acknowledge Jinzhu Li for his assistance in
conducting the experiments. We thank Ming Zhou, John Chen,
Jun Xu, and the anonymous reviewers of JCDL'05 for their
valuable comments on this paper.
REFERENCES
[1] Berger, A. L., Della Pietra, S. A., and Della Pietra, V. J. A
maximum entropy approach to natural language processing.
Computational Linguistics, 22:39-71, 1996.
[2] Collins, M. Discriminative training methods for hidden
markov models: theory and experiments with perceptron
algorithms. In Proceedings of Conference on Empirical
Methods in Natural Language Processing, 1-8, 2002.
[3] Cortes, C. and Vapnik, V. Support-vector networks. Machine
Learning, 20:273-297, 1995.
[4] Chieu, H. L. and Ng, H. T. A maximum entropy approach to
information extraction from semi-structured and free text. In
Proceedings of the Eighteenth National Conference on
Artificial Intelligence, 768-791, 2002.
[5] Evans, D. K., Klavans, J. L., and McKeown, K. R. Columbia
newsblaster: multilingual news summarization on the Web.
In Proceedings of Human Language Technology conference /
North American chapter of the Association for
Computational Linguistics annual meeting, 1-4, 2004.
[6] Ghahramani, Z. and Jordan, M. I. Factorial hidden markov
models. Machine Learning, 29:245-273, 1997.
[7] Gheel, J. and Anderson, T. Data and metadata for finding and
reminding, In Proceedings of the 1999 International
Conference on Information Visualization, 446-451,1999.
[8] Giles, C. L., Petinot, Y., Teregowda P. B., Han, H.,
Lawrence, S., Rangaswamy, A., and Pal, N. eBizSearch: a
niche search engine for e-Business. In Proceedings of the
26th Annual International ACM SIGIR Conference on
Research and Development in Information Retrieval, 413-414
, 2003.
[9] Giuffrida, G., Shek, E. C., and Yang, J. Knowledge-based
metadata extraction from PostScript files. In Proceedings of
the Fifth ACM Conference on Digital Libraries, 77-84, 2000.
[10] Han, H., Giles, C. L., Manavoglu, E., Zha, H., Zhang, Z., and
Fox, E. A. Automatic document metadata extraction using
support vector machines. In Proceedings of the Third
ACM/IEEE-CS Joint Conference on Digital Libraries, 37-48,
2003.
[11] Kobayashi, M., and Takeda, K. Information retrieval on the
Web. ACM Computing Surveys, 32:144-173, 2000.
[12] Lafferty, J., McCallum, A., and Pereira, F. Conditional
random fields: probabilistic models for segmenting and
labeling sequence data. In Proceedings of the Eighteenth
International Conference on Machine Learning, 282-289,
2001.
[13] Li, Y., Zaragoza, H., Herbrich, R., Shawe-Taylor J., and
Kandola, J. S. The perceptron algorithm with uneven margins.
In Proceedings of the Nineteenth International Conference
on Machine Learning, 379-386, 2002.
[14] Liddy, E. D., Sutton, S., Allen, E., Harwell, S., Corieri, S.,
Yilmazel, O., Ozgencil, N. E., Diekema, A., McCracken, N.,
and Silverstein, J. Automatic Metadata generation &
evaluation. In Proceedings of the 25th Annual International
ACM SIGIR Conference on Research and Development in
Information Retrieval, 401-402, 2002.
[15] Littlefield, A. Effective enterprise information retrieval
across new content formats. In Proceedings of the Seventh
Search Engine Conference,
http://www.infonortics.com/searchengines/sh02/02prog.html,
2002.
[16] Mao, S., Kim, J. W., and Thoma, G. R. A dynamic feature
generation system for automated metadata extraction in
preservation of digital materials. In Proceedings of the First
International Workshop on Document Image Analysis for
Libraries, 225-232, 2004.
[17] McCallum, A., Freitag, D., and Pereira, F. Maximum entropy
markov models for information extraction and segmentation.
In Proceedings of the Seventeenth International Conference
on Machine Learning, 591-598, 2000.
[18] Murphy, L. D. Digital document metadata in organizations:
roles, analytical approaches, and future research directions.
In Proceedings of the Thirty-First Annual Hawaii
International Conference on System Sciences, 267-276, 1998.
[19] Pinto, D., McCallum, A., Wei, X., and Croft, W. B. Table
extraction using conditional random fields. In Proceedings of
the 26th Annual International ACM SIGIR Conference on
Research and Development in Information Retrieval, 235-242
, 2003.
[20] Ratnaparkhi, A. Unsupervised statistical models for
prepositional phrase attachment. In Proceedings of the
Seventeenth International Conference on Computational
Linguistics. 1079-1085, 1998.
[21] Robertson, S., Zaragoza, H., and Taylor, M. Simple BM25
extension to multiple weighted fields, In Proceedings of
ACM Thirteenth Conference on Information and Knowledge
Management, 42-49, 2004.
[22] Yi, J. and Sundaresan, N. Metadata based Web mining for
relevance, In Proceedings of the 2000 International
Symposium on Database Engineering & Applications, 113-121
, 2000.
[23] Yilmazel, O., Finneran, C. M., and Liddy, E. D. MetaExtract:
An NLP system to automatically assign metadata. In
Proceedings of the 2004 Joint ACM/IEEE Conference on
Digital Libraries, 241-242, 2004.
[24] Zhang, J. and Dimitroff, A. Internet search engines' response
to metadata Dublin Core implementation. Journal of
Information Science, 30:310-320, 2004.
[25] Zhang, L., Pan, Y., and Zhang, T. Recognising and using
named entities: focused named entity recognition using
machine learning. In Proceedings of the 27th Annual
International ACM SIGIR Conference on Research and
Development in Information Retrieval, 281-288, 2004.
[26] http://dublincore.org/groups/corporate/Seattle/
154 | Digital Copies;metadata extraction;Metadata processing;Search ranking results;File Formats extraction;Information search and Retrieval;PowerPoint documents;information extraction;Precision extraction;File extraction;search;generic languages;machine learning;Microsoft Office Automation |
41 | Autonomous and Distributed Node Recovery in Wireless Sensor Networks | Intrusion or misbehaviour detection systems are an important and widely accepted security tool in computer and wireless sensor networks. Their aim is to detect misbehaving or faulty nodes in order to take appropriate countermeasures, thus limiting the damage caused by adversaries as well as by hard or software faults. So far, however, once detected, misbehaving nodes have just been isolated from the rest of the sensor network and hence are no longer usable by running applications. In the presence of an adversary or software faults, this proceeding will inevitably lead to an early and complete loss of the whole network. For this reason, we propose to no longer expel misbehaving nodes, but to recover them into normal operation. In this paper, we address this problem and present a formal specification of what is considered a secure and correct node recovery algorithm together with a distributed algorithm that meets these properties. We discuss its requirements on the soft- and hardware of a node and show how they can be fulfilled with current and upcoming technologies. The algorithm is evaluated analytically as well as by means of extensive simulations, and the findings are compared to the outcome of a real implementation for the BTnode sensor platform. The results show that recovering sensor nodes is an expensive, though feasible and worthwhile task. Moreover , the proposed program code update algorithm is not only secure but also fair and robust. | INTRODUCTION
Wireless sensor networks (WSNs) consist of many wireless
communicating sensor nodes. Essentially, these are microcontrollers
including a communication unit and a power
supply, as well as several attached sensors to examine the
environment. Sensor nodes typically have very limited computing
and storage capacities and can only communicate
with their direct neighbourhood. In addition, WSNs have
to work unattended most of the time as their operation area
cannot or must not be visited. Reasons can be that the area
is inhospitable, unwieldy, or ecologically too sensitive for human
visitation; or that manual maintenance would just be
too expensive.
More and more, WSN applications are supposed to operate
in hostile environments, where their communication
might be overheard and nodes can be removed or manipulated.
Regarding attacks on sensor networks, one differentiates
between so called outsider and insider attacks [21]. In
the former, a potential attacker tries to disclose or influence
a confidential outcome without participating in its computation
; for instance, by intercepting, modifying, or adding
messages. In the latter, by contrast, an attacker appears as
an adequate member of the WSN by either plausibly impersonating
regular nodes or by capturing and compromising
them.
Cryptographic methods, such as encrypting or signing
messages, are an effective protection against attacks from
outside the network, but are of only limited help against
insider attacks. Once an adversary possesses one or several
valid node identities (including the associated keys), it
can actively participate in the operations of the WSN and
influence the computed results.
Intrusion or misbehaviour detection systems (IDS), on the
other hand, are an important and widely accepted security
tool against insider attacks <A href="41.html#9">[18, 21].
They allow for the
detection of malicious or failed nodes and the application
of appropriate countermeasures. So far, however, once detected
, misbehaving nodes have just been isolated from the
rest of the network and hence are no longer usable by running
applications. In the presence of an adversary or software
faults, this proceeding will inevitably result in an early
and complete loss of the whole network. Therefore, not only
the detection of misbehaving nodes is important, but also
the selection and application of effective countermeasures.
Their aim must not be to simply expel suspected nodes but
to recover them into correct operation. In combination, the
advantages of an IDS together with the appropriate recov-
ery measures are manifold. Not only do they help in case of
program faults (e.g., deadlocks or crashes) but even if an attacker
manages to capture a node and to abuse it for his own
purposes, there is a chance that the aberrant behaviour of
this node will be detected and the node be recovered, thus
nullifying the attack. However, due to the size of sensor
networks, both the IDS functionality as well as the recovery
measures should be autonomously executed by the involved
nodes in a distributed and cooperative manner and without
the need for central instances with extended functionality.
Motivated by the above mentioned insights, this paper focuses
on autonomous and distributed node recovery in wireless
sensor networks and proposes three alternative countermeasures
to node expelling; namely to switch a node off,
to restart it, and to update its program code. We formally
specify what we consider a secure and correct recovery algorithm
, present a distributed algorithm which meets these
properties, and reason why it can help to extend the overall
lifetime of a sensor network. In addition, we discuss the limitations
of the proposed countermeasures, show which hard-and
software parts of a corrupted node must still work correctly
to make them applicable, and explain how this can
be achieved with current and upcoming technologies. More
precisely, the contributions of this paper are as follows:
We propose to no longer expel misbehaving nodes, but
to either (i) switch them off, (ii) restart them, or (iii)
update their program code.
We give a formal specification of a secure and correct
recovery algorithm.
We present a provably secure and robust distributed
node recovery algorithm.
We discuss the requirements on the soft- and hardware
of a node in order to make the countermeasures applicable
and show how they can be fulfilled with current
and upcoming technologies.
The algorithm is evaluated analytically as well as by means
of extensive simulations and the findings are compared to
the outcome of a real implementation for the BTnode sensor
platform. The results show that recovering sensor nodes
is an expensive, though feasible and worthwhile task. Moreover
, the proposed program code update algorithm is not
only provably secure but also fair and robust. It distributes
the update load equally over all participating nodes and terminates
as long as at least one of them remains correct.
The rest of this paper is organised as follows. Section <A href="41.html#2">1.1
presents the related work in the area of intrusion detection
and node recovery in wireless sensor networks. Section <A href="41.html#3">2
states the required definitions and assumptions. Section <A href="41.html#4">3
specifies the proposed recovery algorithm whose correctness
is proven in section <A href="41.html#6">4. The algorithm is evaluated in section
<A href="41.html#6">5 and section <A href="41.html#8">6 concludes the paper.
1.1
Related Work
In this section, we present related work in the area of intrusion
detection in wireless sensor networks. Additionally,
related work regarding program code updates in sensor networks
is also discussed, as we propose program code updates
as a mean to recover nodes.
Intrusion Detection
In recent years, intrusion detection systems for wireless sensor
networks have become a major research issue and several
approaches have been proposed. However, to our best
knowledge, the only countermeasure applied so far was to
(logically) exclude malicious nodes.
Khalil, Bagchi, and Nina-Rotaru present a distributed
IDS where nodes monitor the communication among their
neighbours <A href="41.html#9">[14]. For each monitored node a malignity counter
is maintained and incremented whenever the designated node
misbehaves. Once a counter exceeds a predefined threshold,
an according alert is sent to all neighbours and if enough
alerts are received the accused node is revoked from the
neighbourhood list. Hsin and Liu suggest a two-phase timeout
system for neighbour monitoring which uses active probing
to reduce the probability of false-positives <A href="41.html#9">[10].
A rule-based IDS, which proceeds in three phases, is proposed
by da Silva et al. <A href="41.html#9">[5]. In the first phase, messages are
overheard and the collected information is filtered and ordered
. In the second phase, the detection rules are applied
to the gathered data and each inconsistency counted as a
failure. Finally, in the third phase, the number or failures is
compared to the expected amount of occasional failures and
if too high an intrusion alert is raised.
Inverardi, Mostarda and Navarra introduce a framework
which enables the automatic translation of IDS specifications
into program code <A href="41.html#9">[12]. The so generated code is then
installed on the sensor nodes in order to locally detect violations
of the node interaction policies. In the approach by
Herbert et al. <A href="41.html#9">[6], predefined correctness properties (invariants
) are associated with conditions of individual nodes or
the whole network and program code to verify these invariants
is automatically inserted during compilation.
A reputation-based IDS framework for WSNs where sensor
nodes maintain a reputation for other nodes is presented
by Ganeriwal and Srivastava <A href="41.html#9">[8]. It uses a Bayesian formula-tion
for reputation representation, update, and integration.
Program Code Update
The main difference between the already available reprogramming
algorithms and the proposed recovery measures
are that the former focus on the propagation of new program
releases among all nodes of the network, whereas the
aim of the latter is the local and autonomous update of a
single node. Furthermore, most reprogramming mechanisms
do not care about security at all, or rely on expensive public
key cryptography.
Kulkarni and Wang propose a multihop reprogramming
service for wireless sensor networks which uses a sender selection
algorithm to avoid collisions <A href="41.html#9">[16]. Impala, a middleware
system for managing sensor systems is presented
by Liu and Martonosi <A href="41.html#9">[17]. Its modular architecture supports
updates to the running system. An application consists
of several modules which are independently transferred;
an update is complete if all its modules have been received.
Jeong and Culler introduce an efficient incremental network
programming mechanism <A href="41.html#9">[13]. Thanks to the usage of the
Rsync algorithm, only incremental changes to the new program
must be transferred.
A secure dissemination algorithm to distribute new program
releases among nodes is presented by Dutta et al. <A href="41.html#9">[7].
Program binaries are propagated as a sequence of data blocks
of which the first is authenticated with the private key of
the base station and the subsequent ones by means of a
hash chain. In order to improve the fault tolerance of the
sensor network, nodes use a grenade timer to reboot period-ically
. During the boot process neighboring nodes are asked
whether a new program release is available and if so, its
download is initiated.
DEFINITIONS
In this section, we define our assumptions regarding the
observation of nodes and the network communication model.
We specify the capabilities of a potential adversary, explain
what we consider a correct recover algorithm, and discuss
the requirements on the hard- and software of a sensor node.
2.1
Intrusion and Misbehaviour Detection
Throughout this paper, we assume that the network is divided
into N
C
so called observation clusters C
i
= (V
i
, E
i
),
0 i < N
C
of size n, n = |V
i
|. Within a cluster each node
is connected to each other (v
i
, v
j
V
k
, v
i
= v
j
: {v
i
, v
j
}
E
k
) and observes the behaviour of its cluster neighbours.
For the actual monitoring of the neighbours an arbitrary
IDS can be used, as long as each node ends up with an
(individual) decision about whether a certain node behaves
correct or malicious. The set of malicious nodes in a cluster
is denoted by M
i
and their number by t, t = |M
i
| n.
2.2
Network Model
In the following, p
s
(p
r
) denotes the probability that the
sending (receiving) of a message fails. Thus, for 0 p
s
, p
l
<
1 the resulting probability for an unsuccessful transmission
(packet loss ratio, PLR) is
p
l
:= 1 - (1 - p
s
)(1 - p
r
) = p
s
+ p
r
+ p
s
p
r
Additionally, we assume that there exists a constant upper
bound
p
O(1) on the transmission time of a message.
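As a small illustration, the packet loss ratio defined above can be computed as follows; the function name is our own, and the product form expands to p_s + p_r - p_s*p_r.

```python
def packet_loss_ratio(p_s, p_r):
    """A transmission is lost unless both the send and the receive succeed."""
    return 1.0 - (1.0 - p_s) * (1.0 - p_r)
```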
2.3
Adversary Model
We consider an omnipresent but computationally bounded
adversary who can perform both outsider as well as insider
attacks. This means that a potential adversary is able to intercept
and create arbitrary messages but unable to decrypt
or authenticate messages for which he does not possess the
required keys. We further assume that nodes can be either
logically (i.e., by exploiting a software bug) or physically
captured. However, the time to compromise a node physically
is considered non-negligible (i.e., it takes some time to
move from node to node and to perform the physical manipulations
) and to not significantly decrease with the number
of already captured nodes.
2.4
Hard- and Software Requirements
To all presented recovery measures applies that they are
only applicable if at least the therefore needed systems of
the corrupt node in the following denoted as the recovery
system still work correctly. In order to achieve this, one
has to make sure that the recovery system is logically and,
if feasible, physically protected.
Logical Protection of the Recovery System
Logical protection means that it should not be possible for a
running application to prevent the execution of the recovery
procedures. That is, if the program code running on a node
has crashed or been corrupted by an adversary (e.g., by exploiting
a security hole), this should not affect the integrity
and availability of the recovery system.
One mechanism to achieve this is to set up a hardware
interrupt which cannot be suppressed or redirected by the
application and by locating the dedicated interrupt routine
in a write protected memory area. Consequently, on each
interrupt request, control is handed over to the immutable
interrupt routine an thus to the recovery system. A simple
variant of this mechanism in which a grenade timer period-ically
reboots the system and the bootloader is located in
read only memory (ROM) is used by Dutta et al. <A href="41.html#9">[7]. Another
approach would be to misuse some additionally available
MCUs <A href="41.html#9">[23], for example the ARM CPU on the ARM-based
Bluetooth module on the BTnode.
Some of these
MCUs are powerful enough to take on additional tasks like
monitoring the main MCUs activities or rewriting the application
memory. In case of the BTnode that extra MCU
is directly responsible for communication and thus it would
be guaranteed that it has access to all received packets as
well.
On more advanced systems, mechanisms as provided by
Intel's protected mode (e.g., isolated memory areas, privilege
levels, etc.) could be used to protect the recovery system
more efficiently. Current technologies such as ARM's
TrustZone <A href="41.html#9">[1] for embedded devices or Intel's LaGrande technology
<A href="41.html#9">[11] go even further and enable a comprehensive protection
of the CPU, memory, and peripherals from software
attacks.
Physical Protection of the Recovery System
The physical protection of current sensor node platforms
is very poor because of their focus on simple maintenance
<A href="41.html#9">[9]. However, although it is generally agreed that entirely
tamper-proof sensor nodes would be too expensive, current
trends in the hardware development of embedded devices indicate
that some level of physical protection will be available
in the near future <A href="41.html#9">[20, 15]. Security mechanisms regarding
the packaging of sensor nodes as, for instance, those proposed
by FIPS 140-2 level 2 <A href="41.html#9">[19] could already significantly
increase the cost for an adversary. For integrity and not
confidentiality is the main concern with the recovery module
, it has only to be protected against manipulations but
not against unintended disclosure or side-channel attacks.
In fact, it would be sufficient to have mechanisms which
render a node useless if the case of the recovery system was
opened; complete tamper resistance is not required.
2.5
Correct Node Recover Algorithms
A node recovery algorithm for a cluster C
i
= (V
i
, E
i
) is
considered correct if the following liveness and safety properties
hold:
L1 If all correct nodes (V
i
\ M
i
) accuse a node m V
i
to
be faulty or malicious, its recovery process will finally
be initiated with high probability.
L2 Once the recovery process for a node m V
i
has been
initiated, it will eventually terminate as long as there
remain at least k 1 correct nodes V
i
\ M
i
.
S1 If no more than
n-1
3
correct nodes (i.e., a minority)
accuse another correct node v V
i
\ M
i
the recovery
process will not be initiated.
S2 After the recovery process, a node m V
i
must either
(i) be halted, (ii) contain the same program code as
before, or (iii) contain the correct program code.
The two liveness properties L1 and L2 ensure that each malicious
node is recovered if its aberrant behaviour is detected
by enough neighbours. Safety property S1 is required to
make sure that a node is only recovered if a majority of correct
nodes accuses it and property S2 ensures that things
are not worsened by applying the recovery process.
DISTRIBUTED NODE RECOVERY
In this section, we present a distributed node recovery
algorithm which is autonomously executed within an observation
cluster. The supported recovery measures are: node
shutdown, node restart, and program code update. As long
as the recovery module of an otherwise faulty or malicious
node is still intact, it is tried to recover it by restarting it or
updating its program code; or to at least eliminate its interfering
influence by turning it off. If a node does not respond
to any of these measures, it is still possible to logically expell
it; preferably by means of a reliable majority decision <A href="41.html#9">[22]
to avoid inconsistencies among the cluster members.
3.1
Description of the Recovery Procedure
The proposed recovery algorithm consists of two phases.
In the first, so called accusation phase, nodes accuse all
neighbours which are regarded as being malicious. If a node
is accused by at least two third of its neighbours it initiates
the second, so called recovery phase, during which the actual
countermeasures are executed. To simplify the cooperative
program code update, the program memory of a node is divided
into F frames f
i
, 0 i < F of size f s. Additionally,
for each frame f
i
its corresponding hash value h
i
:= h(f
i
) is
computed.
Accusation Phase
Recovery Phase
Round 1
Round 2
Round 3
Figure 1: Schematic depiction of a recovery procedure which
performs a program update as the countermeasure.
Accusation Phase
Nodes which conclude that one of their neighbours behaves
maliciously, send it an authenticated accusation mes<A href="41.html#4">sage
<A href="41.html#4">1
<A href="41.html#4">.
The proposed countermeasure depends on the observed aberration
and can be either of type shutdown, reset, or update,
if the node should be halted, restarted, or its program code
updated, respectively. Accusation messages have to be ac-knowledged
and are resent up to r times otherwise.
1
For simplicity, it is assumed that nodes can accuse their
neighbours at any time. However, if the recovery module
is only active from time to time, nodes could of course also
actively ask for (pending) accusations.
In case that a program update is requested, the accusation
messages also include a list of the sender's F frame hash
values h
i
. They represent the current state of its program
memory and are required to deduce the correct program
code. Therefore, for each frame f
i
not only its hash value h
i
but also a counter c
i
, which is initialised with zero, is stored.
Upon reception of a accusation message, each included hash
value is compared to the already stored one and if they are
equal, c
i
is incremented by one. If they differ and c
i
> 0 the
counter is decremented by one; otherwise (i.e., they are not
equal and c
i
= 0) the stored hash value is replaced with the
received value. This procedure ensures that, for 3t < n - 1,
every h
i
will contain the hash value of the correct program
code frame after
2(n-1)
3
accusations have been received
(see Proof <A href="41.html#6">4).
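A minimal Python sketch of this per-frame counter rule; the stored structure (a list of [hash, counter] pairs) and the message layout are assumptions made for illustration.

```python
def update_frame_digests(stored, received_hashes):
    """Apply one accusation message's hash list to the stored per-frame
    candidates: an equal hash increments the counter, a different hash
    decrements it, and the received hash is adopted once the counter is zero."""
    for i, h in enumerate(received_hashes):
        candidate, count = stored[i]
        if h == candidate:
            stored[i][1] = count + 1
        elif count > 0:
            stored[i][1] = count - 1
        else:
            stored[i] = [h, 0]   # counter exhausted: replace the candidate hash
    return stored
```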
Recovery Phase
When a node m has received
2(n-1)
3
accusations of a certain
recovery type, the corresponding measure is initiated.
In the non trivial case of a distributed program code update
, the correct program code has therefore to be down-loaded
from the neighboring nodes. Otherwise, the node is
just rebooted or shutdown and no further communication or
coordination is required.
The autonomous program code transfer is performed in
rounds of which each starts with the broadcasting of an authenticated
update request message by the accused node m.
Essentially, the message contains a list of so called frame
descriptors (u
i
, Q
i
), consisting of a node id u
i
and a set of
requested frame numbers Q
i
:= {r
0
, r
1
, . . . , r
|Q|-1
}. Upon
reception of a valid request, a node v seeks for descriptors
which contain its own id (i.e., u
i
= v). If present, for each
requested frame number r
j
Q
i
the corresponding program
code frame is sent back to m with an update message. All
received program code fragments f
i
, in turn, are verified by
m using the stored hash values h
i
. Valid code fragments are
copied into the program <A href="41.html#4">memory
<A href="41.html#4">2
and the frame marked as
updated. If for a duration of
round
no update messages arrive
although there are still some outstanding frames, a new
update request message is broadcasted and the next round
initiated. As soon as all frames have been received, the node
is rebooted and thus the new program code activated.
In order to distribute the transfer load equally among all participating nodes and to ensure that the update procedure terminates if at least one correct node is available, the frame descriptors are determined as follows: First, the n - 1 participating nodes are ordered such that id(v_0) < id(v_1) < ... < id(v_{n-2}). Next, the F memory frames are divided into n - 1 sectors of length l := F/(n-1). Finally, to each node one such sector is assigned per update round in a round-robin fashion. Thus, in round i node v_j is responsible for the segment s := (j + i) mod (n - 1), that is, for the frames sl to min((s + 1)l - 1, F - 1). In the first round, for example, the first node is responsible for the first l frames, the second node for the second l frames, and so on. In the second round, however, the assignment is rotated by one and thus the outstanding frames of the first sector are now requested from the second node. This process has to be continued until all required frames have been received; a sketch of the assignment is given below.
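The following small Python sketch (ours, not the authors' implementation) makes the round-robin assignment explicit; rounding the sector length up so that all F frames are covered is our assumption:

    import math

    def frames_served(j, i, n, F):
        # Frames that node v_j is responsible for in update round i.
        l = math.ceil(F / (n - 1))          # sector length (rounding up is an assumption)
        s = (j + i) % (n - 1)               # rotated sector index
        first = s * l
        last = min((s + 1) * l - 1, F - 1)
        return list(range(first, last + 1))

    # Example with F = 100 frames and n = 10 nodes (l = 12):
    # in round 0, node v_0 serves frames 0..11; in round 1 it serves frames 12..23.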
Footnote 2: On most sensor node platforms, new code is not directly written into program memory but into a Flash memory available for this purpose and installed during a subsequent reboot.
Extensions and Optimisations
Even though not all but only the subset of modified program code frames has to be requested, updating a node is still a time consuming and expensive task. Consequently, the amount of update load that a specific node can cause should be restricted, for instance by limiting the number of update messages that are sent to it. To further reduce the load for the participating nodes, the F hash values h_i in an accusation message can be replaced by the single hash value h := h(h_0 || h_1 || ... || h_{F-1}). Once the correct value h has been determined using the corresponding counter c, in analogy to the above mentioned algorithm, the actual hash values can be requested from the neighbours in a second step and verified with h. In order to decrease the total number of required accusation messages, more than one recovery measure per message should be allowed. Alternatively, the measures could be hierarchically organised, having the type update also count as a reboot or shutdown request.
3.2 Algorithms

Listing 1: Algorithm for an accusing node v.

var
  acc_retries[n-1] := {0, ..., 0}
  acc_failed[n-1]  := {false, ..., false}
  num_updates[n-1] := {0, ..., 0}

upon misbehavior detection of node m
  choose an appropriate accusation-type a_m
  if a_m = acc_update
    send (accusation, v, m, ·, a_m, {h(f_0), ..., h(f_{F-1})}) to m
  else
    send (accusation, v, m, ·, a_m) to m
  start timer A_m

upon reception of (accusation_ack, m, ·, a) from m
  stop timer A_m
  acc_retries[m] := 0

upon timeout of timer A_m
  if acc_retries[m] < max_acc_retries
    acc_retries[m] := acc_retries[m] + 1
    send (accusation, v, m, ·, a_m, {h(f_0), ..., h(f_{F-1})}) to m
    start timer A_m
  else
    acc_failed[m] := true

upon reception of (update_request, m, ·, R) from m
  if num_updates[m] < max_updates and (u, {r_0, ..., r_k}) ∈ R with u = v
    num_updates[m] := num_updates[m] + 1
    for all r_i, 0 ≤ i ≤ k: send (update, v, m, r_i, f_{r_i}) to m
Listing 2: Algorithm for the accused node m.

var
  updating         := false
  num_acc_reset    := 0
  num_acc_update   := 0
  num_acc_shutdown := 0
  start_node       := 0
  acc_reset_recvd[n-1]    := {false, ..., false}
  acc_update_recvd[n-1]   := {false, ..., false}
  acc_shutdown_recvd[n-1] := {false, ..., false}
  frame_updated[F-1] := {false, ..., false}
  frame_digest[F-1]  := {h(f_0), ..., h(f_{F-1})}
  frame_count[F-1]   := {0, ..., 0}

upon reception of (accusation, v, m, ·, acc_reset) from v
  send (accusation_ack, m, ·, acc_reset) to v
  if not updating and not acc_reset_recvd[v]
    acc_reset_recvd[v] := true
    num_acc_reset := num_acc_reset + 1
  if not updating and num_acc_reset ≥ 2(n-1)/3
    reset node

upon reception of (accusation, v, m, ·, acc_shutdown) from v
  send (accusation_ack, m, ·, acc_shutdown) to v
  if not updating and not acc_shutdown_recvd[v]
    acc_shutdown_recvd[v] := true
    num_acc_shutdown := num_acc_shutdown + 1
  if not updating and num_acc_shutdown ≥ 2(n-1)/3
    shutdown node

function setup_update_request()
  k := 0
  R := {}
  for 0 ≤ i < n, i ≠ m
    w := (start_node + i) mod n
    Q := {}
    for 0 ≤ j < F/n
      if not frame_updated[k]
        Q := Q ∪ {k}
      k := k + 1
    if Q ≠ {}
      R := R ∪ {(w, Q)}
  start_node := start_node + 1
  return R

upon reception of (accusation, v, m, ·, acc_update, {h_0, ..., h_{F-1}}) from v
  send (accusation_ack, m, ·, acc_update) to v
  if not updating and not acc_update_recvd[v]
    acc_update_recvd[v] := true
    num_acc_update := num_acc_update + 1
    for 0 ≤ i < F
      if frame_digest[i] = h_i
        frame_count[i] := frame_count[i] + 1
      else if frame_count[i] > 0
        frame_count[i] := frame_count[i] - 1
      else
        frame_digest[i] := h_i
  if not updating and num_acc_update ≥ 2(n-1)/3
    R := setup_update_request()
    broadcast (update_request, m, ·, R)
    start timer U
    updating := true

upon timeout of timer U
  R := setup_update_request()
  broadcast (update_request, m, ·, R)
  start timer U

upon reception of (update, v, m, i, f) from v
  reset timer U
  if h(f) = frame_digest[i] and not frame_updated[i]
    update memory frame i
    frame_updated[i] := true
  if ∀ i, 0 ≤ i < F: frame_updated[i]
    reset node
PROOF OF CORRECTNESS
In this section, we prove the correctness of the proposed algorithm with respect to the specifications of section 2.
Theorem 1. Given the network and adversary model specified in section 2, the proposed recovery algorithm is correct and fulfils the properties L1, L2, S1, and S2 if the recovery module of the accused node m is intact, if h() is a secure hash function, and if less than one third of the participating nodes are malicious (i.e., 3t < n - 1).
In order to prove Theorem 1 we have to show that the properties L1, L2, S1, and S2 hold. We therefore first prove some helper Lemmas.
Lemma 1. If all correct nodes accuse a node m, its recovery process will be initiated with high probability.
Proof. The probability that less than 2(n-1)/3 accusations are received is equal to the probability that more than (n-1)/3 messages are either not sent or lost. Assuming that the t malicious nodes do not participate in the distributed update, at least (n-1)/3 - t + 1 accusations must get lost. Given 0 ≤ p_l < 1, the probability for this is

  (p_l^r)^((n-1)/3 - t + 1) + (p_l^r)^((n-1)/3 - t + 2) + ... + (p_l^r)^(n-1) ≤ (2(n-1)+3t)/3 · (p_l^r)^((n-1)/3 - t + 1).

It holds that for every c > 1 there exists an r ≥ 1 such that (2(n-1)+3t)/3 · (p_l^r)^((n-1)/3 - t + 1) < n^(-c). Thus, the node m gets 2(n-1)/3 accusations w.h.p. and the recovery process is initiated.
Lemma 2. Once the recovery process for a node m has been initiated it will eventually terminate as long as there remain at least k ≥ 1 correct nodes.
Proof. In order that a frame is updated in a specific round, the dedicated request as well as its actual transmission must succeed. The probability that this is the case is (1 - p_l)^2. With only one correct node (k = 1) the expected number of update rounds per frame a can be modelled as a Markov chain described by the expression a = (a + 1)(1 - (1 - p_l)^2) + (1 - p_l)^2, with the solution a = 1/(1 - p_l)^2. The overall expected number of rounds is thus aF = F/(1 - p_l)^2 ∈ O(1). In each round at most one request and F updates are transmitted, leading to an upper bound for its duration of (F + 1)τ_p ∈ O(1). Altogether, the expected worst case duration is F(F + 1)τ_p/(1 - p_l)^2 ∈ O(1).
Lemma 3. If no more than (n-1)/3 correct nodes accuse another correct node v ∈ M, the recovery process will not be initiated.
Proof. From each node only one accusation is accepted and thus the number of valid accusations is at most (n-1)/3 + t < 2(n-1)/3.
Lemma 4. At the start of a program code update the target node m has stored the correct hash value h_i for all frames, given that all correct nodes have loaded the same program code.
Proof. Let us assume that there is a hash value h_i which is not correct when the program code update starts. As a stored hash value is only substituted if the dedicated counter c_i is zero, the node must have received at least as many wrong values as correct ones. From each node only one accusation is accepted, thus of the a ≥ 2(n-1)/3 received values at most t < (n-1)/3 are false. It follows that at least a - t > 2(n-1)/3 - (n-1)/3 = (n-1)/3 > t values must be correct, which contradicts the assumption that at least as many false as correct hash values were received.
The properties L1, L2, and S1 are proven by Lemmas 1, 2, and 3, respectively. If the accused node is turned off or restarted, property S2 holds by definition. Otherwise, if the program code is updated, Lemma 4 and property L2 guarantee that only correct code frames are installed and that the procedure finally terminates.
EVALUATION
In this section we provide an analytical evaluation of the proposed algorithm and present the findings of extensive simulations as well as of a real implementation for the BTnodes. The evaluated metrics are: (a) number of update rounds, (b) update load for the accused nodes, (c) update load for the other participating nodes, and (d) update duration.
5.1 Analytical Evaluation

Number of Update Rounds
In order to update a frame it is required that the dedicated request as well as the actual frame itself are successfully transmitted. The expected fraction of erroneous updates is therefore 1 - (1 - p_l)^2. If one further assumes that the t malicious nodes do not participate in the program update, the fraction increases to 1 - ((n-1-t)/(n-1))(1 - p_l)^2. Thus, the expected numbers of outstanding frames after the first and second update round are E_f(1) = F(1 - ((n-1-t)/(n-1))(1 - p_l)^2) and E_f(2) = E_f(1)(1 - ((n-1-t)/(n-1))(1 - p_l)^2) = F(1 - ((n-1-t)/(n-1))(1 - p_l)^2)^2, respectively. In general, the expected number of outstanding frames after i > 0 rounds is:

  E_f(i) = E_f(i-1) (1 - ((n-1-t)/(n-1))(1 - p_l)^2) = F (1 - ((n-1-t)/(n-1))(1 - p_l)^2)^i

Consequently, the expected number of update rounds is

  E_r ≈ (log(0.5) - log(F)) / log(1 - ((n-1-t)/(n-1))(1 - p_l)^2).

For reliable connections (p_l = 0) we get E_r ≈ (log(0.5) - log(F)) / log(1/3) ∈ O(1). In a worst case scenario a continuous sequence of frames is assigned to the t malicious nodes and thus at least t + 1, that is O(t) = O(n), rounds are required. Moreover, for a fixed p_l, the expected number of rounds is (almost) independent of the cluster size n, as 2/3 < (n-1-t)/(n-1) ≤ 1 for a fixed t, 0 ≤ t < (n-1)/3.
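As a quick numerical illustration (our own sketch, directly implementing the two formulas above):

    import math

    def expected_rounds(F, n, t, p_l):
        # Expected number of update rounds E_r; assumes 0 < q < 1 for the failure fraction q.
        q = 1.0 - ((n - 1 - t) / (n - 1)) * (1.0 - p_l) ** 2
        return (math.log(0.5) - math.log(F)) / math.log(q)

    # With the parameters of Figure 2 (F = 100, n = 10):
    #   expected_rounds(100, 10, 0, 0.1) gives roughly 3.2 rounds
    #   expected_rounds(100, 10, 2, 0.3) gives roughly 11 rounds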
Update Load
The expected amount of data (in bytes) to transfer for the accused node is

  E_tm = Σ_{i=0}^{E_r - 1} ( C_req + C_mac (n-1) + C_sel (n-1) E_f(i)/F )
       ≤ ( C_req + C_mac (n-1) + C_sel (n-1) ) E_r

and

  E_tv = Σ_{i=0}^{E_r - 1} E_f(i)/(n-1) (1 - p_l) (C_update + fs)
       ≤ F/(n-1) (1 - p_l) (C_update + fs) E_r

for the other participating nodes. In the above expressions (n-1) E_f(i)/F is the expected number of addressed nodes and E_f(i)/(n-1) (1 - p_l) expresses the expected number of successfully requested frames per node.
Update Duration
The total number of sent messages E_m is bounded by (F + 1)E_r ∈ O(E_r), as there are only one request and no more than F update messages per round. Thus, the expected value of E_m is in O(1) for reliable connections and in O(n) in the worst case. More precisely, the expected number of messages is given by

  E_m = Σ_{i=0}^{E_r - 1} ( 1 + (n-1-t) E_f(i)/(n-1) (1 - p_l) ) = E_r + (n-1-t) E_tv / (C_update + fs).

Neglecting the delays caused by the involved software routines, a good approximation for the update duration can be achieved by considering the overall transfer time and the delays caused by the round timeouts. The expected time to transfer all messages is (E_tm + (n-1-t) E_tv)/B + E_m τ_mac, whereas the overhead of the round timer is given by (E_r - 1) τ_round, resulting in a total update duration of

  E_d ≈ (E_tm + (n-1-t) E_tv)/B + E_m τ_mac + (E_r - 1) τ_round.
Parametrisation
For the comparison of the analytical results with the simulation and implementation of the algorithm, the following parameters were used:

  Baudrate              B          19.2 kBit/s
  B-MAC preamble        τ_mac      100 ms
  Round timeout         τ_round    3 s
  Request header size   C_req      12 Bytes
  Update header size    C_update   11 Bytes
  Frame selector size   C_sel      10 Bytes
  MAC size              C_mac      20 Bytes
  Frame size            fs         1024 Bytes
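Combining the analytical model with these parameters, a rough sketch (ours, not the evaluation code used by the authors) of the expected update duration is:

    import math

    def expected_duration(F=100, n=10, t=0, p_l=0.1,
                          B=19200 / 8.0,            # baudrate in bytes per second
                          tau_mac=0.1, tau_round=3.0,
                          C_req=12, C_update=11, C_sel=10, C_mac=20, fs=1024):
        # Expected update duration E_d in seconds, from the formulas above.
        q = 1.0 - ((n - 1 - t) / (n - 1)) * (1.0 - p_l) ** 2
        E_r = (math.log(0.5) - math.log(F)) / math.log(q)
        rounds = range(math.ceil(E_r))
        E_f = lambda i: F * q ** i                  # expected outstanding frames after round i
        E_tm = sum(C_req + C_mac * (n - 1) + C_sel * (n - 1) * E_f(i) / F for i in rounds)
        E_tv = sum(E_f(i) / (n - 1) * (1 - p_l) * (C_update + fs) for i in rounds)
        E_m = sum(1 + (n - 1 - t) * E_f(i) / (n - 1) * (1 - p_l) for i in rounds)
        return (E_tm + (n - 1 - t) * E_tv) / B + E_m * tau_mac + (E_r - 1) * tau_round

    # expected_duration(F=100, n=10, t=0, p_l=0.1) comes out at roughly 66 s, i.e. within
    # the 50-150 s range reported in Figure 3.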
5.2 Simulation
The simulation of the algorithm was carried out with the Java based JiST/SWANS simulator [2]. In order to make the results comparable to the real BTnode implementation, the radio module was set up according to the characteristics of the Chipcon CC1000 transceiver [4] and B-MAC was chosen as the data link layer protocol. The complete parametrisation of the simulation is given in the table below:

  Transmission frequency   868 MHz
  Transmission power       5 dBm
  Receiver sensitivity     -100 dBm
  Memory size              100 kByte
  Number of nodes          10
  Deployment area          20 x 20 m (u.r.d.)

Figure 2: Expected number of rounds to update a node (analytic and simulated curves for F = 100, n = 10 with t = 0 and t = 2, plotted against the packet loss ratio).

Figure 3: Expected duration (in sec.) to update a node (same parameter combinations as Figure 2).
5.3 Implementation
In addition to the above mentioned simulations, the algorithm was also implemented for the BTnodes, a wireless sensor platform running NutOS [3]. A detailed description of the created software is omitted due to space reasons but can be found in [22]. The implementation was evaluated by randomly distributing a cluster of 10 nodes in a field of 20 x 20 m, whereupon each node in turn initiated a complete program update. Altogether, over 100 recovery procedures were measured.
5.4 Results
The packet loss ratio has, as expected, a significant effect on all evaluated metrics and each of them increases exponentially if the ratio worsens. The number of nodes, in contrast, has for a fixed packet loss ratio almost no negative impact on the evaluated metrics, showing that the algorithm itself scales well. Furthermore, the results show that the update algorithm is fair and equally distributes the update load over all participating nodes.

Figure 4: Expected update load (in kBytes) for the accused node (analytic and simulated curves for F = 100, n = 10 with t = 0 and t = 2, plotted against the packet loss ratio).

Figure 5: Expected update load (in kBytes) for the participating nodes (same parameter combinations as Figure 4).
Update Rounds and Update Duration
Whilst the expected number of update rounds (see Figure 2) is only of secondary importance, the update duration (see Figure 3) is of major interest for the feasibility of the algorithm. The faster a node recovery is completed, the sooner the network is operable again. Even though the update duration almost triples from 50 to 150 s if the packet loss ratio increases from 0 to 40 percent, it is still in a range which most WSN applications should be able to cope with.

Update Load
In a cluster of 10 nodes the update load for the accused node (see Figure 4) is 0.5 to 3.5 kByte for 0 ≤ p_l ≤ 0.4 and thus considerably smaller than for the other participating nodes (see Figure 5) with a load of 12 to 24 kByte. However, the latter is, as expected, inversely proportional to the cluster size (see Figure 7): the larger a cluster and the lower the number of malicious nodes, the smaller the expected update load per participating node.
Figure 6: Influence of the cluster size on the update duration (curves for F = 100 with plr = 0.1 and 0.5, t = 0 and t = n/3, plotted against the number of nodes).

Figure 7: Influence of the cluster size on the expected update load (in kBytes) for the participating nodes (same parameter combinations as Figure 6).
Implementation
In the experiments conducted with the BTnode implementation, the average number of update rounds was five, the update duration about 100 s (± 10 s), and the load for the participating nodes about 15 kByte (± 1 kByte). Applied to the analytical model this would mean that the gross packet loss ratio was roughly 20%.
SUMMARY AND CONCLUSIONS
In this paper, we presented an autonomous and distributed recovery algorithm for sensor networks. The algorithm allows for bringing malicious or failed nodes back into normal operation or, at least, for securely shutting them down. Particularly in remote or unwieldy areas, such as deserts, the bottom of the sea, mountains, or even on planets in outer space, where redeployment is expensive and sensor nodes cannot easily be exchanged or maintained, the application of a node recovery system is most likely to extend the lifetime of the whole network.
The results of the simulation and analytical analysis were confirmed by the real BTnode implementation. They show that recovering sensor nodes is, as any form of reprogramming, an expensive though feasible task. Moreover, the proposed program code update algorithm is not only provably secure but also fair and robust. It distributes the update load equally over all participating nodes and terminates as long as at least one of the nodes remains correct.
All presented recovery measures are only applicable if at least the systems of the corrupt node that they rely on still work correctly. However, although it is generally agreed that entirely tamper-proof sensor nodes are too expensive, current trends in the hardware development of embedded devices indicate that at least some logical and physical protection (e.g., CPUs which support isolated memory areas or automatic memory erasure if a node is tampered with) will be available in the near future. We discussed how these upcoming technologies can be exploited to protect the recovery mechanisms of a sensor node and what is already feasible with existing systems.
REFERENCES
[1] T. Alves and D. Felton. TrustZone: Integrated Hardware and Software Security. ARM Ltd, July 2004.
[2] R. Barr, Z. J. Haas, and R. van Renesse. JiST: an efficient approach to simulation using virtual machines: Research articles. Softw. Pract. Exper., 35(6):539-576, 2005.
[3] J. Beutel, O. Kasten, and M. Ringwald. Poster abstract: BTnodes - a distributed platform for sensor nodes. In Proceedings of the 1st International Conference on Embedded Networked Sensor Systems, pages 292-293, Los Angeles, California, USA, Jan. 2003. ACM Press. http://www.btnode.ethz.ch/.
[4] Chipcon AS, Oslo, Norway. Single Chip Very Low Power RF Transceiver, Rev. 2.1, Apr. 2002. http://www.chipcon.com/.
[5] A. P. R. da Silva, M. H. T. Martins, B. P. S. Rocha, A. A. F. Loureiro, L. B. Ruiz, and H. C. Wong. Decentralized intrusion detection in wireless sensor networks. In Q2SWinet '05: Proceedings of the 1st ACM international workshop on Quality of service & security in wireless and mobile networks, pages 16-23, New York, NY, USA, 2005. ACM Press.
[6] D. Herbert, Y.-H. Lu, S. Bagchi, and Z. Li. Detection and repair of software errors in hierarchical sensor networks. To appear in IEEE conference on Sensor Networks and Ubiquitous Trustworthy Computing (SUTC), June 2006.
[7] P. K. Dutta, J. W. Hui, D. C. Chu, and D. E. Culler. Towards secure network programming and recovery in wireless sensor networks. Technical Report UCB/EECS-2005-7, Electrical Engineering and Computer Sciences, University of California at Berkeley, Oct. 2005.
[8] S. Ganeriwal and M. B. Srivastava. Reputation-based framework for high integrity sensor networks. In SASN '04: Proceedings of the 2nd ACM workshop on Security of ad hoc and sensor networks, pages 66-77, New York, NY, USA, 2004. ACM Press.
[9] C. Hartung, J. Balasalle, and R. Han. Node compromise in sensor networks: The need for secure systems. Technical Report CU-CS-990-05, Department of Computer Science, University of Colorado, Jan. 2005.
[10] C. Hsin and M. Liu. A distributed monitoring mechanism for wireless sensor networks. In WiSE '02: Proceedings of the 3rd ACM workshop on Wireless security, pages 57-66, New York, NY, USA, 2002. ACM Press.
[11] Intel Corporation. LaGrande Technology Architectural Overview, Sept. 2003.
[12] P. Inverardi, L. Mostarda, and A. Navarra. Distributed IDSs for enhancing security in mobile wireless sensor networks. AINA, 2:116-120, 2006.
[13] J. Jeong and D. Culler. Incremental network programming for wireless sensors. In Proceedings of the First IEEE Communications Society Conference on Sensor and Ad-Hoc Communications and Networks (SECON), 2004.
[14] I. Khalil, S. Bagchi, and C. Nina-Rotaru. DICAS: Detection, diagnosis and isolation of control attacks in sensor networks. SecureComm, 00:89-100, 2005.
[15] P. Kocher, R. Lee, G. McGraw, and A. Raghunathan. Security as a new dimension in embedded system design. In DAC '04: Proceedings of the 41st annual conference on Design automation, pages 753-760, New York, NY, USA, 2004. ACM Press. Moderator: Srivaths Ravi.
[16] S. S. Kulkarni and L. Wang. MNP: Multihop network reprogramming service for sensor networks. ICDCS, 00:7-16, 2005.
[17] T. Liu and M. Martonosi. Impala: a middleware system for managing autonomic, parallel sensor systems. SIGPLAN Not., 38(10):107-118, 2003.
[18] S. Northcutt and J. Novak. IDS: Intrusion Detection-Systeme. mitp Verlag, Bonn, 2001.
[19] National Bureau of Standards. Security Requirements for Cryptographic Modules. Dec. 2002.
[20] S. Ravi, A. Raghunathan, P. Kocher, and S. Hattangady. Security in embedded systems: Design challenges. Trans. on Embedded Computing Sys., 3(3):461-491, 2004.
[21] E. Shi and A. Perrig. Designing secure sensor networks. IEEE Wireless Communication Magazine, 11(6):38-43, Dec. 2004.
[22] M. Strasser. Intrusion detection and failure recovery in sensor networks. Master's thesis, Department of Computer Science, ETH Zurich, 2005.
[23] H. Vogt, M. Ringwald, and M. Strasser. Intrusion detection and failure recovery in sensor nodes. In Tagungsband INFORMATIK 2005, Workshop Proceedings, LNCS, Heidelberg, Germany, Sept. 2005. Springer-Verlag.
| sensor networks;countermeasures;intrusion detection;Wireless Sensor Networks;Node Recovery;Intrusion Detection;node recovery;IDS;sensor nodes;security;distributed algorithm |
42 | Bayesian Online Classifiers for Text Classification and Filtering | This paper explores the use of Bayesian online classifiers to classify text documents. Empirical results indicate that these classifiers are comparable with the best text classification systems. Furthermore, the online approach offers the advantage of continuous learning in the batch-adaptive text filtering task. | INTRODUCTION
Faced with massive information every day, we need automated
means for classifying text documents. Since handcrafting
text classifiers is a tedious process, machine learning
methods can assist in solving this problem[15, 7, 27].
Yang & Liu[27] provides a comprehensive comparison of
supervised machine learning methods for text classification.
In this paper we will show that certain Bayesian classifiers
are comparable with Support Vector Machines[23], one
of the best methods reported in [27].
In particular, we
will evaluate the Bayesian online perceptron[17, 20] and the
Bayesian online Gaussian process[3].
For text classification and filtering, where the initial training
set is large, online approaches are useful because they
allow continuous learning without storing all the previously
seen data. This continuous learning allows the utilization
of information obtained from subsequent data after the initial
training. Bayes' rule allows online learning to be performed
in a principled way[16, 20, 17]. We will evaluate the
Bayesian online perceptron, together with information gain
considerations, on the batch-adaptive filtering task[18].
CLASSIFICATION AND FILTERING
For the text classification task defined by Lewis[9], we have a set of predefined categories and a set of documents. For each category, the document set is partitioned into two mutually exclusive sets of relevant and irrelevant documents. The goal of a text classification system is to determine whether a given document belongs to any of the predefined categories. Since the document can belong to zero, one, or more categories, the system can be a collection of binary classifiers, in which one classifier classifies for one category.
In Text REtrieval Conference (TREC), the above task is known as batch filtering. We will consider a variant of batch filtering called the batch-adaptive filtering[18]. In this task, during testing, if a document is retrieved by the classifier, the relevance judgement is fed back to the classifier. This feedback can be used to improve the classifier.
2.1 Corpora and Data
For text classification, we use the ModApte version of the Reuters-21578 corpus (available via http://www.daviddlewis.com/resources/testcollections/reuters21578), where unlabelled documents are removed. This version has 9,603 training documents and 3,299 test documents. Following [7, 27], only categories that have at least one document in the training and test set are retained. This reduces the number of categories to 90.
For batch-adaptive filtering, we attempt the task of TREC-9[18], where the OHSUMED collection[6] is used. We will evaluate on the OHSU topic-set, which consists of 63 topics. The training and test material consist of 54,710 and 293,856 documents respectively. In addition, there is a topic statement for each topic. For our purpose, this is treated as an additional training document for that topic. We will only use the title, abstract, author, and source sections of the documents for training and testing.
2.2 Representation
There are various ways to transform a document into a representation convenient for classification. We will use the bag-of-words approach, where we only retain frequencies of words after tokenisation, stemming, and stop-words removal. These frequencies can be normalized using various schemes[19, 6]; we use the ltc normalization:

  l_{i,d} = 1 + log2(TF_{i,d})
  t_i = log2(N / n_i)
  ltc_{i,d} = (l_{i,d} t_i) / sqrt( Σ_{j ∈ {terms in d}} (l_{j,d} t_j)^2 ),

where the subscripts i and d denote the ith term and the dth document respectively, TF_{i,d} is the frequency of the ith term in the dth document, n_i is the document-frequency of the ith term, and N is the total number of documents.
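For concreteness, a small sketch of this weighting in Python (ours; the function and variable names are not from the paper):

    import math

    def ltc_weights(tf, df, N):
        # tf: term -> raw frequency in this document, df: term -> document frequency, N: #documents
        raw = {t: (1 + math.log2(f)) * math.log2(N / df[t]) for t, f in tf.items() if f > 0}
        norm = math.sqrt(sum(w * w for w in raw.values()))
        return {t: w / norm for t, w in raw.items()}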
2.3 Feature Selection Metric
Given a set of candidate terms, we select features from the set using the likelihood ratio for the binomial distribution advocated by Dunning[5]:

  λ = [ ((R_t + R_t̄)/N)^(R_t + R_t̄) ((N_t + N_t̄)/N)^(N_t + N_t̄) ] /
      [ (R_t/(R_t + N_t))^(R_t) (N_t/(R_t + N_t))^(N_t) (R_t̄/(R_t̄ + N_t̄))^(R_t̄) (N_t̄/(R_t̄ + N_t̄))^(N_t̄) ],

where R_t (N_t) is the number of relevant (non-relevant) training documents which contain the term, R_t̄ (N_t̄) is the number of relevant (non-relevant) training documents which do not, and N is the total number of training documents.
Asymptotically, -2 ln λ is χ² distributed with 1 degree of freedom. We choose terms with -2 ln λ more than 12.13, i.e. at the 0.05% significance level. More details on the feature selection procedures will be given in section 4.
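A direct implementation of -2 ln λ from this formula (our own sketch) is:

    import math

    def neg2_log_lambda(R_t, N_t, R_nt, N_nt):
        # R_nt/N_nt are the counts of relevant/non-relevant documents without the term.
        N = R_t + N_t + R_nt + N_nt

        def ll(k, n):                 # k * log(k/n), taking 0 * log(0) = 0
            return k * math.log(k / n) if k > 0 else 0.0

        log_null = ll(R_t + R_nt, N) + ll(N_t + N_nt, N)
        log_alt = (ll(R_t, R_t + N_t) + ll(N_t, R_t + N_t)
                   + ll(R_nt, R_nt + N_nt) + ll(N_nt, R_nt + N_nt))
        return -2.0 * (log_null - log_alt)

    # A term is kept as a feature when neg2_log_lambda(...) > 12.13.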
2.4 Performance Measures
To evaluate a text classification system, we use the F_1 measure introduced by van Rijsbergen[22]. This measure combines recall and precision in the following way:

  Recall = number of correct positive predictions / number of positive examples
  Precision = number of correct positive predictions / number of positive predictions
  F_1 = (2 Recall Precision) / (Recall + Precision)

For ease of comparison, we summarize the F_1 scores over the different categories using the micro- and macro-averages of F_1 scores[11, 27]:

  Micro-avg F_1 = F_1 computed over all categories and documents pooled together
  Macro-avg F_1 = average of the within-category F_1 values

The micro- and macro-average F_1 emphasize the performance of the system on common and rare categories respectively. Using these averages, we can observe the effect of different kinds of data on a text classification system.
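For illustration, micro- and macro-averaged F_1 can be computed from per-category counts as in the following sketch (ours):

    def f1(tp, fp, fn):
        # F1 from true positives, false positives, false negatives.
        if tp == 0:
            return 0.0
        p, r = tp / (tp + fp), tp / (tp + fn)
        return 2 * p * r / (p + r)

    def micro_macro_f1(per_category):
        # per_category: list of (tp, fp, fn) tuples, one per category.
        macro = sum(f1(*c) for c in per_category) / len(per_category)
        TP = sum(c[0] for c in per_category)
        FP = sum(c[1] for c in per_category)
        FN = sum(c[2] for c in per_category)
        return f1(TP, FP, FN), macro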
In addition, for comparing two text classification systems, we use the micro sign-test (s-test) and the macro sign-test (S-test), which are two significance tests first used for comparing text classification systems in [27]. The s-test compares all the binary decisions made by the systems, while the S-test compares the within-category F_1 values. Similar to the F_1 averages, the s-test and S-test compare the performance of two systems on common and rare categories respectively.
To evaluate a batch-adaptive filtering system, we use the T9P measure of TREC-9[18]:

  T9P = number of correct positive predictions / max(50, number of positive predictions),

which is precision, with a penalty for not retrieving 50 documents.
BAYESIAN ONLINE LEARNING
Most of this section is based on work by Opper[17], Solla & Winther[20], and Csató & Opper[3].
Suppose that each document is described by a vector x, and that the relevance indicator of x for a category is given by a label y ∈ {-1, 1}, where -1 and 1 indicate irrelevant and relevant respectively. Given m instances of past data D_m = {(y_t, x_t), t = 1...m}, the predictive probability of the relevance of a document described by x is

  p(y|x, D_m) = ∫ da p(y|x, a) p(a|D_m),

where we have introduced the classifier a to assist us in the prediction. In the Bayesian approach, a is a random variable with probability density p(a|D_m), and we integrate over all the possible values of a to obtain the prediction.
Our aim is to obtain a reasonable description of a. In the Bayesian online learning framework[16, 20, 17], we begin with a prior p(a|D_0), and perform incremental Bayes' updates to obtain the posterior as data arrives:

  p(a|D_{t+1}) = p(y_{t+1}|x_{t+1}, a) p(a|D_t) / ∫ da p(y_{t+1}|x_{t+1}, a) p(a|D_t).

To make the learning online, the explicit dependence of the posterior p(a|D_{t+1}) on the past data is removed by approximating it with a distribution p(a|A_{t+1}), where A_{t+1} characterizes the distribution of a at time t + 1. For example, if p(a|A_{t+1}) is a Gaussian, then A_{t+1} refers to its mean and covariance.
Hence, starting from the prior p_0(a) = p(a|A_0), learning from a new example (y_{t+1}, x_{t+1}) comprises two steps:

  Update the posterior using Bayes' rule:
    p(a|A_t, (y_{t+1}, x_{t+1})) ∝ p(y_{t+1}|x_{t+1}, a) p(a|A_t)

  Approximate the updated posterior by parameterisation:
    p(a|A_t, (y_{t+1}, x_{t+1})) ≈ p(a|A_{t+1}),

where the approximation step is done by minimizing the Kullback-Leibler distance between the approximating and approximated distributions.
The amount of information gained about a after learning from a new example can be expressed as the Kullback-Leibler distance between the posterior and prior distributions[25]:

  IG(y_{t+1}, x_{t+1}|D_t) = ∫ da p(a|D_{t+1}) log_2 [ p(a|D_{t+1}) / p(a|D_t) ]
                           ≈ ∫ da p(a|A_{t+1}) log_2 [ p(a|A_{t+1}) / p(a|A_t) ],

where instances of the data D are replaced by the summaries A in the approximation.
To simplify notation henceforth, we use p_t(a) and ⟨...⟩_t to denote p(a|A_t) and averages taken over p(a|A_t) respectively. For example, the predictive probability can be rewritten as

  p(y|x, D_t) ≈ p(y|x, A_t) = ∫ da p(y|x, a) p_t(a) = ⟨p(y|x, a)⟩_t.

In the following sections, the scalar field h = a · x will also be used to simplify notation and calculation.
3.1 Bayesian Online Perceptron
Consider the case where a describes a perceptron. We then define the likelihood as a probit model

  p(y|x, a) = Φ( y a·x / σ_0 ),

where σ_0² is a fixed noise variance, and Φ is the cumulative Gaussian distribution

  Φ(u) = (1/√(2π)) ∫_{-∞}^{u} dτ e^{-τ²/2}.

If p_0(a) is the spherical unit Gaussian, and p_t(a) is the Gaussian approximation, Opper[16, 17] and Solla & Winther[20] obtain the following updates by equating the means and covariances of p(a|A_{t+1}) and p(a|A_t, (y_{t+1}, x_{t+1})):

  a_{t+1} = a_t + s_{t+1} ∂/∂h_t ln⟨p(y_{t+1}|h)⟩_t
  C_{t+1} = C_t + s_{t+1} s_{t+1}^T ∂²/∂h_t² ln⟨p(y_{t+1}|h)⟩_t,

where s_{t+1} = C_t x_{t+1}, ⟨p(y_{t+1}|h)⟩_t = Φ( y_{t+1} h_t / σ_{t+1} ), σ²_{t+1} = σ_0² + x_{t+1}^T C_t x_{t+1}, and h_t = a_t^T x_{t+1}.

3.1.1 Algorithm
Training the Bayesian online perceptron on m data involves successive calculation of the means a_t and covariances C_t of the posteriors, for t ∈ {1, ..., m}:

1. Initialize a_0 to be 0 and C_0 to be 1 (the identity matrix), i.e. a spherical unit Gaussian centred at the origin.
2. For t = 0, 1, ..., m - 1:
3.   y_{t+1} is the relevance indicator for document x_{t+1}
4.   Calculate s_{t+1}, σ_{t+1}, h_t and ⟨p(y_{t+1}|h)⟩_t
5.   Calculate u = y_{t+1} h_t / σ_{t+1} and φ = (1/√(2π)) exp(-u²/2)
6.   Calculate ∂/∂h_t ln⟨p(y_{t+1}|h)⟩_t = (y_{t+1}/σ_{t+1}) φ / ⟨p(y_{t+1}|h)⟩_t
7.   Calculate ∂²/∂h_t² ln⟨p(y_{t+1}|h)⟩_t = -(1/σ²_{t+1}) [ u φ/⟨p(y_{t+1}|h)⟩_t + (φ/⟨p(y_{t+1}|h)⟩_t)² ]
8.   Calculate a_{t+1} and C_{t+1}

The prediction for datum (y, x) simply involves the calculation of ⟨p(y|x, a)⟩_m = ⟨p(y|h)⟩_m.
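The updates translate almost line by line into code. The following NumPy sketch (ours, not the authors' implementation) performs one training step and computes the predictive probability:

    import numpy as np
    from scipy.stats import norm

    def perceptron_update(a, C, x, y, sigma0=0.5):
        # One Bayesian online perceptron step with the probit likelihood of Sec. 3.1.
        s = C @ x
        sigma = np.sqrt(sigma0 ** 2 + x @ C @ x)
        h = a @ x
        u = y * h / sigma
        Phi = norm.cdf(u)                      # <p(y|h)>_t
        phi = norm.pdf(u)
        g1 = (y / sigma) * phi / Phi           # first derivative of ln <p(y|h)>_t
        g2 = -(u * phi / Phi + (phi / Phi) ** 2) / sigma ** 2   # second derivative
        return a + g1 * s, C + g2 * np.outer(s, s)

    def predict(a, C, x, sigma0=0.5):
        # Predictive probability p(y = 1 | x).
        sigma = np.sqrt(sigma0 ** 2 + x @ C @ x)
        return norm.cdf((a @ x) / sigma)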
3.2 Bayesian Online Gaussian Process
Gaussian process (GP) classification has been constrained to problems with small data sets until recently, when Csató & Opper[3] and Williams & Seeger[24] introduced efficient and effective approximations to the full GP formulation. This section will outline the approach in [3].
In the GP framework, a describes a function consisting of function values {a(x)}. Using the probit model, the likelihood can be expressed as

  p(y|x, a) = Φ( y a(x) / σ_0 ),

where σ_0 and Φ are described in section 3.1.
In addition, p_0(a) is a GP prior which specifies a Gaussian distribution with zero mean function and covariance/kernel function K_0(x, x') over a function space. If p_t(a) is also a Gaussian process, then Csató & Opper obtain the following updates by equating the means and covariances of p(a|A_{t+1}) and p(a|A_t, (y_{t+1}, x_{t+1})):

  a_{t+1} = a_t + s_{t+1} ∂/∂h_t ln⟨p(y_{t+1}|h)⟩_t
  C_{t+1} = C_t + s_{t+1} s_{t+1}^T ∂²/∂h_t² ln⟨p(y_{t+1}|h)⟩_t,

where s_{t+1} = C_t k_{t+1} + e_{t+1}, ⟨p(y_{t+1}|h)⟩_t = Φ( y_{t+1} h_t / σ_{t+1} ), σ²_{t+1} = σ_0² + k*_{t+1} + k_{t+1}^T C_t k_{t+1}, and h_t = ⟨a(x_{t+1})⟩_t = a_t^T k_{t+1}.
Notice the similarities to the updates in section 3.1. The main difference is the `kernel trick' introduced into the equations through

  k*_{t+1} = K_0(x_{t+1}, x_{t+1})   and   k_{t+1} = (K_0(x_1, x_{t+1}), ..., K_0(x_t, x_{t+1}))^T.

New inputs x_{t+1} are added sequentially to the system via the (t+1)th unit vector e_{t+1}. This results in a quadratic increase in matrix size, and is a drawback for large data sets, such as those for text classification. Csató & Opper overcome this by introducing sparseness into the GP. The idea is to replace e_{t+1} by the projection

  ê_{t+1} = K_t^{-1} k_{t+1},   where K_t = {K_0(x_i, x_j), i, j = 1...t}.

This approximation introduces an error

  ε_{t+1} = ( k*_{t+1} - k_{t+1}^T K_t^{-1} k_{t+1} ) ∂/∂h_t ln⟨p(y_{t+1}|h)⟩_t,

which is used to decide when to employ the approximation. Hence, at any time the algorithm holds a set of basis vectors. It is usually desirable to limit the size of this set. To accommodate this, Csató & Opper describe a procedure for removing a basis vector from the set by reversing the process of adding new inputs.
For lack of space, the algorithm for the Bayesian online Gaussian process will not be given here. The reader is referred to [3] for more information.
EVALUATION
In this evaluation, we will compare the Bayesian online perceptron, the Bayesian online Gaussian process, and Support Vector Machines (SVM)[23]. SVM is one of the best performing learning algorithms on the Reuters-21578 corpus[7, 27]. The Bayesian methods are as described in section 3, while for SVM we will use the SVMlight package by Joachims[8]. Since SVM is a batch method, to have a fair comparison, the online methods are iterated through the training data 3 times before testing (see section A.2 for a discussion on the number of passes).
4.1.1 Feature Selection
For the Reuters-21578 corpus, we select as features for each category the set of all words for which -2 ln λ > 12.13. We further prune these by using only the top 300 features. This reduces the computation time required for the calculation of the covariances of the Bayesian classifiers.
Since SVM is known to perform well for many features, for the SVM classifiers we also use the set of words which occur in at least 3 training documents[7]. This gives us 8,362 words. Note that these words are non-category specific.
4.1.2 Thresholding
The probabilistic outputs from the Bayesian classifiers can be used in various ways. The most direct way is to use the Bayes decision rule, p(y = 1|x, D_m) > 0.5, to determine the relevance of the document described by x. (For SVM, to minimise structural risks, we would classify the document as relevant if w·x + b > 0, where w is the hyperplane and b is the bias.) However, as discussed in [10, 26], this is not optimal for the chosen evaluation measure.
Therefore, in addition to 0.5 thresholding, we also empirically optimise the threshold for each category for the F_1 measure on the training documents. This scheme, which we shall call MaxF1, has also been employed in [27] for thresholding kNN and LLSF classifiers. The difference from our approach is that the threshold in [27] is calculated over a validation set. We do not use a validation set because we feel that, for very rare categories, it is hard to obtain a reasonable validation set from the training documents.
For the Bayesian classifiers, we also perform an analytical threshold optimisation suggested by Lewis[10]. In this scheme, which we shall call ExpectedF1, the threshold for each category is selected to optimise the expected F_1:

  E[F_1] ≈ Π_{i∈D} (1 - p_i)                                      if |D_θ^+| = 0
  E[F_1] ≈ 2 Σ_{i∈D_θ^+} p_i / ( |D_θ^+| + Σ_{i∈D} p_i )           otherwise,

where θ is the threshold, p_i is the probability assigned to document i by the classifier, D is the set of all test documents, and D_θ^+ is the set of test documents with probabilities higher than the threshold θ.
Note that ExpectedF1 can only be applied after the probabilities for all the test documents are assigned. Hence the classification can only be done in batch. This is unlike the first two schemes, where classification can be done online.
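A small sketch (ours) of selecting the ExpectedF1 threshold from the assigned probabilities:

    def expected_f1(probs, theta):
        # Expected F1 for threshold theta, given classifier probabilities for all test docs.
        above = [p for p in probs if p > theta]
        if not above:
            prod = 1.0
            for p in probs:
                prod *= (1 - p)
            return prod
        return 2 * sum(above) / (len(above) + sum(probs))

    def best_threshold(probs):
        # Pick the candidate threshold that maximises the expected F1.
        candidates = sorted(set(probs)) + [1.0]
        return max(candidates, key=lambda th: expected_f1(probs, th))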
4.1.3 Results and Discussion
Table 1: Description of Methods. (The jitter terms 10^-4 δ_ij are discussed in section A.3.)

  SVM-1       K_0 = x_i · x_j + 1
  SVM-2       K_0 = (x_i · x_j + 1)^2
  SVM-R1      K_0 = exp(-(1/2)|x_i - x_j|^2)
  Perceptron  σ_0 = 0.5, one fixed feature (for bias)
  GP-1        σ_0 = 0.5, K_0 = x_i · x_j + 1 + 10^-4 δ_ij
  GP-2        σ_0 = 0.5, K_0 = (x_i · x_j + 1)^2 + 10^-4 δ_ij
  GP-R1       σ_0 = 0.5, K_0 = exp(-(1/2)|x_i - x_j|^2) + 10^-4 δ_ij

Table 2: Micro-/Macro-average F_1

              0.5 threshold    MaxF1            ExpectedF1
  SVM_a-1     86.15 / 42.63    86.35 / 56.92    -
  SVM_a-2     85.44 / 40.13    86.19 / 56.42    -
  SVM_a-R1    84.99 / 37.61    86.63 / 53.14    -
  SVM_b-1     85.60 / 52.03    85.05 / 52.43    -
  SVM_b-2     85.60 / 50.53    84.50 / 50.49    -
  SVM_b-R1    85.75 / 50.52    84.65 / 51.27    -
  Perceptron  85.12 / 45.23    86.69 / 52.16    86.44 / 53.08
  GP-1        85.08 / 45.20    86.73 / 52.12    86.54 / 53.12
  GP-2        85.58 / 47.90    86.60 / 52.19    86.77 / 55.04
  GP-R1       85.18 / 44.88    86.76 / 52.61    86.93 / 53.35
Table 1 lists the parameters for the algorithms used in our evaluation, while Tables 2 and 3 tabulate the results. There are two sets of results for SVM, labeled SVM_a and SVM_b. The latter uses the same set of features as the Bayesian classifiers (i.e. using the -2 ln λ measure), while the former uses the set of 8,362 words as features.
Table 2 summarizes the results using F_1 averages. Table 3 compares the classifiers using the s-test and S-test. Here, the MaxF1 thresholds are used for the classification decisions. Each row in these tables compares the method listed in the first column with the other methods. The significance levels from [27] are used.
Several observations can be made:
- Generally, MaxF1 thresholding increases the performance of all the systems, especially for rare categories.
- For the Bayesian classifiers, ExpectedF1 thresholding improves the performance of the systems on rare categories.
- Perceptron implicitly implements the kernel used by GP-1, hence their similar results.
- With MaxF1 thresholding, feature selection impedes the performance of SVM.
- In Table 2, SVM with 8,362 features has slightly lower micro-average F_1 than the Bayesian classifiers. However, the s-tests in Table 3 show that Bayesian classifiers outperform SVM for significantly many common categories. Hence, in addition to computing average F_1 measures, it is useful to perform sign tests.
- As shown in Table 3, for limited features, Bayesian classifiers outperform SVM for both common and rare categories.
- Based on the sign tests, the Bayesian classifiers outperform SVM (using 8,362 words) for common categories, and vice versa for rare categories.
Table 3: s-test/S-test using MaxF1 thresholding. Each cell compares the method of the row against the method of the column (the methods are SVM_a-1, SVM_a-2, SVM_a-R1, SVM_b-1, SVM_b-2, SVM_b-R1, Perceptron, GP-1, GP-2 and GP-R1), giving the s-test result before and the S-test result after the slash. '≫' or '≪' means P-value ≤ 0.01; '>' or '<' means 0.01 < P-value ≤ 0.05; '∼' means P-value > 0.05. [Individual cell entries omitted.]
The last observation suggests that one can use Bayesian classifiers for common categories, and SVM for rare ones.

4.2 Filtering on OHSUMED
In this section, only the Bayesian online perceptron will be considered. In order to avoid numerical integration of the information gain measure, instead of the probit model of section 3.1, here we use a simpler likelihood model in which the outputs are flipped with a fixed probability ε:

  p(y|x, a) = ε + (1 - 2ε) Θ(y a·x),   where Θ(x) = 1 if x > 0 and 0 otherwise.

The update equations also change accordingly, e.g.

  ⟨p(y_{t+1}|h)⟩_t = ε + (1 - 2ε) Φ( y_{t+1} h_t / σ_{t+1} ),
  σ²_{t+1} = x_{t+1}^T C_t x_{t+1}, and h_t = a_t^T x_{t+1}.
Using this likelihood measure, we can express the information gained from datum (y_{t+1}, x_{t+1}) as

  IG(y_{t+1}, x_{t+1}|D_t) ≈ log_2 ε + Φ( y_{t+1} h_{t+1} / σ_{t+1} ) log_2( (1-ε)/ε ) - log_2 ⟨p(y_{t+1}|h)⟩_t,

where σ²_{t+1} = x_{t+1}^T C_{t+1} x_{t+1} and h_{t+1} = a_{t+1}^T x_{t+1}.
We use ε = 0.1 in this evaluation. The following sections will describe the algorithm in detail. To simplify presentation, we will divide the batch-adaptive filtering task into batch and adaptive phases.
4.2.1 Feature Selection and Adaptation
During the batch phase, words for which -2 ln λ > 12.13 are selected as features.
During the adaptive phase, when we obtain a feedback, we update the features by adding any new words with -2 ln λ > 12.13. When a feature is added, the distribution of the perceptron a is extended by one dimension:

  a → [ a ; 0 ],    C → [ C  0 ; 0  1 ]  (in block notation),

i.e. the new weight starts with mean zero and unit variance, uncorrelated with the existing weights; a sketch of this extension is given below.
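In NumPy-style code (our sketch), the extension is simply:

    import numpy as np

    def add_feature(a, C):
        # Extend the perceptron posterior by one new feature with a unit-Gaussian prior.
        a_new = np.append(a, 0.0)
        C_new = np.zeros((len(a_new), len(a_new)))
        C_new[:-1, :-1] = C
        C_new[-1, -1] = 1.0
        return a_new, C_new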
4.2.2 Training the classifier
During the batch phase, the classifier is iterated through the training documents 3 times. In addition, the relevant documents are collected for use during the adaptive phase.
During the adaptive phase, retrieved relevant documents are added to this collection. When a document is retrieved, the classifier is trained on that document and its given relevance judgement.
The classifier will be trained on irrelevant documents most of the time. To prevent it from "forgetting" relevant documents due to its limited capacity, whenever we train on an irrelevant document, we would also train on a past relevant document. This past relevant document is chosen successively from the collection of relevant documents.
This is needed also because new features might have been
added since a relevant document was last trained on. Hence
the classifier would be able to gather new information from
the same document again due to the additional features.
Note that the past relevant document does not need to be
chosen in successive order. Instead, it can be chosen using
a probability distribution over the collection. This will be
desirable when handling topic-drifts.
We will evaluate the effectiveness of this strategy of retraining
on past retrieved relevant documents, and denote
its use by +rel. Though its use means that the algorithm
is no longer online, asymptotic efficiency is unaffected, since
only one past document is used for training at any instance.
4.2.3 Information Gain
During testing, there are two reasons why we retrieve a document. The first is that it is relevant, i.e. p(y = 1|x, D_t) > 0.5, where x represents the document. The second is that, although the document is deemed irrelevant by the classifier, the classifier would gain useful information from the document. Using the measure IG(y, x|D_t), we calculate the expected information gain

  IG(x|D_t) = Σ_{y' ∈ {-1,1}} p(y = y'|x, D_t) IG(y = y', x|D_t).

A document is then deemed useful if its expected information gain is at least θ. Optimizing for the T9P measure (i.e. targeting 50 documents), we choose θ to be

  θ = 0.999 ( 1 + exp( -(N_ret - 50.0)/10 ) )^(-1) + 0.001,

where N_ret is the total number of documents that the system has retrieved. Figure 1 plots θ against N_ret. Note that this is a kind of active learning, where the willingness to trade off precision for learning decreases with N_ret. The use of this information gain criterion will be denoted by +ig.

Figure 1: θ versus N_ret, tuned for T9P (target number of documents = 50).

We will test the effectiveness of the information gain strategy against an alternative one. The alternative, denoted by +rnd, will randomly select documents to retrieve based on the probability

  U = 0                          if N_ret ≥ 50
  U = (50 - N_ret) / 293,856     otherwise,

where 293,856 is the number of test documents.
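Putting the retrieval rule together, a rough sketch (ours) for a single test document is:

    import math

    def theta(n_ret):
        # Information-gain threshold, tuned for T9P (target of 50 documents).
        return 0.999 / (1.0 + math.exp(-(n_ret - 50.0) / 10.0)) + 0.001

    def retrieve(p_rel, ig_pos, ig_neg, n_ret):
        # p_rel = p(y=1|x, D_t); ig_pos/ig_neg = IG(y=+1, x|D_t) and IG(y=-1, x|D_t),
        # computed from the IG expression above.
        if p_rel > 0.5:
            return True                               # deemed relevant
        expected_ig = p_rel * ig_pos + (1.0 - p_rel) * ig_neg
        return expected_ig >= theta(n_ret)            # query for information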
4.2.4 Results and Discussion
Table 4 lists the results of seven systems. The first two are
of Microsoft Research Cambridge and Fudan University respectively
. These are the only runs in TREC-9 for the task.
The third is of the system as described in full, i.e. Bayesian
online perceptron, with retraining on past retrieved relevant
documents, and with the use of information gain. The rest
are of the Bayesian online perceptron with different combinations
of strategies.
Besides the T9P measure, for the sake of completeness, Table
4 also lists the other measures used in TREC-9. Taken
together, the measures show that Bayesian online perceptron
, together with the consideration for information gain,
is a very competitive method.
For the systems with +rel, the collection of past known
relevant documents is kept. Although Microsoft uses this
same collection for its query reformulation, another collection
of all previously seen documents is used for threshold
adaptation. Fudan maintains a collection of past retrieved
documents and uses this collection for query adaptation.
Footnote 5: [18] reports results from run ok9bfr2po, while we report results from the slightly better run ok9bf2po.

Figure 2: Variation of the number of features (average number of features versus average number of relevant documents retrieved) for Pptron+rel+ig, Pptron+ig, Pptron+rnd, and Pptron. The plots for Pptron+rel+ig and Pptron+ig are very close. So are the plots for Pptron+rnd and Pptron.
In a typical operational system, retrieved relevant documents
are usually retained, while irrelevant documents are
usually discarded. Therefore +rel is a practical strategy to
adopt.
Figure 2 plots the average number of features during the
adaptive phase.
We can see that features are constantly
added as relevant documents are seen. When the classifier
is retrained on past documents, the new features enable the
classifier to gain new information from these documents. If
we compare the results for Pptron+rel and Pptron in Table
4, we find that not training on past documents causes
the number of relevant documents retrieved to drop by 5%.
Similarly, for Pptron+rel+ig and Pptron+ig, the drop is
8%.
Table 5 breaks down the retrieved documents into those
that the classifier deems relevant and those that the classifier
is actually querying for information, for Pptron+ig
and Pptron+rnd. The table shows that none of the documents
randomly queried are relevant documents. This is
not surprising, since only an average of 0.017% of the test
documents are relevant. In contrast, the information gain
strategy is able to retrieve 313 relevant documents, which is
26.1% of the documents queried. This is a significant result.
Consider Pptron+ig. Table 4 shows that for Pptron, when
the information gain strategy is removed, only 731 relevant
documents will be retrieved. Hence, although most of the
documents queried are irrelevant, information gained from
these queries helps recall by the classifier (i.e. 815 documents
versus 731 documents), which is important for reaching
the target of 50 documents.
MacKay[13] has noted the phenomenon of querying for
irrelevant documents which are at the edges of the input
space, and suggested maximizing information in a defined
region of interest instead.
Finding this region for batch-adaptive
filtering remains a subject for further research.
Comparing the four plots in Figure 2, we find that, on
average, the information gain strategy causes about 3% more
features to be discovered for the same number of relevant
documents retrieved. A consequence of this is better recall.
Table 4: Results for batch-adaptive filtering optimized for the T9P measure. (For the Microsoft run, see Footnote 5.)

                           Microsoft  Fudan    Pptron+rel+ig  Pptron+ig  Pptron+rnd  Pptron+rel  Pptron
  Total retrieved          3562       3251     2716           2391       2533        1157        1057
  Relevant retrieved       1095       1061     1227           1128       732         772         731
  Macro-average recall     39.5       37.9     36.2           33.3       20.0        20.8        20.0
  Macro-average precision  30.5       32.2     35.8           35.8       21.6        61.9        62.3
  Mean T9P                 30.5       31.7     31.3           29.8       19.2        21.5        20.8
  Mean Utility             -4.397     -1.079   15.318         15.762     -5.349      18.397      17.730
  Mean T9U                 -4.397     -1.079   15.318         15.762     -5.349      18.397      17.730
  Mean scaled utility      -0.596     -0.461   -0.025         0.016      -0.397      0.141       0.138
  Zero returns             0          0        0              0          0           8           0

Table 5: Breakdown of documents retrieved for Pptron+ig and Pptron+rnd. The numbers for the latter are in brackets.

                                                              Relevant     Not Relevant   Total
  # docs retrieved by perceptron classifier proper            815 (732)    378 (345)      1193 (1077)
  # docs retrieved by information gain (or random strategy)   313 (0)      885 (1456)     1198 (1456)
  Total                                                       1128 (732)   1263 (1801)    2391 (2533)
CONCLUSIONS AND FURTHER WORK
We have implemented and tested Bayesian online perceptron
and Gaussian processes on the text classification problem
, and have shown that their performance is comparable
to that of SVM, one of the best learning algorithms on
text classification in the published literature. We have also
demonstrated the effectiveness of online learning with information
gain on the TREC-9 batch-adaptive filtering task.
Our results on text classification suggest that one can use
Bayesian classifiers for common categories, and maximum
margin classifiers for rare categories. The partitioning of the
categories into common and rare ones in an optimal way is
an interesting problem.
SVM has been employed to use relevance feedback by Drucker et al.[4], where the retrieval is in groups of 10 documents. In essence, this is a form of adaptive routing. It
would be instructive to see how Bayesian classifiers perform
here, without storing too many previously seen documents.
It would also be interesting to compare the merits of incremental
SVM[21, 1] with the Bayesian online classifiers.
Acknowledgments
We would like to thank Lehel Csató for providing details on the implementation of the Gaussian process, Wee Meng Soon for assisting in the data preparation, Yiming Yang for clarifying the representation used in [27], and Loo Nin Teow for proof-reading the manuscript. We would also like to thank the reviewers for their many helpful comments in improving the paper.
REFERENCES
[1] G. Cauwenberghs and T. Poggio. Incremental and
decremental support vector machine learning. In T. K.
Leen, T. G. Dietterich, and V. Tresp, editors, NIPS
2000, volume 13. The MIT Press, 2001.
[2] D. Cox and E. Snell. Analysis of Binary Data.
Chapman & Hall, London, 2nd edition, 1989.
[3] L. Csat
o and M. Opper. Sparse representation for
Gaussian process models. In T. K. Leen, T. G.
Dietterich, and V. Tresp, editors, NIPS 2000,
volume 13. The MIT Press, 2001.
[4] H. Drucker, B. Shahrary, and D. C. Gibbon.
Relevance feedbackusing support vector machines. In
Proceedings of the 2001 International Conference on
Machine Learning, 2001.
[5] T. E. Dunning. Accurate methods for the statistics of
surprise and coincidence. Computational Linguistics,
19(1):6174, 1993.
[6] W. Hersh, C. Buckley, T. Leone, and D. Hickam.
OHSUMED: An interactive retrieval evaluation and
new large test collection for research. In Proceedings of
the 17th Annual International ACM SIGIR
Conference on Research and Development in
Information Retrieval, pages 192201, 1994.
[7] T. Joachims. Text categorization with support vector
machines: Learning with many relevant features. In
Proceedings of the European Conference on Machine
Learning (ECML), pages 137142, 1998.
[8] T. Joachims. Making large-scale SVM learning
practical. In B. Sch
olkopf, C. Burges, and A. Smola,
editors, Advances in Kernel Methods -- Support
Vector Learning, chapter 11. The MIT Press, 1999.
[9] D. D. Lewis. Representation and Learning in
Information Retrieval. PhD thesis, Department of
Computer and Information Science, University of
Massachusetts at Amherst, 1992.
[10] D. D. Lewis. Evaluating and optimizing automomous
text classification systems. In Proceedings of the 18th
Annual International ACM SIGIR Conference on
Research and Development in Information Retrieval,
pages 246254, 1995.
[11] D. D. Lewis, R. E. Schapire, J. P. Callan, and
R. Papka. Training algorithms for linear text
classifiers. In Proceedings of the 19th Annual
International ACM SIGIR Conference on Research
and Development in Information Retrieval, pages
298306, 1996.
[12] D. J. Mackay. Bayesian interpolation. Neural
Computation, 4(3):415447, 1991.
[13] D. J. Mackay. Information-based objective functions
for active data selection. Neural Computation,
4(4):590604, 1992.
103
[14] R. M. Neal. Monte Carlo implementation of Gaussian
process models for Bayesian regression and
classification. Technical Report CRG-TR-97-2,
Department of Computer Science, University of
Toronto, January 1997.
[15] H. T. Ng, W. B. Goh, and K. L. Low. Feature
selection, perceptron learning, and a usability case
study for text categorization. In Proceedings of the
20th Annual International ACM SIGIR Conference on
Research and Development in Information Retrieval,
pages 6773, 1997.
[16] M. Opper. Online versus offline learning from random
examples: General results. Physical Review Letters,
77:46714674, 1996.
[17] M. Opper. A Bayesian approach to online learning. In
D. Saad, editor, On-Line Learning in Neural
Networks. Combridge University Press, 1998.
[18] S. Robertson and D. A. Hull. The TREC-9 filtering
trackfinal report. In Proceedings of the 9th Text
REtrieval Conference (TREC-9), pages 2540, 2001.
[19] G. Salton and C. Buckley. Term-weighting approaches
in automatic text retrieval. Information Processing
and Management, 24(5):513523, 1988.
[20] S. A. Solla and O. Winther. Optimal perceptron
learning: an online Bayesian approach. In D. Saad,
editor, On-Line Learning in Neural Networks.
Combridge University Press, 1998.
[21] N. A. Syed, H. Liu, and K. K. Sung. Incremental
learning with support vector machines. In Proceedings
of the Workshop on Support Vector Machines at the
International Joint Conference on Artificial
Intelligence (IJCAI-99), 1999.
[22] C. van Rijsbergen. Information Retrieval.
Butterworths, London, 1979.
[23] V. N. Vapnik. The Nature of Statistical Learning
Theory. Springer, New York, 1995.
[24] C. K. Williams and M. Seeger. Using the Nyström
method to speed up kernel machines. In T. K. Leen,
T. G. Dietterich, and V. Tresp, editors, NIPS 2000,
volume 13. The MIT Press, 2001.
[25] O. Winther. Bayesian Mean Field Algorithms for
Neural Networks and Gaussian Processes. PhD thesis,
University of Copenhagen, CONNECT, The Niels Bohr
Institute, 1998.
[26] Y. Yang. A study on thresholding strategies for text
categorization. In Proceedings of the 24th Annual
International ACM SIGIR Conference on Research
and Development in Information Retrieval, pages
137-145, 2001.
[27] Y. Yang and X. Liu. A re-examination of text
categorization methods. In Proceedings of the 22nd
Annual International ACM SIGIR Conference on
Research and Development in Information Retrieval,
pages 42-49, 1999.
APPENDIX
A. ON THE CHOICE OF PARAMETERS
A.1 Likelihood model
MacKay [12] has suggested the evidence framework for model
selection. Here, we calculate the evidence on the training
data using the final posterior for a:

    p(D_m) = ⟨ ∏_{t=1}^{m} p(y_t | x_t, a) ⟩_m

Table 6: Micro-/Macro-avg F1 (MaxF1 thresholds) and Avg
log-evidence on Reuters-21578 for different likelihood models,
using Bayesian online perceptron.

              Micro-/Macro-avg F1    Avg log-evidence
    Logit     86.48 / 52.75          -45.02
    Probit    86.69 / 52.16          -34.32
    Flip      85.94 / 53.00          -368.8

Table 7: Micro-/Macro-avg F1 (MaxF1 thresholds) and Avg
log-evidence on Reuters-21578 for different passes over the
training data, using Bayesian online perceptron.

    Passes    Micro-/Macro-avg F1    Avg log-evidence
    1         87.08 / 52.87          -35.56
    2         86.92 / 52.63          -34.36
    3         86.69 / 52.16          -34.32
    4         86.62 / 52.75          -34.54
    5         85.22 / 46.93          -34.69
Table 6 illustrates this for selecting the likelihood measure
for the text classification task, using the Bayesian online
perceptron. In the table, the probit model follows the
formulation in section 3.1 with σ_0 = 0.5, the logit model is
estimated by the probit model with σ_0 = 1.6474 [2], and the flip
noise model is as described in section 4.2. Although their
F1 averages are similar, the evidences show that the probit
model with σ_0 = 0.5 is a more likely model. The small evidence
for the flip noise model is because much information
is lost through the threshold function.
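As a rough, hypothetical sketch (not the authors' code) of how the evidence above could be estimated: assuming a Gaussian final posterior over the weight vector a and the probit likelihood with noise parameter σ_0, the posterior average of the product of per-example likelihoods can be approximated by Monte Carlo. All function and parameter names here are illustrative.

```python
# Hypothetical sketch: Monte Carlo estimate of log p(D_m), averaging the product of
# per-example probit likelihoods over samples from an assumed Gaussian posterior on a.
import numpy as np
from scipy.special import logsumexp
from scipy.stats import norm

def log_evidence(X, y, post_mean, post_cov, sigma0=0.5, n_samples=1000, seed=0):
    """X: (m, d) features; y: (m,) labels in {-1, +1}; probit likelihood assumed."""
    rng = np.random.default_rng(seed)
    A = rng.multivariate_normal(post_mean, post_cov, size=n_samples)  # (S, d) samples of a
    margins = (X @ A.T) * y[:, None] / sigma0                         # (m, S)
    log_lik = norm.logcdf(margins).sum(axis=0)                        # log prod_t p(y_t|x_t, a) per sample
    return logsumexp(log_lik) - np.log(n_samples)                     # log of the posterior average
```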
A.2 Effects of multiple passes over data
Using the evidence measure defined in section A.1, Table
7 illustrates the effects of different numbers of passes over
training data for Bayesian online perceptron. Treating the
number of passes as a parameter for the algorithm, we see
that having 3 passes over the data gives the highest average
evidence, although there is no significant difference between
2, 3, or 4 passes. Similar results hold for the Gaussian process
for the 3 different kernels. Hence, in section 4.1, we
choose to use 3 passes for all the Bayesian algorithms.
A.3 Jitter term
The addition of the jitter term 10^{-4} δ_{ij} (where δ_{ij} = 1
if i = j, and 0 otherwise) for Gaussian process classification
is recommended by Neal [14]. This term improves
the conditioning of the matrix computations while having
a small effect on the model. From our preliminary experiments,
without the jitter term, the matrix operations in
Bayesian online Gaussian process become ill-conditioned.
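A minimal sketch of the jitter trick, assuming a precomputed kernel (Gram) matrix; the function name is illustrative.

```python
# Minimal sketch: add the jitter term 1e-4 * I to a kernel matrix so that
# Cholesky-based operations in the GP classifier stay well conditioned.
import numpy as np

def jittered_cholesky(K, jitter=1e-4):
    """K: (n, n) kernel matrix. Returns the Cholesky factor of K + jitter * I."""
    return np.linalg.cholesky(K + jitter * np.eye(K.shape[0]))
```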
A.4 Sizes of the basis vector sets
The sizes of the sets of basis vectors for GP in section 4.1
are limited to less than or equal to the number of features
selected. This is because, as noted by Csató & Opper [3],
for a feature space of finite dimension M, no more than M
basis vectors are needed, due to linear dependence.
43 | Beyond PageRank: Machine Learning for Static Ranking | Since the publication of Brin and Page's paper on PageRank, many in the Web community have depended on PageRank for the static (query-independent) ordering of Web pages. We show that we can significantly outperform PageRank using features that are independent of the link structure of the Web. We gain a further boost in accuracy by using data on the frequency at which users visit Web pages. We use RankNet, a ranking machine learning algorithm, to combine these and other static features based on anchor text and domain characteristics. The resulting model achieves a static ranking pairwise accuracy of 67.3% (vs. 56.7% for PageRank or 50% for random). | INTRODUCTION
Over the past decade, the Web has grown exponentially in size.
Unfortunately, this growth has not been isolated to good-quality
pages. The number of incorrect, spamming, and malicious (e.g.,
phishing) sites has also grown rapidly. The sheer number of both
good and bad pages on the Web has led to an increasing reliance
on search engines for the discovery of useful information. Users
rely on search engines not only to return pages related to their
search query, but also to separate the good from the bad, and
order results so that the best pages are suggested first.
To date, most work on Web page ranking has focused on
improving the ordering of the results returned to the user (query-dependent
ranking, or dynamic ranking). However, having a good
query-independent ranking (static ranking) is also crucially
important for a search engine. A good static ranking algorithm
provides numerous benefits:
Relevance: The static rank of a page provides a general
indicator to the overall quality of the page. This is a
useful input to the dynamic ranking algorithm.
Efficiency: Typically, the search engine's index is
ordered by static rank. By traversing the index from high-quality
to low-quality pages, the dynamic ranker may
abort the search when it determines that no later page
will have as high of a dynamic rank as those already
found. The more accurate the static rank, the better this
early-stopping ability, and hence the quicker the search
engine may respond to queries.
Crawl Priority: The Web grows and changes as quickly
as search engines can crawl it. Search engines need a way
to prioritize their crawl--to determine which pages to re-crawl
, how frequently, and how often to seek out new
pages. Among other factors, the static rank of a page is
used to determine this prioritization. A better static rank
thus provides the engine with a higher quality, more up-to
-date index.
Google is often regarded as the first commercially successful
search engine. Their ranking was originally based on the
PageRank algorithm [5][27]. Due to this (and possibly due to
Google's promotion of PageRank to the public), PageRank is
widely regarded as the best method for the static ranking of Web
pages.
Though PageRank has historically been thought to perform quite
well, there has yet been little academic evidence to support this
claim. Even worse, there has recently been work showing that
PageRank may not perform any better than other simple measures
on certain tasks. Upstill et al. have found that for the task of
finding home pages, the number of pages linking to a page and the
type of URL were as, or more, effective than PageRank [32]. They
found similar results for the task of finding high quality
companies [31]. PageRank has also been used in systems for
TREC's "very large collection" and "Web track" competitions,
but with much less success than had been expected [17]. Finally,
Amento et al. [1] found that simple features, such as the number
of pages on a site, performed as well as PageRank.
Despite these, the general belief remains among many, both
academic and in the public, that PageRank is an essential factor
for a good static rank. Failing this, it is still assumed that using the
link structure is crucial, in the form of the number of inlinks or the
amount of anchor text.
In this paper, we show there are a number of simple url- or page-
based features that significantly outperform PageRank (for the
purposes of statically ranking Web pages) despite ignoring the
structure of the Web. We combine these and other static features
using machine learning to achieve a ranking system that is
significantly better than PageRank (in pairwise agreement with
human labels).
A machine learning approach for static ranking has other
advantages besides the quality of the ranking. Because the
measure consists of many features, it is harder for malicious users
to manipulate it (i.e., to raise their page's static rank to an
undeserved level through questionable techniques, also known as
Web spamming). This is particularly true if the feature set is not
known. In contrast, a single measure like PageRank can be easier
to manipulate because spammers need only concentrate on one
goal: how to cause more pages to point to their page. With an
algorithm that learns, a feature that becomes unusable due to
spammer manipulation will simply be reduced or removed from
the final computation of rank. This flexibility allows a ranking
system to rapidly react to new spamming techniques.
A machine learning approach to static ranking is also able to take
advantage of any advances in the machine learning field. For
example, recent work on adversarial classification [12] suggests
that it may be possible to explicitly model the Web page
spammer's (the adversary) actions, adjusting the ranking model in
advance of the spammer's attempts to circumvent it. Another
example is the elimination of outliers in constructing the model,
which helps reduce the effect that unique sites may have on the
overall quality of the static rank. By moving static ranking to a
machine learning framework, we not only gain in accuracy, but
also gain in the ability to react to spammer's actions, to rapidly
add new features to the ranking algorithm, and to leverage
advances in the rapidly growing field of machine learning.
Finally, we believe there will be significant advantages to using
this technique for other domains, such as searching a local hard
drive or a corporation's intranet. These are domains where the
link structure is particularly weak (or non-existent), but there are
other domain-specific features that could be just as powerful. For
example, the author of an intranet page and his/her position in the
organization (e.g., CEO, manager, or developer) could provide
significant clues as to the importance of that page. A machine
learning approach thus allows rapid development of a good static
algorithm in new domains.
This paper's contribution is a systematic study of static features,
including PageRank, for the purposes of (statically) ranking Web
pages. Previous studies on PageRank typically used subsets of the
Web that are significantly smaller (e.g., the TREC VLC2 corpus,
used by many, contains only 19 million pages). Also, the
performance of PageRank and other static features has typically
been evaluated in the context of a complete system for dynamic
ranking, or for other tasks such as question answering. In contrast,
we explore the use of PageRank and other features for the direct
task of statically ranking Web pages.
We first briefly describe the PageRank algorithm. In Section 3 we
introduce RankNet, the machine learning technique used to
combine static features into a final ranking. Section 4 describes
the static features. The heart of the paper is in Section 5, which
presents our experiments and results. We conclude with a
discussion of related and future work.
PAGERANK
The basic idea behind PageRank is simple: a link from a Web
page to another can be seen as an endorsement of that page. In
general, links are made by people. As such, they are indicative of
the quality of the pages to which they point: when creating a
page, an author presumably chooses to link to pages deemed to be
of good quality. We can take advantage of this linkage
information to order Web pages according to their perceived
quality.
Imagine a Web surfer who jumps from Web page to Web page,
choosing with uniform probability which link to follow at each
step. In order to reduce the effect of dead-ends or endless cycles
the surfer will occasionally jump to a random page with some
small probability ε, or when on a page with no out-links. If
averaged over a sufficient number of steps, the probability the
surfer is on page j at some point in time is given by the formula:

    P(j) = ε/N + (1 - ε) Σ_{i ∈ B_j} P(i) / |F_i|        (1)
where F_i is the set of pages that page i links to, B_j is the set of
pages that link to page j, and N is the total number of pages. The PageRank score for node j is defined
as this probability: PR(j)=P(j). Because equation (1) is recursive,
it must be iteratively evaluated until P(j) converges (typically, the
initial distribution for P(j) is uniform). The intuition is, because a
random surfer would end up at the page more frequently, it is
likely a better page. An alternative view for equation (1) is that
each page is assigned a quality, P(j). A page "gives" an equal
share of its quality to each page it points to.
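As a minimal illustration of iterating equation (1) (not the authors' Web-scale implementation), the sketch below runs the computation on a toy in-memory graph. The value ε = 0.15 (i.e., a damping factor of 0.85) and the uniform treatment of dangling pages are assumptions.

```python
# Minimal sketch of iterating equation (1) on a toy link graph until convergence.
# 'out_links' maps each page to the pages it links to (F_i); every linked page is
# assumed to appear as a key. eps is the random-jump probability.
def pagerank(out_links, eps=0.15, tol=1e-8, max_iter=100):
    pages = list(out_links)
    n = len(pages)
    p = {j: 1.0 / n for j in pages}             # uniform initial distribution
    for _ in range(max_iter):
        new_p = {j: eps / n for j in pages}     # random-jump term
        for i, targets in out_links.items():
            if not targets:                     # dangling page: spread its mass uniformly
                for j in pages:
                    new_p[j] += (1 - eps) * p[i] / n
            else:
                share = (1 - eps) * p[i] / len(targets)
                for j in targets:
                    new_p[j] += share
        if sum(abs(new_p[j] - p[j]) for j in pages) < tol:
            return new_p
        p = new_p
    return p

# Example: pagerank({"a": ["b"], "b": ["a", "c"], "c": []})
```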
PageRank is computationally expensive. Our collection of 5
billion pages contains approximately 370 billion links. Computing
PageRank requires iterating over these billions of links multiple
times (until convergence). It requires large amounts of memory
(or very smart caching schemes that slow the computation down
even further), and if spread across multiple machines, requires
significant communication between them. Though much work has
been done on optimizing the PageRank computation (see e.g.,
[25] and [6]), it remains a relatively slow, computationally
expensive property to compute.
RANKNET
Much work in machine learning has been done on the problems of
classification and regression. Let X = {x_i} be a collection of feature
vectors (typically, a feature is any real valued number), and
Y = {y_i} be a collection of associated classes, where y_i is the class
of the object described by feature vector x_i. The classification
problem is to learn a function f that maps y_i = f(x_i), for all i. When
y_i is real-valued as well, this is called regression.
Static ranking can be seen as a regression problem. If we let x_i
represent features of page i, and y_i be a value (say, the rank) for
each page, we could learn a regression function that mapped each
page's features to their rank. However, this over-constrains the
problem we wish to solve. All we really care about is the order of
the pages, not the actual value assigned to them.
Recent work on this ranking problem [7][13][18] directly
attempts to optimize the ordering of the objects, rather than the
value assigned to them. For these, let Z={<i,j>} be a collection of
pairs of items, where item i should be assigned a higher value than
item j. The goal of the ranking problem, then, is to learn a
function f such that,
)
(
)
(
,
,
j
i
f
f
j
i
x
x
Z
>
Note that, as with learning a regression function, the result of this
process is a function (f) that maps feature vectors to real values.
This function can still be applied anywhere that a regression-learned
function could be applied. The only difference is the
technique used to learn the function. By directly optimizing the
ordering of objects, these methods are able to learn a function that
does a better job of ranking than do regression techniques.
We used RankNet [7], one of the aforementioned techniques for
learning ranking functions, to learn our static rank function.
RankNet is a straightforward modification to the standard neural
network back-prop algorithm. As with back-prop, RankNet
attempts to minimize the value of a cost function by adjusting
each weight in the network according to the gradient of the cost
function with respect to that weight. The difference is that, while a
typical neural network cost function is based on the difference
between the network output and the desired output, the RankNet
cost function is based on the difference between a pair of network
outputs. That is, for each pair of feature vectors <i,j> in the
training set, RankNet computes the network outputs o_i and o_j.
Since vector i is supposed to be ranked higher than vector j, the
larger is o_j - o_i, the larger the cost.
RankNet also allows the pairs in Z to be weighted with a
confidence (posed as the probability that the pair satisfies the
ordering induced by the ranking function). In this paper, we used
a probability of one for all pairs. In the next section, we will
discuss the features used in our feature vectors, x_i.
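The pair cost described above can be written compactly. The following numpy sketch (the network producing o_i and o_j is abstracted away) shows the pairwise cross-entropy cost with target probability one, as used in this paper, together with its gradient with respect to the two outputs; this is an illustration, not the authors' implementation.

```python
# Sketch of the RankNet pairwise cost for one training pair <i, j>, where page i
# should rank above page j. With target probability P = 1, the cross-entropy cost
# reduces to log(1 + exp(o_j - o_i)), so it grows with o_j - o_i.
import numpy as np

def ranknet_pair_cost(o_i, o_j):
    return np.logaddexp(0.0, o_j - o_i)          # stable log(1 + exp(o_j - o_i))

def ranknet_pair_grad(o_i, o_j):
    """Gradient of the cost w.r.t. (o_i, o_j); back-propagated through the net."""
    s = np.exp(-np.logaddexp(0.0, o_i - o_j))    # sigmoid(o_j - o_i)
    return -s, s                                 # d cost / d o_i, d cost / d o_j
```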
FEATURES
To apply RankNet (or other machine learning techniques) to the
ranking problem, we needed to extract a set of features from each
page. We divided our feature set into four, mutually exclusive,
categories: page-level (Page), domain-level (Domain), anchor text
and inlinks (Anchor), and popularity (Popularity). We also
optionally used the PageRank of a page as a feature. Below, we
describe each of these feature categories in more detail.
PageRank
We computed PageRank on a Web graph of 5 billion crawled
pages (and 20 billion known URLs linked to by these pages).
This represents a significant portion of the Web, and is
approximately the same number of pages as are used by
Google, Yahoo, and MSN for their search engines.
Because PageRank is a graph-based algorithm, it is important
that it be run on as large a subset of the Web as possible. Most
previous studies on PageRank used subsets of the Web that are
significantly smaller (e.g. the TREC VLC2 corpus, used by
many, contains only 19 million pages).
We computed PageRank using the standard damping value of 0.85.
Popularity
Another feature we used is the actual popularity of a Web page,
measured as the number of times that it has been visited by
users over some period of time. We have access to such data
from users who have installed the MSN toolbar and have opted
to provide it to MSN. The data is aggregated into a count, for
each Web page, of the number of users who viewed that page.
Though popularity data is generally unavailable, there are two
other sources for it. The first is from proxy logs. For example, a
university that requires its students to use a proxy has a record
of all the pages they have visited while on campus.
Unfortunately, proxy data is quite biased and relatively small.
Another source, internal to search engines, are records of which
results their users clicked on. Such data was used by the search
engine "Direct Hit", and has recently been explored for
dynamic ranking purposes [20]. An advantage of the toolbar
data over this is that it contains information about URL visits
that are not just the result of a search.
The raw popularity is processed into a number of features such
as the number of times a page was viewed and the number of
times any page in the domain was viewed. More details are
provided in section 5.5.
Anchor text and inlinks
These features are based on the information associated with
links to the page in question. It includes features such as the
total amount of text in links pointing to the page ("anchor
text"), the number of unique words in that text, etc.
Page
This category consists of features which may be determined by
looking at the page (and its URL) alone. We used only eight,
simple features such as the number of words in the body, the
frequency of the most common term, etc.
Domain
This category contains features that are computed as averages
across all pages in the domain. For example, the average
number of outlinks on any page and the average PageRank.
Many of these features have been used by others for ranking Web
pages, particularly the anchor and page features. As mentioned,
the evaluation is typically for dynamic ranking, and we wish to
evaluate the use of them for static ranking. Also, to our
knowledge, this is the first study on the use of actual page
visitation popularity for static ranking. The closest similar work is
on using click-through behavior (that is, which search engine
results the users click on) to affect dynamic ranking (see e.g.,
[20]).
Because we use a wide variety of features to come up with a static
ranking, we refer to this as fRank (for feature-based ranking).
fRank uses RankNet and the set of features described in this
section to learn a ranking function for Web pages. Unless
otherwise specified, fRank was trained with all of the features.
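The paper lists only examples of its eight page-level features. As a rough, hypothetical illustration (the feature names and the URL-length feature are assumptions, not taken from the paper), two of the mentioned features could be computed as follows.

```python
# Illustrative sketch (not the paper's exact feature set): a couple of the
# page-level features mentioned in the text, computed from raw page text.
from collections import Counter

def page_features(url, body_text):
    words = body_text.lower().split()
    counts = Counter(words)
    most_common_freq = counts.most_common(1)[0][1] if words else 0
    return {
        "num_words_in_body": len(words),
        "freq_of_most_common_term": most_common_freq,
        "url_length": len(url),   # hypothetical extra page/URL feature
    }
```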
EXPERIMENTS
In this section, we will demonstrate that we can outperform
PageRank by applying machine learning to a straightforward set
of features. Before the results, we first discuss the data, the
performance metric, and the training method.
5.1 Data
In order to evaluate the quality of a static ranking, we needed a
"gold standard" defining the correct ordering for a set of pages.
For this, we employed a dataset which contains human judgments
for 28000 queries. For each query, a number of results are
manually assigned a rating, from 0 to 4, by human judges. The
rating is meant to be a measure of how relevant the result is for
the query, where 0 means "poor" and 4 means "excellent". There
are approximately 500k judgments in all, or an average of 18
ratings per query.
The queries are selected by randomly choosing queries from
among those issued to the MSN search engine. The probability
that a query is selected is proportional to its frequency among all
of the queries. As a result, common queries are more likely to be
judged than uncommon queries. As an example of how diverse
the queries are, the first four queries in the training set are "chef
schools", "chicagoland speedway", "eagles fan club", and
"Turkish culture". The documents selected for judging are those
that we expected would, on average, be reasonably relevant (for
example, the top ten documents returned by MSN's search
engine). This provides significantly more information than
randomly selecting documents on the Web, the vast majority of
which would be irrelevant to a given query.
Because of this process, the judged pages tend to be of higher
quality than the average page on the Web, and tend to be pages
that will be returned for common search queries. This bias is good
when evaluating the quality of static ranking for the purposes of
index ordering and returning relevant documents. This is because
the most important portion of the index to be well-ordered and
relevant is the portion that is frequently returned for search
queries. Because of this bias, however, the results in this paper are
not applicable to crawl prioritization. In order to obtain
experimental results on crawl prioritization, we would need
ratings on a random sample of Web pages.
To convert the data from query-dependent to query-independent,
we simply removed the query, taking the maximum over
judgments for a URL that appears in more than one query. The
reasoning behind this is that a page that is relevant for some query
and irrelevant for another is probably a decent page and should
have a high static rank. Because we evaluated the pages on
queries that occur frequently, our data indicates the correct index
ordering, and assigns high value to pages that are likely to be
relevant to a common query.
We randomly assigned queries to a training, validation, or test set,
such that they contained 84%, 8%, and 8% of the queries,
respectively. Each set contains all of the ratings for a given query,
and no query appears in more than one set. The training set was
used to train fRank. The validation set was used to select the
model that had the highest performance. The test set was used for
the final results.
This data gives us a query-independent ordering of pages. The
goal for a static ranking algorithm will be to reproduce this
ordering as closely as possible. In the next section, we describe
the measure we used to evaluate this.
5.2 Measure
We chose to use pairwise accuracy to evaluate the quality of a
static ranking. The pairwise accuracy is the fraction of time that
the ranking algorithm and human judges agree on the ordering of
a pair of Web pages.
If S(x) is the static ranking assigned to page x, and H(x) is the
human judgment of relevance for x, then consider the following
sets:
    H_p = {⟨x, y⟩ : H(x) > H(y)}    and    S_p = {⟨x, y⟩ : S(x) > S(y)}

The pairwise accuracy is the portion of H_p that is also contained
in S_p:

    pairwise accuracy = |H_p ∩ S_p| / |H_p|
This measure was chosen for two reasons. First, the discrete
human judgments provide only a partial ordering over Web pages,
making it difficult to apply a measure such as the Spearman rank
order correlation coefficient (in the pairwise accuracy measure, a
pair of documents with the same human judgment does not affect
the score). Second, the pairwise accuracy has an intuitive
meaning: it is the fraction of pairs of documents that, when the
humans claim one is better than the other, the static rank
algorithm orders them correctly.
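A small sketch of the measure just defined; the dict-based interface is an assumption for illustration.

```python
# Sketch of pairwise accuracy: the fraction of human-ordered pairs (H(x) > H(y))
# on which the static ranking agrees (S(x) > S(y)). Pairs tied in H are ignored.
from itertools import combinations

def pairwise_accuracy(H, S):
    """H, S: dicts mapping page -> human judgment / static score (same keys assumed)."""
    agree = total = 0
    for x, y in combinations(H, 2):
        if H[x] == H[y]:
            continue                       # pair not in H_p
        hi, lo = (x, y) if H[x] > H[y] else (y, x)
        total += 1
        agree += S[hi] > S[lo]
    return agree / total if total else 0.0
```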
5.3 Method
We trained fRank (a RankNet based neural network) using the
following parameters. We used a fully connected 2 layer network.
The hidden layer had 10 hidden nodes. The input weights to this
layer were all initialized to be zero. The output "layer" (just a
single node) weights were initialized using a uniform random
distribution in the range [-0.1, 0.1]. We used tanh as the transfer
function from the inputs to the hidden layer, and a linear function
from the hidden layer to the output. The cost function is the
pairwise cross entropy cost function as discussed in section 3.
The features in the training set were normalized to have zero mean
and unit standard deviation. The same linear transformation was
then applied to the features in the validation and test sets.
For training, we presented the network with 5 million pairings of
pages, where one page had a higher rating than the other. The
pairings were chosen uniformly at random (with replacement)
from all possible pairings. When forming the pairs, we ignored the
magnitude of the difference between the ratings (the rating spread)
for the two URLs. Hence, the weight for each pair was constant
(one), and the probability of a pair being selected was
independent of its rating spread.
We trained the network for 30 epochs. On each epoch, the
training pairs were randomly shuffled. The initial training rate was
0.001. At each epoch, we checked the error on the training set. If
the error had increased, then we decreased the training rate, under
the hypothesis that the network had probably overshot. The
training rate at each epoch was thus set to:
    Training rate = r_0 / (1 + k)

where r_0 is the initial rate (0.001) and k is the number of times
the training set error has increased. After each epoch, we
measured the performance of the neural network on the validation
set, using 1 million pairs (chosen randomly with replacement).
The network with the highest pairwise accuracy on the validation
set was selected, and then tested on the test set. We report the
pairwise accuracy on the test set, calculated using all possible
pairs.
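The training-rate rule above is easy to state in code. The sketch below (symbols r_0 and k are the ones introduced in the reconstructed formula, and the list-based interface is an assumption) returns the rate for the next epoch given the training-set errors observed so far.

```python
# Sketch of the training-rate schedule described above: the rate starts at 0.001
# and is divided by (1 + k), where k counts epochs on which training error rose.
def schedule(epoch_errors, initial_rate=0.001):
    """Given training-set errors observed so far, return the rate for the next epoch."""
    k = sum(1 for prev, cur in zip(epoch_errors, epoch_errors[1:]) if cur > prev)
    return initial_rate / (1 + k)

# schedule([0.30, 0.25, 0.27]) -> 0.0005 (the error increased once)
```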
These parameters were determined and fixed before the static rank
experiments in this paper. In particular, the choice of initial
training rate, number of epochs, and training rate decay function
were taken directly from Burges et al [7].
Though we had the option of preprocessing any of the features
before they were input to the neural network, we refrained from
doing so on most of them. The only exception was the popularity
features. As with most Web phenomenon, we found that the
distribution of site popularity is Zipfian. To reduce the dynamic
range, and hopefully make the feature more useful, we presented
the network with both the unpreprocessed, as well as the
logarithm, of the popularity features (As with the others, the
logarithmic feature values were also normalized to have zero
mean and unit standard deviation).
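As a minimal sketch of this preprocessing (not the authors' pipeline), the popularity count can be presented both raw and log-transformed, each z-scored with training-set statistics; log1p is used here to keep zero counts finite, a detail the paper does not specify.

```python
# Sketch: present both the raw popularity count and its logarithm, each normalized
# to zero mean / unit standard deviation using statistics from the training set.
import numpy as np

def popularity_inputs(train_counts, counts):
    train_counts = np.asarray(train_counts, dtype=float)
    counts = np.asarray(counts, dtype=float)
    feats = []
    for col_train, col in ((train_counts, counts),
                           (np.log1p(train_counts), np.log1p(counts))):
        mu, sd = col_train.mean(), col_train.std()
        feats.append((col - mu) / (sd if sd > 0 else 1.0))
    return np.stack(feats, axis=1)   # columns: [normalized raw, normalized log]
```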
Applying fRank to a document is computationally efficient, taking
time that is only linear in the number of input features; it is thus
within a constant factor of other simple machine learning methods
such as naive Bayes. In our experiments, computing the fRank for
all five billion Web pages was approximately 100 times faster
than computing the PageRank for the same set.
5.4 Results
As Table 1 shows, fRank significantly outperforms PageRank for
the purposes of static ranking. With a pairwise accuracy of 67.4%,
fRank more than doubles the accuracy of PageRank (relative to
the baseline of 50%, which is the accuracy that would be achieved
by a random ordering of Web pages). Note that one of fRank's
input features is the PageRank of the page, so we would expect it
to perform no worse than PageRank. The significant increase in
accuracy implies that the other features (anchor, popularity, etc.)
do in fact contain useful information regarding the overall quality
of a page.
Table 1: Basic Results
Technique          Accuracy (%)
None (Baseline)    50.00
PageRank           56.70
fRank              67.43
There are a number of decisions that go into the computation of
PageRank, such as how to deal with pages that have no outlinks,
the choice of ε, numeric precision, convergence threshold, etc.
We were able to obtain a computation of PageRank from a
completely independent implementation (provided by Marc
Najork) that varied somewhat in these parameters. It achieved a
pairwise accuracy of 56.52%, nearly identical to that obtained by
our implementation. We thus concluded that the quality of the
PageRank is not sensitive to these minor variations in algorithm,
nor was PageRank's low accuracy due to problems with our
implementation of it.
We also wanted to find how well each feature set performed. To
answer this, for each feature set, we trained and tested fRank
using only that set of features. The results are shown in Table 2.
As can be seen, every single feature set individually outperformed
PageRank on this test. Perhaps the most interesting result is that
the Page-level features had the highest performance out of all the
feature sets. This is surprising because these are features that do
not depend on the overall graph structure of the Web, nor even on
what pages point to a given page. This is contrary to the common
belief that the Web graph structure is the key to finding a good
static ranking of Web pages.
Table 2: Results for individual feature sets.
Feature Set     Accuracy (%)
PageRank        56.70
Popularity      60.82
Anchor          59.09
Page            63.93
Domain          59.03
All Features    67.43
Because we are using a two-layer neural network, the features in
the learned network can interact with each other in interesting,
nonlinear ways. This means that a particular feature that appears
to have little value in isolation could actually be very important
when used in combination with other features. To measure the
final contribution of a feature set, in the context of all the other
features, we performed an ablation study. That is, for each set of
features, we trained a network to contain all of the features except
that set. We then compared the performance of the resulting
network to the performance of the network with all of the features.
Table 3 shows the results of this experiment, where the "decrease
in accuracy" is the difference in pairwise accuracy between the
network trained with all of the features, and the network missing
the given feature set.
Table 3: Ablation study. Shown is the decrease in accuracy
when we train a network that has all but the given set of
features. The last line shows the effect of removing the
anchor, PageRank, and domain features, hence a model
containing no network or link-based information whatsoever.
Feature Set                   Decrease in Accuracy
PageRank                      0.18
Popularity                    0.78
Anchor                        0.47
Page                          5.42
Domain                        0.10
Anchor, PageRank & Domain     0.60
The results of the ablation study are consistent with the individual
feature set study. Both show that the most important feature set is
the Page-level feature set, and the second most important is the
popularity feature set.
Finally, we wished to see how the performance of fRank
improved as we added features; we wanted to find at what point
adding more feature sets became relatively useless. Beginning
with no features, we greedily added the feature set that improved
performance the most. The results are shown in Table 4. For
example, the fourth line of the table shows that fRank using the
page, popularity, and anchor features outperformed any network
that used the page, popularity, and some other feature set, and that
the performance of this network was 67.25%.
Table 4: fRank performance as feature sets are added. At each
row, the feature set that gave the greatest increase in accuracy
was added to the list of features (i.e., we conducted a greedy
search over feature sets).
Feature Set     Accuracy (%)
None            50.00
+Page           63.93
+Popularity     66.83
+Anchor         67.25
+PageRank       67.31
+Domain         67.43
Finally, we present a qualitative comparison of PageRank vs.
fRank. In Table 5 are the top ten URLs returned for PageRank and
for fRank. PageRank's results are heavily weighted towards
technology sites. It contains two QuickTime URLs (Apple's video
playback software), as well as Internet Explorer and FireFox
URLs (both of which are Web browsers). fRank, on the other
hand, contains more consumer-oriented sites such as American
Express, Target, Dell, etc. PageRank's bias toward technology can
be explained through two processes. First, there are many pages
with "buttons" at the bottom suggesting that the site is optimized
for Internet Explorer, or that the visitor needs QuickTime. These
generally link back to, in these examples, the Internet Explorer
and QuickTime download sites. Consequently, PageRank ranks
those pages highly. Though these pages are important, they are
not as important as it may seem by looking at the link structure
alone. One fix for this is to add information about the link to the
PageRank computation, such as the size of the text, whether it was
at the bottom of the page, etc.
The other bias comes from the fact that the population of Web site
authors is different than the population of Web users. Web
authors tend to be technologically-oriented, and thus their linking
behavior reflects those interests. fRank, by knowing the actual
visitation popularity of a site (the popularity feature set), is able to
eliminate some of that bias. It has the ability to depend more on
where actual Web users visit rather than where the Web site
authors have linked.
The results confirm that fRank outperforms PageRank in pairwise
accuracy. The two most important feature sets are the page and
popularity features. This is surprising, as the page features
consisted only of a few (8) simple features. Further experiments
found that, of the page features, those based on the text of the
page (as opposed to the URL) performed the best. In the next
section, we explore the popularity feature in more detail.
5.5 Popularity Data
As mentioned in section 4, our popularity data came from MSN
toolbar users. For privacy reasons, we had access only to an
aggregate count of, for each URL, how many times it was visited
by any toolbar user. This limited the possible features we could
derive from this data. For possible extensions, see section 6.3,
future work.
For each URL in our train and test sets, we provided a feature to
fRank which was how many times it had been visited by a toolbar
user. However, this feature was quite noisy and sparse,
particularly for URLs with query parameters (e.g., http://search
.msn.com/results.aspx?q=machine+learning&form=QBHP). One
solution was to provide an additional feature which was the
number of times any URL at the given domain was visited by a
toolbar user. Adding this feature dramatically improved the
performance of fRank.
We took this one step further and used the built-in hierarchical
structure of URLs to construct many levels of backoff between the
full URL and the domain. We did this by using the set of features
shown in Table 6.
Table 6: URL functions used to compute the Popularity
feature set.
Function     Example
Exact URL    cnn.com/2005/tech/wikipedia.html?v=mobile
No Params    cnn.com/2005/tech/wikipedia.html
Page         wikipedia.html
URL-1        cnn.com/2005/tech
URL-2        cnn.com/2005
...
Domain       cnn.com
Domain+1     cnn.com/2005
...
Each URL was assigned one feature for each function shown in
the table. The value of the feature was the count of the number of
times a toolbar user visited a URL, where the function applied to
that URL matches the function applied to the URL in question.
For example, a user's visit to cnn.com/2005/sports.html would
increment the Domain and Domain+1 features for the URL
cnn.com/2005/tech/wikipedia.html.
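A hedged sketch of the backoff functions of Table 6, using urllib.parse; the key names and the handling of schemeless URLs are assumptions, and the exact function set in the production system may differ.

```python
# Sketch of the URL backoff functions of Table 6: each URL contributes visit counts
# to progressively coarser keys (exact URL, URL without parameters, path prefixes,
# the page name, and the domain).
from urllib.parse import urlparse

def backoff_keys(url):
    parsed = urlparse(url if "://" in url else "http://" + url)
    host = parsed.netloc
    parts = [p for p in parsed.path.split("/") if p]
    keys = {"exact_url": url,
            "no_params": host + parsed.path,
            "page": parts[-1] if parts else "",
            "domain": host}
    for depth in range(1, len(parts)):            # URL-1, URL-2, ... and Domain+1, ...
        keys[f"url_minus_{depth}"] = host + "/" + "/".join(parts[:-depth])
        keys[f"domain_plus_{depth}"] = host + "/" + "/".join(parts[:depth])
    return keys

# backoff_keys("cnn.com/2005/tech/wikipedia.html?v=mobile")
```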
As seen in Table 7, adding the domain counts significantly
improved the quality of the popularity feature, and adding the
numerous backoff functions listed in Table 6 improved the
accuracy even further.
Table 7: Effect of adding backoff to the popularity feature set
Features                          Accuracy (%)
URL count                         58.15
URL and Domain counts             59.31
All backoff functions (Table 6)   60.82
Table 5: Top ten URLs for PageRank vs. fRank

PageRank                        fRank
google.com                      google.com
apple.com/quicktime/download    yahoo.com
amazon.com                      americanexpress.com
yahoo.com                       hp.com
microsoft.com/windows/ie        target.com
apple.com/quicktime             bestbuy.com
mapquest.com                    dell.com
ebay.com                        autotrader.com
mozilla.org/products/firefox    dogpile.com
ftc.gov                         bankofamerica.com
Backing off to subsets of the URL is one technique for dealing
with the sparsity of data. It is also informative to see how the
performance of fRank depends on the amount of popularity data
that we have collected. In Figure 1 we show the performance of
fRank trained with only the popularity feature set vs. the amount
of data we have for the popularity feature set. Each day, we
receive additional popularity data, and as can be seen in the plot,
this increases the performance of fRank. The relation is
logarithmic: doubling the amount of popularity data provides a
constant improvement in pairwise accuracy.
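The reported logarithmic trend can be recovered with an ordinary least-squares fit on ln(x); the sketch below uses made-up placeholder points, not the paper's data.

```python
# Sketch: fit the logarithmic trend of Figure 1 (pairwise accuracy vs. days of
# toolbar data), i.e. y = a*ln(x) + b, by least squares on ln(x).
import numpy as np

def fit_log_trend(days, accuracy):
    a, b = np.polyfit(np.log(days), accuracy, deg=1)
    return a, b   # Figure 1 reports roughly a = 0.577, b = 58.283

# Example with made-up points: fit_log_trend([1, 10, 100], [58.3, 59.6, 60.9])
```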
In summary, we have found that the popularity features provide a
useful boost to the overall fRank accuracy. Gathering more
popularity data, as well as employing simple backoff strategies,
improve this boost even further.
5.6 Summary of Results
The experiments provide a number of conclusions. First, fRank
performs significantly better than PageRank, even without any
information about the Web graph. Second, the page level and
popularity features were the most significant contributors to
pairwise accuracy. Third, by collecting more popularity data, we
can continue to improve fRank's performance.
The popularity data provides two benefits to fRank. First, we see
that qualitatively, fRank's ordering of Web pages has a more
favorable bias than PageRank's. fRank's ordering seems to
correspond to what Web users, rather than Web page authors,
prefer. Second, the popularity data is more timely than
PageRank's link information. The toolbar provides information
about which Web pages people find interesting right now,
whereas links are added to pages more slowly, as authors find the
time and interest.
RELATED AND FUTURE WORK
Since the original PageRank paper, there has been work on
improving it. Much of that work centers on speeding up and
parallelizing the computation [15][25].
One recognized problem with PageRank is that of topic drift: A
page about "dogs" will have high PageRank if it is linked to by
many pages that themselves have high rank, regardless of their
topic. In contrast, a search engine user looking for good pages
about dogs would likely prefer to find pages that are pointed to by
many pages that are themselves about dogs. Hence, a link that is
"on topic" should have higher weight than a link that is not.
Richardson and Domingos's Query Dependent PageRank [29]
and Haveliwala's Topic-Sensitive PageRank [16] are two
approaches that tackle this problem.
Other variations to PageRank include differently weighting links
for inter- vs. intra-domain links, adding a backwards step to the
random surfer to simulate the "back" button on most browsers
[24] and modifying the jump probability (ε) [3]. See Langville
and Meyer [23] for a good survey of these, and other
modifications to PageRank.
6.2 Other related work
PageRank is not the only link analysis algorithm used for ranking
Web pages. The most well-known other is HITS [22], which is
used by the Teoma search engine [30]. HITS produces a list of
hubs and authorities, where hubs are pages that point to many
authority pages, and authorities are pages that are pointed to by
many hubs. Previous work has shown HITS to perform
comparably to PageRank [1].
One field of interest is that of static index pruning (see e.g.,
Carmel et al. [8]). Static index pruning methods reduce the size of
the search engine's index by removing documents that are
unlikely to be returned by a search query. The pruning is typically
done based on the frequency of query terms. Similarly, Pandey
and Olston [28] suggest crawling pages frequently if they are
likely to incorrectly appear (or not appear) as a result of a search.
Similar methods could be incorporated into the static rank (e.g.,
how many frequent queries contain words found on this page).
Others have investigated the effect that PageRank has on the Web
at large [9]. They argue that pages with high PageRank are more
likely to be found by Web users, thus more likely to be linked to,
and thus more likely to maintain a higher PageRank than other
pages. The same may occur for the popularity data. If we increase
the ranking for popular pages, they are more likely to be clicked
on, thus further increasing their popularity. Cho et al. [10] argue
that a more appropriate measure of Web page quality would
depend on not only the current link structure of the Web, but also
on the change in that link structure. The same technique may be
applicable to popularity data: the change in popularity of a page
may be more informative than the absolute popularity.
One interesting related work is that of Ivory and Hearst [19].
Their goal was to build a model of Web sites that are considered
high quality from the perspective of "content, structure and
navigation, visual design, functionality, interactivity, and overall
experience". They used over 100 page level features, as well as
features encompassing the performance and structure of the site.
This let them qualitatively describe the qualities of a page that
make it appear attractive (e.g., rare use of italics, at least 9 point
font, ...), and (in later work) to build a system that assists novel
Web page authors in creating quality pages by evaluating it
according to these features. The primary differences between this
work and ours are the goal (discovering what constitutes a good
Web page vs. ordering Web pages for the purposes of Web
search), the size of the study (they used a dataset of less than 6000
pages vs. our set of 468,000), and our comparison with PageRank.
[Figure 1 plot: pairwise accuracy (y-axis, approximately 58 to 61) versus days of toolbar data (x-axis, 1 to 100, log scale); fitted trend y = 0.577 ln(x) + 58.283, R^2 = 0.9822.]
Figure 1: Relation between the amount of popularity data and
the performance of the popularity feature set. Note the x-axis
is a logarithmic scale.
Nevertheless, their work provides insights to additional useful
static features that we could incorporate into fRank in the future.
Recent work on incorporating novel features into dynamic ranking
includes that by Joachims et al. [21], who investigate the use of
implicit feedback from users, in the form of which search engine
results are clicked on. Craswell et al. [11] present a method for
determining the best transformation to apply to query independent
features (such as those used in this paper) for the purposes of
improving dynamic ranking. Other work, such as Boyan et al. [4]
and Bartell et al. [2] apply machine learning for the purposes of
improving the overall relevance of a search engine (i.e., the
dynamic ranking). They do not apply their techniques to the
problem of static ranking.
6.3 Future work
There are many ways in which we would like to extend this work.
First, fRank uses only a small number of features. We believe we
could achieve even more significant results with more features. In
particular the existence, or lack thereof, of certain words could
prove very significant (for instance, "under construction"
probably signifies a low quality page). Other features could
include the number of images on a page, size of those images,
number of layout elements (tables, divs, and spans), use of style
sheets, conforming to W3C standards (like XHTML 1.0 Strict),
background color of a page, etc.
Many pages are generated dynamically, the contents of which may
depend on parameters in the URL, the time of day, the user
visiting the site, or other variables. For such pages, it may be
useful to apply the techniques found in [26] to form a static
approximation for the purposes of extracting features. The
resulting grammar describing the page could itself be a source of
additional features describing the complexity of the page, such as
how many non-terminal nodes it has, the depth of the grammar
tree, etc.
fRank allows one to specify a confidence in each pairing of
documents. In the future, we will experiment with probabilities
that depend on the difference in human judgments between the
two items in the pair. For example, a pair of documents where one
was rated 4 and the other 0 should have a higher confidence than
a pair of documents rated 3 and 2.
The experiments in this paper are biased toward pages that have
higher than average quality. Also, fRank with all of the features
can only be applied to pages that have already been crawled.
Thus, fRank is primarily useful for index ordering and improving
relevance, not for directing the crawl. We would like to
investigate a machine learning approach for crawl prioritization as
well. It may be that a combination of methods is best: for
example, using PageRank to select the best 5 billion of the 20
billion pages on the Web, then using fRank to order the index and
affect search relevancy.
Another interesting direction for exploration is to incorporate
fRank and page-level features directly into the PageRank
computation itself. Work on biasing the PageRank jump vector
[16], and transition matrix [29], have demonstrated the feasibility
and advantages of such an approach. There is reason to believe
that a direct application of [29], using the fRank of a page for its
"relevance", could lead to an improved overall static rank.
Finally, the popularity data can be used in other interesting ways.
The general surfing and searching habits of Web users varies by
time of day. Activity in the morning, daytime, and evening are
often quite different (e.g., reading the news, solving problems,
and accessing entertainment, respectively). We can gain insight
into these differences by using the popularity data, divided into
segments of the day. When a query is issued, we would then use
the popularity data matching the time of query in order to do the
ranking of Web pages. We also plan to explore popularity features
that use more than just the counts of how often a page was visited.
For example, how long users tended to dwell on a page, did they
leave the page by clicking a link or by hitting the back button, etc.
Fox et al. did a study that showed that features such as this can be
valuable for the purposes of dynamic ranking [14]. Finally, the
popularity data could be used as the label rather than as a feature.
Using fRank in this way to predict the popularity of a page may
useful for the tasks of relevance, efficiency, and crawl priority.
There is also significantly more popularity data than human
labeled data, potentially enabling more complex machine learning
methods, and significantly more features.
CONCLUSIONS
A good static ranking is an important component for today's
search engines and information retrieval systems. We have
demonstrated that PageRank does not provide a very good static
ranking; there are many simple features that individually outperform
PageRank. By combining many static features, fRank
achieves a ranking that has a significantly higher pairwise
accuracy than PageRank alone. A qualitative evaluation of the top
documents shows that fRank is less technology-biased than
PageRank; by using popularity data, it is biased toward pages that
Web users, rather than Web authors, visit. The machine learning
component of fRank gives it the additional benefit of being more
robust against spammers, and allows it to leverage further
developments in the machine learning community in areas such as
adversarial classification. We have only begun to explore the
options, and believe that significant strides can be made in the
area of static ranking by further experimentation with additional
features, other machine learning techniques, and additional
sources of data.
ACKNOWLEDGMENTS
Thank you to Marc Najork for providing us with additional
PageRank computations and to Timo Burkard for assistance with
the popularity data. Many thanks to Chris Burges for providing
code and significant support in using and training RankNets. Also, we
thank Susan Dumais and Nick Craswell for their edits and
suggestions.
REFERENCES
[1]
B. Amento, L. Terveen, and W. Hill. Does "authority" mean
quality? Predicting expert quality ratings of Web documents.
In Proceedings of the 23rd Annual International ACM SIGIR
Conference on Research and Development in Information
Retrieval, 2000.
[2]
B. Bartell, G. Cottrell, and R. Belew. Automatic combination
of multiple ranked retrieval systems. In Proceedings of the
17th Annual International ACM SIGIR Conference on
Research and Development in Information Retrieval, 1994.
[3]
P. Boldi, M. Santini, and S. Vigna. PageRank as a function
of the damping factor. In Proceedings of the International
World Wide Web Conference, May 2005.
[4]
J. Boyan, D. Freitag, and T. Joachims. A machine learning
architecture for optimizing web search engines. In AAAI
Workshop on Internet Based Information Systems, August
1996.
[5]
S. Brin and L. Page. The anatomy of a large-scale
hypertextual web search engine. In Proceedings of the
Seventh International Wide Web Conference, Brisbane,
Australia, 1998. Elsevier.
[6]
A. Broder, R. Lempel, F. Maghoul, and J. Pederson.
Efficient PageRank approximation via graph aggregation. In
Proceedings of the International World Wide Web
Conference, May 2004.
[7]
C. Burges, T. Shaked, E. Renshaw, A. Lazier, M. Deeds, N.
Hamilton, G. Hullender. Learning to rank using gradient
descent. In Proceedings of the 22nd International Conference
on Machine Learning, Bonn, Germany, 2005.
[8]
D. Carmel, D. Cohen, R. Fagin, E. Farchi, M. Herscovici, Y.
S. Maarek, and A. Soffer. Static index pruning for
information retrieval systems. In Proceedings of the 24th
Annual International ACM SIGIR Conference on Research
and Development in Information Retrieval, pages 43-50,
New Orleans, Louisiana, USA, September 2001.
[9]
J. Cho and S. Roy. Impact of search engines on page
popularity. In Proceedings of the International World Wide
Web Conference, May 2004.
[10]
J. Cho, S. Roy, R. Adams. Page Quality: In search of an
unbiased web ranking. In Proceedings of the ACM SIGMOD
2005 Conference. Baltimore, Maryland. June 2005.
[11]
N. Craswell, S. Robertson, H. Zaragoza, and M. Taylor.
Relevance weighting for query independent evidence. In
Proceedings of the 28th Annual Conference on Research and
Development in Information Retrieval (SIGIR), August,
2005.
[12]
N. Dalvi, P. Domingos, Mausam, S. Sanghai, D. Verma.
Adversarial Classification. In Proceedings of the Tenth
International Conference on Knowledge Discovery and Data
Mining (pp. 99-108), Seattle, WA, 2004.
[13]
O. Dekel, C. Manning, and Y. Singer. Log-linear models for
label-ranking. In Advances in Neural Information Processing
Systems 16. Cambridge, MA: MIT Press, 2003.
[14]
S. Fox, K. Karnawat, M. Mydland, S. T. Dumais
and T. White (2005). Evaluating implicit measures to
improve the search experiences. In the ACM Transactions on
Information Systems, 23(2), pp. 147-168. April 2005.
[15]
T. Haveliwala. Efficient computation of PageRank. Stanford
University Technical Report, 1999.
[16]
T. Haveliwala. Topic-sensitive PageRank. In Proceedings of
the International World Wide Web Conference, May 2002.
[17]
D. Hawking and N. Craswell. Very large scale retrieval and
Web search. In D. Harman and E. Voorhees (eds), The
TREC Book. MIT Press.
[18]
R. Herbrich, T. Graepel, and K. Obermayer. Support vector
learning for ordinal regression. In Proceedings of the Ninth
International Conference on Artificial Neural Networks, pp.
97-102. 1999.
[19]
M. Ivory and M. Hearst. Statistical profiles of highly-rated
Web sites. In Proceedings of the ACM SIGCHI Conference
on Human Factors in Computing Systems, 2002.
[20]
T. Joachims. Optimizing search engines using clickthrough
data. In Proceedings of the ACM Conference on Knowledge
Discovery and Data Mining (KDD), 2002.
[21]
T. Joachims, L. Granka, B. Pang, H. Hembrooke, and G.
Gay. Accurately Interpreting Clickthrough Data as Implicit
Feedback. In Proceedings of the Conference on Research and
Development in Information Retrieval (SIGIR), 2005.
[22]
J. Kleinberg. Authoritative sources in a hyperlinked
environment. Journal of the ACM 46:5, pp. 604-32. 1999.
[23]
A. Langville and C. Meyer. Deeper inside PageRank.
Internet Mathematics 1(3):335-380, 2004.
[24]
F. Matthieu and M. Bouklit. The effect of the back button in
a random walk: application for PageRank. In Alternate track
papers and posters of the Thirteenth International World
Wide Web Conference, 2004.
[25]
F. McSherry. A uniform approach to accelerated PageRank
computation. In Proceedings of the International World
Wide Web Conference, May 2005.
[26]
Y. Minamide. Static approximation of dynamically generated
Web pages. In Proceedings of the International World Wide
Web Conference, May 2005.
[27]
L. Page, S. Brin, R. Motwani, and T. Winograd. The
PageRank citation ranking: Bringing order to the web.
Technical report, Stanford University, Stanford, CA, 1998.
[28]
S. Pandey and C. Olston. User-centric Web crawling. In
Proceedings of the International World Wide Web
Conference, May 2005.
[29]
M. Richardson and P. Domingos. The intelligent surfer:
probabilistic combination of link and content information in
PageRank. In Advances in Neural Information Processing
Systems 14, pp. 1441-1448. Cambridge, MA: MIT Press,
2002.
[30]
C. Sherman. Teoma vs. Google, Round 2. Available from
World Wide Web (http://dc.internet.com/news/article.php/
1002061), 2002.
[31]
T. Upstill, N. Craswell, and D. Hawking. Predicting fame
and fortune: PageRank or indegree?. In the Eighth
Australasian Document Computing Symposium. 2003.
[32]
T. Upstill, N. Craswell, and D. Hawking. Query-independent
evidence in home page finding. In ACM Transactions on
Information Systems. 2003.
| anchor text;relevance;Web pages;pairwise accuracy;fRank;popularity data;dynamic ranking;search engines;PageRank;static ranking;Static ranking;static features;RankNet |
44 | Black-Box Constructions for Secure Computation | It is well known that the secure computation of non-trivial functionalities in the setting of no honest majority requires computational assumptions. We study the way such computational assumptions are used. Specifically, we ask whether the secure protocol can use the underlying primitive (e.g., one-way trapdoor permutation) in a black-box way, or must it be nonblack-box (by referring to the code that computes this primitive)? Despite the fact that many general constructions of cryptographic schemes (e.g., CPA-secure encryption ) refer to the underlying primitive in a black-box way only, there are some constructions that are inherently nonblack-box. Indeed, all known constructions of protocols for general secure computation that are secure in the presence of a malicious adversary and without an honest majority use the underlying primitive in a nonblack-box way (requiring to prove in zero-knowledge statements that relate to the primitive). In this paper, we study whether such nonblack-box use is essential. We present protocols that use only black-box access to a family of (enhanced) trapdoor permutations or to a homomorphic public-key encryption scheme. The result is a protocol whose communication complexity is independent of the computational complexity of the underlying primitive (e.g., a trapdoor permutation) and whose computational complexity grows only linearly with that of the underlying primitive. This is the first protocol to exhibit these properties. | INTRODUCTION
It is a known fact that most cryptographic tasks require
the use of computational hardness assumptions. These assumptions
typically come in two types: specific assumptions
like the hardness of factoring, RSA, discrete log and others,
and general assumptions like the existence of one-way functions
, trapdoor permutations and others. In this paper, we
refer to general assumptions and how they are used. Specifically
, we consider an intriguing question regarding how secure
protocols utilize a primitive that is assumed to carry
some hardness property. Here again, there is a clear distinction
between two types of uses:
1. Black-box usage: a protocol (or construction) uses
a primitive in a black-box way if it refers only to the
input/output behavior of the primitive.^1 For example,
if the primitive is a trapdoor permutation, then the
protocol may sample a permutation and its domain,
and may compute the permutation and its inverse (if
the trapdoor is given). Beyond this, no reference is
made to the primitive. In particular, the code used to
compute the permutation (or carry out any other task)
is not referred to by the protocol. The vast majority
of constructions in cryptography are black-box.
2. Nonblack-box usage: a protocol (or construction)
uses a primitive in a nonblack-box way if it refers to
the code for computing its functionality. A typical example
of a nonblack-box construction is where a Karp
reduction is applied to the circuit computing the function
, say, in order to prove an NP statement in zero-knowledge, as in [14].
A rich and fruitful body of work, initiated by [16], attempts
to draw the borders between possibility and impossibility for
black-box constructions in cryptography. While many of the
relations between primitives are well understood, there are
still some important tasks for which the only constructions
that we have rely on nonblack-box access to the assumed
primitive, yet the existence of a black-box construction is
^1 It is typically also required that the security proof of the construction
is black-box in the sense that an adversary breaking the protocol
can be used as an oracle in order to break the underlying primitive.
See, e.g., [11, 12, 29] for a comprehensive treatment of black-box reductions
in cryptography.
not ruled out. In particular, all known general constructions
of multiparty protocols that are secure in the presence
of malicious adversaries and without an honest majority
, originating from [15], use nonblack-box access to the
assumed primitive.^2 (We note that by "general constructions"
, we mean constructions that can be used to securely
compute any functionality.)
Another notable example of
this phenomenon is the case of public-key encryption that
is secure against chosen-ciphertext attacks [7, 30, 23]; here
too, all known constructions are nonblack-box. The above
phenomenon begs the following question:
Is it possible to construct general protocols for
secure computation without an honest majority
and with malicious adversaries, given only black-box
access to a "low-level" primitive?
Answering the above question is of interest for the following
reasons. First, it is of theoretical interest to understand
whether or not nonblack-box access to a primitive is necessary
for these tasks. An answer to this question would
enhance our understanding of how hardness assumptions
can (or must) be used. Second, as we have mentioned, the
nonblack-box use of the underlying primitive is typically utilized
in order to apply a Karp reduction for the purpose
of using a (general) zero-knowledge proof. Such reductions
are highly inefficient and are unlikely to be very useful in
practice. Furthermore, in these protocols the communication
complexity depends on the complexity of computing
the primitive and the computational complexity grows more
than linearly with that of the primitive. (An exception to
this rule is the communication-efficient compiler presented
in [26], which relies on the communication-efficient arguments
of [20, 25]. However, the computational complexity
of the protocol of [26] is even worse than the GMW protocol
[15].)
To illustrate the type of inefficiency resulting from current
nonblack-box constructions, consider the following hypothetical
scenario. Suppose that, due to major advances
in cryptanalytic techniques, the security parameter must
be large enough so that all basic cryptographic primitives
require a full second of computation on a fast CPU. In
such a case, would it still be possible to carry out a distributed
task like oblivious transfer? Current nonblack-box
techniques (e.g., the GMW protocol [15]) require parties to
prove in zero-knowledge statements that involve the computation
of the underlying primitive, say a trapdoor permutation
. These zero-knowledge protocols, in turn, invoke
cryptographic primitives for any gate of a circuit computing
a trapdoor permutation. Since (by our assumption) a trapdoor
permutation takes one second to compute, its circuit
implementation contains trillions of gates, thereby requiring
the protocol trillions of seconds to run. In contrast, a
black-box construction of oblivious transfer from the trapdoor
permutation primitive would make the number of invocations
of the primitive independent of the complexity of
^2 We stress that the above discussion is only true when considering
general assumptions. Furthermore, it is only true when considering
"low-level primitives" like trapdoor permutations. Specifically, there
do exist constructions of secure multiparty protocols that use only
black-box access to an oblivious transfer primitive [18].
However,
since it is not known how to construct oblivious transfer using only
black-box access to, say trapdoor permutations, the overall construction
obtained does not use its "low-level" primitive in a black-box
way.
implementing the primitive, thus making oblivious transfer
feasible even in the hypothetical scenario described above.
We conclude that the current nonblack-box use of the underlying
primitives constitutes an obstacle to efficiency. It is
therefore of great interest to know whether or not it is possible
to obtain solutions to these tasks that do not suffer from
this obstacle. (We note that the inefficiency of nonblack-box
constructions here is quite ironic because in many areas of
cryptography, black-box constructions have been shown to
have inherent computational limitations [21, 10].) Despite
the above, we stress that the focus of this paper is not on
efficiency, but rather on the theoretical question of whether
or not it is possible to obtain the aforementioned black-box
constructions. We believe this question to be interesting in
its own right.
Our results.
We show how to construct general secure
multiparty computation (for the case of no honest majority
and malicious adversaries), given black-box access to either
homomorphic encryption schemes or enhanced trapdoor permutations
(see [13, Appendix C.1] for the definition of enhanced
trapdoor permutations). We note that all known
general constructions for this task from "low-level" primitives
rely on either enhanced trapdoor permutations or homomorphic
encryption schemes. However, they all use them
in an inherently nonblack-box way. This is the case even for
protocols that implement very simple functionalities, such
as oblivious transfer. We prove the following:
Theorem 1.1. There exist protocols for securely computing
any multiparty functionality without an honest majority
and in the presence of static malicious adversaries, that rely
only on black-box access to a family of enhanced trapdoor
permutations or to a homomorphic encryption scheme.
We remark that nonblack-box access is not typically used
when considering semi-honest adversaries [32, 15]. Rather,
the nonblack-box access is utilized in known protocols in order
to have the parties prove (in zero-knowledge) that they
are correctly following the protocol specification. This is
necessary for preventing a malicious adversary from effec-tively
deviating from the protocol instructions. We note also
that in the case of an honest majority, it is possible to securely
compute any functionality information-theoretically,
and without any hardness assumption [2, 5]. Thus, no primitive
at all is needed. For this reason, we focus on the case
of no honest majority (including the important two-party
case) and malicious adversaries.
Techniques.
In order to prove Theorem 1.1, we begin
by constructing oblivious transfer protocols that use only
black-box access to enhanced trapdoor permutations or homomorphic
encryption schemes, but provide rather weak security
guarantees. We then "boost" the security of these
protocols in order to obtain protocols that are secure in the
presence of malicious adversaries. Constructions until today
that have followed this paradigm work by first obtaining
protocols that are secure in the presence of semi-honest
adversaries, and then boosting them so that they are secure
in the presence of malicious adversaries. However, it is
not known how to carry out this "boosting" in a black-box
way (and, indeed, it has been conjectured that malicious
oblivious transfer cannot be constructed from semi-honest
oblivious transfer in a black-box way [24]). Since we wish to
make our construction black-box, we take a different route.
Protocol number | Security for corrupted sender | Security for corrupted receiver
3.1, 3.3        | Private for defensible sender | Private for defensible receiver
4.1             | Private for defensible sender | Secure for malicious receiver
5.1             | Secure for malicious sender   | Private for defensible receiver
In Theorem 6.1  | Secure for malicious sender   | Secure for malicious receiver

Table 1: The progression of our constructions: each protocol uses the previous one as a subprotocol.
Specifically, we begin by introducing the notion of a defensible
adversary. In order to describe this notion, we describe
what a defense is: a defense is an input and random-tape
that is provided by the adversary after the protocol execution
concludes. A defense is good if the honest party upon
that input and random-tape would have sent the same messages
as the adversary sent. Such a defense is a supposed
"proof" of honest behavior. However, the adversary need
not actually behave honestly and can construct its defense
retroactively (after the execution concludes). A protocol is
said to be private in the presence of defensible adversaries if
privacy is preserved in the event that an adversary provides
a good defense. However, in the case that the adversary
doesn't provide a good defense, nothing is guaranteed, and
the entire honest party's input may be learned. This notion
is therefore rather weak. We note that the oblivious transfer
protocol of [8] is not secure under this notion. However, it
can be efficiently modified into one that is secure under this
notion. It is also possible to efficiently construct such an
oblivious transfer protocol from homomorphic encryption.
Importantly, we show that it is possible to construct oblivious
transfer that is secure in the presence of malicious adversaries
from oblivious transfer that is private in the presence
of defensible adversaries. Furthermore, this construction is
black-box.
As we have mentioned, we start by constructing oblivious
transfer protocols that are private in the presence of
defensible adversaries. We present two such protocols: one
that uses black-box access to a family of enhanced trapdoor
permutations, and one that uses black-box access to a homomorphic
public-key encryption scheme. Next, we construct
from the above oblivious transfer protocol a new oblivious
transfer protocol that is still private in the presence of defensible
senders, but is secure in the presence of malicious
receivers (where security is "full security" according to the
ideal/real simulation paradigm). This is achieved using the
so-called cut-and-choose technique. That is, many oblivious
transfer executions (using random inputs) are run, and the
receiver is asked to present a defense for its behavior in half
of them. If it indeed presents a good defense, then we are
guaranteed that it behaved somewhat honestly in most of
the executions.
We stress that this step is novel, because the requirements
on a protocol that is secure according to the ideal/real simulation
paradigm are much stricter than when only privacy
is guaranteed. Indeed, some efficient protocols for oblivious
transfer from the literature [27, 1, 17] are private for both
(malicious) parties, but are not fully secure for either party.
Nevertheless, we are able to boost both the resilience of the
protocol (from a defensible to a malicious adversary) and
its security guarantee (from privacy to full simulation-based
security). Next, we "reverse" the oblivious transfer protocol
(i.e., by switching the sender and receiver roles) in order to
obtain a protocol with reversed security properties. Specifically
, this next protocol is secure in the presence of malicious
senders and private in the presence of defensible receivers.
At this point, we reapply our security boosting technique in
order to obtain a protocol that is "fully secure"; that is, a
protocol that is secure in the presence of malicious senders
and receivers. See Table 1 for the series of oblivious transfer
protocols that we construct. Needless to say, each protocol
uses its subprotocol in a black-box way.
Finally, having constructed secure oblivious transfer protocols
using only black-box access to primitives, it suffices to
apply the well-known result of Kilian [18, 19] that shows that
any functionality can be securely computed using black-box
access to a secure oblivious transfer protocol. This therefore
yields Theorem 1.1, as desired.
Related work. Recently, in [6], it was shown that it is possible
to construct constant-round protocols for the setting of
an honest majority, that use only black-box access to the assumed
primitive. As we have mentioned, in the setting of
an honest majority, it is possible to construct information-theoretically
secure protocols (which are, by triviality, black-box
). Nevertheless, there are no known (general) constant-round
protocols for the information-theoretic setting, and
so [6] relates to this issue. We remark that the techniques
used in [6] and here are vastly different, due to the inherent
differences between the setting of an honest majority and
that of no honest majority.
Organization.
Due to lack of space in this abstract, we
present only brief sketches of the definitions and proofs.
Complete details appear in the full version of the paper.
We often write OT as shorthand for oblivious transfer.
DEFINITIONS
We denote by ⟨P_1(1^n, x_1, ρ_1), P_2(1^n, x_2, ρ_2)⟩ the transcript of an execution between parties P_1 and P_2 with a security parameter n, where P_i has input x_i and random-tape ρ_i. For brevity, we will sometimes omit the security parameter 1^n. The message sent by party P_i (on the above inputs) after having received the series of incoming messages m̄ is denoted by P_i(x_i, ρ_i; m̄). Stated otherwise, P_i(x_i, ρ_i; ·) denotes the next message function of P_i. Let t = ⟨P_1(x_1, ρ_1), P_2(x_2, ρ_2)⟩. Then, denote the ℓ-th message sent by P_i in t by sent^ℓ_{P_i}(t) and the first ℓ messages received by P_i in t by received^{1,...,ℓ}_{P_i}(t). We also denote the output of P_i in an execution by output_{P_i}⟨P_1(x_1, ρ_1), P_2(x_2, ρ_2)⟩.
In our presentation, we assume familiarity with the standard
definitions of secure computation; see [13, Chapter 7]
for a full treatment. In this work, we consider malicious adversaries
(i.e., adversaries that may arbitrarily deviate from
the protocol specification), and static corruptions (meaning
that the set of corrupted parties is fixed before the protocol
execution begins).
We use a non-uniform formulation of adversaries here and
therefore, without loss of generality, assume that they are
deterministic. However, this is not essential and all of our
proofs hold for the uniform model of computation.
Black-box access to primitives. In this paper, we consider
constructions of protocols that use only black-box access
to an underlying primitive. This can be easily formalized
by defining oracles that provide the functionality of the
primitive. For example, a trapdoor permutation can be defined
by an oracle that samples a function description along
with a trapdoor, an oracle that is given the function description
and samples a random value from the domain, an
oracle that is given the function description and a point in
the domain and computes the permutation, and an oracle
that is given the trapdoor and a point in the domain and
computes the permutation inverse. It is easy to see that
our protocols rely on the underlying primitive in a black-box
way. We will therefore not burden the presentation by
formally defining these oracles. We remark that we also construct
protocols that use subprotocols in a black-box way.
This can be formalized by just looking at the input/output
behavior of the protocol. We will not formalize this. It suffices
for our result to note that if the subprotocol uses the
underlying primitive in a black-box way, then the protocol
(that uses the subprotocol) also uses the underlying primitive
in a black-box way. Again, this is easy to verify for
all of our protocols. In addition to using the underlying
primitive in a black-box way, our proofs of security are also
black-box. Therefore, our reductions are what are typically
called "fully black-box" [29].
2.2 Defensible Adversarial Behavior
We introduce the notion of defensible adversarial behavior
. Loosely speaking, an adversary that exhibits defensible
behavior may arbitrarily deviate from the protocol specification
. However, at the conclusion of the protocol execution,
the adversary must be able to justify or defend its behavior
by presenting an input and a random-tape such that the
honest party (with this input and random-tape) would behave
in the same way as the adversary did. A protocol is
"private" under defensible adversarial behavior if it is "private"
in the presence of such adversaries. We stress that if
an adversary behaves maliciously and cannot provide a good
defense, then no security guarantees are given.
We now define the notion of a good defense. Intuitively,
a defense is an "explanation" of an adversary's behavior
during the protocol execution. Such an explanation consists
of an input and random-tape, and the defense is "good" if
an honest party, given that input and random-tape, would
have sent the same messages as the adversary did during the
protocol execution. The formal definition follows.
Definition 2.1. (good defense for t): Let t be the transcript of an execution of a protocol π = (P_1, P_2) between an adversary A (say, controlling P_1) and the honest party (say P_2). Then, we say that the pair (x_1, ρ_1) constitutes a good defense by A for t in π, denoted (x_1, ρ_1) = defense_A(t), if for every ℓ it holds that

    sent^ℓ_A(t) = P_1(x_1, ρ_1; received^{1,...,ℓ-1}_A(t)).

In other words, every message sent by A in the execution is such that the honest party P_1 with input (x_1, ρ_1) would have sent the same message.
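Definition 2.1 can be read operationally: a defense is verified by replaying the honest party's next-message function on the claimed input and random-tape and comparing against the transcript. The sketch below is our illustration under an assumed transcript representation and an assumed next_message callable; it is not taken from the paper.

```python
# Sketch (ours) of checking a good defense: replay the honest next-message
# function P1(x1, rho1; incoming messages) and compare with what A actually sent.
from typing import Callable, List, Tuple

# A transcript is modelled as an ordered list of (sender_id, message) pairs.
Transcript = List[Tuple[str, bytes]]

def is_good_defense(transcript: Transcript,
                    x1: bytes, rho1: bytes,
                    next_message: Callable[[bytes, bytes, List[bytes]], bytes]) -> bool:
    received: List[bytes] = []   # messages A has received so far from the honest party
    for sender, msg in transcript:
        if sender == "A":
            # The honest P1, on input (x1, rho1) and the messages received so far,
            # must produce exactly the message that A sent at this point.
            if next_message(x1, rho1, received) != msg:
                return False
        else:
            received.append(msg)
    return True
```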
2.3 Security of OT Protocols
The starting point of our constructions is an oblivious transfer protocol [28, 8] that is private in the presence of a defensible receiver or sender. Recall that an oblivious transfer protocol involves a sender S with two input strings s_0 and s_1, and a receiver R with an input bit r ∈ {0, 1}. Very informally, an oblivious transfer protocol has the property that the sender learns nothing about the receiver's bit r and the receiver obtains s_r, but learns nothing about s_{1-r}. (The variant of oblivious transfer that we use here is usually referred to as "1-out-of-2 OT".) We begin by presenting the formal definition of oblivious transfer that is private in the presence of a defensible receiver and then proceed to define privacy in the presence of a defensible sender.

Non-trivial protocols. One technicality that must be dealt with is that a protocol that does nothing is trivially "private" in that it does not reveal anything about the parties' inputs. Of course, such a protocol is also useless. In order to make sure that the oblivious transfer protocols that we construct are "useful", we define the notion of a non-trivial oblivious transfer protocol. Such a protocol has the property that if both the sender and receiver are honest, then the receiver will receive its output as designated by the oblivious transfer functionality f((s_0, s_1), r) = (λ, s_r), where λ denotes the empty output.
Privacy for random inputs in the presence of a defensible receiver. We now define privacy for defensible receivers. Recall that the receiver in an oblivious transfer protocol is supposed to obtain one of the pair (s_0, s_1) in the execution. However, the other value must remain secret. When considering defensible adversaries, the requirement is that, as long as the adversary can provide a good defense, it can only learn one of the values. Recall that, by Definition 2.1, a party's defense includes its input (in this case, the bit r of the receiver, meaning that it wishes to obtain the value s_r). We therefore require that a defensible receiver can learn nothing about s_{1-r} when its defense contains the input value r. Due to technical reasons in our proofs later on, we define privacy only for the case that the sender's inputs are uniformly distributed bits. Fortunately, this will suffice for our constructions.

We define an experiment for a protocol π and an adversary A modelled by a polynomial-size family of circuits {A_n}_{n∈N}. Informally, the experiment begins by choosing a random pair of bits (s_0, s_1) to be used for the sender's input. The adversary's aim is to guess the value of the input that it doesn't receive as output.

Experiment Expt^rec_π(A_n):
1. Choose s_0, s_1 ∈_R {0, 1} uniformly at random.
2. Let ρ_S be a uniformly distributed random tape for S and let t = ⟨S(1^n, s_0, s_1, ρ_S), A_n⟩.
3. Let ((r, ρ_r), σ) be the output of A_n(t). (The pair (r, ρ_r) constitutes A_n's defense and σ is its guess for s_{1-r}.)
4. Output 1 if and only if (r, ρ_r) is a good defense by A_n for t in π, and σ = s_{1-r}.

Notice that by A's defense, it should have received s_r. The challenge of the adversary is therefore to guess the value of s_{1-r}; if it cannot do this, then the sender's privacy is preserved.
Definition 2.2. (privacy for random inputs in the presence of a defensible receiver): Let π = (S, R) be a non-trivial oblivious transfer protocol. We say that π is private for random inputs in the presence of a defensible receiver if for every polynomial-size family of circuits A = {A_n}_{n∈N} controlling R, for every polynomial p(·) and for all sufficiently large n's,

    Pr[Expt^rec_π(A_n) = 1] < 1/2 + 1/p(n).

Remark. The definition of Expt^rec_π only considers the case that the inputs of the sender are uniformly distributed. We stress that this is a very weak definition. However, the reasons that we make this restriction are because (a) it suffices for our construction of "fully secure" oblivious transfer (see Protocol 4.1), and more importantly, (b) without this restriction we were unable to prove the privacy of Protocol 3.3 for defensible receivers (see Section 3.2). We stress that this restriction is not made when considering security in the presence of malicious parties.
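The experiment above is straightforward to turn into a test harness: sample random sender bits, run the protocol against the adversary, and count a win only when the returned defense is good and the guess equals s_{1-r}. The following sketch is ours; run_ot_with_adversary and defense_is_good are assumed helpers, not functions defined in the paper.

```python
# Sketch (ours) of Expt_rec as a sampling harness: the empirical advantage of a
# defensible receiver should stay negligibly above zero for a private protocol.
import random

def expt_rec_once(run_ot_with_adversary, defense_is_good) -> bool:
    s0, s1 = random.randint(0, 1), random.randint(0, 1)
    transcript, (r, rho_r), sigma = run_ot_with_adversary(s0, s1)
    not_received = s1 if r == 0 else s0          # s_{1-r}
    return defense_is_good(transcript, r, rho_r) and sigma == not_received

def estimate_advantage(run_ot_with_adversary, defense_is_good, trials: int = 10000) -> float:
    wins = sum(expt_rec_once(run_ot_with_adversary, defense_is_good) for _ in range(trials))
    return wins / trials - 0.5
```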
Privacy in the presence of a defensible sender.
In
an oblivious transfer protocol, the sender is not supposed to
learn anything about the receiver's input. When considering
a defensible sender, this means that the sender should
not be able to simultaneously present a good defense of its
behavior and make a correct guess as to the value of the receiver's
input. We stress that this privacy requirement only
needs to hold when the sender outputs a good defense; in
all other cases, there may be no privacy whatsoever. The
exact definition is formulated in a similar way as above.
Security.
The definitions above refer only to "privacy",
meaning that the adversary can learn nothing more about
the honest party's input than what is revealed by the output.
However, these definitions say nothing about the simulatability
of the protocols in question. In particular, a protocol
that is private by one of the above definitions may not
be secure according to the real/ideal simulation paradigm
(see [13, Chapter 7] for these definitions). When we mention
security in this paper, we refer to security according to
the ideal/real model paradigm.
PRIVACY FOR DEFENSIBLE SENDERS AND DEFENSIBLE RECEIVERS
In this section we show how to construct oblivious transfer
protocols that are private for defensible senders and receivers
. We present two protocols: one based on homomorphic
encryption and one based on enhanced trapdoor permutations
. Importantly, both protocols access the underlying
primitive in a black-box way only.
3.1 Bit OT from Homomorphic Encryption
We assume the existence of a public-key encryption scheme (G, E, D) that is indistinguishable under chosen-plaintext attacks and has the following homomorphic property:

1. The plaintext is taken from a finite Abelian group determined by the public key. For notational convenience, we assume here that the group is an "additive" group Z_q; however, the same construction works for "multiplicative" groups as well.

2. Given any public-key pk generated by the key generation algorithm G and any two ciphertexts c_1 = E_pk(m_1) and c_2 = E_pk(m_2), it is possible to efficiently compute a random encryption of the sum E_pk(m_1 + m_2). Consequently, it is also possible to efficiently compute E_pk(α·m_1) for any known integer α.

We also assume that (G, E, D) has no decryption errors. Such encryption schemes can be constructed under the quadratic-residuosity, decisional Diffie-Hellman and other assumptions; see [1, 17] for some references. The following protocol is implicit in [22].
Protocol 3.1.
Inputs: The sender S has a pair of bits (s_0, s_1); the receiver R has a bit r.
The protocol:
1. The receiver R chooses a pair of keys (pk, sk) ← G(1^n), computes c = E_pk(r) and sends c and pk to S.
2. The sender S uses the homomorphic property and its knowledge of s_0 and s_1 to compute a random encryption c' = E_pk((1 - r)·s_0 + r·s_1), which it sends to R.
3. R computes and outputs s_r = D_sk(c').

Before proving security, note that if S and R are both honest, then R receives the correct output. For example, if r = 0, then c' = E_pk(1·s_0 + 0·s_1) = E_pk(s_0) and so R receives the correct value after decryption.
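As a concrete illustration of Protocol 3.1's message flow, the sketch below (ours) treats the homomorphic scheme purely as a black box through assumed callables keygen, enc, dec, add and scalar_mul; it uses the identity (1 - r)·s_0 + r·s_1 = s_0 + r·(s_1 - s_0) to build the sender's reply.

```python
# Sketch (ours) of Protocol 3.1 on top of an *abstract* additively homomorphic
# scheme accessed only in a black-box way; re-randomization is assumed to be
# folded into add/scalar_mul.
def receiver_round1(keygen, enc, r: int):
    pk, sk = keygen()
    return (pk, sk), enc(pk, r)              # send (pk, c = Enc_pk(r)) to the sender

def sender_round2(enc, add, scalar_mul, pk, c, s0: int, s1: int):
    # Homomorphically compute a random encryption of (1 - r)*s0 + r*s1,
    # rewritten as s0 + r*(s1 - s0) so that only c is needed.
    return add(pk, enc(pk, s0), scalar_mul(pk, c, s1 - s0))

def receiver_output(dec, sk, c_prime) -> int:
    return dec(sk, c_prime)                   # equals s_r when both parties are honest
```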
Claim 3.2. Assume that the encryption scheme (G, E, D) is indistinguishable under chosen-plaintext attacks and has no decryption errors. Then, Protocol 3.1 is a non-trivial oblivious transfer protocol that is private in the presence of defensible senders and private for random inputs in the presence of defensible receivers.

Privacy in the presence of a defensible (or even malicious) sender follows from the fact that the sender's view consists only of a single encryption under E, and this encryption is secure. Privacy with respect to a defensible receiver follows since the existence of a proper defense implies that c is indeed an encryption of 0 or 1. This, in turn, guarantees that c' is a random encryption of s_r. Hence, again, privacy follows from the security of E.
3.2 Bit OT from Enhanced Trapdoor Permutations
The following protocol is a modified version of [8] that is private in the presence of defensible adversaries. We stress that the original protocol of [8] is completely insecure in the presence of defensible adversaries. The construction uses any family of enhanced trapdoor permutations. Informally speaking, a family of trapdoor permutations is comprised of a function-sampling algorithm I, a domain-sampling algorithm D_f, an algorithm F for computing the permutation and an algorithm F^{-1} for inverting the permutation (given the trapdoor). Such a family is called enhanced if it is hard to invert a random value y even when given the coins used by the domain-sampling algorithm to sample y. See [13, Appendix C.1 and Section 7.3] for a full definition. In the sequel, we will abuse notation and refer to the random coins used by D_f as its input. We note that the enhanced property is used in all constructions of oblivious transfer from trapdoor permutations. Indeed, it has been shown that black-box constructions of oblivious transfer from plain trapdoor permutations are impossible [9].

We will require that I is errorless, meaning that for every series of random coins provided to I, the description of the function output is indeed a permutation. We call this errorless function sampling, or just errorless sampling.

The protocol uses a perfectly binding commitment scheme C. We denote a commitment to a using randomness ρ by C(a; ρ). For simplicity, we assume that in order to commit to a string a of length n, it suffices to use a random string ρ that is also of length n. Such a commitment scheme can be obtained using black-box access to any trapdoor permutation or homomorphic encryption scheme.
Protocol 3.3.
Inputs: The sender S has a pair of random bits (s_0, s_1); the receiver R has a bit r.
Auxiliary information: The description of a family of (enhanced) trapdoor permutations (I, D_f, F, F^{-1}) and a hard-core bit B for the family.
The protocol:
1. The receiver R chooses ρ_1, ρ ∈_R {0, 1}^n and sends c = C(ρ_1; ρ) to the sender S.
2. S chooses a trapdoor permutation pair (i, t) ← I(1^n) and a random ρ_2 ∈_R {0, 1}^n, and sends i and ρ_2 to R.
3. R computes y_{1-r} = D_f(ρ_1 ⊕ ρ_2); i.e., y_{1-r} is obtained by running the domain sampling algorithm with coins ρ_1 ⊕ ρ_2. In addition, R chooses ρ' ∈_R {0, 1}^n, obtains x_r = D_f(ρ') and computes y_r = f_i(x_r). Finally, R sends (y_0, y_1) to S.
4. S uses t to compute σ_0 = B(f_i^{-1}(y_0)) ⊕ s_0 and σ_1 = B(f_i^{-1}(y_1)) ⊕ s_1. S sends (σ_0, σ_1) to R.
5. R computes and outputs s_r = B(x_r) ⊕ σ_r.
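The next sketch (ours) spells out the message flow of Protocol 3.3 on top of the abstract trapdoor-permutation oracles and an assumed commitment function; it is an illustration of the steps above, not an implementation claimed by the paper.

```python
# Sketch (ours) of Protocol 3.3's message flow.  `tdp` is assumed to expose
# sample_function/sample_domain/evaluate/invert, `hardcore_bit` is the
# hard-core predicate B, and `commit` is a perfectly binding commitment.
import os

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def receiver_step1(commit, n: int):
    rho1, rho = os.urandom(n), os.urandom(n)
    return (rho1, rho), commit(rho1, rho)              # send c = C(rho1; rho)

def sender_step2(tdp, n: int):
    i, t = tdp.sample_function()
    rho2 = os.urandom(n)
    return (i, t), (i, rho2)                           # send (i, rho2)

def receiver_step3(tdp, i, rho1: bytes, rho2: bytes, r: int, n: int):
    y_other = tdp.sample_domain(i, xor(rho1, rho2))    # y_{1-r}: coins fixed by rho1 XOR rho2
    x_r = tdp.sample_domain(i, os.urandom(n))          # freely sampled preimage
    y_r = tdp.evaluate(i, x_r)
    ys = (y_r, y_other) if r == 0 else (y_other, y_r)
    return x_r, ys                                     # send (y_0, y_1)

def sender_step4(tdp, hardcore_bit, t, ys, s0: int, s1: int):
    sigma0 = hardcore_bit(tdp.invert(t, ys[0])) ^ s0
    sigma1 = hardcore_bit(tdp.invert(t, ys[1])) ^ s1
    return sigma0, sigma1                              # send (sigma_0, sigma_1)

def receiver_step5(hardcore_bit, x_r, r: int, sigmas) -> int:
    return hardcore_bit(x_r) ^ sigmas[r]               # recover s_r
```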
Note that the only difference between Protocol 3.3 and the protocol of [8] is that in [8], the value y_{1-r} is chosen singlehandedly by the receiver, whereas here the value is chosen mutually using a (weak non-simulatable) coin-tossing protocol. (Indeed, in the protocol of [8] a cheating receiver can just choose a value y_{1-r} for which it knows the preimage. The receiver will then learn both s_0 and s_1. Note also that a defensible receiver can also easily cheat in the protocol of [8] because it can send any value y_{1-r} and not the value that equals D_f(ρ_1 ⊕ ρ_2). In particular, it can send a value y_{1-r} for which it knows its preimage x_{1-r} under f_i, and can still claim in its defense that its coins are such that y_{1-r} was sampled directly.)
Claim 3.4. Assume that (I, D_f, F, F^{-1}) is a family of enhanced one-way trapdoor permutations and that the scheme C is perfectly binding and computationally hiding. Then, Protocol 3.3 is a non-trivial oblivious transfer protocol that is private in the presence of defensible receivers and private for random inputs in the presence of defensible senders.

Intuitively, a corrupted sender cannot guess the value of r from (y_0, y_1) because these values are identically distributed. This actually only holds as long as the function f_i chosen by the sender is really a permutation from the family. (Otherwise, it may be possible to distinguish y_r, which is generated by computing f_i(x_r), from y_{1-r}, which is randomly chosen from the domain.) The fact that the function is really a permutation is "proven" in the defense, and so if a good defense is provided, y_r and y_{1-r} are identically distributed. We therefore have that the only way a defensible sender can learn the value of r is from the commitments. However, this involves distinguishing between c = C(D_f^{-1}(y_0) ⊕ ρ_2) and c = C(D_f^{-1}(y_1) ⊕ ρ_2), which is hard due to the hiding property of commitments. (Notice that y_{1-r} = D_f(ρ_1 ⊕ ρ_2) and so c = C(ρ_1) = C(D_f^{-1}(y_{1-r}) ⊕ ρ_2). Therefore, the problem of guessing r reduces to the problem of distinguishing such commitments.) As for privacy in the presence of a defensible receiver R^*: intuitively, if R^* behaves so that it can present a good defense, then it is unable to compute B(f_i^{-1}(y_{1-r})) because it has no freedom in choosing y_{1-r}. That is, R^* must choose y_{1-r} = D_f(ρ_1 ⊕ ρ_2) and so it cannot know the preimage f_i^{-1}(y_{1-r}). This implies that it can only learn the sender's bit s_r.
ACHIEVING SECURITY AGAINST A MALICIOUS RECEIVER
In this section we construct a bit oblivious transfer protocol
that is secure in the presence of a malicious receiver
and private in the presence of a defensible sender. We stress
that the security achieved for malicious receivers is according
to the ideal/real model definition of security for secure
computation. Our construction uses black-box access to an
oblivious transfer protocol that is private for defensible receivers
and senders (like those constructed in the previous
section). Thus, in this section we show how to boost the
security guarantee from privacy in the presence of a defensible
receiver to security in the presence of a malicious receiver
. The guarantee regarding a corrupted sender remains
unchanged.
Protocol 4.1.
Inputs: The sender S has a pair of bits (s_0, s_1); the receiver R has a bit r.
The protocol:
1. The receiver R chooses 2n uniformly distributed bits r_1, ..., r_{2n} ∈_R {0, 1}.
2. The sender S chooses 2n pairs of random bits s^0_i, s^1_i ∈_R {0, 1} for i = 1, ..., 2n.
3. S and R run 2n parallel executions of a bit oblivious transfer protocol π that is private in the presence of defensible receivers and defensible senders. In the i-th execution, S inputs (s^0_i, s^1_i) and R inputs r_i. Let t_1, ..., t_{2n} be the transcripts that result from these executions.
4. S and R run a secure two-party coin-tossing protocol (that accesses a one-way function in a black-box way) for generating a random string of length n: q = q_1, ..., q_n.^3 The string q is used to define a set of indices Q ⊂ {1, ..., 2n} of size n in the following way: Q = {2i - q_i}_{i=1}^n. (Thus, for n = 3 and q = 010 we have that Q = {2, 3, 6}.)
5. For every i ∈ Q, the receiver R provides a defense (r_i, ρ^i_r).
6. S checks that for every i ∈ Q, the pair (r_i, ρ^i_r) constitutes a good defense by R for t_i. If not, then S aborts and halts. Otherwise, it continues to the next step.
7. For every j ∉ Q, the receiver R computes α_j = r ⊕ r_j (where r is R's initial input) and sends {α_j}_{j∉Q} to S.
8. S computes σ_0 = s_0 ⊕ (⊕_{j∉Q} s^{α_j}_j) and σ_1 = s_1 ⊕ (⊕_{j∉Q} s^{1-α_j}_j), and sends (σ_0, σ_1) to R.
9. R computes and outputs s_r = σ_r ⊕ (⊕_{j∉Q} s^{r_j}_j).

^3 Sequential executions of the coin-tossing protocol of [3] can be used. The security of this has been proven formally in [13].
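The combinatorial bookkeeping in Protocol 4.1 (the index set Q and the XOR masking and unmasking) is easy to isolate; the sketch below (ours) captures just that part, with the 2n underlying OT executions, the coin tossing and the defense checks abstracted away.

```python
# Sketch (ours) of Protocol 4.1's bookkeeping: deriving Q from q, masking
# (s0, s1), and recovering s_r.  s_pairs[i] = (s^0_i, s^1_i) are the sender's
# random pairs; received[i] = s^{r_i}_i is what the receiver got in execution i.
from functools import reduce

def index_set(q: str) -> set:
    # Q = {2i - q_i : i = 1..n}; e.g. n = 3, q = "010" gives Q = {2, 3, 6}.
    return {2 * i - int(bit) for i, bit in enumerate(q, start=1)}

def sender_mask(s0: int, s1: int, s_pairs, Q: set, alphas: dict):
    out = set(range(1, 2 * len(Q) + 1)) - Q
    sigma0 = reduce(lambda acc, j: acc ^ s_pairs[j][alphas[j]], out, s0)
    sigma1 = reduce(lambda acc, j: acc ^ s_pairs[j][1 - alphas[j]], out, s1)
    return sigma0, sigma1

def receiver_recover(r: int, sigmas, received: dict, Q: set) -> int:
    out = set(received) - Q
    return reduce(lambda acc, j: acc ^ received[j], out, sigmas[r])
```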
We note that the sender's inputs to the executions of the oblivious transfer subprotocol π in Protocol 4.1 are uniformly distributed. Therefore, it suffices to use Protocol 3.3, even though it has only been proven "private" for the case of uniformly distributed sender inputs. We stress that our proof below of Protocol 4.1 relies on the fact that the sender's inputs are single bits.^4

Claim 4.2. Assume that π is a non-trivial oblivious transfer protocol that is private for random inputs in the presence of defensible senders and receivers. Then, Protocol 4.1 is a non-trivial oblivious transfer protocol that is secure in the presence of malicious receivers and private in the presence of defensible senders.
Proof Sketch: We first demonstrate the non-triviality property; that is, we show that if S and R are honest, then R receives s_r, as required. To see this, first note that by the non-triviality of π, the receiver R obtains all of the bits s^{r_j}_j, and in particular all s^{r_j}_j for j ∉ Q. Now, if r = 0, then R sets α_j = r_j for every j ∉ Q. Therefore, R will compute s_0 = σ_0 ⊕ (⊕_{j∉Q} s^{r_j}_j) = σ_0 ⊕ (⊕_{j∉Q} s^{α_j}_j). This computation is correct because S computed σ_0 = s_0 ⊕ (⊕_{j∉Q} s^{α_j}_j). In contrast, if r = 1, then α_j = 1 ⊕ r_j for every j, which is equivalent to r_j = 1 ⊕ α_j. Thus, once again, R's computation of ⊕_{j∉Q} s^{r_j}_j when computing s_1 equals S's computation of ⊕_{j∉Q} s^{1-α_j}_j when computing σ_1, and R will obtain s_1.
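The XOR identities used in this correctness argument can be checked mechanically; the snippet below (ours) reuses the index_set, sender_mask and receiver_recover helpers sketched after Protocol 4.1 and verifies that the receiver always recovers s_r.

```python
# Quick mechanical check (ours) of the XOR bookkeeping in the non-triviality
# argument, assuming the index_set/sender_mask/receiver_recover sketch above.
import random

def check_once(n: int = 4) -> bool:
    r = random.randint(0, 1)
    s0, s1 = random.randint(0, 1), random.randint(0, 1)
    q = "".join(random.choice("01") for _ in range(n))
    Q = index_set(q)
    s_pairs = {i: (random.randint(0, 1), random.randint(0, 1)) for i in range(1, 2 * n + 1)}
    r_bits = {i: random.randint(0, 1) for i in range(1, 2 * n + 1)}
    received = {i: s_pairs[i][r_bits[i]] for i in s_pairs}        # s^{r_i}_i
    alphas = {j: r ^ r_bits[j] for j in s_pairs if j not in Q}    # alpha_j = r XOR r_j
    sigmas = sender_mask(s0, s1, s_pairs, Q, alphas)
    return receiver_recover(r, sigmas, received, Q) == (s0 if r == 0 else s1)

assert all(check_once() for _ in range(1000))
```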
Privacy in the presence of defensible senders. We present only the idea behind the proof that Protocol 4.1 is private in the presence of a defensible sender A. Intuitively, if protocol π is private in the presence of a defensible sender, then a defensible adversary here cannot learn any of the r_i values in the execution (apart from those explicitly revealed by R when it provides its defenses). Therefore, the α_j = r_j ⊕ r values that it receives reveal nothing of the receiver's input r, because for all j ∉ Q, the value r_j is not learned.
Security in the presence of malicious receivers. We present an almost full proof that Protocol 4.1 is secure in the presence of malicious receivers. The intuition behind this proof is that the cut-and-choose technique forces an adversarial receiver A to be able to provide a good defense for most of the oblivious transfer executions (or be caught with high probability). In particular, there must be at least one j ∉ Q for which A could have provided a good defense. This implies that there exists some j for which A cannot predict the value of s^{1-r_j}_j with any non-negligible advantage. Since s_{1-r} is masked by s^{1-r_j}_j, it follows that A also learns nothing about s_{1-r}. We stress that the above intuition shows that a malicious A cannot learn anything about s_{1-r}. However, we actually need to prove a much stronger claim in that the protocol is secure for a malicious R^*, as defined via the ideal/real model simulation paradigm. We present our analysis in the so-called "hybrid model", where the honest parties use a trusted party to compute the coin-tossing functionality for them.

^4 This is due to our definition of "oblivious transfer that is private for defensible adversaries". It is possible to define a stronger notion of defensible adversaries that is sufficient for proving that Protocol 4.1 is secure even when the sender's inputs are strings of an arbitrary length. However, we were not able to prove that Protocol 3.3 is private for defensible adversaries under this stronger notion (in contrast to Protocol 3.1 that can be proven secure under the stronger notion).
We now describe the simulator Sim for A = {A_n}:

1. For each i = 1, ..., 2n, simulator Sim chooses random pairs s^0_i, s^1_i ∈_R {0, 1} and plays the honest sender in π with these inputs, where A_n plays the receiver.

2. Sim chooses a random string q ∈_R {0, 1}^n and hands it to A_n as if it is the output of the coin-tossing functionality, as sent by the trusted party. Let Q be the index set derived from q. Upon receiving back pairs (r_i, ρ^i_r) for i ∈ Q, simulator Sim checks that they all constitute good defenses, respectively. If not, then it aborts (just like the honest sender).

3. Sim rewinds A_n to the beginning of the previous step and chooses a new random string q' with associated index set Q'. (We stress that q' is independent of q.) Sim hands q' to A_n and sees if it replies with pairs (r_i, ρ^i_r) that are good defenses, for all i ∈ Q'. Sim repeats this process with a new q' until A_n indeed replies with pairs (r_i, ρ^i_r) that are good defenses, for all i ∈ Q'. If Q' = Q, then Sim outputs fail. Otherwise it proceeds to the next step.

4. Given that Q' ≠ Q (and |Q'| = |Q|), there exists at least one index j such that j ∉ Q' but j ∈ Q. For such a j, Sim computes r = r_j ⊕ α_j and sends r to the trusted party. (Note that r_j is obtained from the defense (r_j, ρ^j_r) that was received from A_n after it was sent the query set Q. In contrast, α_j is the value received from A_n after rewinding; i.e., when the query set was Q'.)

5. Upon receiving back a bit s_r from the trusted party, Sim computes σ_0 and σ_1 as follows:
(a) If r = 0, then σ_0 = s_0 ⊕ (⊕_{j∉Q'} s^{α_j}_j) and σ_1 ∈_R {0, 1}.
(b) If r = 1, then σ_0 ∈_R {0, 1} and σ_1 = s_1 ⊕ (⊕_{j∉Q'} s^{1-α_j}_j).
Sim sends (σ_0, σ_1) to A_n and outputs whatever A_n does.
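The rewinding structure of Sim can be summarized as a small loop: one initial challenge q, then fresh challenges q' until the adversary again answers with good defenses, with failure declared only if the second success lands on the same q. The sketch below is our schematic rendering, not the paper's code; run_challenge is an assumed callable that replays A_n from the rewinding point on a given challenge and reports whether all requested defenses were good.

```python
# Schematic sketch (ours) of Sim's rewinding loop.
import random
from typing import Optional

def rewind_until_good(run_challenge, n: int, max_tries: Optional[int] = None):
    q = "".join(random.choice("01") for _ in range(n))
    ok, _ = run_challenge(q)
    if not ok:
        return "abort", None             # the honest sender would also abort here
    tries = 0
    while True:
        q_prime = "".join(random.choice("01") for _ in range(n))
        ok, state = run_challenge(q_prime)
        if ok:
            # Extraction is possible unless the second success is on the same q.
            return ("fail", None) if q_prime == q else ("extract", (q, q_prime, state))
        tries += 1
        if max_tries is not None and tries >= max_tries:
            return "giveup", None        # cap only for experimentation; the proof has no cap
```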
We proceed to prove that the joint output of Sim and the honest sender S in the ideal model is computationally indistinguishable from the joint output of A_n and S in the real model. Actually, since the honest S has no output from the protocol, it suffices here to show that the output of Sim in the ideal model is computationally indistinguishable from the output of A_n in the real model. We first claim that apart from the pair (σ_0, σ_1), the view of A_n in the simulation with Sim is statistically close to its view in a real execution with S; the only difference being in the case that Sim outputs fail. This can be seen as follows: if A_n does not send good defenses after receiving q, then Sim aborts, just as the honest S would (and in this case the simulation is perfect). If A_n does send good defenses, then Sim continues until it finds another (independent) q' for which A_n also replies with good defenses. It is not hard to see that this yields a distribution that is the same as in a real execution, except when q' = q, in which case Sim outputs fail. However, this event (that it provides good defenses on q and then the next time that it provides good defenses is again on q) can happen with probability only 2^{-n}.
We therefore have that in the simulation by Sim, the adversary A_n's partial view up until the point that it receives (σ_0, σ_1) is statistically close to its view in a real execution with S. We now show that A_n's full view is computationally indistinguishable. To do this, we consider a modified ideal-model simulator Sim' who receives the sender S's input pair (s_0, s_1). Simulator Sim' works in exactly the same way as Sim, except that it computes σ_{1-r} as an honest sender would instead of choosing it uniformly. By the above argument, it follows that the distribution generated by Sim' in the ideal model is statistically close to the distribution generated by a real execution between S and A_n. (Recall that Sim already generates σ_r in the same way as an honest S, and therefore so does Sim'.) It remains to show that the distribution generated by Sim' is computationally indistinguishable from that generated by Sim.

The only difference between Sim and Sim' is in the generation of σ_{1-r}: simulator Sim' generates it "honestly", whereas Sim chooses it uniformly. As mentioned above, intuitively, indistinguishability follows from the fact that at least one s^{1-r_j}_j masks the value of s_{1-r}. Formally, we show that if this "fake" σ_{1-r} can be distinguished from a real one, then we can construct a defensible receiver Ã_n that can break the oblivious transfer protocol π.

That is, we show that if the output generated by Sim and Sim' can be distinguished with non-negligible probability, then it is possible for a defensible adversary Ã_n to succeed in the experiment of Definition 2.2 with non-negligible advantage, with respect to the subprotocol π. Assume by contradiction that there exists a distinguisher D, a polynomial p(·) and infinitely many n's such that

    |Pr[D(output_Sim) = 1] - Pr[D(output_{Sim'}) = 1]| ≥ 1/p(n).

Without loss of generality, assume that

    Pr[D(output_Sim) = 1] - Pr[D(output_{Sim'}) = 1] ≥ 1/p(n).    (1)
We now use the above to construct a defensible adversary Ã = {Ã_n}. Adversary Ã_n begins its attack by starting the simulation of Protocol 4.1, according to Sim's strategy. Specifically, Ã_n chooses s_0, s_1 ∈_R {0, 1} and runs the simulation strategy of Sim with A_n up until the point where σ_0 and σ_1 are sent. The simulation is the same as Sim, except for the following difference: Ã_n begins by choosing j ∈_R {1, ..., 2n} and internally invokes A_n, simulating an execution of Protocol 4.1. Then, all of the oblivious transfer subexecutions of π, except for the j-th one, are run internally with Ã_n playing the honest sender (Ã_n also chooses the s^0_i and s^1_i values as S would); in contrast, the messages of the j-th execution of the oblivious transfer protocol π are forwarded between Ã_n's external sender and the internal A_n playing the receiver. Following the oblivious transfer executions, Ã_n runs the honest sender in the coin-tossing protocol to generate q and thus Q as required. If j ∉ Q, then Ã_n outputs fail and halts. Otherwise, Ã_n receives back the defenses; since j ∈ Q, the j-th defense is included. If (r_j, ρ^j_r) is not a good defense, then Ã_n outputs fail and halts. Otherwise, it stores (r_j, ρ^j_r) and continues like Sim by rewinding A_n and generating a new q' and Q'. If j ∈ Q', then once again Ã_n outputs fail and halts. Otherwise, it continues like Sim (using the j chosen above for which it is given that j ∈ Q and j ∉ Q'). Ã_n continues in the same way that Sim does up until (but not including) the point at which (σ_0, σ_1) must be sent. Now, Ã_n computes (σ_0, σ_1) as follows. First, note that Ã_n knows the values (s_0, s_1) and s^0_i, s^1_i for all i ≠ j (because it chose them). However, the values s^0_j and s^1_j are not known to Ã_n because these are the values used by the external sender with whom it interacts. Nevertheless, the (good) defense provided by A_n is enough to obtain the value s^{r_j}_j. This holds because given the transcript of the j-th oblivious transfer execution and the input and random-tape of the receiver, it is possible to derive s^{r_j}_j. The only value unknown to Ã_n is therefore s^{1-r_j}_j. Therefore, Ã_n is able to compute σ_r like the honest sender. In contrast, it cannot honestly compute σ_{1-r}. Rather, Ã_n guesses the value of s^{1-r_j}_j ∈_R {0, 1} randomly, and then computes σ_{1-r} using s_{1-r}, all of the s_i values that it knows (i.e., all apart from s^{1-r_j}_j), and the uniformly chosen s^{1-r_j}_j. In order to determine its output, Ã_n obtains the output of A_n and runs the distinguisher D (from Eq. (1)) on this output; let b be the bit output by D. Then, Ã_n sets σ = s^{1-r_j}_j ⊕ b. (Recall that σ is Ã_n's guess for the "not-received" bit used by the honest sender. The motivation for this guess is that by Eq. (1), D outputs 1 with higher probability on Sim (when the bit is random) than on Sim' (when the bit is correct). Thus, when D outputs 1, we flip Ã_n's guess for s^{1-r_j}_j.) Finally, Ã_n outputs the defense (r_j, ρ^j_r) from above and the bit σ.
We proceed to analyze the probability that Ã_n succeeds in Expt^rec_π. First, note that unless Ã_n outputs fail, the view of A_n when interacting with Ã_n above is identical to its view in the simulation by Sim. This is due to the fact that Ã_n follows Sim's strategy, except for two differences. The first difference is that the j-th execution of the oblivious transfer protocol π is run externally. However, since Sim plays the role of an honest sender (with uniformly distributed inputs) in all of the executions, this makes no difference to A_n's view. The second difference is in how σ_{1-r} is computed: Sim chooses it uniformly, whereas Ã_n computes it as described above. Clearly, the distribution generated is the same because Ã_n uses a uniformly distributed s^{1-r_j}_j, and thus σ_{1-r} is also uniformly distributed.

Now, denote the inputs of the honest sender that Ã_n interacts with by (~s_0, ~s_1). Using the facts that (a) Ã_n generates the exact same distribution as Sim, (b) Ã_n sets σ = s^{1-r_j}_j ⊕ b (where b is D's output bit), and (c) Ã_n presents a good defense every time that it does not output fail, we have that

    Pr[Expt^rec_π(Ã_n) = 1 | output_{Ã_n} ≠ fail] = Pr[D(output_Sim) ⊕ s^{1-r_j}_j = ~s_{1-r_j}].    (2)

(Recall that Expt^rec_π(Ã_n) = 1 if Ã_n presents a good defense and σ = ~s_{1-r_j}.)

In contrast to the above, conditioned on the event that s^{1-r_j}_j = ~s_{1-r_j} (i.e., the event that Ã_n guessed correctly), the result is an execution that is distributed exactly according to Sim'. (Recall that the only difference between Sim and Sim' is with respect to the computation of σ_{1-r}.) That is,

    Pr[D(output_Sim) ⊕ s^{1-r_j}_j = ~s_{1-r_j} | s^{1-r_j}_j = ~s_{1-r_j}]
      = Pr[D(output_{Sim'}) ⊕ s^{1-r_j}_j = ~s_{1-r_j} | s^{1-r_j}_j = ~s_{1-r_j}]
      = Pr[D(output_{Sim'}) = 0],

where the last equality is just due to the fact that s^{1-r_j}_j = ~s_{1-r_j}. Now, recalling that s^{1-r_j}_j is chosen uniformly by Ã_n (and so equals ~s_{1-r_j} with probability exactly 1/2), we have:
    Pr[D(output_Sim) ⊕ s^{1-r_j}_j = ~s_{1-r_j}]
      = 1/2 · Pr[D(output_Sim) ⊕ s^{1-r_j}_j = ~s_{1-r_j} | s^{1-r_j}_j = ~s_{1-r_j}]
        + 1/2 · Pr[D(output_Sim) ⊕ s^{1-r_j}_j = ~s_{1-r_j} | s^{1-r_j}_j ≠ ~s_{1-r_j}]
      = 1/2 · Pr[D(output_{Sim'}) = 0] + 1/2 · Pr[D(output_Sim) = 1 | s^{1-r_j}_j ≠ ~s_{1-r_j}]
      = 1/2 · (1 - Pr[D(output_{Sim'}) = 1]) + 1/2 · Pr[D(output_Sim) = 1 | s^{1-r_j}_j ≠ ~s_{1-r_j}]
      = 1/2 + 1/2 · Pr[D(output_Sim) = 1 | s^{1-r_j}_j ≠ ~s_{1-r_j}] - 1/2 · Pr[D(output_{Sim'}) = 1].

Recalling again that when s^{1-r_j}_j = ~s_{1-r_j} the output of Sim is the same as Sim', we have that

    1/2 + 1/2 · Pr[D(output_Sim) = 1 | s^{1-r_j}_j ≠ ~s_{1-r_j}] - 1/2 · Pr[D(output_{Sim'}) = 1]
      = 1/2 + 1/2 · Pr[D(output_Sim) = 1 | s^{1-r_j}_j ≠ ~s_{1-r_j}]
        + 1/2 · Pr[D(output_Sim) = 1 | s^{1-r_j}_j = ~s_{1-r_j}] - Pr[D(output_{Sim'}) = 1]
      = 1/2 + Pr[D(output_Sim) = 1] - Pr[D(output_{Sim'}) = 1].
Combining the above with Equations (1) and (2), we have that for infinitely many n's

    Pr[Expt^rec_π(Ã_n) = 1 | output_{Ã_n} ≠ fail] = Pr[D(output_Sim) ⊕ s^{1-r_j}_j = ~s_{1-r_j}] ≥ 1/2 + 1/p(n).
Recall now that Ã_n outputs fail if A_n does not output a good defense, if j ∉ Q, or if j ∈ Q'. We first claim that A_n must output a good defense with non-negligible probability. This follows simply from the fact that when A_n does not output a good defense, the execution is truncated and the distributions generated by Sim and Sim' are identical. Therefore, Eq. (1) implies that for infinitely many n's, A_n outputs a good defense with probability at least 1/p(n). Next, recall that Ã_n chooses the sets Q and Q' randomly (under the constraints prescribed in the protocol). Thus, with probability exactly 1/4, j ∈ Q and j ∉ Q' (because the probability that a given j is in a specified set is exactly 1/2). We conclude that with non-negligible probability, Ã_n does not output fail, and thus Pr[Expt^rec_π(Ã_n) = 1] is non-negligible.
It remains to show that Sim runs in expected polynomial time. Aside from the rewinding stage, all work takes a fixed polynomial amount of time. Regarding the rewinding stage, we have the following. Let p denote the probability that A_n replies correctly upon a random set of indices Q of size n, as specified in the protocol. Then, given that A_n replied correctly to the initial query set Q, the expected number of rewinding attempts with independent Q' made by Sim equals 1/p. Since these rewinding attempts are only made if A_n replied correctly to the initial query set Q, we have that the expected number of attempts overall equals p · (1/p) = 1. This completes the proof.
MALICIOUS SENDERS AND DEFENSIBLE RECEIVERS
In this section, we reverse the oblivious transfer protocol
of Protocol 4.1 to obtain a protocol that is secure in the
presence of a malicious sender and private for random inputs
in the presence of a defensible receiver. We use the
construction of [31] for reversing Protocol 4.1. The protocol
is as follows:
Protocol 5.1. (reversing oblivious transfer):
Inputs: The sender S has a pair of bits (s_0, s_1) for input and the receiver R has a bit r.
The protocol:
1. The sender and receiver run an oblivious transfer protocol π that is secure in the presence of a malicious receiver and private in the presence of a defensible sender:
(a) The sender S, playing the receiver in π, inputs ~r = s_0 ⊕ s_1.
(b) The receiver R, playing the sender in π, chooses a random bit ρ ∈_R {0, 1} and inputs ~s_0 = ρ and ~s_1 = ρ ⊕ r.
Denote S's output from π by a.
2. S sends R the bit σ = s_0 ⊕ a.
3. R outputs s_r = σ ⊕ ρ.
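The role reversal in Protocol 5.1 amounts to one call to the underlying OT plus a single correction bit. The snippet below (ours) merely checks the arithmetic of the reduction in one place rather than modelling two separate parties; underlying_ot is an assumed black box that returns the chosen input to its receiver.

```python
# Sketch (ours) of the OT reversal: underlying_ot((t0, t1), choice) returns
# t_choice to the party playing its receiver.
import random

def reversed_ot(underlying_ot, s0: int, s1: int, r: int) -> int:
    rho = random.randint(0, 1)
    # R (the new receiver) plays the sender of the underlying OT with inputs
    # (rho, rho ^ r); S (the new sender) plays its receiver with input s0 ^ s1.
    a = underlying_ot((rho, rho ^ r), s0 ^ s1)   # S learns a = rho ^ r*(s0 ^ s1)
    sigma = s0 ^ a                               # S sends sigma to R
    return sigma ^ rho                           # R outputs s_r
```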
The security of Protocol 5.1 can be easily proven as an
information-theoretic reduction, or when the original oblivious
transfer protocol is fully secure. In contrast, it is far
more subtle in the setting where only privacy in the presence
of a defensible sender is assumed. Nevertheless, we do
obtain the following claim:
Claim 5.2. If π is a non-trivial oblivious transfer protocol
that is secure in the presence of a malicious receiver
and private in the presence of a defensible sender, then Protocol
5.1 is a non-trivial oblivious transfer protocol that is
secure in the presence of a malicious sender and private for
random inputs in the presence of a defensible receiver.
FULLY-SECURE BIT OT
In this section, we use the construction of Protocol 4.1
again in order to boost the security of Protocol 5.1 so that
it is secure in the presence of both a malicious sender and
a malicious receiver; we call such a protocol fully secure to
stress that it is secure in the face of any corruption.
By Claim 4.2, we have that Protocol 4.1 boosts the security
of any oblivious transfer protocol that is private for
defensible receivers into one that is secure in the presence
of malicious receivers. We can therefore use Protocol 4.1
to boost the security of Protocol 5.1 so that the result is a
protocol that is secure in the presence of malicious receivers.
This does not suffice, however, because we must show that
if the subprotocol used in Protocol 4.1 is secure in the presence
of malicious senders, then the result is still secure in the
presence of malicious senders. (Claim 4.2 considers only privacy
for defensible senders.) This is actually easy to show,
and is omitted here due to lack of space.
Theorem 6.1. Assume that there exists a non-trivial bit oblivious transfer protocol π that is secure in the presence of malicious senders and private for random inputs in the presence of defensible receivers. Then, Protocol 4.1, instantiated using this π, is a non-trivial bit oblivious transfer protocol that is secure in the presence of malicious receivers and senders.
Black-box construction of oblivious transfer. Noting
that perfectly-binding commitment schemes (as used in Protocol
3.3) can be constructed using black-box access to homomorphic
encryption or enhanced trapdoor permutations,
and combining Protocols 3.1 and 3.3 with Protocol 4.1, followed
by Protocol 5.1 and the construction in Theorem 6.1,
we obtain secure bit oblivious transfer with black-box access
to a homomorphic encryption scheme or a family of
enhanced trapdoor permutations.
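Putting the pieces together, the chain of constructions in Table 1 composes as follows; the sketch (ours) is only meant to show the order of the transformations and the fact that each one consumes the previous protocol as a black box.

```python
# Illustrative sketch (ours) of the composition order, mirroring Table 1.
def build_fully_secure_ot(base_primitive,
                          defensible_ot_from,       # Protocol 3.1 or 3.3
                          boost_receiver_security,  # Protocol 4.1
                          reverse_roles):           # Protocol 5.1
    ot = defensible_ot_from(base_primitive)         # private vs. defensible sender/receiver
    ot = boost_receiver_security(ot)                # secure vs. malicious receiver (Claim 4.2)
    ot = reverse_roles(ot)                          # secure vs. malicious sender (Claim 5.2)
    ot = boost_receiver_security(ot)                # secure vs. both parties (Theorem 6.1)
    return ot
```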
BLACK-BOX SECURE COMPUTATION
Kilian [18] showed that any function can be securely computed
given black-box access to a bit oblivious transfer functionality. We therefore have the following theorem, which
constitutes our main result:
Theorem 7.1. Assume that there exist homomorphic encryption
schemes with errorless decryption or families of
enhanced trapdoor permutations. Then, for any probabilistic
polynomial-time functionality
f there exists a protocol
that uses only black-box access to a homomorphic encryption
scheme or to a family of enhanced trapdoor permutations
, and securely computes
f with any number of corrupted
parties and in the presence of a static malicious adversary.
We remark that as is standard for the setting of no honest
majority, the security guarantee achieved here is that of "security
with abort"; see [13, Chapter 7] for formal definitions.
REFERENCES
[1] W. Aiello, Y. Ishai and O. Reingold. Priced Oblivious
Transfer: How to Sell Digital Goods. In EUROCRYPT 2001,
Springer-Verlag (LNCS 2045), pages 119135, 2001.
[2] M. Ben-Or, S. Goldwasser and A. Wigderson. Completeness
Theorems for Non-Cryptographic Fault-Tolerant Distributed
Computation. In 20th STOC, pages 110, 1988.
[3] M. Blum. Coin Flipping by Phone. In IEEE Spring
COMPCOM, pages 133137, 1982.
[4] R. Canetti. Security and Composition of Multiparty
Cryptographic Protocols. Journal of Cryptology,
13(1):143202, 2000.
[5] D. Chaum, C. Cr
epeau and I. Damg
ard. Multi-party Uncond-itionally
Secure Protocols. In 20th STOC, pages 1119, 1988.
[6] I. Damg
ard and Y. Ishai. Constant-Round Multiparty
Computation Using a Black-Box Pseudorandom Generator. In
CRYPTO 2005, Springer-Verlag (LNCS 3621), pages 378394,
2005.
[7] D. Dolev, C. Dwork and M. Naor. Non-Malleable
Cryptography. SIAM Journal on Computing, 30(2):391437,
2000.
[8] S. Even, O. Goldreich and A. Lempel. A Randomized Protocol
for Signing Contracts. In Communications of the ACM,
28(6):637647, 1985.
[9] R. Gennaro, Y. Lindell and T. Malkin. Enhanced versus Plain Trapdoor Permutations for Non-Interactive Zero-Knowledge and Oblivious Transfer. Manuscript in preparation, 2006.
[10] R. Gennaro and L. Trevisan. Lower Bounds on the Efficiency of Generic Cryptographic Constructions. In 41st FOCS, pages 305-314, 2000.
[11] Y. Gertner, S. Kannan, T. Malkin, O. Reingold and M. Viswanathan. The Relationship between Public Key Encryption and Oblivious Transfer. In 41st FOCS, pages 325-334, 2000.
[12] Y. Gertner, T. Malkin and O. Reingold. On the Impossibility of Basing Trapdoor Functions on Trapdoor Predicates. In 42nd FOCS, pages 126-135, 2001.
[13] O. Goldreich. Foundations of Cryptography: Volume 2 - Basic Applications. Cambridge University Press, 2004.
[14] O. Goldreich, S. Micali and A. Wigderson. Proofs that Yield Nothing but their Validity or All Languages in NP Have Zero-Knowledge Proof Systems. Journal of the ACM, 38(1):691-729, 1991.
[15] O. Goldreich, S. Micali and A. Wigderson. How to Play any Mental Game - A Completeness Theorem for Protocols with Honest Majority. In 19th STOC, pages 218-229, 1987.
[16] R. Impagliazzo and S. Rudich. Limits on the Provable Consequences of One-way Permutations. In CRYPTO'88, Springer-Verlag (LNCS 403), pages 8-26, 1988.
[17] Y.T. Kalai. Smooth Projective Hashing and Two-Message Oblivious Transfer. In EUROCRYPT 2005, Springer-Verlag (LNCS 3494), pages 78-95, 2005.
[18] J. Kilian. Founding Cryptography on Oblivious Transfer. In 20th STOC, pages 20-31, 1988.
[19] J. Kilian. Uses of Randomness In Algorithms and Protocols. MIT Press, 1990.
[20] J. Kilian. Improved Efficient Arguments. In CRYPTO'95, Springer-Verlag (LNCS 963), pages 311-324, 1995.
[21] J.H. Kim, D.R. Simon and P. Tetali. Limits on the Efficiency of One-Way Permutation-Based Hash Functions. In 40th FOCS, pages 535-542, 1999.
[22] E. Kushilevitz and R. Ostrovsky. Replication Is Not Needed: Single Database, Computationally-Private Information Retrieval. In 38th FOCS, pages 364-373, 1997.
[23] Y. Lindell. A Simpler Construction of CCA2-Secure Public-Key Encryption Under General Assumptions. In EUROCRYPT 2003, Springer-Verlag (LNCS 2656), pages 241-254, 2003.
[24] T. Malkin and O. Reingold. Personal communication, 2006.
[25] S. Micali. Computationally Sound Proofs. SIAM Journal on Computing, 30(4):1253-1298, 2000.
[26] M. Naor and K. Nissim. Communication Preserving Protocols for Secure Function Evaluation. In 33rd STOC, pages 590-599, 2001.
[27] M. Naor and B. Pinkas. Efficient Oblivious Transfer Protocols. In 12th SODA, pages 448-457, 2001.
[28] M. Rabin. How to Exchange Secrets by Oblivious Transfer. Tech. Memo TR-81, Harvard University, 1981.
[29] O. Reingold, L. Trevisan, and S. Vadhan. Notions of Reducibility between Cryptographic Primitives. In 1st TCC, pages 1-20, 2004.
[30] A. Sahai. Non-Malleable Non-Interactive Zero-Knowledge and Adaptive Chosen-Ciphertext Security. In 40th FOCS, pages 543-553, 1999.
[31] S. Wolf and J. Wullschleger. Oblivious Transfer Is Symmetric. To appear in EUROCRYPT 2006. Appears at Cryptology ePrint Archive, Report 2004/336, 2004.
[32] A. Yao. How to Generate and Exchange Secrets. In 27th FOCS, pages 162-167, 1986.
108 | oblivious transfer;encryption scheme;oblivious transfer protocol;secure computation;nonblack-box;malicious adversary;black-box;Theory of cryptography;cryptographic;black-box reductions;trapdoor permutation |
45 | Bluetooth Dynamic Scheduling and Interference Mitigation | Bluetooth is a cable replacement technology for Wireless Personal Area Networks. It is designed to support a wide variety of applications such as voice, streamed audio and video, web browsing, printing, and file sharing, each imposing a number of quality of service constraints including packet loss, latency, delay variation, and throughput. In addition to QOS support, another challenge for Bluetooth stems from having to share the 2.4 GHz ISM band with other wireless devices such as IEEE 802.11. The main goal of this paper is to investigate the use of a dynamic scheduling algorithm that guarantees QoS while reducing the impact of interference. We propose a mapping between some common QoS parameters such as latency and bit rate and the parameters used in the algorithm. We study the algorithm's performance and obtain simulation results for selected scenarios and configurations of interest. | Introduction
Today most radio technologies considered by Wireless Personal
Area Network (WPAN) industry consortia and standard
groups including the Bluetooth Special Interest Group [1],
HomeRF, and the IEEE 802.15, employ the 2.4 GHz ISM frequency
band. This same frequency band is already in use by
microwave ovens and the popular Wireless Local Area Network
(WLAN) devices implementing the IEEE 802.11 standard
specifications [8].
However, instead of competing with WLANs for spectrum
and applications, WPANs are intented to augment many of
the usage scenarios and operate in conjunction with WLANs,
i.e., come together in the same laptop, or operate in proximity
in an office or conference room environment. For example,
Bluetooth can be used to connect a headset or PDA to a desktop computer that, in turn, may be using WLAN to connect
to an Access Point placed several meters away.
Thus, an issue of growing concern is the coexistence of
WLAN and WPAN in the same environment. Several techniques
and algorithms aimed at reducing the impact of interference
have been considered.
These techniques range
from collaborative schemes intended for Bluetooth and IEEE
802.11 protocols to be implemented in the same device to
fully independent solutions that rely on interference detection
and estimation. In particular:
Collaborative mechanisms. Mechanisms for collaborative
schemes have been proposed to the IEEE 802.15 Coexistence
Task Group and are based on a Time Division Multiple
Access (TDMA) solution that alternates the transmission
of Bluetooth and WLAN packets (assuming both
protocols are implemented in the same device and use a
common transmitter) [9]. A priority of access is given to
Bluetooth for transmitting voice packets, while WLAN is
given priority for transmitting data.
Non-collaborative mechanisms. The non-collaborative
mechanisms range from adaptive frequency hopping [11]
to packet scheduling and traffic control [4]. They all use
similar techniques for detecting the presence of other devices
in the band such as measuring the bit or frame error
rate, the signal strength or the signal to interference ratio
(often implemented as the Received Signal Indicator
Strength (RSSI)). Frequency hopping devices may be able
to detect that some frequencies are used by other devices
and thus modify their frequency hopping pattern. They
can also choose not to transmit on "bad" frequencies. The
first technique is known as adaptive frequency hopping,
while the second technique is known as MAC scheduling.
The main advantage of scheduling is that it does not require
changes to the Bluetooth specifications.
In this paper we present a Bluetooth Interference Aware
Scheduling (BIAS) algorithm to deal with coexistence. This
algorithm takes advantage of the fact that devices in the same
piconet will not be subject to the same levels of interference
on all channels of the band. The basic idea is to utilize the
Bluetooth frequency hopping pattern and distribute channels
to devices so as to maximize their throughput while ensuring
fairness of access among users.
In this paper, we propose several extensions to a preliminary
discussion of the algorithm [4] in order to address (1) priority
scheduling, (2) dynamic changes in the environment,
and (3) asymmetric scenarios where packet lengths and data
rates are chosen differently in the upstream (slave to master
transmission) and downstream (master to slave transmission)
directions. In addition, we describe how to map commonly used QOS parameters, namely bit rate and jitter, to the parameters used in BIAS. Simulation results for scenarios and
configurations of interest are presented and performance is
measured in terms of packet loss and mean access delay.
The remainder of this paper is organized as follows. In
section 2, we give some general insights on the Bluetooth
interference environment. In section 3, we describe BIAS
and discuss the mapping of QOS parameters. In section 4,
we present simulation results and offer concluding remarks in
section 5.
Interference environment
Since Bluetooth operates in the 2.4 GHz band along with
other wireless technologies such as 802.11, high and low rate
WPAN (802.15.3 and 4), the resulting mutual interference
leads to significant performance degradation.
In this paper, we assume that interference is caused by an
802.11 spread spectrum network operating in proximity of the
Bluetooth piconet. This represents the worst case interference
for Bluetooth. Golmie et al. [6] use a detailed MAC and
PHY simulation framework to evaluate the impact of interference
for a pair of WLAN devices and a pair of Bluetooth
devices. The results indicate that Bluetooth performance may
be severely impacted by interference with packet loss of 8%
and 18% for voice and data traffic, respectively. In [6], the
authors investigate the effect of several factors, such as transmitted
power, offered load, packet size, hop rate, and error
correction on performance. First, they note that power control
may have limited benefits in an interference environment.
Increasing the Bluetooth transmission power even ten times is
not sufficient to reduce the Bluetooth packet loss. Second, using
a shorter packet size leads to less packet loss for Bluetooth
at the cost of causing more interference on WLAN. Overall,
the results exhibit a strong dependence on the type and characteristics
of the traffic distribution used.
Additional analytical [5,10] and experimentation [3,7] results
confirm these findings.
Bluetooth Interference Aware Scheduling
In this section, we present a Bluetooth Interference Aware
Scheduling (BIAS) algorithm that consists of several components
, namely, (i) dynamic channel estimation, (ii) credit
computation, and (iii) access priority. A preliminary discussion
of BIAS appeared in [4].
In this sequel, we assume that traffic from slave S_i to the master (upstream) is characterized by a data rate, γ_i^up, equal to (N_i^peak × l_i^up)/p_i, where N_i^peak is the number of packets sent back-to-back within a poll interval, p_i, and l_i^up is the packet length (1, 3, or 5 slots depending on the packet type). Similarly, the data rate in the downstream (from the master to slave S_i) is characterized by γ_i^dn equal to (N_i^peak × l_i^dn)/p_i. Note that N_i^peak and p_i are the same in the upstream and downstream, since every packet in the upstream corresponds to one in the downstream. In addition, we assume the following transmission
rules for the master and slave.
Master - The master polls S_i every p_i slots in order to guarantee γ_i^up in the upstream direction. A poll message can be either a data or POLL packet. A data packet is sent if there is a packet in the queue destined for S_i. This packet contains the ACK of the previous packet received from S_i. In case there is no data to transmit and the master needs to ACK a previous slave transmission, it sends a NULL packet.
Slave S_i - Upon receipt of a packet from the master, the slave can transmit a data packet. This data packet contains the ACK information of the master-to-slave packet transmission. In case the slave does not have any data to send, it sends a NULL packet in order to ACK the previous packet reception from the master. No ACK is required for a NULL message from the master.
In a nutshell, we propose a method that allows the master
device, which controls all data transmissions in the piconet,
to avoid data transmission to a slave experiencing a "bad"
frequency. Furthermore, since a slave transmission always
follows a master transmission, using the same principle, the
master avoids receiving data on a "bad" frequency, by avoiding
a transmission on a frequency preceding a "bad" one in
the hopping pattern.
This simple scheduling scheme, illustrated in figure 1, needs only be implemented in the master device and translates into the following transmission rule. The master transmits in a slot after it verifies that both the slave's receiving frequency, f_s, and its own receiving frequency, f_m, are "good". Otherwise, the master skips the current transmission slot and repeats the procedure over again in the next transmission opportunity.
Figure 1. Interference Aware Scheduling.
Figure 2. Master packet transmission flow diagram.
Figure 2 describes the master's transmission flow diagram. In addition to checking the pair of slave and master receiving frequencies, (f_s, f_m), the algorithm incorporates bandwidth requirements and quality of service guarantees for each master/slave connection in the piconet. This bandwidth allocation is combined with the channel state information and mapped into transmission priorities given to each direction in the master/slave communication. It is shown in the "choose slave" routine in the flow diagram. Note that the master invokes the "choose" routine after serving the retransmission ACK queue for packets sent by the master requiring retransmission.
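To make the transmission rule concrete, the per-slot check the master performs can be sketched as follows. This is our own illustrative Python, not code from the paper; is_good() stands for whatever channel classifier the estimation procedure of section 3.1 provides.

```python
def master_may_transmit(f_slave_rx, f_master_rx, is_good):
    """Transmission rule of figure 2: the master sends in the current slot
    only if the slave's receiving frequency AND the master's own receiving
    frequency (used for the slave's reply) are both classified "good".
    Otherwise the slot is skipped and the check is repeated at the next
    transmission opportunity."""
    return is_good(f_slave_rx) and is_good(f_master_rx)
```

In the full algorithm this check is combined with the per-connection weights of section 3.3 inside the "choose slave" routine.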
In the remainder of this section, we discuss (a) a dynamic
channel estimation procedure, (b) a credit allocation function,
and (c) a service priority routine that schedules packet transmissions
to devices according to their service requirements
and the state of the channel.
3.1. Dynamic channel estimation
Estimation is mainly based on measurements conducted on
each frequency or channel in order to determine the presence
of interference. Several methods are available ranging
from BER, RSSI, packet loss rate, and negative ACKs. In this
discussion, the estimation is based on negative ACKs, which
belongs to the class of implicit methods that do not require
messages to be exchanged between the master and the slave
devices. First, we define two phases in the channel estimate
procedure as illustrated in figure 3. During the Estimation
Window, EW, packets are sent on all frequencies regardless of
their classification.
Note that in case no data traffic is available for transmission
, POLL/NULL packets could be exchanged between the
master and the slave in order to probe the channel and collect
measurements. This POLL/NULL exchange is designed in
most implementations to keep the connection alive and check
the status of the slave. It comes at the expense of causing
more interference on other systems.
Figure 3. Implicit estimation.
EW takes place at the beginning of every Estimation Interval, EI, and is followed
by an Online phase where the master uses only "good" frequencies
to selectively send data and POLL packets to slaves
in the piconet. Next, we give a lower bound on the EW and
describe how to adjust EI based on the environment's dynamics
.
Estimation Window.
The time to perform the channel estimation
depends on the frequency hopping rate since the methods
used to perform the classification depend on packet loss
measurements per frequency visited. A lower bound calculation
is as follows. First, we assume a hop rate of 1600 hops/s
given single slot packets. For each receiver the hopping rate
is 1600/2 hops/s, or 800 hops/s since nodes receive on every
other "frequency" or "hop" in the sequence. Next, we consider
the Bluetooth frequency hopping algorithm. In a window
of 32 frequencies, every frequency is selected once, then
the window is advanced by 16 frequencies, and the process
is repeated. Therefore, it takes 5 windows of 32 frequencies
in order to visit each of the 79 frequencies twice. In other
words, 160 hops visit each frequency twice. The time to visit
each frequency four times per receiver is (160/800) × 2 = 0.4 seconds, or 400 ms. In fact, 400 ms constitutes a lower bound
assuming full load and single-slot packets.
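As a quick check of this arithmetic, the few lines below (our own code and variable names) reproduce the 400 ms lower bound:

```python
hop_rate = 1600.0                  # hops/s with single-slot packets
rx_hop_rate = hop_rate / 2         # a receiver listens on every other hop
hops_per_double_visit = 160        # 5 windows of 32 hops cover all 79 frequencies twice
visits_per_frequency = 4           # measurements wanted per frequency and receiver
ew_lower_bound = (hops_per_double_visit / rx_hop_rate) * (visits_per_frequency / 2)
print(ew_lower_bound)              # 0.4 s, i.e. the 400 ms lower bound quoted above
```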
In order to avoid having to fix the EW, or compute it manually, we propose a simple technique to dynamically adjust the window based on the number of times, N_f, each frequency in the band should be visited. For example, if N_f is equal to 2, then each receiving frequency in the band is visited at least twice, i.e., the estimation phase ends only when the last frequency in the band has been used twice for each device in the piconet. Note that avoiding "bad" frequencies can start before EW ends, or as soon as frequency status information becomes available.
Estimation Interval.
How often to update the channel estimation
depends on the application and the dynamics of the
scenario used. We propose an adaptive procedure to adjust
EI, which is the interval between two consecutive estimation
windows.
First, we let α be the percentage of frequencies that change classification status (from "good" to "bad" or vice versa) during the previous estimation phase. More formally, let S(f, t) be the status of frequency f at time t:

    S(f, t) = 1 if f is "good", 0 otherwise.   (1)

Using the exclusive bit "OR" operation between S(f, t) and S(f, t + 1) represents the change of status of frequency f from time t to t + 1. A change of status leads to a logic "1" while no change yields a logic "0". Summing over all frequencies and dividing by the number of frequencies available, which is 79 in this case, then gives α:

    α_{t+1} = (1/79) · Σ_f [S(f, t) ⊕ S(f, t + 1)].   (2)
Initially, EI is set to EI_min. Then, EI is updated every interval t, according to the rationale that if a change has just happened it is likely to happen again in the near future, and therefore EI is set to EI_min; otherwise, the window is doubled (up to EI_max):

    EI_{t+1} = min(2 · EI_t, EI_max)   if α_{t+1} ≤ 0.1,
               EI_min                  otherwise.   (3)
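A minimal sketch of this adaptation, assuming the estimation phase simply returns a boolean good/bad vector over the 79 frequencies (the function and variable names are ours; the defaults EI_min = 2 s and EI_max = 100 s are the values used later in section 4):

```python
def update_estimation_interval(prev_status, new_status, ei, ei_min=2.0, ei_max=100.0):
    """prev_status/new_status: lists of 79 booleans (True = "good" frequency).
    Returns the next estimation interval EI in seconds, per Eqs. (2)-(3)."""
    flips = sum(1 for a, b in zip(prev_status, new_status) if a != b)
    alpha = flips / len(prev_status)        # Eq. (2): fraction of frequencies that changed
    if alpha > 0.1:                         # channel changed noticeably: re-estimate soon
        return ei_min
    return min(2 * ei, ei_max)              # quiet channel: back off, capped at EI_max
```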
3.2. Credit allocation
The credit system controls the bandwidth allocated to each
device in order to ensure that no device gets more than its
fair share of the available bandwidth. Thus, devices with a
positive credit counter, c_i, are allowed to send data. Since the rate in the upstream can be different from the rate in the downstream, we define c_i^up and c_i^dn for both the upstream and downstream credits. Credits can be computed according to the upstream and downstream rates negotiated as follows:

    c_i^up = γ_i^up × N,    c_i^dn = γ_i^dn × N,   (4)

where N is the number of slots considered in the allocation and γ_i^{up/dn} = (l_i^{up/dn} × N_i^peak)/p_i. Credits are decremented by the number of slots used in each data packet transmission. The transmission of POLL and NULL packets does not affect the credit count, based on the rationale that credits are not required for the transmission of POLL and NULL messages. An interesting question is how to compute γ or derive it from application QOS parameters such as delay, peak bandwidth, and jitter. Let d (in seconds) and r (in bits/s) represent the delay and peak bandwidth, respectively, with the jitter also given in seconds. r is part of the L2CAP QOS parameters and for some applications is negotiated between the master and the slave at connection setup. r is equal to (N_peak × E_l × 8)/(p × 625 × 10^-6) and γ = (r × l × 625 × 10^-6)/(E_l × 8). Note that E_l is the number of information bytes contained in a packet of length l. Table 1 gives E_l corresponding to the various DH formats.
The choice of l depends on the L2CAP packet size, k. When k ≤ E_5, N_peak = 1 and l is chosen such that:

    l = 1 if 0 < k ≤ 27,
        3 if 27 < k ≤ 183,
        5 if 183 < k ≤ 339.   (5)
Table 1. Packet encapsulation rate for DH packets.
Packet type   l   E_l (bytes)
DH1           1   27
DH3           3   183
DH5           5   339
However, when k > E_5, higher layer (L2CAP) packets are segmented into N_peak baseband packets. The aim is to find N_peak equal to

    N_peak = ⌈k / E_l⌉   (6)

such as to minimize N_peak × l, the total number of slots needed. Furthermore, since master and slave transmissions alternate, the end-to-end delay of a packet accounts for the segmentation and the transmission of packets in both directions. Therefore, the choice of l_up and l_dn is loosely constrained by the delay requirement as follows:

    N_peak × (l_up + l_dn) ≤ d / (625 × 10^-6),   (7)

where 625 × 10^-6 is the length of a slot in seconds. Finally, the choice of p is determined by the jitter as follows:

    2 ≤ p ≤ jitter / (625 × 10^-6),   (8)

where 2 is the minimum value for the poll interval, since every other slot is dedicated to a master (or slave) transmission. In case r, d, and the jitter cannot be determined from the application QOS, γ can be set to 1 − Σ_i γ_i, the leftover bandwidth after having calculated γ_i for all other applications with known service rates.
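The mapping from an L2CAP payload size to the packet length, segmentation, and credits of Eqs. (4)-(6) can be sketched as follows. This is our own illustrative code (names are not from the paper), with the DH payload capacities of Table 1 hard-coded:

```python
import math

E = {1: 27, 3: 183, 5: 339}          # information bytes per DH1/DH3/DH5 packet (Table 1)

def choose_segmentation(k):
    """Eqs. (5)-(6): pick the baseband packet length l (in slots) and the
    packet count N_peak that minimise the total slot count N_peak * l
    for a k-byte L2CAP payload."""
    total_slots, l = min((math.ceil(k / E[l]) * l, l) for l in (1, 3, 5))
    return l, math.ceil(k / E[l])

def credits(rate_up, rate_dn, n_slots):
    """Eq. (4): credits granted over an allocation window of n_slots slots,
    given the negotiated upstream and downstream rates."""
    return rate_up * n_slots, rate_dn * n_slots

print(choose_segmentation(1000))     # (5, 3): a 1000-byte payload rides in three DH5 packets
```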
3.3. Service priority
The third component of the algorithm is to give an access priority
to devices based on their channel conditions and their
allocated credits.
We let u_i be the probability that a pair of master/slave transmission slots are "good". Thus, u_i represents the available spectrum for slave S_i, and we write:

    u_i = min(1 − 1/79, P(slave i has a good receiving frequency) × P(master has a good receiving frequency)),   (9)

where

    P(device i has a good receiving frequency) = (number of good channels for device i) / (total number of channels).   (10)
We use a two-tier system with high and low priorities, denoted by A and B, respectively. Priority A is used to support delay-constrained applications such as voice, MP3, and video. On the other hand, priority B is used to support best-effort connections such as ftp, http, print, and email. The scheduling routine services priority A devices first, and priority B devices second. Also, among same-tier connections, we choose to give devices with a smaller number of good channels the right of way over other devices that have more channels available. The priority access is determined according to a weight factor, w, that is the product of the credits and the probability of experiencing a bad frequency.
Table 2. Definition of parameters used in the scheduling algorithm.
γ_i^{up,dn}: rate allocated for device i in the upstream and downstream
w_i^{up,dn}: weight for device i
c_i^{up,dn}: credit for device i
N: number of slots considered in the allocation
u_i: available frequency usage for device i

The weights w_i^up and w_i^dn are computed as follows:
    w_i^up = c_i^up × (1 − u_i),    w_i^dn = c_i^dn × (1 − u_i).   (11)

The master schedules a data transmission for slave i such as to maximize the product of the weights in the upstream and downstream:

    i = argmax_{i ∈ S_f} (w_i^up × w_i^dn).   (12)

To transmit a POLL packet, the master looks only at the weight function in the upstream:

    i = argmax_{i ∈ S_f} (w_i^up).   (13)
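As an illustration, the selection rules of Eqs. (11)-(13) can be written as the short sketch below. This is our own code: each element of slaves is assumed to carry its credit counters c_up and c_dn and its availability u, and is_good_for(s, f) stands for the channel-state test (slave s can receive on frequency f).

```python
def pick_for_data(slaves, f, is_good_for):
    """Eq. (12): among slaves able to receive on frequency f, pick the one
    maximising w_up * w_dn, with w = credit * (1 - u) as in Eq. (11)."""
    eligible = [s for s in slaves if is_good_for(s, f)]
    return max(eligible,
               key=lambda s: (s.c_up * (1 - s.u)) * (s.c_dn * (1 - s.u)),
               default=None)

def pick_for_poll(slaves, f, is_good_for):
    """Eq. (13): same restriction, but only the upstream weight is used."""
    eligible = [s for s in slaves if is_good_for(s, f)]
    return max(eligible, key=lambda s: s.c_up * (1 - s.u), default=None)
```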
The selection of a slave is restricted to the set of slaves S_f that can receive on the master's current transmission frequency, f. Thus, any slave that experiences a "bad" channel on the current transmission frequency is not considered. Four sets of slaves are formed: A_f^data, A_f^poll, B_f^data, and B_f^poll. A_data and A_poll represent the sets of high priority connections requiring data and POLL packet transmissions, respectively. Similarly, B_data and B_poll represent low priority connections. First, the algorithm tries to schedule a packet to high priority slaves in group A, then a POLL packet, before it moves to group B. The credit counters and weights are updated accordingly after every master transmission. Table 2 summarizes the parameters used in the algorithm and their definition. The algorithm's pseudocode is given in table 11.
Performance evaluation
In this section, we present simulation results to evaluate the
performance of BIAS. The experiments illustrate the algorithm's
responsiveness to changes in the environment and
the support of QOS. The results obtained are compared with
Round Robin (RR) scheduling. Our simulation environment
is based on a detailed MAC, PHY and channel models for
Bluetooth and IEEE 802.11 (WLAN) as described in [6]. The
parameters used in the setup vary according to the experiment
. The common simulation parameters are summarized in
table 3. The simulations are run for 900 seconds of simulated
time unless specified otherwise. We run 10 trials using a different
random seed for each trial. In addition to plotting the mean value, we verify that the statistical variation around the mean values is very small (less than 1%).
The performance metrics include the packet loss, the mean
access delay, and the channel estimation transient time.

Table 3. Common simulation parameters.
Bluetooth parameters:
  ACL baseband packet encapsulation: DH5
  Transmitted power: 1 mW
WLAN parameters:
  Packet interarrival time: 2.172 ms
  Offered load: 60% of channel capacity
  Transmitted power: 25 mW
  Data rate: 11 Mbit/s
  PLCP header: 192 bits
  Packet header: 224 bits
  Payload size: 12000 bits

The packet loss is the percentage of packets dropped due to interference
over the total number of packets received. The
access delay measures the time it takes to transmit a packet
from the time it is passed to the MAC layer until it is successfully
received at the destination. The delay is measured
at the L2CAP layer. The estimation transient time measures
the time it takes a Bluetooth device to detect the presence of a
"bad" frequency, i.e., from the time a packet loss occurs until
the frequency is classified "bad". This average is provided on
a per frequency basis.
4.1. Experiment 1: base case
This experiment includes Bluetooth performance results for
the reference scenario when no interference is present. It represents
a base case since the effects of BIAS are quantified
and compared against the reference scenario. It also covers
different levels of interference caused by WLAN systems operating
in close proximity. Thus, we examine Bluetooth's performance
when 1, 2, and 3 WLAN interfering systems are operational
and compare that to the ideal performance when no
interference is present. Note that, the maximum number of
non-overlapping channels for WLAN systems is 3, i.e., there
could be up to 3 WLAN networks operating simultaneously
using different non-overlapping channels. In each case, results
are obtained with BIAS and RR scheduling. The benefits
of using BIAS are discussed in terms of packet loss and
access delay.
Topology.
We use the topology illustrated in figure 4 that
consists of 3 WLAN systems (source-sink pairs), and one
Bluetooth piconet with one master and one slave device. In a
first step, we record the results of Bluetooth when no WLAN
system is present. Then, we add one WLAN system at a time
starting with WLAN (Source/Sink) 1, followed by WLAN
(Source/Sink) 2, and 3.
Traffic.
For Bluetooth, a generic source that generates DH5 packets is considered. The packet interarrival mean time in seconds, t_B, is exponentially distributed and is computed according to

    t_B = 2 × l × 0.000625 × (1/λ − 1),   (14)

where l is the packet length in slots and λ is the offered
load. We assume that WLAN is operating in the Direct Sequence
Spread Spectrum (DSSS) mode. The WLAN source
is transmitting data packets to the sink which is responding
with ACKs. The WLAN packet payload is set to 12000 bits
transmitted at 11 Mbit/s, while the PLCP header of 192 bits
is transmitted at 1 Mbit/s. The packet interarrival time in seconds, t_W, is exponentially distributed and its mean is computed according to

    t_W = (192/1,000,000 + 12,224/11,000,000) × (1/λ).   (15)
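For reference, the two interarrival means are easy to compute; the sketch below (our own helper names) reproduces, for a 60% WLAN offered load, the 2.172 ms value listed in table 3:

```python
def bt_mean_interarrival(l_slots, load):
    """Eq. (14): mean packet interarrival time (s) of the Bluetooth source."""
    return 2 * l_slots * 0.000625 * (1.0 / load - 1.0)

def wlan_mean_interarrival(load):
    """Eq. (15): mean interarrival time (s) of the 802.11 source
    (192-bit PLCP header at 1 Mbit/s, 12224 bits at 11 Mbit/s)."""
    return (192 / 1_000_000 + 12_224 / 11_000_000) / load

print(round(wlan_mean_interarrival(0.6) * 1000, 3))   # 2.172 (ms)
```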
Results.
Figure 5 gives the packet loss (a) and the mean access
delay (b) measured at the slave for a variable Bluetooth
offered load (5-80%). Observe that when no WLAN system
is present, the packet loss is zero and the access delay remains
flat at around 4 ms. This represents a reference measure
for the Bluetooth performance when there is no interference.
Each WLAN system added causes an increase of about 15% in packet loss, as shown in figure 5(a). The packet loss is around 15%, 30%, and 45% when one, two, and three WLAN systems are present, respectively. Repeating the same experiments using BIAS brings the packet loss down to zero for any number
of WLAN systems.
Figure 4. Topology for experiments 1 and 2.
The delay trends captured in figure 5(b) are consistent with the packet loss results. Using BIAS yields
lower delays than when RR is used. When one WLAN system
is present, the delay curve with BIAS is flat at 5 ms (a 1 ms
increase compared to the reference case when no interference
is present). When 2 WLAN systems are present, the delay
curve takes off at 35% with RR, while the curve remains flat
until 60% with BIAS. When 3 WLAN systems are present,
the delay curve takes off sharply at 15% with RR, while the
knee of the curve remains lower with BIAS (shifted to the
right).
4.2. Experiment 2: dynamic behavior
In this experiment, we focus on BIAS's responsiveness to
transient effects and sudden changes in the environment. We
measure the channel estimation transient time per frequency
and over the entire spectrum. We design an experiment where
the WLAN traffic is turned on and off several times during
each simulation run (about 30 times).
Topology.
We use the topology of figure 4 with one WLAN
system (Source/Sink 1) and the Bluetooth master/slave pair.
Traffic.
The traffic is based on bulk data. The offered load
for Bluetooth is varied between 10 and 100%, while for
WLAN the offered load is set to 60%. For Bluetooth, both
DH1 (1 slot) and DH5 (5 slots) packets are used in order
to compare the difference in transient times. The time the WLAN connection is ON, T_ON, is exponentially distributed with a mean equal to 10 seconds, while the time the WLAN connection is OFF, T_OFF, is also exponentially distributed with a mean equal to 20 seconds. Each simulation is run for 900 seconds. Unless specified otherwise, we set EI_min = 2 seconds, EI_max = 100 seconds, and N_f = 1.
Results.
Figures 6(a) and 6(b) give the packet loss and access
delay, respectively, measured at the Bluetooth slave device.
Figure 5. Experiment 1. Variable number of WLAN interfering systems. (a) Probability of packet loss. (b) Mean access delay.
Figure 6. Experiment 2. Variable Bluetooth offered load. (a) Probability of packet loss. (b) Mean access delay.
The packet loss obtained with BIAS is negligible (less
than 2%) for both DH1 and DH5 packets. On the other hand
the packet loss with Round Robin (RR) is close to 10%. The
access delay obtained with BIAS for DH1 packets is lower
than the delay for DH5 packets for offered loads under 70%
(it is around 1.5 ms for DH1 packets, and 4 ms for DH5 packets
). The knee of the curve for DH5 packets is located around
80% of the offered load while it is at 60% for DH1 packets
. Observe that BIAS gives lower access delays than RR for
DH5 packets (between 40% and 80% offered load). However,
the same does not apply to DH1 packets, in which we observe
a slight increase in access delay (0.5 ms) with BIAS compared
to RR. For short packets (DH1), retransmissions due to packet loss (RR) and delays in transmission due to "bad" frequency avoidance (BIAS) yield comparable delays. Furthermore, given that the probability of packet loss (and retransmission) is small for short packets, RR gives lower access delays on average. Figure 7 gives the time it takes to estimate a "bad" frequency using DH1 and DH5 packets. The use of DH5 packets leads to a higher round trip transmission time, and therefore increases the transient time, up to 1.5 ms, while it is around 250 µs for DH1 packets.
4.3. Experiment 3: QOS support
This experiment highlights the support of QOS in an environment
where devices experience different levels of interference
and connections have a range of service requirements.
Topology.
We use the topology illustrated in figure 8.
Slaves 1 and 2 experience the same level of interference,
while slave 3 does not experience any interference. The y-coordinate
of the WLAN FTP server is varied along the y-axis
in order to vary the level of interference on the Bluetooth piconet
.
Figure 7. Experiment 2. Variable Bluetooth offered load. Time to estimate a
"bad" channel.
Figure 8. Topology for experiment 3.
Traffic.
For Bluetooth, we consider three application profiles
, namely, Print, Video, and Email. We use print, video,
and email traffic between slaves 1, 2, 3 and the master, respectively
. Note that the master is the client process in all
three connections. The profile parameters are given in table 4.
The WLAN uses the FTP profile described in table 5.
Since the video application generates roughly 93 and 58 packets in the upstream and downstream directions, respectively, and since it is often difficult to predict the exact traffic distributions, the rate is divided evenly between both directions. Thus, we set γ_2^up = γ_2^dn = 0.25. The two other applications share the leftover bandwidth (γ_{1,3}^{up,dn} = (1 − 0.5)/4 = 0.125).

Table 4. Bluetooth application profile parameters.
Email:
  Send interarrival time (sec): Exponential, 120
  Send group: Constant, 3
  Receive interarrival time (sec): Exponential, 60
  Receive group: Constant, 3
  Email size (bytes): Exponential, 1024
Print:
  Print requests interarrival time (sec): Exponential, 30
  File size: Normal (30 K, 9 M)
Video:
  Frame rate: Constant, 1 frame/s
  Frame size (bytes): Constant, 17280 (128 × 120 pixels)

Table 5. WLAN application profile parameters.
FTP:
  File interarrival time (sec): Exponential, 5
  File size (bytes): Exponential, 5 M
  Percentage of get: 100%
Results.
Figure 9 depicts the results when the WLAN y-coordinate
is varied between 0 and 10 meters. In figure 9(a),
the packet loss with BIAS is below 0.1% for all three slaves
and the master. With RR, slave 1 (Print) and slave 2 (Video)
vary between 15% and 3% of packet loss between 0 and 10 meters, respectively, while the packet loss for the master is above 20%. Slave 3 (Email) has a low packet loss with both
BIAS and RR since it is far from the WLAN server.
The access delay for slave 2 (Video) in figure 9(b) is 0.3
seconds with BIAS, while it is almost double with RR (0.6
seconds). For Print, delays with BIAS are half the delays with
RR (0.01 seconds as opposed to 0.02 seconds). The delays for
Email are also reduced by half with BIAS.
4.4. Experiment 4: WLAN and multi-Bluetooth piconets
interference
When two or more Bluetooth piconets are proximally located, one expects a few collisions when packets happen to be transmitted on the same frequency. However, the probability of such collisions is low, as discussed in [2], since each piconet has a unique frequency sequence. Given that these packet collisions are random in nature and are already mitigated by frequency hopping, we do not expect significant performance improvements when BIAS is used, since the packet
loss is already very low. Furthermore, the fact that frequencies
are eliminated due to other Bluetooth piconet interference
may even cause delay increases. We illustrate this particular
issue using the following scenario.
Topology.
We use the topology illustrated in figure 10 representing
a conference hall environment. It consists of one
WLAN AP located at (0, 15) meters, and one WLAN mobile
at (0, 0) meters. The WLAN mobile is the server device, while the AP is the client.
Figure 9. Experiment 3. Variable distance. (a) Probability of packet loss. (b) Access delay.
Figure 10. Topology for experiment 4.
Table 6. Profile parameters.
Bluetooth FTP:
  Percentage of put/get: 100%
  Inter-request time (sec): Exponential, 5
  File size (bytes): Exponential, 250 K
HTTP:
  Page interarrival time (sec): Exponential, 30
  Number of objects per page: Constant, 2
  Object 1 size (bytes): Constant, 1 K
  Object 2 size (bytes): Uniform (2 K, 100 K)
The distance between the WLAN AP and mobile is d_W = 15 meters. There are ten Bluetooth piconets randomly placed, covering a disk. The center of the disk is located at (0, 0) and its radius is r = 10 meters. We define d_B as the distance between a Bluetooth master and slave pair. d_B = 1 meter for half of the master and slave pairs, while d_B = 2 meters for the other half of the master and slave pairs.
Traffic.
We run four experiments with different combinations
of WLAN and Bluetooth applications, namely, HTTP
and FTP. We use the application profiles available in the OPNET library and configure the parameters according to table 6. The WLAN FTP profile parameters are given in table 5.
Results.
The results for the Bluetooth packet loss and access
delay are given in tables 7 and 8, respectively. The results are
grouped by application category (FTP, HTTP) and by d_B, for
each of the WLAN profiles. Overall, the packet loss results
with BIAS are comparable to the packet loss obtained with
RR. In some instances, the packet loss with BIAS is slightly
lower than with RR, however the difference remains less than
2%. The access delays for Bluetooth is given in table 8. The
results with BIAS and RR are comparable. However, there
are no significant advantages in using BIAS.
Tables 9 and 10 give the packet loss and the access delay, respectively, for the WLAN FTP and HTTP profiles.

Table 7. Bluetooth packet loss probability for experiment 4.
BT traffic         WLAN FTP (BIAS / RR)   WLAN HTTP (BIAS / RR)
FTP, d_B = 1 m     0.0103 / 0.0158        0.0064 / 0.0356
FTP, d_B = 2 m     0.1079 / 0.1210        0.0379 / 0.0393
HTTP, d_B = 1 m    0.0012 / 0.0034        0.0003 / 0.0002
HTTP, d_B = 2 m    0.0425 / 0.0614        0.0265 / 0.0071

Table 8. Bluetooth MAC delay (sec) for experiment 4.
BT traffic         WLAN FTP (BIAS / RR)   WLAN HTTP (BIAS / RR)
FTP, d_B = 1 m     0.1805 / 0.1749        0.1912 / 0.1739
FTP, d_B = 2 m     0.3753 / 0.4574        0.2444 / 0.2378
HTTP, d_B = 1 m    0.0840 / 0.0861        0.0836 / 0.0835
HTTP, d_B = 2 m    0.0945 / 0.1121        0.0963 / 0.0952

Table 9. WLAN probability of packet loss for experiment 4.
BT traffic    WLAN FTP (BIAS / RR)   WLAN HTTP (BIAS / RR)
FTP           0.1534 / 0.303         0.2510 / 0.3481
HTTP          0.0192 / 0.0961        0.0721 / 0.1534

Table 10. WLAN MAC delay (sec) for experiment 4.
BT traffic    WLAN FTP (BIAS / RR)   WLAN HTTP (BIAS / RR)
FTP           0.0017 / 0.0022        0.0010 / 0.0011
HTTP          0.0011 / 0.0018        0.0009 / 0.0012

Observe a significant reduction in packet loss with BIAS for both
WLAN applications, in which the packet loss drops from 30%
and 34% to 15% and 25% for the FTP and HTTP application,
respectively. The access delay shown in table 10 is consistent
with the packet loss results and shows slight improvements
with BIAS. In summary, the use of BIAS in a multi-Bluetooth
and WLAN environment leads to performance improvements for WLAN, while it has little benefit on the Bluetooth performance.
Concluding remarks
In this paper we propose a scheduling technique, BIAS, aimed
at eliminating interference on WLAN and alleviating the impact
of interference on the Bluetooth performance. This work
addresses the need to adjust to changes in the environment,
support asymmetric traffic in the upstream and downstream,
in addition to the use of different scheduling priorities.
Table 11. BIAS pseudocode.

Every N slots:
    estimate_channel();
    compute_credits();

Every even transmission slot TS_f:                       // master transmission slot
    if TS_f + l_dn is clear {                            // master can receive in the next slot
        A_f^data = {high priority slaves s.t. (f "good") and (qsize > 0) and (c_dn > 0)}
        A_f^poll = {high priority slaves s.t. (f "good") and (c_up > 0)}
        B_f^data = {low priority slaves s.t. (f "good") and (qsize > 0)}
        B_f^poll = {low priority slaves s.t. (f "good") and (c_up · c_dn > 0)}

        // Service high priority slaves first
        if (A_f^data ≠ ∅) {                              // transmit data packets
            i = argmax over A_f^data of (w_i^up · w_i^dn)    // device with the largest weight
            transmit data packet of size l_dn to slave i
            c_i^{dn,up} = c_i^{dn,up} − l_i^{dn,up};     // decrement credit counters
            w_i^{dn,up} = (1 − u_i) · c_i^{dn,up};       // update weights
        }
        else if (A_f^poll ≠ ∅) {                         // transmit polls
            i = argmax over A_f^poll of (w_i^up)         // device with the largest weight
            transmit poll to slave i
            c_i^up = c_i^up − l_i^up;                    // decrement credit counter
            w_i^up = (1 − u_i) · c_i^up;                 // update weight
        }
        // Then service low priority slaves
        else if (B_f^data ≠ ∅) {
            i = argmax over B_f^data of (w_i^up · w_i^dn)    // device with the largest weight
            transmit data packet of size l_dn to slave i
            if (c_i^dn > 0) c_i^dn = c_i^dn − l_i^dn;    // decrement credit counter
            else c_i^up = c_i^up − l_i^dn;               // decrement credit counter
            w_i^{dn,up} = (1 − u_i) · c_i^{dn,up};       // update weights
        }
        else if (B_f^poll ≠ ∅) {                         // transmit polls
            i = argmax over B_f^poll of (w_i^up)         // device with the largest weight
            transmit poll to slave i
            if (c_i^up > 0) c_i^up = c_i^up − l_i^up;    // decrement credit counter
            else c_i^dn = c_i^dn − l_i^up;               // decrement credit counter
            w_i^{dn,up} = (1 − u_i) · c_i^{dn,up};       // update weights
        }
    }
The performance results obtained are summarized as follows. First, BIAS eliminates packet loss even in the worst interference case, when more than 3/4 of the spectrum is occupied by other devices. Delay is slightly increased over the reference scenario (when no interference is present); this increase varies between 1 and 5 ms on average. Furthermore, BIAS is able to rapidly adjust to changes in the channel. The channel estimation transient time can be as low as 1.5 ms and 250 µs for DH5 and DH1 packets, respectively. In addition
, BIAS supports QOS and maintains a low access delay
for delay-sensitive traffic such as video applications. Finally,
we observe that the use of BIAS is not as effective at mitigating
interference caused by other Bluetooth piconets. In this case,
we note no improvements in access delay and packet loss results
, which are comparable to results obtained with Round
Robin (RR).
An immediate next step for our work consists of developing
a channel estimation procedure that is able to differentiate
between different types of interference, namely, WLAN and
Bluetooth interference. Our preliminary results indicate that
this may be helpful in a multi-Bluetooth and WLAN environment.
Acknowledgements
The author would like to thank O. Rebala and A. Tonnerre for
their help in developing the simulation models and compiling
the results.
References
[1] Bluetooth Special Interest Group, Specifications of the Bluetooth System
, Vol. 1, v. 1.0B "Core" and Vol. 2, v. 1.0B "Profiles" (December
1999).
[2] A. El-Hoiydi, Interference between Bluetooth networks - upper bound on the packet error rate, IEEE Communications Letters 5 (June 2001) 245-247.
[3] D. Fumolari, Link performance of an embedded Bluetooth personal area network, in: Proceedings of IEEE ICC'01, Vol. 8, Helsinki, Finland (June 2001) pp. 2573-2577.
[4] N. Golmie, N. Chevrollier and I. Elbakkouri, Interference aware Bluetooth packet scheduling, in: Proceedings of GLOBECOM'01, Vol. 5, San Antonio, TX (November 2001) pp. 2857-2863.
[5] N. Golmie and F. Mouveaux, Interference in the 2.4 GHz ISM band: Impact on the Bluetooth access control performance, in: Proceedings of IEEE ICC'01, Vol. 8, Helsinki, Finland (June 2001) pp. 2540-2545.
[6] N. Golmie, R.E. Van Dyck and A. Soltanian, Interference of Bluetooth and IEEE 802.11: Simulation modeling and performance evaluation, in: Proceedings of the Fourth ACM International Workshop on Modeling, Analysis, and Simulation of Wireless and Mobile Systems, MSWIM'01, Rome, Italy (July 2001) pp. 11-18. Extended version appeared in ACM Wireless Networks 9(3) (2003) 201-211.
[7] I. Howitt, V. Mitter and J. Gutierrez, Empirical study for IEEE 802.11 and Bluetooth interoperability, in: Proceedings of IEEE Vehicular Technology Conference (VTC), Vol. 2 (Spring 2001) pp. 1103-1113.
[8] IEEE Std. 802-11, IEEE Standard for Wireless LAN Medium Access
Control (MAC) and Physical Layer (PHY) Specification (June 1997).
[9] J. Lansford, R. Nevo and E. Zehavi, MEHTA: A method for coexistence
between co-located 802.11b and Bluetooth systems, IEEE
P802.11 Working Group Contribution, IEEE P802.15-00/360r0 (November
2000).
[10] S. Shellhammer, Packet error rate of an IEEE 802.11 WLAN in the
presence of Bluetooth, IEEE P802.15 Working Group Contribution,
IEEE P802.15-00/133r0, Seattle, WA (May 2000).
[11] B. Treister, A. Batra, K.C. Chen and O. Eliezer, Adaptive frequency hopping: A non-collaborative coexistence mechanism, IEEE P802.11 Working Group Contribution, IEEE P802.15-01/252r0, Orlando, FL (May 2001).
Nada Golmie received the M.S.E. degree in computer
engineering from Syracuse University, Syracuse
, NY, and the Ph.D. degree in computer science
from the University of Maryland, College Park, MD.
Since 1993, she has been a member of the Advanced
Network Technologies Division of the National Institute
of Standards and Technology (NIST). Her research
in traffic management and flow control led to
several papers presented at professional conferences,
journals and numerous contributions to international
standard organizations and industry led consortia. Her current work is focused
on the performance evaluation of protocols for Wireless Personal Area
Networks. Her research interests include modeling and performance analysis
of network protocols, media access control, and quality of service for IP
and wireless network technologies. She is the vice-chair of the IEEE 802.15
Coexistence Task Group.
E-mail: [email protected] | WLAN;BIAS;QoS;inteference;dynamic scheduling;Bluetooth;scheduling priorities;interference;coexistence;MAC scheduling;WPANs;WPAN |
46 | Breadth-First Search Crawling Yields High-Quality Pages | This paper examines the average page quality over time of pages downloaded during a web crawl of 328 million unique pages. We use the connectivity-based metric PageRank to measure the quality of a page. We show that traversing the web graph in breadth-first search order is a good crawling strategy, as it tends to discover high-quality pages early on in the crawl. | INTRODUCTION
According to a study released in October 2000, the directly
accessible "surface web" consists of about 2.5 billion
pages, while the "deep web" (dynamically generated web
pages) consists of about 550 billion pages, 95% of which are
publicly accessible [9].
By comparison, the Google index released in June 2000
contained 560 million full-text-indexed pages [5]. In other
words, Google -- which, according to a recent measurement
[6], has the greatest coverage of all search engines -covers
only about 0.1% of the publicly accessible web, and
the other major search engines do even worse.
Increasing the coverage of existing search engines by three
orders of magnitude would pose a number of technical challenges
, both with respect to their ability to discover, download
, and index web pages, as well as their ability to serve
queries against an index of that size. (For query engines
based on inverted lists, the cost of serving a query is linear in the size of the index.) Therefore, search engines should
attempt to download the best pages and include (only) them
in their index.
Cho, Garcia-Molina, and Page [4] suggested using connectivity
-based document quality metrics to direct a crawler towards
high-quality pages. They performed a series of crawls
over 179,000 pages in the stanford.edu domain and used
different ordering metrics -- breadth-first, backlink count,
PageRank [2], and random -- to direct the different crawls.
Under the breadth-first ordering, pages are crawled in the order
they are discovered. Under the backlink ordering, the
pages with the highest number of known links to them are
crawled first. Under the PageRank ordering, pages with the
highest PageRank (a page quality metric described below)
are crawled first. Under the random ordering, the crawler
selects the next page to download at random from the set
of uncrawled pages. (For repeatability, these crawls were
"virtual"; that is, they were performed over a cached copy
of these 179,000 pages.) Cho et al. evaluated the effectiveness
of each ordering metric by examining how fast it led
the crawler to all the "hot" pages. In this context, a "hot"
page is a page with either a high number of links pointing
to it, or a page with a high PageRank. They found
that using the PageRank metric to direct a crawler works
extremely well. However, they also discovered that performing
the crawl in breadth-first order works almost as well, in
particular if "hot" pages are defined to be pages with high
PageRank.
This paper extends the results of Cho et al. regarding the
effectiveness of crawling in breadth-first search order, using
a much larger and more diverse data set. While Cho's work
was based on a crawl of 179,000 pages from the stanford.edu
domain, we performed a crawl of 328 million pages over the
entire web, covering more than 7 million distinct hosts. We
use connectivity-based page quality metrics, namely Brin
and Page's PageRank and variations of it, to measure the
quality of downloaded pages over the life of the crawl.
We find that not only does breadth-first search download
the hot pages first, but also that the average quality of the
pages decreased over the duration of the crawl. We also
suggest that our crawler's modifications to strict breadth-first
search -- made to increase the overall download rate
and to avoid overloading any given web server -- enhance
its likeliness of retrieving important pages first.
The remainder of this paper is structured as follows: Section
2 reviews the PageRank metric we used to evaluate
the effectiveness of crawling in breadth-first search order.
Section 3 describes the tools we used to conduct our experiments
. Section 4 describes the experiments we performed,
and the results we obtained. Finally, section 5 offers concluding
remarks.
PAGERANK
There are many conceivable metrics for judging the quality
of a web page: by analyzing its content, by measuring
its popularity (that is, how often it is viewed), or by examining
its connectivity (that is, by determining which other
pages link to this page, and vice versa). Metrics based on
connectivity have the advantages that they do not require
information that is not easily accessible (such as page popularity
data), and that they are easy to compute, so they
scale well to even very large page collections.
They also
require retrieving only the links on each page, not the full
page contents. Storing the full page contents requires several
kilobytes per page, one to two orders of magnitude more
than just storing the links.
PageRank is the connectivity-based page quality measure
suggested by Brin and Page [2]. It is a static measure; it is
designed to rank pages in the absence of any queries. That
is, PageRank computes the "global worth" of each page.
Intuitively, the PageRank measure of a page is similar to its
in-degree, which is a possible measure of the importance of
a page. The PageRank of a page is high if many pages with
a high PageRank contain links to it, and a page containing
few outgoing links contributes more weight to the pages it
links to than a page containing many outgoing links. The
PageRank of a page is expressed mathematically as follows.
Suppose there are T total pages on the web. We choose
a parameter d (explained below) such that 0 < d < 1; a
typical value of d might lie in the range 0.1 < d < 0.15.
Let pages p_1, p_2, . . . , p_k link to page p. Let R(p_i) be the PageRank of p_i and C(p_i) be the number of links out of p_i. Then the PageRank R(p) of page p is defined to satisfy:

    R(p) = d/T + (1 − d) · Σ_{i=1}^{k} R(p_i)/C(p_i).
This equation defines R(p) uniquely, modulo a constant scaling
factor. If we scale R(p) so that the PageRanks of all
pages sum to 1, R(p) can be thought of as a probability
distribution over pages.
The PageRank distribution has a simple interpretation in
terms of a random walk. Imagine a web surfer who wanders
the web. If the surfer visits page p, the random walk is in
state p. At each step, the web surfer either jumps to a page
on the web chosen uniformly at random, or the web surfer
follows a link chosen uniformly at random from those on
the current page. The former occurs with probability d, the
latter with probability 1 − d. The equilibrium probability
that such a surfer is at page p is simply R(p). An alternative
way to say this is that the average fraction of the steps that
a walk spends at page p is R(p) over sufficiently long walks.
This means that pages with high PageRank are more likely
to be visited than pages with low PageRank.
In our experiments, we set d = 1/7 ≈ 0.14. We also modified
PageRank slightly so that pages with no outgoing links
contribute their weight equally to all pages. That is, the
random surfer is equally likely to jump to any page from
a page with no outgoing links. We ran experiments using
both the original PageRank algorithm, which does not distinguish
between links to pages on the same versus different
hosts, and a variant of PageRank which only considers links
to different hosts.
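A compact power-iteration sketch of this computation is shown below. It is our own illustration, not the code used in the paper; it spreads the weight of pages without outgoing links uniformly over all pages, as described above, and returns scores normalized so that the average page has PageRank 1 (the convention used in Section 4).

```python
def pagerank(out_links, d=1/7, iters=100):
    """out_links: dict mapping every page to the list of pages it links to
    (all link targets are assumed to appear as keys).  Returns PageRank
    scores that sum to the number of pages."""
    pages = list(out_links)
    T = len(pages)
    rank = {p: 1.0 / T for p in pages}
    for _ in range(iters):
        dangling = sum(rank[p] for p in pages if not out_links[p])
        nxt = dict.fromkeys(pages, d / T + (1.0 - d) * dangling / T)
        for p, targets in out_links.items():
            if targets:
                share = (1.0 - d) * rank[p] / len(targets)
                for q in targets:
                    nxt[q] += share
        rank = nxt
    return {p: r * T for p, r in rank.items()}

# Tiny example: "a" links to "b" and "c"; "b" links back to "a"; "c" has no out-links.
print(pagerank({"a": ["b", "c"], "b": ["a"], "c": []}))
```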
TOOLS
We used two tools in conducting this research: Mercator
and the Connectivity Server 2, both developed at our lab.
We used Mercator to crawl the web, and the Connectivity
Server 2 to provide fast access to the link information downloaded
from the crawl.
Mercator is an extensible, multithreaded, high-performance
web crawler [7, 10]. It is written in Java and is highly
configurable. Its default download strategy is to perform
a breadth-first search of the web, with the following three
modifications:
1. It downloads multiple pages (typically 500) in parallel.
This modification allows us to download about 10 million
pages a day; without it, we would download well
under 100,000 pages per day.
2. Only a single HTTP connection is opened to any given
web server at any given time.
This modification is
necessary due to the prevalence of relative URLs on the
web (about 80% of the links on an average web page
refer to the same host), which leads to a high degree
of host locality in the crawler's download queue. If
we were to download many pages from the same host
in parallel, we would overload or even crash that web
server.
3. If it took t seconds to download a document from a
given web server, then Mercator will wait for 10t seconds
before contacting that web server again. This
modification is not strictly necessary, but it further
eases the load our crawler places on individual servers
on the web. We found that this policy reduces the rate
of complaints we receive while crawling.
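Taken together, modifications 2 and 3 amount to a per-host gating of the breadth-first frontier. The sketch below is our own simplified illustration of such a gate, not Mercator's actual data structure:

```python
import heapq, time
from collections import deque
from urllib.parse import urlparse

class PoliteFrontier:
    """Breadth-first URL frontier with at most one outstanding request per
    host, and a 10*t back-off after a download that took t seconds."""
    def __init__(self):
        self.queues = {}   # host -> FIFO of URLs discovered for that host
        self.ready = []    # heap of (earliest next contact time, host)

    def add(self, url):
        host = urlparse(url).netloc
        if host not in self.queues:
            self.queues[host] = deque()
            heapq.heappush(self.ready, (0.0, host))
        self.queues[host].append(url)

    def next_url(self):
        """Return (host, url) for a host that may be contacted now, else None."""
        now = time.time()
        while self.ready and self.ready[0][0] <= now:
            _, host = heapq.heappop(self.ready)
            if self.queues[host]:
                return host, self.queues[host].popleft()
            del self.queues[host]          # nothing queued; re-armed by a later add()
        return None

    def finished(self, host, download_seconds):
        """Call when the download completes; the host becomes eligible after 10*t."""
        heapq.heappush(self.ready, (time.time() + 10 * download_seconds, host))
```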
For the experiments described below, we configured Mercator
to extract all the links from each downloaded page
and save them to disk; for disk space reasons, we did not
retain the pages themselves. We conducted a crawl that attempted
to download 532 million pages over the course of 58
days (which we refer to as days 1 to 58 throughout the paper
). Of all those download attempts, 328 million returned
valid, unique HTML pages; the others resulted in TCP- and
DNS-errors, non-200 HTTP return codes, non-HTML documents
, or duplicates. Mercator's download rate decreased
over the course of the crawl, due to increasing access times
to one of its disk-based data structures that keeps track of
which URLs have already been seen. The median download
day was 22; the mean download day was 24.5.
The extracted links data was then loaded into the Connectivity
Server 2 (CS2) [11], a database for URLs and links.
A build of CS2 takes a web crawl as input and creates a
database representation of the web graph induced by the
pages in the crawl. A CS2 database consists of all URLs that
were crawled, extended with all URLs referenced at least
five times by the crawled pages. (Incorporating uncrawled
URLs with multiple links pointing to them ensured that we
did not ignore any popular URLs. Setting the threshold at
five incoming links reduced the set of uncrawled URLs by
over 90%, which enabled us to fit the database within the 16
GB of RAM available to us.) The CS2 database also contains
all links among those URLs and host information for
each URL. It maps each URL to all of its outgoing and its
incoming links. It is possible to get all the incoming links
for a given URL, or just the links from different hosts.
CS2 stores links in both directions in, on average, 2.4
bytes per link (as compared to 8 bytes per link in the original
connectivity server (CS1) described in [1]).
Figure 1: Average PageRank score by day of crawl (x-axis: day of crawl; y-axis: average PageRank).
Like CS1, CS2 is designed to give high-performance access when run
on a machine with enough RAM to store the database in
memory. On the 667 MHz Compaq AlphaServer ES40 with
16 GB of RAM used in our experiments, it takes 70-80 ms
to convert a URL into an internal id or vice versa, and 0.1
ms/link to retrieve each incoming or outgoing link as an internal
id. The database for our crawl of 328 million pages
contained 351 million URLs and 6.1 billion links. Therefore,
one iteration of PageRank ran in about 15 minutes.
AVERAGE PAGE QUALITY OVER A LONG CRAWL
In this section, we report on our experiments. We implemented
PageRank and its variants over the CS2 interface,
and ran each algorithm for 100 iterations on the 6.1 billion
link database. (In all our experiments, the PageRank computation
converged within less than 100 iterations.)
Although the PageRank scores are conventionally normalized
to sum to 1 (making it easier to think of them as a
probability distribution), we normalized them to sum to the
number of nodes in the graph (351 million). This way, the
average page has a PageRank of 1, independent of the number
of pages.
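A minimal sketch of this style of computation follows: standard power-iteration PageRank with the scores rescaled to sum to the number of nodes rather than to 1. The sketch is illustrative rather than the code we actually ran, and the damping factor of 0.85 is assumed for illustration.

```python
def pagerank(outlinks, iterations=100, damping=0.85):
    """outlinks[i] is the list of node ids that page i links to.
    Returns scores normalized to sum to the number of nodes, so the
    average page has a score of 1."""
    n = len(outlinks)
    score = [1.0] * n                     # start every page at the average score
    for _ in range(iterations):
        new = [(1.0 - damping)] * n       # teleportation mass, per page
        dangling = damping * sum(score[i] for i in range(n) if not outlinks[i]) / n
        for i, targets in enumerate(outlinks):
            if targets:
                share = damping * score[i] / len(targets)
                for j in targets:
                    new[j] += share
        score = [s + dangling for s in new]
    return score

# Tiny 3-page cycle: 0 -> 1, 1 -> 2, 2 -> 0.
print(pagerank([[1], [2], [0]], iterations=50))  # roughly [1.0, 1.0, 1.0]
```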
Figure 1 shows the average PageRank of all pages downloaded
on each day of the crawl. The average score for pages
crawled on the first day is 7.04, more than three times the average
score of 2.07 for pages crawled on the second day. The
average score tapers from there down to 1.08 after the first
week, 0.84 after the second week, and 0.59 after the fourth
week. Clearly, we downloaded more high quality pages, i.e.,
pages with high PageRank, early in the crawl than later
on. We then decided to examine specifically when we had
crawled the highest ranked pages.
We examined the pages with the top N PageRanks, for
increasing values of N from 1 to 328 million (all of the pages
downloaded). Figure 2 graphs the average day on which we
crawled the pages with the highest N scores. Note that the
horizontal axis shows the values of N on a log scale.
All of the top 10 and 91 of the top 100 pages were crawled
on the first day. There are some anomalies in the graph
between N equals 100 and 300, where the average day fluctuates
between 2 and 3 (the second and third days of the
crawl). These anomalies are caused by 24 pages in the top
300 (8%) that were downloaded after the first week. Most of
those pages had a lot of local links (links from pages on the
same host), but not many remote links. In other words, the
Figure 2: Average day on which the top N pages were crawled
pages on the same host "endorse" each other, but few other
hosts endorse them. We address this phenomenon later in
the last experiment, shown in Figure 4. After N equals 400,
the curve steadily increases to day 24.5, the mean download
day of the entire crawl.
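The curve in Figure 2 amounts to a prefix average over the pages sorted by PageRank. The sketch below (with a made-up record layout of score and crawl day per page) shows the computation; it is illustrative only.

```python
def average_day_of_top_n(pages, ns):
    """pages: list of (pagerank, crawl_day) tuples.
    ns: the values of N to report.
    Returns {N: average crawl day of the N highest-ranked pages}."""
    ranked = sorted(pages, key=lambda p: p[0], reverse=True)
    result, running_sum = {}, 0.0
    targets = sorted(ns)
    for count, (_, day) in enumerate(ranked, start=1):
        running_sum += day
        if count in targets:
            result[count] = running_sum / count
    return result

pages = [(7.0, 1), (5.0, 1), (2.0, 2), (1.0, 3), (0.5, 20)]
print(average_day_of_top_n(pages, ns=[1, 3, 5]))  # {1: 1.0, 3: 1.33..., 5: 5.4}
```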
Our next experiment checks that pages with high PageRank
are not ranked high only because they were crawled
early. For example, a page whose outgoing links all point
to pages with links back to it might have an artificially high
PageRank if all of its outgoing links have been crawled, but
not too many other pages. For this experiment we ran the
PageRank algorithm on the graph induced by only the first
28 days of the crawl. This graph contains 217 million URLs
and 3.8 billion links between them. We then compared the
top ranked pages between the two data sets. We found that
of the top 1 million scoring pages, 96% were downloaded
during the first 4 weeks, and 76% of them were ranked in
the top 1 million pages in the 28 day data set. That is, it
was clear that those pages were important even before the
crawl had finished.
Figure 3 generalizes these statistics: for each value of N,
we plot the percentage of overlap between the top N scoring
pages in the 28 day crawl versus the 58 day crawl. Although
the top few pages are different, by the top 20 ranked pages
there is an 80% overlap. The overlap continues in the 60-80%
range through the extent of the entire 28 day data
set. This figure suggests that breadth-first search crawling
is fairly immune to the type of self-endorsement described
above: although the size of the graph induced by the full
crawl is about 60% larger than the graph induced by the 28
day crawl, the longer crawl replaced only about 25% of the
"hot" pages discovered during the first 28 days, irrespective
of the size of the "hot" set.
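The overlap statistic plotted in Figure 3 is simply the size of the intersection of the two top-N sets, divided by N. A minimal sketch, assuming the two crawls' scores are keyed by URL:

```python
def top_n_overlap(scores_a, scores_b, n):
    """scores_a, scores_b: dicts mapping URL -> PageRank for two crawls.
    Returns the percentage of the top-n URLs of crawl A that also
    appear among the top-n URLs of crawl B."""
    top_a = set(sorted(scores_a, key=scores_a.get, reverse=True)[:n])
    top_b = set(sorted(scores_b, key=scores_b.get, reverse=True)[:n])
    return 100.0 * len(top_a & top_b) / n

short = {"u1": 9.0, "u2": 5.0, "u3": 4.0, "u4": 0.5}
full  = {"u1": 8.0, "u2": 6.0, "u5": 3.0, "u3": 2.0, "u4": 0.4}
print(top_n_overlap(full, short, n=3))  # 66.7: u1 and u2 overlap, u5 does not
```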
Some connectivity-based metrics, such as Kleinberg's algorithm
[8], consider only remote links, that is, links between
pages on different hosts. We noticed that some anomalies in
Figure 2 were due to a lot of local links, and decided to experiment
with a variant of the PageRank algorithm that only
propagates weights along remote links. This modification of
PageRank counts only links from different hosts as proper
endorsements of a page; links from the same host are viewed
as improper self-endorsement and therefore not counted.
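Operationally, this variant amounts to deleting every intra-host link before running the usual PageRank computation. The sketch below shows only the filtering step, using Python's standard urlparse to recover the host of each URL; the function name is ours and the sketch is illustrative.

```python
from urllib.parse import urlparse

def remote_only_graph(urls, links):
    """urls: list of URLs (node id = index into this list).
    links: list of (src_id, dst_id) pairs.
    Returns an outlink list containing only links between different hosts."""
    host = [urlparse(u).netloc for u in urls]
    outlinks = [[] for _ in urls]
    for src, dst in links:
        if host[src] != host[dst]:          # drop self-endorsing local links
            outlinks[src].append(dst)
    return outlinks

urls = ["http://a.com/1", "http://a.com/2", "http://b.com/1"]
links = [(0, 1), (0, 2), (1, 0), (2, 0)]
print(remote_only_graph(urls, links))       # [[2], [], [0]]
```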
Figure 4 shows our results: the average PageRank for
pages downloaded on the first day is even higher than when
all links are considered. The average PageRank for the first
day is 12.1, while it's 1.8 on the second day and 1.0 on the
Figure 3: The percent overlap between the top N ranked pages in the first 28 vs all 58 days of the crawl
fourth day. The average PageRank then declines gradually
down to 0.6 on the last day. Notice that the average PageRank
on the first day of crawling is higher than in Figure
1, and that the curve falls more sharply. This drop indicates
that our crawling strategy is not biased toward self-endorsing
hosts, as a crawler using the standard version of
PageRank would be. We believe that this lack of bias is due
in part to our crawler's politeness policies, which impose a
rate limit on its accesses to any particular host.
There are some flaws with a metric based only on remote
links. For example, http://www.yahoo.com/ has a very
high PageRank score. However, it only has local outlinks,
so its weight gets evenly distributed over all pages in the
graph, rather than just to the other pages in Yahoo! to
which it points. Transitively, the pages on other hosts to
which Yahoo! links do not benefit from the high score of
http://www.yahoo.com/. In the future work section below,
we outline some ideas for remedying this problem.
CONCLUSIONS
The experiments described in this paper demonstrate that
a crawler that downloads pages in breadth-first search order
discovers the highest quality pages during the early stages
of the crawl. As the crawl progresses, the quality of the
downloaded pages deteriorates. We speculate that breadth-first
search is a good crawling strategy because the most
important pages have many links to them from numerous
hosts, and those links will be found early, regardless of on
which host or page the crawl originates.
Discovering high-quality pages early on in a crawl is desirable
for public web search engines such as AltaVista or
Google, given that none of these search engines is able to
crawl and index more than a fraction of the web.
Our results have practical implications to search engine
companies. Although breadth-first search crawling seems to
be a very natural crawling strategy, not all of the crawlers
we are familiar with employ it. For example, the Internet
Archive crawler described in [3] does not perform a breadth-first
search of the entire web; instead, it picks 64 hosts at a
time and crawls these hosts in parallel. Each host is crawled
exhaustively; links that point to other hosts are saved to seed
subsequent crawls of those hosts. This crawling strategy has
no bias towards high-quality pages; if the hosts to be crawled
are picked in random order, the quality of downloaded pages
will be uniform throughout the crawl.
Figure 4: Average PageRank when only remote links are considered
Similarly, the Scooter web crawler used until recently by
AltaVista downloaded pages in essentially random order.
(At this point, AltaVista is using Mercator.) This approach
made it easier to provide politeness guarantees -- essentially,
it spread the load imposed by the crawler evenly over all web
servers -- but as a result, the quality of the discovered pages
is uniform over the life of the crawl.
We cannot make any statements about other large-scale
web crawlers.
Most search engine companies treat their
crawling strategy as a trade secret, and have not described
it in the literature.
Cho et al. [4] showed that using a connectivity-based ordering
metric for downloads, such as PageRank, will steer
the crawler towards even higher-quality pages than using
breadth-first search. However, computing PageRank values
for several hundred million or more pages is an extremely
expensive computation. It took us over a day to compute
the PageRanks of our graph of 351 million pages, despite
the fact that we had the hardware resources to hold the entire
graph in memory! Using PageRank to steer a crawler
would require multiple such computations over larger and
larger graphs, in order to take newly discovered pages into
account, and is essentially infeasible in real time. On the
other hand, crawling in breadth-first search order provides
a fairly good bias towards high quality pages without the
computational cost. We believe that crawling in breadth-first
search order provides the better tradeoff.
FUTURE WORK
There are two directions in which we would like to extend
this work. One direction is to try a variant of PageRank
which weighs links to pages on remote hosts differently than
links to other pages on the same host. From the experiment
that generated Figure 4 above, we learned that remote links
should count more than local links, but that weights should
be propagated along local links as well (e.g., to distribute the
weight of http://www.yahoo.com/ to the pages that Yahoo!
recommends). We suspect that some search engines already
use different weights for links, but there has been no formal
study of how to divide the weights among the links or even
whether the division should be static (e.g., remote links get
80% of the total weight) or proportional to the number of
total links (e.g., each remote link gets four times the weight
of each local link).
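As a starting point for such an experiment, the sketch below distributes a page's score over its outlinks in proportion to per-link weights, with remote links weighted (arbitrarily, for illustration) four times as heavily as local ones. It is a strawman to make the proposal concrete, not a tuned or recommended scheme.

```python
def weighted_pagerank(outlinks, is_remote, iterations=100, damping=0.85,
                      remote_weight=4.0, local_weight=1.0):
    """outlinks[i]: list of target ids for page i.
    is_remote[i][k]: True if the k-th outlink of page i crosses hosts.
    Score mass flows along links in proportion to their weights."""
    n = len(outlinks)
    score = [1.0] * n
    for _ in range(iterations):
        new = [(1.0 - damping)] * n
        leftover = 0.0                       # mass from pages with no outlinks
        for i, targets in enumerate(outlinks):
            weights = [remote_weight if r else local_weight for r in is_remote[i]]
            total = sum(weights)
            if total == 0:
                leftover += damping * score[i]
                continue
            for j, w in zip(targets, weights):
                new[j] += damping * score[i] * w / total
        score = [s + leftover / n for s in new]
    return score

# Page 0 links locally to 1 and remotely to 2; the remote link gets 4x the mass.
print(weighted_pagerank([[1, 2], [0], [0]],
                        [[False, True], [True], [True]], iterations=50))
```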
The other direction is to try different connectivity-based
metrics. While PageRank is the only connectivity measure
we know aimed at ranking all of the pages on the world wide
web, Kleinberg's algorithm [8] is another well-known connectivity
analysis algorithm targeted towards computing quality
scores for pages. The algorithm computes two scores for
each document: a hub score and an authority score. Pages
with high authority scores are expected to have high-quality
content; the authority scores are similar in intent to PageRanks
. Kleinberg's algorithm is designed to rank the results
of a query to a search engine, and only considers a small set
of pages when it computes authority scores. However, we
believe that we can extend the algorithm to consider the
entire graph of the web.
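For reference, the core of Kleinberg's algorithm is a pair of mutually reinforcing scores: a page's authority score sums the hub scores of the pages linking to it, and its hub score sums the authority scores of the pages it links to, with both vectors renormalized each round. The sketch below, over an explicit link list, is a minimal illustration of that iteration (the function name is ours).

```python
import math

def hits(links, n, iterations=50):
    """links: list of (src, dst) pairs over node ids 0..n-1.
    Returns (hub, authority) score lists, each normalized to unit length."""
    hub = [1.0] * n
    auth = [1.0] * n
    for _ in range(iterations):
        # Authority: a page endorsed by good hubs.
        auth = [0.0] * n
        for src, dst in links:
            auth[dst] += hub[src]
        # Hub: a page that points at good authorities.
        hub = [0.0] * n
        for src, dst in links:
            hub[src] += auth[dst]
        # Renormalize so the scores do not grow without bound.
        for vec in (auth, hub):
            norm = math.sqrt(sum(v * v for v in vec)) or 1.0
            for i in range(n):
                vec[i] /= norm
    return hub, auth

hub, auth = hits([(0, 2), (1, 2), (2, 3)], n=4)
print(max(range(4), key=lambda i: auth[i]))  # node 2 is the strongest authority
```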
REFERENCES
[1] K. Bharat, A. Broder, M. Henzinger, P. Kumar, and
S. Venkatasubramanian. The connectivity server: Fast
access to linkage information on the web. In
Proceedings of the 7th International World Wide Web
Conference, pages 469-477, Brisbane, Australia, April
1998. Elsevier Science.
[2] S. Brin and L. Page. The anatomy of a large-scale
hypertextual web search engine. In Proceedings of the
7th International World Wide Web Conference, pages
107-117, Brisbane, Australia, April 1998. Elsevier
Science.
[3] M. Burner. Crawling towards eternity: Building an
archive of the world wide web. Web Techniques
Magazine, 2(5):37-40, May 1997.
[4] J. Cho, H. Garcia-Molina, and L. Page. Efficient
crawling through URL ordering. In Proceedings of the
7th International World Wide Web Conference, pages
161-172, Brisbane, Australia, April 1998. Elsevier
Science.
[5] Google Inc. Press release: "Google launches world's
largest search engine." June 26, 2000. Available at
http://www.google.com/press/pressrel/pressrelease26.html
[6] M. Henzinger, A. Heydon, M. Mitzenmacher, and
M. Najork. On near-uniform URL sampling. In
Proceedings of the 9th International World Wide Web
Conference, pages 295-308, Amsterdam, Netherlands,
May 2000. Elsevier Science.
[7] A. Heydon and M. Najork. Mercator: A scalable,
extensible web crawler. World Wide Web,
2(4):219-229, Dec. 1999.
[8] J. Kleinberg. Authoritative sources in a hyperlinked
environment. In Proceedings of the 9th ACM-SIAM
Symposium on Discrete Algorithms, pages 668-677,
San Francisco, CA, Jan. 1998.
[9] P. Lyman, H. Varian, J. Dunn, A. Strygin, and
K. Swearingen. How much information? School of
Information Management and Systems, Univ. of
California at Berkeley, 2000. Available at
http://www.sims.berkeley.edu/how-much-info
[10] Mercator Home Page.
http://www.research.digital.com/SRC/mercator
[11] J. L. Wiener, R. Wickremesinghe, M. Burrows,
K. Randall, and R. Stata. Better link compression.
Manuscript in progress. Compaq Systems Research
Center, 2001.
VITAE
Marc Najork is a senior member of
the research staff at Compaq Computer
Corporation's Systems Research
Center.
His current research focuses
on high-performance web crawling and
web characterization.
He was a principal
contributor to Mercator, the web
crawler used by AltaVista. In the past,
he has worked on tools for web surfing
, 3D animation, information visualization
, algorithm animation, and visual
programming languages.
He received
a Ph.D. in Computer Science from
the University of Illinois at Urbana-Champaign
in 1994.
Janet L. Wiener is a member of
the research staff at Compaq Computer
Corporation's Systems Research
Center.
She currently focuses on developing
algorithms to characterize the
web and tools (such as the Connectivity
Server) to support those algorithms
.
Prior to joining Compaq in
1998, she was a research scientist at
Stanford University working on data
warehousing, heterogeneous data integration
, and semi-structured data. She
received a Ph.D. from the University of
Wisconsin-Madison in 1995, and a B.A.
from Williams College in 1989.
47 | Broadcasting Information via Display Names in Instant Messaging | Many instant messenger (IM) clients let a person specify the identifying name that appears in another person's contact list. We have noticed that many people add extra information to this name as a way to broadcast information to their contacts. Twelve IM contact lists comprising 444 individuals were monitored over three weeks to observe how these individuals used and altered their display names. Almost half of them changed their display names at varying frequencies, where the new information fell into seventeen different categories of communication supplied to others. Three themes encompass these categories: Identification ("who am I"?), Information About Self ("this is what is going on with me") and Broadcast Message ("I am directing information to the community"). The design implication is that systems supporting person to person casual interaction, such as IM, should explicitly include facilities that allow people to broadcast these types of information to their community of contacts. | INTRODUCTION
Millions of people use instant messenger (IM) clients daily to
communicate with friends, relatives, co-workers and even online
dating contacts. With this explosion of use, researchers have
taken to studying instant messaging and its impact. Much of the
research regarding IM has been focused on its primary uses:
maintaining awareness of a contact's presence and availability,
how people (usually dyads) converse via text chat, and how they
exploit other features such as file sharing and receipt of
notifications. For example, studies of IM use in the workplace
expose how it supports collaboration, communication and
project activities [3, 10, 13], as well as its negative effects [15]
such as disruption [4]. In more social contexts, researchers
found a positive relationship between the amount of IM use and
verbal, affective and social intimacy [9]. IM also proves
important in the life of teens, where it helps support the
maintenance of their social relationships [8].
Other computer-mediated communication tools, such as MUDs
(Multi-User Domains or Multi-User Dungeons), IRC (Internet
Relay Chat), and broadcast messaging tools also allow
spontaneous real-time (or synchronous) communication with
other users. However, there are significant differences between
them. IM is predominately used between people who are known
to each other outside of cyberspace, e.g., friends and associates.
IM conversations are also private, and tend to be between pairs
of people. They are also person centered and not group centered:
while a contact list may collect one's `buddies', these lists are
not shared across contacts. In contrast, MUDs and IRC are
public channels, where any conversation is heard by all people
currently in the MUD or IRC. Most tend to be used by
`strangers', i.e., those who are unknown to each other in real
space, and usually involve more than two individuals. Indeed,
the norm is for participants to protect their anonymity by
displaying a pseudonym rather than their real names. Any
personal messages that are posted are usually in relation to their
virtual identity. However, a few experimental MUD-like
systems do focus on teams, where they provide its members
with rich awareness information of one another and more power
in their collaboration tools, e.g., Sideshow [2], Notification
Collage [7], or Community Bar [12]. Broadcast messaging tools
[11] sit in the middle, where real-time messages usually
comprising notifications and announcements (not conversations)
are sent to large groups of people who are somehow associated
with one another, e.g., Tickertape [6].
The big `win' of IM is that it provides one's ad hoc set of
contacts with awareness of one's online state, which in turns
serves as an estimate of one's availability for conversation.
While not completely accurate [13], even this minimal
information suffices to create opportunities for lightweight text-based
conversations and to reduce the equivalent of `telephone
tag'. While many research systems go far beyond IM in the rich
awareness information they give to others [e.g., 2, 7, 12, 16],
questions remain about privacy implications of distributing this
information.
IM contacts are identified by the system through e-mail
addresses. While unique, these email addresses may be cryptic
to a human viewer e.g., a person may not be able to infer that
[email protected] is really Gregor McEwan. Consequently,
the designers of most IM clients include a feature that lets a
person create and/or change their display name at any time; this
name is shown to others instead of the email address. For
example, in MSN Messenger (Figure 1) a person can raise the
`Personal Settings...' dialog by selecting the drop-down menu
attached to their name (i.e., `Stephanie' in the figure), and edit
the `Display name' text field which we also call the display
field within it. All contacts immediately see this new name.
Because we are heavy IM users, we noticed that many of our
contacts change their display field to do more than simply
identify or label themselves. Figure 1 illustrates this, where we
see that various people have used this feature to publicly
broadcast what they are doing (e.g., `anitsirK Marking') or an
event in their life (e.g., `Employed'), or a personal state of mind
(e.g., `Chasing Insanity'). Other examples we saw include using
the display field as a way to publicize personal status, specify
location, post comments, ask questions, and even post popular
culture references. These obviously augment IM's preset
availability messages (i.e. away, busy, be right back) in far
richer ways than the system was explicitly designed to support.
We believed that people's appropriation of the IM display name
feature into a public broadcast facility is a phenomenon worth
exploring. Why was this space being appropriated for messages
broadcast to an entire contact list? What were users trying to
communicate to others and how is this information different
than that in a normal IM conversation? How often do these
messages or alternate communications occur? To answer these
and other questions, we conducted a three week study, where we
monitored changes in each person's display field within contact
lists held by various users of MSN Messenger. We tracked how
often contacts changed their display name, and what these
changes were. We also categorized these changes into
communication purposes.
After briefly summarizing some related work, we describe the
methodology used to acquire display name usage data. This is
followed by our results, a discussion of the findings,
implications of the work, and recommendations for future work.
RELATED WORK
There are a variety of articles describing how people identify
themselves on the Internet, usually in MUDs and IRC. Yet most
of these stress how identity is formed through pseudo-anonymity
[1,17], i.e., where a person creates a virtual identity to project a
different persona of who they are while protecting their real
identity. People's choices of names and/or avatars are usually
one part of identity creation. This work is not particularly
applicable to IM, as people on a contact list are typically known
to one another.
Grinter & Palen's [8] study of teen use of IM is far more
relevant, and partially reflects our own interests. While their
work broadly considers IM as an emerging feature of teen life,
they do mention that teens found the preset availability
messages to be too impersonal. To combat feelings of exclusion
or to avoid being rude, teens would personalize the display name
area to include a message which explains their unavailability,
changes in their local environment (i.e., `Going quiet because
Mom just arrived'), and for justifying their lack of presence on
the system (i.e., `Out for dinner').
The use of IM names to broadcast messages is an everyday
world phenomenon, and has been anecdotally noticed by non-scientists
. For example, one reporter noted in a newspaper
article that changes to her display name are her main form of IM
communication rather than actual chat conversations [14].
Social scientists talk more generally about computer mediated
communications and how they can be used to build
communities. Etzioni and Etzioni [5] argue that in order to form
and sustain bonds, a community of connected individuals needs
what they call "interactive broadcasting". This is composed of
two major elements:
the ability to broadcast messages to many people within the
community simultaneously, and
the ability for those addressed by the message to provide
feedback, not just to the message originator, but to other
message recipients as well.
In this context, a broadcast message can be considered a request
for interaction from some (or all) members of a group [11]. A
variety of designers have implemented this broadcast capability
into their systems. For example, IRC, Notification Collage [7],
Community Bar [12] and Tickertape [6] are all tools that
implement interactive broadcasting. A message (which may
include multimedia information) can be posted and broadcast to
the group, and it is possible for everyone to view the information
without directly contributing to the conversation. Those who
want to respond can do so, in full view of all users. All these
systems allow for communal feedback, i.e., where everyone sees
the response. Unlike IM, however, these systems include a
strong notion of a common group by providing a public space
for interaction.
In summary, there are discussions of how broadcasting
information contributes to community building, and there are
systems that are based on public dissemination of information
within a group. However, excepting a few discussions of this
Figure 1: MSN Messenger; modified display names are
visible
phenomenon [8,14], there has been no real analysis of how
people have appropriated the display name feature of IM. Given
the importance and widespread of IM, we believe this analysis is
critical if we are to understand how we can improve IM systems.
METHODOLOGY
This study investigates how people use the display name feature
in IM clients to broadcast information other than one's name.
We do this by capturing changes in each person's display field
as they appear in contact lists over time and over everyday use,
by asking people to explain what these changes meant, and by
counting, categorizing and analyzing these changes.
3.1 Research questions
We wanted to identify three main behavioural patterns within
our captured data:
1. At what frequency do users change the information in their display field when using an IM client such as MSN Messenger?
2. What are the main communication categories that represent the information held by these display field changes?
3. What is the frequency distribution of these categories?
A fourth interesting but secondary question was:
4. Are changes to the display name related to the demographics
of age or sex?
3.2 Participants
We had two classes of participants. Our primary participants
were those who made their contact list available to us. Our
secondary participants were those who comprised the people on
the contact lists.
Twelve participants were recruited as primary participants, all
Computer Science graduate students or faculty at the University
of Calgary. They ranged in age from 23 to 50, and were regular
users of MSN Messenger. These participants provided access to
their IM contact lists. They were also willing to annotate the
collected data. While the number of contacts on each person's
list varied somewhat, this variance was irrelevant to our study.
Our secondary participants were the 444 contacts found on the
contact lists of the 12 primary participants. These contacts
covered a broad range of demographics and social relationships,
i.e., fellow students, workmates, friends, family members and
other relatives. While the display names used by these 444
people were collected as data, they were not contacted directly.
3.3 Materials and Data Capture
Each participant (whether primary or secondary) used their own
pre-existing and unaltered MSN Messenger client on their own
computer (running Windows) for everyday purposes.
We wrote a logging program to collect all contact list data from
each primary participant. It monitored every person's display
field as it appeared in the contact list. The software worked by
tapping into the programming API of MSN Messenger
(regardless of its version) to monitor activities within it.
This logging program was given only to the 12 primary
participants. No special software was needed by the 444
secondary participants, as their data was captured via the
logging software on the primary participant's computer.
The 12 primary participants installed our software on whatever
computers they wished. When installed, it worked in tandem
with MSN Messenger to collect data on everyday IM usage in
the field.
The program monitored whether the participant was logged in to
MSN Messenger. If logged in, it recorded the initial set of
display names and any display name changes of the secondary
participants on the contact list. The initial set of display names
were needed to notice if a change occurred since the primary
participant's last login.
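We cannot reproduce the Messenger API hooks here, but the bookkeeping the logger performed is straightforward: compare each contact's current display field with the last value seen and append a timestamped record whenever they differ. The following is an API-agnostic sketch with hypothetical names and file layout, not the code of our actual logging program.

```python
import csv
import time

class DisplayNameLogger:
    """Records display-field changes for the contacts on one primary
    participant's list. `poll` receives the current contact-list snapshot,
    e.g. as obtained from the messenger client's API."""

    def __init__(self, log_path):
        self.last_seen = {}            # contact address -> last display field
        self.log_path = log_path

    def poll(self, snapshot):
        """snapshot: dict mapping contact address -> current display field."""
        with open(self.log_path, "a", newline="") as f:
            writer = csv.writer(f)
            for address, name in snapshot.items():
                if self.last_seen.get(address) != name:
                    writer.writerow([time.strftime("%Y-%m-%d %H:%M:%S"),
                                     address, name])
                    self.last_seen[address] = name

logger = DisplayNameLogger("display_changes.csv")
logger.poll({"contact1@example.com": "Stephanie"})
logger.poll({"contact1@example.com": "Stephanie - marking"})  # logged as a change
```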
As part of our analysis, we used the standard features of
Microsoft Excel 2003 to sort and consolidate the data files.
Relevant data was then transferred to Minitab v.14 to tally
distributions, calculate any statistics and create visual
representations of the data. Further analysis of the categories of
communication used in the display field was conducted using
paper cut-outs and post-it notes to create an affinity diagram;
this is detailed later.
3.4 Method
Once primary participants agreed to participate in the study, we
gave them instructions on how to install the logging program on
their computer. We did not have to be present for this, so people
could install it on whichever computers they regularly used, be it
at work or at home. The program then ran automatically; the
only indication of its operation was a small red notebook icon
appearing in the participant's system tray. This icon allowed a
participant to abort the collection process if they wished, but
none chose this option.
Data was collected for approximately three weeks, but did
require the person to be logged onto MSN Messenger. If a
primary participant was not logged on, no data about their
contacts was recorded. This meant that some display field
changes of secondary participants could have been missed.
3.5 Analysis
At the end of three weeks, the primary participants were
instructed to open the data file with Excel, and indicate the sex
and approximate age of each listed contact member in a pre-designated
column. For each display name change, they were
also asked to categorize the type of information the contact was
trying to broadcast to others. We did not predefine any
categories. Participants created their own category labels and
categorized names into them however they wished. We chose
this approach because we felt that participants would have a far
better understanding of the true meaning of a person's display
field changes than someone unfamiliar with the contact; we also
felt that as recipients of this information, their interpretation was
important. We also believed that they would generate a greater
and therefore richer breadth of categories.
Once the categorizations were completed, the data files were
transferred to the primary investigator. The investigator
consolidated all of the data files into one master file, and
removed any duplicate entries. These duplicate entries occurred
for two reasons.
More than one person had a particular contact on their list.
Each time a participant logged in, their entire contact list
was recorded in the data file. If a contact had not changed
their name while the participant was offline, a duplicate
entry was created.
When duplicate entries occurred, all but the earliest occurrence
of the display name change was removed.
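The consolidation step amounts to keeping, for each distinct (contact, display field) pair, only the earliest logged occurrence. A minimal sketch, assuming each logged row carries a timestamp, a contact identifier and the display field text (the field layout is illustrative):

```python
def deduplicate(records):
    """records: list of (timestamp, contact, display_field) tuples collected
    from all primary participants' log files. Keeps only the earliest
    occurrence of each distinct (contact, display_field) pair."""
    earliest = {}
    for ts, contact, field in records:
        key = (contact, field)
        if key not in earliest or ts < earliest[key][0]:
            earliest[key] = (ts, contact, field)
    return sorted(earliest.values())

records = [
    ("2005-03-02 09:10", "alice", "Alice - marking"),
    ("2005-03-01 08:00", "alice", "Alice - marking"),   # duplicate, earlier
    ("2005-03-03 12:00", "alice", "Alice - done!"),
]
print(deduplicate(records))  # two rows remain; the 03-01 copy of 'marking' is kept
```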
A category list was created for each primary participant based
on his or her individual categorizations of display name changes.
Because these category names could differ between participants,
we needed to re-categorize these names into a master category
list. To do this, all categories were printed on separate slips of
paper for easy sorting. We then created an affinity diagram to
resort these categories, where entries from all the lists were
sorted into groups based on similarity. These groups then
formed a master category (see Figure 2). A master category title
was then chosen that best represented the theme for the
grouping. After this master list was established, the entries in the
consolidated file were then re-categorized based on these new
divisions; this would allow us to create a distribution profile.
We should mention that many entries into the display field
contained more than one textual element, each of which could
be categorized differently. When this happened, we treated the
display field as holding multiple entries. An example of this is
shown here, where
the contact's display
field contains two elements; `Johnboy' could be categorized as a
Name Variation, while `yonge&eglington' (a street junction in
Toronto) is categorized as an indicator of Location. In this case,
this display field entry would be split into two text fragments,
where each fragment would be counted in the category that best
fit. As we will see, these types of dual entry usually occurred
because people tend to keep their names (or an identifying
variation thereof) visible to others in order to identify
themselves. Occasionally a display field would contain two
elements where neither were identifiers. For example, the text
shown here is categorized
as two elements: `packing'
is an Activity, and `sad to be leaving' is a Mood. Only rarely did
display field entries contain more than two elements.
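Once each display field had been split into (fragment, category) pairs, the per-category counts reported in the Results section reduce to a simple tally. The sketch below is illustrative only; the splitting and labelling were done by hand by the participants and investigator, not by this code.

```python
from collections import Counter

def tally_elements(coded_entries):
    """coded_entries: list of lists, one per display-field entry; each inner
    list holds (text_fragment, category) pairs assigned by a coder.
    Returns a Counter of elements per category."""
    counts = Counter()
    for entry in coded_entries:
        for _fragment, category in entry:
            counts[category] += 1
    return counts

coded = [
    [("Johnboy", "Variations"), ("yonge&eglington", "Location")],
    [("packing", "Activities"), ("sad to be leaving", "Mood")],
    [("Rebecca", "Name")],
]
print(tally_elements(coded))
```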
RESULTS
Our first research question was:
At what frequency do users change the information in their
display field when using an IM client such as MSN
Messenger?
Before answering this question, recall that the recording of
display field changes of a secondary participant on a contact list
only happened when the primary participant was logged on to
MSN Messenger. If the primary participant was logged out, no
display field changes to their contacts were recorded. While a
single change would be noted by comparing the last recorded
version of the contact's display field to the one recorded when
the primary participant logs on, multiple changes during the
logout period would be lost. This means we cannot calculate the
exact display name change distribution across all contacts. Still,
our numbers should be a good estimate of what happens. At the
very least they represent a lower bound that somewhat
underestimates how often display fields changes occur. The data
certainly suffices to indicate the range of activities and
individual differences across 444 people.
Figure 3 illustrates the distribution of contacts according to how
often they changed the contents of the display field. Our results
show that 58% of our 444 contacts (258 people) never changed
the contents of the display field during the three week period.
For the remaining 42% of contacts (186 people), we counted a
total of 1968 display name changes, or an average of 11 display
name changes per person over the three week period, or up to 4
times a week.
Never
58.1%
Rarely
12.2%
Weekly
5.6%
Several times a day
4.5%
Daily
3.4%
Several times
a week
16.2%
Figure 3: Distribution of contacts according to how often
they change the display field contents
Figure 2: Example affinity diagram used to group participant
categorizations into master categories.
However, this average is misleading, for we also found that
people change their display names at different frequencies. We
created six rate change categories. Based on a contact's data, we
placed each contact into the category that best estimated that
contact's change rate. Figure 3 displays this distribution of
contacts among the six rate change categories. We see that the
42% of contacts who change their display name do so at
different rates. About 8% (4.5% + 3.4%) of contacts change
their names from once to several times a day. About 22% of
them change their names less often, from once to several times a
week (16.2% + 5.6%). The final 12% change it rarely, i.e., once
or twice over the three week period.
The person who had the highest display field change rate is
worth added discussion, as it suggests what happens with
contacts who used this feature heavily. This person changed her
display field early in the morning, and notified contacts when
she arrived at school. Around 4 pm the changes started again,
continuing until approximately 11 pm when she went to bed.
Her changes would incorporate details on what was occupying
her time. Changes would state particulars: when she was
studying, babysitting, or watching TV, and her emotional
reactions to these events. If she found something entertaining or
interesting on TV, she would post quotes. If she was bored, she
would put out a request for someone/anyone to call. In essence,
this person used her display field as a web log, where she
recorded and disseminated information to her community. Even
though we had no further knowledge of this person, a sense of
who she was and what her life was garnered through all the
changes that she made to her display name.
4.2 Communication categories
Our second research question was:
What are the main communication categories that represent
the information held by these display field changes?
After analyzing the categories created by our primary
participants through the affinity diagramming process, we
identified seventeen master categories. These are listed below in
alphabetic order. A description of each category is given along
with illustrative examples taken from our data. Many examples
contain more than one textual element, usually an identifier, as
we present them as they appeared in the display field. To protect
confidentiality, name information has been changed.
Activities include things or activities that a person has done in
the past, is currently involved in, or is about to participate in the
future. It also includes countdowns to an upcoming event.
Examples include:
Amy - House hunting!
Joe was drunk on a Tuesday...shameful.
Braced: 60% done my portfolio!
Adverts include advertisements or promotions for items or
events, and things that people have for sale.
Easton Synergy Grip 100 Flex Iginla Blade Left (Brand
spanking new): $225
headachey -- Tim Stuart Tribute and Fundraiser November
6th @ 8PM -- ask for details
Comments are personal comments, expressions of an
individual's opinion and general statements on how they view
things in the world around them.
Jan[et] - Airlines are Evil
Bee - undocumented code should be illegal
Nancy: you don't need English to live in Vancouver
Default contains only the default unaltered entries to the display
field. After installation, the IM client displays a person's e-mail
address in the field. These may or may not actually contain a
person's name as part of the email address.
[email protected]
[email protected]
Directions contain entries where a reader is being directed to a
web site or link. Examples are:
Bee-http://java.sun.com/features/1999/05/duke_gallery.html
jessie {http://littlemisskool.blogspot.com}
CHECK THIS====> http://www.blitzkreigentt.com/....
Constructed <====
Fun contains entries that contain puns, inside jokes, humorous
statements, and items placed for the amusement of others.
Melanie. me: "come see, its a lunar eclipse"; kate: "where?"
what do you call a fish with no eyes: f sh
Huffy - Home is where you hang your @
Joe Like
a Vermin, trapped for the very first time
Handles contains those display name entries that hold a
person's handle. A handle is like a well known nickname: it is a
consistent title or name that people give themselves to represent
their identity on the internet. As we will see later, IM handles
are not used for the pseudo-anonymity purposes as found in IRC
public forums.
hunnybear
Iceman
spidermax
Location contains information about a person's current location
or future destination. It can also contain travel information.
Many times this location information is permanently attached to
the display name when localized at a particular computer, as in
"home" or "work". This label can indicate to others the type of
communications that are appropriate.
Mat Singh...going home in 10 days!
In the dominican republic
Dan James [Office]
mike -> lab meeting
Messages contain information of significance directed at an
individual on a person's contact list or to the group as a whole.
darren~thanks nate for the halo hookup
SirMe - Happy Birthday, Angie!
Melanie. Nick, ill be on the 3 30 or whatever bus at the
college. <<school>>
Mood contains entries that give indications of a person's mood,
feelings, health or state of being.
i give up
Adam feels rejected
britney - disoriented haze
Joe - as if shot in the head, yet still charging blindly forward.
Bee - double espresso
whee!!!
Maggs - Not Feeling Well.
Name contains entries of a person's given name. This category
contains no nicknames, handles or variations on the name.
Rebecca
Fred Jones
Notices contains entries that give notice of a particular event,
share news or display announcements.
DBosch [ We're home owners! ]
Tracey... down 24.2 lbs
Jennifer - party is cancelled
NaKuL - new msn login
Gretchen -- Holy Cole's coming to vancouver!!
Presence contains items which provide more detailed
information about a person's online presence or availability
beyond the standard status indicators.
Bee - really am busy, only msg me in emergency
Melanie. >> off for family time<<
mike - reading at my desk/disregard (Away) status
Flickerin: be Back at 630ish
Questions contain rhetorical questions and questions that are
posed to stimulate response. This category also contains
questions that are requests for assistance, similar to those that
appear in company broadcast messaging systems when a person
is searching for an expert in a given area [6].
Luke -- Anyone took CS322? I need some help with cilog!
Joe - who keeps messing with my chair??
Shri- Needin' a Physics Toolkit w/Dynamics + Collisions +
Fields, any ideas?
Melanie. Anyone have a working printer?
Quotes contains quotations taken from movies, tv, books, plays
or lyrics from music. It also contains references to pop culture.
Dusit - If you can dodge a wrench, you can dodge anything!
b33qZ -- king jeremy the wicked... oh, rules his world...
Andrea - so long and thanks for all the fish
Unknown contains all the entries in which the meaning of the
text is too cryptic that it could not be categorized by either the
primary participant or the investigator. It is assumed that once
deciphered that each of these entries could be placed in one of
the other sixteen categories.
b33qZ [nts:perri]
Andy ~ Ah '
Black_Venom (In 432)
»~-jd-~«-->
SkRoNk
<-- yeh social ppl
Variations contain entries where the identifier is a variation on
the person's name. This can include an abbreviated version of
the full name, a nickname in which the given name is still
identifiable, or a variation in the way the letters of the name are
printed or ordered.
DiAnNe
kev
Maggs
timbob
Einahpets
4.3 Category Distribution Frequency
Our third research question was:
What is the frequency distribution of these categories?
First, the 2226 logged display fields were analyzed to reveal a
total of 3603 elements (recall that some display fields could
have more than one information element in it). Second, each
element was then located in a single communications category
that best represented its meaning.
Figure 4 shows these category counts in two sections. The top
part plots the Name, Variations and Handle categories. We
separated these `Identification' categories from the other
categories because the information they contain satisfy the
original purpose of the display field i.e., to hold identifying
information. The frequency distribution of the remaining 14
categories are then listed.
The bar representing the counts of the number of elements
within each of these categories are further distinguished into
three groups. The lightest section of each bar represents the
group of category elements that were the only element contained
by the display field. The medium coloured section shows the
number of category elements whose text coexisted with another
element found in one of the three `Identification' categories in
the display field. The darkest section of the bar groups category
elements whose text coexisted in the display field with another
element found in any category other than the three
`Identification' categories.
The figure shows that approximately 49%, or 1766/3603 of the
categorized elements, were in one of the three `Identification'
categories, i.e., Name (32.4%), Variations (10%) or Handle
(6.4%). This makes sense, for meaningful self-identification is
the expected use of the IM display name feature. The darkly
colored regions of their bars also reveal that identification
elements in total coexist with other pieces of information in the
display field over 67% (1186/1766) of the time. For example,
the Name was included with other elements 825/1168 (71%) of
the time. Similarly, Variations and Handles was included
Figure 4: Bar chart displaying category distribution
205/359 (43%) and 156/239 (65%) in conjunction with other
elements. Note that there are no medium coloured regions in
these bars. This is because elements within the Name,
Variations and Handle categories never co-existed with each
other. They only occurred in conjunction with elements in the
other 14 category types.
The other 14 categories of communication identify information
unrelated to identification. Collectively, these categories
comprise the other 51% of the total number of elements (1837 of
3603 total). Within these 1837 elements, we see that the most
frequent categories of communication used are Mood at 19.4%
(357/1837), Comments at 17.8% (327/1837) Activities at
16.6% (305/1837), Location at 12.5% (230/1837), Messages at
8.3% (152/1837), followed by Quotes, Notices and Fun. The
other categories occur less often, but still at a significant level.
The modest size of the lightly coloured section of all these
categories suggest that this information often appeared in
tandem with other categories. Most of time, this was one of the
Name, Variations, or Handle elements, as represented by the
medium-coloured section in each bar. Still, the presence of the
darkly coloured bar sections showed that two non-identifier
category elements may coexist in a display field.
4.4 Demographics of People Who Change Their Display Names
Our final research question was:
Are changes to the display name related to the demographics
of age or sex?
The 444 contacts comprised somewhat more males than
females. The primary participants reported 232 males, 189
females, and 1 male/female (the account was known to be used
by a couple). The sex of the remaining 22 contacts was not
reported.
The dominant age range of the 444 contacts was between 21-30
years old. Table 1 summarizes the age demographics of the 444
contacts, as reported by our 12 primary participants. Since the
exact age of each contact was sometimes uncertain, we used age
group categories to capture their estimated ages.
We then analyzed whether age or sex of a person was related to
the number of changes that person made. First, we removed
records for those contacts whose sex was not reported. We then
performed a chi-square analysis on the remaining 421 contacts
to determine whether there was a relationship between sex and
the rate that a person changed their display field. Sex and
display name change rate were found to be independent,
χ²(5, N = 421) = 7.54, p = 0.183. That is, no relationship exists
between the sex of a person and how often a person changes the
display name.
We performed a similar chi-square analysis for age and display
name change rates, where unreported people were excluded.
Age groups were collapsed into three age ranges: <20, 21 to
30, and 31+. This was done for analytic reasons, since several
cells in the chi-square analysis would have contained counts of
less than one with the original divisions. Age range and name
change rates were found to be not independent, χ²(10, N = 413)
= 20.507, p = 0.025. That is, a relationship exists between the
age of a person and their likelihood of changing their display
name. This result will be examined further in the discussion.
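For readers who wish to run this style of analysis, the contingency-table test is available directly in SciPy. The counts in the sketch below are fabricated purely to show the mechanics and are not our observed data.

```python
from scipy.stats import chi2_contingency

# Rows: age ranges (<=20, 21-30, 31+); columns: change-rate categories
# (never, rarely, weekly, several/week, daily, several/day).
# These counts are illustrative only, not the study's observed counts.
observed = [
    [15,   4,  3,  6,  2,  1],
    [180, 38, 17, 50, 10, 14],
    [63,  12,  5, 16,  2,  3],
]
chi2, p, dof, expected = chi2_contingency(observed)
print(f"chi-square({dof}) = {chi2:.2f}, p = {p:.3f}")
```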
DISCUSSION
The most important thing revealed by our study is that a good
number of people persistently used the display name feature to
publicly broadcast information about themselves to their friends,
and that this happened outside of individual chat sessions. They
did this in spite of the fact that IM display fields are not
explicitly designed to be a public broadcast system. This
suggests that systems should be designed to better support this
kind of broadcast activity. Details are discussed below.
5.1 Interpreting the results
People change the information in their display field. From
this study we have learned that the changing of the information
in an IM display field is not an oddity or something done
occasionally by certain individuals. Rather, it is a popular
behaviour: 42% of users in our study changed their display
name, and 25% did so several times a week or more. This
behaviour happens in spite of the fact that the Instant Messenger
client we studied does not make changing the display name
immediately accessible (e.g., through direct manipulation):
people had to raise menus, dialog boxes, and form fill the text.
People use the display field for identification, to give
information about self, and to broadcast messages. People
used the limited text that could be displayed in the display field
in rich ways. Seventeen different categories were needed to
describe the various communications placed in the display field.
Stepping back, three themes encompass these categories. The
first theme is Identification: "who am I"? The second theme is
Information About Self: "this is what is going on with me". The
third theme is Broadcast Messages: "I am directing information
to the community". These are described separately in the
following three sections.
Identification is fundamental. Identifying oneself to personal
contacts by typing one's own name in the display field is the
original purpose of this feature; the name replaces the default
email address as a way to uniquely identify a person. This
proved necessary because e-mail addresses are a poor substitute
for a name; some email services enforce cryptic email addresses,
and others are so oversubscribed that all but the rarest names are
already taken.
Table 1: Age distribution of contact group

Age Group    Count    Percent
<15              7       1.69
16-20           24       5.81
21-25          179      43.34
26-30          126      30.51
31-35           36       8.72
36-40           18       4.36
40+             23       5.57

N = 413, Unreported = 31
While people identified themselves in several ways, inserting
one's real Name or a recognizable Variation of it (e.g., initials
or nicknames) proved the two most common communication
categories. Handles was also popular (a constant representative
name that superficially resembles nicknames in IRC or
discussion groups on the Internet [1, 17]). Regardless of the
differences between these categories, in all cases the names,
variations or handles presented are not used to maintain pseudo-anonymity
or complete anonymity as in IRC or MUDs. Rather,
the identifier is something that the contact group uses to
recognize a known individual.
Another indicator of the importance of the Identification
categories is that many users keep their name visible even when
they add extra information to the display field (the black bars in
the three identification categories, and the grey bars in the other
14 categories in Figure 4). People do this in spite of the limited
display space: in a normally sized IM window about 30-50
display field characters are viewable. As well, the usual order of
this information is a name followed by the extra information. A
typical example is illustrated in Figure 5. This inclusion of
identity is likely done as a courtesy behaviour so that others can
distinguish between contacts without resorting to deciphering
the e-mail address.
Extra information is usually about self. Of the remaining 14
categories, the majority of them provide information about
`self'. Elements in these `about self' categories dominate the
frequency count (~85% of the non-name elements), with the top
four categories providing information about Mood, Comments,
Activities, and Location. These top categories all present
information about the person at a moment in time: they annotate
how they are feeling, what they are doing, or where they are.
Similarly, the lesser used Presence category indicates if they are
available, thus augmenting the preset status indicators, while
Quotes and Fun are indirect indicators of state of mind and
personality traits. Obviously, these people want to disclose an
additional level of information revealing personal state and
action to their community of friends, close contacts and
collaborators. The regular association of this kind of information
with one's name means that this information is truly about self;
this is in sharp contrast to the personas found in chat systems,
where people construct an artificial pseudonym identity through
avatars or nicknames [1, 17].
People want to be able to broadcast information without
involving conversation. Most of the remaining categories (about
14% of the non-name elements) contain communicative
messages intended for the group. In particular, Messages,
Notices, Questions and Directions are categories that either
provide information thought to be of interest to the group or are
posted to stimulate a response. Most of these are undirected e.g.,
`Does anyone know...'. Occasionally, a message may be
specifically directed to an individual, yet this is done in a forum
public to the community of contacts. Clearly, people are
adapting the IM display field into a form of public broadcast
communication facility; they are thus fulfilling one element of
the broadcasting system described by Etzioni and Etzioni [5].
Since each user's contact list contains a different set of names, a
responder (who may change their display name to respond to
another's broadcast message) is likely not sending that response
to the same community of people. This hampering of responses
suggests that display names are less effective for creating the
running dialogs common to IRC, MUDs and other public
broadcast systems [6, 11, 17].
Asynchronous messaging. In MSN Messenger, the direct chat
facility is session based. That is, direct chat cannot be used by
one person to leave information for a currently `Offline'
participant to read later. In contrast, the display name persists
across sessions, meaning that asynchronous communication to
offline participants is possible. For example, consider the
message `SirMe - Happy Birthday, Angie!' that was found in the
Messages category. By including this in his display name,
SirMe is leaving an asynchronous message that Angie (and
others) can see when they come on line.
Younger users may change their display names more
frequently than older users; sex does not make a difference.
The demographics of our study suggest some demographic
trends, which are described below. However, we caution that,
due to the way we collected data, the demographic findings and
how they relate to display name changes are at best tentative.
First, the age ranges of our secondary contacts (14 to 65
years old) were likely heavily influenced by the fact that these
contacts were culled from the lists of only 12 primary
participants (from 22 to 50 years old), most of whom were within
the 21-30 age group, weighing the data with a similar age range.
Second, our data is incomplete as display field change data for
secondary contacts was not collected when their associated
primary contact was off line. Third, ages of secondary
participants were estimated, which affects the analysis we could
do. In spite of this tentative flavour, we include our results as
they suggest trends and future areas of study.
We saw a fairly balanced number of males and females in our
sample: 55% were male, 45% were female. The chi-square
analysis for sex and display field change rates indicated that the
two variables are independent, i.e., the sex of the participant
does not suggest how often that person would change their
display name. However, the chi-square analysis for participant
age and display field changes suggests that they are related.¹ We
subsequently examined the chi-square table data to compare the
observed count with the expected count for each cell of age
group crossed with rate. Discrepancies between the observed
and expected counts indicate a pattern where younger users are
more apt to frequently change their display name when
compared to older users. This trend may reflect a "computer
generation" gap where younger users would be more apt to
change their display name. It could also reflect a culture gap,
where younger users are using it for social reasons [8], while
older users are using it for workplace purposes [13].
¹ While the chi-square test determined that the two variables are not
independent, it does not provide details on how the two variables are
related. If true values of age and average change rates were available
instead of our estimated categories (a subject of a future study), other
statistical analyses could be used to reveal this detail.
Figure 5: A typical display field showing how people retain
identity (Name), followed by other information (Activity)
5.2 Implications for practitioners
People persistently use the display field not only to identify
themselves to their community of contacts, but to reveal
personal information about self and to broadcast messages. They
do this in spite of the fact that the display field facility was
designed for other purposes; the IM community co-opted this
feature to fill their real desires and needs.
The first major implication is that IM and similar facilities need
first-class interface features that let people broadcast identifying
information, information about self, and public messages.
Because some people change this information fairly often, this
information should be easy to create and alter, e.g., through
direct manipulation.
Some of these capabilities are only now being supplied by a few
major IM vendors. For example, the new version of MSN
Messenger (v. 7.0), released shortly after our study was
performed), includes a dedicated space for adding and editing a
personal message (Figure 6, top). A person can directly alter this
text by clicking within it: no menus or dialog boxes have to be
navigated or raised. Other people see this personal information
as visually distinguished text, e.g., the italicized text within the
contact list (Figure 6, bottom). The personal information
message is also specific to the particular machine, similar to the display
picture. Thus people can set unique location labels on various
computers if desired, e.g., home or work.
The Community Bar (CB) [12] is a multimedia groupware
system being developed by collaborators in our laboratory.
Elements of its design are partially influenced by our study
results. People within an ad hoc group inhabit places, and all see
the equivalent of a contact list within a place. For example,
Figure 7 shows a place called `IM Paper' and three participants
within it. To support `Identification', each participant is
represented by a `Presence item', which shows a running video (or
photo) of them along with their name. To support `Information about self',
the Presence item also includes optional personal information
(which may wrap across multiple lines) that persists across login
sessions. A person can quickly modify this personal information
by a popup raised whenever he or she moves their mouse over
their item (Figure 7, right side). To support `Broadcast
Messages', it also lets people broadcast and respond to public
messages to all people in the group. This public broadcast is not
available in MSN Messenger 7. For example, Figure 7 (bottom)
illustrates a public text chat dialog that lets anyone in the group
post messages; all see its contents and all can post responses.
Not shown is a sticky note facility, where a person can post a
persistent message to all. Finally, certain categories of
information are supported. For example, `Directions' are
satisfied by letting people post a `web item' (not illustrated): a
thumbnail of a web page of interest that others can navigate to
via a single button press.
Another implication of our study is that people use many
different categories of information, especially when describing
self, which in turn suggests that people are trying to provide
others with a rich picture of themselves. Yet most systems, even
the current ones shown above, only let people set one attribute
of themselves in their personal message space (although they
may combine these in a text fragment). Perhaps future systems
will let people construct an `avatar' of sorts with multiple
attributes that distinguish these categories, so that (say) mood,
location and activity are treated independently rather than
compete for a small space.
While these (and likely other) systems suggest point design
solutions to our implications, what is important is that our study
has placed this work on a solid intellectual footing. It provides
details of what people have done, and has identified the
categories of information that people supply. For example, we
suspect that MSN Messenger's inclusion of a personal
information field arose because its designers noticed that people
were moulding the technology to suit their needs, and they
wanted to "fix the interface" to better fulfill these needs. In
contrast, our study helps designers understand why
appropriation occurred in the first place.
Figure 6: MSN Messenger v7.0 separates editing and display
of names and personal messages.
Figure 7: Snapshot of Community Bar displaying personal
message space within presence item.
Looking at the 17
categories of communication that are used in messages found in
the display name space, we saw that most are personal, or about
the self. In taking over this space, users are not `hacking' to
make IM do totally different things. Rather, they are adding
richness to their identity beyond their simple name label. They
are expressing identity, and they own this expression by using a
text field that only they can alter.
We also saw that there is some use of the display field for public
broadcasting of messages. This suggests that there is a problem
with the way we compartmentalize systems: IM systems with no
real notion of groups or public broadcast, versus IRC and similar
systems where public broadcasts dominate. The real solution
likely amalgamates both: a system that somehow supports both
public and private discussions between ad hoc (and perhaps non-overlapping)
groups. To our knowledge, only very few systems
(such as the Community Bar above [12]) are trying to tackle this
fusion of genres.
CONCLUSION
Most studies of communication using instant messenger clients
have been focused on the activities within the main chat
window. In contrast, this study examined how contacts
appropriate IM technology to publicly broadcast information by
adding extra text to their display name. We exposed patterns of
behaviour, where we saw that almost half of the contacts we
monitored change their display names with varying frequencies.
We established a set of seventeen communication categories for
the different types of personal messages added to the display
field. We saw that people did want to identify themselves (the
Name, Variations and Handles category), and that these were
true identities that contacts would recognize versus anonymous
pseudonyms not known by others within the social group. We
also saw that the most popular communications were those that
added personal information about self: a person's psycho-physiological
status, one's current activities, details of their
location, and expressions of personal comments and opinions.
We also saw that people occasionally used it to broadcast
messages to the group, a facility not otherwise available in IM.
These findings suggest that personal information and public
broadcast of messages, currently supported through this creative
appropriation by users, should be provided as a first class
interface feature in IM design.
This is just the first of a set of studies that could be done. Much
has been discovered, although these results should be verified
and refined further. For example, modest refinements of our
study protocol would allow us to more precisely capture the
frequency of changes within the display field and their
distribution within the different communication categories.
However, we suspect that the actual categories of
communication will not change dramatically. We would also
like to consider the author's intentions of a display name change
along with the recipient's opinion. More importantly, we intend
to study behaviour and communication patterns within systems
that provide explicit support for personal information supply
(such as MSN v7.0) and public broadcast (such as the
Community Bar).
ACKNOWLEDGMENTS
Many thanks to all those who participated in this project and
took precious time to make communication category evaluations
for their many contacts' display name changes; their input was
invaluable. Special thanks to Gregor McEwan and Michael
Boyle for their advice both on intellectual content and on our
recording software. This work was funded in part by the NSERC
Nectar Research Networks program.
REFERENCES
[1] Bechar-Israeli, H. (1995). From "Bonehead" to "cLoNehEAd": Nicknames, play and identity on internet relay chat. J. Computer-Mediated Communication, 1 (2).
[2] Cadiz J. J., Venolia G. D., Jancke G. & Gupta A. Designing and Deploying an Information Awareness Interface. Proc ACM CSCW (2002). 314-323.
[3] Cameron, A. F. & Webster, J. (2005). Unintended consequences of emerging communication technologies: Instant Messaging in the workplace. Computers in Human Behaviour, 21, 85-103.
[4] Cutrell E. B., Czerwinski M. & Horvitz E. Effects of instant messaging interruptions on computing tasks. In Proc ACM CHI Extended Abstracts (2002). 99-100.
[5] Etzioni, A. & Etzioni, O. (1999). Face-to-face and computer-mediated communities, a comparative analysis. The Information Society, 15, 241-248.
[6] Fitzpatrick, G., Parsowith, S., Segall, B., & Kaplan, S. (1998). Tickertape: Awareness in a single line. Proc ACM CHI (1998). 281-282.
[7] Greenberg, S. & Rounding, M. (2001). The Notification Collage: Posting Information to Public and Personal Displays. Proc ACM CHI (2001). 514-521.
[8] Grinter, R. E. & Palen, L. (2002). Instant messaging in teen life. Proc ACM CSCW (2002). 21-30.
[9] Hu, Y., Wood, J. F., Smith, V., & Westbrook, N. (2004). Friendships through IM: Examining the relationship between instant messaging and intimacy. J Computer-Mediated Communication, 10 (1).
[10] Isaacs E., Walendowski A., Whittaker S., Schiano D. J. & Kamm C. (2002). The character, functions, and styles of instant messaging in the workplace. Proc ACM CSCW (2002). 11-20.
[11] Jania, F. (2003). Broadcast Messaging: Messaging to the masses. Queue, 1 (8), 38-43.
[12] McEwan, G. and Greenberg, S. Supporting Social Worlds with the Community Bar. To appear in Proceedings of ACM Group 2005, Sanibel Island, Florida, Nov 6-9.
[13] Nardi, B. A., Whittaker, S., & Bradner, E. Interaction and outeraction: instant messaging in action. In Proc ACM CSCW (2000), 79-88.
[14] Piepmeyer, A. I've been replaced by a screen name. The Daily Utah Chronicle, October 31, 2003. www.dailyutahchronicle.com/news/2003/10/31/Opinion/Ive-Been.Replaced.By.A.Screen.Name-545565.shtml
[15] Rennecker J. & Godwin L. Theorizing the Unintended Consequences of Instant Messaging for Worker Productivity. Sprouts: Working Papers on Information Environments, Systems and Organizations, 3 (Summer). Retrieved Dec 3, 2004. //weatherhead.cwru.edu/sprouts/2003/030307.pdf
[16] Tang, J. C. & Begole, J. (2003). Beyond instant messaging. Queue, 1 (8), 28-37.
[17] Turkle, S. (1997). Life on the Screen: Identity in the Age of the Internet. New York: Simon & Schuster Inc.
98 | communication;Communication Catogories;Name Variation Handles;Identification Is Fundamental;Related IM Research;Distribution Frequency Of Various Catogories;Display Names;Instant messenger;awareness;MSN messager;Broadcast Information;Catorgorisation Of Display Names;Instant Messaging;display name |
48 | Building a Research Library for the History of the Web | This paper describes the building of a research library for studying the Web, especially research on how the structure and content of the Web change over time. The library is particularly aimed at supporting social scientists for whom the Web is both a fascinating social phenomenon and a mirror on society. The library is built on the collections of the Internet Archive, which has been preserving a crawl of the Web every two months since 1996. The technical challenges in organizing this data for research fall into two categories: high-performance computing to transfer and manage the very large amounts of data, and human-computer interfaces that empower research by non-computer specialists. | 1. BACKGROUND
1.1 Research in the History of the Web
The Web is one of the most interesting artifacts of our time. For
social scientists, it is a subject of study both for itself and for the
manner in which it illuminates contemporary social phenomena. Yet
a researcher who wishes to study the Web is faced with major
difficulties.
An obvious problem is that the Web is huge. Any study of the Web
as a whole must be prepared to analyze billions of pages and
hundreds of terabytes of data. Furthermore, the Web changes
continually. It is never possible to repeat a study on the actual Web
with quite the same data. Any snapshot of the whole Web requires a
crawl that will take several weeks to gather data. Because the size
and boundaries of the Web are ill defined, basic parameters are hard
to come by and it is almost impossible to generate random samples
for statistical purposes.
But the biggest problem that social scientists face in carrying out
Web research is historical: the desire to track activities across time.
The Web of today can be studied by direct Web crawling, or via
tools such as the Google Web API (see footnote 1), while Amazon has recently
made its older Alexa corpus commercially available for the
development of searching and related services (see footnote 2). However, the only
collection that can be used for more general research into the history
of the Web is the Web collection of the Internet Archive (see footnote 3).
1.2 The Internet Archive
Everybody with an interest in the history of the Web must be
grateful to Brewster Kahle for his foresight in preserving the content
of the Web for future generations, through the not-for-profit Internet
Archive and through Alexa Internet, Inc., which he also founded.
Footnote 1: The Google Web Search API allows a client to submit a limited
number of search requests, using the SOAP and WSDL
standards. See: http://www.google.com/apis/.
Footnote 2: See http://websearch.alexa.com/welcome.html for the Alexa
corpus made available by Amazon. This site also has a
description of the relationship between Alexa Internet and the
Internet Archive.
Footnote 3: The Internet Archive's Web site is http://www.archive.org/.
The Internet Archive began to collect and preserve the Web in 1996.
With a few gaps in the early years, the collection has added a full
crawl of the Web every two months since then. Most but not all of
this data comes from the Alexa crawls. Statistics of the sizes of the
separate crawls are complicated by the fact that a single crawl may
contain several variants of the same URL, but in August 2005 the
total volume of data was 544 Terabytes (TB). This is the size of the
compressed data. As discussed below, the overall compression ratio
is about 10:1, so that the total size of the collection is approximately
5 to 6 Petabytes uncompressed. Table 1 gives estimates of the size
of the individual crawls for each year.
Table 1. Estimates of crawl sizes (compressed)
Year   Web pages (TB per crawl)   Metadata (TB per crawl)
1996            1                         0.2
1997            2                         0.4
1998            3                         0.6
1999            4                         0.8
2000           10                         1.2
2001           15                         2
2002           25                         3
2003           30                         4
2004           45                         6
2005           60                        10
We are working with the Internet Archive to build a research library
based on this collection. In summer 2005, we began work on the
system that is being used to transfer a major subset of the data to
Cornell and to organize it for researchers, with a particular emphasis
on supporting social science research. This paper describes the
technical design, performance testing, and progress in
implementation.
The overall goals of the library and plans for its use in research are
described in a separate paper [1].
1.3 User Studies
In building any library, the objective is to organize the collections
and services so that they provide the greatest range of opportunities
for users, both now and in the future. Inevitably the design is a
trade-off between predictions of what users will find helpful and the
practicalities of building and maintaining the library. This trade-off
is particularly important for a library of the whole Web because of
the computing challenges of managing very large amounts of data.
Therefore, the design of the library began with interviews of
potential users to identify how the collections might be organized to
be most valuable to them. Two user studies were carried out, with
sociologists and with computer science researchers.
1.3.1 Sociology
In fall 2005, Cornell received support from the National Science
Foundation's Next Generation Cybertools program for a project that
combines sociology research with continuing development of the
Web library (see footnote 4). In this project, the specific areas of research are
diffusion of ideas, including polarization of opinions and the spread
of urban legends. Conventionally, sociologists have studied such
phenomena by analysis of small surveys with hand-coded data. One
aim of the project is to develop a new methodology for such
research built around very large-scale collections of Web data, with
automated tools used to extract, encode and analyze the data.
Social science researchers identified a number of specific studies
that they would like to carry out using historical Web data. Many of
the studies have the same general structure: (a) extract a subset of
the Web for detailed analysis, (b) encode selected attributes of the
pages in that subset, (c) repeat for the corresponding subsets at
several different dates, (d) analyze the changes over time.
The criteria by which a portion of the Web is chosen for analysis are
extremely varied. Some desirable criteria are impossible with
today's computing, e.g., they require understanding of the content of
a page. However, simple criteria such as domain names provide a
good starting point for many purposes, particularly when combined
with focused Web crawling to refine the subsets for analysis. Once a
subset has been extracted, social science researchers want to analyze
the text, for which full text indexes are important. They also wish to
analyze the structure of links between pages for the social
relationship that they represent.
Such research requires interdisciplinary efforts by computer
scientists and social scientists. Some of the analysis tools already
exist, e.g., using full text indexes of Web pages to trace the
movement of individuals. Others tools are themselves subjects of
computer science research in natural language processing and
machine learning, e.g., to analyze the text of Web pages for
sentiments, opinions, and other features of interest to social
scientists.
1.3.2 Computer Science
Ten computer scientists who carry out research on the Web
contributed to the user studies. Their research areas include the
structure and evolution of the Web, data mining, digital libraries,
machine learning, and natural language processing. Most of their
interest focuses on the textual content of Web pages and the
structure of the Web as revealed by the graph of links between
pages. Several of the researchers commented that they expend
ninety percent of their effort gathering test data; even then they have
difficulty in determining how robust the results are across time.
A fundamental tool for such research is the Web graph of links
between pages. Studies of the graph are very important in
understanding the structure of the Web, and the graph is the basis of
practical tools such as PageRank [3] or Hubs and Authorities [9].
Despite its importance, there have been few studies that have looked
at changes in the Web graph over time. Many of the classical studies
of the Web graph were based on early AltaVista crawls and have
never been repeated. Algorithmic research needs graphs of at least
one billion pages, preferably stored in the main memory of a single
computer.
For textual research on the Web there are two additional
requirements. The first is snapshots that are repeated across time
that can be used for burst analysis and other time-based research.
Footnote 4: Michael Macy (principal investigator), et al., "Very Large Semi-Structured
Datasets for Social Science Research". NSF grant
SES-0537606. http://www.infosci.cornell.edu/SIN/cybertools/
The second is full text indexes of substantial numbers of Web pages.
Focused Web crawling is of particular importance in digital libraries
research. Part of the original motivation for developing this library
was an interest in automatic selection of library materials from the
Web [2, 10]. Using the actual Web for research in focused crawling
is technically difficult and the results are often hard to interpret
since no experiment can ever be repeated with exactly the same
data.
ARCHITECTURE
The Internet Archive uses highly compressed file formats developed
in conjunction with Alexa Internet. Compressed Web pages are
packed together in large files using the ARC format [4]. The pages
in each ARC file are in essentially random order, usually the
sequence in which the Web crawler originally captured them. Every
ARC file has an associated DAT file, which contains metadata for
the pages including URL, IP address, crawl date and time, and
hyperlinks from the page. The files are compressed with gzip. Ten
years ago the decision was made that ARC files should be
approximately 100 MB, which seemed big at the time, but this size
is now too small for efficiency and will need to be increased. The
sizes of the DAT files depend on the number of pages in the
associated ARC files, but average about 15 MB. The compression
ratios also vary widely. The ratio is more than 20:1 for text files but
close to 1:1 for files that are already compressed efficiently, such as
videos. The overall ratio for ARC files is about 10:1.
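To make the record layout concrete, the following is a minimal Python sketch of a reader for version-1 ARC records. It assumes each record begins with a single space-separated header line of the form <url> <ip> <archive-date> <content-type> <length>, followed by <length> bytes of content, as described in [4]; the exact header fields, the leading filedesc record, and the way records are gzipped in practice may differ, so treat this as an illustration rather than the library's actual parser.

import gzip

def read_arc_records(path):
    """Yield (header_fields, content_bytes) pairs from a gzip-compressed ARC file.

    Assumes the version-1 header layout: url, ip, archive date,
    content type, and length, separated by spaces on one line.
    """
    with gzip.open(path, "rb") as f:
        while True:
            line = f.readline()
            if not line:
                break                      # end of file
            line = line.strip()
            if not line:
                continue                   # skip blank separators between records
            fields = line.split(b" ")
            length = int(fields[-1])       # last header field is the record length
            content = f.read(length)       # raw captured page content
            yield fields, content

# Example (hypothetical file name): count HTML records in one ARC file.
# html = sum(1 for h, _ in read_arc_records("IA-2005-08-sample.arc.gz")
#            if b"text/html" in h[-2])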
2.1.1 The Database
The Cornell Web library uses a relational database to store metadata
about the Web pages and a separate Page Store to store the actual
pages. In addition, the unprocessed ARC and DAT files received
from the Internet Archive are copied to a tape archive. In choosing a
relational database, we considered but rejected two approaches that
have been successful in related applications.
The first option was to use a modern digital library repository with
support for rich data models, such as XML and RDF, and search
services that support semi-structured data, such as XQuery. Such
capabilities are appealing, but we know of no repository system that
can manage tens of billions of objects. The scale of the Web
precludes such an approach.
The second option was to follow the model of production services
for the Web, such as Google [7] and Yahoo. They provide low cost
processing and data storage by spreading their systems across very
large numbers of small, commodity computers used as servers. This
is the approach used by the Internet Archive to store its collections
and for its very popular Wayback Machine (see footnote 5). We rejected this
architecture for a research library for two principal reasons: (a) there
are many algorithmic computations on large datasets where a single
large computer is intrinsically more efficient than a distributed
cluster of smaller machines, and (b) even when the research can be
done effectively, clusters of computers are more difficult to program
by researchers who are carrying out Web-scale research. As an
example, each server at the Internet Archive has an index of the files
stored on it, but there is only a very limited central index. The
Wayback Machine allows a user to retrieve all the pages in the
Footnote 5: The Wayback Machine is accessible at http://www.archive.org/.
entire collection that have a given URL. It relies on a protocol in
which an identifier is broadcast and each server responds with a list
of matches. This is very efficient for this specific purpose, but it
would be extremely difficult to extract the flexible subsets required
by social science researchers with this organization of data.
A relational database has many advantages for the Web library and
one major disadvantage. Foremost among the advantages is
scalability. Commercial relational database systems are highly
optimized for storing, loading, indexing, extracting, backing-up, and
restoring huge volumes of data. Usability is another important
advantage. A relational schema provides a single image of the
collection, expressed in a manner that is familiar to many
researchers. The disadvantage is a loss of flexibility. The design and
implementation of the database attempt to reconcile the expected
uses that will be made of the library against scalability constraints,
but it will be difficult to make major changes to the schema without
rebuilding the entire database.
The actual Web pages are stored in a separate Page Store. At the
Internet Archive, if two Web pages are identical they are stored
twice. With the new Page Store, duplicate pages are stored only
once. Rather surprisingly, there is as yet very little data about how
many pages remain unchanged between crawls, but we expect that
elimination of duplicates will save significant online storage,
especially with large audio and video files.
The Page Store is implemented as a set of compressed files, one file
for each page received in the ARC files. Since many pages on the
Web do not change between crawls, the Preload subsystem checks
for content duplicates using an MD5 check sum of the content.
Thus, a copy of the content is stored only once however many pages
have that content. In order to guarantee fast access to the stored
content, each page's content is compressed individually.
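A minimal Python sketch of such a content-addressed store is shown below; the directory layout, fan-out scheme, and compression settings are illustrative assumptions, not the library's actual implementation.

import gzip
import hashlib
import os

STORE_DIR = "page_store"    # hypothetical root directory of the Page Store

def store_page(content: bytes) -> str:
    """Store page content at most once, keyed by the MD5 of the content.

    Returns the MD5 hex digest, which database rows can reference.
    Each page is compressed individually so it can be fetched on its own.
    """
    digest = hashlib.md5(content).hexdigest()
    # Fan out into subdirectories by the first two hex characters to keep
    # directory sizes manageable (an assumption about the layout).
    subdir = os.path.join(STORE_DIR, digest[:2])
    os.makedirs(subdir, exist_ok=True)
    path = os.path.join(subdir, digest + ".gz")
    if not os.path.exists(path):           # duplicate content is stored only once
        with gzip.open(path, "wb") as f:
            f.write(content)
    return digest

def fetch_page(digest: str) -> bytes:
    """Retrieve and decompress a stored page by its MD5 digest."""
    with gzip.open(os.path.join(STORE_DIR, digest[:2], digest + ".gz"), "rb") as f:
        return f.read()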
The architecture of the Page Store allows decisions to be made
about which pages to store online at any given time. For example,
the library might decide not to store large audio and video files
online. While all metadata will be online at all times, an individual
Web page could be accessed from the online Page Store, the off-line
tape archive, or over the Internet from the Internet Archive.
2.2 Equipment
The library is housed at the Cornell Theory Center, which is the
university's high-performance computing center. The choice of
equipment and the use of a relational database were closely related
decisions. The Theory Center has expertise in distributed cluster
computing, but because of the very high data rates, a symmetric
multi-processor configuration was chosen instead. Figure 1 shows
part of the configuration of the central computer.
Figure 1. Configuration of the main computer system
The system is shared with another data-intensive program of
research, the analysis of data from the Arecibo radio telescope in
Puerto Rico. Each group has use of a dedicated Unisys ES7000/430
server, with 16 Itanium2 processors running at 1.5 Gigahertz. The
memory can be shared between the servers, but in practice each
project has sole use of 32 GB. Each server has a RAID disk
subsystem attached via a dual-ported fiber-channel. The operating
system is Microsoft Windows Server 2003.
For the Web library the disk subsystem provides an initial 45 TB of
disk storage. We plan to extend the capacity to 240 TB by 2007.
There are no technical barriers to adding additional fiber channels
and disk capacity. In the longer-term, disk prices are falling faster
than the growth in the size of Web crawls, which gives confidence
that the library will be able to keep up with the growth of the Web.
By using a symmetric multi-processor configuration with a high
performance disk subsystem, we are able to balance processing and
disk access requirements. Since the data sets are local to the system
on which the database is located, the system can perform bulk-loading
tasks without incurring any networking penalties.
The large real memory is an added attraction of this configuration. It
allows researchers to carry out substantial computation entirely in
memory. For instance, it is possible to process a Web graph of one
billion pages within memory.
2.3 The Human Interface
The design of the human interface is perhaps the most challenging
aspect of developing the library. The social science research groups
that we are working with have the technical skills to write scripts
and simple programs. Many are experts in statistical calculations.
But they should not be expected to write large or complex computer
programs. The current design supports three categories of users.
The Basic Access Service provides a Web Services API that
allows a client to access pages in the collection by any metadata
that is indexed in the database, e.g., by URL and date. The Retro
Browser, which is described below, uses this API to allow a user
to browse the collection as it was at a certain date.
The Subset Extraction Service supports users who wish to
download sets of partially analyzed data to their own computers
for further analysis. A Web form is provided to define a subset of
the data (e.g., by date, URL, domain, etc.), extract subsets of the
collection, and store them as virtual views in the database. Sets of
analysis tools, many of which are already under development, can
be applied to the subset and the results downloaded to a client
computer.
Technically advanced users can be authorized to run their own
programs on the central computer.
To support the Basic Access Service and the Subset Extraction
Service, we provide a dedicated Web server, which is housed next
to the main system.
SCALABILITY EXPERIMENTS
Although the library has been generously funded by the National
Science Foundation, we do not yet have sufficient capacity to
download and mount online the entire Web collection of the Internet
Archive. This is the long term goal, but during the initial phase, care
has been taken to balance the several parts of the system: online
storage, network bandwidth, processing of the incoming data,
database, performance, and the need to archive, back-up, and restore
the data. In spring 2005, several undergraduate and masters students
carried out independent projects to estimate sizes and processing
requirements (see footnote 6).
To test database performance before large amounts of actual data
were available, we used the R-MAT algorithm to generate a
synthetic graph with properties similar to the Web graph [5]. This
test graph has one billion nodes with more than seven billion links,
and domain names generated according to their distribution on the
real Web [11].
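For readers unfamiliar with R-MAT [5], the Python sketch below shows the core idea: each edge is placed by recursively descending into one of the four quadrants of the adjacency matrix with fixed probabilities. The parameter values and scale are illustrative only; the generator actually used for the benchmark also assigned synthetic domain names following the distribution reported in [11].

import random

def rmat_edge(scale, a=0.57, b=0.19, c=0.19, d=0.05):
    """Generate one edge of a 2**scale node R-MAT graph.

    At each of `scale` levels the edge falls into one quadrant of the
    adjacency matrix, chosen with probabilities a, b, c, d.
    """
    src = dst = 0
    for level in range(scale):
        r = random.random()
        bit = 1 << (scale - level - 1)
        if r < a:
            pass                    # top-left quadrant: no bits set
        elif r < a + b:
            dst |= bit              # top-right: destination bit set
        elif r < a + b + c:
            src |= bit              # bottom-left: source bit set
        else:
            src |= bit              # bottom-right: both bits set
            dst |= bit
        # In practice the probabilities are usually perturbed slightly at
        # each level to avoid a perfectly self-similar graph; omitted here.
    return src, dst

def rmat_graph(scale, n_edges):
    """Return a list of (src, dst) edges for a synthetic graph."""
    return [rmat_edge(scale) for _ in range(n_edges)]

# Small example: edges = rmat_graph(scale=20, n_edges=8 * (1 << 20))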
Based on these benchmarks, the decision was made to install a 100
Mb/sec network connection to the Internet Archive and to load data
at a sustained rate of 250 GB/day, beginning January 2006. This rate
will enable the library to acquire and mount online by the end of
2007 a complete crawl of the Web for each year since 1996. This
phase will require approximately 240 TB of disk storage. Note that
the disk requirement differs from the estimates of raw data shown in
Table 1. The database with its indexes is less highly compressed
than the raw data, but savings are made in the storage of duplicate
data, both in the database and the Page Store.
During fall 2005, first generation software was written for (a) the
data flow system that brings data to the library, and (b) the user API
and tool sets. They are described in the next two sections.
DATA FLOW
Figure 2 shows the flow of data into the library. When ARC and
DAT files are received from the Internet Archive, the first step is to
store them in the tape archive. The Preload system then unpacks the
raw data, extracts metadata, and prepares batch files for loading into
the database and Page Store.
Figure 2. Flow of data into the library
Figure 2 does not show the data tracking system. This is a major
subsystem that manages the data transfers, monitors all errors, and
tracks the tens of millions of files within the library.
4.1 Networking
Internet2 is used to transfer data from the Internet Archive in San
Francisco, California to Cornell University in Ithaca, New York. For
this purpose, a 100Mbit/sec link has been established from the
Internet Archive to Internet2. Both Cornell and the Internet Archive
have internal networks with 1 Gbit/sec or greater performance.
In the future, the National LambdaRail and the TeraGrid are
intriguing possibilities. These new networks have the capacity to go
Footnote 6: These student reports are available at
http://www.infosci.cornell.edu/SIN/WebLib/papers.html.
beyond bulk data transfer and support genuine distributed
processing between the Web library and the Internet Archive. For
example, if large audio and video files are not stored online, an
application could use the TeraGrid to retrieve individual large files
from the Internet Archive on demand.
At the end of December 2005, a series of experiments were run to
measure the sustained throughput of multi-threaded FTP transfers
over Internet2, using Transport Layer Security. These measurements
showed transfer rates of 280 GB per day before system tuning, or
rather better than 30 percent of the theoretical maximum throughput
of the link to the Internet Archive. This is sufficient for the planned
rate of 250 GB per day. If greater bandwidth proves necessary, the
link from the Internet Archive to Internet2 can be upgraded to
500Mbps inexpensively, while the Cornell Theory Center will soon
have enormous bandwidth available via the TeraGrid.
4.2 Preload Subsystem
The Preload subsystem takes incoming ARC and DAT files,
uncompresses them, parses them to extract metadata, and generates
two types of output files: metadata for loading into the database and
the actual content of the Web pages to be stored in the Page Store.
Metadata for loading into the database is output in the form of 40GB
text files, a separate file for every database table.
To satisfy the speed and flexibility requirements, the Preload system
is designed to run as a set of independent single-thread processes,
avoiding all inter-process communication and locking over input or
output data. Likewise, each process writes its own output files. This
design allows for easy configuration of the system to run a required
number of processes on a given number of processors, on one or
more machines. To determine each process's input, input files are
partitioned by the first k bits of the MD5 hash sum of the filename,
where 2^k is the total number of processes in the system. The design
of the subsystem does not require the corresponding ARC and DAT
files to be processed together.
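The partitioning rule can be expressed compactly. The following Python sketch (with a hypothetical file name in the example) shows how an input file is assigned to one of the 2^k Preload processes.

import hashlib

def partition(filename: str, k: int) -> int:
    """Map a file name to one of 2**k Preload processes.

    The partition index is the first k bits of the MD5 hash of the file
    name, so the assignment is deterministic and needs no coordination
    between processes.
    """
    digest = hashlib.md5(filename.encode("utf-8")).digest()
    return int.from_bytes(digest[:4], "big") >> (32 - k)

# Example with k = 4 (16 partitions), as in the experiments described below:
# partition("IA-2005-08-000123.arc.gz", k=4)  -> a value in 0..15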
A series of experiments were run to test the performance of the
Preload system, using the metadata from the synthetic Web graph.
The experiments used 1, 2, 4 and 8 processors, with the data
partitioned into 16 parts, according to the first 4 bits of the hash
sum. Separate experiments were made for ARC and DAT files.
Figure 3 shows results for the ARC files. The x-axis shows the
number of CPUs used, and the y-axis shows the throughput in
MB/sec. The white bar shows throughput per processor and the
shaded bar shows total throughput. Adding more processors slightly
decreases the throughput per processor due to contention for random
disk accesses. The total throughput increases steadily up to four
processors. After that, disk contention becomes too high, and the
throughput actually declines. The results for DAT files are similar,
with the total throughput flattening after four processors.
From these experiments, we conclude that four processors are
optimal. The corresponding throughputs are 73 MB/sec (about 6
TB/day) for ARC files, and 12 MB/sec (about 1 TB/day) for DAT
files.
When metadata from the DAT files is uncompressed its size
increases by a factor of about 11:1. Fortunately, much of this data is
duplicated. For example, a given URL may occur many times in a
crawl and be replicated in many crawls. Therefore duplicate
elimination has been a major consideration in refining the database
design.
Figure 3. Performance of Preload system (ARC files)
Note that during the above experiments no other processes were
running on the system. The overall performance will be lower when
the Preload system is run in production mode at the same time as
other subsystems. Also, during the preliminary phase, only text and
HTML files were fully analyzed to extract links and anchor text.
Processing of other file formats will be added in the future. Some of
these formats will be computationally intensive, e.g., PDF files.
4.3 Database Design
The relational database uses Microsoft SQL Server 2000. Three
important goals when designing the database schema and deciding
how to load the data to the database were: (a) minimize the storage
requirements, (b) maximize the load throughput, and (c) support
efficient logging, backup, and restore.
Conceptually, for each crawl, the database stores metadata about
each page (e.g., information about the content of the page, URL of
the page) and about the links between them (including anchor text
and text surrounding the anchor text). However, to avoid storing
redundant data, the schema is denormalized. The denormalized
schema is shown below in Figure 4. Information about URL and
domain names is stored in a look-up table, since the same URLs and
domain names appear many times in a single crawl and across
crawls. For similar reasons, anchor text, text surrounding the anchor
text, and information about page content are stored in the look-up
tables Dictionary and Page Content respectively, as shown in the
schema in Figure 4. To make the loading of the data faster, separate
tables for each of Page, Page Content and Link are created for each
crawl while the other tables (e.g., the URL table) are shared among
crawls.
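To make the denormalization concrete, the Python snippet below sketches, as simplified SQL issued through pyodbc, how per-crawl Page and Link tables might be created alongside a shared URL look-up table. The table and column names and types are our own illustrative assumptions, not the library's actual schema, which also includes Dictionary and Page Content look-up tables (Figure 4).

import pyodbc

def create_crawl_tables(conn_str: str, crawl_id: str) -> None:
    """Create per-crawl Page and Link tables plus a shared URL look-up table."""
    ddl = [
        # Shared across crawls: each URL string is stored once.
        """IF OBJECT_ID('Url') IS NULL
           CREATE TABLE Url (
               UrlId BIGINT PRIMARY KEY,
               UrlText VARCHAR(2048) NOT NULL)""",
        # One Page table per crawl, named by crawl id.
        f"""CREATE TABLE Page_{crawl_id} (
               PageId BIGINT PRIMARY KEY,
               UrlId BIGINT NOT NULL,
               ContentMd5 CHAR(32) NOT NULL,
               CrawlDate DATETIME NOT NULL)""",
        # One Link table per crawl: source page to destination URL.
        f"""CREATE TABLE Link_{crawl_id} (
               SrcPageId BIGINT NOT NULL,
               DstUrlId BIGINT NOT NULL,
               AnchorTextId BIGINT NULL)""",
    ]
    with pyodbc.connect(conn_str) as conn:
        cur = conn.cursor()
        for stmt in ddl:
            cur.execute(stmt)
        conn.commit()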
Figure 4. Database design: the de-normalized schema
The Preload subsystem outputs separate files conforming to the
schema described above and these files are bulk loaded into the
database. There are many parameters that affect the bulk load
performance. These parameters include: batch size, file size, degree
of parallelism, and interaction with the logging, backup and
recovery system. The synthetic Web data was used to understand
how these parameters affect the loading performance and to tune
them. The first sets of experiments were used to determine the
optimal file size for bulk loading and the number of CPUs used. In
these experiments, default, recovery and backup mechanisms were
used [6].
The results of the first set of experiments indicate that it is optimal
to load each file as a single batch; a file size of 40GB and 4 CPUs
gave the best performance, and around 800GB could be loaded in
one day. However, the experiments to determine the file size and
degree of parallelism showed significant variability. Using the
default logging provided by MS SQL Server 2000, checkpointing
and nightly backups were consuming enormous resources. They
interfered with the bulk loading process and are a probable cause of
the variance seen in the performance benchmarks.
Two observations were made to overcome the performance penalty.
First, data is append-only while being bulk loaded and is read-only
after the bulk load is complete; logging, recovery and backup
mechanisms can be customized to increase the performance and
decrease the variance in loading times. Second, tables in the schema
can reside on different disks and thus can be written in parallel.
Following these two observations, in the current design each table
can be put onto a separate disk as shown in Figure 5. Moreover,
Page, Page Content and Links information for each crawl are put
into separate files. This partitioning according to crawls is easy in
MS SQL Server as separate tables for each of Page, Page Content
and Link are created for each crawl.
The database load subsystem is divided into two programs: a high-level
program that organizes the processes and a low level program
that runs separate loads in parallel. The workflow for loading each
table in each crawl consists of five major steps: (a) Get the files
produced by the Preload subsystem for the current table in the
current crawl and write the relevant log information to an
administrative database; commit the transaction writing the log
information. (b) Write files of the table to the disk corresponding to
the current table via the low level program; files corresponding to
different tables can be written in parallel. (c) Create the necessary
indexes. (d) Back-up the newly written data. (e) Write to the log the
relevant information to indicate that processing of the files for the
current table in the current crawl is complete and commit the
transaction writing the log information. In MS SQL Server 2000,
backups and index creation are all atomic.
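A condensed Python sketch of this per-table workflow is given below. The LoadLog administrative table, database name, key column, file paths, and BULK INSERT options are illustrative assumptions rather than the production code, which also parallelizes the loads across tables and backs up data incrementally.

import pyodbc

def load_crawl_table(conn_str, crawl_id, table, files, key_column="PageId"):
    """Load one table of one crawl, following steps (a)-(e) above."""
    conn = pyodbc.connect(conn_str, autocommit=True)
    cur = conn.cursor()
    # (a) record in an administrative log which files are about to be loaded
    cur.execute("INSERT INTO LoadLog (CrawlId, TableName, Status) VALUES (?, ?, 'started')",
                crawl_id, table)
    # (b) bulk load each preload file as a single batch; tables live on
    #     different disks, so loads for different tables can run in parallel
    for path in files:
        cur.execute(f"BULK INSERT {table}_{crawl_id} FROM '{path}' "
                    "WITH (TABLOCK, FIELDTERMINATOR = '\\t', ROWTERMINATOR = '\\n')")
    # (c) create the indexes once the data is in place
    cur.execute(f"CREATE INDEX ix_{table}_{crawl_id} ON {table}_{crawl_id} ({key_column})")
    # (d) back up the newly written data (shown here as a full-database
    #     backup for simplicity)
    cur.execute(f"BACKUP DATABASE WebLib TO DISK = 'D:\\Backups\\{table}_{crawl_id}.bak'")
    # (e) mark the load of this table for this crawl as complete
    cur.execute("UPDATE LoadLog SET Status = 'done' WHERE CrawlId = ? AND TableName = ?",
                crawl_id, table)
    conn.close()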
Figure 5. Database design: organization of file system
This new design is being implemented and tested, as of January
2006. First indications are that the performance will comfortably
meet the required performance goals. Extensive benchmarking is
required to tune many parameters, such as batch size, file size,
degree of parallelism, and the index management.
4.4 Archiving
Archiving and back-up are expensive operations with complex
trade-offs. Without care, the networking bandwidth and disk
throughput used in logging, back-up, and writing to the tape library
could have a major impact on the system throughput.
As described above, the database design allows the database files to
be backed up incrementally. This provides two options for restoring
the database, by reprocessing the raw data or from the back-up. The
Page Store is not backed-up. If parts of it were ever corrupted, they
would have to be restored by reprocessing the raw data. A current
design project is to reorganize the Page Store to permit efficient
restoration.
The library uses a robotic tape library with LTO3 tape drives. This
is shared with other systems at the center. All unprocessed ARC and
DAT files are copied to the tape library to be stored indefinitely.
This preserves another copy of the Internet Archive's data for the
long-term. This data is unique and could never be replaced. The
industry standard life of these tapes is thirty years but our
expectation is that six years is a more probable time before the tape
library is replaced and all the data will have to be copied onto fresh
media.
SUPPORT FOR THE USER
Figure 6 shows the architecture of the interface that the library
offers to users. This is a three-tier architecture. The data tier consists
of the relational database of metadata and the Page Store; the
middleware tier provides services to access the data, tools to analyze
it, and a choice of Web Services APIs; clients interact with the
middleware tools either through the APIs, or directly.
5.1 Clients
The user tools system was designed to be extensible and scalable.
Specifically, it supports two categories of users: (a) users who
analyze the data remotely from their own computers, perhaps at
another institution or even in another country, and (b)
computationally intensive users, who may wish to run very heavy
analyses of the data using the Cornell Theory Center computing
environment. Corresponding to these two categories of users the
architecture supports two types of clients: Web services clients and
clients that execute their own programs on the library's servers. The
first category requires little technical sophistication from the user,
while the second category trades complexity against the flexibility
of being able to write custom programs and to use the full
processing power available.
Figure 6. Architecture of the interfaces provided for users of the
library
Web services clients are intended to be used remotely, with
moderate demands on their access to the data. Examples of these
clients include running queries using a full text index, or fetching
specific pages using existing indexes. Web service clients may also
be used to start, control, and retrieve results from experiments run
by high-performance clients. Web services are implemented by
using Microsoft's ATL server libraries. They run on a dedicated
Web server. The clients themselves can be implemented in any
language or environment that supports Web services standards.
Users of these clients do not need to know how the data is stored.
They are provided with forms that are automatically converted to
SQL commands by the middleware.
The high-performance clients will for the most part run within the
Cornell Theory Center. They will usually have a high bandwidth
connection to the database. Clients of this form may carry out
research that need lots of computation power, e.g., experiments that
process very large subsets or analyze how the structure of the Web
changes over time. These clients are implemented by linking against
dynamic link libraries (DLLs) provided by the application server
tier.
5.2 Access to the Database
The application server tier accesses the database using Microsoft
database tools. Two main areas of functionality have been
implemented in this tier: Basic Access Services (BAS), and the
Subset Extraction Services (SES). Each consists of two parts: a set
of services, implemented as a series of dynamic link libraries
(DLLs) written in C++, and a Web Services API.
Basic Access Services are for clients that interact directly with the
database. They allow a client to fetch pages given a combination of
URL and date of crawl. They also allow a client to check within
which crawls a given page is available. For example, a focused Web
crawler can retrieve pages from a specified crawl, using a simple
client script that interfaces to the BAS Web services API.
Subset Extraction Services allow a client to select a part of the data
as a subset. Once created, this subset is stored in the database as a
view. Such subsets are useful for running experiments over a
smaller, perhaps random, sample of the Web, as well as selecting
relevant pages for a particular experiments, such as those from a
given domain. For example, a researcher studying government Web
sites might extract textual pages from the .gov domain for a selected
range of dates.
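As a concrete illustration, the sketch below shows how such a subset might be materialized as a database view; the table and column names follow the illustrative schema used earlier in this discussion, and the LIKE-based domain filter is a deliberately crude stand-in for the real selection logic.

import pyodbc

def create_subset_view(conn_str, crawl_id, view_name, domain_suffix="gov"):
    """Store a subset of one crawl as a view, e.g. pages from the .gov domain."""
    sql = f"""
        CREATE VIEW {view_name} AS
        SELECT p.PageId, u.UrlText, p.CrawlDate
        FROM Page_{crawl_id} AS p
        JOIN Url AS u ON u.UrlId = p.UrlId
        WHERE u.UrlText LIKE '%.{domain_suffix}/%'
    """
    with pyodbc.connect(conn_str) as conn:
        conn.cursor().execute(sql)
        conn.commit()

# Example: create_subset_view(CONN_STR, crawl_id="2004_03", view_name="gov_pages_2004")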
5.2.1 Users Beyond Cornell
This digital library is intended for the use of all academic
researchers, not only those based at Cornell. Technically, this is
straightforward. The library is connected to the Internet, including
Internet2, and will soon be available via the TeraGrid.
We are currently developing a code of use policies for researchers.
Mining this data has potential for abuses of privacy. Cornell
researchers need to follow the university's procedures for such
research and we need to find convenient ways to extend the code of
practice to all users of the library.
5.3 User Tools
5.3.1 The Retro Browser
The Retro Browser is an example of a Web services client that is
already implemented and running in a test environment [12]. To a
user, it appears to be a regular Web browser, except that it browses
an historical snapshot of the Web.
The Retro Browser is designed with the non-technical user in mind.
The design assumes that the user may not be technically proficient
and should not be expected to install new software or run special
scripts. After the user has made an initial choice of a date in Web
history, the Retro Browser behaves like any other Web browser. The
user uses a standard browser to carry out all the standard Web tasks,
such as downloading an applet, running a script, or submitting a form. The
only difference is that every URL is resolved to the record in the
Web library for the specified date.
The major component of the Retro Browser is a Web server
configured to be used as a proxy server. To obtain the data from the
database, the proxy server utilizes the Basic Access Web Service
API.
The Retro Browser client interacts with the Retro Browser proxy in
a standard HTTP client-server fashion. To fetch the appropriate
page from the database requires a URL and the date of the crawl,
which is represented by a crawl ID. The proxy server expects a
session cookie specifying a crawl ID with every request. If such a
cookie is not found with the request, the user is asked to specify the
crawl ID. Further requests may or may not contain a cookie since
cookies are tied to a domain. However, the Retro Browser proxy
ensures that the cookie is replicated for all domains using a series of
redirects. In this manner, the architecture ensures that the user is
asked to specify the crawl date for the first request only.
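The cookie handling just described can be sketched in a few lines of Python. The handler below is a toy stand-in for the real proxy server: it assumes a fetch_page(url, crawl_id) helper that wraps the Basic Access Web Service, and it omits the cookie-replication redirects across domains.

from http.server import BaseHTTPRequestHandler, HTTPServer
from http.cookies import SimpleCookie

def fetch_page(url, crawl_id):
    """Placeholder for a call to the Basic Access Web Service."""
    return f"<html><body>{url} as of crawl {crawl_id}</body></html>".encode()

class RetroProxy(BaseHTTPRequestHandler):
    def do_GET(self):
        cookies = SimpleCookie(self.headers.get("Cookie", ""))
        if "crawl_id" not in cookies:
            # No crawl selected yet: ask the user to choose a crawl date.
            self.send_response(200)
            self.send_header("Content-Type", "text/html")
            self.end_headers()
            self.wfile.write(b"<html>Please choose a crawl date first.</html>")
            return
        crawl_id = cookies["crawl_id"].value
        body = fetch_page(self.path, crawl_id)   # resolve the URL in the chosen crawl
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.end_headers()
        self.wfile.write(body)

# HTTPServer(("localhost", 8080), RetroProxy).serve_forever()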
5.3.2 Analysis of the Web Graph
A set of analysis tools is under development that will provide more
complex access and analysis functions. These tools are part of the
application server tier. They are applied to subsets of the data and
accessed either directly or through the Subset Extraction Services
API.
One group of tools operates on the Web graph of a subset.
Hyperlinks from each Web page are stored in the database.
Representation of the graph is by its adjacency matrix using a
compressed sparse row representation. Preliminary software has
been written to read all the links from a given subset of the data and
construct the adjacency matrix. The matrix is then stored in the file
system in a compressed form, which allows performing the basic
operations, such as matrix addition and multiplication. The Cuthill-McKee
algorithm is used to reorder the nodes to create dense blocks
within the matrix to increase the compression ratio and allow in-memory
processing [8].
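The sketch below illustrates the same representation with SciPy: an edge list is packed into a compressed sparse row matrix and reordered with the (reverse) Cuthill-McKee permutation. This is a simplified stand-in for the custom in-memory format described above, not the library's own code.

import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import reverse_cuthill_mckee

def build_adjacency(edges, n_pages):
    """Pack (src, dst) page-id pairs into a CSR adjacency matrix."""
    src, dst = np.array(edges).T
    data = np.ones(len(edges), dtype=np.int8)
    return csr_matrix((data, (src, dst)), shape=(n_pages, n_pages))

def reorder(adj):
    """Permute rows and columns with reverse Cuthill-McKee to create denser
    blocks, which compress better and help in-memory processing."""
    perm = reverse_cuthill_mckee(adj, symmetric_mode=False)
    return adj[perm][:, perm]

# Example: adj = build_adjacency([(0, 1), (1, 2), (2, 0)], n_pages=3)
#          paths2 = adj @ adj    # paths of length two, via sparse multiplication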
5.3.3 Full Text Indexes
Full text indexes are a vital tool for many researchers. For instance a
social science researcher may wish to track the Web pages that refer
to a named individual or may identify trends by burst analysis of
terms used on the Web.
Our initial approach is to provide an indexing service for data
subsets, using the Nutch search engine. It is straightforward to
extract a subset, which is represented by a database view, and create
a full text index of all textual pages in the subset. The only problem
is the processing necessary to index a very large subset.
We are in discussions with Doug Cutting, the principal developer of the
Lucene and Nutch search engines (see footnote 7). He has been working with the
Internet Archive to build indexes of very large collections of Web
pages in ARC format. For this purpose, they are developing a
modified version of Nutch, known as Nutch WAX (Web Archive
eXtensions). Rather than duplicate this effort, we are exploring the
possibility of providing access to these indexes through the Basic
Access Service.
ACKNOWLEDGEMENTS
We wish to thank Brewster Kahle, Tracey Jaquith, John Berry and
their colleagues at the Internet Archive for their support of this
work.
The following Cornell students have contributed to the development
of the library described in this paper: Mayank Gandhi, Nicholas
Gerner, Min-Daou Gu, Wei Guo, Parul Jain, Karthik Jeyabalan,
Jerrin Kallukalam, Serena Kohli, Ari Rabkin, Patrick Colin Reilly,
Lipi Sanghi, Shantanu Shah, Dmitriy Shtokman, Chris Sosa, Samuel
Benzaquen Stern, Jimmy Yanbo Sun, Harsh Tiwari, Nurwati
Widodo, Yu Richard Wang.
This work has been funded in part by the National Science
Foundation, grants CNS-0403340, DUE-0127308, and SES-0537606, with equipment support from Unisys and by an E-Science
grant and a gift from Microsoft Corporation.
Footnote 7: The Lucene search engine is described at
http://lucene.apache.org/. Nutch is described at
http://lucene.apache.org/nutch/.
REFERENCES
[1] Arms, W., Aya, S., Dmitriev, P., Kot, B., Mitchell, R., Walle,
L., A Research Library for the Web based on the Historical
Collections of the Internet Archive. D-Lib Magazine. February
2006. http://www.dlib.org/dlib/february06/arms/02arms.html
[2] Bergmark, D., Collection synthesis. ACM/IEEE-CS Joint
Conference on Digital Libraries, 2002.
[3] Brin, S., and Page. L., The anatomy of a large-scale
hypertextual Web search engine. Seventh International World
Wide Web Conference. Brisbane, Australia, 1998.
[4] Burner, M., and Kahle, B., Internet Archive ARC File Format,
1996. http://archive.org/web/researcher/ArcFileFormat.php
[5] Chakrabarti, D., Zhan, Y., and Faloutsos, C., R-MAT:
recursive model for graph mining. SIAM International
Conference on Data Mining, 2004.
[6] Gerner, N., Sosa, C., Fall 2005 Semester Report for Web Lab
Database Load Group. M.Eng. report, Computer Science
Department, Cornell University, 2005.
http://www.infosci.cornell.edu/SIN/WebLib/papers/Gerner200
5.doc.
[7] Ghemawat, S., Gobioff, H. and Leung, S., The Google File
System. 19th ACM Symposium on Operating Systems
Principles, October 2003.
[8] Jeyabalan, K., Kallukalam, J., Representation of Web Graph
for in Memory Computation. M.Eng. report, Computer Science
Department, Cornell University, 2005.
http://www.infosci.cornell.edu/SIN/WebLib/papers/Jeyabalan
Kallukalam2005.doc.
[9] J. Kleinberg. Authoritative sources in a hyperlinked
environment. Ninth ACM-SIAM Symposium on Discrete
Algorithms, 1998.
[10] Mitchell, S., Mooney, M., Mason, J., Paynter, G., Ruscheinski,
J., Kedzierski, A., Humphreys, K., iVia Open Source Virtual
Library System. D-Lib Magazine, 9 (1), January 2003.
http://www.dlib.org/dlib/january03/mitchell/01mitchell.html
[11] Shah, S., Generating a web graph. M.Eng. report, Computer
Science Department, Cornell University, 2005.
http://www.infosci.cornell.edu/SIN/WebLib/papers/Shah2005a
.doc.
[12] Shah, S., Retro Browser. M.Eng. report, Computer Science
Department, Cornell University, 2005.
http://www.infosci.cornell.edu/SIN/WebLib/papers/Shah2005b
.pdf.
| User Interface;Dataflow;Internet Archive. 1. BACKGROUND 1.1 Research in the History of the Web The Web is one of the most interesting artifacts of our time. For social scientists;history of the Web;basic parameters are hard to come by and it is almost impossible to generate random samples for statistical purposes. But the biggest problem that social scientists face in carrying out Web research is historical;or via tools such as the Google Web API;Database Management;it is a subject of study both for itself and for the manner in which it illuminates contemporary social phenomena. Yet a researcher who wishes to study the Web is faced with major difficulties. An obvious problem is that the Web is huge. Any study of the Web as a whole must be prepared to analyze billions of pages and hundreds of terabytes of data. Furthermore;Storage;Flexible Preload System;Internet Archive;digital libraries;Scalability;the Web changes continually. It is never possible to repeat a study on the actual Web with quite the same data. Any snapshot of the whole Web requires a crawl that will take several weeks to gather data. Because the size and boundaries of the Web are ill defined;Database Access;User Support;computational social science;Full Text Indexes;the desire to track activities across time. The Web of today can be studied by direct Web crawling |
49 | Building a Sense of History: Narratives and Pathways of Women Computing Educators | This working group laid the groundwork for the collection and analysis of oral histories of women computing educators. This endeavor will eventually create a body of narratives to serve as role models to attract students, in particular women, to computing; it will also serve to preserve the history of the female pioneers in computing education. Pre-conference work included administration of a survey to assess topical interest. The working group produced aids for conducting interviews, including an opening script, an outline of topics to be covered, guidelines for conducting interviews, and a set of probing questions to ensure consistency in the interviews. The group explored issues such as copyright and archival that confront the large-scale implementation of the project and suggested extensions to this research. This report includes an annotated bibliography of resources. The next steps will include training colleagues in how to conduct interviews and establishing guidelines for archival and use of the interviews. | INTRODUCTION
During the SIGCSE Technical Symposium held in Reno, NV in
February 2003, a significant number of events focused on under-representation
of women in the computing curriculum and as
computing educators. Eric Roberts' keynote talk, "Expanding the
Audience for Computer Science" [21], was a moving discussion
of inclusiveness and a lament about the consequences of non-inclusion. At the Friday luncheon, Jane Margolis and Allan
Fisher discussed results from their groundbreaking work,
Unlocking the Clubhouse [15]. Several private discussions begun
at the conference and continuing for some time afterward led to a
November 2004 proposal for this Working Group.
In this report, we document the results from a Working Group of
computer science educators at the 2005 ITiCSE conference held
in Lisbon, Portugal. We were drawn together by our shared
concern about women's under-representation among computing
educators. We wished to honor women who had persevered in the
early days of this field and to make their stories available as a
resource for those following after.
ITiCSE working groups are convened for the purpose of intensive
collaborative work on a topic of common interest among the
participants, prior to and during the conference, generally
completing the Working Group's task by conference end. In
contrast, this group was convened to lay the groundwork for a
project that we hope will continue for some time to come. The
Working Group leaders spent the preceding 18 months
formulating the charter for the Working Group: to collect oral
histories from pioneering women in computing education. The
goal of the Working Group meetings at ITiCSE in Lisbon was to
formulate a plan that could bring the charter to fruition.
We envision that the result of this project will be a large oral
history collection of broad scope with potential value to
researchers and others engaged in a variety of different projects.
Because this project could result in a large quantity of data, it
cannot be stored by one person in her private file space. The data
must be maintained and administered by an agency or institution
prepared for such a task.
We write this report for multiple audiences:
1.
Those who want a concise account of what the group
accomplished in Lisbon.
2.
Those whose work will proceed from and build on that done
in Lisbon for the oral history project.
3.
Those who want insights into the evolution and dynamic of a
working group.
4.
Those seeking historical information about the beginnings of
the oral history project.
PREPARING FOR THE PROJECT
This section outlines key steps and insights developed prior to the
ITiCSE conference.
2.1
Building a Background
The initial vision of this project was to collect stories, or
narratives, from successful computing educators, in particular
from women. We were particularly interested in the various paths
these individuals had followed through their careers.
We considered resources related to women and computing
education, in particular factors that seemed to lead to success in
the field. We found that the area of inquiry known as oral history
includes techniques conducive to the type of data-gathering we
visualized. Key resources for our project include a set of oral
history evaluation guidelines [20], an Oral History Association
[17], and a tutorial for conducting oral history from the Oral
History Institute at Baylor University [2]. We discuss these
resources further in Section 4 and in the annotated bibliography.
2.2
Project Vision
While it was clear from its inception that the primary focus for
this Working Group would be women computing educators, we
recognized that this is potentially the first phase of a longer-term
project. The techniques developed in this first phase could be used
in later phases, eventually developing into a broader project
covering the history of computing education as a profession. This
longer-term project should lead to a collection of oral histories
from both men and women in the field as well as other artifacts.
While we expect that future investigators will analyze the
materials collected during each phase of the project, analysis of
the materials is not the driving factor at this time. We feel it is
vital to create an accessible repository of the data to support future
investigations.
2.3
Survey to Gather Ideas
In order to gather ideas about the project from a broad community
of individuals, we designed a survey to request ideas from
colleagues. In recognition of the longer-range potential of this
work, the survey solicited information for the full field of
computing education, rather than restricting responses to the
narrower focus of women computing educators. The full survey
can be found in Appendix D.
We targeted two on-line communities with vested interest in the
topic of this Working Group: Systers [23] and SIGCSE.members
[22]. By the end of the conference, the survey had resulted in
responses from 24 different individuals. Respondents offered
ideas for questions, thoughts about how to recruit additional
subjects for the interviews, and advice for how to proceed. The
respondents suggested 60 educators as potential interviewees, of
whom 34 are women. Several respondents also indicated interest
in becoming involved with the project as planners, interviewers,
or subjects.
FORMULATING THE PROJECT GOALS
At the heart of this project is the recognition that women are
under-represented in the computing field [12]. In particular,
Working Group members had a variety of ideas for how to
address the lack of women in computing education. Among the
ideas:
providing role models
capturing stories of women of different ages to provide a
history of women in computing education
exploring the history of early women computing educators to
learn about and honor the stories of these women, who often
faced difficult circumstances
recording difficulties that women educators encountered
during their careers, and in some cases overcame, as a source
of inspiration and support
Considering the challenges faced by women in early computing
education also brought up questions about how they managed
those challenges: What internal reserves and external resources
did they draw on? How did they sustain their confidence in their
own capabilities, often as the only woman in what was at times a
hostile environment? This led the group to consider self-efficacy
beliefs, which Bandura ([3], p. 391) defines as "people's
judgments of their capabilities to organize and execute courses of
action required to attain designated types of performances". A
person's self-efficacy beliefs can play a significant role in her
capacity to manage difficulties: if she believes she can actualize
her intentions, obstacles presented by the environment impose less
drag.
3.2
Focus for the Short Term
This section addresses a number of key points that the group must
consider for the near term in order to coordinate work by a
distributed set of volunteers.
3.2.1
Protocol for Collecting Stories
A key task of the Working Group was to establish a protocol to be
followed by all volunteers on this project. The resources related to
collecting oral histories provided a rich source of information for
defining the protocol to be used over the life of the project. A
clear protocol will ensure consistency in the quality and general
content of the interviews, especially for interviewers with little
experience in collecting oral histories. We discuss the protocol
further in Section 5.
3.2.2
Identifying Potential Subjects
The primary focus for this phase of the project is women
computing educators who are late in their careers. The project
will seek an international sample in order to ensure a more
complete picture; during the conference in Lisbon, many non-U.S.
educators showed interest in the project.
It is urgent to capture narratives from older and more senior
educators while these pioneers are still able to participate in the
interview process. The Working Group has created a list of
potential subjects, including ideas drawn from the results of the
survey described in Section 2.3.
3.2.3
Legal Issues: Consent, Access, Ownership
A paramount concern for the project is the set of legal issues
associated with this form of academic inquiry. Pressing concerns
that must be resolved include the following:
obtaining permissions to ensure that the materials are openly
accessible for use in future studies and analyses;
determining who will have access to the materials collected in
this project;
determining ownership of the materials; and
designing appropriate copyright and permission forms.
3.2.4
Storage and Transcription of the Interviews
When an interview is complete, the recording(s), notes, and any
supplemental materials must be prepared for later use and
analysis. As a temporary measure, a copy on CD will provide
secure storage of the materials until incorporated into a formal
repository.
To make the interviews more accessible to future users such as
researchers and historians, common practice is to develop a
transcript. Besides being easier to scan quickly, a good quality
transcript makes it easier to create notes and cross-reference parts
of the interview. There are two main approaches to creating
transcripts for an interview:
listening to the taped interview and capturing the dialog
manually, a tedious and exacting process, or
using voice recognition software to automatically create a
transcript in a computer file, a process that tends to be very
error-prone.
Once the transcription is complete, careful editing can make the
work clearer and more accessible. However, editing requires a
deft touch, using pre-determined guidelines specific to the project.
The editing process may consider issues such as how and whether
to correct errors (for example, should transcriber errors be fixed
during editing? Should errors of fact acknowledged by the
interview subject be set right?) and whether to clear out irrelevant
information (for example, deleting [presumably meaningless]
false starts to make the transcript's meaning clearer). Editing may
also introduce paragraphs and subheadings to help highlight
topics and make it easier for a future reader to traverse the
transcribed interview.
3.3
Archival of the Project Materials
For security and availability of the collected materials, it will be
vital to identify a means for the long-term storage of the
interviews and other artifacts. In addition, the repository can be
used to maintain a bibliography of results related to the overall
project. While it is premature to determine where this work might
eventually reside, an excellent example of the appropriate style of
storage and availability is the repository related to the history of
computing maintained by the Charles Babbage Institute [4].
BACKGROUND
To set the context for the Working Group's project, this section
considers four background areas: the area of inquiry known as
oral history, resources related to the history of computing,
resources on the history of women in computing, and work related
to the history of computing education.
4.1
What is Oral History?
Oral history is a method of inquiry with a rich tradition and
specific guidelines. While folklore and storytelling are examples
of oral history through the ages, modern techniques have
improved the reliability of the data one can gather in an oral
history project. The Wikipedia article on oral history [25]
explains:
"Oral history is an account of something passed down by
word of mouth from one generation to another. Oral
history is considered by some historians to be an
unreliable source for the study of history. However, oral
history is a valid means for preserving and transmitting
history."
The Oral History Association [17] has published guidelines that
address several aspects of conducting oral history, including
responsibility to subjects, the public, and the profession; interview
content and conduct; storage and preservation of media and
interviews; and an excellent bibliography.
The Oral History Primer from the Department of History at
California State University, Long Beach [6] offers an overview of
many of the aspects of conducting an oral history project, such as
how to design the study, how to conduct and process the
interview, and how to use the completed interview. This resource
offers a sample outline, a sample transcript, and a sample
agreement form.
As the Working Group prepared for the meetings in Lisbon, a
number of oral history projects helped us formulate ideas about
how the materials from such a project can be planned for and
archived. For example, the London Voices project [14] gathered
oral histories from a variety of individuals and has made these
stories available via a Museum of London website. The Oral
History Directory from Alexander Street Press [18] is an
ambitious effort to index the major oral history collections in
English throughout the world. During our working group
presentation at the ITiCSE conference, we learned of another
project in Brazil, O Museu da Pessoa (Museum of the Person)
[16], which can provide additional ideas. The annotated
bibliography in Appendix E lists relevant projects we have
discovered thus far.
One of the Working Group members in Lisbon, William Aspray,
is a historian of computing who has conducted over 200
interviews eliciting oral histories. The materials related to these
interviews are in the repositories of the Charles Babbage Institute
for History of Computing [4], which we discuss further in the next
section. Aspray's participation in the Working Group provided
key inputs and examples as the group developed the guidelines
and planning reflected in this report.
4.2
History of Computing Resources
Interest in the history of computing is broad-based. A variety of
historical projects focus on areas as diverse as artifacts (e.g.,
punched cards, old computers), the timeline of events and
developments in computing, and the people involved in driving
the field forward. This section highlights a few computing history
projects that seem particularly relevant in the context of this
Working Group's project.
The Charles Babbage Institute (CBI) [4] was started in 1978 and
by 1989 became an historical archives and research center of the
University of Minnesota. CBI preserves relevant historical
documentation in a variety of media, conducts and fosters
research in history and archival methods, and sponsors scholarly
meetings and publications related to the history of computing. The
resources on this site include a set of more than 300 oral histories,
of which no more than 5% appear to be from women.
The IEEE Annals of the History of Computing [8], a quarterly
publication started in 1979, features scholarly articles by leading
computer scientists and historians, as well as firsthand accounts
by computer pioneers. The Annals is intended to serve as a focal
point for collecting and disseminating information on historical
projects and organizations, oral history activities, and
international conferences.
The IFIP Working Group 9.7 on the History of Computing [9],
established in 1992, focuses on the history of computing and
informatics with a view to providing the impetus to preserve the
records and artifacts of information processing inventions,
practices, and activities throughout the world under the auspices
of IFIP and its constituent organizations. Among the goals of this
group are to encourage the development of national archives, to
identify pioneers worthy of appreciation and distinction, to
develop publication plans for histories of Information Processing,
and to promote the inclusion of historical modules in appropriate
curricula.
The Virtual Museum of Computing (VMoC) [24], maintained by
Jonathan Bowen of London South Bank University, is a collecting
point that leads to many different sites across the web. Sections
currently featured on the VMoC site include corporate history and
overviews, history of computing organizations, and general
historical information.
The History of Computing project [7], started by Cornelis Robat
in the late 1980s, is now supported by a non-profit foundation
founded in April, 2000. This project is based in the Netherlands
and has partners from throughout the world, including the
Ukraine, Poland, and Mexico. The project seems focused on
gathering artifacts into an enormous database to ensure that
important historical information remains available.
4.3
Resources on the History of Women in
Computing
Especially relevant to this Working Group's efforts are projects to
collect oral histories of women in computing. Janet Abbate [1] is
conducting a research project to develop a history of women in
computing in the United States and Britain since World War II.
Her project draws on oral history interviews with more than fifty
women who were active in computer science departments and the
software industry.
A project that apparently never came to fruition is mentioned on a
history of computing site created by J.A.N. Lee [13]. This project
was called "Women in (the) Computing History" (with the
acronym "witch"). The description of this project states:
"In keeping with the tradition of documenting women's
history through oral histories, the Women in (the)
Computing History mailing list hopes to augment
traditional resources of women's and histories of
computing by being a repository for women's own
stories throughout the history of computing. All in
computing, too, not just those of us formally schooled in
the computing sciences."
Unfortunately, it appears that this project has disappeared from
view, as we have thus far been unable to establish contact with
anyone associated with the project.
The IFIP Working Group 9.8 on Women and Information
Technology [10] was established in 2001. Aspects of this group's
charge include the exchange of women's experiences as scholars
and professionals in information technology, integration of
feminist perspectives into computer science, and developing an
understanding of the gendered aspects in design, realization, and
implementation of information systems. The aims that seem
especially relevant for this project are analyzing the role of gender
in computing education and educational strategies to support and
retain girls and women.
4.4
History of Computing Education
Resources
Considered separately from resources related to the History of
Computing, few resources address the history of computing
education. In 1982, the Mathematical Association of America
published a perspective on the field of Computer Science. The
first chapter is an in-depth exploration of the development of
Computer Science, with emphasis on the educational underpinnings of this field [19].
In August, 2004, when the IFIP 18th World Computer Congress
was held in Toulouse, France, one component of the Congress
was a History of Computing in Education conference. A book
published in 2004 derives from contributions made at this
conference. This book [11] considers two aspects: the impact of
computing on education over the past forty years and as a
pedagogical tool in computing education. Various articles
consider how organizations have used computers to enhance
teaching and learning, spanning experiences in elementary
education through university studies in several countries.
WORK DURING ITiCSE
Once the Working Group convened in Lisbon, the face-to-face
meeting time was spent primarily on four activities: refining the
purpose of the project, discussing and demonstrating the relevant
techniques, developing a protocol to guide the process of planning
and conducting interviews, and training members in how to use
the interviewing techniques and materials. Each of these aspects
are covered below.
5.1
Refining Purpose
During the Working Group meetings, we refined the purpose and
methods of the project. We realized the need to differentiate
between the purpose of the interviews (how they are structured
and the kind of information they elicit) and the purpose of the
project as a whole (how the interviews will be used). We also
came to realize that our original notion of interviewing
"successful" women computing educators constrained the project
in two ways: 1) defining what we meant by "success" and
2) losing the stories and lessons of those who did not continue in
computing education.
5.2
Demonstration
During the two days of meetings before the ITiCSE conference
began, a key aspect of the Working Group's efforts was to explore
the theory and techniques guiding this project. To this end, the
group discussed general techniques for how to use oral histories.
Two of the group members, Aspray and Barker, have social
science backgrounds: Aspray is a historian of computing, while
Barker is a social scientist whose work focuses on women in
computing. Because most group members had little experience
with conducting this type of inquiry, Aspray overviewed the
purposes of oral history and methods for conducting interviews.
To make the techniques tangible, Aspray conducted a
demonstration interview with Working Group leader Barbara
Owens as the subject. In preparation for the interview, Aspray and
the remaining Working Group members formulated a set of topics
and prompts to include in the interview. The demonstration
interview was recorded on several digital devices, both to test the
devices and to avoid the possible loss of information due to
technical difficulties.
After the demonstration interview was completed, Aspray and
Barker led the group in deconstructing the interview (deconstructing an interview is different from analyzing the results: the former focuses on process, while the latter considers content). During this
session, the group reflected on what went well, what could be
improved, and what to change in the future.
5.3
The Protocol
A major product of our Working Group was a protocol for this
project. After much discussion, we concluded that having a
common set of materials would be vital for achieving consistent
results in interview sessions conducted by a wide variety of
volunteers. The protocol materials that will be used to support the
interview process include an opening script, an outline of topics, a
set of sample probing questions or prompts, and guidelines for
conducting interviews. We discuss each of these items in the
remainder of this section.
5.3.1
The Opening Script
The opening script is used by the interviewer to set the scene
before beginning the session. For example, the interviewer should
caution the subject that it is common for sensitive topics to come
up during the course of a session and that the subject should feel
free to ask that the recording be turned off. As the session gets
underway and the interviewer starts the recording device(s),
specific opening information should be read onto the recording in
order to provide a full context for this session. The interviewer
could state, for example:
"This is an interview with (interview subject's name)
from (name of institution), conducted by (interviewer's
name). This interview is being recorded on (date) at
(city, country). It is part of the (computing education
oral history series / formalized name yet to be
determined).
"Did we give and pronounce your name correctly?"
2
After this, the interviewer can begin giving prompts, such as "Tell
us about your parents, for example what they did for a living." In
this example statement, using the pronoun "us", rather than "me",
can help the subject remember that her story is being told for a
wider audience than just the interviewer at hand.
5.3.2
Outline of Topics
The Working Group developed an outline of relevant topics to be
used in guiding the interviews. The outline can also assist the
interviewer in preparing for the interview, with the goal of making
the face-to-face time with the subject as effective as possible. The
Outline of Topics that the Working Group developed appears in
Appendix A.
5.3.3
Sample Probing Questions
Prompts are follow-up questions designed to elicit more detailed
answers or follow up a thread introduced in an earlier answer.
Because an interviewer must feel free to pursue topics that emerge
as the session progresses, the set of prompts provides examples for
how the interview can proceed, rather than a strict step-by-step
recipe. The Working Group developed a list of example prompts,
which appears in Appendix B.
5.3.4
Guidelines for Conducting Interviews
This oral history project will require many interviewers in order to
increase the number of stories that can be collected within a
limited timeframe and across a wide geographical area. Guidelines
will help coordinate the efforts across volunteers in order to
achieve a level of consistency across the results. Guidelines will
also help the volunteers prepare for and conduct sessions. The
guidelines can assist an interviewer in establishing the proper
setting, maintaining an appropriate flow, and helping the subject
focus on the issues at hand.
To prepare for the session, the interviewer should study relevant
background materials such as the subject's resume or vita, their
professional publications, and anything written about the subject
in secondary literature. This information can help the interviewer
plan and prioritize the specific prompts, as well as the order of
prompts, to be used during the interview session. At the same
time, the interviewer should not use the outline of topics as "tick
off" items. An effective interview will be interactive in nature,
with the specific choice and ordering of prompts based on
previous answers.
Because the duration of a session must be limited to no more than
an hour or two, the time must be used effectively. This makes it
essential for the interviewer to come to the session as well
prepared as possible. The face-to-face time during the session
should be used to explore tacit knowledge and the reasons for
certain behaviors and outcomes, providing insights into the
motivations behind events in the subject's life. To use the time
well, the interviewer must avoid spending precious time during
the session pursuing information that can be gleaned from the
subject's vita or other readily available materials.
The Working Group's guidelines for conducting interviews
appear in Appendix C.
5.4
Training
On the second day of the ITiCSE Working Group meetings, the
group divided into two sub-groups, each of which included three
computing educators and a "consultant" (either Aspray or Barker).
In these sub-groups, the computing educators tested the tentative
protocol to conduct practice interviews with one another. Each
interview session lasted about 15 minutes, with one computing
educator interviewing a second using the list of topic areas, while
the third member watched and listened from the side. During the
practice interviews, both sub-groups explored technologies that
can be used to record the interviews and transcribe the audio
recordings, testing multiple devices and in one group using a
headset to capture the answers for automatic transcription. The
"consultants" observed during the interviews, then helped the
group deconstruct the interviews and critique the methods.
REFLECTIONS AFTER ITiCSE
Working Group members have offered the following as the most
positive outcomes of the time in Lisbon (given in no particular
order):
1. Learning techniques of oral history and observing an
experienced interviewer using the techniques during a
demonstration.
2. Hearing diverse ideas about project goals and reaching
consensus.
3. Fleshing out the protocol for conducting interviews, thus
making clear what should be asked during a session. The
protocol includes a detailed set of guidelines for conducting
an oral history interview (Appendix C), an opening script
(Section 5.3.1), a topic outline (Appendix A), and sample
prompts (Appendix B).
4. Being trained in interview techniques, which allowed the
group to experiment with the equipment and pilot the process
of gathering histories. Several members expressed the desire
for additional training and for the opportunity to review
recorded interviews conducted by more experienced
individuals.
5. Understanding that the operative dynamic in an interview/oral
history differs from that in conversation, although the
similarities make it tricky to balance the exchange. Members
felt very positive about the experience of the practice
interviews. One member reported that she often found herself
caught up in the stories the subjects were telling, leading her
to realize that it takes effort to learn to stick to the list of
topics that the interviewer wants to cover.
6. Seeing the importance of privacy considerations, as well as the
need to obtain permission and plan for storage and access.
7. Getting to know the other group members and hearing
significant parts of some of their stories. Even this small sample
gave group members a feeling for the wide variety of paths
taken and challenges overcome.
8. Discovering that the individual paths were an interplay
between a recitation of facts (dates and places) and the deeply
felt emotional life that often motivated a person's actions.
This underscored the need for a respectful and reasonably
well-trained approach by each interviewer.
The group encountered a number of difficulties with the software
and equipment. It became clear that the equipment is the weakest
link in performing an interview. The interviewer cannot be
certain that the equipment is functioning as expected until he or
she takes a break to review the recording.
Based on experimentation during the Working Group meetings,
the Working Group can make the following observations:
The group used several different models of the Olympus DVR
(Digital Voice Recorder) and was able to get each model to
work properly.
Direct recording to the computer worked well through the
Olympus VN-240PC digital recorder.
Transferring the recordings to CD was simple and seemed like
an excellent way to create a temporary archive.
While the Dragon Naturally Speaking Preferred speech
recognition software [5] may be helpful, it will require further
experimentation to use it effectively.
While the group's experiences with the i-River recording
devices were not successful, one member has been pleased
with the performance of this device in the past.
In general the digital recorders worked well. However, in
every session at least one of the recorders failed, generally due
to inexperience, human error, or time limitations. A key
conclusion is that equipment redundancy is imperative. We
decided it is safest to use at least three recording devices
during each interview in order to ensure the best possible
quality of recording.
Group members were surprised at the difficulty of transcribing
recorded interviews. Some members had hoped there would be
useful tricks or sleight-of-hand for doing transcription.
Unfortunately, creating a good quality transcription is simply a
lengthy and intense process. A group member who transcribed
her own interview found that it took nearly five times as long as
the interview duration to complete the transcription of the session!
During early planning for the Working Group, the co-leaders had
hoped to "... conduct initial analysis of pilot interview data, and
identify emergent themes". In the end, the group spent no time
with formal analysis of the practice interviews. Instead, the group
used the time to hone interview techniques and understand how to
move the project forward.
During the first day of the ITiCSE conference (after the Working
Groups had each met for two full days), each Working Group
presented their group's mission and progress for conference
attendees. The main impressions that Working Group members
brought away from this presentation were very positive, with
many attendees showing strong interest in the project and offering
encouragement as well as suggestions for potential subjects.
WHAT COMES NEXT?
While the experience during the ITiCSE conference was valuable,
the time in Lisbon was too short and the expectations too high for
the group to be able to complete everything it had hoped to
accomplish. By the end of the conference and the completion of
this report, the Working Group had prepared an annotated
bibliography, learned about oral histories, piloted hardware and
software for recording, and set the stage for ongoing collection of
histories, including a protocol to follow in planning for and
conducting interviews. While the Working Group did consider
legal and ethical issues during their discussions, a great deal must
be resolved before the process of active interviewing can begin. In
particular, access and ownership issues must be resolved before
we can begin collecting interviews.
The Working Group has an excellent start in recruiting volunteers
to help in carrying out all aspects of the project. However, the
work of the volunteers must be coordinated in order to produce
coherent results. In addition, volunteers who conduct interviews
must be trained in the techniques. Various Working Group
members have agreed to propose workshops and other training
opportunities at a variety of venues and events.
A challenge will be to select the set of subjects from the many
suggestions we have received. For the current stage of work, we
will include only women computing educators who are retired or
in the latter stages of their careers. The entire project has an
underlying sense of urgency because many of the pioneers are in
poor health or have already passed away. We have seen clear
interest in eventually expanding the project to include the stories
of women in earlier parts of their careers and men at any stage of
their careers.
Obtaining one or more sources of funding will be essential to
achieving the full vision of the project. Funding can support
aspects such as transcription and review, travel to conduct training
or to meet with subjects, and setting up permanent archival
facilities.
While finding a permanent home for the oral histories is not
essential during the early phases of the project, it is important if
the collected stories are to be useful and usable. In addition to
providing for archival of the recordings and transcriptions, the
eventual home should allow for including contextual materials,
such as course and curriculum artifacts. The archival capability
must include sophisticated support for indexing and searching in
order to support future visitors in browsing the collection and
analyzing the interview transcriptions and other artifacts.
Ultimately, whether this project will succeed or fail depends on
the level of engagement we can generate for all phases of the
project. To start, we must involve the computing education
community in collecting stories from women computing educators
who have retired or are about to retire. At the same time, we must
create and maintain a sense of excitement about the potential of
the project. If there are sufficiently many interested volunteers,
the full-blown project to collect stories from men and from
women earlier in their careers could certainly get underway in
parallel with the current efforts.
ACKNOWLEDGMENTS
The individuals who met in Lisbon enjoyed the unique
opportunity to learn these techniques and plan for what we hope
will be a productive long-term project. The group was fortunate
to have additional individuals involved in the pre-conference
discussions, several of whom made key contributions to the
preparations. In particular, the Working Group is grateful to
Bettina Bair for her enthusiastic support. Bettina set up the group
wiki and provided feedback as well as many ideas for resources.
We are also grateful to the others who participated in the pre-conference
discussions, including Anne Applin and Amardeep
Kahlon. We thank the individuals who responded to our survey
and offered suggestions of future subjects and possible questions.
Comments from the anonymous reviewers allowed us to refine the
purpose of the report and improve the presentation. Late
discussions with Susan Gerhart provided additional ideas and
inspiration for future work.
REFERENCES
The references given here are used directly in the text. In
Appendix E we provide an annotated reference list, which repeats
several of these references supplemented with our annotations.
[1] Abbate, J. Finding our Place in History: Six Decades of Women in Computing, Grace Hopper Celebration of Women in Computing, October 6-9, 2004, Chicago, IL. online: gracehopper.org/Proceedings/PDF/wpp_Abbate.pdf; last modified 10 January 2005, accessed 17 June 2005.
[2] Baylor University Institute for Oral History, Oral History Workshop on the Web. online: www.baylor.edu/Oral_History/; last modified 25 April 2005, accessed 17 June 2005.
[3] Bandura, A. Social foundations of thought and action: A social cognitive theory. Englewood Cliffs, NJ: Prentice Hall, 1986.
[4] Charles Babbage Institute, Center for the History of Information Technology, University of Minnesota. online: www.cbi.umn.edu/index.html; accessed 17 June 2005.
[5] Dragon Naturally Speaking Preferred speech recognition software, 1st-Dragon information page: www.1st-dragon.com/dragnatspeak1.html; accessed 17 July 2005.
[6] Gluck, S. B. An Oral History Primer, Department of History, California State University, Long Beach. online: www.csulb.edu/depts/history/relprm/oralprimer/OHprimer.html; last updated 6 March 2001, accessed 28 July 2005.
[7] The History of Computing project. online: www.thocp.net/; accessed 20 July 2005.
[8] IEEE Annals of the History of Computing. online: www.computer.org/annals/; accessed 28 July 2005.
[9] IFIP Working Group 9.7 on the History of Computing. online: http://www.comphist.org/; last modified 12 July 2005, accessed 20 July 2005.
[10] IFIP Working Group 9.8 on Women and Information Technology. online: www.informatik.uni-bremen.de/~oechteri/IFIP/; last modified 24 February 2004, accessed 20 July 2005.
[11] Impagliazzo, J., & Lee, J. A. N. History of computing in education. Boston: Kluwer, 2004.
[12] Lazowska, E. Pale and male: 19th century design in a 21st century world. Women in computing history, SIGCSE Bulletin inroads, 34(2), June 2002, pp. 11-12.
[13] Lee, J. A. N. History of Computing. online: ei.cs.vt.edu/~history/; last updated 6 December 2002, accessed 20 July 2005.
[14] London Voices, Museum of London. online: www.museumoflondon.org.uk/MOLsite/londonsvoices/; last modified 8 July 2004, accessed 17 June 2005.
[15] Margolis, J. & Fisher, A. Unlocking the Clubhouse: The Carnegie Mellon Experience, SIGCSE Bulletin inroads, 34(2), June 2002, pp. 79-83.
[16] Museu da Pessoa [translation: Museum of the Person]. online: www.museudapessoa.net/; accessed 20 July 2005.
[17] Oral History Association. online: omega.dickinson.edu/organizations/oha/; accessed 20 July 2005.
[18] Oral History Directory, Alexander Street Press. online: www.alexanderstreet2.com/oralhist/; last modified 18 March 2005, accessed 19 June 2005.
[19] Pollack, S. V. The Development of Computer Science. In S. V. Pollack (Ed.) Studies in Computer Science. MAA Studies in Mathematics, Volume 22. The Mathematical Association of America, 1982.
[20] Ritchie, D. A. Oral History Evaluation Guidelines. Pamphlet Number 3. Oral History Association. online: www.dickinson.edu/oha/pub_eg.html; last modified 4 May 2004, accessed 17 June 2005.
[21] Roberts, E. Expanding the audience for computer science. PowerPoint version of keynote talk presented at the 2003 SIGCSE Technical Symposium, Reno, Nevada.
[22] SIGCSE.members, the members-only mailing list of the ACM Special Interest Group on Computer Science Education. Subscription information online at www.sigcse.org/.
[23] Systers, an online community for technical women in computing. Subscription information online at www.mecca.org/.
[24] Virtual Museum of Computing (VMoC). online: http://vmoc.museophile.org/; last modified 4 January 2005, accessed 20 July 2005.
[25] Wikipedia, Oral History. online: en.wikipedia.org/wiki/Oral_history/; last modified 17 June 2005, accessed 20 July 2005. | Oral History;Computing Education History |
5 | A Dependability Perspective on Emerging Technologies | Emerging technologies are set to provide further provisions for computing in times when the limits of current technology of microelectronics become an ever closer presence. A technology roadmap document lists biologically-inspired computing and quantum computing as two emerging technology vectors for novel computing architectures [43]. But the potential benefits that will come from entering the nanoelectronics era and from exploring novel nanotechnologies are foreseen to come at the cost of increased sensitivity to influences from the surrounding environment. This paper elaborates on a dependability perspective over these two emerging technology vectors from a designer's standpoint. Maintaining or increasing the dependability of unconventional computational processes is discussed in two different contexts: one of a bio-inspired computing architecture (the Embryonics project) and another of a quantum computational architecture (the QUERIST project). | INTRODUCTION
High-end computing has reached nearly every corner of our
present day life, in a variety of forms tailored to accommodate either general purpose or specialized applications. Computers may be considered fine exponents of the present day's technological wave; if not its finest, they certainly count as solid, indispensable support for the finest.
From the very beginning of the computing advent, the main target
was squeezing out any additional performance. The inception
period was not always trouble-free: accurate computation results were required at an ever faster pace on a road that has since become manifold. Some applications require computational speed as a top priority; others aim for the highest possible dependability while still delivering sufficient performance levels.
Several definitions for dependability have been proposed: "the
ability of a system to avoid service failures that are more frequent
or more severe than is acceptable" <A href="5.html#11">[2], or "the property of a
computer system such that reliance can justifiably be placed on
the service it delivers" <A href="5.html#11">[9]<A href="5.html#12">[45]. Dependability is therefore a
synthetic term specifying a qualitative system descriptor that can
generally be quantified through a list of attributes including
reliability, fault tolerance, availability, and others.
In the real world, a dependable system would have to operate normally over extended periods of time before experiencing any failure (reliability, availability) and to recover quickly from errors
(fault tolerance, self-test and self-repair). The term "acceptable"
has an essential meaning within the definition of dependability, setting the upper limits of the damage that can be tolerated by
the system while still remaining functional or computationally
accurate. A dependability analysis should take into consideration
if not quantitative figures for the acceptable damage limit, at least
a qualitative parameter representation for its attributes.
Dependable systems are therefore crucial for applications that
prohibit or limit human interventions, such as long-term exposure
to aggressive (or even hostile) environments. The best examples
are machines that must operate over long terms, as required for managing deep-underwater or nuclear activities and for outer space exploration.
There are three main concerns that should be addressed throughout a system's design in order to achieve high dependability [42]:
1.
Specifying the dependability requirements: selecting the
dependability requirements that have to be pursued in
building the computing system, based on known or assumed
goals for the part of the world that is directly affected by the
computing system;
2.
Designing and implementing the computing system so as to
achieve the dependability required. However, this step is hard
to implement since the system reliability cannot be satisfied
simply from careful design. Some techniques can be used to
help to achieve this goal, such as using fault injection to
evaluate the design process.
3.
Validating a system: gaining confidence that a certain
dependability requirement/goal has been attained.
This paper will address these main concerns through an attempt to
provide an in-depth view of modern computing directions and paradigms, which we consider to be representative of the efforts involved in improving overall dependability.
1.1
Motivations
We have listed some of the applications of dependable computing
systems as linked to activities that take place in special
environments, such as deep underwater or outer space. At a very
first sight, these applications would appear specific enough to not
encourage a specific design for dependability approach in
computing. However, evidence suggest this is hardly the case; on
the contrary, it is difficult to imagine a domain left unconquered
by computer systems during times when industrial, transport,
financial services and others do rely heavily on accurate computer
operation at any given moment. If computer innacuracies could be
more easily overlooked at home, professional environments
cannot accept such missbehaviors.
Yet the recent history of computing provides evidence that
dependability is not a sine qua non feature. During their life
cycle, electronic devices constantly suffer a number of influences
that manifest predominantly over transient regimes, which in turn
introduce a variety of errors unified in the literature under the
name of transient faults, soft errors or single event upsets (SEUs).
The rate at which electronic devices are affected is known as the soft error rate, or simply SER, and is measured in fails per unit time. Because it relies on transient phenomena due to changing states and logical values, digital electronics makes up a special category that is particularly affected by soft errors. Whatever name they are referred to by, these errors affect computing processes and are due to electromagnetic noise and/or external radiation rather than design or manufacturing flaws [28].
One known cause of soft fails affecting digital devices is radioactive decay. Radioactive isotopes, widely used for a range of purposes, may contaminate semiconductor materials and lead to soft errors; evidence is available throughout the literature, both from empirical observations and from experimental results [20]. Similarly, cosmic rays, which contain a broad range of energized atomic and subatomic particles, may lead to the appearance of soft fails.
Computers are therefore susceptible to soft errors, an issue that will potentially become critical with the advent of emerging technologies. As acknowledged by the International Technology Roadmap for Semiconductors (ITRS), issued at the end of 2004 [43], the microelectronics industry faces a challenging task in going to and beyond the 45nm scale in order to address "beyond CMOS" applications. Scaling down the technology will enable an extremely large number of devices to be integrated onto the same chip. However, the great challenge will be to ensure that the new devices remain operational at this scale [6], since they will exhibit sensitive behavior with respect to soft fails. In order to address the negative effects brought by technology scaling, it is to be expected that significant control resources will need to be implemented [3].
Another challenging aspect concerning emerging technologies is
to match the newly developed device technologies with new
system architectures, a synergistic/collaborative development of
the two being seen as likely to be very rewarding. The potential of
biologically-inspired and quantum computing architectures is
acknowledged by the ITRS report on emerging technologies [43] (see Figure 1). This paper will investigate the relevance of soft
fails and attempt to provide means of harnessing their negative
effects on modern computing in the context of biologically-inspired
and quantum computing architectures.
Figure 1: Bio-inspired and quantum computing are
acknowledged as architectural technology vectors in emerging
technologies [43]
1.2
Paper Outline
This paper is structured as follows. Section 2 will address the first
main concern, that is, specifying and selecting dependability
requirements that will have to be pursued when building a
computational platform. Parameters that describe and quantify
dependability attributes, such as reliability, will be introduced,
with a highlight on their accepted models and their issues. A
particular consideration will be given to the failure rate parameter,
which is the basis of all reliability analyses.
Section 3 will approach some of the means of designing for dependability; it will therefore elaborate upon two emerging technology vectors, as seen by the ITRS report [43], which define
two novel architectures, namely biologically-inspired (or bio-inspired
) and quantum computing. We will introduce two projects
and their corresponding architectures, called Embryonics (as a
biologically-inspired computing platform) and QUERIST (as a
quantum computing platform designed to allow and study error
injection). These two architectures are representative for the
coming age of nano-computing, where computational processes
take place as encoded at the very inner core level of matter, be it
semiconductor material (for nanoelectronics, targeted here by the Embryonics project) or atomic scale dynamics (for quantum computing, targeted here by the QUERIST project). This section will then introduce dependability aspects within bio-inspired computing (the Embryonics project being investigated in SubSection 3.1) and within quantum computing (the QUERIST project being investigated in SubSection 3.2).
Finally, Section 4 will present the conclusions and prospects for
designing emerging technology dependable computing systems,
as we see them.
DEPENDABILITY ATTRIBUTES
An important dependability attribute for any given system lies in
its capacity to operate reliably for a given time interval, knowing
that normal operation was delivered at the initial time [8]. Reliability functions are modelled as exponential functions of the parameter $\lambda$, which is the failure rate. The reliability of a system is the consequence of the reliability of all of its subsystems. The heterogeneity of the system makes a quantitative assessment of its overall reliability difficult; moreover, estimating the reliability functions is further complicated because formally rigorous failure data is not commercially available, much of it being kept under military mandate [44].
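For reference, the standard textbook form of this exponential model (these are generic reliability-theory formulas, not equations reproduced from this paper) is

$R(t) = e^{-\lambda t}, \qquad \mathrm{MTTF} = \int_0^{\infty} R(t)\,dt = \frac{1}{\lambda},$

and, in the simplest case of a series composition of $n$ independent subsystems, the system failure rate is $\lambda_{sys} = \sum_{i=1}^{n} \lambda_i$; this is the kind of subsystem composition discussed next.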
The failure rate for a given system can be modelled as a function of the failure rates of its individual subsystems, suggestions being present in the MIL-HDBK-217 document, which is publicly available [44]. However, this document has been strongly
criticized for its failure rate estimations based on the Arrhenius
model, which relates the failure rate to the operating temperature:
$\lambda = K e^{-E/(K_B T)}$    (1)
where K is a constant, $K_B$ is Boltzmann's constant, T is the absolute temperature and E is the "activation energy" for the process [18]. Quantitative values for failure rates show significant differences between those predicted using MIL-HDBK-217 and those from testing real devices (see Figure 2). There are two conclusions that can be drawn from this:
1.
quantitative estimations for failure rate values are strongly dependent on the quality of the information used; unfortunately, current reliable information about electronic devices is known to be lacking [44];
2.
despite differences between predicted and real values, the MIL-HDBK-217 methodology can be useful for qualitative analyses in order to take decisions regarding the sub-system parts that should benefit from improved designs.
Figure 2. Predicted vs. real failure rates plotted against temperature [18]
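The following short Python sketch simply evaluates equation (1) at a few operating temperatures; the constants K and E are hypothetical placeholders chosen only to illustrate how strongly the predicted failure rate depends on temperature, and are not values taken from MIL-HDBK-217.

import math

# Illustrative evaluation of the Arrhenius model in equation (1):
# lambda = K * exp(-E / (K_B * T)).
K_B = 8.617e-5  # Boltzmann's constant in eV/K

def arrhenius_failure_rate(K, E, T):
    """Predicted failure rate at absolute temperature T (in kelvin)."""
    return K * math.exp(-E / (K_B * T))

K, E = 1.0e3, 0.7  # hypothetical proportionality constant and activation energy (eV)
for T in (300.0, 325.0, 350.0):
    rate = arrhenius_failure_rate(K, E, T)
    print(f"T = {T:.0f} K  ->  predicted failure rate = {rate:.3e}")

# The predicted rate grows by well over an order of magnitude between 300 K and
# 350 K, which illustrates why Arrhenius-based predictions can diverge sharply
# from the measured failure rates shown in Figure 2.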
So far the failure rate of digital devices has been considered as due to internal causes. However, this is not always the case, soft fails being equally present due to the aggressive influences of the external environment, which also have to be modelled [22]. The external environment features highly dynamic changes in its parameters, which will eventually affect the normal operation of digital devices that lack sufficient protection or the ability to adapt. Ideally, computing devices would behave in a consistent and accurate manner regardless of fluctuations in environmental parameters. This is either a consequence of soft error mitigation techniques or due to flexible hardware/software functionality that allows the system as a whole to adapt to environmental changes and tolerate induced faults.
While certain soft error mitigation techniques are available, the
technology scaling towards nanoelectronics affects their
efficiency by integrating a larger number of devices per chip
(which requires a larger amount of redundant/control logic or
other measures), which feature, at the same time, smaller
dimensions (which renders an electronic device much more
sensitive to the influence of stray energetic particles that reach it as part of cosmic rays). Both aspects are involved in the development of the two emerging technology vectors mentioned in SubSection 1.1, although with slightly different motivations: while the nature of the quantum environment prohibits precise computation in the absence of fault tolerance techniques, such techniques are targeted by bio-inspired computing as a means of improving the dependability of a computing platform.
2.1
Bio-Inspired Computing
If living beings may be considered to fulfill computational tasks,
then Nature is the ultimate engineer: each living being exhibits solutions that were successfully tested and refined in ways human engineers could never afford. One reason is time: the
testing period coinciding with the very existence of life itself.
Another reason is variety and complexity: Nature has found and
adapted a variety of solutions to address complex survivability
issues in a dynamically changing environment. No matter how
Nature approached the process of evolution, engineering could perhaps benefit most from drawing inspiration from its mechanisms rather than from trying to develop particular techniques.
Bio-inspired computing is not a new idea. John von Neumann was
preoccupied with designing a machine that could replicate itself and was quite interested in the study of how the behavior of the human brain could be implemented by a computer [13][14]. He also pioneered the field of dependable computing by studying the possibility of building reliable machines out of unreliable components [15]. Unfortunately, the dream of implementing his self-reproducing automata could not come true until the 1990s,
when massively programmable logic opened the new era of
reconfigurable computing.
But when trying to adapt nature's mechanisms to digital devices, it becomes most evident that biological organisms are rightfully the most intricate structures known to man. They continuously demonstrate highly complex behavior due to massive, parallel cooperation between huge numbers of relatively simple elements, the cells. And considering the uncountable variety of living beings, with life spans of up to several hundred years (for the animal kingdom) or even thousands (for the vegetal kingdom), it seems nature
is the closest spring of inspiration for designing dependable, fault
tolerant systems.
Investigating the particularities of natural systems, a taxonomy of
three categories of processes can be identified [32]:
1.
Phylogenetic processes constitute the first level of
organization of the living matter. They are concerned with the
temporal evolution of the genetic heritage of all individuals,
therefore mastering the evolution of all species. The
phylogenetic processes rely on mechanisms such as
recombination and mutation, which are essentially
nondeterministic; the error rate ensures here nature's
diversity.
2.
Ontogenetic processes represent the second level of
organization of the living matter. They are also concerned
with the temporal evolution of the genetic heritage of, in this
case, a single, multicellular individual, therefore mastering an
individual's development from the stage of a single cell, the
zygote, through successive cellular division and specialization,
to the adult stage. These processes rely on deterministic
mechanisms; any error at this level results in malformations.
3.
Epigenetic processes represent the third level of organization
of the living matter. They are concerned with the integration
of interactions with the surrounding environment therefore
resulting in what we call learning systems.
This taxonomy is important in that it provides a model called POE
(from Phylogeny, Ontogeny and Epigenesis) that inspires the
combination of processes in order to create novel bio-inspired
<A href="5.html#4">hardware (see Figure 3). We believe this is also important from a
dependability engineering perspective, for the following reasons:
1.
Phylogenetic processes were assimilated by modern
computing as evolutionary computation, including genetic
algorithms and genetic programming. The essence of any
genetic algorithm is the derivation of a solution space based
on recombination, crossover and mutation processes that
spawn a population of individuals, each encoding a possible
solution. One may consider that each such step, with the
exception of discovering the solution, is equivalent to a
process of error injection, which in turn leads to wandering
from the optimal solution (or class of solutions). However,
genetic algorithms prove to be successful despite this error
injection, the fitness function being responsible for the
successful quantification of the significance of the "error".
Therefore genetic computation is intrinsicaly resilient to
faults and errors, largely due to the fact that they are part of
the very process that generates the solutions.
2. Ontogenetic processes have been implemented in digital hardware with modular and uniform architectures. Such an architecture enables the implementation of mechanisms similar to the cellular division and cellular differentiation that take place in living beings [31]. These mechanisms bring the advantage of distributed and hierarchical fault tolerance strategies: the uniformity of the architecture also makes any module universal, that is, able to take over the role of any other damaged module.
3. Epigenetic processes were assimilated by modern computing mainly as artificial neural networks (or ANNs) as inspired by the nervous system, and much less as inspired by the immune or endocrine systems of superior multicellular living beings. ANNs are known to have a generalization capacity, that is, to respond well even if the input patterns are not part of the patterns used during the learning phase. This means that ANNs possess a certain ability to tolerate faults, whether they manifest at the inputs or inside their internal architecture.
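To make the first point concrete, here is a minimal genetic-algorithm sketch in Python (our own toy illustration, not taken from the cited works; the bit-string target, population size and mutation rate are arbitrary assumptions). Mutation continuously injects bit-flip "errors", yet fitness-driven selection still drives the population toward the optimum:

import random

# A minimal sketch illustrating how a genetic algorithm tolerates "error
# injection": mutation corrupts candidate solutions, yet the fitness function
# keeps selection pressure toward the optimum.

TARGET = [1] * 20          # hypothetical optimum: a string of twenty 1s
POP_SIZE, GENERATIONS, MUTATION_RATE = 30, 60, 0.05

def fitness(ind):
    # Number of bits matching the target (higher is better).
    return sum(1 for a, b in zip(ind, TARGET) if a == b)

def mutate(ind):
    # "Error injection": each bit may flip with a small probability.
    return [b ^ 1 if random.random() < MUTATION_RATE else b for b in ind]

def crossover(p1, p2):
    cut = random.randrange(1, len(p1))
    return p1[:cut] + p2[cut:]

random.seed(0)
population = [[random.randint(0, 1) for _ in TARGET] for _ in range(POP_SIZE)]
for gen in range(GENERATIONS):
    # Selection: keep the fitter half, then refill by crossover + mutation.
    population.sort(key=fitness, reverse=True)
    survivors = population[:POP_SIZE // 2]
    offspring = [mutate(crossover(random.choice(survivors), random.choice(survivors)))
                 for _ in range(POP_SIZE - len(survivors))]
    population = survivors + offspring

print("best fitness:", fitness(max(population, key=fitness)), "of", len(TARGET))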
With the advent of field programmable logic (of which the most salient representatives are the FPGAs) it is now possible to change hardware functionality through software, thus allowing information to govern matter in digital electronics. This is not dissimilar to what happens in nature: information coded in DNA affects the development of an organism. A special kind of such digital devices that change their behavior dynamically are known as evolvable or adaptive hardware; they are bio-inspired computing systems whose behaviors may change according to computational targets, or, if harsh or unknown environments are to be explored, for the purpose of maximizing dependability.
Figure 3. The POE model of bio-inspired systems [32]
2.2 Quantum Computing
Error detection and correction techniques are vital in quantum computing due to the destructive effect of the environment, which therefore acts as an omnipresent error generator. Error detection and correction must provide a safe recovery process within quantum computing processes by keeping error propagation under control. Without such dependability techniques there could be no realistic prospect of an operational quantum computational device [19].
There are two main sources of errors: the first is the erroneous behavior of the quantum gate, producing the so-called processing errors; the second is the macroscopic environment that interacts with the quantum state, producing the storing and transmitting errors.
The consistency of any quantum computation process can be destroyed by inaccuracies and errors if the error probability in the basic components (qubits, quantum gates) exceeds an accuracy threshold. This is a critical aspect since the microscopic quantum states are prone to frequent errors.
The main error source is the decoherence effect [16]. The environment is constantly attempting to measure the sensitive quantum superposition state, a phenomenon that cannot be avoided technologically since it is not (yet) possible to isolate such states perfectly. The superposition state will decay through measuring and will therefore become a projection of the state vector onto a basis vector (or eigenstate). The most insidious error, however, appears when decoherence affects the quantum amplitudes without destroying them; this is similar to small analog errors. The issues stated above are solved, on one hand, through intrinsic fault tolerance by technological implementation (topological interactions [1]) and, on the other hand, by error correcting techniques at the unitary (gate network) level. We will focus on the error detecting and correcting techniques, which are difficult to approach due to quantum constraints: the useful state can neither be observed (otherwise it will decohere), nor can it be cloned.
2.2.1 Background
As expressed in bra-ket notation [16], the qubit is a normalized vector in the Hilbert space $\mathcal{H}_2$, with $\{|0\rangle, |1\rangle\}$ being the orthonormal basis: $|\psi\rangle = a_0|0\rangle + a_1|1\rangle$ ($a_0, a_1 \in \mathbb{C}$ are the so-called quantum amplitudes, representing the square root of the associated measurement probabilities for the eigenstates $|0\rangle$ and $|1\rangle$ respectively, with $|a_0|^2 + |a_1|^2 = 1$). Therefore, the qubit can be affected by 3 types of errors:
Bit flip errors are somewhat similar to classical bit flip errors. For a single qubit things are exactly the same as in classical computation: $|0\rangle \rightarrow |1\rangle$, $|1\rangle \rightarrow |0\rangle$. For 2 or more qubits, flip errors affecting the state may modify it or leave it unchanged. For instance, if we consider the so-called cat state $|\mathrm{Cat}\rangle = \frac{1}{\sqrt{2}}\left(|00\rangle + |11\rangle\right)$ [19], and the first qubit is affected by a bit flip error, the resulting state will be $\frac{1}{\sqrt{2}}\left(|10\rangle + |01\rangle\right)$. But, if both qubits are affected by bit flips, there will be no change in the state: $\frac{1}{\sqrt{2}}\left(|11\rangle + |00\rangle\right) = |\mathrm{Cat}\rangle$.
Phase errors affect the phase of one of the qubit's amplitudes and are expressed as $|0\rangle \rightarrow |0\rangle$, $|1\rangle \rightarrow -|1\rangle$. This type of error is very dangerous, due to its propagation behavior, but it only makes sense when dealing with superposition states. If we consider an equally weighted qubit superposition state and inject a phase error, this results in $\frac{1}{\sqrt{2}}\left(|0\rangle + |1\rangle\right) \rightarrow \frac{1}{\sqrt{2}}\left(|0\rangle - |1\rangle\right)$.
There is a strict correspondence between bit flip and phase error types due to the way they map onto Hilbert spaces with the same dimension but different bases. The bit flip is an error from the space $\mathcal{H}_2$ with basis $\{|0\rangle, |1\rangle\}$, whereas the phase error appears in the same space with basis $\left\{\frac{1}{\sqrt{2}}\left(|0\rangle + |1\rangle\right), \frac{1}{\sqrt{2}}\left(|0\rangle - |1\rangle\right)\right\}$ or $\{|+\rangle, |-\rangle\}$. The space basis conversion, in this case, is made by applying the Hadamard transform; Figure 4 shows an example of transforming a bit flip error into a phase error (A), and vice versa (B).
Small amplitude errors: the amplitudes $a_0$ and $a_1$ of the quantum bit can be affected by small errors, similar to analog errors. Even if such an error does not destroy the superposition and conserves the value of the superposed states, small amplitude errors could accumulate over time, eventually ruining the computation. In order to avoid this situation, specific methodologies for digitizing small errors are used to reduce them to a non-fault or a bit flip [19].
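The first two error types can be reproduced with a few lines of NumPy (our own illustration; the state-vector encoding and the operator names are assumptions, not code from the cited references). The sketch applies a bit flip to the first qubit of the cat state, shows the invariance under a double flip, and checks the Hadamard correspondence HZH = X between phase and bit flip errors:

import numpy as np

# Single-qubit operators
I = np.eye(2)
X = np.array([[0, 1], [1, 0]])                 # bit flip
Z = np.array([[1, 0], [0, -1]])                # phase flip
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard

# Cat state |Cat> = (|00> + |11>)/sqrt(2), basis order |00>, |01>, |10>, |11>
cat = np.zeros(4); cat[0] = cat[3] = 1 / np.sqrt(2)

flip_first = np.kron(X, I)   # bit flip on the first qubit only
flip_both  = np.kron(X, X)   # bit flips on both qubits

print(flip_first @ cat)                    # -> (|01> + |10>)/sqrt(2): state modified
print(np.allclose(flip_both @ cat, cat))   # -> True: cat state unchanged

# Basis correspondence: a phase error becomes a bit flip in the Hadamard basis
print(np.allclose(H @ Z @ H, X))           # -> True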
Due to the laws of quantum physics, fault tolerance techniques have to comply with the following computational constraints:
Observation destroys the state. Since observation is equivalent to measurement, it leads to destroying the useful state superposition.
Information copying is impossible. Quantum physics renders the cloning of a quantum state impossible, meaning that a quantum state cannot be copied correctly.
Therefore quantum error correction must address the following problems:
Non-destructive measurement. Despite the first constraint it is necessary to find a way to measure the encoded information without destroying it. Because the encoded state cannot be measured directly, one needs to properly prepare some scratch (ancilla) qubits, which can then be measured.
Fault-tolerant recovery. Due to the high error rate in quantum computational devices, it is likely that the error recovery itself will be affected by errors. If the recovery process is not fault-tolerant, then any error coding becomes useless.
Phase error backward propagation. If we consider the XOR gate from Figure 5(A), a flip error affecting the target qubit (b) will propagate backwards and also affect the source qubit. This is due to the gate network equivalence from Figure 5(B) and the basis transformation described by Figure 4.
Figure 4. Correspondence between bit flip and phase errors
Figure 5. (A) The backward propagation of a phase error for the XOR gate; (B) Gate network equivalence
In order to deal with the problems described, the following strategies have to be followed:
Digitizing small errors. The presence of small errors is not a major concern, as they can be digitized using a special technique based on measuring auxiliary (ancilla) qubits [19].
Ancilla usage. Since qubit cloning is impossible, a majority voting strategy is difficult to implement. However, by using ancilla qubits, the eigenstate information can be duplicated inside the existing superposition, resulting in the entanglement of the ancilla with the useful data. Because any measurement performed on the ancilla could have repercussions on the useful qubits, the appropriate strategy will employ special coding for both data qubits and ancilla (only data errors will be copied onto the ancilla), followed by the computation of an error syndrome, which has to be obtained by measuring the ancilla (see Figure 6).
Avoiding massive spreading of phase errors. As shown previously, a phase error on the target qubit will propagate to all source qubits. The solution is to use more ancilla qubits as targets, so that no ancilla qubit is used more than once.
Figure 6. Fault-tolerant procedure with ancilla qubits
Ancilla and syndrome accuracy. Setting the ancilla code to some
known quantum state could be an erroneous process. Computing
the syndrome is also prone to errors. Hence, on one hand, one has
to make sure that the ancilla qubits are in the right state by
verifying and recovering them if needed; on the other hand, in
order to have a reliable syndrome, it must be computed
repeatedly.
Error recovery. As the small errors can be digitized (therefore,
they are either corrected or transformed into bit flip errors), the
recovery must deal only with bit flip and phase errors. A state that
needs to be recovered is described by:
$|\psi\rangle_{error} = \begin{cases} a_0|0\rangle + a_1|1\rangle, & \text{if no error occurs} \\ a_0|1\rangle + a_1|0\rangle, & \text{for a flip error} \\ a_0|0\rangle - a_1|1\rangle, & \text{for a phase error} \\ a_0|1\rangle - a_1|0\rangle, & \text{for both flip and phase errors} \end{cases} \qquad (2)$
Correcting a bit flip error means applying the negation unitary transformation $U_N = \sigma_x = \begin{pmatrix}0 & 1\\ 1 & 0\end{pmatrix}$ to the affected qubit. To correct phase and combined errors, the following unitary operators will have to be applied respectively: $U_Z = \begin{pmatrix}1 & 0\\ 0 & -1\end{pmatrix}$ and $U_Y = U_N U_Z = \begin{pmatrix}0 & -1\\ 1 & 0\end{pmatrix}$ (equal to $\sigma_y$ up to the global phase factor $i$).
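A short NumPy check (again our own sketch; the test amplitudes are arbitrary) confirms that these operators undo the corresponding errors of Equation (2), up to an irrelevant global phase:

import numpy as np

# Correction operators from the text
U_N = np.array([[0, 1], [1, 0]])    # corrects a bit flip
U_Z = np.array([[1, 0], [0, -1]])   # corrects a phase error
U_Y = U_N @ U_Z                     # corrects a combined flip + phase error

# An arbitrary test qubit a0|0> + a1|1> (normalized)
a0, a1 = 0.6, 0.8
psi = np.array([a0, a1])

flip           = np.array([a1, a0])    # a0|1> + a1|0>
phase          = np.array([a0, -a1])   # a0|0> - a1|1>
flip_and_phase = np.array([-a1, a0])   # a0|1> - a1|0>, written in the |0>, |1> order

def same_up_to_phase(u, v):
    # Two normalized states are physically identical if they differ only by a global phase.
    return np.isclose(abs(np.vdot(u, v)), 1.0)

print(same_up_to_phase(U_N @ flip, psi))            # True
print(same_up_to_phase(U_Z @ phase, psi))           # True
print(same_up_to_phase(U_Y @ flip_and_phase, psi))  # True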
2.2.2 Quantum Error Correcting Codes
Quantum error coding and correcting (QECC) is performed with special coding techniques inspired from the classic Hamming codes. The classical error coding is adapted so that it becomes suitable for the quantum strategy, allowing only the ancilla qubits to be measured.
The state of the art in QECC is represented by the stabilizer encoding, a particular case being the Steane codes (the Shor codes may also be used [29]). Steane's 7-qubit code is a single error correcting code inspired from classical Hamming coding and can be adapted for ancilla coding as well. Therefore it cannot recover from two identical qubit faults, but it can recover from a bit flip or a phase flip. The Steane 7-qubit coding of $|0\rangle$ and $|1\rangle$ consists of an equally weighted superposition of all the valid Hamming 7-bit words with an even and odd number of 1s, respectively:
$|0_S\rangle = \frac{1}{2\sqrt{2}}\left(|0000000\rangle + |0010111\rangle + |0101110\rangle + |0111001\rangle + |1001011\rangle + |1011100\rangle + |1100101\rangle + |1110010\rangle\right)$

$|1_S\rangle = \frac{1}{2\sqrt{2}}\left(|1111111\rangle + |1101000\rangle + |1010001\rangle + |1000110\rangle + |0110100\rangle + |0100011\rangle + |0011010\rangle + |0001101\rangle\right) \qquad (3)$

(the qubits of each codeword correspond to the data bits $u_0u_1u_2u_3$ and the check bits $c_0c_1c_2$ of the Hamming word).
Applying the Steane coding on an arbitrary given quantum state $|\psi\rangle = a_0|0\rangle + a_1|1\rangle$ transforms it into $|\psi_S\rangle = a_0|0_S\rangle + a_1|1_S\rangle$. This code was designed to correct bit-flip errors, but by changing the basis (through a Hadamard transform) the phase error transforms into a bit flip error, which can then be corrected:

$H|0_S\rangle = \frac{1}{\sqrt{2}}\left(|0_S\rangle + |1_S\rangle\right), \qquad H|1_S\rangle = \frac{1}{\sqrt{2}}\left(|0_S\rangle - |1_S\rangle\right) \qquad (4)$
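As a sanity check of the structure behind Equation (3), the following Python sketch (written for this text; it simply reuses the sixteen codewords listed above) verifies that they form a linear code of minimum distance 3, i.e. a single-error-correcting Hamming code, and that $|0_S\rangle$ and $|1_S\rangle$ indeed collect the even- and odd-weight codewords:

from itertools import combinations

# The sixteen 7-bit codewords appearing in Eq. (3)
even_words = ["0000000", "0010111", "0101110", "0111001",
              "1001011", "1011100", "1100101", "1110010"]
odd_words  = ["1111111", "1101000", "1010001", "1000110",
              "0110100", "0100011", "0011010", "0001101"]
code = [int(w, 2) for w in even_words + odd_words]

# Linearity: the XOR of any two codewords is again a codeword.
assert all((a ^ b) in code for a, b in combinations(code, 2))

# Weight parity: |0_S> superposes even-weight words, |1_S> odd-weight words.
assert all(bin(int(w, 2)).count("1") % 2 == 0 for w in even_words)
assert all(bin(int(w, 2)).count("1") % 2 == 1 for w in odd_words)

# Minimum distance 3: any two distinct codewords differ in at least 3 bits,
# hence a single bit flip can always be identified and corrected.
min_dist = min(bin(a ^ b).count("1") for a, b in combinations(code, 2))
print("minimum Hamming distance:", min_dist)   # -> 3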
2.2.3 Fault Tolerance Methodologies
Quantum error-correcting codes exist for $r$ errors, $r \in \mathbb{N}$, $r \geq 1$. Therefore a non-correctable error occurs if a number of $r+1$ errors occur simultaneously before the recovery process.
If the probability of a quantum gate error or storage error in the time unit is of order $\varepsilon$, then the probability of an error affecting the processed data block becomes of order $\varepsilon^{r+1}$, which is negligible if $r$ is sufficiently large. However, by increasing $r$ the safe recovery also becomes more complex and hence prone to errors: it is possible that $r+1$ errors accumulate in the block before the recovery is performed.
The relationship between $r$ and the number of computational steps required for computing the syndrome is polynomial, of order $r^p$. It was proven that in order to reduce the error probability as much as possible, $r$ must be chosen so that $r \sim 1/\varepsilon^{1/p}$ [7][19]. By consequence, if attempting to execute $N$ cycles of error correction without any $r+1$ errors accumulating before the recovery ends, then $N \sim \exp\left(1/\varepsilon^{1/p}\right)$. Therefore the accuracy degree will be of the form $\varepsilon \sim (\log N)^{-p}$, which is better than the accuracy degree corresponding to the no-coding case, $\varepsilon \sim 1/N$. However, there exists an $N_{max}$ so that if $N > N_{max}$ then a non-correctable error becomes likely, which limits the length of the recovery process. Given the extremely large number of gates employed by a quantum algorithm implementation, $N_{max}$ also has to be very large; for Shor's algorithm $N_{max}$ must be higher than $3 \times 10^9$ [30].
As shown in Figure 7, the required accuracy degree approaches today's technological limits (typically $10^{-3}$, for $p=4$) after $N=10^5$. For a fault tolerant encoding solution for the Shor algorithm implementation this should have happened only after $N=10^9$ [19][34].
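To make the scaling concrete, the Python sketch below compares the required accuracy for the coded case, $\varepsilon \sim (\log N)^{-p}$, with the no-coding requirement $\varepsilon \sim 1/N$, in the spirit of Figure 7. The proportionality constants (set to 1) and the base-10 logarithm are our own assumptions, since the text only gives the scaling laws:

import math

# Required gate/storage accuracy after N error-correction cycles.
def eps_coded(N, p):
    return math.log10(N) ** (-p)   # with coding: eps ~ (log N)^-p

def eps_uncoded(N):
    return 1.0 / N                 # without coding: eps ~ 1/N

for N in (1e3, 1e5, 1e9):
    coded = "  ".join(f"p={p}: {eps_coded(N, p):.1e}" for p in (3, 4, 5))
    print(f"N={N:.0e}  {coded}  no coding: {eps_uncoded(N):.1e}")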
Additional fault tolerance must be employed in order to preserve reliable quantum computation over an arbitrary number of computational steps. Concatenated coding represents one such technique, which improves the reliability by shaping the size of the code blocks and the number of code levels. It is also resource demanding and vulnerable to correlated errors [19][37].
Another approach, replacing the concatenated codes, is based on Reconfigurable Quantum Gate Arrays (RQGAs) [34][37], which are used for configuring ECC circuits based on stabilizer codes [7][33]. By using a quantum configuration register for the RQGA (i.e. a superposition of classical configurations), the reconfigurable circuit is brought to a state where it represents a simultaneous superposition of distinct ECC circuits. After measuring the configuration register, only one ECC circuit is selected and used; if $k$ distinct ECC circuits were superposed and the gate error rate is $\varepsilon$, then the overall gate error probability becomes $\varepsilon/k$ (see Figure 8). As a result, the accuracy threshold value for the RQGA solution clearly dominates the technological accuracy limit, as shown in Figure 9 [37].
Figure 7. Accuracy plots: p=3 for xi1, p=4 for xi2, p=5 for xi3; xi4 for no-coding; ref is the reference accuracy (i.e. the accuracy allowed by today's state-of-the-art technology)
Figure 8. A quantum configuration register acts as a superposition of k distinct circuits sharing the same input state and the same output qubits
DEPENDABLE SYSTEM DESIGN
In order to model the erroneous behavior of a device or system it is necessary to understand the causality of the phenomena concerned. A defect affecting a device from a physical point of view is called a fault, or a fail. Faults may become evident through logical misbehavior, in which case they transform into errors. Finally, accumulating errors can lead to system failure [8]. The fault-error-failure causal chain is essential to developing techniques that reduce the risk of error occurrence, even in the presence of faults, in order to minimize the probability of a system failure, and can be architecture specific. We will elaborate next on techniques used by a bio-inspired and by a quantum computing platform.
Figure 9. Evolution of the accuracy threshold value for RQHW stabilizer codes (xir); the technological accuracy limit (lim) is also provided for a relevant comparison
3.1 The Embryonics Approach
Several years before his untimely death John von Neumann began developing a theory of automata, which was to contain a systematic theory of mixed mathematical and logical forms, aimed at a better understanding of both natural systems and computers [14]. The essence of von Neumann's message appears to entail the formula "genotype + ribotype = phenotype". He provided the foundations of a self-replicating machine (the phenotype), consisting of its complete description (the genotype), which is interpreted by a ribosome (the ribotype).
Embryonics (a contraction of embryonic electronics) is a long term research project launched by the Logic Systems Laboratory at the Swiss Federal Institute of Technology, Lausanne, Switzerland. Its aim is to explore the potential of biologically-inspired mechanisms by borrowing and adapting them from nature into digital devices for the purpose of endowing them with the remarkable robustness present in biological entities [39]. Though perhaps fuzzy at a first glance, analogies between biology and electronics are presented in Table 1 [12][31].
But if we consider that the function of a living cell is determined by the genome, and that a computer's functionality is determined by the operating program, then the two worlds may be regarded as sharing a certain degree of similarity. Three fundamental features shared by living entities are required to be targeted by Embryonics in order to embody the formula "genotype + ribotype = phenotype" into digital hardware:
multicellular organisms are made of a finite number of cells, which in turn are made of a finite number of chemically bonded molecules;
each cell (beginning with the original cell, the zygote) may generate one or several daughter cell(s) through a process called cellular division; both the parent and the daughter cell(s) share the same genetic information, called the genome;
different types of cells may exist due to cellular differentiation, a process through which only a part of the genome is executed.
These fundamental features led the Embryonics project to settle for an architectural hierarchy of four levels (see Figure 10). We will not delve very deep inside the Embryonics philosophy, as such details were broadly covered by the literature [12][20][23][24][25][40]; we will, however, introduce each of the four levels in order to be able to see how this bio-inspired platform fits modern design for dependability efforts.
Table 1. Analogies present in Embryonics [12]
Biology                  Electronics
Multicellular organism   Parallel computer systems
Cell                     Processor
Molecule                 FPGA element
Figure 10. Structural hierarchy in Embryonics [12]
The topmost level in Embryonics, bearing a certain similarity to what is found in nature, is the population level, composed of a number of organisms. One level down the hierarchy is the organismic level, which corresponds to individual entities in a variety of functionalities and sizes. Each of the organisms may be further decomposed into smaller, simpler parts, called cells, which in turn may be decomposed into molecules. According to Embryonics, a biological organism corresponds in the world of digital systems to a complete computer, a biological cell is equivalent to a processor, and the smallest part in biology, the molecule, may be seen as the smallest programmable element in digital electronics (see Table 1).
An extremely valuable consequence of the Embryonics architecture is that each cell is "universal", containing a copy of the whole of the organism's genetic material, the genome. This enables very flexible redundancy strategies, the living organisms being capable of self-repair (healing) or self-replication (cloning) [12]. Self-replication may be of great interest in the nanoelectronics era, where extremely large areas of programmable logic will probably render any centralized control very inefficient. Instead, the self-replication mechanism implemented in Embryonics will allow the initial colonization of the entire programmable array in a decentralized and distributed manner. Figure 11 presents an example of such colonization. At the initial time the configuration bitstream (containing the genome) enters the bottom left corner of a programmable array and, at each clock cycle, the genome is pushed through and partitions the programmable space accordingly.
From a dependability standpoint, the Embryonics hierarchical architecture offers incentives for a correspondingly hierarchical self-repair strategy. Because the target applications are those in which the failure frequency must be very low to be "acceptable", two levels of self-repair are offered: at the molecular level (programmable logic is susceptible to soft fail occurrences) and at the cellular level (soft fails manifest at this level as soft errors).
Let us consider an example of a simple cell made of 3 lines and 3 columns of molecules, of which one column contains spare molecules. If a fault occurs inside an active molecule, it can be repaired by transferring its functionality toward the appropriate spare molecule, which will become active (see Figure 12).
Figure 11. Space colonization in Embryonics [11]
Figure 12. Self-repair at the molecular level: faulty molecule E is replaced by spare molecule H, which becomes active [39]
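The molecular reconfiguration idea can be mimicked in a few lines of Python (a toy sketch written for this text, not Embryonics code; the 2+1 column layout per row and the re-mapping policy are our own assumptions):

# Toy model of molecular self-repair in an Embryonics-like cell: each row holds
# n active functions plus s spare molecules. When a molecule fails, the row's
# functions are re-mapped onto the surviving molecules; if fewer survivors than
# functions remain, molecular repair fails and the whole cell must "die".

N_ACTIVE, N_SPARE = 2, 1
ROW_FUNCTIONS = [["A", "B"], ["D", "E"], ["G", "H"]]   # logical functions per row

# dead[r][c] == True means molecule (r, c) has failed
dead = [[False] * (N_ACTIVE + N_SPARE) for _ in ROW_FUNCTIONS]

def inject_fault(row, col):
    """Mark a molecule as faulty and try to repair its row. Returns True on success."""
    dead[row][col] = True
    survivors = [c for c in range(N_ACTIVE + N_SPARE) if not dead[row][c]]
    if len(survivors) < len(ROW_FUNCTIONS[row]):
        return False    # not enough molecules left: escalate to the cellular level
    mapping = dict(zip(ROW_FUNCTIONS[row], survivors))
    print(f"row {row}: functions now placed at columns {mapping}")
    return True

print(inject_fault(1, 1))   # fault in molecule "E": the spare column takes over -> True
print(inject_fault(1, 2))   # a second fault in the same row: no spare left -> False

A second fault in the same row exhausts the spare, and the failure must then be escalated to the cellular level, as described next.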
The self-repair process at the molecular level ensures the fault recovery as long as there are spare molecules left for repair. However, it is possible for a cell to experience a multiple error, in which case the self-repair mechanism at the molecular level can no longer successfully reconfigure the inside of the cell. If such a situation arises, then a second self-repair strategy is triggered at a higher level. The cell will "die", therefore triggering the self-repair at the cellular level: the entire column containing the faulty cell (cell C in this example) is deactivated, its role being taken by the nearest spare column to the right (see Figure 13).
A critique that could be addressed to the current Embryonics design would be its strategy of self-repair at the higher, cellular level: in case of a faulty cell, an entire column containing that cell will be deactivated, its role being transferred to the first available column of spares to the right (see Figure 13). There are two points on which this strategy could be improved:
1. Instead of deactivating a whole column of cells, it would be more efficient to deactivate only the faulty cell (see Figure 14). The resources affected by the role transfer would be greatly reduced (one cell versus an entire column), coupled with the fact that the particle flux generating soft fails is unlikely to be homogeneous and isotropic. This means that regions comparable in size to cells, rather than to entire columns of cells, are more likely to be affected by soft fails, not to mention that during the genetic data transfer (required for taking over the role of the faulty cell) there is a greater risk of enduring a new soft fail (moving data is much more sensitive to soft fails than static data) [5][10].
2. Such a strategy would be consistent with that used for the self-repair at the molecular level, which would simplify a thorough reliability analysis. Concatenated coding would also seem easier to implement and the strategy consistency would mean that concatenated coding would not be limited to a two-level hierarchy [20][21].
Figure 13. Molecular self-repair failure: the cell "dies" (bottom), triggering the cellular self-repair (top) [39]
We consider a cell of M lines and N columns, composed of modules of M lines and n+s columns (for instance, the cell presented in Figure 12 consists of a single such module of two active columns and one spare column), of which s are spares. In order to meet certain reliability criteria, it is necessary to know the number s of spare columns of molecules that corresponds to n columns of active molecules, that is, the horizontal dimensions of such a module. We will not provide a thorough reliability analysis, as this has been done previously [4][17][20][21]; instead, we will analyze the influences of the proposed consistent self-repair strategy at both the molecular and the cellular levels through the use of logic molecules. Therefore Equation (5) holds:

$R_{ModRow}(t) = \mathrm{Prob}\{\text{no fails}\}(t) + \sum_{i=1}^{s} \mathrm{Prob}\{i \text{ fails}\}(t) \qquad (5)$
where $R_{ModRow}(t)$ represents the reliability function for a row within a module. Then, considering the failure rate $\lambda$ for one molecule, the probability that all molecules (both active and spare) in a module's row operate normally becomes:

$\mathrm{Prob}\{\text{no fails}\}(t) = e^{-(n+s)\lambda t} \qquad (6)$

The probability of a row enduring $i$ fails in the active molecules part is the conditional probability of having $n-i$ active molecules operating normally, while $i$ spare molecules are ready to activate (that is, they are not affected by errors themselves):

$\mathrm{Prob}\{i \text{ fails}\}(t) = \mathrm{Prob}\{i \text{ fails active}\}(t) \cdot \mathrm{Prob}\{i \text{ spares ok}\}(t) \qquad (7)$

$\mathrm{Prob}\{i \text{ fails active}\}(t) = \binom{n}{i} e^{-(n-i)\lambda t} \left(1 - e^{-\lambda t}\right)^{i} \qquad (8)$

$\mathrm{Prob}\{i \text{ spares ok}\}(t) = \binom{s}{i} e^{-i\lambda t} \left(1 - e^{-\lambda t}\right)^{s-i} \qquad (9)$

Then the reliability function for an entire cell is the cumulated reliability of its module rows:

$R_{Cell}(t) = \left[R_{ModRow}(t)\right]^{\frac{MN}{n+s}} \qquad (10)$
Figure 14. Proposed reconfiguration strategy at the cellular level
A self-repair strategy that conserves the consistency between the molecular and the cellular level would allow for a more straightforward reliability analysis. Basically, it would be sufficient to substitute the dimension parameters in Equations (5)-(10) with those appropriate to the analysis of an organism instead of a cell. To illustrate this, we will consider an organism of M* lines and N* columns, composed of modules of M* lines and n*+s* columns, of which s* are spares; we will also use the organism partitioning into modules, similar to the partitioning of cells used before. Therefore Equation (5) transforms into Equation (11):
$R_{CellMR}(t) = \mathrm{Prob}\{\text{no fails}\}^{*}(t) + \sum_{i=1}^{s^{*}} \mathrm{Prob}\{i \text{ fails}\}^{*}(t) \qquad (11)$

where $R_{CellMR}(t)$ represents the reliability function for a row of cells within an organism module. In this case, the significance of the terms will be as follows:

$\mathrm{Prob}\{\text{no fails}\}^{*}(t) = \left[R_{Cell}(t)\right]^{n^{*}+s^{*}} \qquad (12)$
While Equation (7) continues to hold under the form of Equation (13), the significance of its terms changes according to the dimensions at the cellular level:

$\mathrm{Prob}\{i \text{ fails}\}^{*}(t) = \mathrm{Prob}\{i \text{ fails active}\}^{*}(t) \cdot \mathrm{Prob}\{i \text{ spares ok}\}^{*}(t) \qquad (13)$

$\mathrm{Prob}\{i \text{ fails active}\}^{*}(t) = \binom{n^{*}}{i} \left[R_{Cell}(t)\right]^{n^{*}-i} \left[1 - R_{Cell}(t)\right]^{i} \qquad (14)$

$\mathrm{Prob}\{i \text{ spares ok}\}^{*}(t) = \binom{s^{*}}{i} \left[R_{Cell}(t)\right]^{i} \left[1 - R_{Cell}(t)\right]^{s^{*}-i} \qquad (15)$

Finally, similar to Equation (10), the reliability function for an entire organism is the cumulated reliability of its module rows:

$R_{Org}(t) = \left[R_{CellMR}(t)\right]^{\frac{M^{*}N^{*}}{n^{*}+s^{*}}} \qquad (16)$
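The two-level model of Equations (5)-(16) is straightforward to evaluate numerically. The Python sketch below is our own implementation of these formulas; the molecule failure rate and all dimensions are arbitrary example values, since the text leaves the failure rate unspecified:

from math import comb, exp

def mod_row_reliability(t, lam, n, s):
    """Eq. (5)-(9): reliability of one module row (n active + s spare molecules)."""
    no_fails = exp(-(n + s) * lam * t)                                     # Eq. (6)
    i_fails = sum(
        comb(n, i) * exp(-(n - i) * lam * t) * (1 - exp(-lam * t)) ** i    # Eq. (8)
        * comb(s, i) * exp(-i * lam * t) * (1 - exp(-lam * t)) ** (s - i)  # Eq. (9)
        for i in range(1, s + 1))
    return no_fails + i_fails

def cell_reliability(t, lam, M, N, n, s):
    """Eq. (10): a cell of M x N molecules built from modules of n+s columns."""
    return mod_row_reliability(t, lam, n, s) ** (M * N / (n + s))

def organism_reliability(t, lam, M, N, n, s, Ms, Ns, ns, ss):
    """Eq. (11)-(16): the same construction one level up, with cells as building blocks."""
    r_cell = cell_reliability(t, lam, M, N, n, s)
    no_fails = r_cell ** (ns + ss)                                  # Eq. (12)
    i_fails = sum(
        comb(ns, i) * r_cell ** (ns - i) * (1 - r_cell) ** i        # Eq. (14)
        * comb(ss, i) * r_cell ** i * (1 - r_cell) ** (ss - i)      # Eq. (15)
        for i in range(1, ss + 1))
    return (no_fails + i_fails) ** (Ms * Ns / (ns + ss))            # Eq. (16)

# Example values (assumptions): lambda = 1e-6 fails/hour, 3x3-molecule cells with
# one spare column, 3x4-cell organism modules with one spare column, t = 10,000 hours.
print(cell_reliability(1e4, 1e-6, M=3, N=3, n=2, s=1))
print(organism_reliability(1e4, 1e-6, M=3, N=3, n=2, s=1, Ms=3, Ns=4, ns=3, ss=1))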
Equations (5) to (16) provide the basics for a thorough reliability analysis of the proposed, uniform strategy of hierarchical reconfiguration, as opposed to the analysis provided by [21], which specifically targeted the current Embryonics architecture. Despite having settled the reliability model, both analyses are incomplete, in that the failure rate parameter is missing, which makes a precise, quantitative dependability target difficult to meet. However, a reliability analysis is still valuable from a qualitative point of view, allowing a direct comparison of different systems.
3.2 The QUERIST Approach
In order to deal with errors induced by the constant influence of the external environment upon computational processes, the following assumptions were made: errors appear randomly, are uncorrelated (neither in space, nor in time), there are no storage errors, and there are no leakage phenomena involved [19]. Classical HDL-based fault injection methodologies can be mapped to simulating quantum circuits without intervention provided that the new error and fault models are taken into account [35]. Of course, efficiency criteria require that they be adapted to one of the available efficient simulation frameworks [36][38][41]. QUERIST (from QUantum ERror Injection Simulation Tool) is the name of such a project, fostering simulated fault injection techniques in quantum circuits [34]. Similar to classical computation, simulated fault injection is used in order to evaluate the employed FTAMs (Fault Tolerance Algorithms and Methodologies) [26][27]. An overview of the QUERIST project is presented in Figure 15.
The three cycles of initialization, simulation, and data computation are common to both classical and quantum approaches. The first cycle takes the quantum circuit HDL description as an input. Two abstract inputs are considered, the HDL model and the assumed error model; the first influences how the HDL description is presented, while the second one dictates the test scenario by defining the start/stop simulation states (since qubits are equally prone to error, all the signals must be observed). HDL modeling of quantum circuits in order to attain efficient simulation is discussed in [34][35][36][38].
The outputs of the first cycle, which are also inputs for the simulation cycle, consist of a test scenario and an executable HDL model with the corresponding entanglement analysis, dictated by the bubble-bit encoded quantum states [36][38]. The output of the second cycle consists of time diagrams for all qubits, from the start to the stop states. Useful information, extracted from the raw, bubble-bit-represented qubit traces, is compared to the correct qubit values, the result being the probabilistic accuracy threshold value, in the third cycle. The initialization and simulation cycles depend on specific aspects of quantum circuit simulation [35]. The data processing cycle is independent from the specific simulation framework and is aimed at determining the accuracy threshold as the main reliability measure that also defines the feasibility of quantum circuit implementations.
Suppose that at simulation time $t$ we observe the signals $\{s_0, s_1, \ldots, s_n\}$. In our analysis, $s_i$ is the state observed during non-faulty simulation, so for the same state in a faulty environment we will have the state $s_i^{*}$. For validation of the quantum FTAMs, we need to compare $s_i$ with $s_i^{*}$. This can be done by using the operator $dif(s_i, s_i^{*})$. This means that the total number of overall state errors at simulation time $t$ is $e_t = \sum_{i=0}^{n} dif(s_i, s_i^{*})$. The error rate over the states observed at moments $t_0, t_1, \ldots, t_{m-1}$ will be given by $\varepsilon_{sim} = \frac{1}{m}\sum_{j=0}^{m-1} e_{t_j}$. The used FTAMs are only valid if the relationship between the experimental $\varepsilon_{sim}$ and the assumed singular error rate $\varepsilon$ is of the order $\varepsilon_{sim} \sim \varepsilon^{2}$ [19].
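A minimal Python sketch of this bookkeeping (our own illustration; the dif operator is assumed to be a simple equality test and the traces are made-up toy data, not QUERIST output) computes $e_t$ for each observation moment and the resulting $\varepsilon_{sim}$:

# Toy fault-injection bookkeeping in the spirit of the formulas above.
# golden[j][i] is signal s_i at moment t_j in the non-faulty ("golden") run,
# faulty[j][i] is the same signal s_i* in the fault-injected run.
golden = [[0, 1, 1, 0], [1, 1, 0, 0], [0, 0, 1, 1]]   # made-up traces (m = 3 moments)
faulty = [[0, 1, 0, 0], [1, 1, 0, 0], [1, 0, 1, 0]]

def dif(s, s_star):
    # Assumed definition: 1 if the observed states differ, 0 otherwise.
    return int(s != s_star)

# e_t = sum_i dif(s_i, s_i*) for each observation moment t_j
e_t = [sum(dif(s, s_star) for s, s_star in zip(g, f)) for g, f in zip(golden, faulty)]

# eps_sim = (1/m) * sum_j e_{t_j}
eps_sim = sum(e_t) / len(e_t)

print("errors per moment:", e_t)         # -> [1, 0, 2]
print("simulated error rate:", eps_sim)  # -> 1.0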
CONCLUSIONS
This paper presented arguments in favor of two novel computing architectures for the purpose of addressing the challenges raised by the forthcoming nanoelectronics era. Distributed self-testing and self-repairing will probably become a must in the next years, as centralized control logic is expected to become unable to harness the extremely large number of devices, all equally prone to errors, that will be integrated onto the same chip. Bio-inspired computing brings valuable techniques that explore the potential of massively parallel, distributed computation and fault tolerance, and that will likely provide essential help in jumpstarting new nanoelectronic architectures. As one of the representatives of bio-inspired computing, the Embryonics project presents a hierarchical architecture that achieves fault tolerance by implementing a correspondingly hierarchical reconfiguration. A similar approach to maximizing fault tolerance is present in quantum computing, in the QUERIST project; even if bio-inspired and quantum computing may seem dissimilar at first glance, they both achieve fault tolerance by adapting the same techniques from classical computing and using essentially the same error model. Nanoelectronics will potentially change the way computing systems are designed, not only because of the sheer number of devices that will coexist onto the same chip, but also because of the sensitivity of these devices.
Figure 15. An overview of the QUERIST project
Therefore, if nanoelectronics is to be employed to build
dependable computing machines (a certain contradiction
notwithstanding), valuable expertise in design can be drawn from
natural sciences. While biology provides countless examples of
successfully implemented fault tolerance strategies, physics offers
theoretical foundations, both of which were found to share
common ground. It is perhaps a coincidence worth exploring in
digital computing.
REFERENCES
[1]
Aharonov, D., Ben-Or, M. Fault Tolerant Quantum
Computation with Constant Error.
Proc. ACM 29th Ann.
Symposium on Theory of Computing, El Paso, Texas, May
1997, pp. 176-188.
[2]
Avizienis, A., Laprie, J.C., Randell, B., Landwehr, C. Basic
Concepts and Taxonomy of Dependable and Secure
Computing.
IEEE Transactions on Dependable and Secure
Computing, 1, 1 (Jan-Mar 2004), 11-33.
[3]
Butts, M., DeHon, A., Goldstein, S.C. Molecular Electronics:
Devices, Systems and Tools for Gigagate, Gigabit Chips.
Proc. Intl. Conference on CAD (ICCAD'02), 2002, pp. 433-440.
[4]
Canham, R., Tyrrell, A. An Embryonic Array with Improved
Efficiency and Fault Tolerance.
Proc. IEEE NASA/DoD
Conference on Evolvable Hardware, Chicago Il, 2003, 275-282.
[5]
Gaisler, J. Evaluation of a 32-Bit Microprocessor with Built-In
Concurrent Error Detection.
Proc. 27th Annual Intl.
Symposium on Fault-Tolerant Computing (FTCS-27), 1997,
pp. 42-46.
[6]
Goldstein, S.C. The Challenges and Opportunities of
Nanoelectronics.
Proc. Government Microcircuit Applications
and Critical Technology Conference (GOMAC Tech 04
), Monterey, CA, March 2004.
[7]
Gottesman, D. Class of quantum error-correcting codes
saturating the quantum Hamming bound.
Phys. Rev. A 54,
1996, pp. 1862-1868.
[8]
Johnson, B.W.
Design and Analysis of Fault-Tolerant
Digital Systems. Addison-Wesley, 1989.
[9]
Laprie, J.-C. (Ed.). Dependability: Basic Concepts and
Terminology.
Dependable Computing and Fault-Tolerant
Systems Series, Vol. 5, Springer-Verlag, Vienna, 1992.
[10]
Liden, P., Dahlgren, P., Johansson, R., Karlsson, J. On
Latching Probability of Particle Induced Transients in
Combinational Networks.
Proc. Intl. Symposium on Fault-Tolerant
Computing (FTCS-24), 1994, pp.340-349.
[11]
Mange, D., Sipper, M., Stauffer, A., Tempesti, G. Toward
Robust Integrated Circuits: The Embryonics Approach.
Proc.
of the IEEE, vol. 88, No. 4, April 2000, pp. 516-541.
[12]
Mange, D. and Tomassini, M. eds.
Bio-Inspired Computing
Machines: Towards Novel Computational Architectures.
Presses Polytechniques et Universitaires Romandes,
Lausanne, Switzerland, 1998.
[13]
Von Neumann, J.
The Computer and the Brain (2nd edition). Physical Science, 2000.
[14]
Von Neumann, J. The Theory of Self-Reproducing
Automata. A. W. Burks, ed. University of Illinois Press,
Urbana, IL, 1966.
[15]
Von Neumann, J. Probabilistic Logic and the Synthesis of
Reliable Organisms from Unreliable Components. In C.E.
Shannon, J. McCarthy (eds.)
Automata Studies, Annals of
Mathematical Studies 34, Princeton University Press, 1956,
43-98.
[16]
Nielsen, M.A., Chuang, I.L.
Quantum Computation and
Quantum Information. Cambridge University Press, 2000.
[17]
Ortega, C., Tyrrell, A. Reliability Analysis in Self-Repairing
Embryonic Systems.
Proc. 1st NASA/DoD Workshop on
Evolvable Hardware, Pasadena CA, 1999, 120-128.
[18]
O'Connor, P.D.T. Practical Reliability Engineering. John
Wiley & Sons, 4th edition, 2002.
[19]
Preskill, J. Fault Tolerant Quantum Computation. In H.K.
Lo, S. Popescu and T.P. Spiller, eds.
Introduction to
Quantum Computation, World Scientific Publishing Co.,
1998.
[20]
Prodan, L.
Self-Repairing Memory Arrays Inspired by
Biological Processes. Ph.D. Thesis, "Politehnica" University
of Timisoara, Romania, October 14, 2005.
[21]
Prodan, L., Udrescu, M., Vladutiu, M. Survivability Analysis
in Embryonics: A New Perspective.
Proc. IEEE NASA/DoD
Conference on Evolvable Hardware, Washington DC, 2005,
280-289.
[22]
Prodan, L., Udrescu, M., Vladutiu, M. Self-Repairing
Embryonic Memory Arrays.
Proc. IEEE NASA/DoD
Conference on Evolvable Hardware, Seattle WA, 2004, 130-137.
[23]
Prodan, L., Tempesti, G., Mange, D., and Stauffer, A.
Embryonics: Electronic Stem Cells.
Proc. Artificial Life VIII,
The MIT Press, Cambridge MA, 2003, 101-105.
[24]
Prodan, L., Tempesti, G., Mange, D., and Stauffer, A.
Embryonics: Artificial Cells Driven by Artificial DNA.
Proc. 4th International Conference on Evolvable Systems
(ICES2001), Tokyo, Japan, LNCS vol. 2210, Springer,
Berlin, 2001, 100-111.
[25]
Prodan, L., Tempesti, G., Mange, D., and Stauffer, A.
Biology Meets Electronics: The Path to a Bio-Inspired
FPGA. In
Proc. 3rd International Conference on Evolvable
Systems (ICES2000), Edinburgh, Scotland, LNCS 1801,
Springer, Berlin, 2000, 187-196.
[26]
Rimen, M., Ohlsson, J., Karlsson, J., Jenn, E., Arlat, J.
Validation of fault tolerance by fault injection in VHDL
simulation models.
Rapport LAAS No.92469, December
1992.
[27]
Rimen, M., Ohlsson, J., Karlsson, J., Jenn, E., Arlat, J.
Design guidelines of a VHDL-based simulation tool for the
validation of fault tolerance.
Rapport LAAS No93170, Esprit
Basic Research Action No.6362, May 1993.
[28]
Shivakumar, P., Kistler, M., Keckler, S.W., Burger, D.,
Alvisi, L. Modelling the Effect of Technology Trends on the
Soft Error Rate of Combinational Logic.
Proc. Intl.
Conference on Dependable Systems and Networks (DSN),
June 2002, pp. 389-398.
[29]
Shor, P.
Fault-tolerant quantum computation.
arXiv.org:quant-ph/9605011, 1996.
[30]
Shor, P. Algorithms for Quantum Computation: Discrete
Logarithms and Factoring.
Proc. 35th Symp. on Foundations
of Computer Science, 1994, pp.124-134.
[31]
Sipper, M., Mange, D., Stauffer, A. Ontogenetic Hardware.
BioSystems, 44, 3, 1997, 193-207.
[32]
Sipper, M., Sanchez, E., Mange, D., Tomassini, M., Perez-Uribe
, A., Stauffer, A. A Phylogenetic, Ontogenetic and
Epigenetic View of Bio-Inspired Hardware Systems.
IEEE
Transactions on Evolutionary Computation, 1, 1, April 1997,
83-97.
[33]
Steane, A. Multiple Particle Interference and Quantum Error
Correction.
Proc. Roy. Soc. Lond. A 452, 1996, pp. 2551.
[34]
Udrescu, M.
Quantum Circuits Engineering: Efficient
Simulation and Reconfigurable Quantum Hardware. Ph.D.
Thesis, "Politehnica" University of Timisoara, Romania,
November 25, 2005.
[35]
Udrescu, M., Prodan, L., Vladutiu, M. Simulated Fault
Injection in Quantum Circuits with the Bubble Bit
Technique.
Proc. International Conference "Adaptive and
Natural Computing Algorithms", pp. 276-279.
[36]
Udrescu, M., Prodan, L., Vladutiu, M. The Bubble Bit
Technique as Improvement of HDL-Based Quantum Circuits
Simulation.
IEEE 38th Annual Simulation Symposium, San
Diego CA, USA, 2005, pp. 217-224.
[37]
Udrescu, M., Prodan, L., Vladutiu, M. Improving Quantum
Circuit Dependability with Reconfigurable Quantum Gate
Arrays.
2nd ACM International Conference on Computing
Frontiers, Ischia, Italy, 2005, pp. 133-144.
[38]
Udrescu, M., Prodan, L., Vladutiu, M. Using HDLs for
describing quantum circuits: a framework for efficient
quantum algorithm simulation.
Proc. 1st ACM Conference
on Computing Frontiers, Ischia, Italy, 2004, 96-110.
[39]
Tempesti, G.
A Self-Repairing Multiplexer-Based FPGA
Inspired by Biological Processes. Ph.D. Thesis No. 1827,
Logic Systems Laboratory, The Swiss Federal Institute of
Technology, Lausanne, 1998.
[40]
Tempesti, G., Mange, D., Petraglio, E., Stauffer, A., Thoma
Y. Developmental Processes in Silicon: An Engineering
Perspective.
Proc. IEEE NASA/DoD Conference on
Evolvable Hardware, Chicago Il, 2003, 265-274.
[41]
Viamontes, G., Markov, I., Hayes, J.P. High-performance
QuIDD-based Simulation of Quantum Circuits.
Proc. Design
Autom. and Test in Europe (DATE), Paris, France, 2004, pp.
1354-1359.
[42]
Yu, Y., Johnson, B.W. A Perspective on the State of
Research on Fault Injection Techniques. Technical Report
UVA-CSCS-FIT-001, University of Virginia, May 20, 2002.
[43]
***. ITRS International Technology Roadmap for Semiconductors
, Emerging Research Devices, 2004, http://www.
itrs.net/Common/2004Update/2004_05_ERD.pdf
[44]
***. Society of Reliability Engineers, http://www.sre.org/
pubs/
[45]
***. http://www.dependability.org/wg10.4/
198 | emerging technologies;Self replication;Embryonics;Computing technology;Error detection;Fault tolerance;Digital devices;Computing architecture;environment;Soft errors;Dependable system;Computing system;System design;Correction techniques;bio-inspired digital design;Bio-inspired computing;Reliability;Dependability;Nano computing;Failure rate;Emerging technologies;Nanoelectronics;bio-inspired computing;Self repair;evolvable hardware;Computer system;quantum computing;fault-tolerance assessment;QUERIST;Bio-computing;reliability;Quantum computing |
50 | Building Sustainable Community Information Systems: Lessons from a Digital Government Project | This paper introduces a rationale for and approach to the study of sustainability in computerized community information systems. It begins by presenting a theoretical framework for posing questions about sustainability predicated upon assumptions from social construction of technology and adaptive structuration theories. Based in part on the literature and in part on our own experiences in developing a community information system, we introduce and consider three issues related to sustainability: stakeholder involvement, commitment from key players, and the development of critical mass. | INTRODUCTION
New technologies make it feasible and in many cases practical for
individuals, groups, and organizations to collaborate in the
development of joint information systems. In fact, over the last
three decades of evolution, few applications of information
technology have stimulated so much interest on the part of so
many. Collaborative information systems are attractive to users
because they make it possible to find information from diverse
sources in an easy and efficient way. Such systems make good
sense for information providers because it becomes possible to
attract a larger audience than a solitary effort might otherwise be
able to command and to pool resources to achieve certain
economies in scale and technology expense. The advantages of
collaborative computerized information systems have been widely
recognized, but this has been particularly the case for those with
the goal of making community information more available,
accessible, and oriented toward community development.
Computerized community information systems are diverse in form
and, over time, have come to be known by many different names,
including community bulletin boards, civic networks, community
networks, community information networks, televillages, smart
communities, and Free-Nets. They have been initiated by many
different sponsors, including government organizations at the
federal, state, and local levels, academic organizations, libraries,
and ad hoc groups of citizens that may or may not later transform
their enterprises into not-for-profit organizations [7]. With respect
to longevity, these projects have come and gone, only to be
replaced by newer and more sophisticated manifestations of the
same basic information sharing capabilities.
Consistent with the evolution of technology over the last thirty
years, Kubicek and Wagner [14] analyze the historical trajectory
of community networks to understand how these applications
have evolved over time based upon their animating ideas, the
zeitgeist of the time, the state of technology access, and the kinds
of services such applications make available. Their analysis
makes it possible to see that there has never been a standard for
design or operation when it comes to community information
systems. Instead, each such project has been very much a social
experiment, born of a cluster of varied ideas related to the general
theme of using technology to promote the development of vibrant
geographically-based communities.
Since there has been no standard to follow, each instance of
computerized community information system can be seen as an
experiment in accommodating the tensions between access to
hardware/software infrastructure, design of the particular
application or system, user needs, and the initiating and ongoing
resources that support these efforts. These projects can be
resource intensive; thus, a variety of institutional actors have lent
their financial support particularly over the past decade. The
successive rounds of funding for community technology projects
by the Department of Commerce's National Telecommunications
and Information Administration (now called the Technology
Opportunities Program) is a case in point. The Digital
Government Program of the National Science Foundation has
provided support for such ventures, as have many private
foundations and technology corporations. From the perspective
of funding organizations, the nature of the experiment at the heart
of CCINs is essentially this: how to build applications that
achieve their civic goals, that provide services perceived as
valuable by their users, and that can command continuing support
from the community beyond the horizon of initial funding. From
a purely academic perspective, the more general question centers
on, as Venkatesh [28] has put it, the "lifecycle" of community
information systems. More specifically, we wish to know how
such systems "originate, stabilize, and change in their
sociohistorical context" (p. 339).
We do not have extensive knowledge about the extent to which
community information systems achieve their goals, endure over
time, or the conditions that facilitate effectiveness and
sustainability. However, based on what we do know, it is
apparent that such enterprises are fragile. Perhaps the closest we
have come to a standard or model is the relatively extensive set of
experiments in community networking in the 1990s called Free-Nets
, which were fashioned after the public broadcasting system
and intended to serve their localities by providing access to wide-area
computer networks and information about the community.
Founded in 1989, the National Public Telecomputing Network, an
umbrella organization for Free-Nets, went bankrupt in 1996.
After successive decreases in the cost of computing equipment
and Internet access, and the development of the World Wide Web,
many Free-Nets went out of business [14]. Studies of community
networks funded by the federal and state governments also
suggest that community information systems have difficulty
enduring beyond their initial funding [26] [21].
In this paper, we introduce and consider conditions that facilitate
the sustainability of computerized community information
systems. We base our discussion in part on our own efforts to
develop a community information system called Connected Kids
in Troy, New York, a project that began in a formal sense in 1999
and continues today. We begin by presenting a theoretical
framework for posing questions about sustainability based on the
social construction of technology and adaptive structuration
theory. Drawing on the literature as well as our experiences, we
introduce and discuss three issues we believe to be critically
related to sustainability: stakeholder involvement, commitment
from key players, and critical mass.
THEORETICAL FRAMEWORK
All computerized community information systems are designed,
although whether researchers and participants understand the
significance of design and its relevance to sustainability varies
from context to context. In some cases, where community
networks have originated as the indigenous creation of
technology-savvy citizens, it may appear to researchers that the
design of the information system is a natural expression of
community development unfettered by theoretical considerations.
However, in other cases, design is taken more seriously and
treated as an element that can be purposefully controlled in order
to achieve particular kinds of effects.
In either case, we argue that the material form, functionalities,
conceptual configuration, and impact of technology is shaped by
the uses, goals, interests, and ideologies of those who participate
in its development and others who use it following development.
In the literature addressing the social construction of technology,
this argument is frequently illustrated by showing how users
appropriate new technologies for their own purposes, which may
be contrary to those of designers (see e.g. [15] [25]). However, we
take this position one step further by suggesting that community
information systems, and information and communication
technologies more broadly, reflect the interests, orientations, and
indeed the naive social theories of their designers, as well as being
shaped ex post facto by their users [8]. On this basis, we have
argued that academic researchers need to become involved in the
process of technology design as a way of exploring how to
improve the design of technology and as a way to test social
theory, including communication, information, and democratic
theory. However, our position suggests that users of information
systems must also be included in their initial conceptualization
and design in order to develop systems that reflect users' needs,
goals, and values. This leads quite naturally to creating
interdisciplinary (e.g. computer scientists, information scientists,
social scientists) application design teams that provide for
participation by community members; it is in such collaborative
arrangements that sustainable community information systems
may be designed.
The social construction of technology argues that technologies are
shaped by both designers and users and suggests that information
system design be undertaken collaboratively by those implicated
in both the technical and social conceptualization of the system.
However, issues of sustainability ultimately focus on reproduction
of the system. Once designed, information systems must be
deployed, and once deployed, they must be re-enacted on a
routine basis by their users to be sustained. Adaptive structuration
theory is one of the most fully developed theoretical perspectives
for understanding how new technologies come to reproduce social
structures or to generate structural change in particular social
contexts. DeSanctis and Poole [4] base their work on structuration
processes originally described by Giddens [5].
Giddens [5] suggests that technologies in organizations either
reproduce existing social structure or change social structure by
virtue of the kinds of structures that are instantiated when social
actors use technologies. Structure consists of rules and resources
that actors draw upon to produce social behavior. For DeSanctis
and Poole [4], social structures are physically incorporated in new
technologies in two complementary ways. First, technologies
embody rules and resources embedded in the form of particular
material capabilities, functionalities, and features that comprise a
variety of behavioral options to be used in constructing social
action. Second, the "spirit" of a technology, also considered to be
a property of the technology, expresses the values and goals that
are brought to bear upon the tasks the technology was originally
intended to accomplish. Together the features and spirit of a
technology comprise its "structural potential," or the range of
possible actions that users can draw upon to constitute or
reproduce social structures in technology use. Orlikowski [17]
disputes a portion of this conceptualization, noting that, according
to Giddens [5], structure has a "virtual" rather than material
existence, and thus can never be physically incorporated into
technology. Instead, "[w]hile a technology may be seen to
embody particular symbol and material properties, it does not
embody structures because those are only instantiated in practice"
([17], p. 206) and, if reproduced, are systematically repeated over
time.
Orlikowski's [17] point is that users may draw upon only some of
a technology's features, and may do that in ways that depart
substantially from the original conceptualizations of designers. In
essence, users "enact" technology in their collective, systematic,
and routine use of a technology, reproducing some of the
technology and some of its associated structures through practice.
Orlikowski's [17] term "technologies-in-practice" references the
idea that as users engage selectively with particular technological
features, particular structures, or sets of rules and resources
associated with the technology, are selectively reconstituted.
Thus, a technology-in-practice is a "repeatedly experienced,
personally ordered and edited version of the technological artifact,
being experienced differently by different individuals and
differently by the same individuals depending on the time or
circumstances" ([17], p. 408).
Applied to sustainability, our questions center on the conditions
under which users "appropriate" the system. For community
information systems, there are generally two kinds of users (information providers and information consumers) and, of course, the same individuals may play both user roles. Thus, our
questions become: Under what conditions do users collectively
and routinely draw upon and apply particular features of a
community information system? When do they reference the way
their system "should" work in order to construct a shared
perspective about community action? Through regular and routine
enactments of technology in regular use, users reproduce the rules
and resources or structures of community life that are instantiated
in technology use. This is not to say that "unfaithful"
appropriations, or those that are out-of-line with the spirit of the
technology, cannot occur; but it is to say that it is unlikely they
will sustain a community information system.
FACTORS RELATED TO SUSTAINABLE COMMUNITY INFORMATION SYSTEMS
We begin our discussion of factors related to sustainability by
distinguishing between the effectiveness and the sustainability of
computerized community information systems. Community
information systems are designed and advocated with many goals
in mind, some of which focus on traditional issues of community
development, such as decreasing unemployment, stimulating
economic growth, improving health and social welfare, and others
focus on building social capital, or enhancing interest and
participation in government decision making processes. The issue
of effectiveness addresses whether such systems are achieving the
goals for which they were designed. Sustainability, on the other
hand, addresses whether the information system is able to endure
past its initial launching phase, whether it is used and reproduced
by its intended audience, and whether it can continue to attract
resources beyond those obtained for initial development and
deployment. Clearly these two concepts are not irrelevant to each
other, but neither are they the same. It is possible that questions
of sustainability logically precede those of effectiveness, but there
may also be important relationships between effectiveness and
sustainability.
Sustainability has long been a consideration in the development of
information systems. Indeed, the failure rate of new IT
applications in the public sector has motivated significant interest
in addressing the issue of sustainability and speculation about the
extent to which participation in system development is related
ultimately to system adoption and use [10] [11]. More
specifically, government services are increasingly out-sourced to
not-for-profit organizations that may not be experienced in
collaboration [3]. Information technology makes it possible for
organizations to collaborate in providing information but whether
or not such collaborations actually take place is more than a
technical issue. The development of any information system, and
particularly collaborative systems, requires organizations to
change, in a very real way, some of their routine modes of
operation and incorporate new behaviors. Scholl's [22] research
finds stakeholder involvement and the commitment of senior executives to be highly related to the integration of e-government projects into business process change for government organizations. Stakeholder involvement has long been acknowledged as a key element in the construction of community information systems, although, applied to this context rather than
that of traditional hierarchical organizations, the idea bears further
scrutiny. We have also seen the commitment of key executives
playing a role in our own development work. We discuss each of
these two ideas at some length below, and add a third:
development of a critical mass of users.
3.1 Stakeholder Involvement
Our work was motivated in part by Schuler's [23] invitation to
academic researchers to collaborate with communities in building
community networking projects. At the time, it was fair to
characterize our institution's hometown, Troy, NY, as a "digitally
divided" community. Our experiences suggested that new
technologies and their potential seemed to be of interest to the
members of the community (see [9]). But many community and
government organizations lacked access to hardware and network
connections as well as the expertise needed for using this
equipment. It seemed most likely that we would need to do more
to generate interest in the development of a community
information service in order to stimulate participation from likely
stakeholders in such a project.
Connected Kids was conceived in Fall 1999 in the course of
discussions among Troy City Government representatives on the
topic of how new technologies might usefully be employed to
provide services to the community. At the time we learned that
the mayor sought to reinvigorate the City's office of youth
services and had speculated about whether these technologies
could be used to provide one of that office's primary and most
popular functions, which was to disseminate information about
resources and programs sponsored by not-for-profit organizations
as well as those sponsored by Troy's own Department of
Recreation. It seemed clear to us the World Wide Web might
indeed be used for such purposes. Thus, Connected Kids was
conceived as both a digital government project as well as a
community information system. We received initial assurances
that the City would administer the information system after it had
been successfully designed and deployed.
Connected Kids began with sensitivity to the need for stakeholder
involvement, particularly that of participating organizations that
we hoped would be information providers. We were aware that
the "best way to kill a community network," was to fail to involve
the community in system development [24]. We took seriously
Gygi's [6] prediction that the degree of community involvement
and the extent to which the project represented community
interests and participation would likely affect political and
economic outcomes. Thus, although our project began initially as
a collaboration between academic researchers and government
administrators, we moved quickly to invite community
organizations to participate at an orientation meeting in February
2000 and held a series of focus group discussions in October 2000
in which we explored with representatives of participating
organizations how such an information system might be
conceptualized to best meet their information needs. In Fall 2001
and Winter 2002 we undertook a series of participatory design
sessions in which representatives of participating organizations
were introduced to portions of a system prototype based on their
previous contributions and asked to describe their experiences and
suggest improvements. Finally, as we designed interfaces in
Summer and Fall 2002, we again consulted with representatives of
participating organizations in user testing sessions. By Fall 2003
and Winter 2004, we had demonstrated the system and trained
numerous representatives of participating organizations, who
reportedly found our interface pages easy to use. However, these
same organizations were not spontaneously, or frequently, entering
information about their programs or activities for youth.
Based on contributions from our collaborating organizations, the
design of Connected Kids reflected much of the best wisdom
about community information systems: the system could be used
to both create and easily update data [2] [13]; we had involved
end user groups (kids and their parents) in the design as well [27]
[12]; and the system focused principally on information deemed
crucial by our participating organizations, information that we
expected had the capacity to be integrated into the routine lives of
the communities they serve [21]. Further, access to technology
lost its urgency as an issue, since it is no longer the case that our
participating organizations lack access to networking technology.
Thus, we did not attribute our problems with data entry to system
attributes. Instead, we considered the suggestion by Scott and
Page [18] that "sustainable technologies are processes (authors'
emphasis); they are not products." In traditional hierarchical
organizations, lower levels of stakeholder involvement may be
sufficient for system acceptance. However, a community
information system requires that members of the community
contribute information and it must be seen to be in their
continuing interest to do so. In Fall 2004, we sought to create a quasi-formal governance body to administer the project, a
Connected Kids Advisory Board, recruiting representatives from
10 organizations (from among the most influential) to commit to
guiding the short-term future of the project (approximately 1 year)
as we transition to system deployment in Spring 2005. Our Board
has now met for several months, and it remains to be seen whether
this vehicle will foster a sufficient level of system participation,
perhaps ownership, to sustain Connected Kids through
deployment and beyond.
3.2 Commitment from Key Players
Scholl [22] finds that support from key executives is critical to
incorporating e-government projects into an organization's
business processes, and our experience underscores this finding.
In fact, we would expand the range of individuals likely to be
considered "key." Not only are senior executives important, but
so also are others in the organization that have any significant job-related
association with the information system under
development. Application development projects take place over
potentially long periods of time and involve many different
individuals in many different roles. Job occupants in the public
sector may be comparatively stable, but they are not permanent.
Those who champion an application development project may not
be around when it is time to deploy the system.
What is generally not recognized when academic researchers
undertake technology projects in organizational contexts is that
they may need to become the primary advocates for deployment of
the project. This is not a typical role for researchers, who may
with ample justification see their obligations confined to simply
performing the research or developmental work on the project.
However, researchers who seek to develop sustainable products
may find themselves required to situate the project politically
within the organization or group of organizations for which it was
originally intended. They may in fact be the only individuals who
can play this particular role.
In the case of Connected Kids, we secured commitment from both
the mayor and deputy mayor of Troy, along with, of course, that
of our primary organizational liaison. We continued to work for
quite some time, reporting regularly on progress to our liaison,
without realizing that this individual was getting progressively
involved in turf battles with another technology-oriented actor in
city government. As our liaison's influence within city
government eroded, so also did support for our project without
our awareness. Once we understood what was happening, we
acted quickly, and luckily in sufficient time, to re-establish the
importance of the project with the mayor and deputy mayor.
From that point hence our primary liaison was the deputy mayor.
Unfortunately, mayoral administrations come and go, and the
administration that was our primary government partner was
voted out of office in November 2003. Within six months all the
individuals who had any primary working relationship with our
project were gone, and we faced the need to re-create commitment
with a new mayor and deputy mayor, a process that took
considerable time and that has delayed implementation by nearly a
year. Of course, this is not something we could have prevented.
However, it is interesting to note that our new liaison with city
government is an individual who had worked in city government
under both administrations.
3.3 Critical Mass of Users
Ultimately, for a community information system to endure, it must
establish a significant number of regular users, who enact the
technology for at least some of the purposes for which it was
originally intended, and in so doing, reproduce community
structures that are instantiated in the technology. In our case, this
means bringing an audience of end users to the system who are
interested in information about youth that is disseminated through
it. Connected Kids is in many respects similar to an electronic
"public good" [20], that is, a product established through the
contributions and for the benefit of a set of actors that also has the
effect of benefiting other users.
In our case, the system was designed by and for youth
organizations, which serve as information providers. We have
sought to show how these organizations may appropriate the
technology and accomplish what Bannon and Griffin [1] suggest,
which is to use the technology as a means to "further their own
ends" (p. 48) rather than as an end in itself. However, the added
value of a collaborative information system is that in bringing an
audience to information distributed by one organization, that
audience is also available to peruse the information of other
organizations. Thus the overall effect is to increase the
cumulative size of the audience for all involved. Further, the
external user audience benefits from the ease of accessing
information from a wide variety of organizations that all provide
services for youth.
Patterson and Kavanaugh [19] argue that the pro-social benefits of
a public good are achieved when the system achieves a critical
mass of users. In our case, this would equal the number of users
that information providers consider to make it worth their
continuing efforts to input information about their activities. As
Markus [16] points out, the number of users will depend on the
diversity and value of the information available through the
system. Thus, sustainability is dependent on the reciprocal
interdependence of both information providers and information
users. Both information in the system and use must achieve
critical mass, and this must happen relatively soon after
deployment.
Our strategy is to bring both of these activities together in time.
We have asked the Connected Kids Advisory Board to develop a
marketing campaign that will accompany the deployment of the
system and they are currently embarked on this activity.
Bolstered by the participation of RPI and the City of Troy, we
seek to attract a large external audience to experiment with the
system. Advisory Board members recognize that the success of
the marketing campaign depends on the presence of considerable
amount of high quality information in the system, and have
committed to providing it. In this way, we seek to jumpstart a
virtuous circle in which sufficient quantities of information and
use reciprocally reinforce each other creating a critical mass of
information providers and consumers.
CONCLUSION
Poised for deployment, Connected Kids enables us to test a range
of expectations generated by this set of considerations regarding
sustainability. Within the next year, we should be able to assess
relationships related to stakeholder involvement, such as those between factors (perceptions of involvement in system design and administration, actual participation in system design and testing activities, and perceptions of ownership) and outcomes (the extent of data contributed to the system, perceptions of commitment to the system, and the significance of organizational resources devoted to participation). Further, we
should be able to assess relationships between the perceived
amount and quality of information in the system and end user
satisfaction, likelihood of returning to the system, and interest in
becoming more involved in system activities. Of course, we will
continue to be able to comment anecdotally on what we have
learned about the politics of technology diffusion in public sector
organizations.
REFERENCES
[1] Bannon, L.J., and Griffin, J. New technology, communities, and networking: Problems and prospects for orchestrating change. Telematics and Informatics, 18, (2001), 35-49.
[2] Cowan, D.D., Mayfield, C. T., Tompa, F. W., and Gasparini, W. New role for community networks. Communications of the ACM, 41, 4, (1998), 61-63.
[3] Dawes, S.S., Bloniarz, P. A., and Kelly, K. L. Some Assembly Required: Building a Digital Government for the 21st Century. Albany, NY: Center for Technology in Government, 1999. http://www.ctg.albany.edu/research/workshop/dgfinalreport.pdf
[4] DeSanctis, G., and Poole, M.S. Capturing the complexity in advanced technology use: Adaptive structuration theory. Organization Science, 5, (1994), 121-147.
[5] Giddens, A. The Constitution of Society. University of California Press, Berkeley, CA, 1984.
[6] Gygi, K. Uncovering Best Practices: A Framework for Assessing Outcomes in Community Computer Networking, 1996. http://www.laplaza.org/about
lap/archives/cn96/gygi.html
[7] Harrison, T., and Stephen, T. Researching and creating community networks. In Doing Internet Research, S. Jones ed. Sage, Newbury Park, CA, 1999, 221-241.
[8] Harrison, T., and Zappen, J. Methodological and theoretical frameworks for the design of community information systems. Journal of Computer-Mediated Communication, 8, 3 (2003). http://www.ascus.org/jcmc/vol8/issue3
[9] Harrison, T.M., Zappen, J.P., and Prell, C.L. Transforming new communication Technologies into Community Media. In Community Media in the Information Age: Perspectives and Prospects, N. Jankowski and O. Prehn eds., Hampton, Cresskill, NJ, 2002, 249-269.
[10] Heeks, R. Better information age reform: Reducing the risk of information systems failure. In Reinventing Government in the Information Age: International Practice in IT-Enabled Public Sector Reform, R. Heeks ed., Routledge, London, 1999, 75-109.
[11] Heeks, R., and Bhatnager, S. Understanding success and failure in information age reform. In Reinventing Government in the Information Age: International Practice in IT-Enabled Public Sector Reform, R. Heeks ed., Routledge, London, 1999, 50-73.
[12] Howley, K. Equity, access, and participation in community networks. Social Science Computer Review, 16, (1998), 402-410.
[13] Keenan, T. P., and Trotter, D. M. The changing role of community networks in providing citizen access to the Internet. Internet Research: Electronic Networking Applications and Policy, 9, 2, (1999), 100-108.
[14] Kubicek, H., and Wagner, R.M. Community networks in a generational perspective: The change of an electronic medium within three decades. Information, Communication, and Society, 5, (2002), 291-319.
[15] Lievrouw, L., and Livingstone, S. The social shaping and consequences of ICTs. In The Handbook of New Media, L. Lievrouw and S. Livingstone, Eds. Sage, London, 2002, 121.
[16] Markus, L. Toward a "critical mass" theory of interactive media. Communication Research, 14, (1987), 491-511.
[17] Orlikowski, W. Using technology and constituting structures: A practice lens for studying technology in organizations. Organization Science, 11, (2000), 404-428.
[18] Page, M., and Scott, A. Change agency and women's learning. Information, Communication & Society, 4, 4, (2001), 528-559.
[19] Patterson, S.J., and Kavanaugh, A.L. Building a sustainable community network: An application of critical mass theory. Electronic Journal of Communication, 2, (2001). http://www.cios.org/www/ejc/v11n201.htm.
[20] Rafaeli, S., and LaRose, R. Electronic bulletin boards, and "public goods" explanations of collaborative mass media. Communication Research, 20, 2, (1993), 277-297.
[21] Rosenbaum, H. Web-based community networks: A study of information organization and access. ASIS '98: Access in the global information economy. In Proceedings of the 61st Annual Meetings of the American Society for Information Society, 35, 1998, 516-527.
[22] Scholl, H.J. Current practices in E-government-induced business process change. Proceedings of the National Conference on Digital Government, Digital Government Research Center, 2004, 99-108.
[23] Schuler, D. Community Computer Networks: An Opportunity for Collaboration among Democratic Technology Practitioners and Researchers, 1997. http://www.sn.org/ip/commnet/oslo-197.text
[24] Schuler, D. How to Kill Community Networks. Hint: We May Have Already Started, 1996. http://www.scn.org/ip/commnet/kill-commnets.html
[25] Sproull, L., and Kiesler, S. Connections: New Ways of Working in the Networked Organization. MIT Press, Cambridge, MA, 1991.
[26]
51 | Can Machine Learning Be Secure? | Machine learning systems offer unparalled flexibility in dealing with evolving input in a variety of applications, such as intrusion detection systems and spam e-mail filtering. However , machine learning algorithms themselves can be a target of attack by a malicious adversary. This paper provides a framework for answering the question, "Can machine learning be secure?" Novel contributions of this paper include a taxonomy of different types of attacks on machine learning techniques and systems, a variety of defenses against those attacks, a discussion of ideas that are important to security for machine learning, an analytical model giving a lower bound on attacker's work function, and a list of open problems. | INTRODUCTION
Machine learning techniques are being applied to a growing
number of systems and networking problems, particularly
those problems where the intention is to detect anomalous
system behavior. For instance, network Intrusion Detection
Systems (IDS) monitor network traffic to detect abnormal
activities, such as attacks against hosts or servers. Machine
learning techniques offer the benefit that they can detect
novel differences in traffic (which presumably represent attack
traffic) by being trained on normal (known good) and
attack (known bad) traffic. The traditional approach to designing
an IDS relied on an expert codifying rules defining
normal behavior and intrusions [26]. Because this approach
often fails to detect novel intrusions, a variety of researchers
have proposed incorporating machine learning techniques
into intrusion detection systems [1, 16, 18, 24, 38, 41]. On
the other hand, use of machine learning opens the possibility
of an adversary who maliciously "mis-trains" a learning system
in an IDS. A natural question arises: what techniques can an adversary use in their attacks to confuse a learning system?
This paper explores the larger question, as posed in the title
of this paper, can machine learning be secure? Specific
questions that we examine include:
- Can the adversary manipulate a learning system to permit a specific attack? For example, can an attacker leverage knowledge about the machine learning system used by a spam e-mail filtering system to bypass the filtering?
- Can an adversary degrade the performance of a learning system to the extent that system administrators are forced to disable the IDS? For example, could the attacker confuse the system and cause valid e-mail to be rejected?
- What defenses exist against adversaries manipulating (attacking) learning systems?
- More generally, what is the potential impact from a security standpoint of using machine learning on a system? Can an attacker exploit properties of the machine learning technique to disrupt the system?
The issue of machine learning security goes beyond intrusion
detection systems and spam e-mail filters. Machine
learning is a powerful technique and has been used in a variety
of applications, including web services, online agent
systems, virus detection, cluster monitoring, and a variety
of applications that must deal with dynamically changing
data patterns.
Novel contributions of this paper include a taxonomy of different
types of attacks on machine learning techniques and
systems, a variety of defenses against those attacks, a discussion
of ideas that are important to security for machine
learning, an analytical model giving a lower bound on attacker's
work function, and a list of open problems.
The rest of this paper is organized as follows: Section 2 discusses
machine learning and how it is typically used in a
system, Section 3 develops a taxonomy of attacks, Section 4
introduces potential defenses against attacks and explores
their potential costs, Section 5 identifies several of the ideas
that are important to security for machine learning, Section
6 presents an analytical model that examines an attack
to manipulate a naive learning algorithm, Section 7 discusses
related work, potential research directions, and our conclusions.
REVIEW
A machine learning system attempts to find a hypothesis
function f that maps events (which we call points below)
into different classes. For example, an intrusion detection
system would find a hypothesis function f that maps an
event point (an instance of network behavior) into one of
two results: normal or intrusion.
One kind of learning system called supervised learning works
by taking a training data set together with labels identifying
the class for every point in the training data set.
For example, a supervised learning algorithm for an IDS
would have a training set consisting of points corresponding
to normal behavior and points corresponding to intrusion
behavior. The learning algorithm selects the hypothesis
function f that best predicts the classification of a point.
More complicated learning algorithms can deal with event
points that are both labeled and unlabeled and furthermore
can deal with continuous streams of unlabeled points so that
training is an ongoing process. In this paper, we call these
algorithms online learning systems.
The remainder of this subsection presents a concise overview
of concepts in statistical learning theory. The presentation
below is formal and can be skipped on a first reading. For
a fuller discussion with motivation, refer to [11, 31].
A predictive learning problem is defined over an input space X, an output space Y, and a loss function \ell : Y \times Y \to \mathbb{R}. The input to the problem is a training set S, specified as \{(x_i, y_i) \in X \times Y\}, and the output is a hypothesis function f : X \to Y. We choose f from a hypothesis space (or function class) F to minimize the prediction error given by the loss function. In many cases, researchers assume stationarity: that the distribution of data points encountered in the future will be the same as the distribution of the training set. Stationarity allows us to reduce the predictive learning problem to a minimization of the sum of the loss over the training set:

f^* = \arg\min_{f \in F} \sum_{(x_i, y_i) \in S} \ell(f(x_i), y_i)    (1)
Loss functions are typically defined to be non-negative over all inputs and zero when f(x_i) = y_i. A commonly used loss function is the squared-error loss \ell_{sq}(f(x_i), y) = (f(x_i) - y)^2.
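As an illustration of Equation (1), the sketch below minimizes the summed squared-error loss over a toy training set using a linear hypothesis class. The data, function names, and closed-form solver are our own assumptions, not part of the paper.

```python
# Minimal sketch of Equation (1): choose the hypothesis in a linear function
# class F that minimizes the summed squared-error loss over the training set S.
# All names and data here are illustrative.
import numpy as np

def squared_error(prediction, label):
    """l_sq(f(x_i), y_i) = (f(x_i) - y_i)^2"""
    return (prediction - label) ** 2

def fit_linear_hypothesis(X, y):
    """Minimize sum_i l_sq(w.x_i + b, y_i) in closed form (least squares)."""
    A = np.hstack([X, np.ones((X.shape[0], 1))])  # append a bias column
    w, *_ = np.linalg.lstsq(A, y, rcond=None)     # argmin of the summed loss
    return lambda x: np.append(x, 1.0) @ w        # the learned hypothesis f

# Toy training set S = {(x_i, y_i)}
X = np.array([[0.0], [1.0], [2.0], [3.0]])
y = np.array([0.1, 0.9, 2.1, 2.9])
f = fit_linear_hypothesis(X, y)
train_loss = sum(squared_error(f(x), t) for x, t in zip(X, y))
print(f"training loss = {train_loss:.4f}, f(4) = {f(np.array([4.0])):.2f}")
```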
The hypothesis space (or function class) F can be any representation of functions from X to Y, such as linear functions, polynomials, boolean functions, or neural networks. The choice of F involves a tradeoff between expressiveness and ability to generalize. If F is too expressive, it can overfit the training data. The extreme case is a lookup table that maps x_i to y_i for each instance of the training set but will not generalize to new data. A linear function, on the other hand, will generalize what it learns on the training set to new points, though it may not be sufficiently expressive to describe intricate data sets. We typically use simple function classes, such as linear functions, to avoid overfitting.
We can describe a more general learning problem by dropping the requirement that the training examples include all the labels y_i. The case where all labels are present is referred to as supervised learning, when no labels are present the problem is unsupervised, and when some labels are present the problem is semi-supervised. In all these cases we can pose the learning problem as the minimization of some measure over the training set:

f^* = \arg\min_{f \in F} \sum_{x_i \in S} L(x_i, f)    (2)
2.2 Terminology and Running Example
To illustrate some of our contributions, we use a running example
throughout this paper: a network Intrusion Detection
System (IDS). This IDS receives network events x \in X and
classifies each event x as either f (x) = normal or f (x) =
intrusion. The literature describes a number of algorithms
for learning f over time, but we wish to consider the impact
of malicious input on the learning algorithm. This paper
poses the question: can a malicious party send events to the
IDS that will cause it to malfunction? Possible types of attacks
on the IDS include attacks on the learning algorithm,
causing the IDS to create an f that misclassifies events. As
we discuss in the next section, this is only one of a number
of types of attacks that an adversary can make on an IDS.
It is important to be careful about notation here. When we
speak of attacks, we mean an attack on the learning system
(e.g., the learner in an IDS). Attacks may try to make the
learner mis-learn, fail because of denial of service, report
information about its internal state, etc. "Attack" should be
distinguished from "intrusion." An attack targets a learning
system; an intrusion targets a computer system (such as a
system protected by an IDS). While many researchers use
the word "attack" to include intrusions, in this paper we are
careful to use the word "attack" only to mean an attack on
a learner.
We do not want to restrict ourselves to particular learning
algorithms used by intrusion detection systems to choose
hypotheses. However, we allow adversaries that have deep
understanding of the learning algorithms.
Similarly, we do not discuss mechanisms for translating network
level events into a form relevant to the learner. We call
each unit of data seen by the learner a data point, or simply
a point. In the context of the IDS, our discussion encompasses
continuous, discrete, or mixed data. We assume that
X is a metric space, allowing us to freely discuss distances between points. Furthermore, we assume the set of points classified as normal by the IDS forms multiple contiguous subsets in X. The border of this set is called the decision boundary.

Table 1: The attack model.
- Causative, Targeted. Integrity: permit a specific intrusion. Availability: create sufficient errors to make the system unusable for one person or service.
- Causative, Indiscriminate. Integrity: permit at least one intrusion. Availability: create sufficient errors to make the learner unusable.
- Exploratory, Targeted. Integrity: find a permitted intrusion from a small set of possibilities. Availability: find a set of points misclassified by the learner.
- Exploratory, Indiscriminate. Integrity: find a permitted intrusion.
Below, we consider a variety of scenarios and assumptions.
ATTACKS
We give relevant properties for analyzing attacks on machine
learning systems.
Influence
- Causative: Causative attacks alter the training process through influence over the training data.
- Exploratory: Exploratory attacks do not alter the training process but use other techniques, such as probing the learner or offline analysis, to discover information.

Specificity
- Targeted: The specificity of an attack is a continuous spectrum. At the targeted end, the focus of the attack is on a particular point or a small set of points.
- Indiscriminate: At the indiscriminate end, the adversary has a more flexible goal that involves a very general class of points, such as "any false negative."

Security violation
- Integrity: An integrity attack results in intrusion points being classified as normal (false negatives).
- Availability: An availability attack is a broader class of attack than an integrity attack. An availability attack results in so many classification errors, both false negatives and false positives, that the system becomes effectively unusable.
These three axes define a space of attacks; Table 1 provides
a concise summary.
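To make the three axes concrete, the small sketch below encodes the taxonomy as a data structure and instantiates the top-left cell of Table 1. The class and variable names are illustrative, not from the paper.

```python
# Illustrative encoding of the attack taxonomy (influence x specificity x
# security violation). The names here are ours, not the paper's.
from dataclasses import dataclass
from enum import Enum

class Influence(Enum):
    CAUSATIVE = "causative"        # alters the training process
    EXPLORATORY = "exploratory"    # only probes or analyzes the learner

class Specificity(Enum):
    TARGETED = "targeted"          # a particular point or small set of points
    INDISCRIMINATE = "indiscriminate"

class Violation(Enum):
    INTEGRITY = "integrity"        # false negatives: intrusions pass as normal
    AVAILABILITY = "availability"  # enough errors to make the system unusable

@dataclass(frozen=True)
class Attack:
    influence: Influence
    specificity: Specificity
    violation: Violation

# Example: fooling an IDS into accepting one known exploit (Table 1, top left).
permit_specific_intrusion = Attack(Influence.CAUSATIVE,
                                   Specificity.TARGETED,
                                   Violation.INTEGRITY)
print(permit_specific_intrusion)
```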
In causative attacks, the adversary has some measure of control
over the training of the learner. An attack that causes
the learner to misclassify intrusion points, for example an
attack that fools an IDS into not flagging a known exploit
as an intrusion, is a causative integrity attack. The distinction
between targeted and indiscriminate causative integrity
attacks is the difference between choosing one particular exploit
or just finding any exploit. A causative availability
attack causes the learner's performance to degrade. For example
, an adversary might cause an IDS to reject many legitimate
HTTP connections. A causative availability attack
may be used to force the system administrator to disable
the IDS. A targeted attack focuses on a particular service,
while an indiscriminate attack has a wider scope.
Exploratory attacks do not attempt to influence learning;
they instead attempt to discover information about the state
of the learner. Exploratory integrity attacks seek to find
intrusions that are not recognized by the learner.
3.2 Online Learning
A learner can have an explicit training phase or can be
continuously trained (online learner). Online learning allows
the learner to adapt to changing conditions; the assumption
of stationarity is weakened to accommodate long-term
changes in the distribution of data seen by the learner.
Online learning is more flexible, but potentially simplifies
causative attacks. By definition, an online learner changes
its prediction function over time, so an adversary has the opportunity
to shape this change. Gradual causative attacks
may be difficult to detect.
DEFENSES
In this section we discuss potential defenses against attacks.
This section describes speculative work, and the efficacy of
these techniques in practice is a topic for future research.
4.1 Robustness
To increase robustness against causative attacks we constrain
the class of functions (hypotheses) that the learner
considers. The constraint we consider is the statistical technique
of regularization. Regularization extends the basic
learning optimization in Equation (1) by adding a term J(f )
that penalizes complex hypotheses:
f^* = \arg\min_{f \in F} \left\{ \sum_{(x_i, y_i) \in S} \ell(f(x_i), y_i) + \lambda J(f) \right\}    (3)

Here \lambda adjusts the trade-off. The penalty term J(f) can
be as simple as the sum of squares of the parameters of f .
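A minimal sketch of Equation (3) with that sum-of-squares penalty (ridge-style regularization) is shown below; the data, the closed-form solver, and the choice of lambda values are illustrative assumptions.

```python
# Minimal sketch of Equation (3): squared-error loss plus a sum-of-squares
# penalty J(f) on the parameters (ridge regression), with lambda trading off
# data fit against hypothesis complexity. Data and lambda values are
# illustrative only.
import numpy as np

def fit_regularized(X, y, lam):
    """argmin_w  sum_i (w.x_i - y_i)^2 + lam * ||w||^2  (closed form)."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 5))
y = X @ np.array([1.0, 0.0, 0.0, 0.0, 0.0]) + 0.1 * rng.normal(size=20)

for lam in (0.0, 1.0, 10.0):
    w = fit_regularized(X, y, lam)
    print(f"lambda={lam:5.1f}  ||w||^2={w @ w:.3f}")
# Larger lambda shrinks the parameters: a smoother hypothesis that leaves an
# adversary less complexity to exploit, at the cost of flexibility to fit data.
```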
Regularization is used in statistics to restrict or bias the
choice of hypothesis when the problem suffers from lack of data or noisy data. It can also be interpreted as encoding a prior distribution on the parameters, penalizing parameter choices that are less likely a priori. Regularization and prior distributions can both be viewed as penalty functions in Equation (3) [42].

Table 2: Defenses against the attacks in Table 1.
- Causative, Targeted. Integrity: regularization, randomization. Availability: regularization, randomization.
- Causative, Indiscriminate. Integrity: regularization. Availability: regularization.
- Exploratory, Targeted. Integrity: information hiding, randomization. Availability: information hiding.
- Exploratory, Indiscriminate. Integrity: information hiding.
The constraint added to the learning problem by the penalty
term may help our defenses in two ways. First, it has the effect
of smoothing the solution, removing complexity that an
adversary might exploit in attacks. Second, prior distributions
can be a useful way to encode expert knowledge about
a domain or use domain structure learned from a preprocessing
step. In the simplest case, we might have a reasonable
guess for the parameters (such as the mean) that we wish
to refine; in a more complex situation, we could perform an
analysis of a related dataset giving correlation information
which informs a multivariate Gaussian prior on the parameters
[28]. When the learner has more prior information (or
constraints) on which to base the learning, there is less dependence
on exact data fitting, so there is less opportunity
for the adversary to exert influence over the learning process.
4.2 Detecting Attacks
The learner can benefit from the ability to detect attacks
even if they are not prevented. Detecting attacks can be difficult
even when the adversary is not attempting to conceal
them. However, we may be able to detect causative attacks
by using a special test set. This test set could include several
known intrusions and intrusion variants, as well as some
random points that are similar to the intrusions. After the
learner has been trained, misclassifying a disproportionately
high number of intrusions could indicate compromises.
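A sketch of that idea follows: after (re)training, audit the learner against a held-back set of known intrusions and alarm if too many now pass as normal. The classifier interface and the 10% threshold are assumptions for illustration.

```python
# Sketch of detecting a causative attack with a held-back test set of known
# intrusions and intrusion variants: if retraining suddenly lets a
# disproportionate number of them pass as normal, raise an alarm. The 10%
# threshold and the classifier interface are illustrative assumptions.
def audit_learner(classify, known_intrusions, max_miss_rate=0.10):
    """classify(point) -> 'normal' or 'intrusion'."""
    misses = sum(1 for p in known_intrusions if classify(p) == "normal")
    miss_rate = misses / len(known_intrusions)
    if miss_rate > max_miss_rate:
        print(f"ALERT: {miss_rate:.0%} of known intrusions now pass as normal")
        return False
    return True

# Toy usage: a learner that was nudged into accepting large-valued points.
compromised = lambda p: "normal" if p > 3 else "intrusion"
print(audit_learner(compromised, known_intrusions=[4.0, 5.5, 2.0, 6.1]))
```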
To detect naive exploratory attacks, a separate clustering
algorithm could be run against data classified by the learner.
The sudden appearance of a large cluster near the decision
boundary could indicate systematic probing. This type of
defense is akin to port scan detection, which has become an
arms race between port scanners and IDS [26].
Detecting an attack gives the learner information about the
adversary's capabilities. This information may be used to
reformulate defense strategies.
As the adversary's control over the data increases, the best
strategy for the learner is to ignore potentially tainted data.
Otherwise, the adversary can exploit misplaced trust. These
ideas have been formalized within the context of deception
games [14, 32], which typically assume all players know the
extent to which other players may manipulate data. However
, if the parties estimate each other's abilities, more sophisticated
strategies emerge.
4.3 Disinformation
In some circumstances, the learner may be able to alter the
data seen by the adversary. This strategy of disinformation
has the goal of confusing the adversary's estimate of the
learner's state. In the simplest case, the adversary would
then be faced with a situation not unlike a learner under
an indiscriminate causative availability attack. The goal of
the learner is to prevent the adversary from learning the
decision boundary. Please note how the roles of adversary
and learner have been reversed.
A more sophisticated learner could trick the adversary into
believing that a particular intrusion was not included in the
training set. This apparently permitted "intrusion" would
act as a honeypot [27], causing the adversary to reveal itself.
An increase in the incidence of that particular attack would
be detected, revealing the existence of an adversary. In this
case again, roles would reverse, and the adversary would face
a situation analogous to a learner subjected to a targeted
causative integrity attack.
4.4 Randomization for Targeted Attacks
Targeted attacks hinge on the classification of one point or a
small set of points. They are more sensitive to variations in
the decision boundary than indiscriminate attacks because
boundary movement is more likely to change the classification
of the relevant points.
This suggests randomization as a potential tool against targeted
causative attacks. In such an attack, the adversary
has to do a particular amount of work to move the decision
boundary past the targeted point. If there is some randomization
in the placement of the boundary and the adversary
has imperfect feedback from the learner, more work is required.
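The toy sketch below illustrates one form of such randomization for the hypersphere setting: jitter the effective radius on each query so that probes near the boundary receive inconsistent feedback. The 5% jitter scale is an assumption, and, as discussed in Section 4.5, randomization also raises the learner's base error rate.

```python
# Toy sketch of boundary randomization: each query sees a slightly jittered
# radius, so a targeted adversary probing near the boundary gets noisy
# feedback and must do more work to localize it. The jitter scale (5% of R)
# is our assumption for illustration.
import random

def randomized_classify(distance_to_centroid, radius, jitter=0.05, rng=random):
    effective_radius = radius * (1.0 + rng.uniform(-jitter, jitter))
    return "normal" if distance_to_centroid <= effective_radius else "anomaly"

random.seed(1)
# A probe sitting right on the nominal boundary gets inconsistent answers.
print([randomized_classify(1.0, 1.0) for _ in range(5)])
```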
4.5 Cost of Countermeasures
The more we know about the distribution of training data,
the less room there is for an adversary to manipulate the
learner. The disadvantage, however, is that the legitimate
data has less influence in the learning process. A tension
exists between expressivity and constraint: as the learner
includes more prior information, it loses flexibility to adapt
to the data, but as it incorporates more information from
the data, it becomes more vulnerable to attack.
Equation (3) makes this tradeoff explicit with \lambda. In the adversarial
scenario, this tradeoff becomes more relevant because
the adversary may have influence over the data.
Randomization increases the adversary's work, but it also
will increase the learner's base error rate. Determining the
right amount of randomization is an open problem.
4.6 Summary of Defenses
Table 2 shows how our defenses discussed here relate to attack
classes presented in Table 1. (Information hiding is an
additional technique discussed in Section 5 below.)
DISCUSSION
A number of defenses and attacks upon machine learning
algorithms hinge upon the types of information available to
the adversary. Some of these involve information about the
decision boundary. Below we consider factors that influence
the security and secrecy of the decision boundary.
5.2 Scale of Training
Some machine learning systems are trained by the end user,
while others are trained using data from many users or organizations
. The choice between these two models is sometimes
cast as a tradeoff between the amount of training data
and the secrecy of the resulting classifier [3]. This issue also
applies to an IDS; if an IDS is trained each time it is deployed
then it will have comparatively little data regarding
normal network traffic. It will also have no chance to learn
about novel intrusions before seeing them in the wild.
Conversely, an IDS that uses a global set of rules would
be able to adapt to novel intrusion attempts more quickly.
Unfortunately, any adversary with access to a public IDS
classification function can test to ensure that its intrusion
points will be accepted by deployments of the same classification
function.
These issues are instances of a more general problem. In
some cases, it seems reasonable to assume the adversary has
little access to information available to the learner. However
, unless the adversary has no prior knowledge about the
learning problem at hand, we cannot assume all of the information
provided in the training set is secret. Therefore,
it is unclear how much is gained by attempting to keep the
training set, and therefore the state of the classifier, secret.
Many systems already attempt to achieve a balance between
global and local retraining [3]. Systems that take this approach
have the potential to outperform systems that perform
training at a single level. However, the relationships
between multilevel training, the adversary's domain knowledge
, and secrecy are not yet well understood.
5.2.1 Adversary Observations
Even without prior knowledge regarding a particular system
, an adversary still may deduce the state of the learning
algorithm. For example, if the learning system provides
feedback to the adversary (e.g., "Request denied"), then a
probing attack could be used to map the space of acceptable
inputs.
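The sketch below illustrates such a probing attack in one dimension: with only accept/deny feedback, binary search localizes a decision threshold to precision eps in on the order of log(1/eps) probes. The query interface and hidden threshold are assumptions for illustration.

```python
# Sketch of an exploratory probing attack: with only "Request denied"-style
# feedback, an adversary can binary-search along one feature direction and
# localize the decision boundary to precision eps in O(log(1/eps)) probes.
# The query interface and the hidden threshold are illustrative assumptions.
def probe_boundary(is_accepted, lo, hi, eps=1e-3):
    """Assumes is_accepted(lo) is True and is_accepted(hi) is False."""
    probes = 0
    while hi - lo > eps:
        mid = (lo + hi) / 2.0
        probes += 1
        if is_accepted(mid):
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0, probes

# Toy learner with a hidden threshold at 0.3173 on a single feature.
hidden = lambda x: x <= 0.3173
boundary, n = probe_boundary(hidden, lo=0.0, hi=1.0)
print(f"estimated boundary ~ {boundary:.4f} after {n} probes")
```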
If the adversary has no information regarding the type of
decision boundary used by the learner, this process could
require a number of probes proportional to the size of the
space. On the other hand, if the adversary knows which
learning algorithm is being used, a few well-chosen probes
could give the adversary sufficient knowledge of the learner's
state. As a standard security practice, we assume the learning
algorithm itself to be common knowledge.
Instead of expecting the learning algorithm to be a secret,
some systems attempt to prevent the adversary from discovering
the set of features the learning algorithm uses. This
may be realistic in systems with a small number of deployments
.
Ideally, we could produce an information theoretic bound
on the amount of information an adversary could gain by
observing the behavior of a particular algorithm on a particular
point. Using these bounds, we could reason about
the algorithm's robustness against probing attacks. In this
setting, it may also be interesting to distinguish between information
gained from normal points drawn from the data's
underlying distribution, intrusion points from a third party,
and (normal or intrusion) attack points of the adversary's
choosing.
An adversary with sufficient information regarding training
data, classifications of data points, or the internal state of a
learner would be able to deduce the learner's decision boundary
. This knowledge could simplify other types of attacks.
For instance, the adversary could avoid detection by choosing
intrusion points that will be misclassified by the learner,
or launch an availability attack by manipulating normal
points in a way that leads to misclassification. In either
case, by increasing the number of points that are in the region
that the defender incorrectly classifies, the adversary
could increase the error rate.
Some algorithms classify points by translating them into an
abstract space and performing the actual classification in
that space. The mapping between raw data and the abstract
space is often difficult to reason about. Therefore, it may be
computationally difficult for an adversary to use knowledge
of a classifier's decision boundary to generate "interesting"
attack points that will be misclassified.
One can imagine classes of decision boundaries that are
meaningful, yet provably provide an adversary with no information
regarding unclassified points. Even with complete
knowledge of the state of a learner that uses such a decision
boundary, it would be computationally intractable to find
one of a few "interesting" points in a sufficiently large search
space.
In some cases, the decision boundary itself may contain sensitive
information. For example, knowledge of the boundary
may allow an adversary to infer confidential information
about the training set. Alternatively, the way the decision
boundary was constructed might be a secret.
5.2.2 Security Properties
The performance of different algorithms will likely degrade
differently as the adversary controls larger fractions of the
training set. A measurement of an algorithm's ability to
deal with malicious training errors could help system designers
reason about and decide between different learners.
A simple approach would be to characterize an algorithm's
performance when subjected to a particular type of attack,
but this would lead to an arms race as adversaries devise
classes of attacks not well represented during the evaluation
of the algorithm.
Depending on the exact nature of the classification problem
, it may be possible to make statements regarding the
strength of predictions. For example, after making a classification
a learning algorithm could examine the training set
for that classification. It could measure the effect of small
changes to that training set; if small changes generate large
effects, the training set is more vulnerable to manipulation.
THEORETICAL RESULTS
In this section we present an analytic model that examines
a causative attack to manipulate a naive learning algorithm.
The model's simplicity yields an optimal policy for the adversary
and a bound on the effort required to achieve the
adversary's objective. We interpret the resulting bound and
discuss possible extensions to this model to capture more
realistic settings.
We discuss an outlier detection technique. Outlier detection
is the task of identifying anomalous data and is a widely used
paradigm in fault detection [40], intrusion detection [23],
and virus detection [33, 34]. We find the smallest region
that contains some fixed percentage of the observed data,
which is called the support of the data's distribution. The
outlier detector classifies points inside the support as normal
and those outside as anomalous. Outlier detection is often
used in scenarios where anomalous data is scarce or novel
anomalies could arise.
6.1 A Simple Model
One simple approach to outlier detection is to estimate the support of the normal data by a multi-dimensional hypersphere. As depicted in Figure 1(a), every point in the hypersphere is classified as normal and those outside the hypersphere are classified as outliers. The training algorithm fixes the radius of the hypersphere and centers it at the mean of the training data. The hypersphere can be fit into the learning framework presented above by a squared loss function, \ell_{sphere}(\bar{X}, x_i) = \|x_i - \bar{X}\|^2, where \bar{X} is the centroid of the data \{x_i\}. It is easy to show that the parameter that minimizes Equation (1) is the mean of the training data.
To make the hypersphere adaptive, the hypersphere is retrained
on new data allowing for a repeated attack. To
prevent arbitrary data from being introduced, we employ a
conservative retraining strategy that only admits new points
to the training set if they are classified as normal; we say
the classifier bootstraps itself. This learning framework is
not meant to represent the state of the art in learning techniques; instead, it is an illustrative technique that allows for
an exact analysis.
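A minimal sketch of this detector and its bootstrapping retraining policy, with illustrative data and radius, might look as follows.

```python
# Minimal sketch of the hypersphere outlier detector: fix a radius R, center
# the sphere at the mean of the training data, classify points inside as
# normal, and retrain only on new points that the current detector classifies
# as normal (the conservative "bootstrapping" policy). Data and R are
# illustrative.
import numpy as np

class HypersphereDetector:
    def __init__(self, training_points, radius):
        self.points = list(training_points)
        self.radius = radius
        self.center = np.mean(self.points, axis=0)

    def is_normal(self, x):
        return np.linalg.norm(np.asarray(x) - self.center) <= self.radius

    def retrain(self, new_points):
        # Bootstrap: only admit points the current detector calls normal.
        admitted = [np.asarray(p) for p in new_points if self.is_normal(p)]
        self.points.extend(admitted)
        self.center = np.mean(self.points, axis=0)
        return len(admitted)

detector = HypersphereDetector(np.random.default_rng(0).normal(size=(50, 2)),
                               radius=3.0)
print(detector.is_normal([0.5, -0.2]), detector.is_normal([10.0, 10.0]))
```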
6.2 Attack Strategy
The attack we analyze involves an adversary determined to
alter our detector to include a specific point G by constructing
data to shift the hypersphere toward the target as the
hypersphere is retrained. We assume the goal G is initially
correctly classified as an anomaly by our algorithm. For instance
, in the IDS domain, the adversary has an intrusion
packet that our detector currently classifies as anomalous.
The adversary wants to change the state of our detector to
misclassify the packet as normal. This scenario is a causative
targeted integrity attack. Before the attack, the hypersphere
is centered at \bar{X}_0 and it has a fixed radius R. The attack is iterated over the course of T > 1 training iterations. At the i-th iteration the mean of the hypersphere is denoted by \bar{X}_i.
We give the adversary complete control: the adversary knows
the algorithm, its feature set, and its current state, and all
points are attack points. At each iteration, the bootstrapping
policy retrains on all points that were classified as normal
in a previous iteration. Under this policy, the adversary's
optimal strategy is straightforward: as depicted in Figure 1(b), the adversary places points at the location where the line between the mean and G intersects with the boundary. This reduces the attack to a single dimension along this line. Suppose that in the i-th iteration, the adversary strategically places \alpha_i points at the i-th optimal location, achieving optimal displacement of the mean toward the adversary's goal, G. The effort of the adversary is measured by M, defined as \sum_{i=1}^{T} \alpha_i.
Placing all attack points in the first iteration is not optimal.
It achieves a finite shift while optimal strategies achieve unbounded
gains. As we discuss below, the attack strategy
must be balanced. The more points placed during an iteration
, the further the hypersphere is displaced on that iteration
. However, the points placed early in the attack effectively
weigh down the hypersphere making it more difficult
to move. The adversary must balance current gain against
future gain. Another tradeoff is the number of rounds of
iteration versus the total effort.
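The following sketch simulates that greedy strategy: at each retraining round the adversary places alpha_i copies of a point where the line from the current centroid to G meets the boundary. Treating the initial centroid as if it were supported by a single point, and the particular schedule of alpha_i, are assumptions made for illustration only.

```python
# Sketch of the causative attack on the bootstrapped hypersphere: each round,
# place alpha_i points where the line from the current centroid toward the
# goal G meets the boundary, dragging the centroid toward G. Treating the
# initial centroid as if supported by one point, and the doubling schedule of
# alpha_i, are illustrative assumptions.
import numpy as np

def run_attack(center, radius, goal, alphas):
    center = np.asarray(center, dtype=float)
    goal = np.asarray(goal, dtype=float)
    n_old = 1.0   # assumed weight of the points already behind the mean
    for a in alphas:
        direction = (goal - center) / np.linalg.norm(goal - center)
        attack_point = center + radius * direction        # on the boundary
        # New mean after admitting a copies of the attack point.
        center = (n_old * center + a * attack_point) / (n_old + a)
        n_old += a
    return center

center = run_attack(center=[0.0, 0.0], radius=1.0, goal=[5.0, 0.0],
                    alphas=[1, 2, 4, 8, 16])
print("centroid after attack:", center)
# With this doubling schedule the centroid advances half a radius per round,
# illustrating the balance between current and future gain described above.
```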
6.3 Optimal Attack Displacement
We calculate the displacement caused by a sequence \{\alpha_i\} of attack points. For T iterations and M total attack points, the function D_{R,T}(\{\alpha_i\}) denotes the relative displacement caused by the attack sequence. The relative displacement is the total displacement over the radius of the hypersphere, \|\bar{X}_T - \bar{X}_0\| / R. Let M_i be defined as \sum_{j=1}^{i} \alpha_j, the cumulative mass. Using these terms, the relative distance is

D_{R,T}(\{M_i\}) = T - \sum_{i=2}^{T} \frac{M_{i-1}}{M_i}    (4)
where we constrain M_1 = 1 and M_T = M [25].

Figure 1: Depictions of the concept of hypersphere outlier detection and the vulnerability of naive approaches. (a) Hypersphere outlier detection: a bounding hypersphere centered at \bar{X} of fixed radius R is used to estimate the empirical support of a distribution, excluding outliers; samples from the "normal" distribution being modeled are shown along with three outliers. (b) Attack on a hypersphere outlier detector: an adversary with knowledge of the state of the outlier detector could shift it toward a first goal G, and it could take several iterations of attacks to shift the hypersphere further to include a second goal G'.
By finding an upper bound to Equation (4), we can bound the minimal effort M^* of the adversary. For a particular M, we desire an optimal sequence \{M_i\} that achieves the maximum relative displacement, D^*_{R,T}(M). If the adversary has no time constraint, the solution is M_i = i, which corresponds to placing a single point at each iteration. However, if the adversary expedites the attack to T < M iterations, the optimal strategy is given by M_i = M^{\frac{i-1}{T-1}}. This value is not always an integer, so we have:

D^*_{R,T}(M) \le T - (T - 1) M^{-\frac{1}{T-1}} < T    (5)
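A short numerical check of Equation (4) under the geometric schedule M_i = M^{(i-1)/(T-1)}, compared against the bound in Equation (5); the values of M and T below are arbitrary.

```python
# Numerical check: evaluate the relative displacement of Equation (4) under
# the geometric schedule M_i = M**((i-1)/(T-1)) and compare it with the bound
# of Equation (5). The schedule meets the bound with equality; M and T are
# illustrative choices.
def displacement(M_seq):
    T = len(M_seq)
    return T - sum(M_seq[i - 1] / M_seq[i] for i in range(1, T))

M, T = 1000, 10
schedule = [M ** (i / (T - 1)) for i in range(T)]     # M_1 = 1, M_T = M
bound = T - (T - 1) * M ** (-1.0 / (T - 1))           # Equation (5)
print(f"D_RT(schedule) = {displacement(schedule):.4f}, bound = {bound:.4f}")
```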
6.4 Bounding the Adversary's Effort
From these results we find a bound on the adversary's effort
M^*. Since M \ge 1 and T > 1, Equation (5) is monotonically increasing in M. If the desired relative displacement to the goal is D_R, the bound in Equation (5) can be inverted to bound the minimal effort M^* required to achieve the goal. Since D_R < T, this bound is given by:

M^* \ge \left( \frac{T - 1}{T - D_R} \right)^{T-1}    (6)
The bound in Equation (6) gives us a worst-case bound on
the adversary's capability when the adversary has complete
control of the learner's training. For large relative displacements D_R > 1, the bound decreases exponentially as the number of iterations is increased. The bound has a limiting value of M^* \ge e^{D_R - 1}. The adversary must trade off between
using a large number of attack points or extending the attack
over many iterations. A tightly-fit hypersphere with
small radius will be more robust since our displacement is
relative to its radius.
An apparent deficiency of this analysis is the weak bound of M^* \ge \delta, for some 0 < \delta \le 1, that occurs when D_R \le 1. This is an important range since the adversary's goal may be near the boundary. The deficiency comes directly from our assumption of complete adversarial control. The lack of initial non-adversarial data allows our adversary to ensure a first step of one radius regardless of M. Therefore, the adversary can reach the objective of D_R \le 1 with any M \ge 1 in a single iteration.
A more complex model could allow for initial data. By considering
an initial N training points that support the hypersphere
before the attack, we can obtain a stronger bound:
M^* \ge N \left[ e^{D_R} - 1 \right]    (7)

This stronger bound ensures that even for small D_R, the
adversary's effort is a multiple of N that increases exponentially
in the desired displacement [25].
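The following snippet illustrates these bounds numerically: Equation (6) for several values of T, its limiting value e^{D_R - 1}, and the stronger bound of Equation (7). The choices of D_R, T, and N are arbitrary.

```python
# Numerical illustration of the adversary-effort bounds: Equation (6), its
# limiting value exp(D_R - 1) as T grows, and the stronger bound of
# Equation (7) when N non-adversarial points support the hypersphere.
# D_R, T, and N below are illustrative choices.
import math

def effort_bound(D_R, T):
    """Equation (6): M* >= ((T - 1) / (T - D_R))**(T - 1), valid for D_R < T."""
    return ((T - 1) / (T - D_R)) ** (T - 1)

D_R = 5.0
for T in (6, 10, 50, 500):
    print(f"T={T:4d}  M* >= {effort_bound(D_R, T):12.1f}")
print(f"limit       M* >= {math.exp(D_R - 1):12.1f}")          # e^(D_R - 1)
print(f"with N=100  M* >= {100 * (math.exp(D_R) - 1):12.1f}")  # Equation (7)
```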
We could extend the model by adding non-adversarial data
at every training iteration, for this corresponds to scenarios
where the adversary only controls part of the data.
CONCLUSIONS
7.1 Related Work
The earliest theoretical work we know of that approaches
learning in the presence of an adversary was done by Kearns
and Li [15]. They worked in the context of Valiant's Probably
Approximately Correct (PAC) learning framework [35,
36], extending it to prove bounds for maliciously chosen errors
in the training data. Specifically, they proved that if
the learner is to perform correctly, in general the fraction
of training points controlled by the adversary must be less
than \epsilon/(1 + \epsilon), where \epsilon is the desired bound on classification
errors by the learner [4, 6, 30].
Results from game theory may be relevant to adversarial
learning systems.
In particular, deception games involve
players that have partial information and influence the information
seen by other players. Some of these games involve
continuous variables generated by various probability distributions
[5, 9, 17, 29, 32], while others apply to scenarios
with discrete states [14]. This work and adversarial learning
both ask many of the same questions, and they both address
the same underlying issues. Integration of game theoretic
concepts is a promising direction for work in this area.
Dalvi et al. examine the learn-adapt-relearn cycle from a
game-theoretic point of view [8]. In their model, the learner
has a cost for measuring each feature of the data and the
adversary has a cost for changing each feature in attack
points. If the adversary and learner have complete information
about each other and we accept some other assumptions
, they find an optimal strategy for the learner to defend
against the adversary's adaptations.
Research has also begun to examine the vulnerability of
learners to reverse engineering. Lowd and Meek introduce a
novel learning problem for adversarial classifier reverse engineering
in which an adversary conducts an attack that minimizes
a cost function [21]. Under their framework, Lowd
and Meek construct algorithms for reverse engineering linear
classifiers. Moreover, they build an attack to reverse
engineer spam filters [22].
Although they are not machine learning systems, publicly
verifiable digital watermarks also must deal with sensitivity
(probing) attacks. An information theoretic analysis of
the sensitivity attack quantifies the amount of information
revealed per probe.
Randomization of thresholds within
the watermark verification algorithm increase the number
of probes necessary to remove a digital watermark [19].
An interesting junction of learning and game theory has
dealt with combining advice from a set of experts to predict
a sequence with the goal of doing at least as well as the best
expert in all possible sequences [7, 13, 37]. In this domain,
adaptive weighting schemes are used to combine the experts, each assessed by how well it performs compared to the
best expert for an adversarially chosen sequence. Amongst
these schemes are the Aggregating Algorithm [37] and the
Weighted Majority Algorithm [20].
There has also been work on attacking statistical spam filters
. Wittel and Wu [39] discuss the possibility of crafting
attacks designed to take advantage of the statistical nature
of such spam filters, and they implement a simple attack.
John Graham-Cumming describes implementing an attack
he calls "Bayes vs. Bayes," in which the adversary trains
a second statistical spam filter based on feedback from the
filter under attack and then uses the second filter to find
words that make spam messages undetectable by the original
filter [10].
Methods exist to perform exact learning of a concept using
answers to a series of queries. These queries return a counterexample
when a "no" response is generated. In many
scenarios, it has been shown that learning is possible even
in the worst case [2].
Control theory has been proposed as an alternative to game
theory and search oriented expert-systems for military command
and control systems [12]. The motivation behind this
proposal is the difficulty associated with modeling (or even
predicting) the goals of a military adversary.
7.2 Research Directions
Can machine learning be secure?
Does adding machine
learning to a system introduce vulnerability? This paper
proposes a framework for understanding these questions.
We present a model for describing attacks against learning
algorithms, and we analyze a simple attack in detail.
We discuss potential defenses against attacks and speculate
about their effectiveness.
Here we lay out the directions for research that we see as
most promising. To evaluate and ensure the security of machine
learning, these are among the most important areas
that must be addressed:
Information
How crucial is it to keep information secret from an
adversary? If an adversary has full knowledge of the
system, are all the exploratory attacks trivial? If the
adversary has no knowledge about the system, which
attacks are still possible?
Arms race
Can we avoid arms races in online learning systems?
Arms races have occurred in spam filters. Can game
theory suggest a strategy for secure re-training?
Quantitative measurement
Can we measure the effects of attacks? Such information
would allow comparison of the security performance
of learning algorithms. We could calculate
risk based on probability and damage assessments of
attacks.
Security proofs
Can we bound the amount of information leaked by the
learner? If so, we can bound the accuracy of the adversary's
approximation of the learner's current state.
Detecting adversaries
Attacks introduce potentially detectable side effects
such as drift, unusual patterns in the data observed by
the learner, etc. These attacks are more pronounced
in online learning. When do these side effects reveal
the adversary's attack?
ACKNOWLEDGMENTS
Thanks to Michael I. Jordan, Peter Bartlett and David Molnar for their insightful discussions and comments regarding
this work. We gratefully acknowledge support from the National
Science Foundation and the Homeland Security Advanced
Research Projects Agency. The views expressed here
are solely those of the authors and do not necessarily reflect
the views of the funding agencies or any agency of the U.S.
government.
REFERENCES
[1] I. Androutsopoulos, J. Koutsias, K. V. Chandrinos, G. Paliouras, and C. D. Spyropolous. An evaluation of naive Bayesian anti-spam filtering. Proceedings of the Workshop on Machine Learning in the New Information Age, pages 9-17, 2000.
[2] D. Angluin. Queries and concept learning. Machine Learning, 2(4):319-342, Apr. 1988.
[3] Apache, http://spamassassin.apache.org/. SpamAssassin.
[4] P. Auer. Learning nested differences in the presence of malicious noise. Theoretical Computer Science, 185(1):159-175, 1997.
[5] V. J. Baston and F. Bostock. Deception games. International Journal of Game Theory, 17(2):129-134, 1988.
[6] N. H. Bshouty, N. Eiron, and E. Kushilevitz. PAC learning with nasty noise. Theoretical Computer Science, 288(2):255-275, 2002.
[7] N. Cesa-Bianchi, Y. Freund, D. P. Helmbold, D. Haussler, R. E. Schapire, and M. K. Warmuth. How to use expert advice. Journal of the ACM, 44(3):427-485, May 1997.
[8] N. Dalvi, P. Domingos, Mausam, S. Sanghai, and D. Verma. Adversarial classification. In Proceedings of the Tenth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 99-108, Seattle, WA, 2004. ACM Press.
[9] B. Fristedt. The deceptive number changing game in the absence of symmetry. International Journal of Game Theory, 26:183-191, 1997.
[10] J. Graham-Cumming. How to beat an adaptive spam filter. Presentation at the MIT Spam Conference, Jan. 2004.
[11] T. Hastie, R. Tibshirani, and J. Friedman. The Elements of Statistical Learning: Data Mining, Inference and Prediction. Springer, 2003.
[12] S. A. Heise and H. S. Morse. The DARPA JFACC program: Modeling and control of military operations. In Proceedings of the 39th IEEE Conference on Decision and Control, pages 2551-2555. IEEE, 2000.
[13] M. Herbster and M. K. Warmuth. Tracking the best expert. Machine Learning, 32(2):151-178, Aug. 1998.
[14] J. P. Hespanha, Y. S. Ateskan, and H. H. Kizilocak. Deception in non-cooperative games with partial information. In Proceedings of the 2nd DARPA-JFACC Symposium on Advances in Enterprise Control, 2000.
[15] M. Kearns and M. Li. Learning in the presence of malicious errors. SIAM Journal on Computing, 22:807-837, 1993.
[16] A. Lazarevic, L. Ertöz, V. Kumar, A. Ozgur, and J. Srivastava. A comparative study of anomaly detection schemes in network intrusion detection. In D. Barbará and C. Kamath, editors, Proceedings of the Third SIAM International Conference on Data Mining, May 2003.
[17] K.-T. Lee. On a deception game with three boxes. International Journal of Game Theory, 22:89-95, 1993.
[18] Y. Liao and V. R. Vemuri. Using text categorization techniques for intrusion detection. In Proceedings of the 11th USENIX Security Symposium, pages 51-59, Aug. 2002.
[19] J.-P. M. Linnartz and M. van Dijk. Analysis of the sensitivity attack against electronic watermarks in images. In D. Aucsmith, editor, Information Hiding '98, pages 258-272. Springer-Verlag, 1998.
[20] N. Littlestone and M. K. Warmuth. The weighted majority algorithm. Information and Computation, 108(2):212-261, 1994.
[21] D. Lowd and C. Meek. Adversarial learning. In Proceedings of the Eleventh ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 641-647, 2005.
[22] D. Lowd and C. Meek. Good word attacks on statistical spam filters. In Proceedings of the Second Conference on Email and Anti-Spam (CEAS), 2005.
[23] M. V. Mahoney and P. K. Chan. Learning nonstationary models of normal network traffic for detecting novel attacks. In Proceedings of the Eighth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 376-385, 2002.
[24] S. Mukkamala, G. Janoski, and A. Sung. Intrusion detection using neural networks and support vector machines. In Proceedings of the International Joint Conference on Neural Networks (IJCNN'02), pages 1702-1707, 2002.
[25] B. Nelson. Designing, Implementing, and Analyzing a System for Virus Detection. Master's thesis, University of California at Berkeley, Dec. 2005.
[26] V. Paxson. Bro: A system for detecting network intruders in real-time. Computer Networks, 31(23):2435-2463, Dec. 1999.
[27] N. Provos. A virtual honeypot framework. In Proceedings of the 13th USENIX Security Symposium, 2004.
[28] R. Raina, A. Y. Ng, and D. Koller. Transfer learning by constructing informative priors. In Neural Information Processing Systems Workshop on Inductive Transfer: 10 Years Later, 2005.
[29] M. Sakaguchi. Effect of correlation in a simple deception game. Mathematica Japonica, 35(3):527-536, 1990.
[30] R. A. Servedio. Smooth boosting and learning with malicious noise. Journal of Machine Learning Research (JMLR), 4:633-648, Sept. 2003.
[31] J. Shawe-Taylor and N. Cristianini. Kernel Methods for Pattern Analysis. Cambridge University Press, 2004.
[32] J. Spencer. A deception game. American Math Monthly, 80:416-417, 1973.
[33] S. J. Stolfo, S. Hershkop, K. Wang, O. Nimeskern, and C. W. Hu. A behavior-based approach to secure email systems. In Mathematical Methods, Models and Architectures for Computer Networks Security, 2003.
[34] S. J. Stolfo, W. J. Li, S. Hershkop, K. Wang, C. W. Hu, and O. Nimeskern. Detecting viral propagations using email behavior profiles. In ACM Transactions on Internet Technology, 2004.
[35] L. G. Valiant. A theory of the learnable. Communications of the ACM, 27(11):1134-1142, Nov. 1984.
[36] L. G. Valiant. Learning disjunctions of conjunctions. In Proceedings of the 9th International Joint Conference on Artificial Intelligence, pages 560-566, 1985.
[37] V. Vovk. Aggregating strategies. In M. Fulk and J. Case, editors, Proceedings of the 7th Annual Workshop on Computational Learning Theory, pages 371-383, San Mateo, CA, 1990. Morgan-Kaufmann.
[38] L. Wehenkel. Machine learning approaches to power system security assessment. IEEE Intelligent Systems and Their Applications, 12(5):60-72, Sept.-Oct. 1997.
[39] G. L. Wittel and S. F. Wu. On attacking statistical spam filters. In Proceedings of the First Conference on Email and Anti-Spam (CEAS), 2004.
[40] W. Xu, P. Bodik, and D. Patterson. A flexible architecture for statistical learning and data mining from system log streams. In Temporal Data Mining: Algorithms, Theory and Applications, Brighton, UK, Nov. 2004. The Fourth IEEE International Conference on Data Mining.
[41] D.-Y. Yeung and C. Chow. Parzen-window network intrusion detectors. In Proceedings of the Sixteenth International Conference on Pattern Recognition, pages 385-388, Aug. 2002.
[42] K. Yu and V. Tresp. Learning to learn and collaborative filtering. In Neural Information Processing Systems Workshop on Inductive Transfer: 10 Years Later, 2005.
| Indiscriminate attack;Targeted attack;Statistical Learning;Exploratory attack;Machine learning;Security Metrics;Integrity;Availability;Spam Filters;Causative attack;Intrusion Detection;Machine Learning;Intrusion detection system;Computer Security;Game Theory;Adversarial Learning;Learning algorithms;Computer Networks;Security |
52 | Catenaccio: Interactive Information Retrieval System through Drawing | The Catenaccio system integrates information retrieval with sketch manipulations. The system is designed especially for pen-based computing and allows users to retrieve information by simple pen manipulations such as drawing a picture. When a user draws a circle and writes a keyword, information nodes related to the keyword are collected automatically inside the circle. In addition, the user can create a Venn diagram by repeatedly drawing circles and keywords to form more complex queries. Thus, the user can retrieve information both interactively and visually without complex manipulations. Moreover, the sketch interaction is so simple that it is possible to combine it with other types of data such as images and real-world information for information retrieval. In this paper, we describe our Catenaccio system and how it can be effectively applied. | INTRODUCTION
Pen-based computers, such as personal digital assistants (PDAs) and tablet PCs, have been developed. These computers are
characterized by simple sketch interfaces similar to drawing a
picture on paper in the real world. This drawing manipulation is
not especially useful for communicating details, but is effective
for general use. It is especially useful for creative activities, so
there have been a number of research reports on improving sketch
manipulation [1, 2, 3].
In addition, some game devices (e.g., Nintendo DS [4]) support
such kinds of interactions and provide many types of game
content. In these systems, a user can use the entire system window
as a workspace and create 3D CG from 2D drawings. However, as
the original applications may not support information retrieval,
the user has to use conventional retrieval applications along with
pen-based input styles.
Considerable research has been done to support the use of
information visualization for retrieving information [5]. Technical
visualization methods such as zooming and scaling can be used to
effectively display huge amounts of data [6, 7, 8]. However,
existing visualization systems focus on mouse manipulation (e.g.,
click and drag), so they are not effectively designed for pen-based
interactions such as a drawing.
The most popular method of retrieving information is no doubt keyword searching. Web search engines (e.g., Google and Yahoo) are generally used for keyword searching [12, 13], and people feel that they cannot live without such search engines.
Generally, keyword searching requires users to input one or more
keywords. In these systems, users can retrieve information related
to the keywords with Boolean operations (e.g., AND, OR and
NOT). However, the systems are based on conventional input
methods. Users of pen-based computers have to write a query into
a fixed dialog box with a stylus or pen.
Therefore, we have been developing an information retrieval
system based on simple sketch manipulations. Our goal is to
devise an effective and simple information retrieval system that
works on pen-based computers, so we integrated a keyword
searching that is one of the most usual methods with sketch
manipulation that is one of the simple interactions. In our system,
users retrieve information by drawing a Venn diagram instead of
inputting keywords to a dialog box. Because the Venn diagram
can be used to display Boolean operations (e.g., AND, OR, and
NOT) visually and create some relationships at the same time,
users can recognize the relationships at a glance. Moreover, the
system allows users to use other types of data as elements in a
Venn diagram (Fig. 1).
In this paper, we describe our Catenaccio system that integrates
information retrieval with sketch manipulations, and explain how
it can be effectively applied for information retrieval.
Figure 1: Venn diagram: Venn diagram can be used to display
Boolean operations and some relationships at the same time
(top). The user can create an original Venn diagram (bottom).
Figure 2. Basic manipulation: A user of the Catenaccio system
draws a circle and writes a keyword inside that circle.
Information nodes related to the keyword are then collected
within the circled area.
RELATED WORK
A wide variety of information visualization systems are used for
information retrieval [5]. Treating information as visualized nodes
(e.g., images and simple shapes) allows users to interact with the
information space visually. Moreover, several techniques (e.g.,
scaling, zooming, focus and context) are used to display a huge
amount of information more effectively [6, 7, 8, 19]. In particular, the spring model [11] provides a useful way to recognize the relationships between nodes. In these systems, related nodes move
when the user clicks and drags a node. That is, node positions are
dynamically changed through the user manipulations, so users
retrieve information interactively. InfoCrystal [9] is also a visual
tool focused on information retrieval. The system uses Venn
diagrams to treat huge amounts of information effectively.
However, conventional systems are designed for mouse
interactions (e.g., click and drag) and their layouts are predefined,
so they are not suitable for Pen-based computing, especially
drawing or writing by hand.
There are also several sketch interfaces focusing on pen-based
computing [1, 2, 3]. Most of them enhance drawing manipulations
and focus on 3D creations performed with 2D manipulation.
Characteristically, the manipulations required for these systems
are simple and are similar to drawing a stroke on a piece of paper
with a pen. Sketch [1] users can draw 3D curves by performing
2D manipulations. This system calculates a 3D curve by
combining a 2D stroke and a shadow stroke. Users of Harold [2]
and Tolba [3] can create flat models in a 3D space by using
sketch-based manipulation, effectively creating a 2.5D scene in a
3D space.
Figure 3. Drawing a Venn diagram: By repeatedly drawing
circles and keywords, users create Venn diagrams, and can
then retrieve information by forming complex queries.
Information nodes related with both "CG" and "3D" are
collected.
SYSTEM OVERVIEW
The Catenaccio system is focused on pen-based computing and provides an interactive and visual information retrieval environment based on drawing manipulations.
3.1 Drawing Circles and Writing Keywords
A user of the Catenaccio system draws a circle and writes a
keyword inside that circle. The system automatically recognizes
both the circle area and the keyword. Information nodes related to
the keyword are then collected within the circled area. By making
a continuous series of simple drawings, the user can create a Venn
diagram to form a more complex query. Since the entire window
is both a search area and a drawing canvas, the user can use the
workspace freely.
Since all the manipulations required for information retrieval are
based on sketch manipulations, users can design an original Venn diagram related to their interests. Thus, users can freely exploit the
whole application window as both an input and a search area and
retrieve information without complex GUIs.
The circle provides an area where information nodes related to the
keyword will be collected, and the keyword provides a query to
search for related information nodes from a database. By
continuing to use simple drawings, users can form more complex
queries. The related information nodes are moved with a force
that depends on the distance between the node position and the
center of the circle (Fig. 2).
Figure 3 shows an example of creating a Venn diagram by
continuing to draw circles and keywords. In the example, when a
user retrieves information that has two keywords "CG" and "3D",
the user first draws a circle and writes "CG" (Fig. 3 (1)), and then
draws another circle and writes "3D" (Fig. 3 (2)). Information
nodes related to both keywords appear in the shared area of the
Venn diagram (Fig. 3 (3 and 4)). Moreover, in the Venn diagram,
the user can view four areas at a glance (Fig. 3 (3)).
Figure 4: Venn diagram of a drawing and an image, and Venn
diagram of a drawing and an image that contains character
information.
Figure 5: Combination with real-world information:
Capturing real-world information as a picture (1, 2) and
drawing a circle around the keyword brings up related
information nodes (3, 4).
3.2 Combination with Other Types of Data
Users can now easily take pictures using digital cameras and cell
phones that contain CCD cameras. As a result, they may have a
huge amount of original image data in their computers. These data
include some information such as name, time, or place, so we
considered using them for information retrieval.
We have developed prototype applications to explore the potential
of Catenaccio. A Venn diagram is basically constructed by
combining keywords and areas, so it is possible to combine that
diagram with other types of data such as images and real-world
information. Images contain name, time, or place information, and
that information becomes a good trigger for retrieving other
information and it can be used for queries. In addition, the image
data has a rectangular shape that is useful for setting an area by
controlling its position and size.
The example in Figure 4 shows how users can use images to
create Venn diagrams by combining drawings with image data. In
the example, a file named "Mr. Tobita" becomes a query for the
Venn diagram, so information related to "VR" AND "Mr.
Tobita" is collected. In this case, even if users have forgotten
someone's name, they can still retrieve related information
through image contents.
Figure 5 shows an example of a Venn diagram with real-world information. Generally, using captured data for an interaction
trigger is a common technique in AR systems [20]. We also use it
for elements of Venn diagrams. Users first capture real-world
information through digital cameras attached to their computers
(Fig. 5 (1, 2)), and then draw circles around keywords on the
captured data and another circle to collect information nodes (Fig.
5 (3)). As the system recognizes the keyword inside the first circle,
related information nodes appear inside the second curve (Fig. 5
(4)).
Figure 6: Recognition of user drawings: The system labels the
user drawing area (1, 2). The system recognizes written
keywords by using an OCR library (3)
Previously, we proposed a similar information retrieval system [10]. That system provides natural interactions; however, it depends entirely on real-world objects. In contrast, Catenaccio supports not only real-world information but also drawing manipulations. Thus, the user can retrieve information
even if there are not enough real-world objects.
IMPLEMENTATION
The entire workspace is bitmapped as in conventional 2D paint
systems, and user drawing manipulations are reflected in the
bitmap. The system supports two types of drawing, writing
keywords and area drawings. The keyword writing is displayed as
green, and the area drawing is displayed as blue. To set an area,
the system labels the inside of a green area and knows the size
and position of the area (Fig. 6 (1)). Then, the system divides the
area into four layers for node animations (Fig. 6(2)). For keyword
writing, the system sends the result to an OCR library to search
for its meaning (Fig. 6 (3)).
Catenaccio is currently a prototype system, and the relationships between keywords and information nodes are predefined in a temporary database. The database contains three types of data: node names, keywords, and relationship levels. After the image recognition processes, nodes related to the keyword are selected and start moving until they reach the area appropriate to their relationship. The force is calculated with the spring model [11]. For
example, the node with the strongest relationship with the
keyword receives a force that takes it to the deepest area.
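The node animation might be sketched roughly as below. This is purely illustrative C, not the authors' implementation; the stiffness constant and the mapping from relationship level to target ring are assumptions based only on the description above.

```c
#include <math.h>

/* One animation step for a node attracted towards the centre of a
 * drawn circle: a stronger keyword relationship (higher level) gives
 * a smaller rest length, so the node settles in a deeper layer.    */
typedef struct { float x, y; } point_t;

static void spring_step(point_t *node, point_t centre,
                        float radius, int level /* 1..4 */) {
    float rest = radius * (1.0f - 0.25f * (float)level); /* target ring */
    float dx = centre.x - node->x, dy = centre.y - node->y;
    float dist = sqrtf(dx * dx + dy * dy);
    if (dist < 1e-6f) return;
    float k = 0.1f;                          /* assumed stiffness       */
    float force = k * (dist - rest);         /* Hooke's law             */
    node->x += force * dx / dist;
    node->y += force * dy / dist;
}
```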
DISCUSSION
We have had some opportunities to demonstrate our system. Here,
we discuss user interactions with Catenaccio based on comments
made by visitors to our demonstrations. Also we consider the
limitations of the system and our plans for future work.
From our demonstrations, the visitors quickly understood the
concepts of our system that integrates information retrieval with
drawing manipulations. Using Venn diagrams makes recognizing
the relationships between information nodes and keywords easy.
Most visitors could create simple Venn diagrams and set related
nodes into the diagrams after watching a simple demonstration.
We observed that some users drew interesting Venn diagrams that
resembled pictures. The system facilitates creative activities, so
we expect users will be able to create more original, and
increasingly effective drawings for information retrieval.
In particular, we received good reactions from users regarding the combination of a drawing and an image to create a Venn diagram.
By exploiting such combinations, the system augments keyword
searching, and it is different from conventional search engines [12,
13]. Moreover, combining keywords and user-drawn pictures to
create Venn diagrams is possible.
Our system focuses on information retrieval for pen-based input.
However, information retrieval using Venn diagrams is quite
rough. We plan to combine our system with other types of sketch-based
systems such as VelvetPath [16] to support more detailed
interaction. With such a combination, users can use Catenaccio
for general retrieval of information and then use VelvetPath to
examine the information in more detail. In this case, all the
manipulations would still be based on drawing or handwriting, so
a user can handle a large amount of data in a natural way.
Drawing manipulation is also useful for finger gestures.
Many AR systems support the use of finger gestures as an input
method [17, 18]. As the system recognizes user finger gestures,
users can create Venn diagrams by manipulating real-world
objects, drawing circles, and writing keywords.
CONCLUSION
We described the Catenaccio system that is focused on Pen-based
computing and allows users to retrieve information by drawing
Venn diagrams. The system recognizes user writing and drawings
(keywords and circles) and places information related to the
keywords inside the circles. Using this input, the system provides
an interactive and visual information retrieval method. We
described some examples of retrieving information through
simple drawings. We also provided several examples of unique
Venn diagrams created by combining drawings with images and
real-world information.
ACKNOWLEDGMENTS
We thank Tetsuji Takada and Sinji Daigo for their valuable and useful suggestions on this work.
REFERENCES
[1] R. C. Zeleznik, K. P. Herndon, and J. F. Hughes. An Interface for Sketching 3D Curves. In Proceedings of ACM SIGGRAPH '96, pp. 163-170, 1996.
[2] J. M. Cohen, J. F. Hughes, and R. C. Zeleznik. Harold: A World Made of Drawings. In Proceedings of NPAR2000 (Symposium on Non-Photorealistic Animation and Rendering), pp. 83-90, 2000.
[3] O. Tolba, J. Doresey, and L. McMillan. Sketching with Projective 2D Strokes. In Proceedings of ACM UIST '99, pp. 149-157, 1999.
[4] Nintendo DS: http://www.nintendo.co.jp/ds/
[5] S. K. Card, J. D. MacKinlay, and B. Shneiderman. Readings in Information Visualization: Using Vision to Think. Morgan Kaufmann, 1999.
[6] H. Koike. Fractal views: a fractal-based method for controlling information display. In Proceedings of ACM Transactions on Information Systems, Vol. 13, No. 3, pp. 305-323, July 1995.
[7] G. W. Furnas. Generalized fisheye views. In Proceedings of the ACM Transactions on Computer-Human Interaction, Vol. 1, No. 2, pp. 126-160, 1994.
[8] B. B. Bederson, J. D. Hollan, K. Perlin, J. Meyer, D. Bacon, and G. Furnas. Pad++: A Zoomable Graphical Sketchpad for Exploring Alternate Interface Physics. Journal of Visual Languages and Computing, Vol. 7, No. 1, pp. 3-31, 1996.
[9] A. Spoerri. Visual tools for information retrieval. In Proceedings of VL'93, pp. 160-168, 1993.
[10] H. Koike, Y. Sato, Y. Kobayashi, H. Tobita and M. Kobayashi. Interactive Textbook and Interactive Venn Diagram. In Proceedings of ACM CHI2000, pp. 121-128, 2000.
[11] R. Davidson and D. Harel. Drawing Graphics Nicely Using Simulated Annealing. In Proceedings of ACM Transactions on Graphics, Vol. 15, No. 4, pp. 301-331, 1996.
[12] Google: http://www.google.com
[13] Yahoo: http://www.yahoo.com
[14] T. Calishain and R. Dornfest. GoogleHack: 100 Industrial-Strength Tips & Tricks. O'Reilly, 2003.
[15] P. Bausch. AmazonHack: 100 Industrial-Strength Tips & Tools. O'Reilly, 2003.
[16] H. Tobita. VelvetPath: Layout Design System with Sketch and Paint Manipulations. In Proceedings of EUROGRAPHICS2003 Short Presentations, pp. 137-144, 2003.
[17] J. Rekimoto. SmartSkin: An Infrastructure for Freehand Manipulation on Interactive Surfaces. In Proceedings of ACM CHI2002, pp. 113-120, 2002.
[18] X. Chen, H. Koike, Y. Nakanishi, K. Oka, and Y. Sato. Two-handed drawing on augmented desk system. In Proceedings of AVI 2002, 2002.
[19] E. Orimo and H. Koike. ZASH: A browsing system for multi-dimensional data. In Proceedings of IEEE VL '99, pp. 266-286, 1999.
[20] J. Rekimoto and K. Nagao. The world through the computer: Computer augmented interaction with real world environments. In Proceedings of ACM UIST'95, pp. 29-36, 1995.
53 | Compression of Inverted Indexes For Fast Query Evaluation | Compression reduces both the size of indexes and the time needed to evaluate queries. In this paper, we revisit the compression of inverted lists of document postings that store the position and frequency of indexed terms, considering two approaches to improving retrieval efficiency: better implementation and better choice of integer compression schemes. First, we propose several simple optimisations to well-known integer compression schemes, and show experimentally that these lead to significant reductions in time. Second, we explore the impact of choice of compression scheme on retrieval efficiency. In experiments on large collections of data, we show two surprising results: use of simple byte-aligned codes halves the query evaluation time compared to the most compact Golomb-Rice bitwise compression schemes; and, even when an index fits entirely in memory, byte-aligned codes result in faster query evaluation than does an uncompressed index, emphasising that the cost of transferring data from memory to the CPU cache is less for an appropriately compressed index than for an uncompressed index. Moreover, byte-aligned schemes have only a modest space overhead: the most compact schemes result in indexes that are around 10% of the size of the collection, while a byte-aligned scheme is around 13%. We conclude that fast byte-aligned codes should be used to store integers in inverted lists. | INTRODUCTION
Search engines have demanding performance requirements.
Users expect fast answers to queries, many queries must be
processed per second, and the quantity of data that must
be searched in response to each query is staggering. The
demands continue to grow: the Google search engine, for example, indexed around one billion documents a year ago and now manages more than double that figure (see http://www.google.com/). Moreover,
the increasing availability and affordability of large storage
devices suggests that the amount of data stored online will
continue to grow.
Inverted indexes are used to evaluate queries in all practical
search engines [14]. Compression of these indexes has
three major benefits for performance. First, a compressed
index requires less storage space. Second, compressed data
makes better use of the available communication bandwidth;
more information can be transferred per second than when the data is uncompressed. For fast decompression schemes, the total time cost of transferring compressed data and subsequently decompressing is potentially much less than the cost of transferring uncompressed data. Third, compression
increases the likelihood that the part of the index required
to evaluate a query is already cached in memory, thus entirely
avoiding a disk access. Thus index compression can
reduce costs in retrieval systems.
We have found that an uncompressed inverted index that
stores the location of the indexed words in web documents
typically consumes more than 30% of the space required
to store the uncompressed collection of documents. (Web
documents often include a great deal of information that is
not indexed, such as HTML tags; in the TREC web data,
which we use in our experiments, on average around half
of each document is indexable text.) When the index is
compressed, the index size is reduced to between 10% and 15%
of that required to store the uncompressed collection; this
size includes document numbers, in-document frequencies,
and word positions within documents. If the index is too
large to fit entirely within main memory, then querying the
uncompressed index is slower: as we show later, it is up to
twice as slow as the fastest compressed scheme.
In this paper, we revisit compression schemes for the inverted list component of inverted indexes. We also propose
a new method for decoding lists. There have been a great
many reports of experiments on compression of indexes with
bitwise compression schemes [6, 8, 12, 14, 15], which use an
integral number of bits to represent each integer, usually
with no restriction on the alignment of the integers to byte
or machine-word boundaries. We consider several aspects
of these schemes: how to decode bitwise representations of
integers efficiently; how to minimise the operations required
for the most compact scheme, Golomb coding; and the relative
performance of Elias gamma coding, Elias delta coding,
Golomb coding, and Rice coding for storing indexes.
We question whether bitwise compression schemes are the
best choice for storing lists of integers. As an alternative,
we consider bytewise integer compression schemes, which
require that each integer is stored in an integral number of
blocks, where each block is eight bits. The length of each
stored integer can therefore be measured in an exact number
of bytes. An additional restriction is to require that these
eight-bit blocks must align to machine-word or byte boundaries. We propose and experimentally investigate several
variations of bytewise schemes.
We investigate the performance of different index compression
schemes through experiments on large query sets
and collections of Web documents. We report two surprising
results.
For a 20 gigabyte collection, where the index is several
times larger than main memory, optimised bytewise
schemes more than halve the average decoding time
compared to the fastest bitwise approach.
For a much smaller collection, where the index fits in
main memory, a bytewise compressed index can still
be processed faster than an uncompressed index.
These results show that effective use of communication bandwidths
is important for not only disk-to-memory transfers
but also memory-to-cache transfers. The only disadvantage
of bytewise compressed indexes is that they are up to 30%
larger than bitwise compressed indexes; the smallest bitwise
index is around 10% of the uncompressed collection size,
while the bytewise index is around 13%.
INVERTED INDEXES
An inverted index consists of two major components: the vocabulary of terms--for example the words--from the collection, and inverted lists, which are vectors that contain
information about the occurrence of the terms [14].
In a basic implementation, for each term t there is an inverted list that contains postings <f_{d,t}, d> where f_{d,t} is the frequency f of term t in the ordinal document d. One posting is stored in the list for each document that contains the term t. Inverted lists of this form--along with additional statistics such as the document length l_d, and f_t, the number of documents that contain the term t--are sufficient to support ranked and Boolean query modes.
To support phrase querying or proximity querying, additional
information must be kept in the inverted lists. Thus
inverted list postings should be of the form
<f_{d,t}, d, [o_{0,d,t} . . . o_{f_{d,t},d,t}]>
The additional information is the list of offsets o; one offset is stored for each of the f_{d,t} occurrences of term t in document d. Postings in inverted lists are usually ordered
by increasing d, and the offsets likewise ordered within the
postings by increasing o. This has the benefit that differences
between values--rather than the raw values--can be
stored, improving the compressibility of the lists.
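As a concrete illustration of this layout, the sketch below shows one way a decoded posting might be held in memory, and how ascending document numbers or offsets would be turned into differences before coding. The type and function names are ours, not taken from the paper.

```c
#include <stdint.h>

/* One decoded posting: <f_{d,t}, d, [o_1 ... o_{f_{d,t}}]>. */
typedef struct {
    uint32_t d;       /* ordinal document number        */
    uint32_t f_dt;    /* occurrences of the term in d   */
    uint32_t *o;      /* f_dt word positions, ascending */
} posting_t;

/* Replace ascending values with differences ("gaps") so that the
 * integers to be compressed are small; the first value is kept. */
static void to_gaps(uint32_t *v, uint32_t n) {
    for (uint32_t i = n - 1; i > 0; i--)
        v[i] -= v[i - 1];
}
```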
Other arrangements of the postings in lists are useful when
lists are not necessarily completely processed in response to
a query. For example, in frequency-sorted indexes [9, 10]
postings are ordered by f_{d,t}, and in impact-ordered indexes
the postings are ordered by quantised weights [1]. These approaches
also rely on compression to help achieve efficiency
gains, and the improvements to compression performance
we describe in this paper are as applicable to these methods
as they are to the simple index representations we use as a
testbed for our compression methods.
Consider an example inverted list with offsets for the term
"Matthew":
< 3, 7, [6, 51, 117] > < 1, 44, [12] > < 2, 117, [14, 1077] >
In this index, the terms are words, the offsets are word positions
within the documents, and the lists are ordered by d.
This inverted list states that the term "Matthew" occurs 3
times in document 7, at offsets 6, 51, and 117. It also occurs
once in document 44 at offset 12, and twice in document 117,
at offsets 14 and 1077.
Ranked queries can be answered using the inverted index
as follows. First, the terms in the user's query are located
in the inverted index vocabulary. Second, the corresponding
inverted lists for each term are retrieved from disk, and
then processed by decreasing f_t. Third, for each posting in each inverted list, an accumulator weight A_d is increased; the magnitude of the increase is dependent on the similarity measure used, and can consider the weight w_{q,t} of term t in the query q, the weight w_{d,t} of the term t in the document d, and other factors. Fourth, after processing part [1,
6] or all of the lists, the accumulator scores are partially
sorted to identify the most similar documents. Last, for a
typical search engine, document summaries of the top ten
documents are generated or retrieved and shown to the user.
The offsets stored in each inverted list posting are not used
in ranked query processing.
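A minimal sketch of the accumulator update in step three is given below; the contribution function is only a placeholder, since the paper does not commit to a particular similarity measure, and the array layout is our assumption.

```c
#include <stdint.h>

/* Placeholder similarity contribution; a real measure would combine
 * w_{q,t}, w_{d,t}, the document length l_d, and collection statistics. */
static float contribution(uint32_t f_dt, uint32_t f_t) {
    return (float)f_dt / (float)f_t;
}

/* Add one decoded inverted list (document numbers d[] and frequencies
 * f_dt[]) into the accumulator array A. Offsets are never touched:
 * ranked queries do not need them.                                    */
void accumulate(float *A, const uint32_t *d, const uint32_t *f_dt,
                uint32_t n, uint32_t f_t) {
    for (uint32_t i = 0; i < n; i++)
        A[d[i]] += contribution(f_dt[i], f_t);
}
```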
Phrase queries require offsets and that a given sequence of
words be contiguous in a matching document. For example,
consider a combined ranked and phrase query:
"Matthew Richardson" Richmond
To evaluate such a query, the same first two steps as for
ranked querying are applied. Then, instead of accumulating
weights, it is necessary to construct a temporary inverted list
for the phrase, by fetching the inverted list of each of the
individual terms and combining them. If the inverted list for
"Matthew" is as above and the inverted list for "Richardson"
is
< 1, 7, [52] > < 2, 12, [1, 4] > < 1, 44, [83] >
then both words occur in document 7 and as an ordered
pair. Only the word "Richardson" is in document 12, both
words occur in document 44 but not as a pair, and only
"Matthew" occurs in document 117. The list for "Matthew
Richardson" is therefore
< 1, 7, [51] >
After this, the ranking process is continued from the third
step, where the list for the term "Richmond" and the newly
created list are used to adjust accumulator weights. Phrase
queries can involve more than two words.
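The combining step can be sketched as follows. This illustrative C fragment (names ours) shows only the offset-adjacency test for one document that appears in both lists; the document-level merge uses the same two-pointer pattern over sorted document numbers.

```c
#include <stdint.h>

/* For two sorted offset arrays a (first word) and b (second word),
 * count positions where the second word directly follows the first;
 * these positions form the phrase's temporary inverted list entry. */
static uint32_t adjacent(const uint32_t *a, uint32_t na,
                         const uint32_t *b, uint32_t nb) {
    uint32_t i = 0, j = 0, matches = 0;
    while (i < na && j < nb) {
        if (a[i] + 1 == b[j]) { matches++; i++; j++; }
        else if (a[i] + 1 < b[j]) i++;
        else j++;
    }
    return matches;
}
```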
COMPRESSING INVERTED INDEXES
Special-purpose integer compression schemes offer both
fast decoding and compact storage of inverted lists [13, 14].
In this section, we consider how inverted lists are compressed
and stored on disk. We limit our discussions here to the
special-purpose integer compression techniques that have
previously been shown to be suitable for index compression,
and focus on their use in increasing the speed of retrieval
systems.
Without compression, the time cost of retrieving inverted
lists is the sum of the time taken to seek for and then retrieve
the inverted lists from disk into memory, and the time taken
to transfer the lists from memory into the CPU cache before
they are processed. The speed of access to compressed
inverted lists is determined by two factors: first, the computational
requirements for decoding the compressed data
and, second, the time required to seek for and retrieve the
compressed data from disk and to transfer it to the CPU
cache before it is decoded. For a compression scheme to
allow faster access to inverted lists, the total retrieval time
and CPU processing costs should be less than the retrieval
time of the uncompressed representation. However, a third
factor makes compression attractive even if CPU processing
costs exceed the saving in disk transfer time: compressing
inverted lists increases the number of lists that can be
cached in memory between queries, so that in the context of
a stream of queries use of compression reduces the number
of disk accesses. It is therefore important that a compression
scheme be efficient in both decompression CPU costs
and space requirements.
There are two general classes of compression scheme that
are appropriate for storing inverted lists.
Variable-bit or
bitwise schemes store integers in an integral number of bits.
Well-known bitwise schemes include Elias gamma and delta
coding [3] and Golomb-Rice coding [4]. Bytewise schemes
store an integer in an integral number of blocks, where a
block is eight bits in size; we distinguish between blocks and
bytes here, since there is no implied restriction that a block
must align to a physical byte-boundary. A simple bytewise
scheme is variable-byte coding [2, 13]; uncompressed integers
are also stored in an integral number of blocks, but we
do not define them as bytewise schemes since, on most architectures, an integer has a fixed-size representation of four
bytes. In detail, these schemes are as follows.
Elias coding [3] is a non-parameterised bitwise method of
coding integers. (Non-parameterised methods use static or
fixed codes to store integers.) The Elias gamma code represents
a positive integer k by 1 + ⌊log2 k⌋ stored as a unary
code, followed by the binary representation of k without its
most significant bit. Using Elias gamma coding, small integers
are compactly represented; in particular, the integer 1
is represented as a single 1-bit. Gamma coding is relatively
inefficient for storing integers larger than 15 [13].
Elias delta codes are suited to coding larger integers, but
are inefficient for small values. For an integer k, a delta
code stores the gamma code representation of 1 + ⌊log2 k⌋,
and then the binary representation of k without its most
significant bit.
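The two Elias codes can be sketched as follows. The bit-sink helpers, the MSB-first bit order, and the zeros-then-one unary convention are our assumptions for illustration; the output buffer must be zero-initialised before use.

```c
#include <stdint.h>

/* Minimal MSB-first bit sink used only for illustration. */
typedef struct { uint8_t *buf; uint64_t bitpos; } bitsink_t;

static void put_bit(bitsink_t *s, int bit) {
    if (bit) s->buf[s->bitpos >> 3] |= (uint8_t)(0x80u >> (s->bitpos & 7));
    s->bitpos++;
}
static void put_bits(bitsink_t *s, uint32_t v, int n) { /* top n bits last-to-first */
    for (int i = n - 1; i >= 0; i--) put_bit(s, (v >> i) & 1u);
}
static int floor_log2(uint32_t k) { int n = 0; while (k >>= 1) n++; return n; }

/* Elias gamma: unary(1 + floor(log2 k)), then k without its top bit. */
static void gamma_encode(bitsink_t *s, uint32_t k) {
    int m = floor_log2(k);
    for (int i = 0; i < m; i++) put_bit(s, 0);  /* unary prefix: m zeros */
    put_bit(s, 1);                              /* terminated by a one   */
    put_bits(s, k, m);                          /* low m bits of k       */
}

/* Elias delta: gamma(1 + floor(log2 k)), then k without its top bit. */
static void delta_encode(bitsink_t *s, uint32_t k) {
    int m = floor_log2(k);
    gamma_encode(s, (uint32_t)(m + 1));
    put_bits(s, k, m);
}
```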
Golomb-Rice bitwise coding [4] has been shown to offer
more compact storage of integers and faster retrieval than
the Elias codes [13]; indeed, it is bitwise optimal under the
assumption that the set of documents with a given term is
random. The codes are adapted to per-term likelihoods via
a parameter that is used to determine the code emitted for
an integer. In many cases, this parameter must be stored
separately using, for example, an Elias code. For coding of
inverted lists, a single parameter is used for all document
numbers in a postings list, but each posting requires a parameter
for its offsets. The parameters can be calculated as
the lists are decoded using statistics stored in memory and
in the lists, as we discuss later.
Coding of an integer k using Golomb codes with respect
to a parameter b is as follows. The code that is emitted is in
two parts: first, the unary code of a quotient q is emitted, where q = ⌊(k - 1)/b⌋ + 1; second, a binary code is emitted for the remainder r, where r = k - q × b - 1. The number of bits required to store the remainder r is either ⌊log2 b⌋ or ⌈log2 b⌉. To retrieve the remainder, the value of the "toggle point" t = (1 << (⌊log2 b⌋ + 1)) - b is required, where << indicates a left-shift operation. After retrieving ⌊log2 b⌋ bits of the remainder r, the remainder is compared to t. If r > t, then one additional bit of the remainder must be retrieved. It is generally thought that caching calculated values of ⌊log2 b⌋
is necessary for fast decoding, with a main-memory penalty
of having to store the values. However, as we show later,
when the standard log library function is replaced with a
fast bit-shifting version, caching is unnecessary.
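A sketch of Golomb decoding with the toggle-point test is given below. The bit-source helpers, the MSB-first bit order, and the zeros-then-one unary convention are assumptions made for illustration and must match whatever encoder is used.

```c
#include <stdint.h>

typedef struct { const uint8_t *buf; uint64_t bitpos; } bitsrc_t;

static uint32_t get_bit(bitsrc_t *s) {
    uint32_t b = (s->buf[s->bitpos >> 3] >> (7 - (s->bitpos & 7))) & 1u;
    s->bitpos++;
    return b;
}
static uint32_t get_bits(bitsrc_t *s, int n) {
    uint32_t v = 0;
    while (n-- > 0) v = (v << 1) | get_bit(s);
    return v;
}
static int floor_log2(uint32_t x) { int n = 0; while (x >>= 1) n++; return n; }

/* Decode one Golomb-coded integer k >= 1 with parameter b >= 1. */
static uint32_t golomb_decode(bitsrc_t *s, uint32_t b) {
    uint32_t q = 1;
    while (get_bit(s) == 0) q++;             /* unary quotient             */
    int lo = floor_log2(b);
    uint32_t toggle = (1u << (lo + 1)) - b;  /* 2^ceil(log2 b) - b         */
    uint32_t r = get_bits(s, lo);            /* floor(log2 b) bits first   */
    if (r >= toggle)                         /* one extra bit is needed    */
        r = ((r << 1) | get_bit(s)) - toggle;
    return (q - 1) * b + r + 1;
}
```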
Rice coding is a variant of Golomb coding where the value
of b is restricted to be a power of 2. The advantage of this
restriction is that there is no "toggle point" calculation required; that is, the remainder is always stored in exactly log2 b bits. The disadvantage of this scheme is that the
choice of value for b is restricted and, therefore, the compression
is slightly less effective than that of Golomb coding.
For compression of inverted lists, a value of b is required.
Witten et al. [14] report that for cases where the probability
of any particular integer value occurring is small--which is
the usual case for document numbers d and offsets o--then
b can be calculated as:
b = 0.69 × mean(k)
For each inverted list, the mean value of document numbers d can be approximated as k = N/f_t, where N is the number of documents in the collection and f_t is the number of postings in the inverted list for term t [14]. This approach can also be extended to offsets: the mean value of offsets o for an inverted list posting can be approximated as k = l_d/f_{d,t}, where l_d is the length of document d and f_{d,t} is the number of offsets of term t within that document. As the statistics N, f_t, and l are often available in memory, or in a simple
auxiliary structure on disk, storage of b values is not required
for decoding; approximate values of l can be stored in memory
for compactness [7], but use of approximate values has
little effect on compression effectiveness as it leads to only
small relative errors in computation of b.
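Computing the parameters from these statistics might look like the following sketch; the function names and the clamping of b to a minimum of 1 are our additions.

```c
#include <stdint.h>

/* Golomb parameter for a term's document numbers: the mean gap is
 * approximately N / f_t, so b = 0.69 * (N / f_t).                 */
static uint32_t b_for_documents(uint32_t N, uint32_t f_t) {
    uint32_t b = (uint32_t)(0.69 * ((double)N / (double)f_t));
    return b < 1 ? 1 : b;
}

/* Golomb parameter for one posting's offsets: the mean gap is
 * approximately l_d / f_dt.                                       */
static uint32_t b_for_offsets(uint32_t l_d, uint32_t f_dt) {
    uint32_t b = (uint32_t)(0.69 * ((double)l_d / (double)f_dt));
    return b < 1 ? 1 : b;
}
```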
In bytewise coding an integer is stored in an integral number
of eight-bit blocks. For variable-byte codes, seven bits
in each block are used to store a binary representation of
the integer k. The remaining bit is used to indicate whether
the current block is the final block for k, or whether an additional
block follows. Consider an example of an integer k in the range of 2^7 = 128 to 2^14 = 16,384. Two blocks are
required to represent this integer: the first block contains
the seven least-significant bits of the integer and the eighth
bit is used to flag that another block follows; the second
block contains the remaining most-significant bits and the
eighth bit flags that no further blocks follow. We use the
convention that the flag bit is set to 1 in the final block and
0 otherwise.
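Under this convention, variable-byte coding and decoding can be sketched as follows; the function names are ours.

```c
#include <stdint.h>
#include <stddef.h>

/* Encode k >= 1 as eight-bit blocks: seven data bits per block,
 * least-significant bits first; the flag (top) bit is 1 only in
 * the final block. Returns the number of bytes written.         */
static size_t vbyte_encode(uint32_t k, uint8_t *out) {
    size_t n = 0;
    while (k >= 128) {
        out[n++] = (uint8_t)(k & 0x7F);   /* flag 0: more blocks follow */
        k >>= 7;
    }
    out[n++] = (uint8_t)(k | 0x80);       /* flag 1: final block        */
    return n;
}

/* Decode one integer, advancing *pos within buf. */
static uint32_t vbyte_decode(const uint8_t *buf, size_t *pos) {
    uint32_t k = 0;
    int shift = 0;
    for (;;) {
        uint8_t block = buf[(*pos)++];
        k |= (uint32_t)(block & 0x7F) << shift;
        if (block & 0x80) return k;       /* final block reached        */
        shift += 7;
    }
}
```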
Compressing an inverted index, then, involves choosing
compression schemes for the three kinds of data that are
stored in a posting: a document number d, an in-document frequency f_{d,t}, and a sequence of offsets o. A standard choice
is to use Golomb codes for document numbers, gamma codes
for frequencies, and delta codes for offsets [14]. (We explore
the properties of this choice later.) In this paper, we describe
such a choice as a GolD-GamF-DelO index.
3.1 Fast Decoding
We experiment with compression of inverted lists of postings that contain frequencies f_{d,t}, document numbers d, and
offsets o. For fast decompression of these postings, there are
two important considerations: first, the choice of compression
scheme for each component of the posting; and, second,
modifications to each compression scheme so that it is both
fast and compatible with the schemes used for the other
components. In this section, we outline the optimisations
we use for fast decompression. Our code is publicly available and distributed under the GNU public licence. (The search engine used in these experiments and our integer compression code are available from http://www.seg.rmit.edu.au/.)
Bitwise Compression
We have experimented with a range of variations of bitwise
decompression schemes. Williams and Zobel [13] reported
results for several efficient schemes, where vectors that contain
compressed integers are retrieved from disk and subsequently decoded. (The code used by Williams and Zobel in their experiments is available from http://www.cs.rmit.edu.au/~hugh/software/.) In their approach, vector decoding uses bitwise shift operations, bit masks, multiplication, subtraction, and function calls to retrieve sequences of bits that
span byte boundaries. In our experiments on Intel Pentium-based
servers running the Linux operating system, we have
found that bitwise shift operations are usually faster than
bit masks, and that the function calls are slow. By optimising
our code to use bitwise shifts and to remove nested
function calls, we have found that the overall time to decode
vectors--regardless of the compression scheme used--is on
average around 60% of that using the code of Williams and
Zobel.
Other optimisations that are specific to Golomb-Rice coding
are also of value. Golomb-Rice decoding requires that ⌊log2 b⌋ is calculated to determine the number of remainder bits to be retrieved. It is practicable to explicitly cache values of ⌊log2 b⌋ in a hash table as they are calculated, or
to pre-calculate all likely-to-be-used values as the retrieval
query engine is initialised. This saves recalculation of logarithms
when a value of b is reused in later processing, with
the penalty of additional memory requirements for storing
the lookup table.
We measured the performance of Golomb coding with and without caching. Timings are average elapsed query evaluation
cost to process index information for 25,000 queries on a
9.75 gigabyte (Gb) collection of Web data [5] using our prototype
retrieval engine on a GolD-GamF-GolO index (that
is, Golomb document numbers, gamma frequencies, Golomb
offsets); we discuss collection statistics and experimental design
further in Section 4. The cache lookup table size is
unrestricted.
We found that, without caching of ⌊log2 b⌋ values, the average query evaluation time is 0.961 seconds. Caching of ⌊log2 b⌋ values as they are calculated during query processing
roughly halves the average query evaluation time, to 0.494
seconds. Pre-calculating and storing the values offers almost
no benefit over caching during query processing, reducing
the time to 0.491 seconds; this reflects that only limited
b values are required during query evaluation. Caching of
toggle points yields 0.492 seconds. As toggle points are calculated
using bitwise shifts, addition, and subtraction, this
is further evidence that bitwise shifts are inexpensive on our
hardware.
An alternative approach to managing log computations
is to replace the standard library log function with a loop
that determines ⌊log2 b⌋ using bitwise shifts and equality
tests; the logarithm value can be determined by locating
the position of the most-significant 1-bit in b. We found
that this led to slight additional improvements in the speed
of decoding Golomb codes, outperforming explicit caching.
All Golomb-Rice coding results reported in this paper are
computed in this way.
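The bit-shifting replacement for the library log call can be as simple as the following sketch.

```c
#include <stdint.h>

/* floor(log2 b) for b >= 1, found by locating the highest set bit
 * with shifts and comparisons instead of calling log().          */
static int floor_log2(uint32_t b) {
    int n = 0;
    while (b > 1) {
        b >>= 1;
        n++;
    }
    return n;
}
```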
Bytewise Compression
We have experimented with improvements to variable-byte
coding. Unlike in bitwise coding, we have found that masking
and shifting are equally as fast because of the large number
of shifts required. We use shifts in our experiments.
Perhaps the most obvious way to increase the speed of
variable-byte decoding is to align the eight-bit blocks to byte
boundaries. Alignment with byte boundaries limits the decoding
to only one option: the flag bit indicating if this is the
last byte in the integer is always the most significant bit, and
the remaining seven bits contain the value. Without byte
alignment, additional conditional tests and operations are
required to extract the flag bit, and the seven-bit value can
span byte boundaries. We would expect that byte alignment
would improve the speed of decoding variable-byte integers.
Figure 1 shows the effect of byte alignment of variable-byte
integers. In this experiment, variable-byte coding is
used to store the offsets o in each inverted list posting. The
optimised Golomb coding scheme described in the previous
section is used to code document numbers d and Elias
gamma coding is used to store the frequencies f
d,t
. We refer
to this as a GolD-GamF-VbyO index.
The graph at the left of Figure 1 shows total index size
as a percentage of the uncompressed collection being indexed. The first bar shows that, without byte alignment,
the GolD-GamF-VbyO index requires almost 13% of the
space required by the collection. The second bar shows that
padding to byte alignment after storing the Gamma-coded f_{d,t} values increases the space requirement to just over 13.5%
of the collection size. We discuss the other schemes in this
figure later in this section.
The graph at the right of Figure 1 shows elapsed query
evaluation times using different index designs. Timings are
[Figure 1 charts: bars for Original, Original with byte boundary, Signature block, Signature block with byte boundary, Scanning, Scanning with byte boundary, Scanning with signature block, and Scanning with signature block and byte boundary; axes are Size (% of Collection) and Average Query Time (Seconds).]
Figure 1: Variable-byte schemes for compressing offsets
in inverted lists in a GolD-GamF-VbyO index.
Four different compression schemes are shown and,
for each, both original and scanning decoding are
shown. Scanning decoding can be used when offsets
are not needed for query resolution.
the average elapsed query evaluation cost to process the
inverted lists for 25,000 queries on a 20 Gb collection of
Web [5] data using our prototype retrieval engine. Queries
are processed as conjunctive Boolean queries. The first bar
shows that the average time is around 0.7 seconds for the
GolD-GamF-VbyO index without byte alignment. The second
bar shows that the effect of byte alignment is a 25% reduction
in average query time. Therefore, despite the small
additional space requirement, byte-alignment is beneficial
when storing variable-byte integers.
A second optimisation to variable-byte coding is to consider
the query mode when processing the index. For querying
that does not use offsets--such as ranked and Boolean
querying--decoding of the offsets in each posting is unnecessary. Rather, all that is required are the document numbers d and document frequencies f_{d,t}. An optimisation is therefore to only examine the flag bit of each block and to ignore the remaining seven bits that contain the value. The value of f_{d,t} indicates the number of offsets o stored in the posting. By examining flag bits until f_{d,t} 1-bits are processed,
it is possible to bypass the offsets with minimal processing.
We call this approach scanning.
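With byte-aligned variable-byte offsets, scanning reduces to counting flag bits, roughly as in the sketch below (names ours).

```c
#include <stdint.h>
#include <stddef.h>

/* Skip f_dt variable-byte coded offsets without decoding them: only
 * the flag bits are inspected. Each stored integer ends with a block
 * whose top bit is 1, so counting those bits is sufficient.         */
static void scan_offsets(const uint8_t *buf, size_t *pos, uint32_t f_dt) {
    uint32_t seen = 0;
    while (seen < f_dt) {
        if (buf[(*pos)++] & 0x80)
            seen++;
    }
}
```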
Scanning can also be used in query modes that do require
offset decoding. As we discussed earlier, phrase querying
requires that all terms are present in a matching document.
After processing the inverted list for the first term that is
evaluated in a phrase query, a temporary inverted list of
postings is created. This temporary list has a set D of documents
that contain the first term. When processing the
second term in the query, a second set of document numbers D' is processed. Offsets for the posting associated with document d ∈ D' can be scanned, that is, passed over without decoding, if d is not a member of D. (At the same time, document numbers in D that are not in D' are discarded.)
We show the performance of scanning in Figure 1. The
fifth and sixth bars show how scanning affects query evaluation
time for variable-bytes that are either unaligned or
aligned to byte boundaries in the GolD-GamF-VbyO index.
Scanning removes the processing of seven-bit values. This
reduces the cost of retrieving unaligned variable-bytes to less
than that of the aligned variable-byte schemes; the small
speed advantage is due to the retrieval of smaller lists in the
unaligned version. Scanning has little effect on byte-aligned
variable bytes, reflecting that the processing of seven-bit values
using shift operations has a low cost. Overall, however,
byte-alignment is preferred since the decoding cost of offsets
is expensive in an unaligned scheme.
A third optimisation is an approach we call signature
blocks, which are a variant of skipping. Skipping is the approach
of storing additional integers in inverted lists that
indicate how much data can be skipped without any processing
[14]. Skipping has the disadvantage of an additional
storage space requirement, but has been shown to offer substantial
speed improvements [14]. A signature block is an
eight-bit block that stores the flag bits of up to eight blocks
that follow. For example, a signature block with the bit-string
11100101 represents that five integers are stored in
the eight following eight-bit blocks: the string 111 represents
that the first three blocks store one integer each; the
string 001 represents that the fourth integer is stored over
three blocks; and, the string 01 represents that the final integer
is stored over two blocks. As all flag bits are stored
in the signature block, the following blocks use all eight bits
to store values, rather than the seven-bit scheme in the standard
variable-byte integer representation.
The primary use of signature blocks is skipping. To skip
offsets, f_{d,t} offset values must be retrieved but not processed. By counting the number of 1-bits in a signature block, the number of integers stored in the next eight blocks can be determined. If the value of f_{d,t} exceeds this, then a second or subsequent signature block is processed until f_{d,t} offsets
have been skipped. The last signature block is, on average,
half full. We have found that bitwise shifts are faster than
a lookup table for processing of signature blocks.
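A possible sketch of skipping with signature blocks is shown below. It assumes the flag bits are stored leftmost-first and that the final, partly used signature group contains only the data blocks it actually describes; both details are our assumptions.

```c
#include <stdint.h>
#include <stddef.h>

/* Skip f_dt integers stored under the signature-block layout: each
 * signature byte holds the flag bits of the data blocks that follow
 * it, so whole groups can be stepped over without decoding values. */
static void skip_signature(const uint8_t *buf, size_t *pos, uint32_t f_dt) {
    uint32_t remaining = f_dt;
    while (remaining > 0) {
        uint8_t sig = buf[(*pos)++];          /* flag bits of next blocks */
        uint32_t blocks = 0, found = 0;
        /* walk the signature leftmost-first until enough 1-bits seen */
        for (int i = 7; i >= 0 && found < remaining; i--) {
            blocks++;
            if ((sig >> i) & 1) found++;
        }
        *pos += blocks;                       /* step over the data blocks */
        remaining -= found;
    }
}
```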
The speed and space requirements are also shown in Figure
1. Not surprisingly, the signature block scheme requires
more space than the previous variable-byte schemes. This
space requirement is further increased if byte alignment of
blocks is enforced. In terms of speed, the third and fourth
bars in the right-hand histogram show that signature blocks
are slower than the original variable-byte schemes when offsets
are processed in the GolD-GamF-VbyO index. These
results are not surprising: signature blocks are slow to process
when they are unaligned, and the byte-aligned version
is slow because processing costs are no less than the original
variable-byte schemes and longer disk reads are required.
As shown by the seventh bar, when offsets are skipped the
unaligned signature block scheme is slower than the original
variable-byte scheme. The savings of skipping with signature
blocks are negated by more complex processing when
blocks are not byte-aligned. In contrast, the right-most bar
shows that the byte-aligned signature block scheme with
skipping is slightly faster on average than all other schemes.
However, given the compactness of the index and its good overall performance, we conclude that the best all-round scheme is the original variable-byte scheme with byte alignment. Therefore, all variable-byte results reported in Section 4 use the original byte-aligned variable-byte scheme with scanning.
Customised Compression
Combinations of bitwise and bytewise compression schemes
are also possible. The aim of such approaches is to combine
the fast decoding of bytewise schemes with the compact
storage of bitwise schemes. For example, a simple and
efficient custom scheme is to store a single bit that indicates
which of two compression schemes is used, and then to store
the integer using the designated compression scheme. We
have experimented with several approaches for storing offsets. The simplest and most efficient approach we tested is as follows: when f_{d,t} = 1, we store a single bit indicating whether the following offset is stored as a bitwise Elias delta code or as a bytewise eight-bit binary representation. When storing values, we use Elias delta coding if the value is greater than 256 and the binary scheme otherwise. This scheme has the potential to reduce space because in the median posting f_{d,t} is 1 and the average offset is around 200. Selective use of a fixed-width representation can save storage of the 6-bit prefix used to indicate magnitude in the corresponding delta code.
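As an illustration, a bit-level encoder for this custom scheme might look as follows in Python. The selector polarity and the mapping of offsets 1 to 256 onto eight bits are assumptions made for the sketch.

def elias_delta_bits(n: int) -> str:
    """Elias delta code of n >= 1, as a bit string."""
    assert n >= 1
    binary = bin(n)[2:]                  # binary representation of n
    length = len(binary)
    len_bits = bin(length)[2:]           # gamma-code the length...
    gamma = "0" * (len(len_bits) - 1) + len_bits
    return gamma + binary[1:]            # ...then n without its leading 1 bit

def encode_offset_custom(offset: int) -> str:
    """Custom scheme sketch: a 1-bit selector, then either an eight-bit
    binary code (small offsets) or an Elias delta code (large offsets)."""
    if offset > 256:
        return "1" + elias_delta_bits(offset)
    return "0" + format(offset - 1, "08b")   # assume offsets >= 1 map to 0..255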
We report the results with this scheme, which we call custom, in the next section. This was the fastest custom scheme we tested. Other approaches we tried included switching between variable-byte and bitwise schemes, using the custom scheme when f_{d,t} is either 1 or 2, and other simple variations. We omit results for these less successful approaches.
RESULTS
All experiments described in this paper are carried out on an Intel Pentium III based machine with 512 Mb of main memory running the Linux operating system. Other processes and disk activity were minimised during timing experiments, that is, the machine was under light load.
A theme throughout these experiments, and one that greatly impacts the results, is the importance of caching. On a
modern machine, caching takes place at two levels. One level
is the caching of recently-accessed disk blocks in memory, a
process that is managed by the operating system. When the
size of the index significantly exceeds memory capacity, to
make space to fetch a new inverted list, the blocks containing
material that has not been accessed for a while must be
discarded. One of the main benefits of compression is that a
much greater volume of index information can be cached in
memory. For this reason, we test our compression schemes
with streams of 10,000 or 25,000 queries extracted from a
query log [11], where the frequency distribution of query
terms leads to beneficial use of caching. Again, queries are
processed as conjunctive Boolean queries.
The other level at which caching takes place is the retention
in the CPU cache of small blocks of data, typically of
128 bytes, recently accessed from memory. CPU caching is managed in hardware.
[Figure 2 comprises two histograms, one of index size (% of collection) and one of average query time (seconds x 10^-2), with one bar per scheme from DelD-GamF-GolO through Custom and No compression.]
Figure 2:
Performance of integer compression
schemes for offsets in inverted lists, in an index with
Golomb document numbers and gamma frequencies.
In this experiment, the index fits in main memory.
A 500 Mb collection is used, and results are averaged
over 10,000 queries.
In current desktop computers, as
many as 150 instruction cycles are required to fetch a single
machine-word into the CPU. At a coarser level, compression
of postings lists means that the number of fetches from
memory to cache during decompression is halved.
Small collection
Figure 2 shows the relative performance of the integer compression
schemes we have described for storing offsets, on
a 500 Mb collection of 94,802 Web documents drawn from
the TREC Web track data [5]; timing results are averaged
over 10,000 queries drawn from an Excite search engine
query log [11]. The index contains 703,518 terms.
These results show the effect of varying the coding scheme used for document numbers d, frequencies f_{d,t}, and offsets o. In all cases where both bitwise and variable-byte codes are used, the bitwise codes are padded to a byte boundary before a variable-byte code is emitted; thus, for example, in a GolD-GamF-VbyO index, there is padding between the gamma
frequency and the sequence of variable-byte offsets. Not all code combinations are shown; for example, given that the speed advantage of using variable-byte document numbers is small, we have not reported results for index types such as VbyD-GamF-RicO and, due to the use of padding, a choice such as VbyD-GamF-VbyO. Given the highly skewed distribution of f_{d,t} values, Golomb or Rice are not suitable coding methods, so these have not been tried.
In the "no compression" case, fixed-width fields are used
to store postings. Document numbers are stored in 32 bits,
frequencies in 16 bits, and offsets in 24 bits; these were the
smallest multiples of bytes that would not overflow for reasonable
assumptions about data properties.
The relative performance of Elias delta and gamma, Rice,
and Golomb coding is as expected. The non-parameterised Elias coding schemes result in larger indexes than the parameterised Golomb-Rice schemes, which, in turn, results in slower
query evaluation. The average difference between offsets is
greater than 15, making Elias delta coding more appropriate
overall than gamma coding; the latter is both slower and
less space-efficient.
On the lower graph in Figure 2, comparing the fourth and
fifth columns and comparing the fifth and eighth columns,
it can be seen that choice of Golomb or Rice codes for either
offsets or document numbers has virtually no impact on index
size. Comparing the fifth and eighth columns on the upper
graph, the schemes yield similar decoding times for document
numbers. However, Rice codes are markedly faster for
decoding offsets, because no toggle point calculation is required
. Among the bitwise schemes, we conclude that Rice
coding should be used in preference to other schemes for
coding document numbers and offsets.
The most surprising result is the effect of using the optimised
byte-boundary variable-byte scheme for coding offsets
. Despite the variable-byte index being 26% larger than
the corresponding Rice-coded index, the overall query evaluation
time is 62% less. Further speed gains are given by coding
all values in variable-byte codes. Indeed, variable-byte
decoding is faster even than processing uncompressed lists.
This result is remarkable: the cost of transferring variable-byte
coded lists from memory to the CPU cache and then
decoding the lists is less than the cost of transferring uncompressed
lists. To our knowledge, this is the first practical
illustration that compression improves the efficiency of
an in-memory retrieval system. We conclude from this that
variable-byte coding should be used to store offsets to reduce
both disk retrieval and memory retrieval costs.
In experiments with integers, Williams and Zobel found
that variable-byte coding is faster than the bitwise schemes
for storing large integers of the magnitude stored in inverted
lists [13]. Our result confirms this observation for retrieval
systems, while also showing that the effect extends to fast
retrieval from memory and that improvements to variable-byte
coding can considerably increase decoding speed.
The custom scheme uses both Elias delta and a binary
bytewise scheme, reducing query evaluation to around 58%
of the time for the Elias delta scheme. However, the custom
scheme is almost twice as slow as the variable-byte scheme
and, therefore, has little benefit in practice.
Large collection
Figure 3 shows the results of a larger experiment with an index that does not fit within the main memory of our machine.
[Figure 3 comprises two histograms, one of index size (% of collection) and one of average query time (seconds), with one bar per scheme from DelD-GamF-GolO through VbyD-VbyF-VbyO and No compression.]
Figure 3: The performance of integer compression schemes for compressing offsets in inverted lists, with Golomb-coded document numbers and gamma-coded frequencies. In this experiment, the index is several times larger than main memory. A 20 Gb collection is used, and results are averaged over 25,000 queries.
Exactly the same index types are tried as for the
experiment above. A 20 Gb collection of 4,014,894 Web documents
drawn from the TREC Web track data [5] is used
and timing results are averaged over 25,000 Boolean queries
drawn from an Excite search engine query log [11].
The
index contains 9,574,703 terms. We include only selected
schemes in our results.
We again note that we have not used heuristics to reduce
query evaluation costs such as frequency-ordering or early
termination. Indeed, we have not even used stopping; with
stopwords removed, query times are greatly improved. Our
aim in this research is to measure the impact on index decoding
time of different choices of compression method, not
to establish new benchmarks for query evaluation time. Our
improvements to compression techniques could, however, be
used in conjunction with the other heuristics, in all likelihood
further reducing query evaluation time compared to
the best times reported previously.
The relative speeds of the bitwise Golomb, Elias delta, and
variable-byte coded offset schemes are similar to that of our
experiments with the 500 Mb collection. Again, variable-byte
coding results in the fastest query evaluation. Perhaps
unsurprisingly given the results described above, an uncompressed
index that does not fit in main memory is relatively
much slower than the variable-byte scheme; the disk transfer
costs are a larger fraction of the overall query cost when
the index does not fit in memory, and less use can be made
of the memory cache. Indexes with variable-byte offsets are
twice as fast as indexes with Golomb, delta, or gamma offsets
, and one-and-a-half times as fast as indexes with Rice
offsets. VbyD-VbyF-VbyO indexes are twice as fast as any
index type with non-variable-byte offsets.
In separate experiments we have observed that the gains
demonstrated by compression continue to increase with collection
size, as the proportion of the index that can be held
in memory declines. Despite the loss in compression with
variable-byte coding, indexes are still less than one-seventh
of the size of the indexed data, and the efficiency gains are
huge.
CONCLUSIONS
Compression of inverted lists can significantly improve the
performance of retrieval systems. We have shown that an efficiently
implemented variable-byte bytewise scheme results
in query evaluation that is twice as fast as more compact
bitwise schemes. Moreover, we have demonstrated that the
cost of transferring data from memory to the CPU cache
can also be reduced by compression: when an index fits in main memory, the cost of transferring compressed data from memory to the cache and then decoding it is less than the cost of transferring uncompressed data. Using byte-aligned coding
, we have shown that queries can be run more than twice
as fast as with bitwise codes, at a small loss of compression
efficiency. These are dramatic gains.
Modern computer architectures create opportunities for
compression to yield performance advantages.
Once, the
main benefits of compression were to save scarce disk space
and computer-to-computer transmission costs. An equally
important benefit now is to make use of the fact that the
CPU is largely idle. Fetching a single byte from memory involves
a delay of 12 to 150 CPU cycles; a fetch from disk involves
a delay of 10,000,000 cycles. Compression can greatly
reduce the number of such accesses, while CPU time that
would otherwise be unused can be spent on decoding. With
fast decoding, overall costs are much reduced, greatly increasing
query evaluation speed. In current computers such
architecture considerations are increasingly important to the development of new algorithms for query processing.
Poor
caching has been a crucial shortcoming of existing algorithms
investigated in this research.
There are several possible extensions to this work. We
plan to investigate nibble-coding, a variant of variable-byte
coding where two flag bits are used in each variable-byte
block. This approach may improve the performance of signature blocks. We will also experiment with phrase querying in practice and explore the average query evaluation speed when partial scanning is possible.
REFERENCES
[1] V. Anh, O. de Kretser, and A. Moffat. Vector-space ranking with effective early termination. In W. Croft, D. Harper, D. Kraft, and J. Zobel, editors, Proc. ACM-SIGIR International Conference on Research and Development in Information Retrieval, pages 35-42, New York, Sept. 2001.
[2] E. de Moura, G. Navarro, N. Ziviani, and R. Baeza-Yates. Fast and flexible word searching on compressed text. ACM Transactions on Information Systems, 18(2):113-139, 2000.
[3] P. Elias. Universal codeword sets and representations of the integers. IEEE Transactions on Information Theory, IT-21(2):194-203, Mar. 1975.
[4] S. Golomb. Run-length encodings. IEEE Transactions on Information Theory, IT-12(3):399-401, July 1966.
[5] D. Hawking, N. Creswell, and P. Thistlewaite. Overview of TREC-7 very large collection track. In E. Voorhees and D. Harman, editors, Proc. Text Retrieval Conference (TREC), pages 91-104, Washington, 1999. National Institute of Standards and Technology Special Publication 500-242.
[6] A. Moffat and J. Zobel. Self-indexing inverted files for fast text retrieval. ACM Transactions on Information Systems, 14(4):349-379, Oct. 1996.
[7] A. Moffat, J. Zobel, and R. Sacks-Davis. Memory-efficient ranking. Information Processing & Management, 30(6):733-744, 1994.
[8] G. Navarro, E. de Moura, M. Neubert, N. Ziviani, and R. Baeza-Yates. Adding compression to block addressing inverted indexes. Information Retrieval, 3(1):49-77, 2000.
[9] M. Persin. Document filtering for fast ranking. In W. Croft and C. van Rijsbergen, editors, Proc. ACM-SIGIR International Conference on Research and Development in Information Retrieval, pages 339-348, Dublin, Ireland, 1994.
[10] M. Persin, J. Zobel, and R. Sacks-Davis. Filtered document retrieval with frequency-sorted indexes. Journal of the American Society for Information Science, 47(10):749-764, 1996.
[11] A. Spink, D. Wolfram, B. J. Jansen, and T. Saracevic. Searching the web: The public and their queries. Journal of the American Society for Information Science, 52(3):226-234, 2001.
[12] A. Vo and A. Moffat. Compressed inverted files with reduced decoding overheads. In R. Wilkinson, B. Croft, K. van Rijsbergen, A. Moffat, and J. Zobel, editors, Proc. ACM-SIGIR International Conference on Research and Development in Information Retrieval, pages 290-297, Melbourne, Australia, July 1998.
[13] H. Williams and J. Zobel. Compressing integers for fast file access. Computer Journal, 42(3):193-201, 1999.
[14] I. Witten, A. Moffat, and T. Bell. Managing Gigabytes: Compressing and Indexing Documents and Images. Morgan Kaufmann Publishers, Los Altos, CA, USA, second edition, 1999.
[15] N. Ziviani, E. de Moura, G. Navarro, and R. Baeza-Yates. Compression: A key for next-generation text retrieval systems. IEEE Computer, 33(11):37-44, Nov. 2000.
| Variable byte;Decoding;Efficiency;integer coding;Bytewise compression;Search engine;retrieval efficiency;Integer Compression;Inverted indexes;index compression;Optimisation;Compression;Inverted index;Document retrieval |
54 | Computing Consistent Query Answers using Conflict Hypergraphs | A consistent query answer in a possibly inconsistent database is an answer which is true in every (minimal) repair of the database. We present here a practical framework for computing consistent query answers for large, possibly inconsistent relational databases. We consider relational algebra queries without projection , and denial constraints. Because our framework handles union queries, we can effectively (and efficiently) extract indefinite disjunctive information from an inconsistent database. We describe a number of novel optimization techniques applicable in this context and summarize experimental results that validate our approach. | INTRODUCTION
Traditionally, the main role of integrity constraints in
databases was to enforce consistency.
The occurrence of
integrity violations was prevented by DBMS software. However
, while integrity constraints continue to express important
semantic properties of data, enforcing the constraints
has become problematic in current database applications.
For example, in data integration systems integrity violations
may be due to the presence of multiple autonomous
data sources. The sources may separately satisfy the constraints
, but when they are integrated the constraints may
not hold. Moreover, because the sources are autonomous,
the violations cannot be simply fixed by removing the data
involved in the violations.
Example
1. Let Student be a relation schema with the attributes Name and Address and the key functional dependency Name → Address. Consider the following instance of Student:

    Name              Address
    Jeremy Burford    Los Angeles
    Jeremy Burford    New York
    Linda Kenner      Chicago

The first two tuples may come from different data sources,
so it may be impossible or impractical to resolve the inconsistency
between them. However, there is clearly a difference
between the first two tuples and the third one. We don't
know whether Jeremy Burford lives in Los Angeles or New
York, but we do know that Linda Kenner lives in Chicago.
An approach to query answering that ignores inconsistencies
will be unable to make this distinction: the distinction between
reliable and unreliable data. On the other hand, any
approach that simply masks out inconsistent data (the first
two tuples in this example) will lose indefinite information
present in inconsistent databases. In this example, we know
that there is a student named Jeremy Burford (existential
information) and that Jeremy Burford lives in Los Angeles
or New York (disjunctive information).
The above example illustrates the need to modify the standard
notion of query answer in the context of inconsistent
databases. We need to be able to talk about query answers
that are unaffected by integrity violations. In [2], the notion
of consistent query answer was proposed to achieve that objective. [2] introduced the notion of repair: a database that
satisfies the integrity constraints and is minimally different
from the original database. A consistent answer to a query,
in this framework, is an answer present in the result of the
query in every repair.
Example
2. In Example 1, there are two repairs corresponding
to two different ways of restoring the consistency:
either the first or the second tuple is deleted. If a query asks
for all the information about students, only the tuple (Linda
Kenner,Chicago) is returned as a consistent answer because
it is the only tuple that is present in both repairs. On the
other hand, if a query asks for the names of students living
in Los Angeles or New York, then Jeremy Burford is a
consistent answer.
The framework of [2] has served as a foundation for most
of the subsequent work in the area of querying inconsistent
databases [3, 5, 11, 12, 13, 15, 17, 19, 23] (see [7] for a survey
and an in-depth discussion). The work presented here
addresses the issue of computing consistent query answers
for projection-free queries and denial integrity constraints.
It is shown in [13] that this task can be done in polynomial
time, using the notion of conflict hypergraph that succinctly
represents all the integrity violations in a given database.
This line of research is pursued further in the present paper.
The main contributions of this paper are as follows:
- A complete, scalable framework for computing consistent answers to projection-free relational algebra queries in the presence of denial constraints. Our approach uses a relational DBMS as a backend and scales up to large databases.
- Novel optimization techniques to eliminate redundant DBMS queries.
- Encouraging experimental results that compare our approach with an approach based on query rewriting and estimate the overhead of computing consistent query answers. No comprehensive results of this kind exist in the literature.
Because our query language includes union, our approach
can extract indefinite disjunctive information present in an
inconsistent database (see Example 1). Moreover, consistent
query answers are computed in polynomial time. Other existing
approaches are either unable to handle disjunction in
queries [2, 12, 17] or cannot guarantee polynomial time computability of consistent query answers [3, 5, 11, 15, 19, 23]. The latter is due to the fact that those approaches rely on the computation of answer sets of logic programs with disjunction and negation, a Π^p_2-complete problem. Only the
approach of [2, 12] (which uses query rewriting) and the approach
presented here scale up to large databases. Related
research is further discussed in Section 6.
The plan of the paper is as follows. In Section 2, we introduce
basic concepts. In Section 3, we present our approach
to computing consistent answers to projection-free queries
and describe its implementation in a system called Hippo.
In Section 4, we describe several techniques for eliminating
redundant DBMS queries, that we have implemented in
Hippo. In Section 5, we discuss a number of experiments we
have conducted with Hippo and query rewriting. In Section
6, we briefly discuss related work. Section 7 contains conclusions
and a discussion of possible future research directions.
BASIC NOTIONS AND FACTS
In this paper we work in the relational model of data. We
recall that a database schema
S is a set of relation names
with attribute names and types. An instance of a database
is a function that assigns a finite set of tuples to each relation
name. For the purposes of this paper we consider only two
fixed database domains N (natural numbers) and D (uninterpreted constants). We also use the natural interpretation over N of the binary relational symbols =, ≠, <, >, and we assume that two constants are equal only if they have the same name. We also view I as a structure for the first-order language over the vocabulary consisting of the symbols of S and the standard built-in predicates over N (=, ≠, <, >).
In this article, we use projection-free (π-free) relational algebra expressions, defined using the following grammar:
    E ::= R | σ(E) | E × E | E ∪ E | E \ E.
|R| is the arity of the relation symbol R and (unless specified otherwise) for the sake of simplicity we assume that attribute names are consecutive natural numbers. We extend this to expressions, i.e. |E| is the arity of the expression, and E.i is the reference to the i-th column resulting from the expression E (used in conditions for subexpressions). Moreover, t[i] is the value on the i-th position of t, t[i, j] is an abbreviation for the tuple (t[i], . . . , t[j]), and with |t| we denote the length of the tuple t. We say that a tuple t is compatible with an expression E if the length of the tuple is equal to the arity of the expression, i.e. |t| = |E|.
For a given expression E, QA_E(I) is the result of evaluating
E in the database instance I. In this paper we use only the
set semantics of relational algebra expressions.
We also use relational calculus queries consisting of
quantifier-free first-order formulas which may be open (having
free variables) or ground.
In fact, our approach can
handle relational algebra queries that require projection, as
long as they can be translated to quantifier-free relational
calculus queries. That's why we can deal with the relational
algebra query corresponding to the query
    Student(X, LosAngeles) ∨ Student(X, NewYork)
in Example 1. We also occasionally use SQL.
2.2
Repairs and consistent query answers
An integrity constraint is a consistent closed first-order
formula. In this paper we consider only the class of denial
integrity constraints of the form:
    ∀x̄_1, . . . , x̄_k. ¬[R_{i_1}(x̄_1) ∧ . . . ∧ R_{i_k}(x̄_k) ∧ φ(x̄_1, . . . , x̄_k)],    (1)
where φ is a boolean expression consisting of atomic formulas referring to built-in predicates. The number k is called
the arity of a constraint.
Note that, for example, functional dependencies and exclusion
constraints are of the above form. Below we give another
example.
Example
3. Consider the relation Emp with attributes
Name, Salary, and Manager, with Name being the primary
key.
The constraint that no employee can have a salary greater than that of her manager is a denial constraint:
    ∀n, s, m, s', m'. ¬[Emp(n, s, m) ∧ Emp(m, s', m') ∧ s > s'].
Definition 1
(Consistent database).
A
database
instance I is consistent with a set of integrity constraints C
if I |= C (i.e., C is true in I); inconsistent otherwise.
Definition
2. For a given database instance I of schema
S, its set of facts Σ(I) is the set of all positive facts that hold in this database:
    Σ(I) = {R(t) | R ∈ S ∧ t ∈ I(R)}.
Definition 3
(Database distance).
Given two instances I_1 and I_2 of the same database, the distance between those instances, Δ(I_1, I_2), is the symmetric difference between the sets of facts of those instances:
    Δ(I_1, I_2) = (Σ(I_1) \ Σ(I_2)) ∪ (Σ(I_2) \ Σ(I_1)).
Definition 4
(Proximity relation).
Given three instances I, I_1, I_2, the instance I_1 is closer to I than the instance I_2 if the distance between I_1 and I is contained in the distance between I_2 and I, i.e.
    I_1 ≤_I I_2 ⇔ Δ(I, I_1) ⊆ Δ(I, I_2).
Definition 5
(Database repair).
For a given instance I and set of integrity constraints C, I' is a repair of I w.r.t. C if I' is the closest instance to I that is consistent with C, i.e. I' |= C and I' is ≤_I-minimal among the instances that satisfy C.
By Rep_C(I) we denote the set of all repairs of I with respect to C.
The following fact captures an important property of repairs
of denial constraints: each repair is a maximal consistent
subset of the database.
Fact
1. If C consists only of denial constraints, then:
    I' ∈ Rep_C(I) ⇒ Σ(I') ⊆ Σ(I).
Definition 6
(Core instance).
For a given instance I, its core w.r.t. a set of integrity constraints C is an instance Core^I_C such that:
    Core^I_C(R) = ⋂_{I' ∈ Rep_C(I)} I'(R).
For any relation R and set of integrity constraints C, if there exists a relational algebra expression ω^R_C such that for any instance I:
    QA_{ω^R_C}(I) = Core^I_C(R),
we call ω^R_C a core expression of the relation R w.r.t. the set of integrity constraints C.
Fact 2. If C is a set of denial integrity constraints, then for any R ∈ S there exists a core expression ω^R_C of R w.r.t. C.
Example
4. Suppose we have a table P (A, B) with a
functional dependency A → B. The core expression for P
in SQL is:
SELECT * FROM P P1 WHERE NOT EXISTS (
SELECT * FROM P P2
WHERE P1.A = P2.A AND P1.B <> P2.B);
Having defined repairs, we can define consistent answers to
queries. In general, the intuition is that the consistent query
answer is an answer to the query in every repair. In this paper
we consider consistent answers for two classes of queries.
Definition 7
(CQA for ground queries).
Given a database instance I and a set of denial integrity constraints C, we say that true (resp. false) is the consistent answer to a ground query Φ w.r.t. C in I, and we write I |=_C Φ, if in every repair I' ∈ Rep_C(I), I' |= Φ (resp. I' ⊭ Φ).
Definition 8
(CQA for relational algebra).
Given a database instance I and a set of denial integrity
constraints C, the set of consistent answers to a query E
w.r.t. C in I is defined as follows:
    CQA^E_C(I) = ⋂_{I' ∈ Rep_C(I)} QA_E(I').
2.3
Conflict hypergraphs
The conflict hypergraph [13] constitutes a compact, space-efficient
representation of all repairs of a given database instance
. Note that this representation is specifically geared
toward denial constraints.
Definition 9
(Conflict).
For a given integrity constraint c of form (1), a set of facts {R_{i_1}(t_1), . . . , R_{i_k}(t_k)}, where t_j ∈ I(R_{i_j}), is a conflict in a database instance I if φ(t_1, . . . , t_k). By E_{c,I} we denote the set of all conflicts generated by the integrity constraint c in I.
Definition 10
(Conflict hypergraph).
For a given set of integrity constraints C and a database instance I, the conflict hypergraph G_{C,I} is a hypergraph whose set of vertices is the set of facts of the instance I, and whose set of hyperedges consists of all conflicts generated by constraints from C in I, i.e.
    G_{C,I} = (V_I, E_{C,I}), where V_I = Σ(I) and E_{C,I} = ⋃_{c ∈ C} E_{c,I}.
Definition 11
(Maximal independent set).
For a hypergraph G = (V, E), a set of vertices M ⊆ V is a maximal independent set if it is a maximal set that contains no hyperedge from E.
Fact
3. Let I be a database instance, and C a set of
denial constraints; then for any repair I' ∈ Rep_C(I), Σ(I') is a maximal independent set M in G_{C,I}, and vice versa.
As shown in the following example, in the case of denial constraints the set of conflicts can be defined using a simple
query.
Example
5. Suppose we have a table P (A, B) with a
functional dependency A → B. The SQL expression for selecting
all conflicts from P generated by the functional constraint
is:
SELECT * FROM P P1, P P2
WHERE P1.A = P2.A AND P1.B <> P2.B;
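For illustration, the sketch below (not Hippo's actual code) runs such a conflict-detection query through SQLite and keeps the resulting conflicts in memory as hyperedges; for a binary constraint such as this functional dependency every hyperedge is a pair of facts.

import sqlite3

# Build the conflict hypergraph for P(A, B) under the dependency A -> B,
# keeping only the hyperedges in memory.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE P (A INTEGER, B INTEGER)")
conn.executemany("INSERT INTO P VALUES (?, ?)", [(1, 2), (1, 3), (2, 5)])

edges = set()
for a1, b1, a2, b2 in conn.execute(
        "SELECT P1.A, P1.B, P2.A, P2.B FROM P P1, P P2 "
        "WHERE P1.A = P2.A AND P1.B <> P2.B"):
    # Each conflicting pair of facts becomes one (binary) hyperedge.
    edges.add(frozenset({("P", (a1, b1)), ("P", (a2, b2))}))

print(edges)   # one hyperedge: {('P', (1, 2)), ('P', (1, 3))}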
Definition 12
(Data complexity).
The data complexity
of consistent answers to ground first-order queries
is the complexity of determining membership in the set D_{C,Φ} = {I | I |=_C Φ}, where Φ is a fixed ground first-order query, and C is a fixed finite set of integrity constraints.
We note that for a fixed set of integrity constraints, the
conflict hypergraph is of polynomial size (in the number of
tuples in the database instance).
IMPLEMENTATION
We review here the algorithm [13] for checking the consistency
of ground queries in the presence of denial constraints,
and then show how to use it to answer π-free relational algebra queries, which correspond to open quantifier-free relational calculus queries.
We assume here that we work with a set of integrity constraints consisting only of denial constraints. The input to the algorithm consists of a ground quantifier-free formula Φ, a set of integrity constraints C, and a database instance I. We want the algorithm to answer the question whether I |=_C Φ.
Theorem
1. [13] The data complexity of consistent answers
to quantifier-free ground queries w.r.t. a set of denial constraints is in P.
The proof of this theorem can be found in [13] together with
the corresponding algorithm that we call HProver.
This
algorithm takes the query in CNF, and a conflict hypergraph G_{C,I} that corresponds to the database instance I in the presence of integrity constraints C.
Input: Φ = Φ_1 ∧ . . . ∧ Φ_k, a ground input formula in CNF;
       G_{C,I} = (V_I, E_{C,I}), the conflict hypergraph of I w.r.t. C.
1  for i ∈ {1, . . . , k} do
2    let ¬Φ_i ≡ ¬R_{i_1}(t_1) ∧ . . . ∧ ¬R_{i_p}(t_p) ∧ R_{i_{p+1}}(t_{p+1}) ∧ . . . ∧ R_{i_m}(t_m)
3    for j ∈ {p + 1, . . . , m} do
4      if t_j ∉ I(R_{i_j}) then
5        next i
6    B ← {R_{i_{p+1}}(t_{p+1}), . . . , R_{i_m}(t_m)}
7    for j ∈ {1, . . . , p} do
8      if t_j ∈ I(R_{i_j}) then
9        choose e_j ∈ {e ∈ E_{C,I} | R_{i_j}(t_j) ∈ e} nondeterministically
10       B ← B ∪ (e_j \ {R_{i_j}(t_j)})
11   if B is independent in G_{C,I} then
12     return false
13 return true
Figure 1: Algorithm HProver
The first step of the algorithm reduces the task of determining whether true is the consistent answer to the query Φ to answering the same question for every conjunct Φ_i. Then each formula Φ_i is negated and the rest of the algorithm attempts to find a repair I' in which ¬Φ_i is true, i.e., in which
1. t_j ∈ I'(R_{i_j}) for j = p + 1, . . . , m,
2. t_j ∉ I'(R_{i_j}) for j = 1, . . . , p.
Such a repair corresponds to a maximal independent set M in the conflict hypergraph such that:
1'. every one of R_{i_{p+1}}(t_{p+1}), . . . , R_{i_m}(t_m) is an element of M,
2'. none of R_{i_1}(t_1), . . . , R_{i_p}(t_p) is an element of M.
If the algorithm succeeds in building an independent set satisfying the properties 1' and 2', such a set can be extended to a maximal one which also satisfies those properties. That means that there is a repair in which ¬Φ_i, and thus also ¬Φ, is true. If the algorithm does not succeed for any i, i = 1, . . . , k, then true is the consistent answer to Φ.
The condition 1' is satisfied by simply including the appropriate facts in M. The condition 2' is satisfied by excluding the appropriate facts from M. A fact can be excluded if it is not in Σ(I) or if it belongs to a hyperedge whose remaining elements are already in M.
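The following Python sketch mirrors the algorithm of Figure 1. The data representation is ours: the CNF query is a list of clauses whose literals are (positive, relation, tuple) triples, the instance maps relation names to sets of tuples, and hyperedges are frozensets of (relation, tuple) facts; the nondeterministic choice of line 9 is realized by enumerating all combinations of hyperedges.

from itertools import product

def hprover(phi, instance, edges):
    # Returns True iff phi holds in every repair (true is the consistent answer).
    fact_edges = {}                          # fact -> hyperedges containing it
    for e in edges:
        for fact in e:
            fact_edges.setdefault(fact, []).append(e)

    for clause in phi:                       # try to falsify each conjunct
        include, exclude = [], []
        for positive, rel, tup in clause:
            # Negating the clause: its negative literals become facts that a
            # falsifying repair must contain, its positive ones facts to omit.
            (exclude if positive else include).append((rel, tup))

        if any(tup not in instance.get(rel, ()) for rel, tup in include):
            continue                         # no repair can contain a non-fact

        present = [f for f in exclude if f[1] in instance.get(f[0], ())]
        choices = [fact_edges.get(f, []) for f in present]

        for picked in product(*choices):     # one hyperedge per excluded fact
            b = set(include)
            for f, e in zip(present, picked):
                b |= e - {f}                 # the edge's other vertices keep f out
            if b.isdisjoint(exclude) and not any(e <= b for e in edges):
                return False                 # an independent B exists: some
                                             # repair falsifies this conjunct
    return True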
3.2
Finding an envelope
Any relational algebra expression E can be translated to a corresponding first-order formula Φ_E(x̄) in a standard way. Since we consider only π-free algebra expressions, the formula Φ_E(x̄) is quantifier-free. To be able to use HProver, we have to ground this formula, i.e., find an appropriate set of bindings for the variables in the formula. This will be done by evaluating an envelope query over the database. An envelope query should satisfy two properties: (1) it should return a superset of the set of consistent query answers for every database instance, and (2) it should be easily constructible from the original query. The result of evaluating an envelope query over a given database will be called an envelope.
Suppose K_E is an envelope query for a query E. We have that
    CQA^E_C(I) = { t̄ ∈ QA_{K_E}(I) | I |=_C Φ_E(t̄) }.
If an expression E does not use the difference operator (and
thus is a monotonic expression), E itself is an envelope
query, as stated by the following lemma:
Lemma 1. For any monotonic relational expression E, the following holds:
    CQA^E_C(I) ⊆ QA_E(I).
However, when E is not monotonic, the set of consistent query answers may contain tuples not contained in QA_E(I). That kind of situation is shown in the example below.
Example
6. Suppose we have two relations R(A, B) and
S(A, B, C, D), and we have a functional dependency over R: A → B. In the case when I(R) = {(1, 2), (1, 3)} and I(S) = {(1, 2, 1, 3)}, the set of answers to the query
    E = S \ (R(A_1, B_1) ⋈_{B_1 ≠ B_2} R(A_2, B_2))
is ∅, while the set of consistent query answers is {(1, 2, 1, 3)}.
To obtain the expression for an envelope, we define two
operators F and G by mutual recursion. The operator F
defines the envelope by overestimating the set of consistent
answers. The auxiliary operator G underestimates the set
of consistent answers.
Definition 13. We define the operators F and G recursively:
    F(R) = R,
    F(E_1 × E_2) = F(E_1) × F(E_2),
    F(E_1 \ E_2) = F(E_1) \ G(E_2),
    F(E_1 ∪ E_2) = F(E_1) ∪ F(E_2),
    F(σ(E)) = σ(F(E)),

    G(R) = ω^R_C,
    G(E_1 × E_2) = G(E_1) × G(E_2),
    G(E_1 \ E_2) = G(E_1) \ F(E_2),
    G(E_1 ∪ E_2) = G(E_1) ∪ G(E_2),
    G(σ(E)) = σ(G(E)).
Because C consists only of denial constraints, Fact 2 guarantees that the expression ω^R_C exists, and therefore the operators
are well defined. The pair of operators (F, G) has the
following properties:
Lemma 2. For any π-free relational algebra expression E:
    QA_{G(E)}(I) ⊆ QA_E(I) ⊆ QA_{F(E)}(I), and
    CQA^{G(E)}_C(I) ⊆ CQA^E_C(I) ⊆ CQA^{F(E)}_C(I).
Lemma 3. For any π-free relational algebra expression E:
    ∀I' ∈ Rep_C(I). QA_{G(E)}(I) ⊆ QA_E(I') ⊆ QA_{F(E)}(I).
With those two lemmas we can prove the following theorem.
Theorem 2. If C contains only denial constraints, then for any π-free relational algebra expression E the following holds for every database instance I:
    QA_{G(E)}(I) ⊆ CQA^E_C(I) ⊆ QA_{F(E)}(I).
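A direct reading of Definition 13 is a pair of mutually recursive functions over the query's syntax tree. The sketch below uses our own tuple-based AST encoding, with a ("core", name) node standing in for the core expression ω^R_C of a relation; it is illustrative only.

# AST nodes: ("rel", name), ("select", cond, e), ("product", e1, e2),
#            ("union", e1, e2), ("diff", e1, e2), ("core", name).
def f_env(e):
    kind = e[0]
    if kind == "rel":
        return e
    if kind == "select":
        return ("select", e[1], f_env(e[2]))
    if kind == "diff":
        return ("diff", f_env(e[1]), g_env(e[2]))
    return (kind, f_env(e[1]), f_env(e[2]))      # union and product

def g_env(e):
    kind = e[0]
    if kind == "rel":
        return ("core", e[1])                    # replace R by its core expression
    if kind == "select":
        return ("select", e[1], g_env(e[2]))
    if kind == "diff":
        return ("diff", g_env(e[1]), f_env(e[2]))
    return (kind, g_env(e[1]), g_env(e[2]))

# Example 6 revisited: F(S \ E') keeps S intact but replaces R by its core.
print(f_env(("diff", ("rel", "S"),
             ("select", "B1<>B2", ("product", ("rel", "R"), ("rel", "R"))))))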
3.3
The system Hippo
We have implemented a system called Hippo for finding
consistent answers to π-free relational algebra queries. The
data is stored in an RDBMS (in our case, PostgreSQL).
The flow of data in Hippo is shown in Figure 2.
[Figure 2 is a block diagram: the input query E is Translated to the formula Φ_E and Estimated to the envelope expression F(E); Evaluation of F(E) against DB produces the Envelope; Conflict Detection over DB and IC produces the Conflict Hypergraph; Grounding of Φ_E for each envelope tuple feeds HProver, which emits the Answer Set.]
Figure 2: Data flow in Hippo
The only output of this system is the Answer Set consisting of the
consistent answers to the input query E with respect to
a set of integrity constraints IC in the database instance
DB. Before processing any input query, the system performs
Conflict Detection, and creates the Conflict Hypergraph. We
assume that the number of conflicts is small enough to allow
us to store the hypergraph in main memory. We keep in
main memory only the set of hyperedges corresponding to
conflicts in database. The set of all the vertices represents
the entire contents of the database and thus may be too big
to fit in main memory. In this way, we guarantee that our
approach is scalable.
The processing of a query E consists of Estimating it to an
envelope query F(E) that after Evaluation by an RDBMS
gives us the Envelope. Also, the system performs Translation
of the input query E to a corresponding first-order logic
formula Φ_E. Now, for every tuple from the Envelope we perform Grounding of Φ_E
. Having now a first-order ground
query we can check if true is the consistent answer to this
query using HProver. Depending on the result of this check
we return the tuple or not. It's important to notice here that
because the hypergraph is stored in main memory, HProver
doesn't need any immediate knowledge of the integrity constraints
(no arrow from IC to HProver). This is because
in HProver the independence of constructed sets B is being
checked only for sets of vertices that are contained in the
database, and if such vertices are in any conflict, it is registered
in the hypergraph. HProver makes, however, database
accesses to check tuple membership in database relations.
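The per-tuple processing just described can be summarized as a small driver loop; the sketch below is our own framing rather than Hippo's API, with the envelope iterator, the grounding function, and the prover passed in as parameters (for example, the HProver sketch given earlier).

def consistent_answers(envelope_tuples, ground_query, hprover, instance, edges):
    """Yield the consistent answers lazily: `envelope_tuples` iterates over
    QA_{F(E)}(I), `ground_query(t)` returns the CNF grounding of the query
    formula for tuple t, and `hprover` decides consistent truth against
    `instance` and the hyperedges `edges`."""
    for t in envelope_tuples:
        if hprover(ground_query(t), instance, edges):
            yield t    # t is a consistent answer

Because the function is a generator, answers are produced one at a time, so results larger than main memory can be streamed.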
OPTIMIZATIONS
The previous section showed how to build a system for
computing consistent query answers. But even though we
have decided to store the conflict hypergraph in main memory
, we still have to perform tuple membership checks (steps
4 and 8 in the HProver algorithm). To check if a tuple is
present in a given table, we execute a simple membership
query. For every tuple from the envelope we have to perform
several tuple checks (depending on the complexity of
the query). Executing any query is usually a costly operation
in the database context. Therefore tuple membership
checks are a significant factor in the algorithm execution
time.
In this section we address the problem of eliminating tuple
membership checks. We propose two improvements:
1. The first infers information about the tuples present
in the database from the current envelope tuple. That
makes it possible to answer some tuple checks without
interrogating the database.
2. The second supplements the first by extending the envelope
expression so that we can find the results of all
relevant tuple checks without executing any membership
query.
4.1
Knowledge gathering
In this section we address the problem of answering tuple
checks.
Definition 14 (Relevant facts). For a given π-free expression E and a tuple t compatible with E, the set TC(E, t) of relevant facts is defined recursively:
    TC(R, t) = {R(t)},
    TC(E_1 ∪ E_2, t) = TC(E_1, t) ∪ TC(E_2, t),
    TC(E_1 \ E_2, t) = TC(E_1, t) ∪ TC(E_2, t),
    TC(E_1 × E_2, (t_1, t_2)) = TC(E_1, t_1) ∪ TC(E_2, t_2),
    TC(σ(E), t) = TC(E, t).
The set of facts TC(E, t) consists of all facts that HProver may need when working with the query Φ_E(t) (we conjecture that the same set of facts will be needed by any practical checker of consistent query answers for quantifier-free queries). In the following example we show that the tuple t itself may carry information that can be used to derive some relevant facts.
Example 7. Recall that relation attributes are named by natural numbers. Assume that we have two tables R(1, 2), P(1, 2) and a query E = F(E) = σ_{1=a}(R × (R ∪ P)). Suppose that a tuple t = (a, b, c, d) is the only result of the evaluation of F(E) in a database instance I. The set of relevant facts is TC(E, t) = {R(a, b), R(c, d), P(c, d)}. A natural consequence of the semantics of relational algebra expressions is that t ∈ QA_{σ_{1=a}(R × (R ∪ P))}(I) implies (a, b) ∈ I(R). We can use this information to avoid performing some membership queries. At the same time the tuple t itself doesn't carry enough information to decide whether (c, d) belongs to either I(R), I(P), or both of them.
We call the process of inferring the information from the result
of the evaluation of a query knowledge gathering. Formally,
we define the set of derived facts in the following way:
Definition 15 (Knowledge gathering). For a given π-free expression E and a tuple t compatible with E we define the set KG recursively:
    KG(R, t) = {R(t)},
    KG(E_1 ∪ E_2, t) = KG(E_1, t) ∩ KG(E_2, t),
    KG(E_1 \ E_2, t) = KG(E_1, t),
    KG(σ(E), t) = KG(E, t),
    KG(E_1 × E_2, (t_1, t_2)) = KG(E_1, t_1) ∪ KG(E_2, t_2).
We note here that the cardinality of the set of facts inferred
with KG is linear in the size of the query and doesn't depend
on the value of the tuple t. Now we state the main property
of KG.
Theorem 3 (Soundness of KG). Given a database instance I and a π-free expression E:
    ∀t ∈ QA_{F(E)}(I). ∀R(t') ∈ TC(E, t). R(t') ∈ KG(E, t) ⟹ I |= R(t').
Knowledge gathering is also complete in the case of {σ, ×}-expressions, i.e. it derives all relevant facts that hold in the database I.
Theorem 4 (Completeness of KG for {σ, ×}). Given a database I and any {σ, ×}-query E:
    ∀t ∈ QA_{F(E)}(I). ∀R(t') ∈ TC(E, t). I |= R(t') ⟹ R(t') ∈ KG(E, t).
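The two recursive definitions can be transcribed almost literally; the sketch below reuses the tuple-based AST encoding of the earlier F/G sketch and takes relation arities from a dictionary, both of which are our own conventions. Run on Example 7 it reproduces the sets given there.

def arity_of(e, arity):
    kind = e[0]
    if kind == "rel":
        return arity[e[1]]
    if kind == "product":
        return arity_of(e[1], arity) + arity_of(e[2], arity)
    return arity_of(e[2] if kind == "select" else e[1], arity)

def tc(e, t, arity):                               # relevant facts, Definition 14
    kind = e[0]
    if kind == "rel":
        return {(e[1], t)}
    if kind == "select":
        return tc(e[2], t, arity)
    if kind == "product":
        k = arity_of(e[1], arity)
        return tc(e[1], t[:k], arity) | tc(e[2], t[k:], arity)
    return tc(e[1], t, arity) | tc(e[2], t, arity)  # union and difference

def kg(e, t, arity):                               # knowledge gathering, Definition 15
    kind = e[0]
    if kind == "rel":
        return {(e[1], t)}
    if kind == "select":
        return kg(e[2], t, arity)
    if kind == "product":
        k = arity_of(e[1], arity)
        return kg(e[1], t[:k], arity) | kg(e[2], t[k:], arity)
    if kind == "union":
        return kg(e[1], t, arity) & kg(e[2], t, arity)   # only common facts survive
    return kg(e[1], t, arity)                            # difference: left operand only

E = ("select", "1=a", ("product", ("rel", "R"),
                       ("union", ("rel", "R"), ("rel", "P"))))
t = ("a", "b", "c", "d")
print(tc(E, t, {"R": 2, "P": 2}))   # facts R(a,b), R(c,d), P(c,d)
print(kg(E, t, {"R": 2, "P": 2}))   # only R(a,b) can be inferred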
4.2
Extended knowledge gathering
In general, when the expression translates to a disjunctive
query we need to extend the query so that the resulting tuple
carries some additional information allowing us to derive all
relevant facts. The extended approach described in detail
below is illustrated first by the following example.
Example 8. For the previously considered expression E = σ_{1=a}(R × (R ∪ P)) the extended approach constructs the expression σ_{1=a}(R × (R ∪ P)) ⟕_{3,4} R ⟕_{3,4} P, where ⟕ is the left outer join operator^1. Suppose now that I(R) = {(a, b), (e, f)} and I(P) = {(c, d), (e, f)}. Then the evaluation of the extended envelope expression yields the following:

    σ_{1=a}(R × (R ∪ P)) | ⟕_{3,4} R | ⟕_{3,4} P
    a  b  a  b           | a  b      | ⊥  ⊥
    a  b  c  d           | ⊥  ⊥      | c  d
    a  b  e  f           | e  f      | e  f

Now, consider the tuple (a, b, c, d, ⊥, ⊥, c, d). We can decompose it into two parts, (a, b, c, d) and (⊥, ⊥, c, d). The first part is simply the tuple from the envelope F(E), and it can be used to infer the fact R(a, b). The second part allows us to make two other important inferences. Namely, (c, d) ∉ I(R) and (c, d) ∈ I(P).
Our goal is to minimally extend the expression so that we
can derive all relevant facts. In order to find what information
is not guaranteed to be gathered from evaluation of the
envelope expression, we generalize the definitions of KG and
TC to non-ground tuples consisting of distinct variables.
^1 For clarity we simplify the notation of the outer join condition. When writing S ⟕_{3,4} T we mean S ⟕_{S.3=T.1 ∧ S.4=T.2} T, and we assume the left outer join operator is left associative.
Definition 16 (Complementary set). For a given π-free expression E, the complementary set Θ(E) is defined as follows:
    Θ(E) = TC(E, x̄) \ KG(E, x̄),
where x̄ = (x_1, . . . , x_{|E|}).
Example 9. Taking again under consideration the expression E = σ_{1=a}(R × (R ∪ P)) and x̄ = (x_1, . . . , x_4), we have:
    TC(E, x̄) = {R(x_1, x_2), P(x_3, x_4), R(x_3, x_4)},
    KG(E, x̄) = {R(x_1, x_2)}.
R(x_1, x_2) ∈ TC(E, x̄) means that for any tuple (t_1, t_2, t_3, t_4) from the evaluation of the envelope expression for E, HProver may perform the tuple check R(t_1, t_2). We have also R(x_1, x_2) ∈ KG(E, x̄) and therefore we are able to answer this check using knowledge gathering. On the other hand, R(x_3, x_4) ∈ TC(E, x̄) means that HProver may perform a tuple check R(t_3, t_4). Since we don't have that R(x_3, x_4) ∈ KG(E, x̄), we cannot guarantee that we can answer tuple checks R(t_3, t_4) without executing a membership query on the database, even though we are able to answer tuple checks R(t_1, t_2). The complementary set for the discussed expression is:
    Θ(E) = {R(x_3, x_4), P(x_3, x_4)}.
Analogous examples can be used to show that the simple
knowledge gathering is not sufficient to avoid membership
checks when processing expressions with the difference operator
. Next, we extend the envelope expression so that its evaluation provides us with all information sufficient to
answer the tuple checks.
Definition 17 (Extended envelope expression). For a given π-free expression E the extended envelope expression is defined as follows:
    H(E) = F(E) ⟕_{⋀_{j=1}^{|R|} E.(i+j-1)=R.j} R,   taken over all R(x_i, . . . , x_{i+|R|-1}) ∈ Θ(E).
The notation means that we have as many outer joins as there are elements in Θ(E). They can appear in any order. We also define the following auxiliary expression:
    S(E) = E × ∏_{R(x_i, . . . , x_{i+|R|-1}) ∈ Θ(E)} R.
For both H(E) and S(E) the elements of Θ(E) need to be considered in the same order.
Using outer joins results in a natural one-to-one correspondence
between the tuples from the evaluation of the extended
envelope expression and the tuples from the original
envelope.
Fact 4. For a given database instance I and π-free expression E, the map t ↦ t[1, |E|] is a one-to-one map of QA_{H(E)}(I) onto QA_{F(E)}(I).
Extending knowledge gathering to null tuples, KG(R, (⊥, . . . , ⊥)) = ∅, allows us to state that using the extended
envelope expression we can determine correctly all relevant
facts without querying the database.
Theorem 5 (Soundness, completeness of ext. KG). For any database instance I and a π-free expression E the following holds:
    ∀t ∈ QA_{H(E)}(I). ∀R(t') ∈ TC(E, t[1, |E|]). R(t') ∈ KG(S(E), t) ⟺ I |= R(t').
We note that in the case of {σ, ×}-expressions this approach doesn't unnecessarily extend the expression.
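Given the tc and kg sketches above, the complementary set of Definition 16 can be obtained by evaluating both on a tuple of fresh variable names and subtracting; one outer join is then added to F(E) for each element of the result. The helper below relies on those earlier sketches and on our AST encoding, so it is only an illustration.

def complementary(e, arity):
    # Theta(E) = TC(E, x) \ KG(E, x) on a tuple of distinct variable names.
    n = arity_of(e, arity)
    xs = tuple("x%d" % (i + 1) for i in range(n))
    return tc(e, xs, arity) - kg(e, xs, arity)

E = ("select", "1=a", ("product", ("rel", "R"),
                       ("union", ("rel", "R"), ("rel", "P"))))
print(complementary(E, {"R": 2, "P": 2}))
# facts R(x3, x4) and P(x3, x4): one outer join each in H(E), as in Example 9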
4.3
Other possibilities of optimizations
4.3.1
Negative knowledge gathering
Knowledge gathering KG (as defined in Section 4.1) is
complete only for queries that translate to a conjunction of
positive literals. However, it is possible to come up with a
construction that will be complete for queries that translate
to a conjunction of positive as well as negative literals. The
following example presents this idea.
Example 10. Suppose we have tables R(1, 2) and P(1, 2) and a set of constraints C. For the query E = R \ P, we have F(E) = R \ ω^P_C. Take any tuple t ∈ QA_{F(E)}(I) for some instance I. We can easily conclude that t ∈ I(R). Also, we can say that t ∉ QA_{ω^P_C}(I). Having this and the hypergraph G_{C,I} = (V_I, E_{C,I}) we can easily find if t ∈ I(P). Namely, if there exists an edge e ∈ E_{C,I} such that P(t) ∈ e, then t ∈ I(P). And if the vertex P(t) is not involved in any conflict in E_{C,I}, then t ∉ I(P).
Reasoning of that sort cannot be applied to a query E = R × R \ P × P. Given a tuple t = (t_1, t_2) from the envelope we know that t_1, t_2 ∈ R, but the fact (t_1, t_2) ∉ QA_{ω^P_C × ω^P_C}(I) doesn't imply that t_1 ∉ QA_{ω^P_C}(I) or that t_2 ∉ QA_{ω^P_C}(I). And therefore we are not able to find if t_1 ∈ I(P) or t_2 ∈ I(P).
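The membership test used in this example reduces to a lookup in the hyperedge set, as in the following sketch (our own representation of facts and edges); it is only valid under the example's precondition that the tuple is already known not to belong to the core of P.

def in_table_by_conflicts(fact, edges):
    """Negative-KG test from Example 10 (sketched): for a tuple known not to
    be in the core of P, membership in P itself is decided by whether the
    corresponding vertex occurs in some hyperedge."""
    return any(fact in e for e in edges)

edges = [frozenset({("P", (1, 2)), ("P", (1, 3))})]
print(in_table_by_conflicts(("P", (1, 2)), edges))   # True: (1, 2) is in P
print(in_table_by_conflicts(("P", (7, 7)), edges))   # False: (7, 7) is not in P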
This mechanism hasn't been included in the tested implementation
yet. Implementing only positive knowledge gathering
allows us to better observe the benefits of extending
the envelope expression.
We notice here that the query rewriting approach to computing
consistent query answers described in [2] works also
only for queries that are conjunctions of literals. However,
as shown below, our approach leads to faster computation
of consistent answers than query rewriting.
4.3.2
Intersection
Another possible avenue of optimization comes from directly implementing derived operators of relational algebra. For example, for intersection the appropriate extensions of the operators F and G are very simple:
    F(E_1 ∩ E_2) = F(E_1) ∩ F(E_2),    G(E_1 ∩ E_2) = G(E_1) ∩ G(E_2).
Now R ∩ P is equivalent to R \ (R \ P), but F(R ∩ P) = R ∩ P is not equivalent to F(R \ (R \ P)) = R \ (ω^R_C \ P). Thus the envelope constructed by the operator F becomes sensitive to the way the original query is formulated.
EXPERIMENTAL RESULTS
Among available methods for computing consistent query
answers, only the query rewriting technique [2] seems to be
feasible for large databases. This is why in this work we
compare the following engines:
SQL An engine that executes the given query on the underlying
RDBMS, and returns the query result. This
method doesn't return consistent query answers, but
provides a baseline to observe the overhead of computing
consistent query answers using the proposed
methods.
QR Using the SQL engine, we execute the rewritten query
constructed as described in [2]. More details on this
approach can be found in Section 6.
KG This method constructs the basic envelope expression
and uses knowledge gathering, as described in Section
4.1.
ExtKG This engine constructs the extended envelope expression
(Section 4.2) and uses extended knowledge
gathering.
5.1.1
Generating test data
Every test was performed with the database containing
two tables P and Q, both having three attributes X, Y, Z.
For the constraints, we took a functional dependency X → Z in each table. The test databases had the following parameters
:
n : the number of base tuples in each table,
m : the number of additional conflicting tuples,
and had both tables constructed in the following way:
1. Insert n different base tuples with X and Z being equal
and taking subsequent values 0, . . . , n - 1, and Y being
randomly drawn from the set
{0, 1}.
2. Insert m different conflicting tuples with X taking subsequent
values
{0, ⌊n/m⌋, 2⌊n/m⌋, . . . , (m - 1)⌊n/m⌋}, Z = X + 1, and Y being randomly drawn
from the set
{0, 1}.
In addition, we define auxiliary tables (P_core and Q_core) containing only non-conflicting tuples from the base tables (resp. P and Q). Those tables were used as materialized views of the core expressions (ω^P_C and ω^Q_C).
Example 11. We show how a table P with n = 4 and m = 2 can be generated:
1. First we insert the base tuples (0, 1, 0), (1, 0, 1), (2, 0, 2), (3, 1, 3) into P.
2. Then we insert the following conflicting tuples (0, 1, 1), (2, 0, 3) into P.
3. P_core will hold the following tuples (1, 0, 1), (3, 1, 3).
In every table constructed in such a way the number of tuples
is n + m, and the number of conflicts is m.
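A minimal Python sketch of this generation procedure is given below; the Y values are drawn at random as described, so only the shape of Example 11 is reproduced, and the function name and return convention are ours.

import random

def generate_table(n, m):
    """Sketch of the test-data generator: n base tuples satisfying X -> Z,
    plus m tuples that each conflict with one base tuple."""
    rows = [(x, random.choice((0, 1)), x) for x in range(n)]               # base tuples
    step = n // m
    rows += [(i * step, random.choice((0, 1)), i * step + 1) for i in range(m)]
    core = [r for r in rows if sum(1 for s in rows if s[0] == r[0]) == 1]  # P_core
    return rows, core

table, core = generate_table(4, 2)   # reproduces the shape of Example 11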
5.1.2
The environment
The implementation is done in Java2, using PostgreSQL
(version 7.3.3) as the relational backend. All tests have been
performed on a PC with a 1.4GHz AMD Athlon processor
under SuSE Linux 8.2 (kernel ver. 2.4.20) using Sun JVM
1.4.1.
5.2
Test results
Testing a query with a given engine consisted of computing
the consistent^2 answers to the query and then iterating
over the results. Iteration over the result is necessary, as
the subsequent elements of the consistent query answer set
are computed by Hippo in a lazy manner (this allows us
to process results bigger than the available main memory).
Every test has been repeated three times and the median
taken. Finally, we note that the cost of computing the conflict
hypergraph, which is incurred only once per session, is
ignored while estimating the time of the query evaluation.
We take a closer look at the time required for hypergraph
construction in Section 5.2.3.
5.2.1
Simple queries
We first compared performance of different engines on
simple queries: join, union, and difference.
Because we
performed the tests for large databases, we added a range
selection to the given query to obtain small query results,
factoring out the time necessary to write the outputs. As
parameters in the experiments, we considered the database
size, the conflict percentage, and the estimated result size.
Figure 3 shows the execution time for join as a function
of the size of the database. In the case of
{σ, ×}-expressions
(thus also joins), the execution times of KG, ExtKG and
SQL are essentially identical. Since no membership queries
have to be performed, it means that for simple queries the
work done by HProver for all tuples is practically negligible.
[Figure 3 plots time (sec.) against database size for the query σ_{X<200}(P ⋈_X Q), with 2% conflicts, for the SQL, QR, KG, and ExtKG engines.]
Figure 3: Execution time for join.
Figure 4 contains the results for union.
It shows that
basic knowledge gathering KG is not sufficient to efficiently
handle union. The cost of performing membership queries
for all tuples is very large. Note that query rewriting is not
applicable to union queries.
Figure 5 contains the results for set difference (the execution
time for KG was relatively much larger than values
of other solutions and in order to increase readability it has
not been included on this figure). Here the execution time is
a function of the percentage of conflicts. We note that ExtKG
performs as well as QR and both are approximately twice as slow as SQL.
^2 Except when using the SQL engine.
[Figure 4 plots time (sec.) against database size for the query σ_{X<200}(P ∪ Q), with 2% conflicts, for the SQL, KG, and ExtKG engines.]
Figure 4: Execution time for union.
[Figure 5 plots time (sec.) against the percentage of conflicts for the query σ_{X<200}(P \ Q) on a 100k database, for the SQL, QR, and ExtKG engines.]
Figure 5: Execution time for difference.
5.2.2
Complex queries
In order to estimate the cost of extending the envelope we
considered a complex union query
    σ_{X<d}(P ⋈_X Q ⋈_X P ⋈_X Q ∪ Q ⋈_X P ⋈_X Q ⋈_X P),
with d being a parameter that will allow us to control the
number of tuples processed by each engine. To assure no
membership queries will be performed, we have to add 8
outer joins. The main goal was to compare two versions
of knowledge gathering: KG and ExtKG. We have also
included the results for SQL. (It should be noted here that
this query has common subexpressions and an RDBMS might
use this to optimize the query evaluation plan. PostgreSQL,
however, does not perform this optimization.)
As we can see in Figure 6, KG outperforms ExtKG only
in the case when the number of processed tuples is small.
As the result size increases, the execution time of ExtKG
grows significantly slower than that of KG. We notice also
that ExtKG needs 2-3 times more time than SQL but the
execution times of both grow in a similar fashion.
5.2.3
Hypergraph computation
The time of constructing the hypergraph is presented in Figure 7. It depends on the total number of conflicts and
the size of the database.
It should be noticed here that the time of hypergraph construction
consists mainly of the execution time of conflict
detection queries. Therefore the time of hypergraph computation
depends also on the number of integrity constraints
and their arity.
[Figure 6 plots time (sec.) against the result size estimation d (in tuples) for the complex union query above, on a 100k database with 2% conflicts, for the SQL, KG, and ExtKG engines.]
Figure 6: Impact of the result size.
[Figure 7 plots hypergraph computation time (sec.) against database size for conflict levels of 0%, 2%, and 4%.]
Figure 7: Hypergraph computation time.
RELATED WORK
The discussion of related work here is very brief and focuses
mainly on the most recent research. For a comprehensive
discussion, please see [7].
Bry [10] was the first to note that the standard notion of
query answer needs to be modified in the context of inconsistent
databases and to propose the notion of a consistent
query answer. Bry's definition of consistent query answer is
based on provability in minimal logic and expresses the intuition
that the part of the database instance involved in an
integrity violation should not be involved in the derivation
of consistent query answers. This is not quite satisfactory, as
one would like to have a semantic, model-theoretic notion of
consistent query answer that parallels that of the standard
notion of query answer in relational databases. Moreover,
the data involved in an integrity violation is not entirely
useless and reliable indefinite information can often be extracted
from it, as seen in Example 1.
Query rewriting [2, 12] rewrites the original query Q to
another query Q with the property that the set of all the
answers to Q in the original database is equal to the set
of consistent answers to Q in that database. When applicable
, this approach provides an easy way to compute
consistent query answers, as the rewritten query Q can
typically be evaluated using the same query engine as the
query Q. Because the query Q is rewritten independently
of the database, the existence of a rewriting shows that requesting
consistent query answers instead of the regular ones
does not increase data complexity. However, query rewriting
has been found to apply only to restricted classes of
queries: the
{σ, ×, \}-subset [2] or the {σ, ×}-subset [13] of
the relational algebra. No method is presently known to
rewrite queries with projection considered together with the
binary operators, or union. Also, the class of constraints
is limited to binary universal constraints [2] or single functional
dependencies [13]. The line of research from [2] is continued
in [17] where a class of tractable conjunctive queries,
based on generalized perfect matching, is identified. It is
proved that the consistent answers to queries in this class
cannot be obtained by query rewriting. We note here that
the nonexistence of query rewriting for conjunctive queries
follows also from the fact that computing consistent query
answers for such queries is a co-NP-complete problem [4,
13]. This is because the rewritten query is first-order and
thus can be evaluated in AC^0, while known NP-complete problems like SAT are not in AC^0.
Several different approaches have been developed to specify
all repairs of a database as a logic program with disjunction
and classical negation [3, 6, 15, 18, 19, 22]. Such a
program can then be evaluated using an existing system like
dlv [14]. These approaches have the advantage of generality,
as typically arbitrary first-order queries and universal constraints
(or even some referential integrity constraints [3])
can be handled. However, the generality comes at a price:
The classes of logic programs used are Π^p_2-complete. Therefore, the approaches based on logic programming are unlikely
to work for large databases. The paper [15] proposes
several optimizations that are applicable to logic programming
approaches. One is localization of conflict resolution,
another is the encoding of tuple membership in individual repairs
using bit-vectors, which makes possible efficient computation
of consistent query answers using bitwise operators.
However, it is known that even in the presence of one functional
dependency there may be exponentially many repairs
[4]. With only 80 tuples involved in conflicts, the number
of repairs may exceed 10^12! It is clearly impractical to efficiently
manipulate bit-vectors of that size.
[11] describes several possible definitions of repair, including
Definition 5, and analyzes the complexity of computing
consistent query answers under those definitions. Key and
inclusion dependencies are considered. The computational
approaches proposed are based on combinations of repair
enumeration and chase computation [1]. New tractability
results are obtained for classes of databases that satisfy key
constraints but may violate inclusion dependencies.
Presently, our approach requires that the integrated
database be materialized at a single site. It remains to be
seen if it can be generalized to a scenario where data is pulled
from different sites during the evaluation of queries rewritten
using, for example, the LAV approach [20]. This problem
has been considered in the context of a logic-program-based
approach to the computation of consistent query answers [8,
9] but, as explained earlier, such an approach does not scale
up to large databases.
A new scenario for data integration, data exchange, has
been recently proposed [16].
In this scenario, a target
database is materialized on the basis of a source database using
source-to-target dependencies. In the presence of target
integrity constraints, a suitable consistent target database
may not exist. This is a natural context for the application
of the concepts of repair and consistent query answer. However
, [16] does not consider the issue of the inconsistency of
target databases. [11] addresses the problem of consistent
query answering in a restricted data exchange setting.
CONCLUSIONS AND FUTURE WORK
In this paper, we have presented a practical, scalable
framework for computing consistent query answers for large
databases. We have also described a number of novel optimization
techniques applicable in this context and summarized
experimental results that validate our approach.
The approach, however, has a number of limitations. Only
projection-free relational algebra queries and denial integrity
constraints are currently supported. Adding projection to
the query language is a difficult issue because the complexity
of computing consistent query answers becomes co-NP-complete in that case [4, 13]. So, unless P=NP, we cannot hope
for computing consistent query answers efficiently for arbitrary
conjunctive queries and arbitrary database instances.
However, the evaluation of queries with projection can make
use of the conflict hypergraph representation of all repairs,
and of the operators F and G introduced in Section 3. Moreover
, we expect to be able to compute consistent answers to
queries with projection in polynomial time if conflict hypergraphs
are suitably restricted. We hope that such restrictions
can be translated into corresponding restrictions on
database instances and integrity constraints.
In [4], we have studied scalar aggregation queries in the
presence of functional dependencies, also making use of conflict
graphs. It remains to be seen whether the techniques
developed in [4] can be combined with those of the present
paper.
Going beyond denial constraints appears challenging, too.
Essentially, integrity violations of denial constraints are due
to the presence of some facts in the database, and thus can
be compactly represented using the conflict hypergraph. If
arbitrary universal constraints, for example tuple-generating
dependencies [1, 21], are allowed, constraint violations may
be due to the simultaneous presence and absence of certain
tuples in the database. It is not clear how to construct in
this case a compact representation of all repairs that can
be used for the computation of consistent query answers.
Also, repairs are no longer guaranteed to be subsets of the
original database but can contain additional tuples. If referential
integrity is to be captured, constraints have to contain
existentially quantified variables, which leads to the undecidability of consistent query answering [11]. Only in very restricted cases has this problem been shown to be tractable
[11, 13].
Another avenue of further research involves using preferences
to reduce the number of repairs and consequently make
the computation of consistent query answers more efficient.
For example, in data integration, we may have a preference
for certain sources or for more recent information.
The issue of benchmarking systems that compute consistent
query answers requires more work.
It would be
desirable to design mechanisms that generate inconsistent
databases in a systematic way and to perform more extensive
experimental comparisons between implemented systems.
REFERENCES
[1] S. Abiteboul, R. Hull, and V. Vianu. Foundations of Databases. Addison-Wesley, 1995.
[2] M. Arenas, L. Bertossi, and J. Chomicki. Consistent Query Answers in Inconsistent Databases. In ACM Symposium on Principles of Database Systems (PODS), pages 68-79, 1999.
[3] M. Arenas, L. Bertossi, and J. Chomicki. Answer Sets for Consistent Query Answering in Inconsistent Databases. Theory and Practice of Logic Programming, 3(4-5):393-424, 2003.
[4] M. Arenas, L. Bertossi, J. Chomicki, X. He, V. Raghavan, and J. Spinrad. Scalar Aggregation in Inconsistent Databases. Theoretical Computer Science, 296(3):405-434, 2003.
[5] M. Arenas, L. Bertossi, and M. Kifer. Applications of Annotated Predicate Calculus to Querying Inconsistent Databases. In International Conference on Computational Logic, pages 926-941. Springer-Verlag, LNCS 1861, 2000.
[6] P. Barcelo and L. Bertossi. Logic Programs for Querying Inconsistent Databases. In International Symposium on Practical Aspects of Declarative Languages (PADL), pages 208-222. Springer-Verlag, LNCS 2562, 2003.
[7] L. Bertossi and J. Chomicki. Query Answering in Inconsistent Databases. In J. Chomicki, R. van der Meyden, and G. Saake, editors, Logics for Emerging Applications of Databases, pages 43-83. Springer-Verlag, 2003.
[8] L. Bertossi, J. Chomicki, A. Cortes, and C. Gutierrez. Consistent Answers from Integrated Data Sources. In International Conference on Flexible Query Answering Systems (FQAS), pages 71-85, Copenhagen, Denmark, October 2002. Springer-Verlag.
[9] L. Bravo and L. Bertossi. Logic Programs for Consistently Querying Data Integration Systems. In International Joint Conference on Artificial Intelligence (IJCAI), pages 10-15, 2003.
[10] F. Bry. Query Answering in Information Systems with Integrity Constraints. In IFIP WG 11.5 Working Conference on Integrity and Control in Information Systems, pages 113-130. Chapman & Hall, 1997.
[11] A. Cali, D. Lembo, and R. Rosati. On the Decidability and Complexity of Query Answering over Inconsistent and Incomplete Databases. In ACM Symposium on Principles of Database Systems (PODS), pages 260-271, 2003.
[12] A. Celle and L. Bertossi. Querying Inconsistent Databases: Algorithms and Implementation. In International Conference on Computational Logic, pages 942-956. Springer-Verlag, LNCS 1861, 2000.
[13] J. Chomicki and J. Marcinkowski. Minimal-Change Integrity Maintenance Using Tuple Deletions. Information and Computation, 2004. To appear. Earlier version: Technical Report cs.DB/0212004, arXiv.org e-Print archive.
[14] T. Eiter, W. Faber, N. Leone, and G. Pfeifer. Declarative Problem-Solving in DLV. In J. Minker, editor, Logic-Based Artificial Intelligence, pages 79-103. Kluwer, 2000.
[15] T. Eiter, M. Fink, G. Greco, and D. Lembo. Efficient Evaluation of Logic Programs for Querying Data Integration Systems. In International Conference on Logic Programming (ICLP), pages 163-177, 2003.
[16] R. Fagin, P. G. Kolaitis, R. J. Miller, and L. Popa. Data Exchange: Semantics and Query Answering. In International Conference on Database Theory (ICDT), pages 207-224. Springer-Verlag, LNCS 2572, 2003.
[17] A. Fuxman and R. Miller. Towards Inconsistency Management in Data Integration Systems. In IJCAI-03 Workshop on Information Integration on the Web (IIWeb-03), 2003.
[18] G. Greco, S. Greco, and E. Zumpano. A Logic Programming Approach to the Integration, Repairing and Querying of Inconsistent Databases. In International Conference on Logic Programming (ICLP), pages 348-364. Springer-Verlag, LNCS 2237, 2001.
[19] G. Greco, S. Greco, and E. Zumpano. A Logical Framework for Querying and Repairing Inconsistent Databases. IEEE Transactions on Knowledge and Data Engineering, 15(6):1389-1408, 2003.
[20] A. Y. Halevy. Answering Queries Using Views: A Survey. VLDB Journal, 10(4):270-294, 2001.
[21] P. C. Kanellakis. Elements of Relational Database Theory. In Jan van Leeuwen, editor, Handbook of Theoretical Computer Science, volume B, chapter 17, pages 1073-1158. Elsevier/MIT Press, 1990.
[22] D. Van Nieuwenborgh and D. Vermeir. Preferred Answer Sets for Ordered Logic Programs. In European Conference on Logics for Artificial Intelligence (JELIA), pages 432-443. Springer-Verlag, LNAI 2424, 2002.
[23] J. Wijsen. Condensed Representation of Database Repairs for Consistent Query Answering. In International Conference on Database Theory (ICDT), pages 378-393. Springer-Verlag, LNCS 2572, 2003.
| Knowledge gathering;Conflict hypergraph;inconsistency;Denial constraints;Inconsistent database;Optimization;Relational algebra;Polynomial time;Consistent Query answer;Disjunctive query;integrity constraints;query processing;Repair |
55 | Consistent Query Answering under Key and Exclusion Dependencies: Algorithms and Experiments | Research in consistent query answering studies the definition and computation of "meaningful" answers to queries posed to inconsistent databases, i.e., databases whose data do not satisfy the integrity constraints (ICs) declared on their schema. Computing consistent answers to conjunctive queries is generally coNP-hard in data complexity, even in the presence of very restricted forms of ICs (single, unary keys). Recent studies on consistent query answering for database schemas containing only key dependencies have an-alyzed the possibility of identifying classes of queries whose consistent answers can be obtained by a first-order rewriting of the query, which in turn can be easily formulated in SQL and directly evaluated through any relational DBMS. In this paper we study consistent query answering in the presence of key dependencies and exclusion dependencies. We first prove that even in the presence of only exclusion dependencies the problem is coNP-hard in data complexity , and define a general method for consistent answering of conjunctive queries under key and exclusion dependencies, based on the rewriting of the query in Datalog with negation . Then, we identify a subclass of conjunctive queries that can be first-order rewritten in the presence of key and exclusion dependencies, and define an algorithm for computing the first-order rewriting of a query belonging to such a class of queries. Finally, we compare the relative efficiency of the two methods for processing queries in the subclass above mentioned. Experimental results, conducted on a real and large database of the computer science engineering degrees of the University of Rome "La Sapienza", clearly show the computational advantage of the first-order based technique. | INTRODUCTION
Suppose we have a database whose data violate the integrity
constraints (ICs) declared on its schema. What are
the answers that have to be returned to queries posed to
such a database?
The standard approach to this problem
is through data cleaning, i.e., by explicitly modifying
the data in order to eliminate violation of ICs: only when
data are "repaired", i.e., are consistent with the ICs, queries
can be answered. However, in many situations it would be
much more desirable to derive significant information from
the database even in the presence of data inconsistent with
the ICs. Indeed, in many application scenarios, the explicit
repair of data is not convenient, or even not possible.
This happens, for instance, in data integration applications,
which provide a unified, virtual view of a set of autonomous
information sources [5].
This alternative approach is the one followed by research
in consistent query answering, which studies the definition
(and computation) of "meaningful" answers to queries posed
to databases whose data do not satisfy the ICs declared on
the database schema [1, 14, 4]. All these approaches are
based on the following principle: schema is stronger than
data. In other words, the database schema (i.e., the set of integrity
constraints) is considered as the actually reliable information
(strong knowledge), while data are considered as
information to be revised (weak knowledge). Therefore, the
problem amounts to deciding how to "repair" (i.e., change)
data in order to reconcile them with the information expressed
in the schema. Therefore, the intuitive semantics of
consistent query answering can be expressed as follows: a
tuple t is a consistent answer to a query q in an inconsistent
database D if t is an answer to q in all the repairs of D,
i.e., in all the possible databases obtained by (minimally)
modifying the data in D to eliminate violations of ICs.
Example 1. Let D = {r(a, b)} be a database whose
schema contains the declaration of a key dependency on the
first attribute of r. Since the database instance does not violate
the key dependency on r, the only repair of the database
is D itself. Hence, the following query q(X, Y) :- r(X, Y) has the consistent answer t = ⟨a, b⟩. Now, let D′ be the database instance obtained by adding the fact r(a, c) to D. D′ is inconsistent with the key dependency, and has two possible repairs: {r(a, b)} and {r(a, c)}. Since there is no tuple which is an answer to q in both repairs, it follows that there are no consistent answers to the query q in D′. In contrast, observe that the query q′(X) :- r(X, Y) has the answer a both in D and in D′, which can therefore be considered consistent.
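Under a single key dependency, the consistent answers of Example 1 can also be obtained directly in SQL. The following is a sketch under the assumption that r is stored as a table r(A, B) with key {A}; it is meant only as an illustration, not as the rewriting technique developed later in the paper.

-- Consistent answers to q(X,Y) :- r(X,Y): a tuple survives in every
-- repair iff no other tuple agrees with it on the key A but differs on B.
SELECT r1.A, r1.B
FROM r r1
WHERE NOT EXISTS (SELECT *
                  FROM r r2
                  WHERE r2.A = r1.A AND r2.B <> r1.B);

-- Consistent answers to q'(X) :- r(X,Y): every key value occurring in r,
-- since each repair keeps exactly one tuple per key value.
SELECT DISTINCT r1.A
FROM r r1;

On D′ = {r(a, b), r(a, c)} the first query returns no rows and the second returns a, matching the example.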
Recent studies in this area have established declarative semantic
characterizations of consistent query answering over
relational databases, decidability and complexity results for
consistent query answering, as well as techniques for query
processing [1, 6, 14, 4, 3, 5]. In particular, it has been shown
that computing consistent answers of conjunctive queries
(CQs) is coNP-hard in data complexity, i.e., in the size of
the database instance, even in the presence of very restricted
forms of ICs (single, unary keys).
From the algorithmic viewpoint, the approach mainly
followed is query answering via query rewriting: (i) First,
the query that must be processed (usually a conjunctive
query) is reformulated in terms of another, more complex
query. Such a reformulation is purely intensional, i.e., the
rewritten query is independent of the database instance; (ii)
Then, the reformulated query is evaluated over the database
instance. Due to the semantic nature and the inherent complexity
of consistent query answering, Answer Set Programming
(ASP) is usually adopted in the above reformulation
step [14, 3, 5], and stable model engines like DLV [15] can
be used for query processing.
An orthogonal approach to consistent query answering is
the one followed by recent theoretical works [1, 6, 13], whose
aim is to identify subclasses of CQs whose consistent answers
can be obtained by rewriting the query in terms of a first-order
(FOL) query. The advantage of such an approach is
twofold: first, this technique allows for computing consistent
answers in time polynomial in data complexity (i.e., for such
subclasses of queries, consistent query answering is computationally
simpler than for the whole class of CQs); second,
consistent query answering in these cases can be performed
through standard database technology, since the FOL query
synthesized can be easily translated into SQL and then evaluated
by any relational DBMS. On the other hand, this approach
is only limited to polynomial subclasses of the problem
. In particular, Fuxman and Miller in [13] have studied
databases with key dependencies, and have identified a
broad subclass of CQs that can be treated according to the
above strategy.
In this paper we study consistent query answering in the
presence of key dependencies and exclusion dependencies, a
well-known class of ICs. Notice that exclusion dependencies
are not only typical of relational database schemas, but are
also relevant and very common in languages for conceptual
modeling, e.g., ontology languages [2]: indeed such dependencies
allow for modeling partitioning/disjointness of entities
. This makes the study of exclusion dependencies particularly
important for the broad applicability of consistent
query answering.
Our contribution can be summarized as follows:
1. We prove that consistent answering of conjunctive
queries for databases with only exclusion dependencies
is coNP-hard in data complexity, i.e., the problem
presents the same complexity lower bound already
known for databases with only key dependencies [6, 4].
2. We define a method for consistent query answering
under key dependencies and exclusion dependencies
based on the rewriting of the query in Datalog¬ [10], a
well-known extension of Datalog that allows for using
negation in the body of program rules. The rewriting
extends the one defined in [5] to the presence of
exclusion dependencies. The rewriting is used by INFOMIX,¹ a system for the integration of inconsistent
data, based on the use of the DLV system.
3. We extend the work of [13] to the presence of exclusion
dependencies in the database schema. In particular
, we identify the class of KE-simple queries (a
subclass of CQs) that can be first-order rewritten in
the presence of both key dependencies and exclusion
dependencies, and define an algorithm for computing
the first-order rewriting of a query belonging to such
a class of queries. We point out that our algorithm,
though inspired by the one of [13], in the presence of
only key dependencies applies to a broader class of
queries than the class considered first-order rewritable
in [13]. Therefore, the technique of the present paper
is relevant also for consistent query answering under
only key dependencies.
4. We compare the relative efficiency of these two methods
for processing KE-simple queries. To this aim,
we have realized a software module that implements
the above two rewriting methods.
Then, we have
compared query answering based on the rewriting in
Datalog¬ and evaluation in the DLV system [15] with
the method based on first-order rewriting and query
evaluation on MySQL DBMS. We have conducted our
experiments on a real and large database of the computer
science engineering degrees of the University of
Rome "La Sapienza".
Our experimental results clearly show, for KE-simple
queries, the computational advantage of the specialized first-order
based technique over the more general one based on
Datalog¬. In particular, the results indicate that the advantage
of the first-order based technique grows with the number
of database tuples that violate the ICs. Such results thus
provide, in a general sense, an experimental validation of the
first-order based approach: its computational advantage is
not only theoretical, but also can be effectively measured
when applied to practical, realistic scenarios. However, it
turns out that the general method based on Datalog¬, although
not specifically tailored for KE-simple queries, proves
particularly efficient in the presence of few data inconsistencies
.
In the next section, we briefly introduce the formal framework
of consistent query answering. In Section 3, we prove
coNP-hardness of consistent query answering under only exclusion
dependencies, and present our Datalog¬ rewriting
and our algorithm for first-order rewriting in the presence
of key and exclusion dependencies. In Section 4, we present
our experimental results, and in Section 5 we address related
work and conclude the paper.
¹ http://sv.mat.unical.it/infomix.
INCONSISTENT DATABASES AND CONSISTENT ANSWERS
Syntax. A database schema S is a triple ⟨A, K, E⟩, where:
A is a relational signature.
K is a set of key dependencies over A. A key dependency (KD) over A is an expression of the form key(r) = {i_1, . . . , i_k}, where r is a relation of A and, if n is the arity of r, 1 ≤ i_j ≤ n for each j such that 1 ≤ j ≤ k. We assume that at most one KD is specified over a relation r.
E is a set of exclusion dependencies over A. An exclusion dependency (ED) over A is an expression of the form r_1[i_1, . . . , i_k] ∩ r_2[j_1, . . . , j_k] = ∅, where r_1, r_2 are relations of A and, if n_1 and n_2 are the arities of r_1 and r_2 respectively, for each ℓ such that 1 ≤ ℓ ≤ k, 1 ≤ i_ℓ ≤ n_1 and 1 ≤ j_ℓ ≤ n_2.
A term is either a variable or a constant symbol. An atom is an expression of the form p(t_1, . . . , t_n) where p is a relation symbol of arity n and t_1, . . . , t_n is a sequence of n terms (either variables or constants). An atom is called a fact if all the terms occurring in it are constants. A database instance D for S is a set of facts over A. We denote by r^D the set {t | r(t) ∈ D}.
A conjunctive query of arity n is an expression of the form h(x_1, . . . , x_n) :- a_1, . . . , a_m, where the atom h(x_1, . . . , x_n) is called the head of the query (denoted by head(q)), and a_1, . . . , a_m, called the body of the query (and denoted by body(q)), is a set of atoms, such that all the variables occurring in the query head also occur in the query body. In a conjunctive query q, we say that a variable is a head variable if it occurs in the query head, while we say that a variable is existential if it only occurs in the query body. Moreover, we call an existential variable shared if it occurs at least twice in the query body (otherwise we say that it is non-shared). A FOL query of arity n is an expression of the form {x_1, . . . , x_n | φ(x_1, . . . , x_n)}, where x_1, . . . , x_n are variable symbols and φ is a first-order formula with free variables x_1, . . . , x_n.
Semantics. First, we briefly recall the standard evaluation of queries over a database instance. Let q be the CQ h(x_1, . . . , x_n) :- a_1, . . . , a_m and let t = ⟨c_1, . . . , c_n⟩ be a tuple of constants. A set of facts I is an image of t w.r.t. q if there exists a substitution σ of the variables occurring in q such that σ(head(q)) = h(t) and σ(body(q)) = I. Given a database instance D, we denote by q^D the evaluation of q over D, i.e., q^D is the set of tuples t such that there exists an image I of t w.r.t. q such that I ⊆ D.
Given a FOL query q and a database instance D, we denote by q^D the evaluation of q over D, i.e., q^D = {⟨t_1, . . . , t_n⟩ | D |= φ(t_1, . . . , t_n)}, where each t_i is a constant symbol and φ(t_1, . . . , t_n) is the first-order sentence obtained from φ by replacing each free variable x_i with the constant t_i.
Then, we define the semantics of queries over inconsistent databases. A database instance D violates the KD key(r) = {i_1, . . . , i_k} iff there exist two distinct facts r(c_1, . . . , c_n), r(d_1, . . . , d_n) in D such that c_{i_j} = d_{i_j} for each j such that 1 ≤ j ≤ k. Moreover, D violates the ED r_1[i_1, . . . , i_k] ∩ r_2[j_1, . . . , j_k] = ∅ iff there exist two facts r_1(c_1, . . . , c_n), r_2(d_1, . . . , d_m) in D such that c_{i_ℓ} = d_{j_ℓ} for each ℓ such that 1 ≤ ℓ ≤ k.
Let S = ⟨A, K, E⟩ be a database schema. A database instance D is legal for S if D does not violate any KD in K and does not violate any ED in E.
A set of ground atoms D′ is a repair of D under S iff: (i) D′ ⊆ D; (ii) D′ is legal for S; (iii) for each D′′ such that D′ ⊂ D′′ ⊆ D, D′′ is not legal for S. In words, a repair for D under S is a maximal subset of D that is legal for S.
Let q be a CQ. A tuple t is a consistent answer to q in D under S iff, for each repair D′ of D under S, t ∈ q^{D′}.
Example 2. Consider the database schema S = ⟨A, K, E⟩, where A comprises the relations Journal(title, editor), ConfPr(title, editor) and Editor(name, country), K comprises the dependencies key(Journal) = {1}, key(ConfPr) = {1}, key(Editor) = {1}, and E comprises the dependency Journal[1] ∩ ConfPr[1] = ∅. Consider the database instance D described below:
{Journal(TODS, ACM), Journal(TODS, IEEE), Editor(ACM, USA), ConfPr(PODS05, ACM), ConfPr(PODS05, SV), Editor(IEEE, USA)}.
It is easy to see that D is not consistent with the KDs on Journal and ConfPr of S. Then, the repairs of D under S are:
{Journal(TODS, ACM), ConfPr(PODS05, ACM), Editor(ACM, USA), Editor(IEEE, USA)}
{Journal(TODS, ACM), ConfPr(PODS05, SV), Editor(ACM, USA), Editor(IEEE, USA)}
{Journal(TODS, IEEE), ConfPr(PODS05, ACM), Editor(ACM, USA), Editor(IEEE, USA)}
{Journal(TODS, IEEE), ConfPr(PODS05, SV), Editor(ACM, USA), Editor(IEEE, USA)}.
Let q(x, z) :- Journal(x, y), Editor(y, z) be a user query. The consistent answers to q in D under S are {⟨TODS, USA⟩}.
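For concreteness, the violations in D can be located with simple SQL checks over the relations of the example (a sketch only; attribute names are those declared in the schema above):

-- Pairs of Journal facts violating key(Journal) = {1}:
SELECT J1.title, J1.editor, J2.editor
FROM Journal J1
JOIN Journal J2 ON J1.title = J2.title AND J1.editor < J2.editor;

-- Facts violating the exclusion dependency Journal[1] ∩ ConfPr[1] = ∅
-- (empty on this particular instance):
SELECT J.title
FROM Journal J
JOIN ConfPr C ON J.title = C.title;

An analogous self-join on ConfPr exposes the second key violation, caused by the two PODS05 facts.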
QUERY ANSWERING
Computational Complexity. The problem of computing
consistent answers to conjunctive queries over inconsistent
databases in the presence of KDs (under the repair
semantics introduced in Section 2) is coNP-hard in data
complexity [4, 6]. In the following, we prove that such a
problem is coNP-hard in data complexity also for schemas
in which only EDs occur.²
² We consider the decision problem associated to query answering (see, e.g., [6]).
Theorem 3. Let S = ⟨A, ∅, E⟩ be a database schema containing
only EDs, D a database instance for S, q a CQ of
arity n over S, and t an n-tuple of constants. The problem
of establishing whether t is a consistent answer to q in D
under S is coNP-hard with respect to data complexity.
Proof (sketch). We prove coNP-hardness by reducing the 3-colorability problem to the complement of our problem. Consider a graph G = ⟨V, E⟩ with a set of vertices V and edges E. We define a relational schema S = ⟨A, ∅, E⟩ where A consists of the relation edge of arity 2 and the relation col of arity 5, and E contains the dependencies col[3] ∩ col[4] = ∅, col[3] ∩ col[5] = ∅, col[4] ∩ col[5] = ∅. The instance D is defined as follows:
D = {col(n, 1, n, ∗, ∗), col(n, 2, ∗, n, ∗), col(n, 3, ∗, ∗, n) | n ∈ V} ∪ {edge(x, y) | ⟨x, y⟩ ∈ E},
where each occurrence of the meta-symbol ∗ denotes a different constant not occurring elsewhere in the database. Intuitively, to represent the fact that vertex n ∈ V is assigned with color i ∈ {1, 2, 3}, D assigns to col a tuple in which i occurs as second component and n occurs as first and also as (2 + i)-th component. The EDs of S impose that consistent instances assign no more than one color to each node. Finally, we define the query
q :- edge(x, y), col(x, z, w_1, w_2, w_3), col(y, z, w_4, w_5, w_6).
On the basis of the above construction it is possible to show that G is 3-colorable (i.e., for each pair of adjacent vertices, the vertices are associated with different colors) if and only if the empty tuple ⟨⟩ is not a consistent answer to q in D under S (i.e., the boolean query q has a negative answer).
Datalog¬ Rewriting. We now provide a sound and complete query rewriting technique for consistent query answering in the presence of key and exclusion dependencies. To this aim, we make use of Datalog¬, i.e., Datalog enriched with (unstratified) negation, under stable model semantics [10]. From a computational point of view, Datalog¬ is coNP-complete
with respect to data complexity, and therefore is
well suited for dealing with the high computational complexity
of our problem.
The rewriting that we present in the following extends the
one proposed in [4] for CQs specified over database schemas
with KDs, in order to properly handle the presence of EDs.
The rewriting is employed in the system INFOMIX. Analogously
to other proposals that solve consistent query answering
via query rewriting (although for different classes
of constraints and query languages, see, e.g., [14, 3]), the
basic idea of the technique is to encode the constraints of
the relational schema into a Datalog¬ program, such that
the stable models of the program yield the repairs of the
database instance D.
Definition 4. Given a CQ³ q and a schema S, the Datalog¬ program (q, S) is defined as the following set of rules⁴:
1. the rule corresponding to the definition of q;
2. for each relation r in S, the rules
   r(~x, ~y) :- r^D(~x, ~y), not r̄(~x, ~y)
   r̄(~x, ~y) :- r^D(~x, ~y), r(~x, ~z), y_1 ≠ z_1
   . . .
   r̄(~x, ~y) :- r^D(~x, ~y), r(~x, ~z), y_m ≠ z_m
   where: in r(~x, ~y) the variables in ~x correspond to the attributes constituting the key of the relation r; ~y = y_1, . . . , y_m and ~z = z_1, . . . , z_m.
3. for each exclusion dependency r[i_1, . . . , i_k] ∩ s[j_1, . . . , j_k] = ∅ in E, with r ≠ s, the rules:
   r̄(~x, ~y) :- r^D(~x, ~y), s(~x, ~z)
   s̄(~x, ~y) :- s^D(~x, ~y), r(~x, ~z)
   where ~x = x_1, . . . , x_k, i.e., the variables in ~x correspond to the sequence of attributes of r and s involved in the ED.
4. for each exclusion dependency r[ℓ_1, . . . , ℓ_h] ∩ r[m_1, . . . , m_h] = ∅ in E, the rules:
   r̄(~x, ~y, ~z) :- r^D(~x, ~y, ~z), r(~y, ~w_1, ~w_2)
   r̄(~x, ~y, ~z) :- r^D(~x, ~y, ~z), r(~w_1, ~x, ~w_2)
   r̄(~x, ~x, ~z) :- r^D(~x, ~x, ~z).
In the above rules, r^D is the predicate holding the extension of r in D, and r̄ is an auxiliary predicate associated with r, whose role is explained below.
³ The present rewriting is not actually restricted to CQs, since it can be immediately extended to general Datalog queries.
⁴ Without loss of generality, we assume that the attributes in the key precede all other attributes in r, that i_1 = j_1 = 1, . . . , i_k = j_k = k, ℓ_1 = 1, . . . , ℓ_h = h, and m_1 = h + 1, . . . , m_h = h + h.
Furthermore, we denote with (D) the database instance obtained from D by replacing each predicate symbol r with r^D.
Informally, for each relation r, (q, S) contains (i) a relation r^D that represents r^D; (ii) a relation r that represents a subset of r^D that is consistent with the KD for r and the EDs that involve r; (iii) an auxiliary relation r̄ that represents the "complement" of r, i.e., the subset of r^D that together with r results inconsistent with the EDs and KDs on the schema. Notice that the extension of r depends on the choice made for r̄ (and vice-versa), and that such choices are made in a non-deterministic way (enforced by the use of the unstratified negation). The above rules force each stable model M of (q, S) ∪ (D) to be such that r^M is a maximal subset of tuples from r^D that are consistent with both the KD for r and the EDs in E that involve r.
Example 2 (contd.). The Datalog¬ rewriting (q, S) of the query q(x, z) :- Journal(x, y), Editor(y, z) is the following program:
q(x, z) :- Journal(x, y), Editor(y, z)
Journal(x, y) :- Journal^D(x, y), not Journal̄(x, y)
Editor(x, y) :- Editor^D(x, y), not Editor̄(x, y)
ConfPr(x, y) :- ConfPr^D(x, y), not ConfPr̄(x, y)
Journal̄(x, y) :- Journal^D(x, y), Journal(x, z), z ≠ y
Editor̄(x, y) :- Editor^D(x, y), Editor(x, z), z ≠ y
ConfPr̄(x, y) :- ConfPr^D(x, y), ConfPr(x, z), z ≠ y
Journal̄(x, y) :- Journal^D(x, y), ConfPr(x, z)
ConfPr̄(x, y) :- ConfPr^D(x, y), Journal(x, z)
The first rule of the rewriting encodes the query. The second, third and fourth rules establish the relationship between each relation and the corresponding complementary predicate. The fifth, sixth, and seventh rules encode the KDs of S, whereas the last two rules encode the ED.
We now state correctness of our encoding with respect to
the semantics of consistent query answering.
Theorem 5. Let S = ⟨A, K, E⟩ be a database schema, D be a database instance for S, and q be a CQ over S. A tuple t is a consistent answer to q in D under S iff t ∈ q^M for each stable model M of (q, S) ∪ (D).
From the above theorem and Theorem 3 it follows that the
consistent query answering problem under KDs and EDs is
coNP-complete in data complexity.
FOL Rewriting. Let us now consider a different approach
to consistent query answering, which aims at identifying
subclasses of queries for which the problem is tractable.
This is the line followed in [1, 6, 13]. In particular, in [13]
the authors define a subclass of CQs, called C_tree, for which they prove tractability of consistent query answering in the presence of KDs, and provide a FOL rewriting technique. The class C_tree is based on the notion of join graph: a join graph of a query q is the graph that contains (i) a node N_i for every atom in the query body, (ii) an arc from N_i to N_j iff an existential shared variable occurs in a non-key position in N_i and occurs also in N_j, (iii) an arc from N_i to N_i iff an existential shared variable occurs at least twice in N_i, and one occurrence is in a non-key position. According to [13], C_tree is the class of conjunctive queries (a) without repeated
relation symbols, (b) in which every join condition involves
the entire key of at least one relation and (c) whose join
graph is acyclic. As pointed out in [13], this class of queries
is very common, since cycles are rarely present in queries
used in practice. However, no repeated symbols may occur
in the queries, and queries must have joins from non-key
attributes of a relation to the entire key of another one.
We now extend the work of [13] as follows:
We refine the class C_tree by allowing join conditions in which not necessarily the entire key of one relation has to be involved, but it is sufficient that, for each pair of attributes, at least one attribute belongs to a key (i.e., we allow for joins involving portions of keys). In such a way, we obtain a new class, called C+_tree, larger than C_tree, for which consistent query answering is polynomial in the presence of KDs. In other words, C+_tree is the class of conjunctive queries for which only conditions (a) and (c) above hold.
We refine the class C+_tree in order to obtain a class of queries, called KE-simple, for which consistent query answering is polynomial in the presence of both KDs and EDs.
We provide a new algorithm for computing the FOL rewriting for KE-simple queries. In the algorithm, we exploit the notion of join graph of [13], but we enrich the structure of the graph by associating to each node an adornment which specifies the different nature of terms in the atoms (see below), in order to deal with KE-simple queries.
Let us describe in detail our technique. Henceforth, given a CQ q, we denote by R_q the set of relation symbols occurring in body(q). Given a database schema S = ⟨A, K, E⟩ and a CQ q, we denote by O_E(q) the set of relation symbols O_E(q) = {s | r[j_1, . . . , j_k] ∩ s[ℓ_1, . . . , ℓ_k] = ∅ ∈ E and r ∈ R_q}. In words, O_E(q) contains each relation symbol s ∈ A such that there exists an exclusion dependency between s and r in E, where r is a relation symbol occurring in body(q).
Definition 6. Let S = ⟨A, K, E⟩ be a database schema. A conjunctive query q is KE-simple if q ∈ C+_tree, and
there exists no pair of relation symbols r, s in O_E(q) such that there exists an exclusion dependency between r and s in E;
there exists no relation symbol r in O_E(q) such that there exists r[i_1, . . . , i_k] ∩ s[j_1, . . . , j_k] = ∅ in E, and either key(r) ⊉ {i_1, . . . , i_k} or key(s) ⊉ {j_1, . . . , j_k}, where s is a relation symbol in R_q.
In words, a query q is KE-simple if it belongs to the class C+_tree, and if both there are no EDs between relations that are in O_E(q), and each ED between a relation r ∈ R_q and a relation s ∈ O_E(q) does not involve non-key attributes of r or s. Notice that this last condition does not limit the applicability of our approach in many practical cases. For example, in relational databases obtained from ER-schemas, EDs are typically specified between keys.
For KE-simple CQs, we present in the following a query
rewriting algorithm which, given a query q, produces a FOL
rewriting, whose evaluation over any database instance D
for the database schema S returns the consistent answers to
q in D under S. The basic idea of the algorithm is to specify
a set of conditions, expressible in FOL, that, if verified over
a database instance D, for a given tuple t, guarantee that
in any repair of D there is an image of t w.r.t q, i.e., t is
a consistent answer to q in D. We point out that, for non-KE-simple CQs, such conditions cannot be specified in FOL.
Observe that, in our approach, the FOL rewriting is then in
turn translated into SQL, and query evaluation is performed
by means of standard DBMS query answering techniques.
This further encoding does not present particular difficulties,
and due to space limits we omit this transformation.
In order to construct our join graph we need the following
definition.
Definition 7. Let S = ⟨A, K, E⟩ be a database schema, q be a CQ, and a = r(x_1, . . . , x_n) be an atom (of arity n) occurring in R_q. Then, let key(r) = {i_1, . . . , i_k} belong to K, and let 1 ≤ i ≤ n. The type of the i-th argument of a in q, denoted by type(a, i, q), is defined as follows:
1. If i_1 ≤ i ≤ i_k, then:
   if x_i is a head variable of q, a constant, or an existential shared variable, then type(a, i, q) = KB;
   if x_i is an existential non-shared variable of q, then type(a, i, q) = KU.
2. Otherwise (i ∉ {i_1, . . . , i_k}):
   if x_i is a head variable of q or a constant, then type(a, i, q) = B;
   if x_i is an existential shared variable of q, then type(a, i, q) = S;
   if x_i is an existential non-shared variable of q, then type(a, i, q) = U.
Terms typed by KB or B are called bound terms; otherwise they are called unbound. We call the typing of a in q the expression of the form r(x_1/t_1, . . . , x_n/t_n), where each t_i is the type of the argument x_i in q.
The following algorithm KEFolRewrite computes the FOL rewriting of a KE-simple conjunctive query q. In the algorithm, JG(q) denotes the join graph of q, in which each node N_i is labelled with the typing of the corresponding atom a_i in q. Furthermore, roots(JG(q)) denotes the set of nodes that are roots in JG(q) (notice that for KE-simple queries the join graph is a forest, since it is acyclic).
Algorithm KEFolRewrite(q, S)
Input: KE-simple CQ q (whose head variables are x_1, . . . , x_n); schema S = ⟨A, K, E⟩
Output: FOL query (representing the rewriting of q)
begin
Algorithm FolTree(N, E)
Input: node N of JG(q); set of EDs E
Output: FOL formula
begin
  let a = r(x_1/t_1, . . . , x_n/t_n) be the label of N;
  for i := 1 to n do
    if t_i ∈ {KB, B} then v_i := x_i
    else v_i := y_i, where y_i is a new variable;
  if each argument of a is of type B or KB then f_1 := r(x_1, . . . , x_n)
  else begin
    let i_1, . . . , i_m be the positions of the arguments of a of type S, U, KU;
    f_1 := ∃ y_{i_1}, . . . , y_{i_m} . r(v_1, . . . , v_n)
  end;
  for each ED r[j_1, . . . , j_k] ∩ s[ℓ_1, . . . , ℓ_k] = ∅ ∈ E do
  begin
    let m be the arity of s;
    for i := 1 to m do
      if i ∈ {ℓ_1, . . . , ℓ_k} then z_i := v_{j_c}, where c is such that i = ℓ_c
      else z_i := y_i, where y_i is a new variable;
    let y_{i_1}, . . . , y_{i_k} be the new variables introduced above;
    f_1 := f_1 ∧ ¬ ∃ y_{i_1}, . . . , y_{i_k} . s(z_1, . . . , z_m)
  end;
  if there exists no argument in a of type B or S then return f_1
  else begin
    let p_1, . . . , p_c be the positions of the arguments of a of type U, S or B;
    let ℓ_1, . . . , ℓ_h be the positions of the arguments of a of type B;
    for i := 1 to c do
      if t_{p_i} = S then z_{p_i} := x_{p_i}
      else z_{p_i} := y_i, where y_i is a new variable;
    for i := 1 to n do
      if t_i ∈ {KB, KU} then w_i := v_i else w_i := z_i;
    f_2 := ∀ z_{p_1}, . . . , z_{p_c} . ( r(w_1, . . . , w_n) → ⋀_{N′ ∈ jgsucc(N)} FolTree(N′) ∧ ⋀_{i ∈ {ℓ_1, . . . , ℓ_h}} w_i = x_i )
    return f_1 ∧ f_2
  end
end
Figure 1: The algorithm FolTree
compute JG(q);
return {x_1, . . . , x_n | ⋀_{N ∈ roots(JG(q))} FolTree(N, E)}
end
Basically, the algorithm builds the join graph of q and
then builds the first-order query by invoking the algorithm
FolTree on all the nodes that are roots of the join graph.
The algorithm FolTree is defined in Figure 1. Roughly
speaking, the algorithm FolTree(N, E) returns a first-order
formula that constitutes the encoding of the whole subtree
of the join graph of the query whose root is the node N .
To do that, the algorithm computes two subformulas f_1 and f_2. The formula f_1 contains an atom whose predicate is the predicate r labelling the node N, in which the unbound variables of r are renamed with new existentially quantified variables. Furthermore, f_1 contains a subformula of the form ¬∃ y_{i_1}, . . . , y_{i_k} . s(z_1, . . . , z_m) for each ED that involves r and a relation s. Intuitively, when evaluated over a database instance D, each such subformula checks that there are no facts of the form s(t_s) ∈ D that violate the ED together with a fact of the form r(t_r) ∈ D which is in an image I of a tuple t w.r.t. the input query q, i.e., it guarantees that I is not contradicted w.r.t. the ED. The formula f_2 is empty only when all non-key arguments of the atom r are existential non-shared variables (i.e., of type U). Otherwise, the formula f_2 is a universally quantified implication. In such an implication, the antecedent is an atom whose predicate is r, and the consequent is a conjunction of equality conditions and other subformulas: more precisely, there is an equality condition for each non-key argument in r of type B, and a subformula for each successor N′ of N in the join graph of q, computed by recursively invoking FolTree on N′. Intuitively, f_2 enforces the joins between r and each atom labelling the successors of r in the join graph of q. At the same time f_2 ensures that, when evaluated over a database instance D, if there exists a fact of the form r(t′_r) ∈ D that violates the KD specified on r together with a fact of the form r(t_r) ∈ D which is in the image of a tuple t w.r.t. q, then r(t′_r) belongs to another image of t w.r.t. q. In other words, the subformula guarantees that in any repair there exists an image of t (w.r.t. the KD on r). Such a check is iterated for the other KDs by recursively invoking FolTree. The following example illustrates the way the algorithm works.
Example 2 (contd.). It is easy to verify that the query q(x, z) :- Journal(x, y), Editor(y, z) is KE-simple. Its join graph JG(q), with each node labelled by the typing of its atom, is:
Journal(x/KB, y/S) (N1) → (N2) Editor(y/KB, z/B)
Now, by applying the algorithms KEFolRewrite and FolTree we obtain:
KEFolRewrite(q) = {x, z | FolTree(N1)}
FolTree(N1) = ∃y_2. Journal(x, y_2) ∧ ¬∃y_2. ConfPr(x, y_2) ∧ ∀y. ( Journal(x, y) → FolTree(N2) )
FolTree(N2) = Editor(y, z) ∧ ∀y_2. ( Editor(y, y_2) → y_2 = z ).
relations: faculty/3, exam plan/10, course assignment/3, degree/5, positioning/2, course/4, plan status/2, prof data/3, university/3, bachelor exam/2, master exam/2, exam type/2, exam/4
integrity constraints: key(faculty) = {1, 2}; key(plan status) = {1}; key(exam plan) = {1}; key(positioning) = {1}; key(university) = {1}; key(prof data) = {1}; key(exam type) = {1}; key(degree) = {1}; key(course) = {1}; key(exam) = {2}; key(master exam) = {1}; key(bachelor exam) = {1}; course assignment[2] ∩ professor[1] = ∅; master exam[1] ∩ bachelor exam[1] = ∅; course[3, 4] ∩ bachelor exam[1, 2] = ∅
Figure 2: A portion of the test database schema
By evaluating the rewriting over D we get {⟨TODS, USA⟩}, i.e., the set of consistent answers to q in D under S.
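The translation of this rewriting into SQL can be sketched as follows; the query below is only one possible formulation (attribute names are those of the schema in Example 2), not necessarily the exact statement produced by the rewriting module described in Section 4.

-- SQL counterpart of {x, z | FolTree(N1)}: the universally quantified
-- implication in FolTree(N1) becomes a NOT EXISTS over Journal facts
-- that falsify FolTree(N2).
SELECT DISTINCT J.title, E.country
FROM Journal J, Editor E
WHERE NOT EXISTS (SELECT * FROM ConfPr C WHERE C.title = J.title)
  AND NOT EXISTS (
        SELECT *
        FROM Journal J2
        WHERE J2.title = J.title
          AND ( NOT EXISTS (SELECT * FROM Editor E2
                            WHERE E2.name = J2.editor
                              AND E2.country = E.country)
                OR EXISTS (SELECT * FROM Editor E3
                           WHERE E3.name = J2.editor
                             AND E3.country <> E.country) ) );

Evaluated over the instance D of Example 2, this query returns exactly the row (TODS, USA).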
Next, we state soundness and completeness of the algorithm.
Theorem 8. Let S = ⟨A, K, E⟩ be a database schema, q be a KE-simple conjunctive query over S, and q_r be the FOL rewriting returned by KEFolRewrite(q). Then, for every database instance D for S, a tuple t is a consistent answer to q in D under S iff t ∈ q_r^D.
As a corollary, consistent query answering for KE-simple
conjunctive queries over database schemas with KDs and
EDs is polynomial in data complexity.
EXPERIMENTS
We now present some experimental results comparing the FOL and the Datalog¬ rewritings previously described. To perform the experiments, we implemented a rewriting module that translates CQs issued over the database schema into both FOL queries and Datalog¬ queries. FOL queries are in turn translated by the module into SQL queries. Then, we ran the SQL queries on a MySQL 4.1.10 instance of the test database, while we executed the Datalog¬ queries on DLV
[15]. The experiments were conducted on a double processor
machine, with 3 GHz Pentium IV Xeon CPU and 2 GB of
main memory, running the Linux operating system.
The test database holds information about the computer
science engineering degrees of the University of Rome "La Sapienza" and contains 27 tables with an overall size of over 200,000 tuples. In Figure 2, we present the portion of the
test database schema that is relevant for the queries (in the
figure, "r/n" indicates that relation r is of arity n).
Due to space limits, we only report details about three of
the queries we tested:
Q_0 = q(C) :- faculty(C, U, 'INGEGNERIA').
Q_2 = q(S, D, P) :- positioning(PS, P), plan status(ST, DE), exam plan(C, S, PS, DT, ST, '1', U1, U2, U3, U4).
Q_3 = q(N, D, NP, CP) :- master exam(C, N, T, '5'), exam type(T, D),
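As an illustration of the first-order approach on the test schema, a consistent-answer query for Q_0 can be written in SQL as sketched below. Column names c1, c2, c3 for faculty/3 are assumptions, and the statement is only meant to convey the shape of the SQL encoding, not the exact output of our rewriting module.

-- Consistent answers to Q_0 under key(faculty) = {1, 2}: a value of c1 is
-- returned iff some key group (c1, c2) exists all of whose members carry
-- the constant 'INGEGNERIA' in the third attribute, so that every repair
-- keeps a witness.
SELECT DISTINCT f1.c1
FROM faculty f1
WHERE f1.c3 = 'INGEGNERIA'
  AND NOT EXISTS (SELECT *
                  FROM faculty f2
                  WHERE f2.c1 = f1.c1
                    AND f2.c2 = f1.c2
                    AND f2.c3 <> 'INGEGNERIA');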
The queries have been posed on various instances of the test database with an increasing number of pairs of tuples violating some ICs. Figure 3 shows the experimental results. In the charts 3(a), 3(b) and 3(c), the execution time of the SQL encoding and of the Datalog¬ program are compared for queries Q_0, Q_2, and Q_3. As expected, from a certain inconsistency level on, the execution time of the Datalog¬ encoding has an exponential blow-up; in contrast, the execution time for the SQL encoding is constant on average, and for Q_3 (Figure 3(b)) it even decreases: although this might be surprising, it turns out that some inconsistency allows the SQL engine to prune the search space for query answering. Moreover, the chart presented in Figure 3(d) compares, on a logarithmic scale, the execution time of all queries at the highest inconsistency level. It shows that the SQL encoding is always more efficient when the degree of data inconsistency grows; however, it turns out that the method based on Datalog¬ and DLV proves particularly efficient in the presence of few data inconsistencies.
CONCLUSIONS
The present work provides a general experimental validation
of the first-order rewriting approach to the optimization
of consistent query answering. Of course, the applicability
of our technique is limited to the class of KE-simple queries.
For general CQs, the use of a more expressive, and computationally harder, query language like Datalog¬ is necessary.
Very recently, the first prototype implementations of consistent
query answering have appeared, and the first efforts
towards optimization of query processing are emerging.
Within INFOMIX, several optimizations are currently under
development to improve consistent query answering for
more expressive classes of queries [9, 8]. In this respect,
binding propagation techniques based on magic sets might
significantly reduce the execution time of Datalog¬ programs on DLV [11], even if the coNP structure of the Datalog¬ encoding suggests that the efficiency of the SQL rewriting can hardly be reached (especially for a large number of inconsistencies).
The ConQuer system [12] implements an extension of the
technique of [13], which allows queries belonging to the class C_tree, enriched with aggregates, to be rewritten into SQL. Experiments show that the overhead of evaluating rewritten queries is not onerous compared with the evaluation of the
original query over the inconsistent database. Therefore,
[12] focuses on comparing standard query answering and
consistent query answering, while our experiments compare
two different query answering techniques. In this respect,
we point out that optimization of our SQL rewriting was
outside the scope of the present paper.
Finally, Hippo [7] is a system for consistent answering of
union of conjunctive queries without existential variables in
the presence of denial constraints. Hence, this approach
is different from ours in terms of both query language and
integrity constraints allowed. Moreover, Hippo techniques
are not based on rewritings.
As future work, we aim at extending our approach to other
forms of ICs (e.g., foreign keys) and at optimizing the SQL
rewriting produced by KEFolRewrite.
[Figure 3: Experimental results. (a) Q_0 execution time; (b) Q_3 execution time; (c) Q_2 execution time; (d) SQL vs. Datalog¬.]
ACKNOWLEDGMENTS
This research has been partially supported by the Project
INFOMIX (IST-2001-33570) funded by the EU.
REFERENCES
[1] Marcelo Arenas, Leopoldo E. Bertossi, and Jan Chomicki. Consistent query answers in inconsistent databases. In Proc. of PODS'99, pages 68-79, 1999.
[2] Franz Baader, Diego Calvanese, Deborah McGuinness, Daniele Nardi, and Peter F. Patel-Schneider, editors. The Description Logic Handbook: Theory, Implementation and Applications. Cambridge University Press, 2003.
[3] Loreto Bravo and Leopoldo Bertossi. Logic programming for consistently querying data integration systems. In Proc. of IJCAI 2003, pages 10-15, 2003.
[4] Andrea Calì, Domenico Lembo, and Riccardo Rosati. On the decidability and complexity of query answering over inconsistent and incomplete databases. In Proc. of PODS 2003, pages 260-271, 2003.
[5] Andrea Calì, Domenico Lembo, and Riccardo Rosati. Query rewriting and answering under constraints in data integration systems. In Proc. of IJCAI 2003, pages 16-21, 2003.
[6] Jan Chomicki and Jerzy Marcinkowski. On the computational complexity of minimal-change integrity maintenance in relational databases. In Inconsistency Tolerance, pages 119-150, 2005.
[7] Jan Chomicki, Jerzy Marcinkowski, and Slawomir Staworko. Computing consistent query answers using conflict hypergraphs. In Proc. of CIKM 2004, pages 417-426, 2004.
[8] Chiara Cumbo, Wolfgang Faber, Gianluigi Greco, and Nicola Leone. Enhancing the magic-set method for disjunctive datalog programs. In Proc. of ICLP 2004, pages 371-385, 2004.
[9] Thomas Eiter, Michael Fink, Gianluigi Greco, and Domenico Lembo. Efficient evaluation of logic programs for querying data integration systems. In Proc. of ICLP'03, pages 163-177, 2003.
[10] Thomas Eiter, Georg Gottlob, and Heikki Mannila. Disjunctive Datalog. ACM Trans. on Database Systems, 22(3):364-418, 1997.
[11] Wolfgang Faber, Gianluigi Greco, and Nicola Leone. Magic sets and their application to data integration. In Proc. of ICDT 2005, pages 306-320, 2005.
[12] Ariel Fuxman, Elham Fazli, and Renée J. Miller. ConQuer: Efficient management of inconsistent databases. In Proc. of SIGMOD 2005, pages 155-166, 2005.
[13] Ariel Fuxman and Renée J. Miller. First-order query rewriting for inconsistent databases. In Proc. of ICDT 2005, pages 337-351, 2005.
[14] Gianluigi Greco, Sergio Greco, and Ester Zumpano. A logical framework for querying and repairing inconsistent databases. IEEE Trans. on Knowledge and Data Engineering, 15(6):1389-1408, 2003.
[15] Nicola Leone, Gerald Pfeifer, Wolfgang Faber, Thomas Eiter, Georg Gottlob, Simona Perri, and Francesco Scarcello. The DLV system for knowledge representation and reasoning. ACM Trans. on Computational Logic, 2005. To appear.
| relational database;Query Rewriting;integrity constraints;query rewriting;consistent query answering;Computational Complexity;conjunctive queries;inconsistent database;Inconsistency;database schemas |
56 | Context-Aware Web Information Systems | Apart from completeness usability, performance and maintainability are the key quality aspects for Web information systems. Considering usability as key implies taking usage processes into account right from the beginning of systems development. Context-awareness appears as a promising idea for increasing usability of Web Information Systems. In the present paper we propose an approach to context-awareness of Web Information Systems that systematically distinguishes among the various important kinds of context. We show how parts of this context can be operationalized for increasing customers' usage comfort. Our approach permits designing Web information systems such that they meet high quality expectations concerning usability, performance and maintainability. We demonstrate the validity of our approach by discussing the part of a banking Web Information System dedicated to online home-loan application. | Introduction
1.1 Generations of Web Services
Understanding Web Information Systems (WIS) as
monolithic and presentation-oriented query-answer
systems would be too simplistic. Implementing the
individual services of a WIS only on the basis of XML
or (D)HTML suites suffices for the interface accessible
by a particular customer. The quality of service
provided by a WIS, both expected and implemented, has however evolved over the last decade beyond mere completeness. Extending the classification
in (Berger 2003, p.146) we distinguish between
different generations of WIS.
First generation (1G): "build it, and they will come"
First develop a WIS, then customers will come,
because they believe that it is useful. Many of the
1G-WIS were informational, i.e., they weren't interactive
.
Second generation (2G): "advertise online sales, and
they will come"
Develop a WIS and market it. Customers will
come, because the advertisement convinced them
about the WIS's usability.
The WIS may be
transactional, i.e., contain interactive interfaces
to company products and services. A standard
interface is provided but hard to learn. No particular
customer usage aid is offered.
Third generation (3G): "realize a pleasant use of high
quality services, and they will come"
Customers will find using the WIS helpful. They
will do the marketing. 3G-WIS's typical characteristics
are:
high value and up-to-date content,
high performance,
brand value of the provider, and
pleasant and easy use for casual as well as
for frequent customers.
Many WIS including several banking WIS are still
2G. However, impressive and well-developed WIS,
e.g., the Amazon web-site, demonstrate the feasibility
of 3G-WIS. The success of such WIS is based on
deep understanding of the application area, the customers
needs, abilities and habits. Adaptation to customers
-- if provided -- is based on allocating the
most suited subspace of the WIS application space to
the customer.
WIS can be classified into e-business, e-learning,
edutainment, community, information and personality
WIS. In the e-business class the B2B systems
have been more successful than B2C systems. This
success results from well-understood usage scenarios
built into the WIS. We observe that usage scenarios
are better understood for B2B-WIS than for B2C-WIS
.
Storyboarding is a design approach focusing on usage
scenarios. However, so far it is mainly used employing
pinboard approaches, see e.g. (Siegel 1998,
Van Duyne et al. 2003). Pinboard approaches map a
number of scenarios observed in the application onto
tree-structured web sites. Storyboarding in the movie
business is used to design much more complex scenarios
. To overcome this limitation the storyboard
specification language SiteLang has been introduced
in (Thalheim and Düsterhöft 2001). Until now it has
been applied in more that two dozen WIS projects of
the Cottbus InfoTeam since 1999.
Our development experience implies that implementing 3G-WIS requires sophisticated database support, see (Thalheim 2000a). Our approach to guaranteeing this support is based on the theory of media types, which generalize database views (see e.g. (Schewe and Thalheim 2001)). Another finding from our practical experience is that customer behavior has changed. Customers are no longer patiently waiting until their needs are met. They require personal interfaces. Customization of system interfaces to users has been known for quite a while. However, WIS are targeting new and casual customers. These customers are not capable of or willing to arrange for system adaptation. Internet service providers report customers frequently complaining about insufficient user-friendliness and unsophisticated WIS.
1.2
Problems of Complex Applications
Modern applications, in particular WIS, often appear to be relatively simple if only their interface is considered. Their point-and-click operating mode is deliberately set up in a way that causes the impression of simplicity. Internally, however, things may be quite different. A client-server multi-tier architecture with HTML-server, database server and application server might be used. This implies some non-trivial development tasks such as database design and the development of an application programmer interface or similar.
In addition, several customer types may be known
to the application system. A WIS may appear very
different to customers of different type. The functionality
they access, however, is still the basic functionality
as implemented by the servers mentioned before
. Consequently as many function schemas and
data schemas need to be developed as there are anticipated
customer types. These schemas need to be integrated
to develop a consistent view of the key application
functionalities. Views that are based on these
schemas need to be generated allowing the individual
customers to operate with the application in the way
that is most natural for them. The development of
WIS thus can be a quite complex process.
Among others, the complexity of this development process depends on the degree of use of an underlying database, from which dynamic web pages are created. Furthermore, the complexity of this process depends on the number of versions of usage processes of the WIS that need to be anticipated, since different usage processes may lead to different data and functionality being accessible to customers. Additional complexity comes in -- e.g. in the case of modern retail banking -- when a requirement is set in place that various access channels -- e.g. channels needed for cell-phone or PDA access -- should be made available to customers. Apart from the purely technical problems arising from discretionary access channels, the problem of layout for these channels has to be solved.
1.3
An Application Example
An example of a WIS in retail banking showing a relatively
high diversity of the usage process is online
loan application, if considered in full generality as we
do it here. For an introduction to lending in general
, of which the loan business is just a part, see e.g.
(Valentine 1999). Not all banks offer online home-loan
application facilities. Those that provide such
facilities do not necessarily allow customers to deal
with them completely online. Banks that offer effective
online loan application are the Swiss UBS AG
and the New Zealand and Australia based ASB Bank.
The acceptance of a longer interruption of service at
the ASB site indicated that at least for this bank online
home-loan application is not yet considered a major
part of their business.
Complexity in home-loan applications results from the fact that the applicant is not necessarily exactly one natural person. For each of the applicants, properties and debts need to be identified and valuated.
Often banks would accept only home-loan applications
of at most two people, in general the couple that
is going to live in the home financed with the loan.
Complexity is further increased by the loan not necessarily
being a fresh one but being already granted
to someone who due to his or her financial conditions
has chosen to move the loan to a different bank.
Furthermore, the properties offered for securing
the loan may belong to a variety of types. Some of these types, e.g. real estate property, may require
physical inspection to determine the value they can
cover. Other properties such as financial instruments,
i.e. shares, options or accounts, may only need an inquiry
to the respective depot or account. If cash is
a security, then it might even be impossible to finish
the process electronically, as the cash needs to be
brought to the bank branch, counted and deposited.
It is similar with debts. On real estate properties
there might be liabilities that require an assessment
of the actual value. Of course there is quite a number
of so-called loan structures (see (Valentine 1999,
p.226f.)) distinguishing between loans. For instance,
they may differ from each other in their term, frequency
of repayments, borrower's authorization to increase
the debt (e.g. overdraft facility), the minimum
security ratio or the repayment structure. The latter
addresses the schema of how capital and interests
are paid for by the customer. Independently of the loan structure a customer might choose among several
loan options that specify how the interests develop
over time, i.e. they may be fixed for a particular time
period or they may float like general interest rates in
banking.
Apart from these principal choices in a home-loan
there are a number of tools available for customer's
use throughout online home loan applications such as
a borrowing power calculator, a repayment schedule
calculator, etc. Additionally, dictionaries of banking
terms, act excerpts and comments as well as descriptions
of the applying financial instruments need to be
accessible to customers.
All the possible options in the financial instruments
and the respective variations of the WIS usage
will only be considered by a small number of customers
. In more technical terms we have to deal with
a generic process type. Most of its instances realize
only a part of the possible variations. At present online
home-loan application systems are not typical retail
banking applications. Automated clearing house
(ACH), i.e. direct deposit of payments, withdrawing
monthly mortgage payments, etc. are more typical.
According to (Berger 2003, p.150f.) its use in the US is steeply increasing and has, after initial problems, even increased productivity. However, ACH is
a back-office activity, whereas online home-loan application
is a customer-home or front-office activity.
According to (Berger 2003, p.149) internet-only banks
performed more poorly than conventional banks did.
If this finding implies that online home-loan applications
are less productive than conventional home-loan
application processing then we believe that this is
only a temporary phenomenon. We believe that cultural
obstacles concerning internet-banking will disappear
when 3G-WIS have become more popular.
According to (Berger 2003) there is empirical evidence
for an increasing market share of electronic payment
. According to studies reported in (Berger 2003,
p.162) there is even empirical evidence for increased
productivity due to investment in IT labor, while
there is no empirical evidence for IT investments increasing
efficiency in general. This is consistent with
the basic insight that not the mere use of IT but the
kind and quality of this use can increase productivity
. Our paper shall help make 3G-WIS more popular and thus contributes to internet banking covering the business more completely at a higher level of quality.
1.4
Related Work
A lot of related work has been done on the development
of web information systems.
The work in
(Atzeni et al. 1998) emphasizes the design of content
leading to databases, navigation leading to hypertext
, and presentation leading to the pages layout.
Other authors (see for example (Baresi et al. 2000),
(Bonifati et al. 2000), (Gaedke and Turowski 1999) and (Rossi et al. 1999)) follow the same lines of
thought or concentrate on the "add-on" to database
design, emphasizing mainly the hypertext design
dealing with navigation structures (see (Garzotto et
al. 1993) and (Schwabe and Rossi 1998)). The work in
(Feyer et al. 1998) presents the forerunner of the theory
of media types (see (Schewe and Thalheim 2001)).
Media types provide a theoretically sound way to integrate
databases, external views, navigation structures
, operations, and even support adaptivity to different
users, environments and channels. The adaptivity
feature distinguishes them from the dialogue
types that are used to integrate database systems with
their user interfaces (see (Schewe and Schewe 2000)).
The work in (Schewe and Thalheim 2001) already
emphasizes that conceptual abstraction from content,
functionality, and presentation of an intended site
is not sufficient for the adequate conceptual modelling
of web-based systems, even if complex media
types are taken into consideration. Some of the approaches
mentioned before (see (Atzeni et al. 1998),
(Baresi et al. 2000), (Bonifati et al. 2000), (Gaedke and Turowski 1999), (Rossi et al. 1999), (Garzotto
et al. 1993) and (Schwabe and Rossi 1998)) miss out
on the important aspect of story boarding, which is
needed to capture the business content of the system.
Story boarding in a process-oriented holistic manner
focusses on user intentions. In more recent work
some of the authors (Kaschek et al. 2003a) started
to investigate this idea more thoroughly.
Conceptual modelling traditionally considered more ontological aspects than epistemological ones. Since web information systems differ considerably from non-web information systems in two respects, epistemological aspects, however, need to be taken more seriously: Web information systems are open in the sense that actual users virtually may be just anyone. In non-web systems there was traditionally a much stricter access control preventing non-staff from using the system
. The business idea, however, has changed and
customers need to be attracted and pre-selected by
a web information system. Furthermore, web information
systems are open in the sense that it is very
easy to use them for accessing other web systems.
This introduces more competition among those who
offer services on the web. Quality of web information
systems in the sense of fitness for users' use thus
tends to be more important than it was for non-web
systems. Web information systems partly substitute
staff-customer interaction by customer-computer interaction
.
Consequently, web information systems
must focus on aiding customers in doing the business
the system provider is engaged in. Clearly this
only can be done on the basis of a customer model.
User profiling together with story boarding provides a holistic means for this.
In (Schewe and Thalheim 2001) it is suggested
that story boarding be supported through directed
graphs called scenarios, in which the nodes represent
the scenes and the edges correspond either to navigation
or to actions issued by the user. This extends
the work in (Feyer et al. 1998), where simply partially
ordered sets have been used. In addition, user profiling
is approached by using user dimensions capturing
various aspects of how to characterise users. This has
been extended in (Srinivasa 2001) to a formal description
of interactive systems.
The work in (Düsterhöft and Thalheim 2001) presents a formalised language SiteLang to support the specification of story boards. The work also indicates ideas on how to exploit word fields for designing
dialogue steps in story boards. In (Schewe et al. 1995)
and (Schewe 1996) refinement primitives for dialogues
have been discussed. Due to the connection between
dialogues and scenarios, this approach to refinement
is also useful for story boarding. The work in (Schewe
et al. 2002) applies story boarding and user profiling
to the area of on-line loan systems.
1.5
Outline
In section 2 we discuss WIS specification, in particular story spaces and scenarios; we further discuss media objects, dialogue-step specification and context. In
the following section 3 we discuss database design for
WIS, utilization of context for WIS and a stepwise
WIS generation approach called "onion generation".
Finally, in section 4 we continue the discussion of our
example and show how our approach can be applied to
modelling of WIS. Due to space restrictions, however,
we can only discuss the storyboarding part.
WIS Specification
2.1
Story Spaces and Scenario
Modelling usage processes right from the beginning of
systems development requires using a sufficiently expressive
high level semantic model as a respective conceptual
framework. Storyboarding uses the metaphor
"story" to conceptualize usage processes.
We presuppose
that a story (for the source of the interrogatives
used here refer to (Zachman 1987, Sowa and
Zachman 1992)) tells what happened, why and where,
as well as who did it how and when. The story of
customer-WIS interaction thus is the intrigue or plot
of a narrative work or an account of events.
Within a story one can distinguish threads of activity
, so-called scenarios, i.e., paths of scenes that
are connected by transitions. See figure 2.1 for an example
scenario. We do not intend to model branching
stories. These require managing a number of activities at the same time, i.e., in parallel, a capability that, as we believe, many casual customers won't have. With the term story space we mean the integration
of all scenarios in a story.
We define the story space of a WIS W as a 7-tuple consisting of five sets S_W, T_W, E_W, G_W, A_W and two functions. Here S_W, T_W, E_W, G_W and A_W are the set of scenes created by W, the set of scene transitions, the set of events that can occur, the set of guards and the set of actions that are relevant for W, respectively; in particular, T_W is a subset of S_W × S_W. The first function maps S_W to SceneSpec and associates a scene specification with each scene in S_W. The second function maps T_W to E_W × G_W × A_W, t ↦ (e, g, a); it associates with each scene transition t occurring in W the event e that triggers transition t, the guard g, i.e. a logical condition blocking the transition if it evaluates to false on occurrence of e, and the action a that is performed while the transition takes place. The language SiteLang, see (Thalheim and Düsterhöft 2001), offers concepts and notation for the specification of story spaces, scenes and scenarios in them. Scenes and their specifications are discussed in subsection 2.2.
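To make the story space definition concrete, the following sketch represents scenes and guarded scene transitions as plain data structures and fires a transition when its event occurs and its guard holds. It is a minimal illustration under the assumption that guards and actions can be modelled as functions over a WIS state; the names Scene, Transition, StorySpace and fire are invented for this sketch and are not part of SiteLang.

from dataclasses import dataclass, field
from typing import Callable, Dict, List

# A scene is a conceptual location of customer-WIS interaction.
@dataclass
class Scene:
    scene_id: str
    dialogue_expression: str      # dialogue-step expression (see subsection 2.2)
    media_object: str             # name of the supporting media object
    actors: List[str] = field(default_factory=list)

# A transition t is labelled with (event e, guard g, action a).
@dataclass
class Transition:
    source: str
    target: str
    event: str
    guard: Callable[[dict], bool]     # blocks the transition if it evaluates to False
    action: Callable[[dict], None]    # performed while the transition takes place

@dataclass
class StorySpace:
    scenes: Dict[str, Scene]
    transitions: List[Transition]

    def fire(self, state: dict, current: str, event: str) -> str:
        # Follow the first transition from 'current' whose event matches and
        # whose guard holds; run its action and move to the target scene.
        for t in self.transitions:
            if t.source == current and t.event == event and t.guard(state):
                t.action(state)
                return t.target
        return current  # no enabled transition: stay in the current scene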
2.2
Scenes
We consider scenes as the conceptual locations at which the customer-WIS interaction, i.e., dialogue, takes place.
Figure 2.1: Scenario with a loop representing a side story (scenes sc1, sc2, sc3, sc4, sc5, ... in sequence, with a side story entered from and returning to this sequence).
Dialogues can be specified using so-called
dialogue-step expressions. Scenes can be distinguished
from each other by means of their identifier
: Scene-ID. With each scene there is associated a
media object and the set of actors that are involved
in it. Furthermore, with each scene a representation
specification is associated as well as a context. Scenes
therefore can be specified using the following frame:
Scene = ( Scene-ID
          DialogueStepExpression
          MediaObject
          Actors ( ActorID, Right, Tasks Assigned, Roles )
          Representation (styles, defaults, emphasis, ...)
          Context (equipment, channel, particular) )
Dialogue-step expressions consist of dialogues and operators applied to them. Dialogue steps are discussed in subsection 2.4 below. The provided operators are based on the basic dialogue step algebra introduced in (Thalheim and Düsterhöft 2001):
Basic control commands are sequence ; (execution of steps in sequence), parallel split (execute steps in parallel), exclusive choice (choose one execution path from many alternatives), synchronization |sync| (synchronize two parallel threads of execution by a synchronization condition sync), and simple merge + (merge two alternative execution paths). The exclusive choice is considered to be the default parallel operation and is denoted by ||.
Structural control commands are arbitrary cycles (execute steps without any structural restriction on loops), arbitrary cycles + (execute steps without any structural restriction on loops but at least once), optional execution [ ] (execute the step zero times or once), implicit termination (terminate if there is nothing to be done), the entry step in the scene and the termination step in the scene.
Advanced branching and synchronization control commands are multiple choice |(m,n)| (choose between m and n execution paths from several alternatives), multiple merge (merge many execution paths without synchronizing), discriminator (merge many execution paths without synchronizing, execute the subsequent steps only once), n-out-of-m join (merge many execution paths, perform partial synchronization and execute the subsequent step only once), and synchronizing join (merge many execution paths, synchronize if many paths are taken, simple merge if only one execution path is taken).
We may also define control commands on multiple objects (CMO) such as CMO with a priori known design time knowledge (generate many instances of one step when the number of instances is known at design time), CMO with a priori known runtime knowledge (generate many instances of one step when the number of instances can be determined at some point during the runtime, as in FOR loops), CMO with no a priori runtime knowledge (generate many instances of one step when the number of instances cannot be determined, as in a while loop), and CMO requiring synchronization (synchronization edges) (generate many instances of one activity and synchronize afterwards).
State-based control commands are deferred choice (execute one of two alternative threads; the choice which thread is to be executed should be implicit), interleaved parallel execution (execute two activities in random order, but not in parallel), and milestone (enable an activity until a milestone has been reached).
Finally, cancellation control commands are used, e.g. cancel step (cancel (disable) an enabled step) and cancel case (cancel (disable) the case).
These control composition operators are generalizations of workflow patterns (see, e.g. (Workflow Management Coalition 1999, Jablonski 1996)) and follow approaches developed for Petri net algebras.
A graphical representation of a login scene is given
in figure 2.2. We are interested in well-formed dialogues
and do not allow specifications which lead to
and-split or or-split common in workflow specifications
. This scene is specified by the dialogue step
expression
Enter login ;
( Customer login ; [ Change profile ; ]
    ( Service kind selection ; Service selection ; Service customization )
    || Join cooperating group
    || Join bank club
    || Join bank programs
    || General customer information )
|| ( Anonymous Login ; [ Extend adding identity ; ]
    ( Program selection ; Module selection ; Unit selection ) )
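The composition operators above can be read as an expression algebra over dialogue steps, and the login scene expression is one term of that algebra. The following sketch is a simplified illustration only: it covers just sequence, exclusive choice, parallel split and optional execution, and the names Step, Seq, Choice, Par, Opt and pretty are invented for this example rather than taken from SiteLang. The second half encodes the login scene expression with these combinators.

from dataclasses import dataclass
from typing import List, Union

@dataclass
class Step:            # an elementary dialogue step
    name: str

@dataclass
class Seq:             # ';'  - execute sub-expressions in sequence
    parts: List["Expr"]

@dataclass
class Choice:          # '||' - choose exactly one alternative
    options: List["Expr"]

@dataclass
class Par:             # parallel split - execute sub-expressions in parallel
    branches: List["Expr"]

@dataclass
class Opt:             # '[ ]' - execute the step zero times or once
    body: "Expr"

Expr = Union[Step, Seq, Choice, Par, Opt]

def pretty(e: Expr) -> str:
    # Render an expression in a textual notation close to the paper's
    # (the parallel-split symbol is approximated here by '|and|').
    if isinstance(e, Step):
        return e.name
    if isinstance(e, Seq):
        return " ; ".join(pretty(p) for p in e.parts)
    if isinstance(e, Choice):
        return "( " + " || ".join(pretty(o) for o in e.options) + " )"
    if isinstance(e, Par):
        return "( " + " |and| ".join(pretty(b) for b in e.branches) + " )"
    return "[ " + pretty(e.body) + " ; ]"

# The login scene expression from above, encoded with these combinators.
login_scene = Seq([
    Step("Enter login"),
    Choice([
        Seq([
            Step("Customer login"),
            Opt(Step("Change profile")),
            Choice([
                Seq([Step("Service kind selection"),
                     Step("Service selection"),
                     Step("Service customization")]),
                Step("Join cooperating group"),
                Step("Join bank club"),
                Step("Join bank programs"),
                Step("General customer information"),
            ]),
        ]),
        Seq([
            Step("Anonymous Login"),
            Opt(Step("Extend adding identity")),
            Seq([Step("Program selection"),
                 Step("Module selection"),
                 Step("Unit selection")]),
        ]),
    ]),
])

print(pretty(login_scene))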
Figure 2.2: Scene for Login Into a Bank WIS (login scene with adaptation of system facilities; dialogue steps include Enter Login, Customer login, Change profile, General customer information, Anonymous login, Extend by adding identity, Service kind selection, Service seeking selection, Service customization, Join cooperating group, Join bank program and Join bank club).
2.3
Media Objects
A scene is supported by media objects following the
codesign approach. Media objects are instances of
media types.
Figure 2.3: Cutout of the profiling schema (entities include Bank Service, Customer Service, Role, Customer Login, Customer Profile, Profile Type, Task, Portfolio Type, Customer Portfolio, Web Address, Account and Login History).
The core of a media type is defined by a view on
some underlying database schema, i.e. it consists of
a view schema and a defining query. However, this
query must be able to create identifiers in order to create
links between the various media objects. This core
of a media type -- called raw media type in (Schewe
and Thalheim 2001) -- is extended in three directions
:
As a first extension operations are added to the
view in the same way as d-operations were added
to dialogue objects in (Schewe and Schewe 2000).
Basically, the use of operations just adds dynamics
to the media objects. So, if a media object
is associated with a scene, the operations of the
media object define the available dynamic functionality
.
The second extension provides adaptivity and
hierarchies. Adaptivity to the user deals with
needs arising from different users.
Adaptivity
to the technical environment copes with technical
restrictions of end-devices. Adaptivity to
the communication channel deals with adaptation
to needs arising from various communication
channels. For all three forms of adaptivity media
types provide mechanisms for a controlled form
of information loss, which is coupled with algorithms
for the splitting of information content.
The hierarchies are adopted from dimension hierarchies
in OLAP.
The third extension simply covers ordering and
other presentation options.
Thus, roughly speaking media objects consist of
abstract containers, supported DBMS processes and database manipulation requests.
Basic media objects
(Schewe and Thalheim 2000) are characterized
by syntactic expressions, have a semantical meaning
and are used within a certain pragmatical framework.
Media objects can be parameterized. Typical parameters
are the representation style, the actor frame, and
the context frame. Therefore we distinguish between
media objects and runtime media objects in which all
parameters are instantiated.
During runtime, the media object is extended by
specific escort information (Thalheim 2000). This escort
information is represented for user support. It
allows the user to see the history of steps performed
before being in the current state. Escort information
is further generated from the story space. In this case
a user is informed on alternative paths which could
be used to reach the given scene and which might be
used for backtracking from the current scene.
For the generation of media objects and their composition on the basis of information units we extend the classical SQL frame to the frame
generate Mapping : Vars → Structure
from Views
where Selection condition
represent using Style guide & Abstraction
browsing definition Condition & Navigation
The views and therefore the media object may have
hidden parameters (for instance, EventID) which are
not visible to the actor. They can be parameterized
by variables (for instance, @Today). For media objects
we reuse ideas developed for OLAP technology
(Thalheim 2000):
views on ER schemata (abstraction on schemata
(aggregation, scoping, ...), versions),
variations of generation functions,
display with canonical functionality (drill-down,
roll-up, rotate, pivoting, push, pull, dimension,
aggregation),
using generic evaluation functions and models,
implicit incorporation of hierarchies and
implicit incorporation of time, space, ....
Furthermore, involved actors are specified in dependence
on their profiles, tasks assigned to them,
their access and manipulation rights, and their roles
to be taken while visiting the scene. This specification
is based on (Altus 2000) and similar to profiles
of actors in information systems.
It is our aim to specify generic scenes. Thus, we
add the representation styles which can be applied to
the media object of the scene. Representation depends
on the equipment of the actor.
In the city
site projects, we have gained experience with different
representation styles: internet display with high-speed
channel, internet-display with medium speed
display (default style), videotext and WAP display.
For instance, for videotext any graphical information
is cut out or replaced by textual information.
Finally, the context of access is specified. Access
determines the display facilities. Channels can be of
high or low speed. The particular usage of a scene by
an actor depends on the scenario history.
The login scene in Figure 2.2 is based on the schema in Figure 2.3. The corresponding media object specification has the following structure:
MediaObject( @Customer ID ) =
generate (ID, profile, portfolio, context)
from Customer ⋈ Login Account History ⋈ Customer Profile ⋈ Customer Portfolio ⋈ ...
where Customer.ID = @Customer ID ...
represent using XSL style.Ident = Profile Type.Preference.StyleIdent
    & createVarsFor(profile, portfolio, context)
browsing definition Customer portfolio ...
    & Navigation none
The representation styles determine the order and
the tailoring of the elements of the media object.
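A media object of this kind can be read as a parameterized view with attached representation directives. The sketch below materialises such an object from a relational database; the table and column names loosely follow the profiling schema of Figure 2.3 but are assumptions for illustration, as is the function name build_media_object.

import sqlite3

def build_media_object(conn: sqlite3.Connection, customer_id: str) -> dict:
    # 'generate ... from ... where ...': a parameterized view over joined tables.
    row = conn.execute(
        """
        SELECT c.id, p.preferences, pf.portfolio, lh.last_channel
        FROM customer c
        JOIN customer_profile p    ON p.customer_id  = c.id
        JOIN customer_portfolio pf ON pf.customer_id = c.id
        JOIN login_history lh      ON lh.customer_id = c.id
        WHERE c.id = ?
        """,
        (customer_id,),
    ).fetchone()
    if row is None:
        return {}

    # 'represent using ...': attach the representation style taken from the profile.
    style = {"style_ident": row[1]}

    # The runtime media object: content plus representation and context parameters.
    return {
        "id": row[0],
        "portfolio": row[2],
        "context": {"channel": row[3]},
        "representation": style,
    }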
2.4
Dialogue Steps
We conceptualize the customer-WIS interaction as a
dialogue between these two. Therefore the customer-WIS
interaction unfolds in a sequence of dialogue steps, i.e., elementary communication acts.
The
basic WIS-state transformations triggered by actors
can thus be understood as caused by dialogue steps.
These may access the media object that is associated
to the scene within which the dialogue step occurs.
Comparable to (Goldin et al. 2000) we use the following frame for specifying the control of dialogue steps:
on precond if event and guard do action result in postcond
Consequently dialogue steps may be specified by the following frame:
DialogueStep( Identification ) =
    ( sub-unit = view on media object of the scene
      enabled processes = subset of supplied processes, manipulation requests
      actor = subset of enabled actors in a given context
      control = ( precondition, enabling event, guard, postcondition ) )
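The control frame 'on precond if event and guard do action result in postcond' suggests a simple execution rule for a single dialogue step. The sketch below is one possible reading of that rule; the class DialogueStepControl and its callable fields are assumptions made for this illustration.

from dataclasses import dataclass
from typing import Callable

@dataclass
class DialogueStepControl:
    precondition: Callable[[dict], bool]
    event: str
    guard: Callable[[dict], bool]
    action: Callable[[dict], None]
    postcondition: Callable[[dict], bool]

    def perform(self, state: dict, occurred_event: str) -> bool:
        # on precond if event and guard do action result in postcond
        if not self.precondition(state):
            return False
        if occurred_event != self.event or not self.guard(state):
            return False
        self.action(state)
        # The postcondition characterises the resulting WIS state.
        return self.postcondition(state)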
Dialogue step specifications can be represented
graphically as shown in figure 2.4. The figure for the
scene 'anonymous login' represents the specification
of dialogue step 'login'.
Figure 2.4: Dialogue Step for Anonymous Login (sub-unit: BankSurveyView, ServiceOfferView; enabled processes: SelectModule, SelectCommunication, NewSession, AddToLog, LogChannelData, LogUserEngineData; enabled actors: Anonymous User, Visitor; control events: ClickAnonymous, ServicesAvailable, ServiceSelected, CustomerStyleSelected, ClickOnOneOption).
Based on the properties of the actions we conclude,
for instance, that after withdrawal a previous member
of a cooperating group cannot participate in the discussions
in the community. A task property frame is
defined by a task name, reasons for task involvement,
an aim, a postcondition (enabled next activities), the
information from the database, the information for
the database, the resources (actor, resources, partner
), and a starting situation (precondition, activity,
priority, frequency, repetition rate).
We use graphical representations of scene specifications
as indicated by figure 2.5. Scenes are represented
by frameboxes and dialogue steps by ellipses.
The transitions among dialogue steps are represented
by arrows between these. We use the graphical notation
developed for state charts, e.g., the default start
step of a scene is denoted by a solid circle, the end
state by a solid circle surrounded by an empty circle,
the history entry into a scene is denoted by an `H'
surrounded by an empty circle. Furthermore, we can
adopt refinement and clustering, concurrency, delays
and time-outs, transient states, event priorities and
parameterized states. For more detail on state charts
see, e.g. (Harel and Naamad 1996) and for their application
(Rumbaugh et al. 1991).
Figure 2.5: Representation of scene specifications (a scene framebox annotated with involved actors, the story scene sequence, the media object, the representation style, context and task; inside, dialogue steps with sub-unit, enabled process, manipulation sub-request, enabled actor and control are connected by transitions according to the dialogue scene expression).
2.5
Context
Context has usually been defined within the object sets of the database (Bell 2001, Connolly 2001). There are only very few attempts to consider the context of scenarios or stories (Whitsey 2003). In (Thalheim 2000a) context has been defined for media types. For dealing with context more completely and justifiably we start with a dictionary definition of the context of something as that which one needs to understand that something. This implies our understanding of context as a three-place predicate C(S, H, A) which, if true, says
that actor A needs helper H to act reasonably on
S. If the actor is an individual then we stay with
the focus on understanding. For non-human actors,
however, we focus on acting according to predefined
quality aspects and behavior rules. The something we
consider here as relevant are WIS-parts. The helpers
we here take into account are the various data that
are relevant for the WIS-parts in question. The actors
we consider here are the WIS and the individuals
occupying the roles: customer, vendor and developer
with respect to the WIS at hand. We thus distinguish
the following contexts:
Customer's scenario context, i.e., that what the customer needs to understand for efficiently and effectively solving his/her business problem.
Vendor's WIS-context, i.e., that what the vendor needs to understand in order to run the WIS economically. Data that typically is part of this context are:
the intention of the provider,
the theme of the web site,
the mission or corporate identity of the site,
and
the occasion and purpose of the visits of actors
.
Developer's WIS-context , i.e., that what the developer
needs to understand for being capable of implementing
the WIS. Data that typically is part
of this context are:
the potential environment, e.g. hard- and
software, channels,
the information system, especially the associated
databases,
the story space, scenes, dialogue steps, roles, and rights,
the tasks to be performed within the story, and
the roles in the scenario.
Figure 3.1: The Structure of the Web Site Database (an ER schema relating Story, Activity Sequence, Scene, Dialogue Step, Dialogue Expression, Actor, Right, Right Category, Media Object, Task, Task Assignment, Role Category, Representation Style and Context, with attributes such as AcceptCond, Do, Condition, Event, Particular, Default, Obligat, Usage, Emphasis, Profile and Group).
WIS's scene context, i.e., that what the WIS needs in order to make solving certain business problems easy and pleasant for customers. Data that typically is part of this context are:
History and current usage allow context
adaptation to scenarios which are played at
present by the current user.
Adaptation to the current environment is defined
as context adaptation to the current
channel, to the client infrastructure and to
the server load.
Users are grouped to actors. Therefore, we
can define the current user by instantiation
of the actor.
Goals and particular, policy (exceptions, social
, organizational) define a specialization
of the content, structuring and functionality
of a web page.
A WIS is supported by media objects that belong to media types. The collection of all media types is called a suite. Our framework offers four hooks for dealing with the context we need to consider:
1. Specialization of the media type suite for usage and user adaptation: The database types may have subtypes specializing the database types. Media types are defined on the basis of views. Therefore, we can follow the approach discussed in (Thalheim 2000a) for specialization of types. Specialization is definable through specialization of types, instantiation of parameters, extension of types, and restriction and constraint application.
2. Application of rules towards the generation of extended suites: Suites can be extended by providing view rules defining views on top of the media types. This approach supports portfolio extension and container extension.
3. Instantiation of explicit context parameters can be used for adaptation of web sites to the current profile, to the current environment or to the current workload, as sketched below.
4. Storage of a utilization profile, similar to a login track, supports using the history of previous utilization of the web site by the user. We extend the web site database by explicit utilization logs for used media objects, preferences of usage, users' workspaces or work rooms, and variations of users' media objects.
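As an illustration of hook 3, the following sketch instantiates explicit context parameters for a media object and applies the controlled information loss mentioned in subsection 2.3 for low-capability channels. The parameter names (profile, channel, workload) and the function instantiate_context are assumptions made for this example.

from typing import Dict

# Default context parameters of a media object.
DEFAULT_CONTEXT: Dict[str, str] = {
    "profile": "anonymous",
    "channel": "internet-high-speed",
    "workload": "normal",
}

def instantiate_context(media_object: Dict, current: Dict[str, str]) -> Dict:
    # Hook 3: adapt a media object to the current profile, environment and workload
    # by overriding its default context parameters with the current values.
    context = {**DEFAULT_CONTEXT, **current}
    adapted = dict(media_object)
    adapted["context"] = context
    # A low-capability channel triggers controlled information loss
    # (here simply dropping graphical content items).
    if context["channel"] in ("videotext", "wap"):
        adapted["content"] = [c for c in adapted.get("content", []) if c.get("kind") != "image"]
    return adapted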
Developing the Database Used to Generate the Web Site
3.1
Database Modelling for WIS
Web site management becomes a nightmare whenever
a web site has been developed in a handicraft
approach. For this reason, generation of web
sites is currently based on web site content management
. Content management systems currently support
the representation of web pages on a take-and-place
metaphor: Select or compile the content objects
of a page, compile the navigation structure and place
the content objects using page frames. Our web site team also used this approach. It entirely satisfies the needs as long as the general structure of a web site is stable and no adaptation to the user is required.
In order to dynamically generate the web site we
decided to store the web site stories in a database.
The structure of this database is displayed in figure
3.1.
We specify the web site story space based on our language SiteLang. This specification is inserted into the database by the SiteLang editor. We can now extract
the page under consideration by an instantiated
query from this database. Context may be infused
directly depending on the query result.
Similar to the context infusion, users of a web site
have their own profile, their own portfolio and their
history. This information is used for adapting the
content of the web site to the current usage.
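Extracting 'the page under consideration by an instantiated query' from the web site database could look roughly as follows; the table and column names are assumptions loosely based on Figure 3.1, not the schema actually used in the project.

import sqlite3

def extract_scene_page(conn: sqlite3.Connection, scene_id: str, actor_id: str) -> dict:
    # Instantiated query against the web site database: fetch the scene,
    # its dialogue expression and the media object supporting it.
    scene = conn.execute(
        "SELECT id, dialogue_expression, media_object_id FROM scene WHERE id = ?",
        (scene_id,),
    ).fetchone()
    if scene is None:
        return {}
    profile = conn.execute(
        "SELECT preferences FROM customer_profile WHERE customer_id = ?",
        (actor_id,),
    ).fetchone()
    # Context may be infused directly depending on the query result:
    # the profile (if any) selects the representation style of the page.
    style = profile[0] if profile else "default"
    return {
        "scene": scene[0],
        "dialogue_expression": scene[1],
        "media_object": scene[2],
        "representation_style": style,
    }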
3.2
Context Infusion in Scenarios
Typical business processes have a very large number
of variants.
Classically, workflow approaches have
been used for the specification of such varieties. Since the complexity of the variants might be much higher, the workflow approach did not succeed in providing a sound basis for the specification of all variants. We
observe, however, that in practice these varieties are
internally structured. They may be composed, extended
or filtered by smaller scenarios.
E-banking challenges storyboarding by its orthogonality and variety.
Instead of specifying all possible variants we prefer
to model the generation mechanism of the very
large variety of scenarios. This generation supports
runtime adaptation to the current scenario, the context
and other parameters. At the same time, banking
sites risk exposing customers to the "lost in hyperspace syndrome". Therefore, customers
should be supported in tracking back onto the right
path.
Our solution to this challenge is based on generic
parameters that are instantiated depending on the
customer, the history, the context etc. Each set of
media objects is specified by a context-free expression
with a set of parameters. These parameters are
instantiated depending on
the customer profile,
the customer task portfolio,
the customer computational environment,
the presentation environment, and
the available and accessible media objects.
Instead of providing a full generation rule set we illustrate our approach on the basis of an example. A customer of a bank provides his/her identity e1, inserts some data e2,1 and e2,2 in any order or signs that the bank may request these data from somewhere else e2,3. Then the customer seeks a loan and fills the corresponding forms e3. The customer gives bail data in different variants (e5,1 || (e5,2 ; e5,3)). The scenario is supported by the eight media objects. Now we can inject the context into the media object expression of the scenario. For instance, we may have the following stepwise refinements:
Media objects of a scenario:
e1 ; ((e2,1 || e2,2) || e2,3) ; e3 ; (e5,1 || (e5,2 ; e5,3))
Extending by the objects' syntactic verbal context and meta-information:
e16 ; [ e21 ; ] e1 ; ((e2,1 || e2,2) || e2,3) ; e9 ; e3 ; (e10 || e11) ; (e5,1 || (e5,2 ; e5,3))
Extending by story space associations, e.g., side paths, filtering against availability and compiling against the customer profile:
e16 ; [ e21 ; ] e1 ; ((e2,1 || (^SB e2,2 || ^CB e2,2)) || e2,3) ; [( e17 ; e18 ; )] e9 ; ^Gr e3,1 ; ^An e3,2 ; ^Inf e3,3 ; ^Form e3 ; (e10 || e11) ; (e5,1 || (e5,2 ; e5,3))
Filtering with or extending by the web site context:
e16 ; [ e21 ; ] e1 ; (^SB e2,2 || e2,3) ; [( e17 ; e18 ; )] e9 ; ^Gr e3,1 ; ^An e3,2 ; ^Inf e3,3 ; ^Form e3 ; (e10 || e11) ; (e5,2 ; e5,3)
Coping with the customer's history - already finished dialogue steps and repeating dialogue steps:
^Rep e1 ; [( e17 ; e18 ; )] ^Rep e9 ; ^Gr e3,1 ; ^An e3,2 ; ^Inf e3,3 ; ^Form e3 ; (e10 || e11) ; (e5,2 ; e5,3)
Coping with the customer's history, negotiation steps and pragmatical elements:
^Rep e1 ; e25 ; [( e17 ; e18 ; )] ^Rep e9 ; ^Gr e3,1 ; ^An e3,2 ; ^Inf e3,3 ; ^Form e3 ; (e10 || e11) ; (e5,2 ; ^Prak e5,2 ; e5,3)
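One way to read this stepwise refinement is as a rewriting of a parameterized scenario expression: each step extends, filters or decorates elements depending on profile, context and history. The sketch below illustrates this on a simplified list representation; the rule names and filter criteria are invented for the illustration and do not reproduce the paper's generation rules.

from typing import Callable, List

# A scenario is represented here simply as a list of media object identifiers.
Scenario = List[str]
Rule = Callable[[Scenario, dict], Scenario]

def extend_with_meta_information(scenario: Scenario, ctx: dict) -> Scenario:
    # Prepend verbal context / meta-information objects (e16, optional e21).
    prefix = ["e16"] + (["e21"] if ctx.get("first_visit") else [])
    return prefix + scenario

def filter_against_profile(scenario: Scenario, ctx: dict) -> Scenario:
    # Keep only elements that are available and accessible for this customer.
    available = ctx.get("available", set(scenario))
    return [e for e in scenario if e in available]

def cope_with_history(scenario: Scenario, ctx: dict) -> Scenario:
    # Mark already finished dialogue steps as repeatable instead of dropping them.
    finished = ctx.get("finished", set())
    return [f"^Rep {e}" if e in finished else e for e in scenario]

def infuse_context(scenario: Scenario, ctx: dict, rules: List[Rule]) -> Scenario:
    for rule in rules:
        scenario = rule(scenario, ctx)
    return scenario

# Example: the base scenario of the loan example, refined step by step.
base = ["e1", "e2,1", "e2,2", "e2,3", "e3", "e5,1", "e5,2", "e5,3"]
ctx = {"first_visit": True,
       "available": {"e1", "e2,2", "e2,3", "e3", "e5,2", "e5,3", "e16", "e21"},
       "finished": {"e1"}}
print(infuse_context(base, ctx, [extend_with_meta_information, filter_against_profile, cope_with_history]))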
3.3
The Onion Generation
XML documents provide a universal structuring
mechanism. XSL rules allow XML documents to be generated from XML suites. This opportunity supports
a multi-layer generation of web information systems.
Thus we use the multi-layer onion generation presented
in Figure 3.2.
Figure 3.2: The Onion Approach to Stepwise WIS-Generation (from outermost to innermost layer: presentation engine with actor profile adaptation, equipment adaptation, channel adaptation, decomposer and style extension; container engine with service packages, wrapping functions, dialogue scene and scenario functions; units engine with survey, landmark, indexing, I/O, navigation, integration etc. functions; view handler with virtual and materialized views and update views; DBS and DBMS at the core).
The onion generation approach is based on the layered
structure of the WIS arising from the use of SiteLang
and media objects. On the outermost shell the
presentation facilities are introduced. This shell deals
with style presentation functions. Containers used in
the next inner shell are used to ship information from
the web-server to the user. Thus, this shell deals with
the adaptation to the user and his/her environment.
The next inner shell handles the information units,
i.e. the core media objects. Inside this shell we find
further shells dealing with views on the underlying
database, and innermost we find the database itself.
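The onion can be read as a pipeline of transformations from the database outwards. The sketch below composes such layers as simple functions; the layer names mirror Figure 3.2, while the concrete transformations are placeholders invented for this illustration.

from typing import Callable, Dict, List

Layer = Callable[[Dict], Dict]

def view_handler(db_row: Dict) -> Dict:
    # Innermost shells: views on the underlying database.
    return {"unit": {"title": db_row["title"], "body": db_row["body"]}}

def units_engine(data: Dict) -> Dict:
    # Information units: add survey, landmark and navigation functions.
    data["navigation"] = ["home", "back", "index"]
    return data

def container_engine(data: Dict) -> Dict:
    # Containers ship information from the web server to the user.
    return {"container": data, "wrapping": "service-package"}

def presentation_engine(data: Dict) -> Dict:
    # Outermost shell: actor profile, equipment and channel adaptation, styling.
    data["style"] = "internet-high-speed"
    return data

def generate_page(db_row: Dict, layers: List[Layer]) -> Dict:
    result: Dict = db_row
    for layer in layers:
        result = layer(result)
    return result

page = generate_page({"title": "Home loans", "body": "..."},
                     [view_handler, units_engine, container_engine, presentation_engine])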
The onion approach fits nicely into a translational
approach, which generates consistent sets of XML
documents. In our projects we used the XML extender
of the database system DB2 to generate XML
documents. Thus, the layering approach to the generation of XML displayed in Figure 3.2 allows another strategy for generating XML documents to be used. This facility is displayed in Figure 3.3. This transformation approach has been successfully used in two of our e-learning projects and our community services projects. These projects require sophisticated context adaptation. The approach implements
an XML suite on top of the relational DBMS
DB2. The extended ER model (Thalheim 2000) provides
a better approach to XML suite generation than
relational models or the classical ER model for a number
of reasons:
Structures can be defined already in complex
nested formats.
Types of higher order are supported.
The model uses cardinality constraints with participation
semantics.
Figure 3.3: The General Procedure for Translation from SiteLang to XML (conceptual representation: dynamic scene object, container, media object and views over a database schema (HERM); abstract XML representation: XML scene onion, container onion, media object onion and XML suite DTD, obtained via functors for XSLT; XML implementation on top of DB2: reflective adaptations, meta functions, enriched XML suites and XML documents accessed through a DAC for DB2).
An Advanced e-Banking Application
4.1
Banking and Mortgages
According to (Wierichs and Smets 2001) a bank is an
". . . institution that as part of an economy offers financial
services. The economical function of banks is
to create a liquidity equalization in the cash flow that
is reverse to the product and service flow. The focal
points of the bank operational activity are conducting
payments, acceptance of money for investment, and
granting credits." Furthermore the particular liquidity
equalization that is chosen out of the set of possible
such equalizations is a preferable one. The respective
preference structure is worked out by banks
on base of an assessment involving financing cost and
interests, see (Matthews et al. 2003).
A loan according to (Wierichs and Smets 2001) is
the "relinquishment of money or other fungible (...)
properties connected with the obligation of the debtor
to give back the relinquished in equal kind, quality
and quantity." An enhanced version of the model of
the loan process is represented in figure 4.1 as a UML
sequence diagram. It shows the roles involved in the
loan process as the labels inside the rectangular boxes
on the top of the diagram. It further indicates the
concurrency that may be utilized in this process. It
achieves this by means of showing the communication
between the roles. This communication is represented
by the arrows starting at the dashed lines
that represent logical time, i.e., life lines of the roles.
The labels attached to the arrows indicate the content
of the message associated with the respective arrow.
The bottom level rectangle containing the messages
'Payback()' and 'CheckPayback()' signifies that these
messages are to be repeatedly sent until the stop condition
signified by the asterisk and displayed below
the rectangle 'debit position balanced' becomes true.
4.2
Mortgages and variants
Figure 4.1 schematizes the process from a bank technical point of view. This process clearly is not fully suited as the only base of application development. For aiding development more information is needed about how customers are anticipated to interact with the system under construction. We here use the functions of the story space of a WIS W to show how the customer interaction with the WIS changes its appearance for the customer. Story boarding is a useful technique to obtain the required information.
Our respective starting point is the investigation
of the Web site of the Australian and New Zealand
based ASB Bank. From earlier work, see (Kaschek
et al. 2003) we knew that it offered an online loan
application facility.
We investigated this Web site
more closely and found that this site at each of its
pages essentially offered customers data that can be
typed as follows:
advertisement, i.e., information about ASB
Bank including a welcome and a logo.
disclaimer, i.e., a statement limiting the legal
responsibility of ASB Bank with respect to the
data displayed and the implications customers
might draw from it.
search, i.e., a facility taking an unlimited customer
input and returning those ASB Bank
pages that best met this search expression.
highlights, i.e., the main contents that ASB
Bank wants to be displayed at each particular
of its Web pages.
path, i.e., a redundancy eliminated sequence of
ASB Bank Web pages visited so far by the customer
interacting with the site and supposed to
be used as a navigation aid.
reference, i.e., a couple of links the target of
which offer more information about the page actually
visited by the customer.
business branch selector, i.e., a navigation
bar that breaks down the information space of
the site into subspaces according to the business
branches of ASB Bank.
subspace selector, i.e., a navigation bar that
for each subspace that corresponds to a business
branch breaks down the subspace into 2nd. level
subspaces.
subspace navigator, i.e., for each 2nd. level
subspace a navigation bar breaking down the
subspace in a number of information space locations
.
Figure 4.1: UML sequence diagram representing the loan process (roles: Marketer, Customer, Analyst, Lender, Service, Accounts, Monitor; messages include ProductAd(), Inquiry(), RevisedProductAd(), Application(), ApplicationApproval(), Notification(), Documentation(), SignedContract(), AdvanceFunds(), PositionsGenerated(), UseLoan(), Payback() and CheckPayback(), the latter two repeated until the debit position is balanced).
We have then represented the navigation structure
offered by the ASB Web site as a state chart the
states of which represent scenes. The state transitions
are presupposed to be triggered by customers clicking
links, i.e., navigation events. The labels attached
to the state transitions are a string representing the
navigation event and an action carried out throughout
the transition. This action is prefixed by a slash,
i.e., by "/". The semantics of the action is specified in the form of a programming-language-like assignment
listed above. In this way one can specify what data
and functionality is accessible to a customer at a particular
scene. Explanations and tables or the like are
presupposed to be just text. All other transition labels
used as values in assignments are presupposed
to be links. If a variable is supposed to hold several
links then these are connected by a plus sign, i.e., by
"+". If more than one action has to take place at a
transition then all these actions are connected by a
&-sign.
The variables used in figure 4.2 are D, S, H, A,
R and P respectively representing values of type disclaimer
, search facility, highlight, advertisement, references
and path. Those of them being displayed at
a particular page are represented as non delimited
string, i.e., if all of them occur the string DSHARP is
attached as label to the state representing the page.
Furthermore the variables BS, SS and SN are used to
respectively represent values of type business branch
selector, subspace selector and subspace navigator.
The initial state in the figure is reached after moving
onto the home page of ASB Bank and clicking
BS.Personal which signifies the business branch of retail
banking. The other 1st. level subspaces of the
application's information space are "All", "Business",
"Institutional" and "Rural" in the obvious meaning
. The subspace selector of "BS.Personal" allows choosing from 18 different 2nd-level subspaces. One
of them is Home loans. Clicking it, i.e., "BS.Personal-SS
.Home loans" leads to the initial state of figure 4.2.
If required we could add further navigation detail
including the impact of navigation on the variables
occurring in the figure. Furthermore, if required, we could add further variables to represent
data of types here not dealt with.
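The navigation structure with typed variables can be simulated directly: each state carries the values of the variables D, S, H, A, R, P, BS, SS and SN, and each transition consists of a navigation event together with assignments. The states, events and assignments below are a small excerpt inspired by the description of Figure 4.2; they are illustrative and not a complete transcription.

from typing import Dict, List, Tuple

# A page state: variable contents such as the highlights H.
State = Dict[str, object]

# Transitions: (source state, navigation event) -> (target state, assignments).
TRANSITIONS: Dict[Tuple[str, str], Tuple[str, Dict[str, List[str]]]] = {
    ("Home loans", "SN.Home Loan Calculator"): (
        "Lending Calculators",
        {"H": ["Affordability Calculator", "Amount Required Calculator",
               "Home Loan Options Calculator", "Arrange your loan"]},
    ),
    ("Lending Calculators", "H.Affordability Calculator"): (
        "Affordability Calculator",
        {"H": ["Amount Required Calculator", "Home Loan Options Calculator"]},
    ),
    ("Home loans", "SN.Arranging a home loan"): (
        "Arranging a loan",
        {"H": ["Apply by phone", "Apply online", "We come 2 u", "U come 2 us", "Send inquiry"]},
    ),
}

def navigate(current: str, variables: State, event: str) -> str:
    # Perform the transition triggered by a navigation event and apply the
    # '/ H := ...' style assignments to the page variables.
    key = (current, event)
    if key not in TRANSITIONS:
        return current
    target, assignments = TRANSITIONS[key]
    variables.update(assignments)
    return target

page_vars: State = {"displayed": "DSHARP, BS, SS, SN"}
state = navigate("Home loans", page_vars, "SN.Home Loan Calculator")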
4.3
Adaptation to customers, context and
specific case
Adaptation to customers is a must if optimal customer
support is aimed at. ASB Bank realizes a limited
customer adaptation in that it offers in the subspace
selector of BS.Personal second level subspaces
both for kids and for young folks. Concerning the home loan subspace of its information space, ASB Bank does not offer much adaptation to customers. It only offers a dictionary of bank technical terms and specifically addresses first home buyers. The Web site is neither available in all official New Zealand languages nor can it be tuned to accommodate disabilities such as weak eyesight or color blindness.
The approach to adaptation to customers taken by
sites like the one under investigation consists in identifying
the subspace of the information space they
create that most likely will fit best the needs of a
particular customer. The match between customer and subspace is then done such that the customer is asked to enter some of his or her characteristics into the system, and based on these the respective subspace is chosen. ASB Bank does so concerning kids and young folks. Other banks additionally have the customer types student or wealthy individual. This strategy is suggested by the fact that the site vendor in general does not know much about the individuals accessing its site. A technique for bringing knowledge about customers into the design process is the creation and use of personas, i.e., archetypical customers; the navigation structure as well as the page layout are then designed such that they fit optimally to the personas used. For more detail about personas, in particular their construction, see, e.g. (Wodtke 2003, pp. 159).
Adaptation of the business case at hand of course
can only be achieved in response to the customer-site
interaction. In the navigation structure diagram in figure 4.2 we have used variables of data types that were chosen with respect to the site at hand, i.e., ASB Bank's Web site. We expect that this adaptation can always be achieved the way we have proposed here. Once the analysis has shown what data and functionality shall be accessible to customers, data and functionality can be typed and variables of the respective type can be used to describe how the site adapts to the actual use. A type-level adaptation that can be carried out while customers are interacting with a site is semi-automatic reconsideration of the type of customer: Customers in this respect are presupposed to
be characterized by a value for each of a number of dimensions.
Figure 4.2: Navigation structure of a part of ASB Bank's Web site (a state chart whose states include Home loans, Arranging a loan, Lending Calculators, Affordability Calculator, Amount Required Calculator, Home Loan Options Calculator, Online home loan application, Loans at a glance, Interest rate options, Interest rates, Home buyers guide, Review your loan, Move your loan to us, Fixed rate expiry and Mobile lending service; transitions are labelled with navigation events and assignments to the variables D, S, H, A, R, P and to the selectors BS, SS and SN).
The customer type according to (Schewe
and Thalheim 2001) can be defined as a convex region in the multi-dimensional space created as the cartesian product of the scales associated with the customer dimensions. Based on an automatic customer assessment that, in response to his or her site interaction, updates the scores in each of the dimensions throughout customer-site interaction, one can then track how a customer's trace moves through this space and detect when a modified type would fit the customer's behavior better than the actual type does. Clearly such an update should only be done with customer permission.
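As a sketch of such type-level adaptation, the following code keeps a per-dimension score for a customer, updates it on interaction events and reclassifies the customer when another type region fits better. The dimensions, regions and update rule are invented for this illustration and are not taken from the paper.

from typing import Dict, Tuple

# Customer types as axis-aligned (hence convex) regions over two hypothetical
# dimensions, 'experience' and 'risk_appetite', both scaled to the interval 0..1.
TYPE_REGIONS: Dict[str, Dict[str, Tuple[float, float]]] = {
    "first-home-buyer": {"experience": (0.0, 0.4), "risk_appetite": (0.0, 0.5)},
    "experienced-borrower": {"experience": (0.4, 1.0), "risk_appetite": (0.0, 1.0)},
}

def update_scores(scores: Dict[str, float], event: str) -> Dict[str, float]:
    # A very small update rule: certain navigation events raise the experience score.
    if event in ("H.Home Loan Options Calculator", "SN.Interest rate options"):
        scores["experience"] = min(1.0, scores["experience"] + 0.1)
    return scores

def classify(scores: Dict[str, float]) -> str:
    for type_name, region in TYPE_REGIONS.items():
        if all(lo <= scores[d] <= hi for d, (lo, hi) in region.items()):
            return type_name
    return "default"

scores = {"experience": 0.35, "risk_appetite": 0.2}
current_type = classify(scores)
scores = update_scores(scores, "H.Home Loan Options Calculator")
new_type = classify(scores)
# A change of type would only be applied with the customer's permission.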
Conclusion
Banking services such as online home-loan application
require a very sophisticated and well-adapted internet
interface. Customers want to focus on solving
their business problem, i.e., the goal they want to
achieve by means of interacting with the WIS. They
consider WIS as tools that shall be easy to handle,
completely cover the business and do not add technical
complexities to it. Customers want to have a
pleasant usage experience, in particular they do not
want to be treated like everybody else. They want WIS to remember and exploit their usage peculiarities in authenticated and, where adequate, in anonymous sessions. This paper shows how this can be achieved.
As a guiding principle we introduce considering context
and using it to simplify WIS-handling. We have
shown how WIS's scene context can be injected into
the WIS and how XML suites can be generated using
the story board of the site and available customer
data.
Acknowledgements
We would like to thank Hans-Jürgen Engelbrecht from the Department of Applied & International Economics at Massey University for pointing us to work on the economic effects of technical progress in the banking industry.
References
Altus, M.
Decision support for conceptual database
design based on the evidence theory - An intelligent
| scenarios;Web Information Systems;web site;Web services;usability;story boarding;context-awareness;context-aware information systems;web information system;media objects;scenes;media type;SiteLang |
57 | Contour-based Partial Object Recognition using Symmetry in Image Databases | This paper discusses the problem of partial object recognition in image databases. We propose the method to reconstruct and estimate partially occluded shapes and regions of objects in images from overlapping and cutting. We present the robust method for recognizing partially occluded objects based on symmetry properties, which is based on the contours of objects. Our method provides simple techniques to reconstruct occluded regions via a region copy using the symmetry axis within an object. Based on the estimated parameters for partially occluded objects, we perform object recognition on the classification tree. Since our method relies on reconstruction of the object based on the symmetry rather than statistical estimates, it has proven to be remarkably robust in recognizing partially occluded objects in the presence of scale changes, rotation, and viewpoint changes. | INTRODUCTION
Most existing methods for object recognition are based on full objects.
However, many images in electronic catalogs contain multiple objects
with occluded shapes and regions. Due to the occlusion of objects,
image retrieval can provide incomplete, uncertain, and inaccurate
results. To resolve this problem, we propose a new method to
reconstruct objects using symmetry properties, since most objects in a
given image database are represented by symmetrical figures.
Even though there have been several efforts in object recognition with
occlusion, current methods have been highly sensitive to object pose,
rotation, scaling, and the visible portion of occluded objects [12] [9] [17]
[3] [15]. In addition, many appearance-based and model-based object
recognition methods assume that the occluded regions of objects or
images are known in advance, obtained through extensive statistical
training processes. In contrast, our approach is not restricted by pose
or scale changes and does not need extensive training processes.
Unlike existing methods, our method finds shapes and regions to
reconstruct occluded shapes and regions within objects. Our approach
can handle object rotation and scaling for dealing with occlusion, and
does not require extensive training processes. The main advantage of
our approach is that it makes reconstructing objects from occlusions
simple. We present a robust, contour-based method for recognizing
partially occluded objects based on symmetry properties; the
contour-based approach finds a symmetry axis using the maximum
diameter of the occluded object. In experiments, we demonstrate how
our method reconstructs and recognizes occluded shapes and regions
using symmetry. The experiments use rotated and scaled objects to test
the handling of occlusion. We also evaluate the recognition rate of the
objects reconstructed using symmetry and the visible portion of
occluded objects required for recognition.
The rest of this paper is organized as follows. In Section 2, we briefly
review work related to this study. In Section 3, we describe a method
to recognize partial objects from given classes. In Section 4, we
describe experimental results for partial object recognition. Finally,
we summarize this paper in Section 5.
RELATED WORK
There have been several research efforts in object recognition for
dealing with occlusion. Krumm [13] proposed a new algorithm for
detecting objects in images which uses models based on training
images of the object, with each model representing one pose.
Williams [23] proposed a method for the reconstruction of solid-shape
from image contour using the Huffman labeling scheme. For
object recognition, Chang and Krumm [3] used the color
cooccurrence histogram based on pairs of pixels. Schiele et al. [20]
proposed a method to perform partial object recognition using
statistical methods, which are based on multidimensional receptive
field histograms. In addition, Rajpal et al. [17] introduced a method
for partial object recognition using neural network based indexing.
In appearance-based object recognition, Edwards and Murase [6]
addressed the occlusion problem inherent in appearance-based
methods using a mask to block out part of the basic eigenimages and
the input image. Leonardis and Bischof [14] handled occlusion,
scaling, and translation by randomly selecting image points from the
scene and their corresponding points in the basis eigenvectors. Rao
[18] applied the adaptive learning of eigenspace basis vectors in
appearance-based methods. Ohba and Ikeuchi [16] were able to
handle translation and occlusion of an object using eigenwindows.
Current methods for dealing with occlusion have been based on
template matching, statistical approaches using localized invariants,
and recognition of occluded regions based on local features. In
addition, there are many efforts in ellipse construction and detection
[7][9][22]. In this paper, we propose unique methodologies in object
recognition for dealing with occlusion based on symmetry properties
through the ellipse reconstruction.
Even though there have been several efforts in object recognition with
occlusion, current methods have been highly sensitive to object pose
and scaling. In addition, many appearance-based and model-based
object recognition methods assumed that they have known occluded
regions of objects or images through extensive training processes.
However, our method is not restricted by pose or scale changes, and
does not require extensive training processes.
THE PROPOSED METHOD
We discuss the object reconstruction and the parameter estimation
method to find the best matching class of input objects using the
classification tree method [4]. We extract shape parameters from the
reconstructed objects using RLC lines, such as roundness, aspect ratio,
form factor, and surface regularity [5]; a small sketch of such measures
is given below.
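As an illustration only (these are the commonly used textbook definitions, not necessarily the exact formulas of this paper), the descriptors can be computed from the area A, perimeter P, and maximum/minimum diameters of a region; function and argument names below are ours.

```python
import math

def shape_parameters(area, perimeter, max_diameter, min_diameter):
    """Common shape descriptors; exact definitions may differ from the paper's."""
    form_factor = 4.0 * math.pi * area / (perimeter ** 2)      # 1.0 for a circle
    roundness = 4.0 * area / (math.pi * max_diameter ** 2)     # 1.0 for a circle
    aspect_ratio = max_diameter / min_diameter                 # >= 1.0
    return {"form_factor": form_factor,
            "roundness": roundness,
            "aspect_ratio": aspect_ratio}

# A circle of radius 10: all three descriptors should be (close to) 1.0.
print(shape_parameters(area=math.pi * 100, perimeter=2 * math.pi * 10,
                       max_diameter=20, min_diameter=20))
```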
The approach tries to find occluded shapes within partially occluded
objects. The basic assumption is that most objects are represented by
symmetrical figures. When a symmetric object is partially occluded,
we use the symmetry measure to evaluate the symmetric shape. We
estimate the most similar parameters of occluded shape and region of
objects, and we retrieve objects that have the estimated parameters of
occluded objects.
The basic idea of reconstructing and estimating occluded objects is to
use symmetry properties within objects together with the contours of
objects. Fortunately, most products in electronic catalogs have
symmetry in their shapes and they are represented by symmetrical
figures. Symmetrical descriptions of shape or detection of
symmetrical features of objects can be useful for shape matching,
model-based object matching, and object recognition [2] [1].
In the given database, we have elliptical and roughly-rounded objects
such as plates, cups, pans, and pots, depending on their poses and
shapes. First, we consider elliptical objects in which the occlusion
changes values of measurements and parameters related to diameters.
We assume that we can get diameters from elliptical objects, which
are partially occluded.
Figure 3.1 Three-Spoke from the Triangle.
However, the elliptical model is limited to particular object shapes.
Therefore, it may not apply to other types of shapes, such as
irregular ones. In such cases, since we cannot easily detect the
symmetry axes, we introduce the three-spoke type symmetry method
as shown in Figure 3.1. We apply this approach to roughly-rounded
objects such as cups.
For roughly-rounded objects, we use the three-spoke type method,
which is derived from the triangle. The triangle is a basic model to
represent figures such as circle, rectangle, and polygon. We use
extended lines of the triangle to make axes as shown in Figure 3.1.
The three-spoke type symmetry axes, which are equally assigned by
120 degrees, provide the possibility to detect proper symmetry axes
on roughly-rounded objects. Therefore, this method can detect
symmetry axes in roughly-rounded objects.
In order to perform the following procedures, we assume that objects
are represented by symmetrical figures.
1.
Given an occluded elliptical object as in Figure 3.2 or a roughly-rounded
object as in Figure 3.6, we get the cutting points of the occlusion,
(x,y)' and (x,y)'', which are produced by overlapping or cutting.
Figure 3.2 The Occlusion Area
Estimation using Symmetry: Get
cutting points (x,y)' and (x,y)''
and get a distance l'.
Figure 3.3 The Occlusion Area
Estimation using Symmetry:
Get the maximum diameter
and the symmetry axis.
Figure 3.4 The Occlusion
Area Estimation using
Symmetry: Get the estimated
region a' using a line l' and
the symmetry axis.
Figure 3.5 The Occlusion Area
Estimation using Symmetry:
Add region a' to occluded shape
and region and re-captured the
estimated shape of an object.
2.
Compute the distance between the two cutting points (x,y)' and (x,y)'';
the connecting segment is called line l', as in Figures 3.2 and 3.6.
3.
Based on line l', connect the two points, fill the concave region, and
re-capture the shape. This is important for computing the centroid of
the object.
4.
Get the maximum diameter of the re-captured shape using extremal
points, as shown in Figures 3.4 and 3.7. Two extremal points (r, l) and
(r, l)' are taken from the re-captured shape as in Figure 3.7. The
distance between these two extreme boundary points is the maximum
diameter.
5.
In elliptical objects, either the maximum or the minimum diameter can
serve as a symmetry axis. In roughly-rounded objects, where we use the
three-spoke type symmetry, one spoke can be a symmetry axis for finding
the occluded region within an object.
6.
Centroid Detection: In case of elliptical objects, we find a
centroid based on the maximum diameter and a line
perpendicular to the maximum diameter, which is located in the
center of the length of the maximum diameter. We select
symmetry axes based on one of these lines as in Figure 3.3. In
roughly-rounded objects, we get a centroid, based on whole
region of an object. Equation 2 is adapted from Russ [19]. If the
centroid is calculated by equation 1 using the boundary pixels
only, the results may not be correct. The calculated points will be
biased toward whichever part of the boundary is most complex
and contains the most pixels. The correct centroid location uses
the pairs of coordinates $(x_i, y_i)$ for each point in the shape
boundary. The centroid of an irregular shape is calculated
correctly using all of the pixels in an object.

$$C_x = \frac{\sum_{i=0}^{k} x_i}{\text{Area}}, \qquad C_y = \frac{\sum_{i=0}^{k} y_i}{\text{Area}} \qquad (1)$$
$$C_x = \frac{\sum_{i=0}^{k} \frac{1}{2}(x_i + x_{i-1}) \cdot \frac{1}{2}(y_i - y_{i-1})}{\text{Area}}, \qquad C_y = \frac{\sum_{i=0}^{k} \frac{1}{2}(y_i + y_{i-1}) \cdot \frac{1}{2}(x_i - x_{i-1})}{\text{Area}} \qquad (2)$$
7.
In roughly-rounded objects, the centroid is placed at the center of the
three-spoke type symmetry axes.
Figure 3.6 The occlusion of a
cup: Get a centroid after re-captured
a shape.
Figure 3.7 Get extremal points
(r,l), (r,l)' and (r,l)'',(r,l)''' and
the maximum diameter of an
object.
Figure 3.8 Use the three spoke
type symmetry: Match a center of
the spoke to a centroid and
parallel one of axes to the
maximum diameter.
Figure 3.9 Extend axes and
make symmetry axes.
Figure 3.10 Select a symmetry
axis based on two regions,
which are A and B.
Figure 3.11 Find a region a'
of occluded shape using a
symmetry axis and add to a
occluded shape.
8.
Axis Detection: The midpoint of the major axis is called the
center of the ellipse. The minor axis is the line segment
perpendicular to the major axis which also goes through the
center and touches the ellipse at two points. In elliptical objects,
we detect a symmetry axis based on the maximum diameter or
the minimum diameter. To find a symmetry axis in roughly-rounded
objects, one of axes of the three-spoke type symmetry
axes is in parallel with the maximum diameter of an object as
shown in Figure 3.8.
Based on occluded shape and region, we select a symmetry axis
to estimate this region within an object. Figures 3.9 and 3.10
show how to select a symmetry axis. When we select an axis in
roughly-rounded objects, we consider conditions as follows:
Select axes that do not intersect the occluded region. Select axes
that bound a region containing the maximum diameter l'. Since area
and perimeter are invariant, compare the proportions of regions A
and B as in Equation 3:

$$\left(\frac{\text{Perimeter}}{\text{Area}}\right)_A \approx \left(\frac{\text{Perimeter}}{\text{Area}}\right)_B \qquad (3)$$
9.
Using mirror symmetry, we can obtain points across an axis. We find
points on the contour, across the axis, which have the same length l'
and the same angle with respect to the line perpendicular to the
symmetry axis, although the distances between the axis and the points
may or may not be the same.
10.
Capture a region a', move the captured region to the occluded shape
using mirror symmetry, and add it to the occluded region, as shown in
Figures 3.4, 3.5, and 3.11.
11.
Re-compute shape measurements such as area, diameters, and perimeter
using RLC lines from the re-captured shape of the object. Then,
re-compute the shape parameters based on these measurements.
12.
Apply to a classifier.
In the above discussion, we described how to reconstruct and estimate
the partially occluded shape and region of an object, and how to find
the best matching class for partially occluded objects after the
estimation. A small code sketch of steps 4, 6, and 10 is given below.
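The sketch below is only an illustration of steps 4, 6, and 10 under simplifying assumptions: the contour is a list of (x, y) boundary points, the maximum diameter is found by brute-force pair search, the centroid is the mean of all object pixels, and the mirror copy reflects contour points across the chosen symmetry axis. Function and variable names are ours, not part of the original method.

```python
import numpy as np

def maximum_diameter(contour):
    """Step 4: find the two extremal boundary points with the largest distance."""
    pts = np.asarray(contour, dtype=float)
    # Brute-force search over all point pairs (fine for small contours).
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    i, j = np.unravel_index(np.argmax(d), d.shape)
    return pts[i], pts[j], d[i, j]

def centroid(object_pixels):
    """Step 6: centroid computed from all pixels of the (re-captured) region."""
    return np.asarray(object_pixels, dtype=float).mean(axis=0)

def mirror_across_axis(points, axis_point, axis_dir):
    """Step 10: reflect points across the symmetry axis to copy a region
    onto the occluded side."""
    d = np.asarray(axis_dir, dtype=float)
    d = d / np.linalg.norm(d)
    p = np.asarray(points, dtype=float) - axis_point
    # Reflection: keep the component along the axis, flip the orthogonal one.
    along = (p @ d)[:, None] * d
    return axis_point + 2 * along - p

# Example: an ellipse contour with one quadrant "occluded" (missing).
t = np.linspace(0, 1.5 * np.pi, 120)
contour = np.c_[40 * np.cos(t), 25 * np.sin(t)]
p1, p2, dmax = maximum_diameter(contour)
c = centroid(contour)            # stand-in for the pixel-based centroid
axis_dir = p2 - p1               # symmetry axis parallel to the maximum diameter
mirrored = mirror_across_axis(contour, c, axis_dir)
print("max diameter:", round(dmax, 1), "centroid:", np.round(c, 1))
```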
EXPERIMENTAL RESULTS
In this section, we evaluate and describe the results of partial object
recognition by our proposed method. We have selected 190 partially
occluded objects of images from electronic catalogs on the Internet as
well as manipulated images. We assume that occluded objects have
more than 50% visibility of objects, and images of catalogs contain
partially occluded objects. The objects are categorized by semantic
meanings such as cup and plate. In addition, our approaches and
experiments are limited to cups and plates since we use roughly-rounded
or elliptical objects. More precisely, the database contains 32
objects from different viewpoints and images of 97 objects
comprising image plane rotations and scale changes.
In sample images, we have extracted image features of partially
occluded objects such as shape and texture. We experimented with
shape reconstruction based on the contour of objects using symmetry
properties. We assumed that inputs are not correctly classified and
have occlusion.
We experimented with samples such as plates and cups to reconstruct
the occluded shape of objects, as shown in Figures 4.1 and 4.2. In
Figure 4.2, the object is correctly classified after reconstruction with
an occlusion of about 30%. On the other hand, the object in Figure 4.1
is not correctly classified after reconstruction since the width of the
plate is too narrow. This experiment shows that our method relies
heavily on the shape of objects.
Figure 4.1 Example of the occlusion with a Plate.
Figure 4.2 Example of the manipulated occlusion with a Cup.
Finally, we performed an experiment for the relationships between
visible portion of objects and recognition rates. In order to evaluate
the visibility of objects, we used manipulated images of cups and
plates. Figure 4.3 shows the pattern of object recognition in the
presence of partial occlusion of objects and the results obtained by the
symmetric recognition. A visible portion of approximately 67% is
sufficient for the recognition of objects based on the contour.
Figure 4.3 Object recognition in the presence of the occlusion of
objects based on the contour.
There are many efforts in object recognition for dealing with
occlusion. The visible portion of objects required to recognize
occluded objects is shown in Table 4.1, which gives a simple comparison
between our method and other existing methods. The probabilistic
method based on local measurements requires only small portions of
objects to recognize whole objects, but it requires extensive training
processes to recognize occluded objects [21] [20]. Our method achieves
good visibility for partial object recognition and does not need
extensive training processes.
Table 4.1 The visibility of object recognition in the presence of
partial occlusion.
Methods | Visibility | Training processes
Appearance matching techniques using adaptive masks | 90% | not required
Probabilistic technique using Chi-square | 72% | required
Probabilistic technique using local measurements | 34% | required
Contour-based approach using symmetry | 67% | not required
In order to measure the influence of occlusion and compare its impact
on the recognition performance of the different methods, we
performed an experiment as follows.
Figure 4.4 summarizes the recognition results for different visible
object portions. For each test object, we varied the visible object
portion from 20% to 100% and recorded the recognition results using
Chi-square divergence and our method.
Figure 4.4 Experimental results with occlusion.
The results show that our method clearly obtains better results than
Chi-square divergence. Using only 60% of the object area, almost
80% of the objects are still recognized. This confirms that our method
is capable of reliable recognition in the presence of occlusion.
Table 4.2 Summary of Object Recognition Methods for dealing
with Occlusion.
Methods | Occlusion | Scale changes | Object Pose | Rotation
Bischof et al. [1] | Yes | Yes | No | No
Edwards et al. [6] | Yes | Yes | No | Yes (limited)
Ohba et al. [16] | Yes | No | Yes | No
Rao [18] | Yes | No | Yes | No
Jacob et al. [11] | Yes | No | Yes | No
Krumm [13] | Yes | No | No | No
Contour-based using symmetry | Yes | Yes | Yes (limited) | Yes
Table 4.2 summarizes the various object recognition methods. The
table indicates whether the methods can handle occlusion, rotation,
pose, and changes in the size of objects in the database. Unlike the
other methods, our method can handle scale change, object pose, and
rotated objects with occlusion, even though it has minor limitations on
object pose.
CONCLUSION
In this paper, we have discussed how to estimate parameters and to
reconstruct the occluded shape of partial objects in image databases.
In order to reconstruct occluded shapes, we used symmetry, which
provides a powerful basis for partial object recognition. Unlike
existing methods, our method tries to reconstruct the occluded shapes
and regions within objects, since most objects in our domain have
symmetrical figures. However, we have limitations regarding the shape
of objects and the occluded region of objects. For example, if a pan's
handle is occluded, the handle cannot be correctly reconstructed and the
object cannot be recognized.
Another minor limitation of our method is that it is sensitive to the
pose of an object. For example, if we cannot see an ellipse due to the
object's pose, we cannot recognize the object. After estimation, we
applied the inputs, which include the estimated parameters, to the
existing classification trees to obtain the best matching class.
All experiments were performed with the classifier from our earlier
work. The results show that occluded objects are properly reconstructed,
estimated, and classified, even though the sample size was limited. In
addition, the experiments demonstrate the power of symmetry.
REFERENCES
[1]
H. Bischof and A. Leonardis. Robust recognition of scaled
eigenimages through a hierachical approach. In IEEE Conference
on Computer Vision and Pattern Recognition, 1998.
[2]
H. Blum and R.N. Nagel. Shape description using weighted
symmetric axis features. Pattern Recognition, 1978.
[3]
P. Chang and J. Krumm. Object Recognition with Color
Cooccurrence Histograms. In IEEE Conference on Computer
Vision and Pattern Recognition, 1999.
[4]
J. Cho and N. Adam. Efficient Splitting Rules based on the
Probabilities of Pre-Assigned Intervals. In IEEE Conference on
Data Mining, 2001.
[5]
J. Cho, A. Gangopadhyay and N. Adam. Feature Extraction for
Content-based Image search in Electronic Commerce. In MIS/OA
International Conference, 2000.
[6]
J. Edwards and H. Murase. Appearance matching of occluded
objects using coarse-to-fine adaptive masks. In IEEE Conference
on Computer Vision and Pattern Recognition, 1997.
[7]
A. W. Fitzgibbon, M. Pilu, and R. B. Fisher. Direct least squares
fitting of ellipses. In International Conference on Pattern
Recognition, 1996.
[8]
M. Fleck. Local Rotational Symmetries. In IEEE Conference on
Computer Vision and Pattern Recognition, 1986.
[9]
C. Ho and L. Chan. A fast ellipse/circle detector using geometric
symmetry. Pattern Recognition, 1995.
[10]
Joachim Hornegger, Heinrich Niemann, and Robert Risack.
Appearance-based object recognition using optimal feature
transforms. Pattern Recognition, 2000.
[11]
David W. Jacobs and Ronen Basri. 3D to 2D recognition with
regions. In IEEE Conference on Computer Vision and Pattern
Recognition, 1997.
[12]
Grinnell Jones and Bir Bhanu. Recognition of articulated and
occluded objects. IEEE Transaction on Pattern Analysis and
Machine Intelligence, 1999.
[13]
John Krumm. Object detection with vector quantized binary
features. In IEEE Conference on Computer Vision and Pattern
Recognition, 1997.
[14]
Ales Leonardis and Horst Bishof. Dealing with Occlusions in the
Eigenspace Approach. In IEEE Conference on Computer Vision
and Pattern Recognition, 1996.
[15]
David G. Lowe. Object Recognition from Local Scale-Invariant
Features. In International Conference on Computer Vision, 1999.
[16]
K. Ohba and K. Ikeuchi. Detectability, uniqueness, and
reliability of eigen windows for stable verification of partially
occluded objects. IEEE Trans. Pattern Anal. Mach, 1997.
[17]
N. Rajpal, S. Chaudhury, and S. Banerjee. Recognition of
partially occluded objects using neural network based indexing.
Pattern Recognition, 1999.
[18]
R. Rao. Dynamic appearance-based recognition. In IEEE
Conference on Computer Vision and Pattern Recognition, 1997.
[19]
John C. Russ. The Image Processing Handbook. CRC Press, 3rd
edition, 1998.
[20]
Bernt Schiele and Alex Pentland. Probabilistic Object
Recognition and Localization. In International Conference on
Computer Vision, 1999.
[21]
H. Schneiderman and T. Kanade. Probabilistic modeling of local
appearance and spatial relationships for object recognition. In
IEEE Conference on Computer Vision and Pattern Recognition,
1998
[22]
W. Wu and M. J. Wang. Elliptical object detection by using its
geometrical properties. Pattern Recognition 1993.
[23]
Lance R. Williams. Topological reconstruction of a smooth
manifold-solid from its occluding contour. Journal of Computer
Vision, 1997.
| object recognition;reconstruction;Object;contour;Recognition;Symmetry;Image;Contour;occlusion;estimation;symmetry |
58 | COOLCAT: An entropy-based algorithm for categorical clustering | In this paper we explore the connection between clustering categorical data and entropy: clusters of similar poi lower entropy than those of dissimilar ones. We use this connection to design an incremental heuristic algorithm, COOLCAT , which is capable of efficiently clustering large data sets of records with categorical attributes, and data streams. In contrast with other categorical clustering algorithms published in the past, COOLCAT's clustering results are very stable for different sample sizes and parameter settings. Also, the criteria for clustering is a very intuitive one, since it is deeply rooted on the well-known notion of entropy. Most importantly, COOLCAT is well equipped to deal with clustering of data streams (continuously arriving streams of data point) since it is an incremental algorithm capable of clustering new points without having to look at every point that has been clustered so far. We demonstrate the efficiency and scalability of COOLCAT by a series of experiments on real and synthetic data sets. | INTRODUCTION
Clustering is a widely used technique in which data points
are partitioned into groups, in such a way that points in the
same group, or cluster, are more similar among themselves
than to those in other clusters. Clustering of categorical
attributes (i.e., attributes whose domain is not numeric) is
a difficult, yet important task: many fields, from statistics
to psychology deal with categorical data.
In spite of its
importance, the task of categorical clustering has received
scant attention in the KDD community as of late, with only
a handful of publications addressing the problem ([18, 14,
12]).
Many of the published algorithms for clustering categorical
data rely on the usage of a distance metric that captures
the separation between two vectors of categorical attributes,
such as the Jaccard coefficient. In this paper, we present
COOLCAT (the name comes from the fact that we reduce
the entropy of the clusters, thereby "cooling" them), a novel
method which uses the notion of entropy to group records.
We argue that a classical notion such as entropy is a more
natural and intuitive way of relating records, and more importantly
does not rely on arbitrary distance metrics. COOLCAT
is an incremental algorithm that aims to minimize
the expected entropy of the clusters. Given a set of clusters
, COOLCAT will place the next point in the cluster
where it minimizes the overall expected entropy. COOLCAT
acts incrementally, and it is capable of clustering every
new point without having to re-process the entire set. Therefore,
COOLCAT is suited to clustering data streams (continuously
incoming data points) [2].
This makes COOLCAT
applicable in a large variety of emerging applications such
as intrusion detection, and e-commerce data.
This paper is set up as follows. Section 2 offers the background
and relationship between entropy and clustering, and
formulates the problem. Section 3 reviews the related work.
Section 4 describes COOLCAT, our algorithm. Section 5
presents the experimental evidence that demonstrates the
advantages of COOLCAT. Finally, Section 6 presents conclusions
and future work.
BACKGROUND AND PROBLEM FORMULATION
In this section, we present the background of entropy and
clustering and formulate the problem.
2.1
Entropy and Clustering
Entropy is the measure of information and uncertainty of
a random variable [28]. Formally, if X is a random variable,
S(X) the set of values that X can take, and p(x) the probability
function of X, the entropy E(X) is defined as shown in Equation 1.

$$E(X) = -\sum_{x \in S(X)} p(x)\log(p(x)) \qquad (1)$$
The entropy of a multivariate vector $\hat{x} = \{X_1, \ldots, X_n\}$
can be computed as shown in Equation 2, where $p(\hat{x}) = p(x_1, \ldots, x_n)$
is the multivariate probability distribution.

$$E(\hat{x}) = -\sum_{x_1 \in S(X_1)} \cdots \sum_{x_n \in S(X_n)} p(\hat{x})\log p(\hat{x}) \qquad (2)$$
Entropy is sometimes referred to as a measure of the
amount of "disorder" in a system. A room with socks strewn
all over the floor has more entropy than a room in which
socks are paired up, neatly folded, and placed in one side of
your sock and underwear drawer.
2.2
Problem formulation
The problem we are trying to solve can be formulated as
follows. Given a data set D of N points $\hat{p}_1, \ldots, \hat{p}_N$, where
each point is a multidimensional vector of d categorical attributes,
i.e., $\hat{p}_j = (p_j^1, \ldots, p_j^d)$, and given an integer k, we
would like to separate the points into k groups $C_1, \ldots, C_k$,
or clusters, in such a way that we minimize the entropy of
the whole arrangement. Unfortunately, this problem is NP-Complete
, and moreover, difficult to approximate [13]. In
fact, the problem is NP-Complete for any distance function
d(x, y), defined over pairs of points x, y, such that the function
maps pairs of points to real numbers (and hence, our
entropy function qualifies), therefore we need to resort to
heuristics to solve it.
We first have to resolve the issue of what we mean by the
"whole entropy of the system." In other words, we have
to make our objective function clear. We aim to minimize
the expected entropy, whose expression is shown in Equation
3, where $E(C_1), \ldots, E(C_k)$ represent the entropies of
each cluster, $C_i$ denotes the points assigned to cluster i,
$C_i \subseteq D$, with the property that $C_i \cap C_j = \emptyset$ for all
$i, j = 1, \ldots, k$, $i \neq j$. The symbol $\bar{C} = \{C_1, \ldots, C_k\}$
represents the clustering.
$$\bar{E}(\bar{C}) = \sum_{k} \frac{|C_k|}{|D|}\, E(C_k) \qquad (3)$$
This function, as we will see later, allows us to implement
an incremental algorithm that can effectively deal with
large datasets, since we do not need to look at the entire set
of points to decide about the entropy of an arrangement.
Rather, we will be able to decide for each point, how it
would affect the entropy of each of the existing clusters if
placed in each one of them.
The solution we propose in this paper (and present in Section
4) is a heuristic based in finding a set of initial clusters
(using the entropic criteria), and then incrementally (greed-ily
) add points to the clusters according to a criteria that
minimizes Equation 3.
Furthermore, we make a simplification in the computation
of entropy of a set of records. We assume independence of
the attributes of the record, transforming Equation 2 into
Equation 5.
In other words, the joint probability of the
combined attribute values becomes the product of the probabilities of
each attribute, and hence the entropy can be calculated as the sum of
entropies of the attributes.

members | E | Exp. Entropy
Cluster0: {"red", "heavy"}, {"red", "medium"} | 1.0 | 0.66
Cluster1: {"blue", "light"} | 0 |
Cluster0: {"red", "heavy"}, {"blue", "light"} | 2.0 | 1.33
Cluster1: {"red", "medium"} | 0 |
Cluster0: {"red", "heavy"} | 0 | 1.33
Cluster1: {"red", "medium"}, {"blue", "light"} | 2.0 |

Figure 1: Three different clusterings for the set $v_1, v_2, v_3$.
Clustering 1 minimizes the expected entropy of the two clusters.
$$E(\hat{x}) = -\sum_{x_1 \in S(X_1)} \cdots \sum_{x_n \in S(X_n)} \prod_i p(x_i)\,\log\Big(\prod_i p(x_i)\Big) \qquad (4)$$

$$= E(X_1) + E(X_2) + \cdots + E(X_n) \qquad (5)$$
Assume that we have a set of three records, $v_1$ = {"red", "heavy"},
$v_2$ = {"blue", "light"}, and $v_3$ = {"red", "medium"}, and we want to
form two clusters with them.
Figure 1 shows all the possible arrangements, with the entropy
of each cluster, and the expected entropy in each arrangement
. As we can see, the minimum expected entropy
is that of arrangement 1, which obviously is the correct way
of clustering the records (using two clusters).
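A minimal Python sketch of Equations 3 and 5 for this example (assuming base-2 logarithms, which are consistent with the 1.0 and 0.66 values in Figure 1; function names are ours):

```python
import math
from collections import Counter

def cluster_entropy(records):
    """Entropy of a cluster under attribute independence (Equation 5), log base 2."""
    if not records:
        return 0.0
    n, d = len(records), len(records[0])
    total = 0.0
    for j in range(d):
        counts = Counter(r[j] for r in records)
        total -= sum((c / n) * math.log2(c / n) for c in counts.values())
    return total

def expected_entropy(clusters):
    """Expected entropy of a clustering (Equation 3)."""
    size = sum(len(c) for c in clusters)
    return sum(len(c) / size * cluster_entropy(c) for c in clusters)

v1, v2, v3 = ("red", "heavy"), ("blue", "light"), ("red", "medium")
print(expected_entropy([[v1, v3], [v2]]))  # ~0.67, the best arrangement in Figure 1
print(expected_entropy([[v1, v2], [v3]]))  # ~1.33
print(expected_entropy([[v1], [v2, v3]]))  # ~1.33
```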
Even though the assumption of attribute independence
is not true in every data set, it proves to work very well
in practice (as shall be shown in the experimental section
of this paper). Moreover, in cases where we can demonstrate
that there is a correlation between two or more attributes of
the data set, we can always change the data points by creating
attributes that reflect these correlations and then apply
Equation 5 to compute the joint entropy. For instance, if the
data set is composed of records with attributes A, B, C, D, E, F
and we know that (A, B), (A, C) and (E, F) are correlated,
we can convert the data set into one having records with
attributes AB, AC, D, EF and compute the entropy assuming
that these new attributes are independent. Notice that
for the grouped attributes, we are in effect computing their
joint probabilities. The correlations between attributes can
be easily found by techniques such as the Chi-Square and
likelihood ratio tests. In our experimental experience, the
gains obtained by doing this are small enough to justify the
usage of the independence assumption.
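A minimal sketch of this attribute-grouping idea: correlated attributes are merged into compound attributes (their joint values), after which the independence-based entropy of Equation 5 can be applied to the transformed records. The grouping indices below are illustrative.

```python
def group_attributes(record, groups):
    """Merge correlated attributes into compound attributes.
    `groups` is a list of index tuples, e.g. [(0, 1), (0, 2), (3,), (4, 5)]
    for records A,B,C,D,E,F with correlations (A,B), (A,C), (E,F)."""
    return tuple(tuple(record[i] for i in g) for g in groups)

record = ("a1", "b1", "c2", "d1", "e3", "f1")
groups = [(0, 1), (0, 2), (3,), (4, 5)]      # -> AB, AC, D, EF
print(group_attributes(record, groups))
# (('a1', 'b1'), ('a1', 'c2'), ('d1',), ('e3', 'f1'))
```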
2.3
Expected entropy and the Minimum Description
Length principle
The Minimum Description Length principle (MDL) [26,
27] recommends choosing the model that minimizes the sum
of the model's algorithmic complexity and the description of
the data with respect to that model. This principle is widely
used to compare classifiers (see [23]) but it has not been used
much to deal with clustering.
Formally, the complexity of the model can be stated as
shown in Equation 6, where K() indicates the complexity, h
is the model, and D denotes the data set. The term K(h)
denotes the complexity of the model, or model encoding,
while K(D using h) is the complexity of the data encoding
with respect to the chosen model.
K(h, D) = K(h) + K(D using h)
(6)
Consider first the term K(h). To encode the model, we
need to encode for each cluster the probability distribution
for the attribute values. This can be done by encoding the
number of times each attribute value appears in the cluster,
and the number of points in each cluster. Assuming that
there are d attributes in the data, and that attribute $A_j$
can assume $v_j$ different values. As usual, k represents the
number of clusters in the model. K(h) can be written as
shown in Equation 7. In each cluster i, we need to encode
$c_i = \sum_{j=0}^{d-1} v_j$ values. So, the total number of values we
need to encode is $\sum_{i=0}^{k-1} c_i = k\lambda$, where $\lambda$ is a constant. We
also need to encode the number of points in each cluster, or
k values. The number of bits needed to encode the number
of times each attribute value occurs in the cluster, or the
number of points in a cluster is equal to log(
|D|), since the
maximum number for these values is the size of the entire
data set. Therefore K(h) is a linear function of k, with
a constant that represents all the contributions described
above.
$$K(h) = \lambda\, k \log(|D|) \qquad (7)$$
On the other hand, the encoding of the data given the
model can be stated as shown in Equation 8.
Once the
probabilities of occurrence of each attribute value in each
cluster are known, an optimal code (Huffman) can be chosen
to represent each attribute value in the cluster. Each point
is simply represented by the encoding of its attributes' values
. The optimal code is achieved by giving each value a
number of bits proportional to $-\log(P_{ijl})$, where $P_{ijl}$ is the
probability that the l-th value of attribute j occurs in cluster i.
The second term in the equation simply indicates the membership
of all the points, needing $\log(k)$ bits for the encoding of
the individual memberships.
$$K(D \text{ using } h) = \sum_{i=0}^{k-1} \frac{|C_i|}{|D|} \sum_{j=0}^{d-1} \sum_{l=0}^{v-1} \left(-P_{ijl}\log(P_{ijl})\right) \;+\; |D|\log(k) \qquad (8)$$
Noticing that the first term of Equation 8 is simply the
expected entropy of the clustering, we can write K(h, D) as
shown in Equation 9. Notice that for a fixed k, the MDL
principle indicates that the best model can be found by minimizing
the expected entropy of the clustering, which is precisely our goal.

$$K(h, D) = \lambda\, k \log(|D|) + |D|\log(k) + \bar{E}(\bar{C}) \qquad (9)$$
2.4
Evaluating clustering results
A frequent problem one encounters when applying clustering
algorithms in practice is the difficulty in evaluating the
solutions. Different clustering algorithms (and sometimes
multiple applications of the same algorithm using slight variations
of initial conditions or parameters) result in very different
solutions, all of them looking plausible. This stems
from the fact that there is no unifying criterion to define clusters
, and more often than not, the final clusters found by the
algorithm are in fact the ones that correspond to the criteria
used to drive the algorithm. Methods to evaluate whether
or not the structure found is a property of the data set and
not one imposed by the algorithm are needed.
Authors have pondered about good ways to validate clusters
found by algorithms (e.g., see [21, 1]). Two widely used
methods are the following:
Significance Test on External Variables This technique
calls for the usage of significance tests that compare
the clusters on variables not used to generate them.
One way of doing this is to compute the entropy of
the solution using a variable that did not participate
in the clustering. (A class attribute.) The entropy of
an attribute C in a cluster $C_k$ is computed as shown
in Equation 10, where $V_j$ denotes one of the possible
values that C can take. The evaluation is performed
by computing the expected entropy (taking into consideration
the cluster sizes). The smaller the value of
$E(C_k)$, the better the clustering fares.

$$E(C_k) = -\sum_j P(C = V_j)\log P(C = V_j) \qquad (10)$$
The category utility function The category utility (CU)
function [15] attempts to maximize both the probability
that two objects in the same cluster have attribute
values in common and the probability that objects
from different clusters have different attributes. The
expression to calculate the expected value of the CU
function is shown in Equation 11, where $P(A_i = V_{ij}\,|\,C_k)$
is the conditional probability that the attribute i has
the value $V_{ij}$ given the cluster $C_k$, and $P(A_i = V_{ij})$
is the overall probability of the attribute i having the
value $V_{ij}$ (in the entire set). The function aims to
measure if the clustering improves the likelihood of
similar values falling in the same cluster. Obviously,
the higher the value of CU, the better the clustering
fares.

$$CU = \sum_{k} \frac{|C_k|}{|D|} \sum_i \sum_j \left[ P(A_i = V_{ij}\,|\,C_k)^2 - P(A_i = V_{ij})^2 \right] \qquad (11)$$
We have used both techniques in validating our results,
as shall be seen in the experimental section.
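Both validation measures can be computed directly from a labeled clustering. The sketch below implements Equations 10 and 11 under the obvious interpretation (base-2 logarithms, records as tuples of categorical values); names are ours.

```python
import math
from collections import Counter

def external_entropy(label_clusters):
    """Expected entropy (Eq. 10) of a class attribute not used for clustering;
    label_clusters[i] is the list of class labels of the points in cluster i."""
    n = sum(len(c) for c in label_clusters)
    total = 0.0
    for c in label_clusters:
        counts = Counter(c)
        e = -sum((v / len(c)) * math.log2(v / len(c)) for v in counts.values())
        total += len(c) / n * e
    return total

def category_utility(clusters):
    """Category utility (Eq. 11); clusters are lists of categorical records."""
    data = [r for c in clusters for r in c]
    n, d = len(data), len(data[0])
    overall = [Counter(r[j] for r in data) for j in range(d)]
    cu = 0.0
    for c in clusters:
        within = [Counter(r[j] for r in c) for j in range(d)]
        term = sum((within[j][v] / len(c)) ** 2 - (overall[j][v] / n) ** 2
                   for j in range(d) for v in overall[j])
        cu += len(c) / n * term
    return cu

# Two clusters of records plus their class labels (e.g. "D"/"R"):
clusters = [[("y", "n"), ("y", "y")], [("n", "n")]]
labels = [["D", "D"], ["R"]]
print(category_utility(clusters), external_entropy(labels))
```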
2.5
Number of clusters
The issue of choosing the number of clusters is one common
to all clustering methods, and our technique is no exception
. Many methods have been proposed for determining
the right number of clusters (e.g.,[4, 9]). Unfortunately
many of these methods (e.g., [4]) assume that it is possible
to compute a centroid for each cluster, which in categorical
data is not easy. We consider this issue out of the scope of
this paper since we plan to examine good ways of selecting
the optimal number of clusters in the context of our metric.
RELATED WORK
Clustering is an extensively researched area not only by
data mining and database researchers [31, 11, 17, 18, 3], but
also by people in other disciplines [10]. Among the numerical
clustering algorithms, ENCLUS [6] uses entropy as a
criteria to drive the algorithm. However, ENCLUS follows
a completely different algorithm to our approach, dividing
the hyperspace recursively. For each subspace, ENCLUS estimates
its density and entropy and determines if it satisfies
the goodness criteria: its entropy has to be lower than a
threshold. However, it is not possible to translate either the
algorithm or the relationships to the area of categorical clustering
, since the notion of density has no intuitive meaning
when the attributes are categorical. In a recent paper [16],
the authors use Renyi's definition of entropy [25] to define
a clustering evaluation function that measures the distance
between clusters as the information potential [24] between
them. Using this function, they describe an algorithm that,
starting with a random placing of points in clusters, perturbs
the placement until the improvement on the information potential
is not appreciable. This algorithm, however, cannot
scale to large data sets since it requires all points to perform
the calculation of the distance.
In the area of clustering categorical records, a few recent
publications are worth mentioning. In [19], the authors
address the problem of clustering transactions in a market
basket database by representing frequent item sets as hyper-edges
in a weighted hypergraph. The weight of the graph is
computed as the average of the confidences for all possible
association rules that can be generated from the item set.
Then, a hypergraph partitioning algorithm is employed to
partition the items, minimizing the weight of the cut hyper-edges
. The algorithm does not produce a clustering of the
transactions and it is not obvious how to obtain one from
the item clusters. A related paper by Gibson et al [14] also
treats categorical clustering as hypergraph partitioning, but
uses a less combinatorial approach to solving it, based on
non-linear dynamical systems.
CACTUS [12], is an agglomerative algorithm that uses the
author's definitions of support, strong connection and similarity
to cluster categorical data. Support for an attribute
value pair $(a_i, a_j)$, where $a_i$ is in the domain of attribute $A_i$
and $a_j$ is in the domain of attribute $A_j$, is defined as the
number of tuples that have these two values. The two attributes
$a_i, a_j$ are strongly connected if their support exceeds
are strongly connected if their support exceeds
the value expected under the attribute-independence.
This concept is then extended to sets of attributes. A cluster
is defined as a region of attributes that are pairwise
strongly connected, no sub-region has the property, and its
support exceeds the expected support under the attribute-independence
assumption.
ROCK [18] computes distances between records using the
Jaccard coefficient.
Using a threshold, it determines, for
each record, who are its neighbors. For a given point p, a
point q is a neighbor of p if the Jaccard coefficient J(p, q)
exceeds the threshold. Then, it computes the values of a
matrix LIN K, in which the entries link(p, q) are the number
of common neighbors between p and q. The algorithm
then proceeds to cluster the records in an agglomerative way,
trying to maximize, for the k clusters (k is a predefined integer), the
function $\sum_{i=1}^{k} n_i \sum_{p,q \in C_i} \frac{link(p,q)}{n_i^{1+2f(\theta)}}$, where $\theta$ is
the threshold, and $f(\theta)$ is a function selected by the user.
The choice of $f(\theta)$ is critical in defining the fitness of the
clusters formed by the ROCK algorithm, and, as the authors
point out, the function is dependent on the data set
as well as on the kind of cluster the user is interested in. We
feel that choosing this function is a delicate and difficult task
for users, one that may be a roadblock to using ROCK efficiently.
Snob [29, 30] is an unsupervised learning algorithm based
on the notion of Minimum Message Length (MML). MML
is an information theoretic criterion for parameter estimation
and model selection. Although MML is similar to the
MDL criterion of Rissanen, MML is a Bayesian criterion
and therefore uses an a-priori distribution of parameter values
. Snob is in the category of mixture model algorithms
[22]. Snob is iterative in nature and therefore does not scale
with large data sets. Moreover, contrary to COOLCAT, it
is difficult to envision how Snob can be used to cluster data
streams. AUTOCLASS [5] also uses mixture models and
Bayesian criteria to cluster data sets. Again, AUTOCLASS
does not scale well with large data sets.
OUR ALGORITHM
Our entropy-based algorithm, COOLCAT, consists of two
steps: initialization and incremental step.
4.1
Initialization
The initialization step "bootstraps" the algorithm, finding
a suitable set of clusters out of a sample S, taken from the
data set ($|S| \ll N$), where N is the size of the entire data
set. We first find the k most "dissimilar" records from the
sample set by maximizing the minimum pairwise entropy
of the chosen points. We start by finding the two points
$ps_1, ps_2$ that maximize $E(ps_1, ps_2)$ and placing them in two
separate clusters $(C_1, C_2)$, marking the records (this takes
$O(|S|^2)$). From there, we proceed incrementally, i.e., to find
the record we will put in the j-th cluster, we choose an unmarked
point $ps_j$ that maximizes $\min_{i=1,\ldots,j-1}(E(ps_i, ps_j))$.
The rest of the unmarked sample points ($|S| - k$ of them), as well
as the remaining points (outside the sample), are placed in
the clusters using the incremental step.
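A compact sketch of this seeding step, assuming base-2 logarithms and the attribute-independence entropy (for a two-record set, each attribute contributes 0 bits if the values agree and 1 bit if they differ); helper names are ours, and ties or duplicate records are not handled specially.

```python
def two_point_entropy(p, q):
    """Entropy of the two-record set {p, q}: 0 if equal on an attribute, 1 bit if not."""
    return sum(0.0 if a == b else 1.0 for a, b in zip(p, q))

def initialize(sample, k):
    """COOLCAT-style seeding: pick k maximally dissimilar records from the sample."""
    n = len(sample)
    # Start with the pair maximizing pairwise entropy: O(|S|^2).
    i0, j0 = max(((i, j) for i in range(n) for j in range(i + 1, n)),
                 key=lambda ij: two_point_entropy(sample[ij[0]], sample[ij[1]]))
    chosen = [i0, j0]
    # Greedily add the unmarked record maximizing its minimum entropy to the seeds.
    while len(chosen) < k:
        nxt = max((i for i in range(n) if i not in chosen),
                  key=lambda i: min(two_point_entropy(sample[i], sample[c])
                                    for c in chosen))
        chosen.append(nxt)
    return [[sample[c]] for c in chosen]    # one singleton cluster per seed
```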
We are interested in determining the size of the sample
that guarantees with high probability the existence in the
sample of at least one member of each cluster, given the
number of clusters. In [17], the authors address the same
problem and use Chernoff bounds[7] to bound the size of the
sample given an estimate of the size of the smallest cluster
with respect to the average size ($\frac{|D|}{k}$), and the confidence
level for the probability of finding at least a member of
each cluster. The estimate of the size of the smallest cluster
with respect to the average size is given in the form of a
parameter $\rho = \frac{|D|/k}{m}$, where m is the size of the smallest
cluster. The parameter $\rho$ is then a number greater than
1. The bound on the size of the sample is then given by
Equation 12.
$$s = \rho k + \rho k\log\left(\frac{1}{\delta}\right) + \rho k\sqrt{\left(\log\left(\frac{1}{\delta}\right)\right)^2 + 2\log\left(\frac{1}{\delta}\right)} \qquad (12)$$

where $\delta$ denotes the confidence parameter mentioned above (the probability of failure is at most $\delta$).
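As a quick illustration, using the symbols as reconstructed here ($\rho$ for the cluster-size skew and $\delta$ for the allowed failure probability), the bound can be evaluated directly; the parameter values in the example are only indicative.

```python
import math

def sample_size(k, rho, delta):
    """Sample size bound of Equation 12 (as reconstructed here), natural log."""
    L = math.log(1.0 / delta)
    return rho * k + rho * k * L + rho * k * math.sqrt(L * L + 2 * L)

# e.g. k = 10 clusters, rho = 10, delta = 0.05 (95% confidence)
print(math.ceil(sample_size(10, 10, 0.05)))
```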
It is important to remark that Equation 12 does not depend on the size
of the data set, which makes the bound very favorable for larger sets
(and unfavorable for small ones, but this is not a problem since for
small sets we can simply use the entire set as a sample).

1. Given an initial set of clusters $\bar{C} = \{C_1, \ldots, C_k\}$
2. Bring points to memory from disk and for each point p do
3.   For i = 1, .., k
4.     Tentatively place p in $C_i$ and compute $\bar{E}(\bar{C}^i)$, where $\bar{C}^i$ denotes the clustering obtained by placing p in cluster $C_i$
5.   Let $j = \arg\min_i(\bar{E}(\bar{C}^i))$
6.   Place p in $C_j$
7. Until all points have been placed in some cluster

Figure 2: Incremental step.
4.2
Incremental Step
After the initialization, we process the remaining records
of the data set (the rest of the sample and points outside
the sample) incrementally, finding a suitable cluster for each
record. This is done by computing the expected entropy
that results from placing the point in each of the clusters and
selecting the cluster for which that expected entropy is the
minimum. We proceed in the incremental step by bringing
a buffer of points to main memory and clustering them one
by one.
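A minimal sketch of the incremental step of Figure 2: each buffered point is tentatively placed in every cluster and kept where the expected entropy (Equation 3) is smallest. The entropy helpers are repeated from the earlier sketch so this block runs on its own; an efficient implementation would update per-cluster value counts incrementally rather than recomputing from scratch.

```python
import math
from collections import Counter

def cluster_entropy(records):
    """Sum of per-attribute entropies (Equation 5), log base 2."""
    if not records:
        return 0.0
    n = len(records)
    total = 0.0
    for j in range(len(records[0])):
        for cnt in Counter(r[j] for r in records).values():
            total -= (cnt / n) * math.log2(cnt / n)
    return total

def expected_entropy(clusters):
    size = sum(len(c) for c in clusters)
    return sum(len(c) / size * cluster_entropy(c) for c in clusters)

def incremental_step(clusters, batch):
    """Place each point of the batch in the cluster minimizing expected entropy."""
    for p in batch:
        best_j = min(range(len(clusters)),
                     key=lambda j: expected_entropy(
                         [c + [p] if i == j else c for i, c in enumerate(clusters)]))
        clusters[best_j].append(p)
    return clusters
```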
The order of processing points has a definite impact on
the quality of the clusters obtained. It is possible that a
point that seems a good fit for a cluster at a given point
in time, becomes a poor fit as more points are clustered.
In order to reduce this effect, we enhanced the heuristic by
re-processing a fraction of the points in the batch. After a
batch of points is clustered, we select a fraction m of points
in the batch that can be considered the worst fit for the clusters
they were put in. We proceed to remove these points
from their clusters and re-cluster them. The way we figure
out how good a fit a point is for the cluster where it landed
originally, is by keeping track of the number of occurrences
of each of its attributes' values in that cluster. That is, at
the end of the batch, we know the values of $q_{ij}$, for each
record i in the batch and each attribute j, where $q_{ij}$ represents
the number of times that the value $V_{ij}$ appears in the
cluster where i was placed. We convert these numbers into
probabilities by dividing $q_{ij}$ by the cluster size (i.e., $|C_l|$,
where $C_l$ is the cluster where i was placed). Let us call these
numbers $p_{ij}$. For each record, we can compute a fitting probability
$p_i = \prod_j p_{ij}$. Notice that the lower $p_i$ is, the
worse a fit the record is in that cluster (we can say that the
global combination of attributes is not very common in the
cluster). We then sort the records according to $p_i$ and select the
m records in the batch with the lowest $p_i$ as the records to be
reprocessed. Each re-processed record is placed in the cluster
that minimizes the expected entropy (as done originally
in the incremental step).
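The worst-fit selection can be sketched as follows. For simplicity the sketch scores every clustered record, whereas COOLCAT only re-scores the records of the current batch; re-clustering the returned records would reuse the incremental placement above. Names are ours.

```python
from collections import Counter

def worst_fit_records(clusters, m):
    """Return (cluster_index, record) pairs for the m records with the lowest
    fitting probability p_i = prod_j q_ij / |C_l| (Section 4.2).
    Assumes every cluster is non-empty."""
    scored = []
    for ci, cluster in enumerate(clusters):
        counts = [Counter(r[j] for r in cluster) for j in range(len(cluster[0]))]
        for r in cluster:
            p = 1.0
            for j, v in enumerate(r):
                p *= counts[j][v] / len(cluster)
            scored.append((p, ci, r))
    scored.sort(key=lambda t: t[0])          # lowest p_i = worst fit
    return [(ci, r) for _, ci, r in scored[:m]]
```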
EXPERIMENTAL RESULTS
Our experiments were run on a DELL server equipped
with a Pentium III running at 800 MHz, and 1 Gigabyte of
main memory, running Red Hat Linux 2.2.14. We used two
kinds of data sets: real data sets (for evaluating the quality
of our algorithm) and synthetic data sets (for the evaluation
of scalability). The experiments were conducted using the
following datasets (plus a synthetically generated data set
to test the scalability of the algorithm).
Archaeological data set
Our first data set is a hypothetical collection of human
tombs and artifacts from an archaeological site. Although
the data set is not "real," it is realistic enough
and so we include it in this section. It has also the
property of being small, so brute force can be used
to find the optimal clustering. The data set is taken
from [1]. The first attribute (not used for clustering
but for verification) indicates the sex (M for male, F
for female) of the individuals buried. The other eight
attributes are binary (1 present, 0 non-present), and
represent artifacts types (e.g., ceramics, bracelets, arrow
points) that were found (or not found) in the tomb.
Congressional votes This data set was obtained from
the UCI KDD Archive ([20]) and contains the United
States Congressional Voting Records for the year 1984.
Each record contains a Congressman's votes on 16 issues
. All the attributes are boolean ("yes" or "no"),
with a few of the votes containing missing values. We
decided to treat missing values as another domain value
for the attribute. A classification field with the labels
"Democrat," or "Republican" is provided for each
record, which are not used for clustering, but can be
loosely used for quality measuring. (Some congress-men
"crossed" parties to vote.) There are 435 records
in the set (267 Democrats and 168 Republicans).
KDD Cup 1999 data This data set can be obtained
from the UCI Archive [20], and was used for the
Third International Knowledge Discovery and Data
Mining Tools Competition. This database contains a
standard set of network audit data, which includes a
wide variety of simulated intrusions. Each record, corresponding
to a connection, contains 42 features, some
of them categorical, and the rest continuous variables.
We transformed the continuous variables into categorical
ones by a simple process of discretization (sketched below): we computed
the median of each attribute, and assigned any value
below and including the median a label "0," while the
rest of the values were assigned a label "1." There are
many intrusion data sets in the repository, some of
them to be used as training sets and some as test sets.
We utilized the set that corresponds to 10% of the
training data. In this set, records have an extra attribute
(class), labeled with a "1" if the connection is
part of an attack, or a "0" if it is not. We use this
attribute for the evaluation of external entropy (not in
the clustering process).
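The median-based discretization of the continuous KDD Cup attributes can be sketched in a couple of lines (illustrative only):

```python
import numpy as np

def binarize_by_median(column):
    """Label values <= median as "0" and the rest as "1" (as described above)."""
    col = np.asarray(column, dtype=float)
    return np.where(col <= np.median(col), "0", "1")

print(binarize_by_median([0.1, 3.0, 2.5, 0.0, 7.2]))  # ['0' '1' '0' '0' '1']
```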
5.1
Archaeological Data
Figure 3 shows the results of using COOLCAT on the archaeological
data set.
We performed experiments with 2
clusters, since the attribute with which we evaluate the external
entropy (not used in the clustering) is Sex (and the
data set is small, so we believed that the clustering could effectively
separate the two sexes).

Alg. | m | CU | Ext. E. (sex) | Expected entropy
COOLCAT | 0% | 0.7626 | 0 | 4.8599
COOLCAT | 10% | 0.7626 | 0 | 4.8599
COOLCAT | 20% | 0.7626 | 0 | 4.8599
Brute Force | | 0.7626 | 0 | 4.8599
ROCK | | 0.3312 | 0.9622 | n/a

Figure 3: Results for COOLCAT, ROCK and brute force on the
Archaeological data set.

We conducted experiments
with the original data set (which we label "independent"),
and a modified data set in which we grouped attributes in
the following way: (1), (24), (26), (34), (35), (46), (78), to reflect
the correlations found among the attributes of the set
(found by using a Likelihood ratio test). However, we only
report the results for independent data, since the correlated
set results are essentially the same. (The same phenomena
was observed in the other experiments.) We also conducted
"brute force" experiments, in which we found the optimum
clustering, i.e., that for which the expected entropy was the
minimum.
We did this to compare how well our heuristic
(COOLCAT) performed. We also report in the table
the best results found by ROCK (which have to be found
by varying the parameter $\theta$ over a range of values). The
results shown in Figure 3 show that the expected entropy
function does an excellent job in clustering this data. The
results obtained by COOLCAT (in terms of CU , and external
entropy with respect to the variable sex, which is
not used in the clustering), and expected entropy are the
same obtained by the brute force (optimal) approach. In
all cases, both the CU function and the external entropy
of the COOLCAT solutions are better than those found for
the best ROCK solution. Particularly encouraging is the
fact that the external entropy for the variable SEX (which
the authors of the data set indicated as the one being more
correlated with the clusters), is 0 in all the COOLCAT solutions
, so a perfect separation is achieved. (ROCK's solution
does not achieve this, resulting in a high external entropy.)
In this data set, the re-processing step does not have any
effect, as seen by the fact that the results are the same for
all the values of m. This is attributed to the size of the data
set (only 20 records). Both COOLCAT and ROCK took
0.01 seconds to find a solution for this data set.
5.2
Congressional Voting results
Figure 4 summarizes the results obtained by COOLCAT
in the Congressional Voting records (no grouping of attributes
was performed), for three values of m. The results obtained
for various sample sizes are extremely stable. The CU values
for the clusterings obtained with COOLCAT are, in
all the cases superior to the one obtained by ROCK. The
values show no fluctuations on our results as m changes,
while the value for CU is 11% better than ROCK's value.
The external entropy for the COOLCAT solutions is slightly
better than the value in ROCK's solution. The buffer size
(batch) in this experiment was 100 records, making the number of
re-processed points 0, 10, and 20 (m = 0%, 10%, 20%).

Alg. | m | CU | Ext. Ent. (pol. affl.) | Expected entropy | Running time (sec.)
COOLCAT | 0% | 2.9350 | 0.4975 | 13.8222 | 0.16
COOLCAT | 10% | 2.9350 | 0.4975 | 13.8222 | 0.26
COOLCAT | 20% | 2.9350 | 0.4975 | 13.8222 | 0.28
ROCK | | 2.6282 | 0.4993 | N/A | 0.51

Figure 4: Results for COOLCAT and ROCK on the Congressional
Voting data set.
(Again, these numbers correspond to the means of 500 runs.)
The running time of COOLCAT is significantly better than
the one for ROCK (a decrease of 45% in the slowest case,
m = 20%, of COOLCAT).
5.3
KDD Cup 1999 data set
Since we did not have explicit knowledge of how many
clusters we could find in this data set, we decided to find
clusterings for many k values, and report, in each case, the
expected entropy, external entropy (with respect to the attribute
that denotes whether the record is an attack or not),
and CU . The results are shown in the form of a graph in
Figure 5.
In the figure, the left hand side scale is used
for expected entropy and CU , while the right hand side is
used for external entropy (the values of external entropy are
between 0 and 1, while the other parameters have larger
ranges). The figure shows that all the parameters tend to
an asymptotic limit as k grows. The saturation starts to
occur in the value k = 10, which exhibits an external entropy
of 0.09, which indicates that most of the clusters are
"inhabited" by either attack records or attack-free records.
In other words, the clustering achieves a good separation of
the points. The experiments were conducted using a sample
size of 1,000 points, which guarantees a level of confidence
of 95% (for $\rho$ = 10).
5.4
Synthetic data set
We used a synthetic data generator ([8]) to generate data
sets with different number of records and attributes. We
used these data sets to test the scalability of COOLCAT.
The results are shown in the graph of Figure 7, where the
y-axis shows the execution time of COOLCAT in seconds,
and the x-axis the number of records (in multiples of 10³),
for four different numbers of attributes (A = 5, 10, 20, 40).
In all the cases, COOLCAT behaves linearly with respect to
the number of records, due to the incremental nature of the
algorithm (it processes each record in the data set at most
twice: those that are selected for re-processing are clustered
twice, the rest only once; moreover, points are brought from
disk to memory only once). We used for these experiments
an m equal to 20%, and a buffer size of 300 records. Notice
that in this experiment, we do not report running times for
ROCK. The reason for this is that ROCK is designed to
be a main memory algorithm. In [18], the authors make it
explicit that ROCK deals with large data sets by using random
sampling (not by looking at the entire set). Therefore,
it would have been unfair to compare COOLCAT's running
times with those of ROCK (over samples of the sets).
We performed another experiment with synthetic data
Figure 5: Expected entropy, external entropy and CU vs. number of clusters (k) in the KDD Cup 1999 data set. The left scale (y-axis) is used for expected entropy and CU, while the right one is used for external entropy.
sets generated by [8]. In this experiment, each synthetic
set contained 8,124 records of 23 attributes each. Twenty
two of the attributes are used for clustering and one (Indx)
for the evaluation of external entropy. Each data set was
generated using 21 different types of rules. A rule involves
12 attributes. An example of the rules used is: A = c & C = a & D = b & K = c & N = a & O = c & Q = b & R = c & S = c & T = b & U = b ⇒ Indx = r1. This rule says that when
the 11 attributes on the left hand side take the values shown,
the attribute Indx takes the value r1. Every record obeys
one of the rules. Two sets were generated, using different
probability distributions for the rules. In the first one (uniform
), every rule is used in the same number of records in
the data set. (In other words the number of records that
obey a particular rule is equal to the size of the data set
divided by 21.) In the normal distribution, the populations
are distributed following a Gaussian distribution (some rules
receive more records than others). The 23rd attribute takes
the value of the rule number (rule index).
The external
entropy is calculated using this attribute (which does not
participate in the clustering). Figure 6 shows the evaluation
of clusters obtained by COOLCAT over different synthetic
data sets.
The table also shows the results obtained using ROCK.
As we can see, COOLCAT's results are significantly
better than those obtained by ROCK for both data
sets. Particularly significant is the fact that the external
entropy for the COOLCAT solutions in the Uniform case
with m = 10%, 20% are 0, indicating a perfect separation
of rules. The values for other cases are extremely close to
0 as well. As expected, re-processing (increasing m) helps
in finding a better clustering. However, the impact is more
marked when going from no re-processing (m = 0) to re-processing
10% of the points, leveling out from then on.
The running times of COOLCAT are more than one order
of magnitude smaller than those of ROCK.
Figure 6: Results for COOLCAT and ROCK in the synthetic data sets

Alg.      Dist.     m     CU      Ext.Ent. (rule index)   Expected entropy   Running time (sec.)
COOLCAT   Uniform   0%    6.9187  0.00816                 17.4302            6.73
COOLCAT   Uniform   10%   6.9268  0.00000                 17.2958            11.85
COOLCAT   Uniform   20%   6.9268  0.00000                 17.3958            12.95
COOLCAT   Normal    0%    6.8893  0.02933                 17.4969            6.88
COOLCAT   Normal    10%   6.8996  0.00813                 17.4458            11.99
COOLCAT   Normal    20%   6.9008  0.00742                 17.4328            13.07
ROCK      Uniform   -     6.6899  0.09861                 n/a                207.37
ROCK      Normal    -     6.2749  0.34871                 n/a                223.49

Figure 7: COOLCAT's performance for the synthetic data sets: response time (in seconds) vs. the number of records in the data set (in multiples of 10³), for different numbers of attributes (A = 5, 10, 20, 40).

CONCLUSIONS
In this paper we have introduced a new categorical clustering
algorithm, COOLCAT, based on the notion of entropy.
The algorithm groups points in the data set trying
to minimize the expected entropy of the clusters. The experimental evaluation supports our claim that COOLCAT
is an efficient algorithm, whose solutions are stable for different
samples (and sample sizes) and it is scalable for large
data sets (since it incrementally adds points to the initial
clusters). We have evaluated our results using the category utility
function and the external entropy, which determines if
the clusters have significance with respect to external variables
(i.e., variables not used in the clustering process). In
our comparisons with ROCK, COOLCAT always shows a
small advantage in terms of the quality measures (CU and
external entropy). However, the real advantage of COOLCAT
resides in the fact that ROCK is extremely difficult to
tune (finding the right value of its parameter), while COOLCAT's behavior with respect to
its only parameter (m) is extremely stable: small values of
m are sufficient to obtain a good result. In the largest data
set for which we compared both techniques (Mushrooms),
COOLCAT had a significantly better running time.
The incremental nature of COOLCAT makes it possible
to apply the algorithm to data streams, and as the results in
scalability show, the algorithm can cope with large volumes
of data. We are currently doing research in tracking evolving
clusters using COOLCAT.
ACKNOWLEDGMENTS
We would like to thank Vipin Kumar and Eui-Hong (Sam) Han
for lending us their implementation of ROCK.
REFERENCES
[1] M.S. Aldenderfer and R.K. Blashfield. Cluster
Analysis. Sage Publications, (Sage University Paper
series on Quantitative Applications in the Social
Sciences, No. 44), 1984.
[2] D. Barbará. Requirements for clustering data streams.
SIGKDD Explorations (Special Issue on Online,
Interactive, and Anytime Data Mining), 3(2), 2002.
[3] D. Barbará and P. Chen. Using the fractal dimension
to cluster datasets. In Proceedings of the ACM
SIGKDD International Conference on Knowledge
Discovery and Data Mining, Boston, MA, August
2000.
[4] R.B. Calinski and J. Harabasz. A dendrite method for
cluster analysis. Communications in Statistics, pages 1–27, 1974.
[5] P. Cheeseman and J. Stutz. Bayesian classification
(AUTOCLASS): Theory and Results. In U.M. Fayyad,
G. Piatetsky-Shapiro, P. Smyth, and R. Uthurusamy,
editors, Advances in Knowledge Discovery and Data
Mining. AAAI Press, Menlo Park, 1995.
[6] C. Cheng, A.W. Fu, and Y. Zhang. Entropy-based
Subspace Clustering for Mining Numerical Data. In
Proceedings of ACM SIGKDD International
Conference on Knowledge Discovery and Data Mining,
San Diego, CA, August 1999.
[7] H. Chernoff. A Measure of Asymptotic Efficiency for
Tests of a Hypothesis Based on the Sum of
Observations. Annals of Mathematical Statistics, pages 493–509, 1952.
[8] DataGen. Data Generator: Perfect data for an
imperfect world. http://www.datasetgenerator.com/.
[9] R.C. Dubes and A.K. Jain. Validity studies in
clustering methodologies. Pattern Recognition, pages 235–254, 1979.
[10] R.O. Duda and P.E. Hart. Pattern Classification and
Scene Analysis. Wiley-Interscience, New York, 1973.
[11] M. Ester, H.P. Kriegel, and X. Wu. A density-based
algorithm for discovering clusters in large spatial
database with noise. In Proceedings of the
International Conference on Knowledge Discovery and
Data Mining, Portland, Oregon, August 1996.
[12] V. Ganti, J. Gehrke, and R. Ramakrishnan.
CACTUS-Clustering Categorical Data Using
Summaries. In Proceedings of the ACM-SIGKDD
International Conference on Knowledge Discovery and
Data Mining, San Diego, CA, 1999.
[13] M. Garey and D. Johnson. Computers and
Intractability: A Guide to the Theory of
NP-Completeness. W.H. Freeman, 1979.
[14] D. Gibson, J. Kleinberg, and P. Raghavan. Clustering
Categorical Data: An Approach Based on Dynamical
Systems. In Proceedings of the International
Conference on Very Large Databases (VLDB), New
York, NY, September 1998.
[15] A. Gluck and J. Corter. Information, uncertainty, and
the utility of categories. In Proceedings of the Seventh
Annual Conference of the Cognitive Science Society,
1985.
[16] E. Gokcay and J.C. Principe. Information Theoretic
Clustering. IEEE Transactions on Pattern Analysis
and Machine Intelligence, 24(2), February 2002.
[17] S. Guha, R. Rastogi, and K. Shim. CURE: A
clustering algorithm for large databases. In
Proceedings of the ACM SIGMOD Conference on
Management of Data, Seattle, WA, May 1998.
[18] S. Guha, R. Rastogi, and K. Shim. ROCK: A Robust
Clustering Algorithm for Categorical Attributes. In
Proceedings of the 15th International Conference on
Data Engineering, Sydney, Australia, April 1999.
[19] E.H. Han, G. Karypis, V. Kumar, and B. Mobasher.
Clustering based on association rule hypergraphs. In
Proceedings of the SIGMOD Workshop on Research
Issues on Data Mining and Knowledge Discovery,
June 1997.
[20] S. Hettich (librarian). UCI KDD Archive.
http://kdd.ics.uci.edu/.
[21] A.K. Jain and R.C. Dubes. Algorithms for clustering
data. Prentice Hall, 1988.
[22] G.J McLachlan and K.E. Basford. Mixture Models.
Marcel Dekker, New York, 1988.
[23] T.M. Mitchell. Machine Learning. McGraw-Hill, 1997.
[24] J.C. Pincipe, D. Xu, and J. Fisher. Information
theoretic learning. In S. Haykin, editor, Unsupervised
Adaptive Filtering. John Wiley & Sons, 2000.
[25] A. Renyi. On Measures of Entropy and Information.
In Proc. of the Fourth Berkeley Symp. Math.,
Statistics, and Probability, 1960.
[26] J. Rissanen. A universal prior for integers and
estimation by minimum description length. The
Annals of Statistics, 1983.
[27] J. Rissanen. Stochastic complexity in statistical
inquiry. World Scientific Pub., 1989.
[28] C.E. Shannon. A mathematical theory of
communication. Bell System Technical Journal, pages 379–423, 1948.
[29] C.S. Wallace and D.M. Boulton. An information
measure for classification. The Computer Journal,
11(2), 1968.
[30] C.S. Wallace and D.L. Dowe. Intrinsic classification by
MML, the Snob program. In Proceedings of the 7th
Australian Joint Conference on Artificial Intelligence,
1994.
[31] T. Zhang, R. Ramakrishnan, and M. Livny. BIRCH: An
efficient data clustering method for very large
databases. In Proceedings of the ACM SIGMOD
Conference on Data Management, Montreal, Canada,
June 1996.
| data streams;incremental algorithm;COOLCAT;categorical clustering;data stream;entropy;clustering |
59 | Coupling and Cohesion Measures for Evaluation of Component Reusability | This paper provides an account of new measures of coupling and cohesion developed to assess the reusability of Java components retrieved from the internet by a search engine. These measures differ from the majority of established metrics in two respects: they reflect the degree to which entities are coupled or resemble each other, and they take account of indirect couplings or similarities. An empirical comparison of the new measures with eight established metrics shows the new measures are consistently superior at ranking components according to their reusability. | INTRODUCTION
The work reported in this paper arose as part of a project that
retrieves Java components from the internet [1]. However,
components retrieved from the internet are notoriously variable in
quality. It seems highly desirable that the search engine should
also provide an indication of both how reliable the component is
and how readily it may be adapted in a larger software system.
A well designed component, in which the functionality has been
appropriately distributed to its various subcomponents, is more
likely to be fault free and easier to adapt. Appropriate distribution
of function underlies two key concepts: coupling and cohesion.
Coupling is the extent to which the various subcomponents
interact. If they are highly interdependent then changes to one are
likely to have significant effects on others. Hence loose coupling
is desirable. Cohesion is the extent to which the functions
performed by a subsystem are related. If a subcomponent is
responsible for a number of unrelated functions then the
functionality has been poorly distributed to subcomponents.
Hence high cohesion is a characteristic of a well designed
subcomponent.
We decided that the component search engine should provide the
quality rankings of retrieved components based on measures of
their coupling and cohesion. There is a substantial literature on
coupling and cohesion metrics which is surveyed in the next
section. We then describe in detail the metrics we have developed
which attempt to address some of the limitations of existing
metrics. In particular, we consider both the strength and
transitivity of dependencies. The following section describes an
empirical comparison of our proposed metrics and several popular
alternatives as predictors of reusability. Section 5 presents an
analysis of the results which demonstrate that our proposed
metrics consistently outperform the others. The paper concludes
with a discussion of the implications of the research.
COUPLING AND COHESION METRICS
Cohesion is a measure of the extent to which the various functions
performed by an entity are related to one another. Most metrics
assess this by considering whether the methods of a class access
similar sets of instance variables. Coupling is the degree of
interaction between classes. Much research has been done on
software metrics [8]; the most important measures were selected for use in
our comparative study. Table 1 and Table 2 summarize the
characteristics of these cohesion and coupling metrics.
Table 1. Coupling metrics
Name             Definition
CBO [4][5][11]   Classes are coupled if methods or instance variables in one class are used by the other. CBO for a class is number of other classes coupled with it.
RFC [4][5]       Count of all methods in the class plus all methods called in other classes.
CF [3][6]        Classes are coupled if methods or instance variables in one class are used by the other. CF for a software system is number of coupled class pairs divided by total number of class pairs.
DAC [9]          The number of attributes having other classes as their types.
Table 2. Cohesion metrics
Name           Definition
LCOM [5]       Number of non-similar method pairs in a class.
LCOM3 [7][9]   Number of connected components in graph whose vertices are methods and whose edges link similar methods.
RLCOM [10]     Ratio of number of non-similar method pairs to total number of method pairs in the class.
TCC [2]        Ratio of number of similar method pairs to total number of method pairs in the class.
All of these measures have two important features in common.
First, they treat the relationship between a pair of classes or methods
as a binary quantity; second, they treat coupling and cohesion as
intransitive relations; that is, no account is taken of indirect
coupling and cohesion, although two of the cohesion metrics (LCOM3 [7][9]
and TCC [2]) have suggested extensions to incorporate indirect
relationships between methods. Among the cohesion metrics, it should be
noted that three (LCOM, LCOM3 and RLCOM) are in
fact measures of lack of cohesion. TCC [2], in contrast to the
other three metrics, measures cohesion rather than its absence. In
other respects it is similar to RLCOM, being the number of
similar method pairs divided by the total number of method pairs.
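To make the binary, intransitive character of these measures concrete, the following sketch (ours; the dict-of-sets representation and the toy class are illustrative assumptions, not taken from the cited papers) computes TCC and, as its complement under the definitions in Table 2, RLCOM for a single class from the instance variables each method accesses.

from itertools import combinations

def tcc_and_rlcom(method_vars):
    # method_vars maps each method name to the set of instance variables it accesses.
    # Two methods count as "similar" if their variable sets overlap (a binary test).
    pairs = list(combinations(method_vars, 2))
    if not pairs:
        return 0.0, 0.0
    similar = sum(1 for a, b in pairs if method_vars[a] & method_vars[b])
    tcc = similar / len(pairs)      # ratio of similar pairs (cohesion)
    rlcom = 1.0 - tcc               # ratio of non-similar pairs (lack of cohesion)
    return tcc, rlcom

# Hypothetical class with three methods.
example = {"get": {"x"}, "set": {"x", "y"}, "reset": {"z"}}
print(tcc_and_rlcom(example))       # (0.333..., 0.666...)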
PROPOSED NEW METRICS
The study suggested that none of these measures was very
effective in ranking the reusability of Java components. We
therefore decided to develop alternative coupling and cohesion
metrics in the hope of achieving superior performance. One
obvious step was to develop measures that reflected the extent to
which a pair of classes was coupled or a pair of methods
resembled each other. Because none of the measures treated
coupling or similarity as transitive relations, we decided that such
indirect dependencies should be incorporated into our metrics.
3.1
Cohesion
We develop a cohesion metric that takes account of both the
degree of cohesion and transitive (i.e., indirect) cohesion between
methods. Methods are said to be similar if the sets of instance
variables that they access overlap. We adopt a graph theoretical
approach. The methods of the class are the vertices. Suppose a
class has a set of method members M = {M_1, M_2, ..., M_m}, and let
V_j = {V_{j,1}, V_{j,2}, ..., V_{j,n}} be the set of instance variables accessed by
method M_j. Then the edge from M_j to M_i exists if and only if V_j ∩ V_i
is not null. Thus an edge of the graph reflects the similarity of
the methods in that they have at least one instance variable in
common. The similarity graph is undirected because intersection
is a symmetric relation. The next step is to associate a number
with each edge that reflects the extent to which the two methods
have instance variables in common. We therefore define
SimD(i,j), our measure of direct similarity of two methods M_i and M_j, as

SimD(i,j) = |V_i ∩ V_j| / |V_i ∪ V_j|

where i ≠ j (SimD(j,j) is defined to be zero). Note that 0 ≤ SimD(i,j) ≤ 1.

The extension of the measure to include indirect similarity
proceeds along the same lines as we employed for indirect
coupling. The strength of similarity provided by a path between
two methods is the product of the SimD values of the edges that
make up the path. Thus we define SimT(i,j,λ), the transitive
similarity between methods M_i and M_j due to a specific path λ, as

SimT(i,j,λ) = ∏_{e_{s,t} ∈ λ} SimD(s,t) = ∏_{e_{s,t} ∈ λ} |V_s ∩ V_t| / |V_s ∪ V_t|

where e_{s,t} denotes the edge between vertices s and t. As in the
case of coupling, the path with the highest SimT value is selected
to define the similarity of the two methods, Sim(i,j):

Sim(i,j) = SimT(i,j,λ_max),   where λ_max = argmax_{λ ∈ Λ} SimT(i,j,λ)

and Λ is the set of all paths from M_i to M_j. This measure is used to provide
a measure of the cohesion of the class, ClassCoh, by summing the
similarities of all method pairs and dividing by the total number
of such pairs:

ClassCoh = ( Σ_{i,j = 1..m, i ≠ j} Sim(i,j) ) / (m^2 - m)

where m is the number of methods in the class. Finally, the
weighted transitive cohesion of the complete software system,
WTCoh, is defined as the mean cohesion of all the classes of
which it is comprised:

WTCoh = ( Σ_{j = 1..n} ClassCoh_j ) / n

where n is the number of classes in the system.
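A compact implementation of these definitions is possible. The sketch below is our own Python rendering (the data layout and the example class are assumptions, not taken from the paper): it computes SimD as defined above, obtains Sim(i,j) with a Dijkstra-style maximum-product path search (valid here because every edge weight lies between 0 and 1), and averages over ordered method pairs to give ClassCoh; WTCoh is then the mean over classes.

import heapq
from itertools import combinations

def sim_d(vi, vj):
    # Direct similarity: shared instance variables over all variables used by the pair.
    union = vi | vj
    return len(vi & vj) / len(union) if union else 0.0

def strongest_paths(method_vars, src):
    # Max-product path strength from src to every other method.
    # A Dijkstra-style search suffices because all edge weights are in [0, 1].
    best = {m: 0.0 for m in method_vars}
    best[src] = 1.0
    heap = [(-1.0, src)]
    while heap:
        neg, u = heapq.heappop(heap)
        if -neg < best[u]:
            continue
        for v in method_vars:
            if v == u:
                continue
            w = sim_d(method_vars[u], method_vars[v])
            if w > 0.0 and -neg * w > best[v]:
                best[v] = -neg * w
                heapq.heappush(heap, (-best[v], v))
    return best

def class_coh(method_vars):
    m = len(method_vars)
    if m < 2:
        return 0.0
    unordered = sum(strongest_paths(method_vars, a)[b]
                    for a, b in combinations(method_vars, 2))
    return 2.0 * unordered / (m * m - m)   # Sim is symmetric, so double the unordered sum

def wt_coh(classes):
    # Mean ClassCoh over all classes in the system.
    return sum(class_coh(c) for c in classes) / len(classes)

# Hypothetical class: methods "a" and "c" are linked only indirectly through "b".
example = {"a": {"x"}, "b": {"x", "y"}, "c": {"y"}}
print(round(class_coh(example), 3))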
3.2
Coupling
As with the cohesion measure, we regard the software system as a
directed graph, in which the vertices are the classes comprising
the system. Suppose such a system comprises a set of classes
C = {C_1, C_2, ..., C_m}. Let M_j = {M_{j,1}, M_{j,2}, ..., M_{j,n}} be the methods of
the class C_j, and R_{j,i} the set of methods and instance variables in
class C_i invoked by class C_j, for j ≠ i (R_{j,j} is defined to be null).
Then the edge from C_j to C_i exists if and only if R_{j,i} is not null.
Thus an edge of the graph reflects the direct coupling of one class
to another. The graph is directed since R_{j,i} is not necessarily equal
to R_{i,j}.

The next step is to associate a number with each edge that reflects
the extent of direct coupling from one class to another. We define
CoupD(i,j) as the ratio of the number of methods in class j
invoked by class i to the total number of methods in class i, which
indicates the impact of class j on class i:

CoupD(i,j) = |R_{i,j}| / (|R_{i,j}| + |M_i|)

Then the indirect coupling between classes is included. Suppose
that CoupD(i,j) and CoupD(j,k) have finite values but that
CoupD(i,k) is zero. Thus although there is no direct coupling
between classes C_i and C_k, there is a dependency, because C_i
invokes methods in C_j which in turn invokes methods in C_k. The
strength of this dependency depends on the two direct couplings
of which it is composed; a reasonable measure is
CoupD(i,j) × CoupD(j,k). This notion is readily generalised. A
coupling between two classes exists if there is a path from one to
the other made up of edges whose CoupD values are all non-zero.
Thus we define CoupT(i,j,λ), the transitive coupling between
classes C_i and C_j due to a specific path λ, as

CoupT(i,j,λ) = ∏_{e_{s,t} ∈ λ} CoupD(s,t) = ∏_{e_{s,t} ∈ λ} |R_{s,t}| / (|R_{s,t}| + |M_s|)

where e_{s,t} denotes the edge between vertices s and t. Note first that
CoupT includes the direct coupling, which corresponds to a path of
length one, and second that, because the CoupD values are
necessarily less than one, transitive couplings due to longer paths
will typically have lower values.

In general there may be more than one path having a non-zero
CoupT value between any two classes. We simply select the path
with the largest CoupT value and hence define Coup(i,j), the strength
of coupling between the two classes C_i and C_j, to be:

Coup(i,j) = CoupT(i,j,λ_max),   where λ_max = argmax_{λ ∈ Λ} CoupT(i,j,λ)

and Λ is the set of all paths from C_i to C_j. The final step is to use the measure
between each pair of classes as a basis for a measure of the total
coupling of a software system. The weighted transitive coupling
(WTCoup) of a system is thus defined as

WTCoup = ( Σ_{i,j = 1..m, i ≠ j} Coup(i,j) ) / (m^2 - m)

where m is the number of classes in the system.
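The coupling computation differs from the cohesion one only in its edge weight. A minimal sketch of CoupD, using the reconstruction given above and data structures of our own choosing, is shown below; Coup(i,j) and WTCoup then follow by reusing the same maximum-product path search from the cohesion sketch.

def coup_d(r, methods, i, j):
    # Direct coupling from class i to class j, as reconstructed above.
    # r[(i, j)]: members of class j that class i invokes; methods[i]: methods of class i.
    if i == j:
        return 0.0
    r_ij = r.get((i, j), set())
    return len(r_ij) / (len(r_ij) + len(methods[i]))

# Hypothetical two-class system: "A" invokes two members of "B".
methods = {"A": {"m1", "m2"}, "B": {"n1", "n2", "n3"}}
r = {("A", "B"): {"n1", "n2"}}
print(coup_d(r, methods, "A", "B"))   # 2 / (2 + 2) = 0.5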
AN EXPERIMENTAL COMPARISON
In our study, the metrics are used for a specific purpose:
predicting how much effort would be required to reuse a
component within a larger system. We therefore chose to measure
reusability as simply the number of lines of code that were added,
modified or deleted (NLOC) in order to extend its functionality in
a prescribed way. The more lines required, the lower the
reusability. This appears to us to be a crude but reasonable
measure of the effort that would be required to adapt a component
for use within a larger system. Three case studies were carried out:
Case 1 HTML Parser: The original components analysed
HTML documents, eliminated tags and comments and output the
text. The required extension was to count and output the number
of tags found during parsing.
Case 2 Lexical Tokenizer: The original components tokenized a
text document using user supplied token rules and output the
tokens on a web interface. The required extension was to count
and output the number of tokens retrieved.
Case 3 Barcode: The original components accepted a sequence of
alphanumeric characters and generated the corresponding
barcode. The required extension was to count the number of
letters.
For each case, 20 Java components were retrieved from a
repository of about 10,000 Java components retrieved from the
internet. The requisite extensions were then implemented by a
very experienced Java programmer and NLOC counted. Despite
the relative simplicity of the extensions, there was considerable
variation in the quantity of extra code required. We then
proceeded to investigate how successful the various measures of
coupling and cohesion are in predicting this quantity. Our
proposed metrics are compared with all the metrics reviewed in
section 2. In order to present the results on the same graph, those
measures that do not produce values in the range (0,1) (i.e. CBO,
RFC, DAC, LCOM and LCOM3) were divided by 100.
RESULTS
Two approaches were used to evaluate the performance of the
various measures in predicting reusability: linear regression and
rank correlation.
5.1
Linear Regression
The regression lines obtained for the five cohesion measures
when applied to the HTML parser components are shown in
Figure 1. The results for the other two sets of components were
similar. It is clear that some measures provide much more
consistent predictors than others. There are no obvious systematic
departures from linearity so the use of simple regression appears
reasonable. The regression lines obtained for coupling measures
demonstrate the same situation.
The coefficient of determination, R², provides a measure of how
much of the variation in NLOC is accounted for by the measures.
Table 3 and Table 4 display the values of R² obtained for each of
the coupling and cohesion measures on all three sets of
components. In each case, our proposed new measures, WTCoup
and WTCoh, gave the largest value of R², indicating that they were the
best linear predictors of reusability. The remaining measures
produced at least one R² value so low as to indicate that the
correlation was not significantly above chance at the 5% level.
Figure 1. Regression of cohesion measures against reusability
Table 3. R² values for coupling measure regression lines.

Cases           WTCoup   CF     CBO    RFC    DAC
HTML Parser     .846     .621   .259   .793   .254
Lexical Token.  .836     .098   .004   .729   .738
Barcode Gen.    .958     .693   .121   .534   .507
Table 4. R² values for cohesion measure regression lines.

Cases       WTCoh   RLCOM   LCOM3   LCOM   TCC
H. Parser   .847    .319    .259    .564   .178
L. Token.   .838    .783    .002    .709   .646
B. Gen.     .892    .702    .177    .101   .785
5.2
Spearman Rank Correlation
Although these results provide a strong indication that the
proposed new measures are better predictors of reusability than
the alternatives, our primary purpose is simply to rank a set of
components retrieved from the repository. We therefore also
computed the Spearman rank correlation coefficients between the
rankings determined by NLOC and those produced by the various
coupling and cohesion measures (Tables 5 and 6).
Table 5. Rank correlation values for coupling measures.

Cases           WTCoup   CF     CBO    RFC    DAC
HTML Parser     .975     .882   .465   .896   .507
Lexical Token.  .952     .291   .117   .822   .817
Barcode Gen.    .974     .758   .485   .656   .800
Table 6. Rank correlation values for cohesion measures.

Cases       WTCoh   RLCOM   LCOM3   LCOM   TCC
H. Parser   -.993   .522    .218    .564   -.343
L. Token.   .838    .783    .002    .709   .646
Bar. Gen.   .892    .702    .177    .101   .785
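For readers who wish to reproduce this style of evaluation, the following sketch shows how the two statistics could be computed with standard scientific Python tools; the metric values and NLOC figures in it are hypothetical, not the study's data.

import numpy as np
from scipy.stats import linregress, spearmanr

# Hypothetical measurements: one metric value and the NLOC needed to extend
# each of six retrieved components (not the study's actual data).
metric = np.array([0.12, 0.35, 0.41, 0.58, 0.73, 0.90])
nloc = np.array([62, 40, 38, 25, 18, 9])

fit = linregress(metric, nloc)
print("R^2 =", round(fit.rvalue ** 2, 3))            # coefficient of determination
rho, _ = spearmanr(metric, nloc)
print("Spearman rank correlation =", round(rho, 3))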
The relative performance of the various measures is consistent
with the regression studies. In all cases, the two proposed
measures, WTCoup and WTCoh, produced the highest rank
correlations. They are in fact extremely high; no value was lower
than 0.95.
DISCUSSION
These results clearly demonstrate that our proposed metrics for
coupling and cohesion are very good predictors of the number of
lines of code required to make simple modifications to Java
components retrieved from the internet and are superior to other
measures. The majority of coupling and cohesion metrics treat
coupling and similarity as simple binary quantities and ignore the
transitive relationship. Both our proposed measures address these
issues: first, they are weighted; that is, they use a numeric
measure of the degree of coupling or similarity between entities
rather than a binary quantity. Second they are transitive; that is,
they include indirect coupling or similarity mediated by
intervening entities. It is reasonable to enquire whether both these
characteristics are necessary to achieve good prediction
performance. In fact our investigations suggest that both
contribute to the performance.
Although both WTCoup and WTCoh are good predictors, it is
worth considering whether a linear combination might not
produce even better results. Multiple regression for the Lexical
Tokenizer components produced an R² of 0.981; the ranking
produced using the regression coefficients to weight the terms had
a Spearman correlation of 0.986. These are superior to the results
produced by each metric alone, but not by a great margin, simply
because the original results leave only modest scope for
improvement. Developing such a composite quality measure
would entail assuming the relative weighting of the two metrics
should be the same for all types of component.
This work arose from, and is intended primarily as a contribution
to, search engine technology. Nevertheless, we believe it may be
of interest to a wider body of researchers: in particular, those
involved in developing and evaluating software metrics.
ACKNOWLEDGMENTS
We are grateful to the
four UK higher education funding bodies (for
England, Scotland, Wales and Northern Ireland) for an Overseas Research
Studentship (ORS/2002015010) awarded to G. Gui.
REFERENCES
[1]
Gui, G. and Scott, P. D. Vector Space Based on Hierarchical
Weighting: A Component Ranking Approach to Component
Retrieval. In Proceedings of the 6th International Workshop
on Advanced Parallel Processing Technologies (APPT'05)
[2]
Bieman, J. M. and Kang, B-Y. Cohesion and Reuse in an
Object-Oriented System. In Proc. ACM Symposium on
Software Reusability (SSR'95). (April 1995) 259-262.
[3]
Briand, L., Devanbu, P. and Melo, W. An investigation into
coupling measures for C++. Proceedings of ICSE 1997.
[4]
Brito e Abreu, F. and Melo, W. Evaluating the impact of OO
Design on Software Quality. Proc. Third International
Software Metrics Symposium (Berlin, 1996).
[5]
Chidamber, S. R. and Kemerer, C. K. A Metrics Suite for
Object Oriented Design. IEEE Transactions on Software
Engineering, Vol. 20 (June 1994), 476-493.
[6]
Harrison, R., S. J. Counsell, and R. V. Nithi. An Evaluation of the
MOOD Set of Object-Oriented Software Metrics. IEEE
Transactions on Software Engineering, Vol. 24 (June 1998),
491-496.
[7]
Hitz, M. and Montazeri, B. Measuring coupling and cohesion
in object-oriented systems. Proceedings of International
Symposium on Applied Corporate Computing. (Monterrey,
Mexico, 1995).
[8]
Kanmani, S., Uthariraj, R., Sankaranarayanan, V. and
Thambidurai, P. Investigation into the Exploitation of
Object-Oriented Features. ACM Sigsoft, Software
Engineering Notes, Vol. 29 (March 2004).
[9]
Li, W. & Henry, S. Object-Oriented metrics that predict
maintainability. Journal of Systems and Software. 23(2) 1993
111-122.
[10]
Li, X., Liu, Z. Pan, B. & Xing, B. A Measurement Tool for
Object Oriented Software and Measurement Experiments
with It. In Proc. IWSM 2000, 44-54.
[11]
Subramanyam, R. & Krishnan, M. S. Empirical Analysis of
CK Metrics for Object-Oriented Design Complexity:
Implications for Software Defects. IEEE Transactions on
Software Engineering, Vol. 29 (April 2003), 297-310.
| Binary Quantity;Experimentary Comparsion;Component search engine;Search Engine Technology;Spearman Rank Correlation;Intransitive Relation;Reusability;Coupling;Cohesion Metric;Linear Regression;Cohesion;Java components |
6 | A Distributed 3D Graphics Library | We present Repo-3D, a general-purpose, object-oriented library for developing distributed, interactive 3D graphics applications across a range of heterogeneous workstations. Repo-3D is designed to make it easy for programmers to rapidly build prototypes using a familiar multi-threaded, object-oriented programming paradigm. All data sharing of both graphical and non-graphical data is done via general-purpose remote and replicated objects, presenting the illusion of a single distributed shared memory. Graphical objects are directly distributed, circumventing the "duplicate database" problem and allowing programmers to focus on the application details. Repo-3D is embedded in Repo, an interpreted, lexically-scoped, distributed programming language, allowing entire applications to be rapidly prototyped. We discuss Repo-3D's design, and introduce the notion of local variations to the graphical objects, which allow local changes to be applied to shared graphical structures. Local variations are needed to support transient local changes, such as highlighting, and responsive local editing operations. Finally, we discuss how our approach could be applied using other programming languages, such as Java. | INTRODUCTION
Traditionally, distributed graphics has referred to the architecture
of a single graphical application whose components are distributed
over multiple machines [14, 15, 19, 27] (Figure 1a). By taking
advantage of the combined power of multiple machines, and the
particular features of individual machines, otherwise impractical
applications became feasible. However, as machines have grown
more powerful and application domains such as Computer
Supported Cooperative Work (CSCW) and Distributed Virtual
Environments (DVEs) have been making the transition from
research labs to commercial products, the term distributed graphics
is increasingly used to refer to systems for distributing the shared
graphical state of multi-display/multi-person, distributed, interactive
applications (Figure 1b). This is the definition that we use here.

1. {bm,feiner}@cs.columbia.edu, http://www.cs.columbia.edu/graphics
While many excellent, high-level programming libraries are
available for building stand-alone 3D applications (e.g. Inventor
[35], Performer [29], Java 3D [33]), there are no similarly powerful
and general libraries for building distributed 3D graphics applications
. All CSCW and DVE systems with which we are familiar
(e.g., [1, 7, 11, 12, 16, 28, 30, 31, 32, 34, 37, 41]) use the following
approach: A mechanism is provided for distributing application
state (either a custom solution or one based on a general-purpose
distributed programming environment, such as ISIS [4] or Obliq
[8]), and the state of the graphical display is maintained separately
in the local graphics library. Keeping these "dual databases" synchronized
is a complex, tedious, and error-prone endeavor. In contrast
, some non-distributed libraries, such as Inventor [35], allow
programmers to avoid this problem by using the graphical scene
description to encode application state. Extending this "single database"
model to a distributed 3D graphics library is the goal of our
work on Repo-3D.
Repo-3D is an object-oriented, high-level graphics package,
derived from Obliq-3D [25]. Its 3D graphics facilities are similar to
those of other modern high-level graphics libraries. However, the
objects used to create the graphical scenes are directly distributable
--from the programmer's viewpoint, the objects reside in one
large distributed shared memory (DSM) instead of in a single
process. The underlying system replicates any of the fine-grained
objects across as many processes as needed, with no additional
effort on the part of the programmer. Updates to objects are
automatically reflected in all replicas, with any required objects
automatically distributed as needed. By integrating the replicated
objects into the programming languages we use, distributed
applications may be built using Repo-3D with little more difficulty
than building applications in a single process.
Figure 1:
Two meanings of distributed graphics: (a) a single logical
graphics system with distributed components, and (b) multiple distributed
logical graphics systems. We use the second definition here.
No matter how simple the construction of a distributed application
may be, a number of differences between distributed and
monolithic applications must be addressed. These include:
Distributed control. In a monolithic application, a single component
can oversee the application and coordinate activities
among the separate components by notifying them of changes
to the application state. This is not possible in a non-trivial distributed
application. Therefore, we must provide mechanisms
for different components to be notified of changes to the
distributed state.
Interactivity. Updates to distributed state will be slower than
updates to local state, and the amount of data that can be
distributed is limited by network bandwidth. If we do not want
to sacrifice interactive speed, we must be able to perform some
operations locally. For example, an object could be dragged
locally with the mouse, with only a subset of the changes
applied to the replicated state.
Local variations. There are times when a shared graphical
scene may need to be modified locally. For example, a
programmer may want to highlight the object under one user's
mouse pointer without affecting the scene graph viewed by
other users.
Repo-3D addresses these problems in two ways. First, a
programmer can associate a notification object with any replicated
object. The notification object's methods will be invoked when the
replicated object is updated. This allows reactive programs to be
built in a straightforward manner. To deal with the second and third
problems, we introduce the notion of local variations to graphical
objects. That is, we allow the properties of a graphical object to be
modified locally, and parts of the scene graph to be locally added,
removed, or replaced.
In Section 2 we describe how we arrived at the solution presented
here. Section 3 discusses related work, and Section 4 offers a
detailed description of the underlying infrastructure that was used.
The design of Repo-3D is presented in Section 5, followed by
some examples and concluding remarks in Sections 6 and 7.
BACKGROUND
Repo-3D was created as part of a project to support rapid prototyping
of distributed, interactive 3D graphical applications, with a
particular focus on DVEs. Our fundamental belief is that by
providing uniform high-level support for distributed programming
in the languages and toolkits we use, prototyping and experimenting
with distributed interactive applications can be (almost) as
simple as multi-threaded programming in a single process. While
care must be taken to deal with network delays and bandwidth
limitations at some stage of the program design (the languages and
toolkits ought to facilitate this), it should be possible to ignore such
issues until they become a problem. Our view can be summarized
by a quote attributed to Alan Kay, "Simple things should be
simple; complex things should be possible."
This is especially true during the exploration and prototyping
phase of application programming. If programmers are forced to
expend significant effort building the data-distribution components
of the application at an early stage, not only will less time be spent
exploring different prototypes, but radical changes in direction will
become difficult, and thus unlikely. For example, the implementation
effort could cause programs to get locked into using a communication
scheme that may eventually prove less than ideal, or even
detrimental, to the program's final design.
Since we are using object-oriented languages, we also believe
that data distribution should be tightly integrated with the
language's general-purpose objects. This lets the language's type
system and programming constructs reduce or eliminate errors in
the use of the data-distribution system. Language-level integration
also allows the system to exhibit a high degree of network data
transparency, or the ability for the programmer to use remote and
local data in a uniform manner. Without pervasive, structured,
high-level data-distribution support integrated into our programming
languages and libraries, there are applications that will never
be built or explored, either because there is too much programming
overhead to justify trying simple things ("simple things are not
simple"), or because the added complexity of using relatively
primitive tools causes the application to become intractable ("complex
things are not possible").
Of the tools available for integrating distributed objects into
programming languages, client-server data sharing is by far the
most common approach, as exemplified by CORBA [26],
Modula-3 Network Objects [5], and Java RMI [39]. Unfortunately,
interactive graphical applications, such as virtual reality, require
that the data used to refresh the display be local to the process
doing the rendering or acceptable frame refresh rates will not be
achieved. Therefore, pure client-server approaches are inappropriate
because at least some of the shared data must be replicated.
Furthermore, since the time delay of synchronous remote method
calls is unsuitable for rapidly changing graphical applications,
shared data should be updated asynchronously. Finally, when data
is replicated, local access must still be fast.
The most widely used protocols for replicated data consistency,
and thus many of the toolkits (e.g., ISIS [4] and Visual-Obliq [3]),
allow data updates to proceed unimpeded, but block threads reading
local data until necessary updates arrive. The same reason we
need replicated data in the first place--fast local read access to the
data--makes these protocols unsuitable for direct replication of the
graphical data. Of course, these protocols are fine for replicating
application state that will then be synchronized with a parallel
graphical scene description, but that is what we are explicitly trying
to avoid. Fortunately, there are replicated data systems (e.g.,
Orca [2] or COTERIE [24]) that provide replicated objects that are
well suited to interactive applications, and it is upon the second of
these systems that Repo-3D is built.
RELATED WORK
There has been a significant amount of work that falls under the
first, older definition of distributed graphics. A large number of
systems, ranging from established commercial products (e.g., IBM
Visualization Data Explorer [21]) to research systems (e.g.,
PARADISE [19] and ATLAS [14]), have been created to distribute
interactive graphical applications over a set of machines. However,
the goal of these systems is to facilitate sharing of application data
between processes, with one process doing the rendering. While
some of these systems can be used to display graphics on more
than one display, they were not designed to support high-level
sharing of graphical scenes.
Most high-level graphics libraries, such as UGA [40], Inventor
[35] and Java 3D [33], do not provide any support for distribution.
Others, such as Performer [29], provide support for distributing
components of the 3D graphics rendering system across multiple
processors, but do not support distribution across multiple
machines. One notable exception is TBAG [13], a high-level
constraint-based, declarative 3D graphics framework. Scenes in
TBAG are defined using constrained relationships between time-varying
functions. TBAG allows a set of processes to share a
single, replicated constraint graph. When any process asserts or
retracts a constraint, it is asserted or retracted in all processes.
However, this means that all processes share the same scene, and
that the system's scalability is limited because all processes have a
copy of (and must evaluate) all constraints, whether or not they are
interested in them. There is also no support for local variations of
the scene in different processes.
Machiraju [22] investigated an approach similar in flavor to ours,
but it was not aimed at the same fine-grained level of interactivity
and was ultimately limited by the constraints of the implementation
platform (CORBA and C++). For example, CORBA objects
are heavyweight and do not support replication, so much of their
effort was spent developing techniques to support object migration
and "fine-grained" object sharing. However, their fine-grained
objects are coarser than ours, and, more importantly, they do not
support the kind of lightweight, transparent replication we desire.
A programmer must explicitly choose whether to replicate, move,
or copy an object between processes when the action is to occur (as
opposed to at object creation time). Replicated objects are independent
new copies that can be modified and used to replace the original
--simultaneous editing of objects, or real-time distribution of
changes as they are made is not supported.
Of greater significance is the growing interest for this sort of system
in the Java and VRML communities. Java, like Modula-3, is
much more suitable as an implementation language than C or C++
because of its cross-platform compatibility and support for threads
and garbage collection: Without the latter two language features,
implementing complex, large-scale distributed applications is
extremely difficult. Most of the current effort has been focused on
using Java as a mechanism to facilitate multi-user VRML worlds
(e.g., Open Communities [38]). Unfortunately, these efforts
concentrate on the particulars of implementing shared virtual
environments and fall short of providing a general-purpose shared
graphics library. For example, the Open Communities work is
being done on top of SPLINE [1], which supports only a single
top-level world in the local scene database.
Most DVEs [11, 12, 16, 31, 32] provide support for creating
shared virtual environments, not general purpose interactive 3D
graphics applications. They implement a higher level of abstraction
, providing support for rooms, objects, avatars, collision detection
, and other things needed in single, shared, immersive virtual
environments. These systems provide neither general-purpose
programming facilities nor the ability to work with 3D scenes at a
level provided by libraries such as Obliq-3D or Inventor. Some use
communication schemes that prevent them from scaling beyond a
relatively small number of distributed processes, but for most the
focus is explicitly on efficient communication. SIMNET [7], and
the later NPSNet [41], are perhaps the best known large-scale
distributed virtual-environment systems. They use a fixed, well-defined
communication protocol designed to support a single,
large-scale, shared, military virtual environment.
The techniques for object sharing implemented in recent CSCW
toolkits [28, 30, 34, 37] provide some of the features we need,
particularly automatic replication of data to ease construction of
distributed applications. However, none of these toolkits has
integrated the distribution of data into its programming language's
object model as tightly as we desire. As a result, they do not provide
a high enough level of network data transparency or sufficiently
strong consistency guarantees. In groupware applications,
inconsistencies tend to arise when multiple users attempt to perform
conflicting actions: the results are usually obvious to the
users and can be corrected using social protocols. This is not an
acceptable solution for a general-purpose, distributed 3D graphics
toolkit. Furthermore, none of these CSCW systems provides any
support for asynchronous update notification, or is designed to
support the kind of large-scale distribution we have in mind.
Finally, while distributed games, such as Quake, have become
very popular, they only distribute the minimum amount of application
state necessary. They do not use (or provide) an abstract, high-level
distributed 3D graphics system.
UNDERLYING INFRASTRUCTURE
Our work was done in the Modula-3 programming language [18].
We decided to use Modula-3 because of the language itself and the
availability of a set of packages that provide a solid foundation for
our infrastructure. Modula-3 is a descendant of Pascal that corrects
many of its deficiencies, and heavily influenced the design of Java.
In particular, Modula-3 retains strong type safety, while adding
facilities for exception handling, concurrency, object-oriented
programming, and automatic garbage collection². One of its most
important features for our work is that it gives us uniform access to
these facilities across all architectures.
Repo-3D relies on a number of Modula-3 libraries, as illustrated
in Figure 2. Distributed data sharing is provided by two packages,
the Network Object client-server object package [5], and the
Replicated Object shared object package [24] (see Section 4.1).
DistAnim-3D is derived from Anim-3D [25], a powerful, non-distributed,
general-purpose 3D library originally designed for 3D
algorithm animation (see Section 4.2). Finally, Repo itself is a
direct descendant of Obliq [8], and uses the Replicated Object
package to add replicated data to Obliq (see Section 4.3).
4.1 Distributed Shared Memory
Repo-3D's data sharing mechanism is based on the Shared Data-Object
Model of Distributed Shared Memory (DSM) [20]. DSM
allows a network of computers to be programmed much like a
multiprocessor, since the programmer is presented with the familiar
paradigm of a common shared memory. The Shared Data-Object
Model of DSM is particularly well suited to our needs since it is a
high-level approach that can be implemented efficiently at the
application level. In this model, shared data is encapsulated in
user-defined objects and can only be accessed through those
objects' method calls. The DSM address space is partitioned
implicitly by the application programmer, with an object being the
smallest unit of sharing. All shared data is fully network transparent
because it is encapsulated within the programming language objects.

2. The Modula-3 compiler we used is available from Critical Mass, Inc. as
part of the Reactor programming environment. The compiler, and thus
our system, runs on all the operating systems we have available (plus
others): Solaris, IRIX, HP-UX, Linux, and Windows NT and 95.

Figure 2: The architecture of Repo-3D. Aside from native graphics
libraries (X, Win32, OpenGL, Renderware) the Modula-3 runtime
shields most of the application from the OS. The Replicated Object
package uses an Event communication package and the Network
Object package. DistAnim-3D is implemented on top of a variety of
native graphics libraries and Replicated Objects. Repo exposes most of
the useful Modula-3 packages, as well as using Network Objects and
Replicated Objects to present a distributed shared memory model to
the programmer.
Distribution of new objects between the processes is as simple as
passing them back and forth as parameters to, or return values
from, method calls--the underlying systems take care of the rest.³
Objects are only distributed to new processes as necessary, and (in
our system) are removed by the garbage collector when they are no
longer referenced. Furthermore, distributed garbage collection is
supported, so objects that are no longer referenced in any process
are removed completely.
There are three kinds of distributed object semantics in our DSM:
Simple objects correspond to normal data objects, and have no
special distributed semantics. When a simple object is copied
between processes, a new copy is created in the destination
process that has no implied relationship to the object in the
source process.
Remote objects have client-server distribution semantics. When
a remote object is copied between processes, all processes
except the one in which the object was created end up with a
proxy object that forwards method invocations across the
network to the original object.
Replicated objects have replicated distribution semantics.
When a replicated object is passed between processes, a new
replica is created in the destination process. If any replica is
changed, the change is reflected in all replicas.
The Network Object package provides support for remote
objects. It implements distributed garbage collection, exception
propagation back to the calling site, and automatic marshalling and
unmarshalling of method arguments and return values of virtually
any data type between heterogeneous machine architectures. The
package is similar to other remote method invocation (RMI) packages
developed later, such as the Java RMI library [39]. All method
invocations are forwarded to the original object, where they are
executed in the order they are received.
The Replicated Object package supports replicated objects. Each
process can call any method of an object it shares, just as it can
with a simple or remote object. We will describe the Replicated
Object package in more detail, as Repo-3D relies heavily on its
design, and the design of a replicated object system is less straightforward
than a remote one. The model supported by the Replicated
Object package follows two principles:
All operations on an instance of an object are atomic and
serializable. All operations are performed in the same order on
all copies of the object. If two methods are invoked simultaneously
, the order of invocation is nondeterministic, just as if
two threads attempted to access the same memory location
simultaneously in a single process.
The above principle applies to operations on single objects.
Making sequences of operations atomic is up to the programmer
.
The implementation of the Replicated Object package is based
on the approach used in the Orca distributed programming
language [2]. A full replication scheme is used, where a single
object is either fully replicated in a process or not present at all.
Avoiding partial replication significantly simplifies the implementation
and the object model, and satisfies the primary rationale for
replication: fast read-access to shared data. To maintain replication
consistency an update scheme is used, where updates to the object
are applied to all copies.
The method of deciding what is and is not an update is what
makes the Orca approach particularly interesting and easy to
implement. All methods are marked as either read or update methods
by the programmer who creates the object type. Read methods
are assumed to not change the state of the object and are therefore
applied immediately to the local object without violating consistency
. Update methods are assumed to change the state. To distribute
updates, arguments to the update method are marshalled into a
message and sent to all replicas. To ensure all updates are applied
in the same order, the current implementation of the Replicated
Object package designates a sequencer process for each object.
There may be more than one sequencer in the system to avoid
overloading one process with all the objects (in this case, each
object has its updates managed by exactly one of the sequencers.)
The sequencer is responsible for assigning a sequence number to
each message before it is sent to all object replicas. The replicas
then execute the incoming update messages in sequence. The process
that initiated the update does not execute the update until it
receives a message back from the sequencer and all updates with
earlier sequence numbers have been executed.
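The read/update split and the sequencer protocol can be illustrated with a small, single-process sketch. The Python fragment below is our own simplification (Repo-3D's actual wrappers are generated Modula-3 code, and updates travel over a network rather than a direct call): read methods are served from the local copy, while update methods are routed through a sequencer that stamps them with a global order and forwards them to every replica, including the caller's.

class Sequencer:
    # Assigns a global order to updates and forwards them to every replica.
    def __init__(self):
        self.replicas = []
        self.next_seq = 0

    def broadcast(self, method_name, args):
        seq = self.next_seq
        self.next_seq += 1
        for replica in self.replicas:        # in-process stand-in for the network
            replica.apply(seq, method_name, args)

class ReplicatedCounter:
    # Toy replicated object: 'value' is a read method, 'add' is an update method.
    def __init__(self, sequencer):
        self.state = 0
        self.sequencer = sequencer
        sequencer.replicas.append(self)

    def value(self):                         # read: served from the local copy
        return self.state

    def add(self, amount):                   # update: routed through the sequencer
        self.sequencer.broadcast("add", (amount,))

    def apply(self, seq, method_name, args): # executed in sequence order on every replica
        if method_name == "add":
            self.state += args[0]

seq = Sequencer()
a, b = ReplicatedCounter(seq), ReplicatedCounter(seq)
a.add(5)
b.add(2)
print(a.value(), b.value())                  # both replicas report 7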
There are three very important reasons for choosing this
approach. First, it is easy to implement on top of virtually any
object-oriented language, using automatically generated object
subtypes and method wrappers that communicate with a simple
runtime system. We do this in our Modula-3 implementation, and it
would be equally applicable to an implementation in C++ or Java.
For example, the JSDT [36] data-sharing package in Java uses a
similar approach.
Second, the Replicated Object package does not pay attention to
(or even care) when the internal data fields of an object change.
This allows the programmer great flexibility in deciding exactly
what constitutes an update or not, and what constitutes the shared
state.⁴ For example, objects could have a combination of global
and local state, and the methods that change the local state could
be classified as read methods since they do not modify the global
state. Alternatively, read methods could do some work locally and
then call an update method to propagate the results, allowing time-consuming
computation to be done once and the result distributed
in a clean way. We took advantage of both of these techniques in
implementing Repo-3D.
Finally, the immediate distribution of update methods ensures
that changes are distributed in a timely fashion, and suggests a
straightforward solution to the asynchronous notification problem.
The Replicated Object package generates a Notification Object
type for each Replicated Object type. These new objects have
methods corresponding to the update methods of their associated
Replicated Object. The arguments to these methods are the same as
the corresponding Replicated Object methods, plus an extra
argument to hold the Replicated Object instance. These notifiers
can be used by a programmer to receive notification of changes to
a Replicated Object in a structured fashion. To react to updates to a
Replicated Object instance, a programmer simply overrides the
methods of the corresponding Notification Object with methods
that react appropriately to those updates, and associates an instance of it with the Replicated Object instance. Each time an update method of the Replicated Object is invoked, the corresponding method of the Notifier Object is also invoked. Notification Objects eliminate the need for object polling and enable a "data-driven" flow of control.
3. An important detail is how the communication is bootstrapped. In the case of the Network and Replicated Object packages, to pass a first object between processes, one of them exports the object to a special network object demon under some known name on some known machine. The second process then retrieves the object.
4. Of course, it falls squarely on the shoulders of the programmer to ensure that the methods provided always leave the object in a consistent state. This is not significantly different than what needs to be done when building a complex object that is simultaneously accessed by multiple threads in a non-distributed system. For example, if a programmer reads an array of numbers from inside the object and then uses an update method to write a computed average back into the object, the internal array may have changed before the average is written, resulting in a classic inconsistency problem. In general, methods that perform computations based on internal state (rather than on the method arguments) are potentially problematic and need to be considered carefully.
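A rough Python analogue of the Notification Object pattern, with hypothetical names, is sketched below; the real package generates the notifier types automatically from the Replicated Object types.

```python
# Sketch of the Notification Object pattern: for each update method of a
# replicated object, a notifier hook with matching arguments (plus the object
# itself) is invoked after the update is applied.
class ReplicatedGroup:
    def __init__(self):
        self.children = []
        self.notifiers = []

    def attach_notifier(self, notifier):
        self.notifiers.append(notifier)

    def add_child(self, child):              # an "update" method
        self.children.append(child)
        for n in self.notifiers:             # data-driven change notification
            n.add_child(self, child)

class GroupNotifier:
    """Override the methods you care about; the default does nothing."""
    def add_child(self, group, child):
        pass

class PrintingNotifier(GroupNotifier):
    def add_child(self, group, child):
        print(f"group gained child {child!r}")

g = ReplicatedGroup()
g.attach_notifier(PrintingNotifier())
g.add_child("SphereGO")                      # prints: group gained child 'SphereGO'
```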
4.2 Obliq-3D
Obliq-3D is composed of Anim-3D, a 3D animation package
written in Modula-3, and a set of wrappers that expose Anim-3D to
the Obliq programming language (see Section 4.3). Anim-3D is
based on three simple and powerful concepts: graphical objects for
building graphical scenes, properties for specifying the behavior of
the graphical objects, and input event callbacks to support interactive
behavior. Anim-3D uses the damage-repair model: whenever a
graphical object or property changes (is damaged), the image is
repaired without programmer intervention.
Graphical objects (GOs) represent all the logical entities in the
graphical scene: geometry (e.g., lines, polygons, spheres, polygon
sets, and text), lights and cameras of various sorts, and groups of
other GOs. One special type of group, the RootGO, represents a window into which graphics are rendered. GOs can be grouped together in any valid directed acyclic graph (DAG). The GO class hierarchy is shown in Figure 3.
A property is defined by a name and a value. The name determines
which attribute is affected by the property, such as "Texture
Mode" or "Box Corner1". The value specifies how it is affected
and is determined by its behavior, a time-variant function that
takes the current animation time and returns a value. Properties,
property values, and behaviors are all objects, and their relationships
are shown in Figure 4. When a property is created, its name
and value are fixed. However, values are mutable and their behavior
may be changed at any time. There are four kinds of behaviors
for each type of property: constant (do not vary over time),
synchronous (follow a programmed set of requests, such as "move
from A to B starting at time t=1 and taking 2 seconds"), asynchronous
(execute an arbitrary time-dependent function to compute the
value) and dependent (asynchronous properties that depend on
other properties). Synchronous properties are linked to animation
handles and do not start satisfying their requests until the animation
handle is signalled. By linking multiple properties to the same
handle, a set of property value changes can be synchronized.
Associated with each GO g is a partial mapping of property
names to values determined by the properties that have been associated
with g. A property associated with g affects not only g but
all the descendants of g that do not override the property. A single
property may be associated with any number of GOs. It is perfectly
legal to associate a property with a GO that is not affected by it; for
example, attaching a "Surface Color" property to a GroupGO does
not affect the group node itself, but could potentially affect the
surface color of any GO contained in that group. A RootGO sets an
initial default value for each named property.
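A small sketch of this inheritance rule, assuming a single-parent chain rather than a full DAG and using hypothetical names, is given below:

```python
# Sketch of property inheritance: a node's value for a named property comes
# from the nearest ancestor (including itself) that defines it, otherwise the
# RootGO's initial default is used.
class GO:
    def __init__(self, parent=None):
        self.parent = parent
        self.props = {}              # name -> value, e.g. "Surface Color" -> "red"

    def resolve(self, name, default=None):
        node = self
        while node is not None:
            if name in node.props:
                return node.props[name]
            node = node.parent
        return default               # the RootGO's default value for this name

root = GO();              root.props["Surface Color"] = "grey"   # default
group = GO(parent=root);  group.props["Surface Color"] = "blue"
sphere = GO(parent=group)
print(sphere.resolve("Surface Color"))   # "blue": inherited from the group
```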
There are three types of input event callbacks in Anim-3D, corresponding
to the three kinds of interactive events they handle:
mouse callbacks (triggered by mouse button events), motion callbacks
(triggered by mouse motion events) and keyboard callbacks
(triggered by key press events). Each object has three callback
stacks, and the interactive behavior of an object can be redefined
by pushing a new callback onto the appropriate stack. Any event
that occurs within a root window associated with a RootGO r will
be delivered to the top handler on r's callback stack. The handler
could delegate the event to one of r's children, or it may handle it
itself, perhaps changing the graphical scene in some way.
DistAnim-3D is a direct descendant of Anim-3D. In addition to
the objects being distributed, it has many additional facilities that
are needed for general-purpose 3D graphical applications, such as
texture mapping, indexed line and polygon sets, choice groups,
projection and transformation callbacks, and picking. Since
DistAnim-3D is embedded in Repo instead of Obliq (see
Section 4.3), the resulting library is called Repo-3D.
4.3 Obliq and Repo
Obliq [8] is a lexically-scoped, untyped, interpreted language for
distributed object-oriented computation. It is implemented in, and
tightly integrated with, Modula-3. An Obliq computation may
involve multiple threads of control within an address space, multiple
address spaces on a machine, heterogeneous machines over a
local network, and multiple networks over the Internet. Obliq uses,
and supports, the Modula-3 thread, exception, and garbage-collection
facilities. Its distributed-computation mechanism is based on
Network Objects, allowing transparent support for multiple
processes on heterogeneous machines. Objects are local to a site,
while computations can roam over the network. Repo [23] is a
descendant of Obliq that extends the Obliq object model to include
replicated objects. Therefore, Repo objects have state that may be
local to a site (as in Obliq) or replicated across multiple sites.
DESIGN OF REPO-3D
Repo-3D's design has two logical parts: the basic design and local
variations. The basic design encompasses the changes to Obliq-3D
to carry it into a distributed context, and additional enhancements
that are not particular to distributed graphics (and are therefore not
discussed here). Local variations are introduced to handle two
issues mentioned in Section 1: transient local changes and responsive
local editing.
Figure 3: The Repo-3D GO class hierarchy. Most of the classes are also in Obliq-3D; the italicized ones were added to Repo-3D. (Classes shown: GO, GroupGO, RootGO, ChoiceGroupGO, CameraGO, OrthoCameraGO, PerspCameraGO, LightGO, AmbientLightGO, VectorLightGO, PointLightGO, SpotLightGO, NonSurfaceGO, LineGO, MarkerGO, TextGO, Text2DGO, IndexedLineSetGO, SurfaceGO, PolygonGO, BoxGO, SphereGO, CylinderGO, DiskGO, TorusGO, QuadMeshGO, IndexedPolygonSetGO.)
Figure 4: The relationship between properties, names, values, and behaviors (Property, Name, Value, Behavior, and Request objects). Each oval represents an object and arrows show containment.
5.1 Basic Repo-3D Design
The Anim-3D scene-graph model is well suited for adaptation to a
distributed environment. First, in Anim-3D, properties are attached
to nodes, not inserted into the graph, and the property and child
lists are unordered (i.e., the order in which properties are assigned
to a node, or children are added to a group, does not affect the final
result). In libraries that insert properties and nodes in the graph and
execute the graph in a well-defined order (such as Inventor), the
siblings of a node (or subtree) can affect the attributes of that node
(or subtree). In Anim-3D, and similar libraries (such as Java 3D),
properties are only inherited down the graph, so a node's properties
are a function of the node itself and its ancestors--its siblings do
not affect it. Therefore, subtrees can be added to different scene
graphs, perhaps in different processes, with predictable results.
Second, the interface (both compiled Anim-3D and interpreted
Obliq-3D) is programmatic and declarative. There is no "graphical scene" file format per se: graphical scenes are created as the
side effect of executing programs that explicitly create objects and
manipulate them via the object methods. Thus, all graphical
objects are stored as the Repo-3D programs that are executed to
create them. This is significant, because by using the Replicated
Object library described in Section 4.1 to make the graphical
objects distributed, the "file format" (i.e., a Repo-3D program) is
updated for free.
Converting Anim-3D objects to Replicated Objects involved
three choices: what objects to replicate, what methods update the
object state, and what the global, replicated state of each object is.
Since replicated objects have more overhead (e.g., method execution
time, memory usage, and latency when passed between
processes), not every category of object in Repo-3D is replicated.
We will consider each of the object categories described in
Section 4.2 in turn: graphical objects (GOs), properties (values,
names, behaviors, animation handles) and callbacks. For each of
these objects, the obvious methods are designated as update methods
, and, as discussed in Section 4.1, the global state of each object
is implicitly determined by those update methods. Therefore, we
will not go into excessive detail about either the methods or the
state. Finally, Repo-3D's support for change notification will be
discussed.
5.1.1 Graphical Objects
GOs are the most straightforward. There are currently twenty-one
different types of GOs, and all but the RootGOs are replicated.
Since RootGOs are associated with an onscreen window, they are
not replicated--window creation remains an active decision of the
local process. Furthermore, if replicated windows are needed, the
general-purpose programming facilities of Repo can be used to
support this in a relatively straightforward manner, outside the
scope of Repo-3D. A GO's state is comprised of the properties
attached to the object, its name, and some other non-inherited
property attributes.5 The methods that modify the property list are
update methods. Group GOs also contain a set of child nodes, and
have update methods that modify that set.
5.1.2 Properties
Properties are more complex. There are far more properties in a
graphical scene than there are graphical objects, they change much
more rapidly, and each property is constructed from a set of
Modula-3 objects. There are currently 101 different properties of
seventeen different types in Repo-3D, and any of them can be
attached to any GO. A typical GO would have anywhere from two
or three (e.g., a BoxGO would have at least two properties to
define its corners) to a dozen or more. And, each of these properties
could be complex: in the example in Section 6, a single
synchronous property for a long animation could have hundreds of
requests enqueued within it.
Consider again the object structure illustrated in Figure 4. A
property is defined by a name and a value, with the value being a
container for a behavior. Only one of the Modula-3 objects is
replicated, the property value. Property values serve as the replicated
containers for property behaviors. To change a property, a
new behavior is assigned to its value. The state of the value is the
current behavior.
Animation handles are also replicated. They tie groups of related
synchronous properties together, and are the basis for the interaction
in the example in Section 6. In Anim-3D, handles have one
animate
method, which starts an animation and blocks until it
finishes. Since update methods are executed everywhere, and block
access to the object while they are being executed, they should not
take an extended period of time. In creating Repo-3D, the
animate
method was changed to call two new methods: an update
method that starts the animation, and a non-update method that
waits for the animation to finish. We also added methods to pause
and resume an animation, to retrieve and change the current relative
time of an animation handle, and to stop an animation early.
The state of an Animation handle is a boolean value that says if it is
active or not, plus the start, end, and current time (if the handle is
paused).
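The split of animate into a quick update method plus a local wait can be sketched roughly as follows; Python threads stand in for the Modula-3 threading facilities, and the names are hypothetical.

```python
import threading

# Sketch of the Repo-3D animation-handle change: the update method only starts
# the animation (so it is cheap to replay on every replica); waiting happens in
# a local, non-update method, so replicas are never blocked for the whole
# duration of the animation.
class AnimationHandle:
    def __init__(self):
        self.active = False
        self.done = threading.Event()

    def start(self, duration):          # update method: replicated, returns quickly
        self.active = True
        self.done.clear()
        threading.Timer(duration, self._finish).start()

    def _finish(self):
        self.active = False
        self.done.set()

    def wait(self):                     # non-update method: purely local blocking
        self.done.wait()

    def animate(self, duration):        # the old blocking API, now a thin wrapper
        self.start(duration)
        self.wait()
```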
Most of the Modula-3 objects that comprise a property are not
replicated, for a variety of reasons:
Properties represent a permanent binding between a property
value and a name. Since they are immutable, they have no synchronization
requirements and can simply be copied between
processes.
Names represent simple constant identifiers, and are therefore
not replicated either.
Behaviors and requests are not replicated. While they can be
modified after being created, they are treated as immutable
data types for two reasons. First, the vast majority of behaviors,
even complex synchronous ones, are not changed once they
have been created and initialized. Thus, there is some justification
for classifying the method calls that modify them as part
of their initialization process. The second reason is practical
and much more significant. Once a scene has been created and
is being "used" by the application, the bulk of the time-critical
changes to it tend to be assignments of new behaviors to the
existing property values. For example, an object is moved by
assigning a new (often constant) behavior to its
GO_Transform
property value. Therefore, the overall performance
of the system depends heavily on the performance of
property value behavior changes. By treating behaviors as
immutable objects, they can simply be copied between
processes without incurring the overhead of the replicated
object system.
5.1.3 Input Callbacks
In Repo-3D, input event callbacks are not replicated. As discussed
in Section 4.2, input events are delivered to the callback stacks of a
RootGO. Callbacks attached to any other object receive input
events only if they are delivered to that object by the programmer,
perhaps recursively from another input event callback (such as the
one attached to the RootGO). Therefore, the interactive behavior of
a root window is defined not only by the callbacks attached to its
RootGO, but also by the set of callbacks associated with the graph
rooted at that RootGO. Since the RootGOs are not replicated, the callbacks that they delegate event handling to are not replicated either. If a programmer wants to associate callbacks with objects as they travel between processes, Repo's general-purpose programming facilities can be used to accomplish this in a straightforward manner.
5. Some attributes of a GO, such as the arrays of Point3D properties that define the vertices of a polygon set, are not attached to the object, but are manipulated through method calls.
5.1.4 Change Notification
The final component of the basic design is support for notification
of changes to distributed objects. For example, when an object's
position changes or a new child is added to a group, some of the
processes containing replicas may wish to react in some way. Fortunately
, as discussed in Section 4.1, the Replicated Object
package automatically generates Notification Object types for all
replicated object types, which provide exactly the required
behavior. The Notification Objects for property values allow a
programmer to be notified of changes to the behavior of a property,
and the Notification Objects for the various GOs likewise allow
notification of updates to them.
5.2 Local Variations
Repo-3D's local variations solve a set of problems particular to the
distributed context in which Repo-3D lives: maintaining interactivity
and supporting local modifications to the shared scene graph.
If the graphical objects and their properties were always strictly
replicated, programmers would have to create local variations by
copying the objects to be modified, creating a set of Notification
Objects on the original objects, the copies of those objects, and all
their properties (to be notified when either change), and reflecting
the appropriate changes between the instances. Unfortunately,
while this process could be automated somewhat, it would still be
extremely tedious and error prone. More seriously, the overhead of
creating this vast array of objects and links between them would
make this approach impractical for short transient changes, such as highlighting an object under the mouse.
Figure 5: Simultaneous images from a session with the distributed CATHI animation viewer, running on four machines, showing an animation of an engine. (a) Plain animation viewer, running on Windows NT. (b) Overview window, running on Windows 95. (c) Animation viewer with local animation meter, running on IRIX. (d) Animation viewer with local transparency to expose hidden parts, running on Solaris.
To overcome this problem, Repo-3D allows the two major
elements of the shared state of the graphical object scene--the
properties attached to a GO and the children of a group--to have
local variations applied to them. (Local variations on property
values or animation handles are not supported, although we are
considering adding support for the latter.)
Conceptually, local state is the state added to each object (the
additions, deletions, and replacements to the properties or
children) that is only accessible to the local copies and is not
passed to remote processes when the object is copied to create a
new replica. The existence of local state is possible because, as
discussed in Section 4.1, the shared state of a replicated object is implicitly defined by the methods that update it.6 Therefore, the
new methods that manipulate the local variations are added to the
GOs as non-update methods. Repo-3D combines both the global
and local state when creating the graphical scene using the underlying
graphics package.
As mentioned above, local variations come in two flavors:
Property variations. There are three methods to set, unset, and
get the global property list attached to a GO. We added the
following methods to manipulate local variations: add or
remove local properties (overriding the value normally used for
the object), hide or reveal properties (causing the property
value of the parent node to be inherited), and flush the set of
local variations (removing them in one step) or atomically
apply them to the global state of the object.
Child variations. There are five methods to add, remove,
replace, retrieve, and flush the set of children contained in a
group node. We added the following ones: add a local node,
remove a global node locally, replace a global node with some
other node locally, remove each of these local variations, flush
the local variations (remove them all in one step), and atomically
apply the local variations to the global state.
This set of local operations supports the problems local variations
were designed to solve, although some possible enhancements are
discussed in Section 7.
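A rough sketch of how the global and local state might be combined when the graphical scene is built, using hypothetical names and plain Python containers, is shown below:

```python
# Sketch of combining replicated (global) state with per-process local
# variations when computing the effective children of a group node.
class GroupState:
    def __init__(self):
        self.global_children = []     # replicated via update methods
        self.local_added = []         # local variations: non-update methods
        self.local_removed = set()
        self.local_replaced = {}      # global child -> local replacement

    def effective_children(self):
        out = []
        for c in self.global_children:
            if c in self.local_removed:
                continue
            out.append(self.local_replaced.get(c, c))
        return out + self.local_added

    def flush_local(self):            # remove all local variations in one step
        self.local_added.clear()
        self.local_removed.clear()
        self.local_replaced.clear()

g = GroupState()
g.global_children = ["engine", "camera_frustum_marker"]
g.local_removed.add("camera_frustum_marker")   # hide a shared node locally
g.local_added.append("animation_meter")        # purely local child
print(g.effective_children())                  # ['engine', 'animation_meter']
```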
EXAMPLE: AN ANIMATION EXAMINER
As an example of the ease of prototyping distributed applications
with Repo-3D, we created a distributed animation examiner for the
CATHI [6] animation generation system. CATHI generates short
informational animation clips to explain the operation of technical
devices. It generates full-featured animation scripts, including
camera and object motion, color and opacity effects, and lighting
setup.
It was reasonably straightforward to modify CATHI to generate
Repo-3D program files, in addition to the GeomView and RenderMan script files it already generated. The resulting output is a
Repo-3D program that creates two scene DAGs: a camera graph
and a scene graph. The objects in these DAGs have synchronous
behaviors specified for their surface and transformation properties.
An entire animation is enqueued in the requests of these behaviors,
lasting anywhere from a few seconds to a few minutes.
We built a distributed, multi-user examiner over the course of a
weekend. The examiner allows multiple users to view the same
animation while discussing it (e.g., via electronic chat or on the
phone). Figure 5 shows images of the examiner running on four
machines, each with a different view of the scene. The first step
was to build a simple "loader" that reads the animation file, creates
a window, adds the animation scene and camera to it, and exports
the animation on the network, requiring less than a dozen lines of
Repo-3D code. A "network" version, that imports the animation
from the network instead of reading it from disk, replaced the lines
of code to read and export the animation with a single line to
import it. Figure 5(a) shows an animation being viewed by one of
these clients.
The examiner program is loaded by both these simple clients, and
is about 450 lines long. The examiner supports:
Pausing and continuing the animation, and changing the
current animation time using the mouse. Since this is done by
operating on the shared animation handle, changes performed
by any viewer are seen by all. Because of the consistency guarantees
, all users can freely attempt to change the time, and the
system will maintain all views consistently.
A second "overview" window<A href="6.html#7"> (Figure 5(b)), where a new
camera watches the animation scene and camera from a distant
view. A local graphical child (representing a portion of the
animation camera's frustum) was added to the shared animation
camera group to let the attributes of the animation camera
be seen in the overview window.
A local animation meter (bottom of Figure 5(c)), that can be
added to any window by pressing a key, and which shows the
current time offset into the animation both graphically and
numerically. It was added in front of the camera in the animation
viewer window, as a local child of a GO in the camera
graph, so that it would be fixed to the screen in the animation
viewer.
Local editing (Figure 5(d)), so that users can select objects and
make them transparent (to better see what was happening in the
animation) or hide them completely (useful on slow machines,
to speed up rendering). Assorted local feedback (highlighting
the object under the mouse and flashing the selected object)
was done with local property changes to the shared GOs in the
scene graph.
Given the attention paid to the design of Repo-3D, it was not
necessary to be overly concerned with the distributed behavior of
the application (we spent no more than an hour or so). Most of that
time was spent deciding if a given operation should be global or a
local variation. The bulk of programming and debugging time was
spent implementing application code. For example, in the overview
window, the representation of the camera moves dynamically,
based on the bounding values of the animation's scene and camera
graphs. In editing mode, the property that flashes the selected node
bases its local color on the current global color (allowing a user
who is editing while an animation is in progress to see any color
changes to the selected node.)
CONCLUSIONS AND FUTURE WORK
We have presented the rationale for, and design of, Repo-3D, a
general-purpose, object-oriented library for developing distributed,
interactive 3D graphics applications across a range of heterogeneous
workstations. By presenting the programmer with the
illusion of a large shared memory, using the Shared Data-Object
model of DSM, Repo-3D makes it easy for programmers to rapidly
prototype distributed 3D graphics applications using a familiar
object-oriented programming paradigm. Both graphical and
general-purpose, non-graphical data can be shared, since Repo-3D
is embedded in Repo, a general-purpose, lexically-scoped, distributed
programming language.
Repo-3D is designed to directly support the distribution of graphical
objects, circumventing the "duplicate database" problem and
allowing programmers to concentrate on the application functionality of a system, rather than its communication or synchronization components. We have introduced a number of issues that must be considered when building a distributed 3D graphics library, especially concerning efficient and clean support for data distribution and local variations of shared graphical scenes, and discussed how Repo-3D addresses them.
6. The local state is not copied when a replicated object is first passed to a new process because the Repo-3D objects have custom serialization routines (or Picklers, in Modula-3 parlance). These routines only pass the global state, and initialize the local state on the receiving side to reasonable default values corresponding to the empty local state.
There are a number of ways in which Repo-3D could be
improved. The most important is the way the library deals with
time. By default, the library assumes all machines are running a
time-synchronization protocol, such as NTP, and uses an internal
animation time offset7 (instead of the system-specific time offset)
because different OSs (e.g., NT vs. UNIX) start counting time at
different dates. Hooks have been provided to allow a programmer
to specify their own function to compute the "current" animation
time offset within a process. Using this facility, it is possible to
build inter-process time synchronization protocols (which we do),
but this approach is not entirely satisfactory given our stated goal
of relieving the programmer of such tedious chores. Future
systems should integrate more advanced solutions, such as adjusting
time values as they travel between machines, so that users of
computers with unsynchronized clocks can collaborate.8 This will become more important as mobile computers increase in popularity, as it may not be practical to keep their clocks synchronized.
7. Computed as an offset from January 1, 1997.
8. Implementation details of the combination of Network and Replicated Objects made it difficult for us to adopt a more advanced solution.
The specification of local variations in Repo-3D could benefit
from adopting the notion of paths (as used in Java 3D and Inventor,
for example). A path is an array of objects leading from the root of
the graph to an object; when an object occurs in multiple places in
one or more scene graphs, paths allow these instances to be differentiated. By specifying local variations using paths, nodes in the
shared scene graphs could have variations within a process as well
as between processes. One other limitation of Repo-3D, arising
from our use of the Replicated Object package, is that there is no
way to be notified when local variations are applied to an object.
Recall that the methods of an automatically generated Notification
Object correspond to the update methods of the corresponding
Replicated Object. Since the methods that manipulate the local
variations are non-update methods (i.e., they do not modify the
replicated state), there are no corresponding methods for them in
the Notification Objects. Of course, it would be relatively straightforward
to modify the Replicated Object package to support this,
but we have not yet found a need for these notifiers.
A more advanced replicated object system would also improve
the library. Most importantly, support for different consistency
semantics would be extremely useful. If we could specify
semantics such as "all updates completely define the state of an
object, and only the last update is of interest," the efficiency of the
distribution of property values would improve significantly; in this
case, updates could be applied (or discarded) when they arrive,
without waiting for all previous updates to be applied, and could be
applied locally without waiting for the round trip to the sequencer.
There are also times when it would be useful to have support for
consistency across multiple objects, either using causal ordering
(as provided by systems such as ISIS and Visual-Obliq), or some
kind of transaction protocol to allow large groups of changes to be
applied either as a unit, or not at all. It is not clear how one would
provide these features with a replicated object system such as the
one used here.
While a library such as Repo-3D could be built using a variety of
underlying platforms, the most likely one for future work is Java.
Java shares many of the advantages of Modula-3 (e.g., threads and
garbage collection are common across all architectures) and the
packages needed to create a Repo-3D-like toolkit are beginning to
appear. While Java does not yet have a replicated object system as
powerful as the Replicated Object package, a package such as
JSDT [36] (which focuses more on data communication than high-level
object semantics) may be a good starting point. Work is also
being done on interpreted, distributed programming languages on
top of Java (e.g., Ambit [9]). Finally, Java 3D is very similar to
Anim-3D, even though its design leans toward efficiency instead of
generality when there are trade-offs to be made. For example, the
designers chose to forgo Anim-3D's general property inheritance
mechanism because it imposes computational overhead. By combining
packages such as Java 3D, JSDT, and Ambit, it should be
possible to build a distributed graphics library such as Repo-3D in
Java.
Acknowledgments
We would like to thank the reviewers for their helpful comments,
as well as the many other people who have contributed to this
project. Andreas Butz ported CATHI to use Repo-3D and helped
with the examples and the video. Clifford Beshers participated in
many lively discussions about the gamut of issues dealing with
language-level support for 3D graphics. Tobias Höllerer and
Steven Dossick took part in many other lively discussions. Xinshi
Sha implemented many of the extensions to Obliq-3D that went
into Repo-3D. Luca Cardelli and Marc Najork of DEC SRC
created Obliq and Obliq-3D, and provided ongoing help and
encouragement over the years that Repo and Repo-3D have been
evolving.
This research was funded in part by the Office of Naval Research
under Contract N00014-97-1-0838 and the National Tele-Immersion
Initiative, and by gifts of software from Critical Mass and
Microsoft.
References
[1]
D. B. Anderson, J. W. Barrus, J. H. Howard, C. Rich, C. Shen, and
R. C. Waters. Building Multi-User Interactive Multimedia Environments
at MERL. Technical Report TR95-17, Mitsubishi Electric Research Laboratory, November 1995.
[2] H. Bal, M. Kaashoek, and A. Tanenbaum. Orca: A Language for Parallel Programming of Distributed Systems. IEEE Transactions on Software Engineering, 18(3):190-205, March 1992.
[3]
K. Bharat and L. Cardelli. Migratory Applications. In ACM UIST '95,
pages 133-142, November 1995.
[4] K. P. Birman. The Process Group Approach to Reliable Distributed Computing. CACM, 36(12):36-53, Dec 1993.
[5] A. Birrell, G. Nelson, S. Owicki, and E. Wobber. Network Objects. In Proc. 14th ACM Symp. on Operating Systems Principles, 1993.
[6] A. Butz. Animation with CATHI. In Proceedings of AAAI/IAAI '97, pages 957-962, 1997.
[7] J. Calvin, A. Dickens, B. Gaines, P. Metzger, D. Miller, and D. Owen. The SIMNET Virtual World Architecture. In Proc. IEEE VRAIS '93, pages 450-455, Sept 1993.
[8] L. Cardelli. A Language with Distributed Scope. Computing Systems, 8(1):27-59, Jan 1995.
[9] L. Cardelli and A. Gordon. Mobile Ambients. In Foundations of Software Science and Computational Structures, Maurice Nivat (Ed.), LNCS 1378, Springer, pages 140-155, 1998.
[10]
R. Carey and G. Bell. The Annotated VRML 2.0 Reference Manual.
Addison-Wesley, Reading, MA, 1997.
[11] C. Carlsson and O. Hagsand. DIVE--A Multi-User Virtual Reality System. In Proc. IEEE VRAIS '93, pages 394-400, Sept 1993.
[12] C. F. Codella, R. Jalili, L. Koved, and J. B. Lewis. A Toolkit for Developing Multi-User, Distributed Virtual Environments. In Proc. IEEE VRAIS '93, pages 401-407, Sept 1993.
[13] C. Elliott, G. Schechter, R. Yeung, and S. Abi-Ezzi. TBAG: A High Level Framework for Interactive, Animated 3D Graphics Applications. In Proc. ACM SIGGRAPH 94, pages 421-434, August 1994.
[14] M. Fairen and A. Vinacua. ATLAS, A Platform for Distributed Graphics Applications. In Proc. VI Eurographics Workshop on Programming Paradigms in Graphics, pages 91-102, September 1997.
[15] S. Feiner, B. MacIntyre, M. Haupt, and E. Solomon. Windows on the World: 2D Windows for 3D Augmented Reality. In Proc. ACM UIST '93, pages 145-155, 1993.
[16] T. A. Funkhouser. RING: A Client-Server System for Multi-User Virtual Environments. In Proc. 1995 ACM Symp. on Interactive 3D Graphics, pages 85-92, March 1995.
[17] G. Grimsdale. dVS--Distributed Virtual Environment System. In
Proc. Computer Graphics '91 Conference, 1991.
[18] S. P. Harbison. Modula-3. Prentice-Hall, 1992.
[19] H. W. Holbrook, S. K. Singhal, and D. R. Cheriton. Log-Based Receiver-Reliable Multicast for Distributed Interactive Simulation. In Proc. ACM SIGCOMM '95, pages 328-341, 1995.
[20] W. Levelt, M. Kaashoek, H. Bal, and A. Tanenbaum. A Comparison of Two Paradigms for Distributed Shared Memory. Software Practice and Experience, 22(11):985-1010, Nov 1992.
[21]
B. Lucas. A Scientific Visualization Renderer. In Proc. IEEE
Visualization '92, pp. 227-233, October 1992.
[22]
V. Machiraju, A Framework for Migrating Objects in Distributed
Graphics Applications, Masters Thesis, University of Utah, Department
of Computer Science, Salt Lake City, UT, June, 1997.
[23] B. MacIntyre. Repo: Obliq with Replicated Objects. Programmers Guide and Reference Manual. Columbia University Computer Science Department Research Report CUCS-023-97, 1997.
[24] B. MacIntyre and S. Feiner. Language-level Support for Exploratory Programming of Distributed Virtual Environments. In Proc. ACM UIST '96, pages 83-94, Seattle, WA, November 6-8, 1996.
[25] M. A. Najork and M. H. Brown. Obliq-3D: A High-level, Fast-turnaround 3D Animation System. IEEE Transactions on Visualization and Computer Graphics, 1(2):175-193, June 1995.
[26]
R. Ben-Natan. CORBA: A Guide to the Common Object Request
Broker Architecture, McGraw Hill, 1995.
[27]
D. Phillips, M. Pique, C. Moler, J. Torborg, D. Greenberg. Distributed
Graphics: Where to Draw the Lines? Panel Transcript,
SIGGRAPH 89, available at:
http://www.siggraph.org:443/publications/panels/siggraphi89/
[28] A. Prakash and H. S. Shim. DistView: Support for Building Efficient Collaborative Applications Using Replicated Objects. In Proc. ACM CSCW '94, pages 153-162, October 1994.
[29] J. Rohlf and J. Helman. IRIS Performer: A High Performance Multiprocessing Toolkit for Real-Time 3D Graphics. In Proc. ACM SIGGRAPH 94, pages 381-394, 1994.
[30] M. Roseman and S. Greenberg. Building Real-Time Groupware with GroupKit, a Groupware Toolkit. ACM Transactions on Computer-Human Interaction, 3(1):66-106, March 1996.
[31] C. Shaw and M. Green. The MR Toolkit Peers Package and Experiment. In Proc. IEEE VRAIS '93, pages 18-22, Sept 1993.
[32] G. Singh, L. Serra, W. Png, A. Wong, and H. Ng. BrickNet: Sharing Object Behaviors on the Net. In Proc. IEEE VRAIS '95, pages 19-25, 1995.
[33]
H. Sowizral, K. Rushforth, and M. Deering. The Java 3D API
Specification, Addison-Wesley, Reading, MA, 1998.
[34] M. Stefik, G. Foster, D. G. Bobrow, K. Kahn, S. Lanning, and L. Suchman. Beyond The Chalkboard: Computer Support for Collaboration and Problem Solving in Meetings. CACM, 30(1):32-47, January 1987.
[35] P. S. Strauss and R. Carey. An Object-Oriented 3D Graphics Toolkit. In Computer Graphics (Proc. ACM SIGGRAPH 92), pages 341-349, Aug 1992.
[36]
Sun Microsystems, Inc. The Java Shared Data Toolkit, 1998.
Unsupported software, available at:
http://developer.javasoft.com/developer/earlyAccess/jsdt/
[37] I. Tou, S. Berson, G. Estrin, Y. Eterovic, and E. Wu. Prototyping Synchronous Group Applications. IEEE Computer, 27(5):48-56, May 1994.
[38]
R. Waters and D. Anderson. The Java Open Community Version 0.9
Application Program Interface. Feb, 1997. Available online at:
http://www.merl.com/opencom/opencom-java-api.html
[39] A. Wollrath, R. Riggs, and J. Waldo. A Distributed Object Model for the Java System. In Proc. USENIX COOTS '96, pages 219-231, July 1996.
[40] R. Zeleznik, D. Conner, M. Wloka, D. Aliaga, N. Huang, P. Hubbard, B. Knep, H. Kaufman, J. Hughes, and A. van Dam. An Object-oriented Framework for the Integration of Interactive Animation Techniques. In Computer Graphics (SIGGRAPH '91 Proceedings), pages 105-112, July 1991.
[41] M. J. Zyda, D. R. Pratt, J. G. Monahan, and K. P. Wilson. NPSNET:
Constructing a 3D Virtual World. In Proc. 1992 ACM Symp. on
Interactive 3D Graphics, pages 147156, Mar. 1992. | Data sharing;programming language;Distributed graphics;Data structures;Interactive graphical application;3D graphics library;Change notification;Library;Replicated object;Object representation;distributed virtual environments;Shared memory;Syncronisation;data distribution;object-oriented graphics;Java;Programming;duplicate database;local variations;multi-threaded programming;Heterogeneous workstation;Multi-user interaction;3D graphics application;Repo-3D;3D graphics;Callbacks;Prototyping;Distributed systems;Client-server approach;object-oriented library;Programming language;shared-data object model;prototype;Graphical objects;Client-Server;Object-oriented;Local variation;3D Graphics;Object oriented;Distributed applications;distributed shared memory;Distributed processes;Properties |
60 | Coupling Feature Selection and Machine Learning Methods for Navigational Query Identification | It is important yet hard to identify navigational queries in Web search due to a lack of sufficient information in Web queries, which are typically very short. In this paper we study several machine learning methods, including naive Bayes model, maximum entropy model, support vector machine (SVM), and stochastic gradient boosting tree (SGBT), for navigational query identification in Web search. To boost the performance of these machine techniques, we exploit several feature selection methods and propose coupling feature selection with classification approaches to achieve the best performance. Different from most prior work that uses a small number of features, in this paper, we study the problem of identifying navigational queries with thousands of available features, extracted from major commercial search engine results, Web search user click data, query log, and the whole Web's relational content. A multi-level feature extraction system is constructed. Our results on real search data show that 1) Among all the features we tested, user click distribution features are the most important set of features for identifying navigational queries. 2) In order to achieve good performance, machine learning approaches have to be coupled with good feature selection methods. We find that gradient boosting tree, coupled with linear SVM feature selection is most effective. 3) With carefully coupled feature selection and classification approaches, navigational queries can be accurately identified with 88.1% F1 score, which is 33% error rate reduction compared to the best uncoupled system, and 40% error rate reduction compared to a well tuned system without feature selection. | INTRODUCTION
Nowadays, Web search has become the main method for
information seeking. Users may have a variety of intents
while performing a search. For example, some users may
already have in mind the site they want to visit when they
type a query; they may not know the URL of the site or
may not want to type in the full URL and may rely on the
search engine to bring up the right site. Yet others may have
no idea of what sites to visit before seeing the results. The
information they are seeking normally exists on more than
one page.
Knowing the different intents associated with a query may
dramatically improve search quality. For example, if a query
is known to be navigational, we can improve search results
by developing a special ranking function for navigational
queries. The presentation of the search results or the user-perceived
relevance can also be improved by only showing
the top results and reserving the rest of space for other purposes
since users only care about the top result of a navigational
query. According to our statistics, about 18% of
queries in Web search are navigational (see Section 6). Thus,
correctly identifying navigational queries has a great potential
to improve search performance.
Navigational query identification is not trivial due to a
lack of sufficient information in Web queries, which are normally
short. Recently, navigational query identification, or
more broadly query classification, is drawing significant attention
. Many machine learning approaches that have been
used in general classification framework, including naive Bayes
classifier, maximum entropy models, support vector machines
, and gradient boosting tree, can be directly applied
here. However, each of these approaches has its own advantages
that suit certain problems. Due to the characteristics
of navigational query identification (more to be addressed
in Section 2), it is not clear which one is the best for the
task of navigational query identification. Our first contribution
in this paper is to evaluate the effectiveness of these
machine learning approaches in the context of navigational
query identification. To our knowledge, this paper is the
very first attempt in this regard.
Machine learning models often suffer from the curse of
feature dimensionality. Feature selection plays a key role
in many tasks, such as text categorization [18]. In this paper
, our second contribution is to evaluate several feature
selection methods and propose coupling feature selection
with classification approaches to achieve the best performance
: ranking features by using one algorithm before another
method is used to train the classifier. This approach is
especially useful when redundant low quality heterogeneous
features are encountered.
Most previous studies in query identification are based on
a small number of features that are obtained from limited
resources [12]. In this paper, our third contribution is to
explore thousands of available features, extracted from major
commercial search engine results, user Web search click
data, query log, and the whole Web's relational content. To
obtain most useful features, we present a three level system
that integrates feature generation, feature integration, and
feature selection in a pipe line.
The system, after coupling features selected by SVM with
a linear kernel and stochastic gradient boosting tree as classification
training method, is able to achieve an average performance
of 88.1% F1 score in a five fold cross-validation.
The rest of this paper is organized as follows. In the next
section, we will define the problem in more detail and describe
the architecture of our system. We then present a
multi-level feature extraction system in Section 3. We describe
four classification approaches in Section 4 and three
feature selection methods in Section 5. We then conduct
extensive experiments on real search data in Section 6. We
present detailed discussions in Section 7. We discuss some
related work in Section 8. Finally, we conclude the paper in
Section 9.
PROBLEM DEFINITION
We divide queries into two categories: navigational and
informational. According to the canonical definition [3, 14],
a query is navigational if a user already has a Web-site in
mind and the goal is simply to reach that particular site.
For example, if a user issues query "amazon", he/she mainly
wants to visit "amazon.com". This definition, however, is
rather subjective and not easy to formalize. In this paper,
we extend the definition of navigational query to a more
general case: a query is navigational if it has one and only
one perfect site in the result set corresponding to this query.
A site is considered as perfect if it contains complete information
about the query and lacks nothing essential.
In our definition, a navigational query must have a corresponding
result page that conveys perfectness, uniqueness,
and authority.
Unlike Broder's definition, our definition
does not require the user to have a site in mind. This makes
data labeling more objective and practical. For example,
when a user issues a query "Fulton, NY", it is not clear
if the user knows the Web-site "www.fultoncountyny.org".
However, this Web-site has a unique authority and perfect
content for this query and therefore the query "Fulton,
NY" is labeled as a navigational query. All non-navigational
queries are considered informational. For an informational
query, typically there exist multiple excellent Web-sites corresponding
to the query that users are willing to explore.
To give another example, in our dataset, query "national
earth science teachers association" has only one perfect corresponding
URL "http://www.nestanet.org/" and therefore
is labeled as a navigational query. The query "Canadian gold maple leaf" has several excellent corresponding URLs, including "http://www.goldfingercoin.com/catalog gold/canadian maple leaf.htm", "http://coins.about.com/library/weekly/aa091802a.htm", and "http://www.onlygold.com/Coins/CanadianMapleLeafsFullScreen.asp". Therefore, the query "Canadian gold maple leaf" is labeled as a non-navigational query.
Figure 1 illustrates the architecture of our navigational
query identification system. A search engine takes in a query
and returns a set of URLs. The query and returned URLs
are sent into a multi-level feature extraction system that
generates and selects useful features; details are presented
in the next section. Selected features are then input into a
machine learning tool to learn a classification model.
MULTI-LEVEL FEATURE EXTRACTION
The multiple level feature system is one of the unique
features of our system. Unlike prior work with a limited
number of features or in a simulated environment [11, 12],
our work is based on real search data, a major search engine's user click information, and a query log. In order to handle a large number of heterogeneous features in an efficient way, we propose a multi-level feature system. The first
level is the feature generation level that calculates statistics
and induces features from three resources: a click engine,
a Web-map and a query log. The second level is responsible
for integrating query-URL pair-wise features into query
features by applying various functions. The third level is
a feature selection module, which ranks features by using
different methods. Below we present the details of the first
two levels. The third level will be presented separately in
Section 5 since those feature selection methods are standard.
3.1
Feature Generation
Queries are usually too short and lack sufficient context
to be classified. Therefore, we have to generate more features
from other resources. We use three resources to generate
features: a click engine, a Web-map, and query logs.
The click engine is a device to record and analyze user click
behavior. It is able to generate hundreds of features automatically
based on user click through distributions [16]. A
Web-map can be considered as a relational database that
stores hundreds of induced features on page content, anchor
text, and hyperlink structure of webpages, including the inbound and outbound URLs, etc. Query logs are able to
provide bag-of-words features and various language model
based features based on all the queries issued by users over
a period of time.
The input to the feature generation module is a query-URL pair. For each query, the top 100 URLs are recorded and 100 query-URL pairs are generated. Thus for each query-URL pair,
we record a total of 197 features generated from the following
four categories:
Click features: Click features record the click information
about a URL. We generate a total number of 29
click features for each query-URL pair. An example of
a click feature is the click ratio (CR). Let $n^i_k$ denote the number of clicks on URL $k$ for query $i$, and let the total number of clicks for query $i$ be $n^i = \sum_k n^i_k$.
Figure 1: Diagram of the result-set-based navigational query identification system: a search engine answers the query; features are generated from a Web-map, a click engine, and a query log; query-URL features are integrated (entropy, max, min, ...) into query-level features; the feature selection module (information gain, SVM feature ranking, boosting feature selection) picks the selected features; and the classification module (SGBT, naive Bayes, MaxEnt, SVM) trains the classifier.
The click ratio is the ratio of the number of clicks on a particular URL $K$ for query $i$ to the total number of clicks for this query, which has the form $CR(i, K) = n^i_K / n^i$ (a small computational sketch of these example features follows the last category below).
URL features: URL features measure the characteristics
of the URL itself. There are 24 URL based features
in total. One such feature is a URL match feature,
named urlmr, which is defined as
$\mathrm{urlmr} = l(p)/l(u)$, where $l(p)$ is the length of the longest substring $p$ of the query that is present in the URL and $l(u)$ is the length of the URL $u$. This feature is based on the observation that Web sites tend to use their names in their URLs; such a match confers uniqueness and authority.
Anchor text features: Anchor text is the visible text in
a hyperlink, which also provides useful information for
navigational query identification. For example, one anchor
text feature is the entropy of anchor link distribution
[12]. This distribution is basically the histogram
of inbound anchor text of the destination URL. If an
URL is pointed to by the same anchor texts, the URL
is likely to contain perfect content. There are many
other anchor text features that are calculated by considering
many factors, such as edit distance between
query and anchor texts, diversity of the hosts, etc. In
total, there are 63 features derived from anchor text.
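As a reading aid, the following Python sketch computes the three example features above (click ratio, urlmr, and anchor-text entropy) for a single query-URL pair; the exact definitions used in the production system may differ, and all names here are our own.

```python
import math
from collections import Counter

def click_ratio(clicks_per_url: dict, url: str) -> float:
    """CR(i, K): clicks on this URL divided by all clicks for the query."""
    total = sum(clicks_per_url.values())
    return clicks_per_url.get(url, 0) / total if total else 0.0

def urlmr(query: str, url: str) -> float:
    """Length of the longest query substring present in the URL / URL length."""
    q, u = query.lower(), url.lower()
    best = 0
    for i in range(len(q)):
        for j in range(i + best + 1, len(q) + 1):   # only try to beat `best`
            if q[i:j] in u:
                best = j - i
            else:
                break                                # longer extensions cannot match
    return best / len(u) if u else 0.0

def anchor_entropy(anchor_texts: list) -> float:
    """Entropy of the inbound anchor-text distribution of the destination URL."""
    counts = Counter(anchor_texts)
    n = sum(counts.values())
    return -sum((c / n) * math.log2(c / n) for c in counts.values()) if n else 0.0

print(urlmr("walmart", "http://www.walmart.com/"))   # high for navigational hits
```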
Since we record the top 100 results for each query and
each query-URL pair has 197 features, in total there are 19,700 features available for each query. Feature reduction becomes necessary due to the curse of dimensionality [5]. Before
applying feature selection, we conduct a feature integration
procedure that merges redundant features.
3.2
Feature Integration
We design a feature integration operator, named the normalized ratio $r_k$ of rank $k$, as follows:
$$r_k(f_j) = \frac{\max(f_j) - f_{jk}}{\max(f_j) - \min(f_j)}, \quad k = 2, 5, 10, 20. \qquad (1)$$
The design of this operator is motivated by the observation
that the values of query-URL features for navigational
query and informational query decrease at different
rates. Taking the urlmr feature for example and considering
a navigational query "Walmart" and an informational
query "Canadian gold maple leaf", we plot the feature values
of the top 100 URLs for both queries, as shown in Figure 2. As we can see, the feature value for the navigational query drops quickly to a stable point, while that for the informational query does not stabilize. As we will see in the experiment section, this
operator is most effective in feature reduction.
Besides this operator, we use other statistics for feature
integration, including the mean, median, maximum, minimum, entropy, standard deviation, and the values in the top five positions of the result-set query-URL pair features. In total, we now have 15 measurements instead of 100 for the top 100 URLs for each query. Therefore, for each query, the dimension of the feature vector is $m = 15 \times 197 = 2955$, which is much smaller than 19,700.
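One possible implementation of this integration step, sketched in Python/NumPy under the assumption that the per-URL values of one feature are given in rank order, is:

```python
import numpy as np

def integrate_feature(values):
    """Collapse one feature's values over the top-100 ranked URLs into the
    15 query-level measurements: Eq. (1) for k in {2,5,10,20}, six summary
    statistics, and the values at the top five positions."""
    f = np.asarray(values, dtype=float)
    fmax, fmin = f.max(), f.min()
    span = (fmax - fmin) or 1.0                      # guard against a flat feature
    norm_ratio = {f"r{k}": (fmax - f[k - 1]) / span  # Eq. (1); ranks are 1-based
                  for k in (2, 5, 10, 20)}
    absf = np.abs(f) + 1e-12                         # one choice of entropy over values
    p = absf / absf.sum()
    stats = {"mean": f.mean(), "median": float(np.median(f)), "max": fmax,
             "min": fmin, "std": f.std(),
             "entropy": float(-(p * np.log2(p)).sum())}
    top5 = {f"top{i + 1}": f[i] for i in range(5)}
    return {**norm_ratio, **stats, **top5}           # 4 + 6 + 5 = 15 values

print(len(integrate_feature(np.linspace(0.4, 0.0, 100))))   # 15
```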
CLASSIFICATION METHODS
We apply the most popular generative (such as naive Bayes
method), descriptive (such as Maximum Entropy method),
and discriminative (such as support vector machine and
stochastic gradient boosting tree) learning methods [19] to
attack the problem.
Figure 2: The urlmr query-URL feature over the top 100 ranked results for the navigational query "Walmart" (upper) and the informational query "Canadian gold maple leaf" (lower).
4.1
Naive Bayes Classifier
A simple yet effective learning algorithm for classification
is based on a simple application of Bayes' rule:
$$P(y \mid q) = \frac{P(y)\,P(q \mid y)}{P(q)} \qquad (2)$$
In query classification, a query $q$ is represented by a vector of $K$ attributes $q = (v_1, v_2, \ldots, v_K)$. Computing $P(q \mid y)$ in this case is not trivial, since the space of possible attribute vectors $q = (v_1, v_2, \ldots, v_K)$ is vast. To simplify this computation, the naive Bayes model introduces the additional assumption that all of the attribute values $v_j$ are independent given the category label $y$; that is, for $i \neq j$, $v_i$ and $v_j$ are conditionally independent given $y$. This assumption greatly simplifies the computation by reducing Eq. (2) to
$$P(y \mid q) = \frac{P(y)\,\prod_{j=1}^{K} P(v_j \mid y)}{P(q)} \qquad (3)$$
Based on Eq. (3), a maximum a posteriori (MAP) classifier can be constructed by seeking the optimal category which maximizes the posterior $P(y \mid q)$:
$$y^{*} = \arg\max_{y \in Y}\Big\{ P(y) \prod_{j=1}^{K} P(v_j \mid y) \Big\} \qquad (4)$$
$$\;\;\;\;= \arg\max_{y \in Y}\Big\{ \prod_{j=1}^{K} P(v_j \mid y) \Big\} \qquad (5)$$
Eq. (5) is called the maximum likelihood naive Bayes classifier
, obtained by assuming a uniform prior over categories.
To cope with features that remain unobserved during training, the estimate of $P(v_j \mid y)$ is usually adjusted by Laplace smoothing:
$$P(v_j \mid y) = \frac{N_y^j + a_j}{N_y + a} \qquad (6)$$
where $N_y^j$ is the frequency of attribute $j$ in $D_y$, $N_y = \sum_j N_y^j$, and $a = \sum_j a_j$. A special case of Laplace smoothing is add-one smoothing, obtained by setting $a_j = 1$. We use add-one smoothing in our experiments below.
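For concreteness, a small NumPy implementation of Eqs. (3)-(6) with add-one smoothing over discretized attribute values is given below; it is a didactic sketch rather than the classifier used in the experiments, and assumes the continuous features have already been binned into integer codes.

```python
import numpy as np

class NaiveBayes:
    """Naive Bayes over discrete attribute values with add-one smoothing."""
    def fit(self, X, y):
        self.classes = np.unique(y)
        self.n_values = X.max() + 1                   # attribute values coded 0..V-1
        n_feat = X.shape[1]
        self.log_prior, self.log_cond = {}, {}
        for c in self.classes:
            Xc = X[y == c]
            self.log_prior[c] = np.log(len(Xc) / len(X))
            counts = np.ones((n_feat, self.n_values))  # a_j = 1 (add-one) smoothing
            for j in range(n_feat):
                vals, cnt = np.unique(Xc[:, j], return_counts=True)
                counts[j, vals] += cnt
            self.log_cond[c] = np.log(counts / counts.sum(axis=1, keepdims=True))
        return self

    def predict(self, X):
        scores = np.column_stack([
            self.log_prior[c]
            + self.log_cond[c][np.arange(X.shape[1]), X].sum(axis=1)
            for c in self.classes])
        return self.classes[np.argmax(scores, axis=1)]

X = np.array([[0, 1], [0, 1], [1, 0], [1, 1]])
y = np.array([1, 1, 0, 0])
print(NaiveBayes().fit(X, y).predict(np.array([[0, 1]])))   # [1]
```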
4.2
Maximum Entropy Classifier
Maximum entropy is a general technique for estimating
probability distributions from data and has been successfully
applied in many natural language processing tasks.
The overriding principle in maximum entropy is that when
nothing is known, the distribution should be as uniform as
possible, that is, have maximal entropy [9]. Labeled training
data are used to derive a set of constraints for the model
that characterize the class-specific expectations for the distribution
. Constraints are represented as expected values
of features. The improved iterative scaling algorithm finds
the maximum entropy distribution that is consistent with
the given constraints. In query classification scenario, maximum
entropy estimates the conditional distribution of the
class label given a query. A query is represented by a set
of features. The labeled training data are used to estimate
the expected value of these features on a class-by-class basis.
Improved iterative scaling finds a classifier of an exponential
form that is consistent with the constraints from the labeled
data.
It can be shown that the maximum entropy distribution
is always of the exponential form [4]:
$$P(y \mid q) = \frac{1}{Z(q)} \exp\Big(\sum_i \lambda_i f_i(q; y)\Big)$$
where each $f_i(q; y)$ is a feature, $\lambda_i$ is a parameter to be estimated, and $Z(q)$ is simply the normalizing factor to ensure a proper probability: $Z(q) = \sum_y \exp\big(\sum_i \lambda_i f_i(q; y)\big)$.
Learning of the parameters can be done using generalized
iterative scaling (GIS), improved iterative scaling (IIS), or
quasi-Newton gradient-climber [13].
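Since a conditional maximum entropy model over real-valued features is equivalent to (multinomial) logistic regression, a quick sketch can rely on an off-the-shelf solver rather than GIS/IIS; the use of scikit-learn and the random placeholder data below are our illustration only, not the authors' toolkit.

```python
# Sketch: a maximum entropy (softmax / logistic regression) classifier over the
# integrated query features; the LBFGS solver plays the role of the iterative
# scaling procedures mentioned above.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2955))          # 2955 integrated features per query
y = rng.integers(0, 2, size=200)          # 1 = navigational, 0 = informational

maxent = LogisticRegression(max_iter=1000)
maxent.fit(X, y)
print(maxent.predict_proba(X[:3]))        # columns: P(informational), P(navigational)
```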
4.3
Support Vector Machine
Support Vector Machine (SVM) is one of the most successful
discriminative learning methods. It seeks a hyperplane
to separate a set of positively and negatively labeled
training data. The hyperplane is defined by $w^T x + b = 0$, where the parameter $w \in \mathbb{R}^m$ is a vector orthogonal to the hyperplane and $b \in \mathbb{R}$ is the bias. The decision function is the hyperplane classifier $H(x) = \mathrm{sign}(w^T x + b)$. The hyperplane is designed such that $y_i (w^T x_i + b) \geq 1 - \xi_i$, $i = 1, \ldots, N$, where $x_i \in \mathbb{R}^m$ is a training data point and $y_i \in \{+1, -1\}$ denotes the class of the vector $x_i$. The margin is defined by the distance between the two parallel hyperplanes $w^T x + b = 1$ and $w^T x + b = -1$, i.e., $2/\|w\|_2$.
The margin is related to the generalization of the classifier
[17]. The SVM training problem is defined as follows:
$$\begin{array}{ll} \text{minimize} & \tfrac{1}{2}\, w^T w + \lambda\,\mathbf{1}^T \xi \\ \text{subject to} & y_i (w^T x_i + b) \geq 1 - \xi_i, \; i = 1, \ldots, N \\ & \xi \geq 0 \end{array} \qquad (7)$$
where the scalar $\lambda$ is called the regularization parameter, and is usually empirically selected to reduce the testing error rate.
The basic SVM formulation can be extended to the nonlinear
case by using nonlinear kernels.
Interestingly, the
complexity of an SVM classifier representation does not depend
on the number of features, but rather on the number of
support vectors (the training examples closest to the hyperplane
). This property makes SVMs suitable for high dimensional
classification problems [10]. In our experimentation,
we use a linear SVM and a SVM with radial basis kernel.
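A minimal sketch of how the two SVM variants used in this paper could be trained, assuming scikit-learn as the implementation (the original experiments may have used different software; the data below is synthetic and the regularization value is arbitrary).

import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))                      # 200 samples, 10 features
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)       # synthetic labels

linear_svm = SVC(kernel="linear", C=1.0).fit(X, y)  # C acts as the regularization
rbf_svm = SVC(kernel="rbf", C=1.0).fit(X, y)        # parameter in (7)

print("support vectors (linear):", linear_svm.n_support_.sum())
print("support vectors (rbf):   ", rbf_svm.n_support_.sum())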
4.4 Gradient Boosting Tree
Like SVM, the gradient boosting tree model also seeks a parameterized classifier. It iteratively fits an additive model [8]
f_T(x) = T_0(x; \theta_0) + \sum_{t=1}^{T} \beta_t T_t(x; \theta_t),
such that a certain loss function L(y_i, f_T(x_i)) is minimized, where T_t(x; \theta_t) is a tree at iteration t, weighted by parameter \beta_t, with a finite number of parameters \theta_t, together with a learning rate. At iteration t, tree T_t(x; \theta) is induced to fit the negative gradient by least squares. That is
\hat{\theta} := \arg\min_\theta \sum_{i}^{N} ( -G_{it} - \beta_t T_t(x_i; \theta) )^2,
where G_{it} is the gradient over the current prediction function
G_{it} = [ \partial L(y_i, f(x_i)) / \partial f(x_i) ]_{f = f_{t-1}}.
The optimal weights of the trees, \beta_t, are determined by
\beta_t = \arg\min_\beta \sum_{i}^{N} L(y_i, f_{t-1}(x_i) + \beta T(x_i; \theta)).
If the L-2 loss function [y_i - f(x_i)]^2 / 2 is used, we have the gradient G(x_i) = -y_i + f(x_i). In this paper, the Bernoulli loss function
-2 \sum_i ( y_i f(x_i) - log(1 + exp(f(x_i))) )
is used and the gradient has the form
G(x_i) = y_i - 1 / (1 + exp(-f(x_i))).
During each iteration of gradient boosting, the feature space is further partitioned. This kind of rectangular partition does not require any data preprocessing and the resulting classifier can be very robust. However, it may suffer from the dead zone phenomenon, where the prediction is not able to change with the features, due to its discrete partitioning of the feature space. Friedman (2002) found that it helps performance to sample uniformly without replacement from the dataset before estimating the next gradient step [6]. This method was called stochastic gradient boosting.
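The following illustrative Python sketch (not the authors' implementation; tree depth, learning rate, and the subsampling fraction are arbitrary choices) shows the stochastic gradient boosting loop with the Bernoulli-loss gradient given above, fitting each tree to the negative gradient by least squares on a random subsample drawn without replacement.

import numpy as np
from sklearn.tree import DecisionTreeRegressor

def sgbt_fit(X, y, n_trees=50, learning_rate=0.1, subsample=0.5, seed=0):
    # Stochastic gradient boosting with the Bernoulli loss; y takes values in {0, 1}.
    rng = np.random.default_rng(seed)
    f = np.zeros(len(y))                       # current prediction f_t(x)
    trees = []
    for _ in range(n_trees):
        residual = y - 1.0 / (1.0 + np.exp(-f))           # negative gradient
        idx = rng.choice(len(y), size=int(subsample * len(y)), replace=False)
        tree = DecisionTreeRegressor(max_depth=3)
        tree.fit(X[idx], residual[idx])        # least-squares fit to the gradient
        f += learning_rate * tree.predict(X)
        trees.append(tree)
    return trees, f

X = np.random.default_rng(1).normal(size=(300, 5))
y = (X[:, 0] - X[:, 2] > 0).astype(float)
trees, scores = sgbt_fit(X, y)
print("training accuracy:", np.mean((scores > 0) == (y == 1)))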
FEATURE SELECTION
Many methods have been used in feature selection for
text classification, including information gain, mutual information
, document frequency thresholding, and Chi-square
statistics. Yang and Pedersen [18] give a good comparison
of these methods. Information gain is one of the most
effective methods in the context of text categorization. In
addition to information gain, we also use feature selection
methods based on SVM's feature coefficients and stochastic
gradient boosting tree's variable importance.
5.1 Information Gain
Information gain is frequently used as a measure of feature goodness in text classification [18]. It measures the number of bits of information obtained for category prediction by knowing the presence or absence of a feature. Let {y_i : i = 1..m} be the set of categories; the information gain of a feature f is defined as
IG(f) = - \sum_{i=1}^{m} P(y_i) log P(y_i) + P(f) \sum_{i=1}^{m} P(y_i | f) log P(y_i | f) + P(\bar{f}) \sum_{i=1}^{m} P(y_i | \bar{f}) log P(y_i | \bar{f})
where \bar{f} indicates that f is not present. We compute the information gain for each unique feature and select the top ranked features.
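A direct transcription of this definition for a binary (presence/absence) feature, with toy data invented for the example:

import math

def entropy(probs):
    return -sum(p * math.log2(p) for p in probs if p > 0)

def information_gain(labels, feature_present):
    # IG(f) = H(Y) - P(f) H(Y | f) - P(not f) H(Y | not f)
    n = len(labels)
    cats = sorted(set(labels))
    p_f = sum(feature_present) / n
    def cond_dist(mask):
        subset = [y for y, m in zip(labels, feature_present) if m == mask]
        return [subset.count(c) / len(subset) for c in cats] if subset else []
    prior = [labels.count(c) / n for c in cats]
    return (entropy(prior)
            - p_f * entropy(cond_dist(True))
            - (1 - p_f) * entropy(cond_dist(False)))

# Toy check: a feature that perfectly predicts the label has maximal gain.
labels = ["nav", "nav", "info", "info"]
print(information_gain(labels, [True, True, False, False]))   # 1.0 bit
print(information_gain(labels, [True, False, True, False]))   # 0.0 bits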
5.2 Linear SVM Feature Ranking
A linear SVM (7) produces a hyperplane as well as a normal vector w. The normal vector w serves as the slope of the hyperplane classifier and measures the relative importance that each feature contributes to the classifier. An extreme case is that when there is only one feature correlated with the sample labels, the optimal classifier hyperplane must be perpendicular to this feature axis.
The L-2 norm of w, in the objective, denotes the inverse margin. It can also be viewed as a Gaussian prior on the random variable w. Sparse results may be achieved by assuming a Laplace prior and using the L-1 norm [2].
Unlike the previous information gain method, the linear SVM normal vector w is not determined by the whole body of training samples. Instead, it is determined by an optimally determined subset, the support vectors, which are the samples most critical to classify. Another difference is obvious: the normal vector w is solved jointly over all features instead of one by one independently.
Our results show that linear SVM is able to provide reasonably good results in feature ranking for our navigational query identification problem even when the corresponding classifier is weak.
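A sketch of this ranking (scikit-learn's LinearSVC is assumed here purely as a stand-in; the synthetic data is constructed so that one feature dominates): features are ordered by the magnitude of the corresponding component of the normal vector w.

import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(2)
X = rng.normal(size=(300, 8))
y = (2.0 * X[:, 3] + 0.1 * X[:, 5] > 0).astype(int)   # feature 3 dominates

svm = LinearSVC(C=1.0, max_iter=10000).fit(X, y)
w = svm.coef_.ravel()                                  # normal vector w
ranking = np.argsort(-np.abs(w))                       # rank features by |w_j|
print("features ranked by |w|:", ranking[:3])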
5.3 Stochastic Gradient Boosting Tree
Boosting methods construct weak classifiers using subsets of features and combine them by considering their prediction errors. It is a natural feature ranking procedure: each feature is ranked by its related classification errors.
Tree based boosting methods approximate the relative influence of a feature x_j as
J_j^2 = \sum_{splits on x_j} I_k^2
where I_k^2 is the empirical improvement from the k-th split on x_j at that point.
Unlike the information gain model that considers one feature at a time, or the SVM method that considers all the features at one time, the boosting tree model considers a set of features at a time and combines them according to their empirical errors.
Let R(X) be a feature ranking function based on data set X. Information gain feature ranking depends on the whole training set: R_Info(X) = R_Info(X_tr). Linear SVM ranks features based on an optimally determined subset of the data, that is, R_SVM(X) = R_SVM(X_SV), where X_SV is the set of support vectors. The stochastic gradient boosting tree (SGBT) uses multiple randomly sampled data sets to induce trees and ranks features by their linear combination. Its ranking function can be written as R_SGBT(X) = \sum_{t=1}^{T} \beta_t R^t_SGBT(X_t), where X_t is the training set randomly sampled at iteration t.
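As an illustration of the boosting-tree ranking function (scikit-learn's stochastic gradient boosting is assumed as a stand-in for the authors' implementation; data and hyperparameters are synthetic), features can be ranked by their accumulated split improvements, exposed as feature importances.

import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(3)
X = rng.normal(size=(400, 6))
y = (X[:, 1] + X[:, 4] > 0).astype(int)

# subsample < 1.0 gives the stochastic variant (row sampling without replacement).
gbt = GradientBoostingClassifier(n_estimators=100, subsample=0.5,
                                 random_state=0).fit(X, y)
ranking = np.argsort(-gbt.feature_importances_)
print("features ranked by relative influence:", ranking)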
EXPERIMENTS
A total of 2102 queries were uniformly sampled from a query log over a four month period. The queries were sent to four major search engines, including Yahoo, Google, MSN, and Ask. The top 5 URLs returned by each search engine were recorded and sent to trained editors for labeling (the number 5 is just an arbitrary number we found good enough to measure the quality of retrieval). If there exists one and only one perfect URL among all returned URLs for a query, the query is labeled as a navigational query. Otherwise, it is labeled as a non-navigational query.
Out of the 2102 queries, 384 are labeled as navigational. Since the queries are uniformly sampled from a query log, we estimate that about 18% of queries are navigational. The data set was divided into five folds for cross-validation. All results presented in this section are average test results over five-fold cross-validation.
6.2 Evaluation
Classification performance is evaluated using three metrics: precision, recall, and F1 score. In each test, let n_{++} denote the number of positive samples that are correctly classified (true positives); n_{-+} denote the number of negative samples that are classified as positive (false positives); n_{+-} denote the number of positive samples that are classified as negative (false negatives); and n_{--} denote the number of negative samples that are correctly classified (true negatives). Recall is the ratio of the number of true positives to the total number of positive samples in the testing set, namely
recall = n_{++} / (n_{++} + n_{+-}).
Precision is the ratio of the number of true positive samples to the number of samples that are classified as positive, namely
precision = n_{++} / (n_{++} + n_{-+}).
F1 is a single score that combines precision and recall, defined as follows:
F1 = 2 * precision * recall / (precision + recall).
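A direct transcription of these metric definitions (the confusion counts in the example are hypothetical):

def precision_recall_f1(n_pp, n_mp, n_pm):
    # n_pp: true positives, n_mp: false positives, n_pm: false negatives.
    recall = n_pp / (n_pp + n_pm)
    precision = n_pp / (n_pp + n_mp)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# Hypothetical fold: 58 true positives, 11 false positives, 19 false negatives.
print(precision_recall_f1(58, 11, 19))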
6.3 Results
6.3.1 Feature Selection Results
Table 1 shows the distributions of the top 50 features selected by different methods. All methods agree that click features are the most important. In particular, linear SVM and boosting tree select more click features than information gain. On the other hand, information gain selects many features from anchor text and other metrics such as spam scores.
Table 1: Distributions of the Selected Top 50 Features According to Feature Categories

Feature Set     Info. Gain   Linear SVM   Boosting
Click           52%          84%          74%
URL             4%           2%           6%
Anchor Text     18%          2%           12%
Other metrics   26%          12%          8%
Table 2 shows the distribution of the selected features according to feature integration operators. It shows which operators, applied to result-set query-URL pairwise features, are most useful. We group the 15 operators into 5 types: vector, normalized ratios (r_k, k = 2, 5, 10, 20), min/max, entropy/standard deviation, and median/mean. The vector group includes all query-URL pair features in the top 5 positions; normalized ratios are defined in (1). As we can see from the table, all feature integration operators are useful.
Table 2: Distributions of the Selected Top 50 Features According to Integration Operators

Operators           Info. Gain   Linear SVM   Boosting
vector              40%          22%          28%
normalized ratios   8%           38%          22%
min/max             6%           20%          16%
entropy/std         20%          16%          18%
mean/median         26%          4%           16%
The number of selected features directly influences classification performance. Figure 3 shows the relationship between the boosting tree classification performance and the number of selected features. As we can see, performance increases as the selected feature set becomes cleaner. However, if the number of selected features is too small, performance decreases. Selecting 50 features works best in our experiments.
6.3.2 Classification Results
We first apply four different classification methods: naive Bayes, the maximum entropy method, support vector machine, and the stochastic gradient boosting tree model over all available features. The results are reported in Table 3. As we can see, the stochastic gradient boosting tree has the best performance, with an F1 score of 0.78.
We then apply those methods to machine selected features. We test four different feature sets, each with 50 features, selected by information gain, linear SVM, and boosting tree. The combined set consists of the 30 top features selected by linear SVM and the 29 top features selected by boosting tree. Please note that the total number of features is still 50, since linear SVM and boosting tree selected 9 common features in their top 30 feature sets.
Figure 3: Classification performance (F1 score of the boosting tree classifier) against the number of features selected by the boosting tree: 25, 50, 100, 200, 400, 800, and 2955 (all features).
Table 3: Results of Various Classification Methods over All Features

Method                Recall   Precision   F1
Naive Bayes           0.242    0.706       0.360
SVM (Linear Kernel)   0.189    1.000       0.318
Maximum Entropy       0.743    0.682       0.711
SVM (RBF Kernel)      0.589    0.485       0.528
Boosting Trees        0.724    0.845       0.780
Table 4 presents the results of the coupled feature selection and classification methods. It is obvious that the performance of each method is improved by applying it to machine selected clean features, except for the naive Bayes classifier. Surprisingly, the features selected by linear SVM are the best set of features. The results show that even if the underlying problem is not linearly separable, the linear coefficients of the large margin linear classifier still convey important feature information. When the stochastic gradient boosting tree is applied over this set of features, we get the best performance, with a 0.881 F1 score, among all cross-method evaluations. Without feature ablation, SGBT is only able to achieve a 0.738 F1 score. That is, feature selection has an error reduction rate of 40%. Without introducing linear SVM in feature ablation, if SGBT works on the feature set selected by its own variable importance ranking, it achieves a 0.848 F1 score. That is to say, a cross-method coupling of feature selection and classification causes a 33% error reduction.
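A sketch of the best-performing coupling reported above, with scikit-learn assumed as a stand-in implementation and synthetic data (the real experiments use the query features and labels described earlier): rank features with a linear SVM, keep the top 50, and train a stochastic gradient boosting tree on the reduced set.

import numpy as np
from sklearn.svm import LinearSVC
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(4)
X = rng.normal(size=(2102, 300))                 # synthetic stand-in feature matrix
y = (X[:, :5].sum(axis=1) > 0).astype(int)       # synthetic labels

w = LinearSVC(max_iter=10000).fit(X, y).coef_.ravel()
top50 = np.argsort(-np.abs(w))[:50]              # linear-SVM feature ranking

clf = GradientBoostingClassifier(n_estimators=200, subsample=0.5,
                                 random_state=0).fit(X[:, top50], y)
print("training accuracy on the selected features:", clf.score(X[:, top50], y))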
DISCUSSION
An interesting result from Table 1 is the features selected
for navigational query identification.
Those features are
mostly induced from user click information. This is intuitively understandable because if a query is navigational, the navigational URL is the most clicked one. On the other hand, it might be risky to rely completely on click information. The reasons are that 1) user click features may be easier to spam, and 2) clicks are often biased by various presentation factors, such as the quality of automatic abstracts.
From Table 4, we observe that linear SVM and boosting
tree have better feature selection power than information
gain. The reason that information gain performs worse than linear SVM and boosting tree is probably that information gain considers each feature independently, while linear SVM considers all features jointly and boosting tree composes feature ranks by summing over all used features.
The results show that URL, anchor text and other metrics
are helpful only when they are considered jointly with click
features.
The most important result is that the stochastic gradient
boosting tree coupled with linear SVM feature selection
method achieves much better results than any other combination
. In this application, the data has very high dimension
considering the small sample size. The boosting tree method
needs to partition an ultra-high dimensional feature space
for feature selection. However, the stochastic step does not
have enough data to sample from [6]. Therefore, the boosted
result might be biased by earlier sampling and trapped in
a local optimum. Support vector machine, however, is able
to find an optimally determined subset of training samples,
namely support vectors, and ranks features based on those
vectors. Therefore, the SVM feature selection step makes up for the disadvantage of the stochastic boosting tree in its
initial sampling and learning stages that may lead to a local
optimum.
As expected, the naive Bayes classifier hardly works for the navigational query identification problem. It is also the only
classifier that performs worse with feature selection. Naive
Bayes classifiers work well when the selected features are
mostly orthogonal. However, in this problem, all features
are highly correlated.
On the other hand, classification
methods such as boosting tree, maximum entropy model
and SVM do not require orthogonal features.
RELATED WORK
Our work is closely related to query classification, a task of
assigning a query to one or more categories. However, general
query classification and navigational query identification
are different in the problems themselves. Query classification
focuses on content classification, thus the classes are
mainly topic based, such as shopping and products. While
in navigational query identification, the two classes are intent
based.
With regard to classification approaches, our work is related to Gravano, et al. [7], where the authors applied various classification methods, including linear and nonlinear SVM, decision tree, and log-linear regression, to classify query locality based on result set features in 2003. Their work, however, lacked carefully designed feature engineering and therefore only achieved an F1 score of 0.52 with a linear SVM.
Beitzel, et al. [1] realized the limitation of a single classification method in their query classification problem and proposed a semi-supervised learning method. Their idea is to compose the final classifier by combining the classification results of multiple classification methods. Shen, et al. [15] also trained a linear combination of two classifiers. In contrast, instead of combining two classifiers for prediction, we couple feature selection and classification.
In the feature extraction aspect, our work is related to Kang and Kim (2003) [11], where the authors extracted heterogeneous features to classify user queries into three categories: the topic relevance task, the homepage finding task, and the service finding task. They combined those features, for example URL features and content features, by several empirical linear functions. Each function was applied to a different binary classification problem. Their idea was to emphasize different features for different classification purposes. However, the important features were not selected automatically, and therefore their work is not applicable to applications with thousands of features.

Table 4: F1 Scores of Systems with Coupled Feature Selection and Classification Methods

Methods               Info. Gain   Linear SVM   Boosting   Combined Set
SVM (Linear Kernel)   0.124        0.733        0.712      0.738
Naive Bayes           0.226        0.182        0.088      0.154
Maximum Entropy       0.427        0.777        0.828      0.784
SVM (RBF Kernel)      0.467        0.753        0.728      0.736
Boosting Tree         0.627        0.881        0.848      0.834
CONCLUSION
We have made three contributions in the paper. First,
we evaluate the effectiveness of four machine learning approaches
in the context of navigational query identification.
We find that boosting trees are the most effective. Second
, we evaluate three feature selection methods and propose
coupling feature selection with classification approaches.
Third, we propose a multi-level feature extraction system to
exploit more information for navigational query identification
.
The underlying classification problem has been satisfactorily solved with an 88.1% F1 score. In addition to the successful
classification, we successfully identified key features for recognizing
navigational queries: the user click features. Other
features, such as URL, anchor text, etc. are also important
if coupled with user click features.
In future research, it is of interest to conduct cross-method co-training for the query classification problem to utilize
unlabeled data, as there is enough evidence that different
training methods may benefit each other.
REFERENCES
[1] S. Beitzel, E. Jensen, D. Lewis, A. Chowdhury,
A. Kolcz, and O. Frieder. Improving Automatic Query
Classification via Semi-supervised Learning. In The
Fifth IEEE International Conference on Data Mining,
pages 27-30, New Orleans, Louisiana, November 2005.
[2] C. Bhattacharyya, L. R. Grate, M. I. Jordan, L. El
Ghaoui, and I. S. Mian. Robust Sparse Hyperplane
Classifiers: Application to Uncertain Molecular
Profiling Data. Journal of Computational Biology,
11(6):1073-1089, 2004.
[3] A. Broder. A Taxonomy of Web Search. In ACM
SIGIR Forum, pages 3-10, 2002.
[4] S. della Pietra, V. della Pietra, and J. Lafferty.
Inducing Features of Random Fields. IEEE
Transactions on Pattern Analysis and Machine
Intelligence, 19(4), 1995.
[5] R. O. Duda, P. E. Hart, and D. G. Stork. Pattern
Classification. John Wiley, New York, NY, 2nd
edition, 2000.
[6] J. H. Friedman. Stochastic Gradient Boosting.
Computational Statistics and Data Analysis,
38(4):367-378, 2002.
[7] L. Gravano, V. Hatzivassiloglou, and R. Lichtenstein.
Categorizing Web Queries According to Geographical
Locality. In ACM 12th Conference on Information
and Knowledge Management (CIKM), pages 27-30,
New Orleans, Louisiana, November 2003.
[8] T. Hastie, R. Tibshirani, and J. Friedman. The
Elements of Statistical Learning: Data Mining,
Inference, and Prediction. Springer Verlag, New
York, 2001.
[9] E. T. Jaynes. Papers on Probability, Statistics, and
Statistical Physics. D. Reidel, Dordrecht, Holland and
Boston and Hingham, MA, 1983.
[10] T. Joachims. Text Categorization with Support Vector
Machines: Learning with Many Relevant Features. In
Proceedings of the 10th European Conference on
Machine Learning (ECML), pages 137-142, Chemnitz,
Germany, 1998.
[11] I.-H. Kang and G. Kim. Query Type Classification for
Web Document Retrieval. In Proceedings of the 26th
annual international ACM SIGIR conference on
Research and development in information retrieval, pages 64-71, Toronto, Canada, July 2003.
[12] U. Lee, Z. Liu, and J. Cho. Automatic Identification
of User Goals in Web Search. In Proceedings of the
14th International World Wide Web Conference
(WWW), Chiba, Japan, 2005.
[13] R. Malouf. A Comparison of Algorithms for Maximum
Entropy Parameter Estimation. In Proceedings of the
Sixth Conference on Natural Language Learning
(CoNLL), Taipei, China, 2002.
[14] D. E. Rose and D. Levinson. Understanding User
Goals in Web Search. In Proceedings of The 13th
International World Wide Web Conference (WWW),
2004.
[15] D. Shen, R. Pan, J.-T. Sun, J. J. Pan, K. Wu, J. Yin,
and Q. Yang. Q2C at UST: Our Winning Solution to
Query Classification in KDDCUP 2005. SIGKDD
Explorations, 7(2):100-110, 2005.
[16] L. Sherman and J. Deighton. Banner advertising:
Measuring effectiveness and optimizing placement.
Journal of Interactive Marketing, 15(2):60-64, 2001.
[17] V. Vapnik. The Nature of Statistical Learning Theory.
Springer Verlag, New York, 1995.
[18] Y. Yang and J. Pedersen. A Comparative Study on Feature Selection in Text Categorization. In Proceedings of the 20th annual international ACM SIGIR conference on Research and development in information retrieval, Philadelphia, PA, USA, 1997.
[19] S.C. Zhu. Statistical modeling and conceptualization
of visual patterns. IEEE Transactions on Pattern
Analysis and Machine Intelligence, 25(6):619-712,
2003.
| Stochastic Gradient Boosting Tree;Linear SVM Feature Ranking;Gradient Boosting Tree;Information Gain;Naive Bayes Classifier;Support Vector Machine;Experiments Results;Machine Learning;Maximum Entropy Classifier;Navigational Query Classification;Navigational and Informational query;Multiple Level feature system |
61 | Coverage Directed Test Generation for Functional Verification using Bayesian Networks | Functional verification is widely acknowledged as the bottleneck in the hardware design cycle. This paper addresses one of the main challenges of simulation based verification (or dynamic verification ), by providing a new approach for Coverage Directed Test Generation (CDG). This approach is based on Bayesian networks and computer learning techniques. It provides an efficient way for closing a feedback loop from the coverage domain back to a generator that produces new stimuli to the tested design. In this paper, we show how to apply Bayesian networks to the CDG problem. Applying Bayesian networks to the CDG framework has been tested in several experiments, exhibiting encouraging results and indicating that the suggested approach can be used to achieve CDG goals. | INTRODUCTION
Functional verification is widely acknowledged as the bottleneck
in the hardware design cycle [1]. To date, up to 70% of the design
development time and resources are spent on functional verification
. The increasing complexity of hardware designs raises the
need for the development of new techniques and methodologies
that can provide the verification team with the means to achieve its
goals quickly and with limited resources.
The current practice for functional verification of complex designs
starts with a definition of a test plan, comprised of a large
set of events that the verification team would like to observe during
the verification process. The test plan is usually implemented
using random test generators that produce a large number of test-cases
, and coverage tools that detect the occurrence of events in
the test plan, and provide information related to the progress of the
test plan. Analysis of the coverage reports allows the verification
team to modify the directives for the test generators and to better
hit areas or specific tasks in the design that are not covered well [5].
The analysis of coverage reports, and their translation to a set
of test generator directives to guide and enhance the implementation
of the test plan, result in major manual bottlenecks in the otherwise
highly automated verification process. Considerable effort
is invested in finding ways to close the loop of coverage analysis
and test generation. Coverage directed test generation (CDG) is
a technique to automate the feedback from coverage analysis to
test generation. The main goals of CDG are to improve the coverage
progress rate, to help reaching uncovered tasks, and to provide
many different ways to reach a given coverage task. Achieving
these goals should increase the efficiency and quality of the verification
process and reduce the time and effort needed to implement
a test plan.
In this paper, we propose a new approach for coverage directed
test generation. Our approach is to cast CDG in a statistical inference
framework, and apply computer learning techniques to achieve
the CDG goals. Specifically, our approach is based on modeling the
relationship between the coverage information and the directives to
the test generator using Bayesian networks [9]. A Bayesian network
is a directed graph whose nodes are random variables and
whose edges represent direct dependency between their sink and
source nodes. Each node in the Bayesian network is associated with
a set of parameters specifying its conditional probability given the
state of its parents.
Simply stated, the CDG process is performed in two main steps.
In the first step, a training set is used to learn the parameters of
a Bayesian network that models the relationship between the coverage
information and the test directives. In the second step, the
Bayesian network is used to provide the most probable directives
that would lead to a given coverage task (or set of tasks).
Bayesian networks are well suited to the kind of modeling required
for CDG, because they offer a natural and compact representation
of the rather complex relationship between the CDG
ingredients, together with the ability to encode essential domain
knowledge. Moreover, adaptive tuning of the Bayesian network
parameters provides a mean to focus on the rare coverage cases.
We describe two experiments in which we tested the the ability
of Bayesian networks to handle aspects of the CDG problem
in various settings. The goals of the experiments were to increase
the hitting rates in hard-to-reach coverage cases; design directives
aimed at reaching uncovered tasks; and provide many different directives
for a given coverage task. We used two settings for our
experiments. In the first setting, we used a Bayesian network to
generate instruction streams to an abstract model of the pipeline of
286
18.2
Random
Test
Generator
Test Plan
Fail
Pass
Information
Directives
Test
Coverage
Coverage
Analysis Tool
Reports
Coverage
Simulator
DUT
Figure 1: Verification process with automatic test generation
an advanced super-scalar PowerPC processor. In the second setting
, we used a Bayesian network to generate directives to an existing
test generator of a storage control unit of a mainframe with a
goal to cover all possible transactions from the CPUs connected to
this unit. In both experiments we reached our goals. The encouraging
results suggests that Bayesian networks may well be used to
achieve the primary goals of CDG.
The remainder of this paper is as follows. In Section 2, we briefly
present the CDG framework and review related work. In Section 3,
we describe Bayesian networks and their application to CDG. Sections
4 and 5 provide detailed descriptions of the experiments. We
conclude with a few remarks and suggestions for future study.
COVERAGE DIRECTED TEST GENERATION (CDG)
In current industry practice, verification by simulation, or dynamic
verification, is the leading technique for functional verification
. Coverage is used to ensure that the verification of the design is
thorough, and the definition of coverage events or testing requirements
is a major part in the definition of the verification plan of the
design. Often, a family of coverage events that share common properties
are grouped together to form a coverage model [7]. Members
of the coverage model are called coverage tasks and are considered
part of the test plan. Cross-product coverage models [7] are of special
interest. These models are defined by a basic event and a set of
parameters or attributes, where the list of coverage tasks comprises
all possible combinations of values for the attributes.
Figure 1 illustrates the verification process with an automatic
random test generation. A test plan is translated by the verification
team to a set of directives for the random test generator. Based
on these directives and embedded domain knowledge, the test generator
produces many test-cases. The design under test (DUT) is
then simulated using the generated test-cases, and its behavior is
monitored to make sure that it meets its specification. In addition,
coverage tools are used to detect the occurrence of coverage tasks
during simulation. Analysis of the reports provided by the coverage
tools allows the verification team to modify the directives to
the test generator to overcome weaknesses in the implementation
of the test plan. This process is repeated until the exit criteria in the
test plan are met.
The use of automatic test generators can dramatically reduce the
amount of manual labor required to implement the test plan. Even
so, the manual work needed for analyzing the coverage reports and
translating them to directives for the test generator, can constitute a
bottleneck in the verification process. Therefore, considerable effort
is spent on finding ways to automate this procedure, and close
the loop of coverage analysis and test generation. This automated
feedback from coverage analysis to test generation, known as Coverage
Directed test Generation (CDG), can reduce the manual work
in the verification process and increase its efficiency.
In general, the goal of CDG is to automatically provide directives
that are based on coverage analysis to the test generator. This can
be further divided into two sub-goals: First, to provide directives to
the test generator that help in reaching hard cases, namely uncovered
or rarely covered tasks. Achieving this sub-goal can shorten
the time needed to fulfill the test plan and reduce the number of
manually written directives. Second, to provide directives that allow
easier reach for any coverage task, using a different set of directives
when possible. Achieving this sub-goal makes the verification
process more robust, because it increases the number of times a task
has been covered during verification. Moreover, if a coverage task
is reached via different directions, the chances to discover hidden
bugs related to this task are increased [8].
In the past, two general approaches for CDG have been proposed: feedback-based CDG and CDG by construction. Feedback-based
CDG relies on feedback from the coverage analysis to automatically
modify the directives to the test generator. For example,
in [2], a genetic algorithm is used to select and modify test-cases to
increase coverage. In [13], coverage analysis data is used to modify
the parameters of a Markov Chain that represents the DUT. The
Markov Chain is then used to generate test-cases for the design.
In [11], the coverage analysis results trigger a set of generation
rules that modify the testing directives. In contrast, in CDG by
construction, an external model of the DUT is used to generate test
directives designed to accurately hit the coverage tasks. For example
, in [14] an FSM model of pipelines is used to generate tests that
cover instruction interdependencies in the pipes.
COVERAGE DIRECTED TEST GENERATION USING BAYESIAN NETWORKS
The random nature of automatic test-case generators imposes a
considerable amount of uncertainty in the relationship between test
directives and coverage tasks, e.g., the same set of directives can
be used to generate many different test-cases, each leading to different
coverage tasks. This inherent uncertainty suggests to cast
the CDG setup in a statistical inference framework. To this end,
Bayesian networks offer an efficient modeling scheme by providing
a compact representation of the complex (possibly stochastic)
relationships among the CDG ingredients, together with the possibility
to encode essential domain knowledge. It should be noted
that we do not suggest modeling the behavior of the design, typically a large and complicated (deterministic) finite state machine.
Rather, we model the CDG process itself, namely the trial-and-error
procedure governed by the verification team, which controls
the test generation at one end and traces the progress of covering
the test plan at the other.
3.1 A Brief Introduction to Bayesian Networks
A Bayesian network is a graphical representation of the joint
probability distribution for a set of variables. This representation
was originally designed to encode the uncertain knowledge of an
expert and can be dated back to the geneticist Sewall Wright [15].
Their initial development in the late 1970s was motivated by the
need to model the top-down (semantic) and bottom-up (perceptual)
combinations of evidence (observations/findings). Their capability
for bidirectional inferences, combined with a rigorous probabilistic
foundation, led to the rapid emergence of Bayesian networks as the
method of choice for uncertain reasoning in AI and expert systems,
replacing ad hoc rule-based schemes. Bayesian networks also play
a crucial role in diagnosis and decision support systems [10].
Obviously, there's a computational problem in dealing with many
sources of uncertainty, i.e. the ability to perform probabilistic manipulations in high dimensions (the "curse of dimensionality"). The
main breakthrough emerged in the late 1980s and can be attributed
to Judea Pearl [12], who introduced 'modularity', thus enabling
large and complex models and their associated calculations, to be
split up into small manageable pieces. The best way to do this is
via the imposition of meaningfully simplified conditional independence
assumptions. These, in turn, can be expressed by means of a
powerful and appealing graphical representation.
A Bayesian network consists of two components. The first is a
directed acyclic graph in which each vertex corresponds to a random
variable. This graph represents a set of conditional independence
properties of the represented distribution: each variable is
probabilistically independent of its non-descendants in the graph
given the state of its parents. The graph captures the qualitative
structure of the probability distribution, and is exploited for efficient
inference and decision making. The second component is a
collection of local interaction models that describe the conditional
probability p(X_i | Pa_i) of each variable X_i given its parents Pa_i. Together, these two components represent a unique joint probability distribution over the complete set of variables X [12]. The joint probability distribution is given by the following equation:
p(X) = \prod_{i=1}^{n} p(X_i | Pa_i)    (1)
It can be shown that this equation actually implies the conditional
independence semantics of the graphical structure given earlier.
Eq. 1 shows that the joint distribution specified by a Bayesian network
has a factored representation as the product of individual local
interaction models. Thus, while Bayesian networks can represent
arbitrary probability distributions, they provide a computational advantage
for those distributions that can be represented with a simple
structure.
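As a small numerical illustration of Eq. (1) (the three-node network, its variable names, and all probability values below are invented for the example), the joint probability factors into the product of the local conditional models:

# Toy network Directive -> Mode -> Cmd, matching the factored form of Eq. (1).
p_directive = {"read_heavy": 0.5, "write_heavy": 0.5}
p_mode_given_directive = {
    ("fetch", "read_heavy"): 0.8, ("store", "read_heavy"): 0.2,
    ("fetch", "write_heavy"): 0.3, ("store", "write_heavy"): 0.7,
}
p_cmd_given_mode = {
    ("read", "fetch"): 0.9, ("write", "fetch"): 0.1,
    ("read", "store"): 0.2, ("write", "store"): 0.8,
}

def joint(directive, mode, cmd):
    # p(X) = product over i of p(X_i | Pa_i)
    return (p_directive[directive]
            * p_mode_given_directive[(mode, directive)]
            * p_cmd_given_mode[(cmd, mode)])

print(joint("read_heavy", "fetch", "read"))   # 0.5 * 0.8 * 0.9 = 0.36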
The characterization given by Eq. 1 is a purely formal characterization
in terms of probabilities and conditional independence.
An informal connection can be made between this characterization
and the intuitive notion of direct causal influence. It has been noted
that if the edges in the network structure correspond to causal relationships
, where a variable's parents represent the direct causal
influences on that variable, then resulting networks are often very
concise and accurate descriptions of the domain. Thus it appears
that in many practical situations, a Bayesian network provides a
natural way to encode causal information. Nonetheless, it is often
difficult and time consuming to construct Bayesian networks from
expert knowledge alone, particularly because of the need to provide
numerical parameters. This observation, together with the fact that
data is becoming increasingly available and cheaper to acquire, has
led to a growing interest in using data to learn both the structure
and probabilities of a Bayesian network (cf. [3, 9, 12]).
Typical types of queries that can be efficiently answered by the
Bayesian network model are derived from applying the Bayes rule
to yield posterior probabilities for the values of a node (or set of
nodes), X , given some evidence, E, i.e. assignment of specific values
to other nodes:
p(X | E) = p(E | X) p(X) / p(E)
Thus, a statistical inference can be made in the form of either selecting the Maximal A Posteriori (MAP) probability, max p(X | E), or obtaining the Most Probable Explanation (MPE), arg max p(X | E).
The sophisticated yet efficient methods that have been developed
for using Bayesian networks provide the means for predictive and
diagnostic inference. (This is in contrast to standard regression and classification methods, e.g., feed-forward neural networks and decision trees, which encode only the probability distribution of a target variable given several input variables.) A diagnostic query is one in which the evidence nodes E represent a cause, while the queried nodes, X, represent an effect. The reversed direction, i.e. evidence on the effect nodes which serves to determine the possible cause, is called abductive.
Figure 2: Bayesian Network of CDG. Input nodes correspond to test generator directives such as cp_cmd_type (weighted values: {read, 20}, {write, 20}, {RMW, 5}, ...) and cp_core_enable ({Core 0, 10}, {Core 1, 10}, {Both, 100}); hidden nodes (internal state, operation mode) link them to the coverage nodes Cmd, Resp, and Core.
These methods also allow Bayesian networks to reason efficiently
with missing values, by computing the marginal probability of the
query given the observed values.
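A minimal sketch of these query types, computed by brute-force enumeration over a toy factorization (all variable names and probabilities are illustrative; practical tools rely on exact or approximate inference engines rather than enumeration):

# Toy CPTs (illustrative values): Directive -> Cmd -> Resp.
p_dir = {"read_heavy": 0.5, "write_heavy": 0.5}
p_cmd = {("read", "read_heavy"): 0.9, ("write", "read_heavy"): 0.1,
         ("read", "write_heavy"): 0.2, ("write", "write_heavy"): 0.8}
p_resp = {("ack", "read"): 0.7, ("retry", "read"): 0.3,
          ("ack", "write"): 0.4, ("retry", "write"): 0.6}

def joint(d, c, r):
    return p_dir[d] * p_cmd[(c, d)] * p_resp[(r, c)]

def posterior(evidence_resp):
    # Abductive query via the Bayes rule: p(Directive | Resp = evidence).
    scores = {d: sum(joint(d, c, evidence_resp) for c in ("read", "write"))
              for d in p_dir}
    z = sum(scores.values())                  # p(E)
    return {d: s / z for d, s in scores.items()}

post = posterior("ack")
print(post)                                   # posterior p(X | E)
print(max(post, key=post.get))                # MAP assignment, arg max p(X | E)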
There are two important extensions of Bayesian networks: Dynamic
Bayesian networks and influence diagrams. The first extension
(see [6]) enables the incorporation of time, thus modeling temporal
dependencies in a stochastic process. The second extension
(see [3]) enriches the Bayesian network paradigm with decision
making and utility considerations which create a powerful mechanism
for dealing with decisions under uncertainty constraints.
3.2 A Bayesian Network for CDG
The CDG process begins with the construction of a Bayesian network
model that describes the relations between the test directives
and the coverage space. Figure 2 illustrates a simple, yet typical,
Bayesian network, which models a small excerpt of the CDG setup.
The network describes the relationship between the directives that
influence the type of command that is generated (cp cmd type)
and the active cores inside a CPU (cp core enable), and the
coverage attributes of a generated command (cmd), its response
(resp), and the core that generated it (core). The network is
comprised of input nodes (the white circles on the left) that relate
to test directives that appear to their left and coverage nodes
(the white squares on the right) that define the coverage space. In
addition to these nodes, for which we have physical observations,
the network may also contain hidden nodes, namely variables for
which we don't have any physical evidence (observations) for their
interactions. These variables are represented as shaded ovals in
the figure. Hidden nodes are added to the Bayesian network structure
primarily to reflect expert domain knowledge regarding hidden
causes and functionalities which impose some structure on the interaction
between the interface (observed) nodes. (Introducing hidden nodes to the network structure has the secondary impact of reducing the computational complexity by dimensionality reduction, and serves as a means for capturing non-trivial, higher-order correlations between observed events.)
The Bayesian network in Fig. 2 describes the causal relationships from the test generation directives (causes) to the coverage model space (effects). For example, it encodes the expert knowledge that indicates that there is an internal mode of operation for which we do not have any direct physical observation, yet it is determined by the combined values of the test generation attributes. On the other hand, the (hidden) mode of operation directly influences the choice of the resulting command and core, which are attributes of
the coverage model. Note the absence of a direct link between the
requested core (via the directive cp core enable) and the observed
one (at Core), which captures our understanding that there
is no direct influence between the directives and the coverage attribute
. Another assumption encoded in the CDG Bayesian network
structure at Fig. 2, is that the only information that governs
the response for the command is the generated command itself, and
this is encoded via the direct link from Cmd to Resp.
In a nutshell, the design of the Bayesian network starts with identifying
the ingredients (attributes) that will constitute the directives
to the test generator on one hand, and to the coverage model on the
other. These attributes are dictated by the interface to the simulation
environment, to the coverage analysis tool, and by the specification
of the coverage model in the test plan. These ingredients are used
as the first guess about the nodes in the graph structure. Connecting
these nodes with edges is our technique for expert knowledge
encoding, as demonstrated in Fig. 2. Obviously, using a fully connected
graph, i.e. with an edge between every pair of nodes, represents
absolutely no knowledge about the possible dependencies
and functionalities within the model. Hence, as the graph structure
becomes sparser, it represents deeper domain knowledge. We discovered
that a good practice in specifying a dependency graph is
to remove edges for which we have strong belief that the detached
nodes are not directly influencing one another. At this point, hidden
nodes can be added to the structure, either to represent hidden
causes, which contribute to a better description of the functionalities
of the model, or to take on a role from the complexity stand
point, by breaking the large cliques in the graph (see [4]).
After the Bayesian network structure is specified, it is trained
using a sample of directives and the respective coverage tasks. To
this end, we activate the simulation environment and construct a
training set out of the directives used and the resulting coverage
tasks. We then use one of the many known learning algorithms (cf.
[3]) to estimate the Bayesian network's parameters (i.e. the set of
conditional probability distributions). This completes the design
and training of the Bayesian network model.
In the evaluation phase, the trained Bayesian network can be
used to determine directives for a desired coverage task, via posterior
probabilities, MAP and MPE queries, which use the coverage
task attributes as evidence. For example, in a model for
which the directives are weights of possible outcomes for internal
draws in the test generator (e.g. the directive cp cmd type
in Fig. 2 specifies a preference to read commands, write commands
, etc.), we can specify a desired coverage task assignment
(evidence) for the coverage nodes (e.g. Resp = ACK) and calculate
the posterior probability distribution for directive nodes (e.g.
p
(Cmd Type|Resp = ACK)), which directly translates to the set of
weights to be written in the test generator's parameter file. Note, as
the example demonstrates, we can specify partial evidence and/or
determine a partial set of directives.
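A sketch of this final translation step (the directive name cp_cmd_type follows the example of Fig. 2, but the posterior values and the scaling rule are hypothetical): the posterior over a directive node is scaled into the integer weights expected by the test generator's parameter file.

def posterior_to_weights(posterior, scale=100):
    # Turn p(directive value | coverage evidence) into parameter-file weights.
    return {value: max(1, round(prob * scale)) for value, prob in posterior.items()}

# Hypothetical posterior p(cp_cmd_type | Resp = ACK) read off the trained network.
posterior = {"read": 0.62, "write": 0.30, "RMW": 0.08}
weights = posterior_to_weights(posterior)
lines = [f"  {{{value}, {weight}}}," for value, weight in weights.items()]
print("cp_cmd_type =\n{// val weight\n" + "\n".join(lines) + "\n};")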
INSTRUCTION STREAM GENERATION USING A DYNAMIC NETWORK
To evaluate the feasibility of the suggested modeling approach
to the CDG problem, we designed a controlled study that acts in
a simple domain (small state space), where we have a deep understanding
of the DUT's logic, direct control on the input, and a
`ground truth' reference to evaluate performance.
We conducted the experiment on a model of the pipeline of NorthStar
, an advanced PowerPC processor. The pipeline of NorthStar
contains four execution units and a dispatch unit that dispatches instructions
to the execution units. Figure 3 illustrates the general structure of the NorthStar pipeline.
Figure 3: The structure of the NorthStar pipeline. A dispatch unit feeds a branch pipe (B), a simple arithmetic pipe (S) with stages S1-S3, a complex arithmetic pipe (C) with stages C1-C3, and a load/store pipe (L); the pipe stages correspond to data fetch, execute, and write back.
For reasons of simplicity, our
model contains only the simple arithmetic unit that executes simple
arithmetic instructions such as add, and the complex arithmetic unit
that can execute both simple and complex arithmetic instructions.
Each execution unit consists of three pipeline stages: (1) Data fetch
stage, in which the data of the instruction is fetched; (2) Execute
stage, in which the instruction is executed; (3) Write back stage,
where the result is written back to the target register. The flow of
instructions in the pipeline is governed by a simple set of rules.
For example, in-order dispatching of instructions to the execution
units, and rules for stalling because of data dependency. Note, the
complete set of rules is omitted to simplify the description.
We developed a simple abstract model of the dispatch unit and
two pipelines and used it to simulate the behavior of the pipeline.
The input to our NorthStar model is a simplified subset of the PowerPC
instruction set. Each instruction is modeled by four input
variables. The first variable indicates the type of the instruction.
There are five possible types: S - simple arithmetic; C1, C2, C3
- complex arithmetic; and NOP - instructions that are executed in
other execution units. The second and third input variables constitute
the source and target register of the instructions. For simplicity
and in order to increase the possibility of register interdependency,
we used only eight registers instead of the 32 registers available in
PowerPC. The last input variable indicates whether the instruction
uses the condition register. Due to restrictions on the legal combinations
of the input variables (e.g., NOP instruction is not using
registers), there are 449 possible instructions.
We used a coverage model that examines the state of the two
pipelines, and properties of the instructions in them. The coverage
model consists of five attributes, the type of instruction at stage 1 of
the simple and complex arithmetic pipelines (S1Type and C1Type,
resp.), flags indicating whether stage 2 of the pipelines are occupied
(S2Valid and C2Valid, resp.), and a flag indicating whether
the instruction at stage 2 of the simple arithmetic pipeline uses the
condition register (S2CR). The total number of legal coverage tasks
in the model is 54 (out of 80 possible cases).
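As an illustration of how such a cross-product coverage model can be enumerated, the sketch below assumes S1Type ranges over {S, empty} and C1Type over {S, C1, C2, C3, empty}, which yields the 80 combinations mentioned above; the legality predicate is only a hypothetical placeholder, since the real rules come from the pipeline specification.

from itertools import product

S1_TYPES = ["S", "empty"]                     # assumed: simple pipe holds only simple ops
C1_TYPES = ["S", "C1", "C2", "C3", "empty"]   # assumed: complex pipe holds both kinds
BOOLS = [False, True]

def is_legal(s1_type, c1_type, s2_valid, c2_valid, s2_cr):
    # Hypothetical placeholder: the real legality rules come from the pipeline
    # specification, e.g. an empty stage 2 cannot use the condition register.
    return not (s2_cr and not s2_valid)

space = list(product(S1_TYPES, C1_TYPES, BOOLS, BOOLS, BOOLS))
legal = [task for task in space if is_legal(*task)]
print(len(space), "possible tasks;", len(legal), "pass the placeholder rule")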
The goal of the experiment was to generate instruction streams
that cover the coverage model described above. Specifically, we
concentrated on the ability to reach the desired coverage cases with
many, yet relatively short, instruction sequences.
We modeled the temporal dependencies between the instructions
and coverage tasks and among the instructions using a two-slice
Dynamic Bayesian Network (DBN) [6]. Rather than an accurate
mapping of the specific state machine structure, the DBN encoded
the general knowledge of an expert on the modus operandi of this
type of DUT. Using an expert's domain knowledge proved to be vital
in this setup because it provided essential information needed
for the generation of instruction streams. Moreover, it enabled
the use of hidden nodes, which effectively reduced the complexity
through dimensionality reduction. The resulting DBN has 19 nodes per slice, 13 of which are observed, 15 intra (within a slice) edges, and 37 inter (between slices) edges (see Fig. 4).
Figure 4: Two-slice DBN for the NorthStar experiment. Each time slice (cycle t, cycle t+1) contains input nodes for the two dispatched instructions (type1, sr1, tg1, cr1, type2, sr2, tg2, cr2) together with the nodes im0, ir0, mv1, rv1, rcr1; intra-slice and inter-slice edges connect the nodes of cycle t to those of cycle t+1.

Table 1: NorthStar experiment results

                Rare task                 Uncovered task
                Instructions   Cycles     Instructions   Cycles
Training Set    6              7          -              -
DBN             4              5          4              5
Text Book       3              4          3              4
The training set is composed of 1000 sequences of random instructions
. The length of each sequence is 10 cycles. Note, the
model we used for the Bayesian network made it easier to measure
length in terms of cycles instead of instructions. The training
set contained 385 different instructions. During its simulation, 49
(out of 54) coverage cases were observed. The average number of
instructions per sequence in the training set was 9.7 out of the 20
possible dispatches in 10 cycles (i.e., more than half of the dispatch
slots in the sequence are empty).
After training the Bayesian network, we tried to generate instruction
sequences for all 54 coverage tasks in the coverage model.
Each sequence was generated using the DBN, by solving the Most
Probable Explanation (MPE) problem for the requested coverage
task. All 49 coverage cases of the training set plus three additional
uncovered cases were reached using instruction sequences
designed by the DBN. In addition, we generated many different instruction
sequences for each coverage task that was covered by the
Bayesian network. The average number of cycles in a generated sequence
dropped to 2.9, while the average number of instructions in
a sequence reduced to 3.7. This reflects the fact that the generated
instruction sequences cause less stall states en-route to reaching the
desired coverage cases. Table 1 illustrates the details of reaching
two difficult coverage cases--the rarest coverage task, which was
seen only once in the training set, and an uncovered task. The table
shows the number of cycles and instructions required to reach
these tasks in the training set, the instruction sequences generated
by the trained DBN, and the `text book' solution--the best possible
sequence. The table indicates that the instruction sequences
generated by the DBN are shorter, both in instructions and cycles,
than the sequences in the training set. Overall, the results indicate
that the trained DBN is able to generate many compact instruction
sequences that are not far from the best possible solution.
Figure 5: The structure of the SCE simulation environment. Eight CPUs (CP0-CP7), each with two cores (Core 0, Core 1), send commands (Cmd) to and receive responses (Resp) from the Storage Control Element (SCE), which handles them in two internal pipelines (Pipe 0, Pipe 1) and connects to the memory subsystem.
STORAGE CONTROL EXPERIMENT USING A STATIC NETWORK
The second experiment was conducted in a real-life setting. The
design under test in the experiment is the Storage Control Element
(SCE) of an IBM z-series system. Figure 5 shows the structure of
the SCE and its simulation environment. The SCE handles commands
from eight CPUs (CP0-CP7). Each CPU consists of two
cores that generate commands to the SCE independently. The SCE
handles incoming commands using two internal pipelines. When
the SCE finishes handling a command, it sends a response to the
commanding CPU.
The simulation environment for the SCE contains, in addition to
the SCE itself, behavioral models for the eight CPUs that it services
, and a behavioral model for the memory subsystem. The behavioral
models of the CPUs generate commands to the SCE based
on their internal state and a directive file provided by the user. The
directive file contains a set of parameters that affect the behavior
of the system. Some of these parameters control the entire system
while others are specific to certain components of the system,
such as a specific CPU. Figure 2 shows an example of some parameters
that are used in the simulation environment of the SCE.
Each parameter contains a set of possible values that the parameter
can receive. Each value has a weight associated with it. When
the value of a parameter is needed, it is randomly chosen from
the set of possible values according the weights of these values.
For example, when a CPU generates a new command, it first uses
the cp cmd type parameter to determine the type of command to
generate, and then a specific parameter for that command type to
determine the exact command to be used.
In the experiment, we tried to cover all the possible transactions
between the CPUs and the SCE. The coverage model contained five
attributes: The CPU (8 possible values) and the core (2 values) in
it that initiated the command, the command itself (31 values), its
response (14 values), and the pipeline in the SCE that handled it (2
values). Overall, the cross product contains 13,888 cases and the
coverage model contains 1968 legal coverage tasks.
This experiment added many new challenges over the controlled
experiment described in the previous section. First, our knowledge
about the DUT in this experiment was very limited compared to
the full understanding of the design in the first experiment. In addition
, we were less able to observe and control the input and output
nodes of the Bayesian network. For the test parameters, we could
only specify the distribution of each parameter and we could not
observe the values that were actually used, only their distribution.
Moreover, in some cases the behavioral models ignored the parameters
and generated commands based on their internal state. Thus,
the actual distribution used was not exactly the provided distribution of the parameters.
Figure 6: Coverage progress of the CDG process. Number of covered tasks (out of the 223 initially uncovered) versus number of test-cases, for the CDG-generated directives and the baseline user-defined directives.
This type of observation (a distribution instead of a specific value) is known as soft evidence.
data that we got out of the simulation environment was a summary
of all the coverage tasks that occurred during the simulation of a
test-case. Therefore, it was hard to correlate between the observed
coverage tasks and the parameters' values that caused them and between
the different observed coverage tasks.
Because we had limited knowledge about the DUT and the correlation
between the parameters in the test directives and the coverage
tasks, the first Bayesian network we constructed contained
arcs between each of the coverage variables and each of the test
parameters. We trained this network with 160 test-cases (each taking
more than 30 minutes to execute). After the initial training, we
analyzed the Bayesian network and found out that most of the test
parameters were strongly correlated either to the command and response
coverage variables or the pipe and core variables, but only
a single variable was strongly correlated to all coverage variables.
Therefore, we partitioned the Bayesian network into two networks,
one for command and response and the other for core and pipe.
The result of the inference on the common parameter from the first
network was used as input for the second one. We trained the second
network with the same training set of 160 test-cases. During
the training, 1745 out of the 1968 tasks in the model were covered,
while 223 remained uncovered.
We checked the performance of the trained network and its ability
to increase the coverage rate for the uncovered tasks in the training
set. The baseline for comparison was the progress achieved by
the best test directive file created by an expert user.
We tried to maximize the coverage progress rate using a large
number of test directive files aimed at specific sets of uncovered
tasks. This approach is not realistic for a human user due to the effort
needed to create each set of directives. However, it is useful
for the automatic creation of directives, because the inference time
from the trained network is negligible. Our method to maximize
the coverage progress rate was to randomly partition the uncovered
tasks, use the trained network to create a test directive file
for each partition, and simulate a single test-case for each directive
file. This process was repeated until all the tasks were covered.
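A schematic of this loop (infer_directives and run_testcase are placeholders standing in for the trained network's inference step and for the simulation environment; they are not real APIs):

import random

def cover_remaining(uncovered, infer_directives, run_testcase, partition_size=10):
    # Repeat: partition the uncovered tasks, infer a directive file per partition,
    # and simulate one test-case per directive file.
    uncovered = set(uncovered)
    while uncovered:
        tasks = list(uncovered)
        random.shuffle(tasks)
        for i in range(0, len(tasks), partition_size):
            part = tasks[i:i + partition_size]
            directives = infer_directives(part)      # posterior query on the network
            uncovered -= run_testcase(directives)    # coverage tasks hit by the test

# Toy stand-ins so the sketch runs: tasks are integers, and each "test-case" hits
# a random subset of the tasks its directives were aimed at.
def infer_directives(part):
    return set(part)

def run_testcase(directives):
    return {t for t in directives if random.random() < 0.6}

cover_remaining(range(223), infer_directives, run_testcase)
print("all 223 initially uncovered tasks covered")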
The CDG process was able to cover all uncovered tasks after 250
test-cases, while the baseline case of the user defined test directives
file covered only two thirds of them after over 400 test-cases (see
Figure 6).
CONCLUSIONS AND FUTURE WORK
In this paper we demonstrated how Bayesian networks can be
used to close the loop between coverage data and directives to test
generators. The experiments described in the paper show that this
modeling technique can be efficiently used to achieve the CDG
goals of easier reach for hard coverage cases, diverse reach for average
cases, and improved coverage progress rate. It should be
noted that the suggested CDG method is not limited to the types
of simulation environments handled in this paper (i.e., parameter-based
test generation and direct stimuli generation). It can be used
in other types of environments, such as test generators in which the
control on the stimuli is embedded in the generator itself.
Our future work has two distinct aspects: enhancing the learning
capabilities and effectively applying the suggested framework to
the verification process. From the learning perspective, we plan
to explore other techniques that may increase our capabilities. For
example, incremental structure learning as a means for encoding
richer domain knowledge, and the efficient construction of good
queries to boost targeting rare cases using selective sampling. To
effectively deploy the CDG framework, we need to gain a better
understanding of the type of knowledge that should be encoded in
the model, and to identify in which areas the suggested approach
may prove most beneficial to the verification process.
| Coverage directed test generation;conditional probability;Functional Verification;Bayesian Networks;bidirectional inferences;Maximal A Posteriori;Dynamic Bayesian Network;design under test;coverage model;Coverage Analysis;Most Probable Explanation;Markov Chain |
62 | Creating a Massive Master Index for HTML and Print | An index connects readers with information. Creating an index for a single book is a time-honored craft. Creating an index for a massive library of HTML topics is a modern craft that has largely been discarded in favor of robust search engines. The authors show how they optimized a single-sourced index for collections of HTML topics, printed books, and PDF books. With examples from a recent index of 24,000 entries for 7,000 distinct HTML topics also published as 40 different PDF books, the authors discuss the connections between modern technology and traditional information retrieval methods that made the index possible, usable, and efficient to create and maintain. | THE PROBLEM
A project with a large library of existing documentation of
about 40 books (several with multiple volumes) required a high-quality
master index. The material was written using a proprietary
SGML authoring tool and converted to both HTML for browser-based
information and PDF. The library was being converted from
a set of books into a task-oriented, topic-based set of
documentation, so very little indexing time was available for the
approximately 20 authors in two sites fully engaged in rewriting
the documentation to conform to new guidelines for online topics,
to incorporate new content, and to change content for the next
release of their product. The four project editors responsible for
the complete library were likewise busy assisting the structural
conversion of the library. Customers were asking for more direct
ways to find information. At a user conference, 52 percent of users
said the former book indexes were their primary entry point into
product information.
Given these imposing (but sadly typical) resource constraints,
the information architect for the project and an editor with
extensive indexing experience worked together to develop an
approach that would use technology to maximize the available
efforts of both authors and editors. This paper describes the
approach from the perspectives of the authors, editors, and, most
importantly, the users of this set of documentation. The approach
is described in enough detail to enable other projects to adapt the
approach to the constraints of their situation. The paper also
indicates the future directions that the project will explore.
The challenges to producing a high-quality master index were
not just posed by available resources or by available technology,
but also by the writing culture of the project. The project had
historically been heavily oriented towards writing and producing
books--the HTML documentation set had been simply a
collection of books converted to HTML. As a result, navigation
through the HTML documentation was difficult; there were almost
no links between books, and each book was organized under its
own table of contents. Full-text search of the HTML books had
been the only method of finding information across the complete
library, if a user didn't know which book to consult directly.
The product market share was expanding, and new users had
to learn the product quickly. However, cultural attitudes towards
writing reinforced the problem of books as separate silos of
information: authors were responsible for, and took justifiable
pride in, producing their individual books, and while consistency
in style and ease of access across the library was encouraged, it
was much less important to the writers' satisfaction and explicit
goals than completing their self-contained sets of information well.
Earlier, the product had offered a PostScript master index as
a printed book and a file, in response to customer feedback about
finding information across the growing library. There was never
time to improve it, so it was eventually dropped, but at the same
time, the need for better retrievability of information was
increasing and search did not adequately meet that need. Users
demanded both an interactive information finder and an easily
scanned printable index.
The previous version of the library produced a master index
in both PDF and PostScript formats to assist users on platforms on
which search did not work. However, the PDF index was
generated from the individual indexes of each PDF book after the
content of the books had been frozen for translation, so there was
no attempt or opportunity to enforce consistency in indexing style
across books, even when editors were able to help writers improve
individual book indexes. Any effort in directly editing the master
index would have been lost on the library as a whole because the
index entries were part of the source books. Even so, the process
of creating the master index took days and required the
cooperation of dozens of authors.
The PDF master index was only a limited replacement for
search, since neither the PDF nor the PostScript version could
provide direct links to the indexed information. The PDF master
index forced users to find and open the corresponding PDF or
printed book, then find the referenced page themselves.
As part of the effort to address the problems of navigating
through the HTML documentation for the next release of the
product, the information architect and the editors decided to
produce a high-quality master index in HTML, even though the
full-text search capability would be supported across all platforms
for the next release.
WHY SEARCH IS NOT ENOUGH
A frequent question early in the project was "Why bother
with an online index if you have a good search engine?" The
problem with search is that typical search solutions are limited to
the terms or keywords that are found within the content matter,
making search results a happy coincidence between the user's
terminology and the content's terminology.
If synonyms or preferred terms are addressed at all in typical
search solutions, they are implemented as meta keywords. This
turns an explicit advantage of the index (the capability of training
the user in a product's vocabulary using see or see also references)
into an implicit method to improve search results. The terms used
in an index reflect the indexers' knowledge of the subject matter
and the users of that set of documentation. While search solutions
typically provide the user with the title, a ranking, and
occasionally keywords in context, an index's primary, secondary,
and tertiary entries give users more data and context with which to
select the right piece of information to meet their needs.
PROBLEMS WITH THE PREVIOUS PDF MASTER INDEX
A previous version of the product shipped a PDF master
index that was generated simply by collecting and sorting the
author-created, book-specific indexes from each of the 40 books.
An analysis of the resulting master index revealed that the
indexing approach varied greatly across writers. There were many
cases of inconsistent terminology. Singular versus plural nouns,
preferences for verbs or nouns, and capitalization differences were
the easiest problems to spot. Some authors indexed every
occurrence of a given term, other authors indexed only the most
significant usage of a given term, and others indexed almost
nothing at all. Some authors indexed every possible synonym for a
given term, and others indexed only the term itself. Some authors
relied heavily on secondary and tertiary entries to lend structure to
their index, while others relied almost exclusively on primary
index entries.
Many writers clearly didn't understand the significance of
such decisions on the customers who would ultimately use the
indexes. The master index was over 350 pages, and seemingly
small differences in primary entries sometimes resulted in users
having to look through several pages to find a specific entry.
A complicating cultural problem was that, for many authors,
indexing was an activity to be performed only when the content
had been written and checked for technical accuracy. In many
cases, this meant that the index for an individual book received
very little attention, and, in most cases, the master index received
no attention at all.
BARRIERS
It was clear that there were indexing problems throughout the
library. However, without painstakingly analyzing each index in
conjunction with the master index, it was impossible to determine
what significant material might not have been indexed or which
books had inconsistent capitalization and use of plurals. In the
wake of the first master index in PDF, the editors initiated an
attempt to address some of the most glaring inconsistencies by
setting indexing guidelines, educating authors in the art of
indexing, and encouraging authors to collaborate with other
authors in the project to define standard indexing terminology for
new or existing problem areas (for example, using specific labels
for program entities so they made sense in the context of other
entities from across the library, such as "SQL data type" versus
"data type").
The effort met with some success, but the process of
standardizing terminology across the library was unwieldy: index
entries were maintained within the source files for each book, and
the average book had between 50 and 200 source files. So, making
a single terminology change that affected 10 percent of the source
files in half of the books in the library would require opening,
editing, and saving approximately 200 source files that are kept
within a version control system. At five minutes per source file,
that's almost 17 hours of work! In addition, the source files were
often accessible only to the author, so there was a potential
bottleneck and a conflict between demands on the author's time to
write new content or revise existing index entries.
INDEX ENTRIES AS METADATA
Recognizing these problems along with others related to the
shift to a topic-based architecture, we proposed a solution that
required both technical and cultural change. The new approach to
indexing was a shift from viewing index entries as content owned
by each author and created solely during the writing process to
treating index entries as metadata to be created and edited during a
separate process.
The new topic-writing process involves adding all topics in
the library to a relational database and storing index entries in a
table related to the table that stores the topics. The database is
affectionately known as Dobalina. An index is by its very nature a
database, because it consists of records that are compiled into a
readable, searchable format (Wright, 2001), or into several formats for
single-sourced information. The database maximized our ability to
generate flexible outputs for different formats.
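The paper does not give the actual schema, but the idea can be sketched in
Python with SQLite; the table and column names below are illustrative only
(i1/i2/i3 mirror the labels later shown on the authors' screens).

import sqlite3

conn = sqlite3.connect("dobalina.db")  # illustrative file name
conn.executescript("""
CREATE TABLE IF NOT EXISTS topics (
    topic_id    INTEGER PRIMARY KEY,
    title       TEXT NOT NULL,
    source_file TEXT NOT NULL        -- SGML source file for the topic
);
CREATE TABLE IF NOT EXISTS index_entries (
    entry_id    INTEGER PRIMARY KEY,
    topic_id    INTEGER NOT NULL REFERENCES topics(topic_id),
    i1          TEXT NOT NULL,       -- primary entry
    i2          TEXT,                -- secondary entry
    i3          TEXT                 -- tertiary entry
);
""")
conn.commit()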
The initial indexing pass took advantage of the index entries
in the legacy documentation: a Perl script stripped the index
entries from the source files, replacing them with an SGML text
entity unique to each file, and inserting the index entries into the
database. Once the index entries were in the database, both authors
and editors could then use a Web interface to the database to
maintain the index entries. To build a PDF version of their books,
the authors download a set of auxiliary SGML files from the
database that define the SGML text entities as their corresponding
index entries. The following example demonstrates how the index
entries are dynamically generated from the database and
incorporated into the SGML source.
<!-- Start of topic source file -->
&index1; <!-- Index text entity -->
<p>Some topic content.</p>
<!-- End of topic source file -->
<!-- Index entity definitions generated by
database -->
<!ENTITY index1 "<index>
<primary>SQL statements</primary>
<secondary>data definition</secondary>
<primary>data definition</primary>
<secondary>SQL statements</secondary>
</index>">
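The original pass was done in Perl; a rough Python equivalent of the same
idea, assuming a simplified <index>/<primary>/<secondary> markup and the
SQLite layout sketched above (both are stand-ins for the real IBMIDDOC DTD
and database tables), might look like this.

import re

INDEX_BLOCK = re.compile(r"<index>.*?</index>", re.DOTALL)
ENTRY = re.compile(
    r"<primary>(.*?)</primary>\s*(?:<secondary>(.*?)</secondary>)?", re.DOTALL)

def strip_index_entries(path, topic_id, entity_name, conn):
    # Move literal index markup out of one topic source file into the
    # database, leaving a single text-entity reference behind.
    text = open(path, encoding="utf-8").read()
    for block in INDEX_BLOCK.findall(text):
        for primary, secondary in ENTRY.findall(block):
            conn.execute(
                "INSERT INTO index_entries (topic_id, i1, i2) VALUES (?, ?, ?)",
                (topic_id, primary.strip(), secondary.strip() or None))
    # Replace the first index block with the entity reference, drop the rest.
    text = INDEX_BLOCK.sub("&%s;" % entity_name, text, count=1)
    text = INDEX_BLOCK.sub("", text)
    open(path, "w", encoding="utf-8").write(text)
    conn.commit()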
5.1 Better living through automation
Storing the index entries in a database gives the team the
ability to quickly generate the HTML master index, freeing
authors from the requirement to painstakingly transform each of
their books to create the individual indexes that composed the
PDF master index. The process of creating the master index now
takes approximately 15 minutes instead of days. Dynamic access
to the entire set of index entries enables the team to easily isolate
certain consistency problems, such as plural nouns or incorrect
capitalization; to immediately identify topics without index entries;
to help ensure library-wide consistency for new index terms by
providing drop-down selection of terms; and to effect social
change by eliminating the need to access source files to add, edit,
or delete index entries.
Spell-checking the 24,000-odd master index entries takes
about two hours. The changes to the index terms are made in the
database, so the corrections are automatically reflected in the book
indexes. The previous approach would have required
communicating each correction to the writer, and the master index
would still have been subject to the errors if a writer did not have
time to integrate the changes throughout the affected source files.
5.2 Cultural and process changes
The role of editors changed significantly; rather than
providing guidance to authors, the editors collectively edit the
master index by directly manipulating index entries for the entire
library through a Web interface. More extensive or complicated
changes to index entries are accomplished through Perl scripts that
manipulate the database records. Maintaining index entries in a
database enables new methods of collaboration; for example, a
writing team might designate their most skilled indexer to index
the new content for a release.
THE AUTHOR EXPERIENCE
Indexing training provided the authors with detailed guidelines
and hands-on experience in group indexing. The detailed
guidelines are included in the Appendix. They consist of two
tables--one listing general indexing guidelines and a content-oriented
table that lists specific consistency rules for the library.
They also include some guidelines to help writers improve the
quality of their PDF indexes.
After the index entries are in the database, authors can
dynamically generate Web-based reports about the index entries
for their book. One report simply lists the number of index entries
per topic; as each topic should have at least one index entry, this is
an easy way for writers to ensure that all topics are indexed and to
see how extensively. This also helps writers meet the guideline
that each PDF book should have no more than two locators for any
index entry (primary-secondary-tertiary combination), to ensure
that entries are at the best level of detail for users.
Another report, shown in Figure 1, flags specific index
entries that might contravene the indexing guidelines. For
example, the report checks for index entries that differ only by
case and primary index entries that contain a comma, as well as
other conditions. A primary index entry that contains a comma
typically suggests that the entry should be split into primary and
secondary entries so it will nest with other secondary entries.
Figure 1. Report on inconsistent index entries
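A hedged sketch of such checks over entries pulled from the database (the
report's full rule set is not spelled out here, so only the two conditions
mentioned above are implemented):

from collections import defaultdict

def report_inconsistencies(entries):
    # entries: list of (topic_title, i1, i2, i3) tuples from the database.
    problems = []

    # Primary entries that differ only by case.
    by_lower = defaultdict(set)
    for _, i1, _, _ in entries:
        by_lower[i1.lower()].add(i1)
    for variants in by_lower.values():
        if len(variants) > 1:
            problems.append(("case variants", sorted(variants)))

    # Primary entries containing a comma, which usually should be split
    # into separate primary and secondary entries.
    for topic, i1, _, _ in entries:
        if "," in i1:
            problems.append(("comma in primary entry", (topic, i1)))

    return problems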
To ease the authors' job of verifying or addressing
deficiencies noted in the reports, the reports provide links to both
the content of the source file (View column) and the authors'
indexing interface (Title column). The authors' screens are
implemented as a Web-based form with two modes of work:
updating existing index entries, and adding index entries.
6.1 Authors' update index screen
The update screen, shown in Figure 2, is a set of text fields
that display the current primary, secondary, and tertiary entries
(shown as i1, i2, and i3, respectively).
Figure 2. Authors' updating screen
When authors update an entry in a text field, the status of that
entry automatically switches from No Change to Edit. When
authors complete their changes, they click Submit to commit their
changes for any flagged entries to the topic database. In Figure 2,
for example, the writer can easily see that the third entry isn't done
the same way as the first two, and that the third entry under names
has an unnecessary for. Authors also have the option of deleting
index entries by selecting the Delete radio button for any entries.
After they submit their changes, they see a refreshed table with any
deletions at the top followed by a list of changes.
6.2 Authors' add entry screen
The Add Index Entries screen, shown in Figure 3, enables
authors to add index entries easily in a way that maintains
consistency with the entries already in the database. To add a new
index entry for a topic, the author begins by clicking the initial
character for the new index entry from a drop-down box.
Figure 3. Selecting the initial character of a new index entry
The author then selects the primary entry (the i1 entry) from
the drop-down box that has been dynamically added to the form.
This box lists all the primary entries that start with the initial
character selected.
When an author picks a primary entry, as in Figure 4, it is
automatically copied into the i1 entry text box. The author then
drills down through the available secondary and tertiary entries,
picking the relevant entries. At any time in the process of adding a
new index entry, authors can manually enter or change the
primary, secondary, or tertiary entries in the text boxes if the
existing index entries in the database do not suit their needs. This
approach provides a flexible way to limit, but not rigidly control,
authors' indexing vocabulary. The next builds of the individual
book or deliverable, the HTML master index, and the PDF master
index reflect any new, deleted, or updated index entries.
Figure 4. Selecting the primary entry from the existing list
THE EDITOR EXPERIENCE
While the author interface focuses on a book-by-book (or
deliverable-by-deliverable) view of the index, the role of the
editors in the indexing effort is to improve the quality of the index
across the entire library.
Their primary focus is on the master index in HTML. During
the editing phase, a new master index in HTML was generated
every two hours to enable the editors to quickly view the results of
their work. The editors' indexing interface reflects their needs and
approach to the task of editing the master index. Editors begin by
finding a problem index entry in the master index using a separate
browser window. Then, they use a string or substring of the
primary index entry, as shown in Figure 5, to bring up a listing of
all primary index entries beginning with that string. The search is
not case-sensitive, so any variations in capitalization are included
in the search results.
Figure 5. Editors' screen for finding a primary index entry
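In terms of the SQLite sketch used earlier, this lookup is essentially a
case-insensitive prefix query over primary entries, for example:

def find_primary_entries(conn, prefix):
    # Return all distinct primary entries across the whole library that
    # begin with `prefix`, ignoring case.
    return [row[0] for row in conn.execute(
        "SELECT DISTINCT i1 FROM index_entries "
        "WHERE LOWER(i1) LIKE LOWER(?) || '%' ORDER BY LOWER(i1)",
        (prefix,))]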
The primary entry search yields matching results from across
the entire library, so the editor can quickly address inconsistencies
between two primary entries and rationalize secondary and tertiary
entries. The list of search results appears in a screen similar to the
authors' Show/Update/Add Index Entries screen, as shown in
Figure 6.
Figure 6. Editors' indexing screen for updating entries
When an editor submits a change, the editors' indexing
interface refreshes with the results, making it easy to confirm that
the expected change was made.
The next builds of the affected individual books or
deliverables, the HTML master index, and the PDF master index
reflect any new, deleted, or changed index entries.
USER EXPERIENCE HTML
Several alternative presentation formats were considered for
the HTML version of the master index. These included the format
used by the individual book indexes in the previous version of the
documentation, third-party formats, and an internally developed
format.
8.1 Previous presentation format
Previous versions of the HTML indexes generated by the
project's standard tools were created for individual books. These
indexes were presented as nested, unordered lists of index terms.
Links to the HTML pages were displayed as arbitrary four-digit
integers, as shown in Figure 7. If an index entry pointed to more
than one location in the HTML book, the links were displayed as
arbitrary integers separated by commas.
Figure 7. Previous index presentation format
One of the disadvantages of this presentation format is that
users have no criteria by which to choose among the links that
might be presented for a single index entry. The arbitrary integers
might even give users the impression of being some sort of
relevance ranking. Another disadvantage is that, for a master index
composed of approximately 40 books, the large number of
primary, secondary, and tertiary index entries makes the index very
difficult to scan in an online medium.
8.2 Third-party formats
Some existing systems that support indexes, such as Sun
JavaHelp, limit index entries to a one-to-one mapping between
each unique index entry and a corresponding locator. This was not
an acceptable limitation for the project's large documentation set;
each unique index entry needed to be able to map to multiple
locators. Other systems, such as Microsoft
HTML Help, do
support multiple locators for a single index entry through the use
of a pop-up window but do not support the operating systems
supported by the project.
Oracle Help for Java and Oracle Help for the Web
(http://otn.oracle.com/tech/java/help/content.html) can display
multiple topics per index entry using the topic titles, but they are
currently limited to two levels of index entries. Oracle Help for the
Web enables the user to type the first few characters of a primary
entry to jump ahead to that section of the index, but requires the
user to click on the index entry to display any links to topics.
We decided to display the index in the content pane of a help
system partly to enable portability. If we later decide to deliver our
information in a different help system such as JavaHelp, the
Eclipse help system, or Windows
Help, we can avoid any
limitations in the help system's built-in support for indexes. Trying
to drop 24,000 index entries into a navigation pane simply will not
deliver acceptable performance. Assuming that we continue to
deliver HTML-based documentation in future versions of the
product, this approach will provide users of future versions of the
product with a consistent means of accessing the master index.
8.3 Internally developed format
The solution for this project was to provide an expanding,
collapsing master index, as shown in Figure 8. To assist in
scanning, the HTML master index presents only the primary
entries at first, but enables users to drill down to secondary and
tertiary entries to find exactly the information they need. An index
entry that maps to multiple topics displays the locators as a further
nested list of links. To provide users with criteria by which they
can judge which of several locators meets their needs, each locator
is shown as the title of the corresponding topic, as illustrated
below.
Figure 8. Current index presentation format
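For illustration, a much-simplified Python sketch of how such a page could be
generated from database rows follows; it emits plain nested lists only, while
the production index adds the CSS and JavaScript that provide the
expand/collapse behavior and keyboard accessibility.

from collections import defaultdict
from html import escape

def build_index_html(rows):
    # rows: (primary, secondary, topic_title, topic_href) tuples for one
    # initial letter, e.g. pulled from the database; tertiary entries are
    # omitted here for brevity.
    tree = defaultdict(lambda: defaultdict(list))
    for i1, i2, title, href in rows:
        tree[i1][i2 or ""].append((title, href))

    parts = ["<ul>"]
    for i1 in sorted(tree, key=str.lower):
        parts.append("<li>%s<ul>" % escape(i1))
        for i2 in sorted(tree[i1], key=str.lower):
            if i2:
                parts.append("<li>%s<ul>" % escape(i2))
            for title, href in tree[i1][i2]:
                parts.append('<li><a href="%s">%s</a></li>'
                             % (escape(href), escape(title)))
            if i2:
                parts.append("</ul></li>")
        parts.append("</ul></li>")
    parts.append("</ul>")
    return "\n".join(parts)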
Nielsen's 1997 article "The Need for Speed" suggests that
34K is the maximum size of an optimal Web page for a ten-second
response time for a dial-up connection to the Web. However,
internal surveys of our users indicated that most users of our
product access the documentation either from a local workstation
or from a web server within their company's intranet. We wanted
to give users the additional context provided by full topic titles, so
we divided the content into multiple files. Given our typical user
scenarios, we decided that we could allow somewhat larger files
than recommended by Nielsen and divided the index by initial
letter into 27 separate files, including one file for all index entries
beginning with non-alphabetic characters.
The resulting average size of the set of index files is 100K,
the median size of the set of index files is 60K, and the largest
index file is 400K. The average index file loads and displays in
less than one second from an intranet or local workstation, while
the largest index file takes just over three seconds to load and
display. These times are within Nielsen's maximum threshold for
web usability. While the longest delays fall outside the threshold
of optimal usability response times of less than one second, we
feel that the slightly increased initial load time is balanced by the
ease of scanning an index file that contains all of the entries for an
initial character.
When we collected the complete set of 24,000 index entries
in a single HTML file, the file size was over two megabytes and
some browsers were incapable of displaying the file. The current
index presentation format uses only two images, each smaller than
one kilobyte, which are downloaded by the browser only once.
Approximately five percent of the total size of the HTML index
represents the JavaScript and CSS code that makes the index
accessible by keyboard navigation. The majority of the content is
due to the topic titles, which average 29 characters per title. Index
entries average 13 characters per entry.
Initial usability sessions on the current index presentation
format indicate that users understand how to work with the
expanding and collapsing lists, prefer this format to our previous
index format, and find the performance of the index from an
intranet or from a local workstation acceptable. However, as
these sessions drew on the experiences of only four users, we
recognize the need to do further research to confirm the validity of
this presentation format.
USER EXPERIENCE PDF
In PDF, the individual book indexes look like regular book
indexes; that is, they display the index entries and one or more
page numbers for those index entries. The decision to associate
index entries with entire topics has the detrimental effect of forcing
all index entries for a given topic to point to the beginning of that
topic. For topics that are longer than a page, users might have to
browse through subsequent pages to find the information
represented by the index entry in which they are interested.
However, length should not be an issue for most topics, as authors
are encouraged to keep topics short to ensure good online display.
The exception is some reference topics: a topic covering the syntax
of a single command might be over 20 pages, and a user might
legitimately be interested in just one of the optional arguments for
that command. In future iterations of the topic database we hope to
enable a more granular indexing strategy to address this
shortcoming.
We decided to optimize our indexing effort for the master
index (both in HTML and PDF), rather than for individual book
indexes, so we had to relax some traditional indexing guidelines.
One decision we made was to allow individual book indexes to
contain primary entries with only a single secondary entry. Rather
than concatenating such entries with a comma, we decided, for the
sake of the master index, to keep these entries as primary and
secondary entries. The following example demonstrates how an
individual book index is normally edited to concatenate the
primary and secondary entries:
SQL statements, overview
453
However, to optimize such entries for the master index, the
individual book index must contain distinct primary entries with a
lone secondary entry, as in the following example:
SQL statements
overview
453
We felt that this decline in usability of the individual book
indexes would be recouped by reducing the number of comma-spliced
entries in the master index, as shown in the following
example from the PDF version (the identifiers in front of the page
numbers are short forms of the book names):
SQL statements, issuing
ADGv1: 238
SQL statements, overview ADMINv1: 453
The master index instead presents the optimal arrangement of
nested primary and secondary entries:
SQL statements
issuing
ADGv1: 238
overview
ADMINv1: 453
One indexing decision driven by the requirements of PDF
that posed a potential compromise to the quality of the HTML
master index instead improved the master index. It was necessary
to artificially subdivide extremely long index entries into separate
primary and secondary entries in PDF because the books display
the index in a three-column format. A limitation of the PDF
transform technology used by the authoring tool is that extremely
long index entries that contain no spaces, such as the API keyword
SQL_DESC_DATETIME_INTERVAL_PRECISION, bleed
over column boundaries rather than wrap cleanly. In those cases
where many similar entries also start with the same prefix, the
prefix was turned into a primary entry and the remainder was
indexed as a secondary entry. This indexing decision reduces the
number of primary entries in the master index, making it much
easier for the user to scan the primary entries in the collapsed
HTML master index. So the preceding example becomes one of
many index entries with the primary entry "SQL_DESC":
SQL_DESC_
BIND_OFFSET_PRT
DATETIME_INTERVAL_PRECISION
DISPLAY_SIZE
TECHNOLOGY
The HTML master index is implemented in HTML Version
4.0 using Cascading Style Sheets level 2 (CSS2 at
http://www.w3.org/Style/CSS/), Document Object Model (DOM
at http://www.w3.org/DOM/), and ECMAScript 262
(http://www.ecma.ch/ecma1/stand/ecma-262.htm) to enable the
expanding and collapsing behavior. Microsoft Internet Explorer
5.0 and higher and Netscape 6 and higher support the expanding
and collapsing behavior. Other browsers degrade gracefully to
display nested unordered lists of the index terms and associated
topics.
The topic database, known as "Dobalina," is implemented
with a blend of open-source and proprietary products. The
relational database is IBM DB2 Version 7.2
(http://www.software.ibm.com/data/db2/udb), running on a
reasonably powerful server, but it could fairly easily be replaced
by an open-source database such as MySQL or PostgreSQL. The
source format for documentation is SGML (produced with a tool
built on ArborText using a proprietary IBMIDDOC DTD), which
enables us to dynamically generate index entries as text entities.
XML or some sort of manipulated HTML would also fit easily
into this model.
The project uses the Apache Web server
(http://httpd.apache.org) to display the Web scripting front end
implemented in PHP (PHP: Hypertext Preprocessor, see
http://php.apache.org). PHP also creates the auxiliary source files
used to generate the PDF books.
The HTML master index is generated by a Perl script
(http://www.perl.org), which connects to the database through the
Perl database interface module (http://dbi.perl.org). Most of the
initial processing of the documentation source files was also
performed with Perl scripts.
FUTURE DIRECTIONS
The index improvement process for such a large information
set is planned over several phases, as shown in Table 1. In this
project, we are now planning Phase 3.
Table 1. Phases of the indexing effort

Phase 1 (Recognize the problem and build internal support): reassess
master index quality problems; analyze information retrieval needs;
experiment with scripts; build indexing requirements; develop indexing
guidelines; collect user feedback.

Phase 2 (Create first version for writers and customers): create the
topic database; develop indexing interfaces; design the presentation of
the master index and do initial user testing; establish a flexible
controlled vocabulary process and start adding see references; train
authors in indexing and guidelines; edit the master index (focus on
primary entries); conduct initial user testing of the index
presentation format.

Phase 3 (Continue refining the master index): provide multiple entry
points to the index in the product; implement See also references;
review consistency of primary entries; edit the master index (focus on
subentries); add more syntax and reference entries; implement and apply
further reporting features; promote internal use and gather feedback on
index content to refine the user orientation of entries and to "fill
holes"; do user testing; reuse user-centered design scenarios.

Phase 4 (Ensure completeness of contributing content areas): continue
to develop reporting features to improve overall consistency; ensure
completeness of concept and task entries; edit PDF indexes to help
ensure that content is adequately indexed; work with each small writing
team to improve index coverage; analyze problems with PDF and master
indexes in other languages.

Phase 5 (Maintain and continue improving the index): establish an
iterative maintenance process; continue to improve presentation,
technology, and content; build consensus to improve the quality of
localized indexes.
This table shows our actual process and not necessarily the
best possible task flow. For example, ideally, See and See also
references would be fully implemented in phase 2.
One area for future development is the creation of additional
reports and scripts. For example, a regular report could identify all
the new entries that were not part of the last published index for
special editing attention.
We are currently augmenting our lists of see references for
synonyms, competitive terms, and obsolete terms. We plan to take
full advantage of the capabilities of our relational database to
create mappings between deprecated or less acceptable terms and
acceptable terms, so that any PDF book with an index entry for a
deprecated term automatically includes cross-references that lead
to the acceptable term.
A new indexing screen for authors, with some of the function
of the editors' indexing screen, will facilitate improvements of the
PDF indexes.
We've put a lot of effort into making what's already in the
index more consistent, usable, and predictable. But one of the
biggest problems is to fill the remaining content holes, that is,
what's not yet indexed. Customer, developer, and service feedback
indicates the need to improve the granularity of indexing reference
topics that document syntax.
We will also ensure that the index provides easy access to
information needed to implement business and customer scenarios
developed by the user-centered design and development teams,
and continue to develop the usability of the index interfaces in the
product. In Phase 4, we plan to work with each small writing team
on specific ways to improve the retrievability of their information.
REFERENCES
[1] Nielsen, Jakob. "The Need for Speed," 1997; available at
http://www.useit.com/alertbox/9703a.html
[2] Wright, Jan C. "Single-Source Indexing," Proceedings of the 19th
Annual International Conference on Systems Documentation, October 2001;
available at http://www.portal.acm.org
| book indexes;Information retrieval methods;Massive Master;drop-down selection of terms;Indexing;SQL data type;Search;primary index entry;Internally developed format;automation;indexing problems;Human factors;HTML master index;Online information;Navigation |
63 | Data Mining in Metric Space: An Empirical Analysis of Supervised Learning Performance Criteria | Many criteria can be used to evaluate the performance of supervised learning. Different criteria are appropriate in different settings, and it is not always clear which criteria to use. A further complication is that learning methods that perform well on one criterion may not perform well on other criteria. For example, SVMs and boosting are designed to optimize accuracy, whereas neural nets typically optimize squared error or cross entropy. We conducted an empirical study using a variety of learning methods (SVMs, neural nets, k-nearest neighbor, bagged and boosted trees, and boosted stumps) to compare nine boolean classification performance metrics: Accuracy, Lift, F-Score, Area under the ROC Curve, Average Precision, Precision/Recall Break-Even Point, Squared Error, Cross Entropy, and Probability Calibration. Multidimensional scaling (MDS) shows that these metrics span a low dimensional manifold. The three metrics that are appropriate when predictions are interpreted as probabilities: squared error, cross entropy, and calibration, lay in one part of metric space far away from metrics that depend on the relative order of the predicted values: ROC area, average precision, break-even point, and lift. In between them fall two metrics that depend on comparing predictions to a threshold: accuracy and F-score. As expected, maximum margin methods such as SVMs and boosted trees have excellent performance on metrics like accuracy , but perform poorly on probability metrics such as squared error. What was not expected was that the margin methods have excellent performance on ordering metrics such as ROC area and average precision. We introduce a new metric, SAR, that combines squared error, accuracy, and ROC area into one metric. MDS and correlation analysis shows that SAR is centrally located and correlates well with other metrics, suggesting that it is a good general purpose metric to use when more specific criteria are not known. | INTRODUCTION
In supervised learning, finding a model that could predict
the true underlying probability for each test case would be
optimal. We refer to such an ideal model as the One True
Model. Any reasonable performance metric should be optimized
(in expectation, at least) by the one true model, and
no other model should yield performance better than it.
Unfortunately, we usually do not know how to train models
to predict the true underlying probabilities. The one
true model is not easy to learn. Either the correct parametric
model type for the domain is not known, or the training
sample is too small for the model parameters to be estimated
accurately, or there is noise in the data. Typically,
all of these problems occur together to varying degrees.
Even if magically the one true model were given to us, we
would have difficulty selecting it from other less true models.
We do not have performance metrics that will reliably assign
best performance to the probabilistically true model given
finite validation data.
In practice, we train models to minimize loss measured via
a specific performance metric. Since we don't have metrics
that could reliably select the one true model, we must accept
the fact that the model(s) we select will necessarily be
suboptimal. There may be only one true model, but there
are many suboptimal models.
There are different ways that suboptimal models can differ
from the one true model: tradeoffs can be made between
different kinds of deviation from the one true model. Different
performance metrics reflect these different tradeoffs. For
example, ordering metrics such as area under the ROC curve
and average precision do not care if the predicted values are
near the true probabilities, but depend only on the relative
size of the values. Dividing all predictions by ten does
not change the ROC curve, and metrics based on the ROC
curve are insensitive to this kind of deviation from truth.
Metrics such as squared error and cross entropy, however,
are greatly affected by scaling the predicted values, but are
less affected by small changes in predicted values that might
alter the relative ordering but not significantly change the
deviation from the target values. Squared error and cross
entropy reflect very different tradeoffs than metrics based
on the ROC curve. Similarly, metrics such as accuracy depend
on how the predicted values fall relative to a threshold.
If predicted values are rescaled, accuracy will be unaffected
if the threshold also is rescaled. But if small changes to
predicted values are made for cases near the threshold, this
can have large impact on accuracy. Accuracy reflects yet
another tradeoff in how deviation from truth is measured.
Figure 1: Level curves for six error metrics (ACC, AUC, RMS, MXE, CAL,
SAR) plotted over the two model weights W1 and W2 for a simple problem;
the panels show Max ACC, Max AUC, Min RMS, Min MXE, Min CAL, and Max
SAR.
The one true model, if available, would have (in expectation)
the best accuracy, the best ROC curve, and the best
cross entropy, and the different tradeoffs made by these metrics
would not be important. But once we accept that we
will not be able to find the one true model, and must therefore
accept suboptimal models, the different tradeoffs made
by different performance metrics become interesting and important.
Unfortunately, little is known about how different
performance metrics compare to each other.
In this paper we present results from an empirical analysis
of nine widely used performance metrics. We perform
this empirical comparison using models trained with seven
learning algorithms: SVMs, neural nets, k-nearest neighbor,
bagged and boosted trees, and boosted stumps. We
use multidimensional scaling (MDS) and correlation analysis
to interpret the results. We also examine which learning
methods perform best on the different metrics. Finally, we
introduce a new metric, SAR, that combines squared error,
accuracy, and ROC area into a single, robust metric.
THE PERFORMANCE METRICS
We experiment with nine performance metrics for boolean
classification: Accuracy (ACC), Lift (LFT), F-Score (FSC),
Area under the ROC Curve (AUC), Average Precision (APR),
the Precision/Recall Break-Even Point (BEP), Root Mean
Squared Error (RMS), Mean Cross Entropy (MXE), and
Probability Calibration (CAL). Definitions for each of the
metrics can be found in Appendix A.
Figure 1 shows level curves for six of the ten performance
metrics for a model with only two parameters (W1 and W2)
trained on a simple synthetic binary problem. Peak performance
in the first two plots occurs along a ridge in weight
space. In the other four plots peak performance is indicated
by solid dots. The peak performances of some metrics nearly
coincide: RMS and MXE peak at nearly the same model
weights. But other metrics peak in different places: CAL
has a local optimum near the optima for RMS and MXE,
but its global optimum is in a different place. Also, the
ridges for optimal ACC and optimal AUC do not align, and
the ridges do not cross the optima for the other four metrics.
Optimizing to each of these metrics yields different models,
each representing different tradeoffs in the kinds of errors
the models make. Which of these tradeoffs is best depends
on the problem, the learning algorithm, and how the model
predictions ultimately will be used.
We originally divided the nine metrics into three groups:
threshold metrics, ordering/rank metrics, and probability
metrics. The three threshold metrics are accuracy (ACC),
F-score (FSC) and lift (LFT). F-score is the harmonic mean
of precision and recall at some threshold. Lift measures the
true positive rate in the fraction of cases that fall above
threshold. (See Appendix A for a definition of lift, and [3]
for a description of Lift Curves. Lift is the same as precision
at some threshold, but scaled so that it can be larger than
1.) Usually ACC and FSC use a fixed threshold. In this
paper we use 0.5. With lift, often the threshold is adjusted
so that a fixed percent, p, of cases are predicted as positive,
the rest falling below threshold. Usually p depends on the
problem. For example, in marketing one might want to send
fliers to 10% of customers. Here we somewhat arbitrarily set
p = 25% for all problems. Note that for all threshold metrics
it is not important how close a prediction is to a threshold,
only if the predicted value is above or below threshold.
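As a concrete reading of these descriptions (the paper's exact definitions
are in its Appendix A, which is not part of this excerpt), the three
threshold metrics can be computed along the following lines.

import numpy as np

def threshold_metrics(y_true, y_pred, threshold=0.5, lift_fraction=0.25):
    # y_true holds 0/1 labels, y_pred real-valued predictions.  ACC and
    # F-score use a fixed threshold; lift uses the top `lift_fraction` of
    # cases ranked by predicted value, scaled by the base rate.
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    pred_pos = y_pred >= threshold

    acc = np.mean(pred_pos == (y_true == 1))

    tp = np.sum(pred_pos & (y_true == 1))
    precision = tp / max(pred_pos.sum(), 1)
    recall = tp / max((y_true == 1).sum(), 1)
    fsc = 2 * precision * recall / max(precision + recall, 1e-12)

    k = max(int(round(lift_fraction * len(y_pred))), 1)
    top = np.argsort(-y_pred)[:k]
    lift = y_true[top].mean() / max(y_true.mean(), 1e-12)

    return acc, fsc, lift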
The ordering/rank metrics look at predictions differently
from the threshold metrics. If cases are ordered by predicted
value, the ordering/rank metrics measure how well the ordering
ranks positive cases above negative cases. The rank
metrics can be viewed as a summary of the performance of
a model across all possible thresholds. The rank metrics we
use are area under the ROC curve (AUC), average precision
(APR), and precision/recall break even point (BEP). See
[10] for a discussion of ROC curves from a machine learning
perspective. Rank metrics depend only on the ordering
of the predictions, not the actual predicted values. If the
ordering is preserved it makes no difference if the predicted
values range between 0 and 1 or between 0.29 and 0.31.
Although we group Lift with the threshold metrics, and
BEP with the ordering metrics, BEP and Lift are similar to
each other in some respects. Lift is directly proportional to
BEP if Lift is calculated at p equal to the proportion of positives
in the data set. This threshold also is the break-even
point where precision equals recall. BEP and Lift are similar
to the ordering metrics because the threshold depends
implicitly on the ordering, but also are similar to the threshold
metrics because neither is sensitive to the orderings on
either side of the threshold once that threshold has been
defined. Results presented later suggest that both Lift and
BEP are more similar to the ordering metrics than to the
threshold metrics.
The three probability metrics depend on the predicted values, not
on how the values fall relative to a threshold or relative
to each other. The probability metrics are uniquely minimized
(in expectation) when the predicted value for each
case coincides with the true probability of that case being
positive. The probability metrics we consider are squared
error (RMS), cross entropy (MXE) and calibration (CAL).
CAL measures the calibration of a model: if a model predicts
0.85 for a large number of cases, about 85% of those cases
should prove to be positive if the model is well calibrated.
See Appendix A for details of how CAL is calculated.
We also experiment with a new performance metric, SAR,
that combines squared error, accuracy, and ROC area into
one measure: SAR = (ACC + AUC + (1 - RMS))/3. SAR
behaves somewhat differently from ACC, AUC, and RMS
alone, and is a robust metric to use when the correct metric
is unknown. SAR is discussed further in Section 8.
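For concreteness, a small Python sketch of SAR (scikit-learn supplies the
AUC term; a 0.5 threshold is assumed for the ACC term, as elsewhere in the
paper):

import numpy as np
from sklearn.metrics import accuracy_score, roc_auc_score

def sar(y_true, y_pred):
    # SAR = (ACC + AUC + (1 - RMS)) / 3 for 0/1 targets and predictions
    # in [0, 1].
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    acc = accuracy_score(y_true, (y_pred >= 0.5).astype(int))
    auc = roc_auc_score(y_true, y_pred)
    rms = np.sqrt(np.mean((y_true - y_pred) ** 2))
    return (acc + auc + (1.0 - rms)) / 3.0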
NORMALIZING THE SCORES
Performance metrics such as accuracy or squared error
have range [0, 1], while others (lift, cross entropy) range from
0 to q where q depends on the data set. For some metrics
lower values indicate better performance. For others higher
values are better. Metrics such as ROC area have baseline
rates that are independent of the data, while others such as
accuracy have baseline rates that depend on the data. If
baseline accuracy is 0.98, an accuracy of 0.981 probably is
not good performance, yet on another problem, if the Bayes
optimal rate is 0.60, achieving an accuracy of 0.59 might be
excellent performance.
In order to compare performance metrics in a meaningful
way, all the metrics need to be placed on a similar scale. One
way to do this is to scale the performances for each problem
and metric from 0 to 1, where 0 is poor performance, and 1
is good performance. For example, we might place baseline
performance at 0, and the Bayes optimal performance at 1.
Unfortunately, we cannot estimate the Bayes optimal rate
on real problems. Instead, we can use the performance of
the best observed model as a proxy for the Bayes optimal
performance. We calculate baseline rate as follows: predict
p for every case, where p is the percent of positives in the test
set. We normalize performances to the range [0, 1], where
0 is baseline and 1 represents best performance. If a model
performs worse than baseline, its normalized score will be
negative. See Table 1 for an example of normalized scores.
The disadvantage of normalized scores is that recovering the
raw performances requires knowing the performances that
define the top and bottom of the scale, and as new best
models are found the top of the scale changes.
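One way to write this normalization (a sketch consistent with the
description above; the explicit handling of lower-is-better metrics is our
assumption):

def normalize_score(perf, baseline, best, higher_is_better=True):
    # Map raw performance so that baseline -> 0 and the best observed
    # model -> 1; worse-than-baseline performance becomes negative.
    if not higher_is_better:          # e.g. RMS, MXE: flip the sign
        perf, baseline, best = -perf, -baseline, -best
    span = best - baseline
    if span == 0:                     # degenerate scale
        return 0.0
    return (perf - baseline) / span

For example, normalize_score(0.8534, baseline=0.7518, best=0.8556) gives
roughly 0.98, in line with the bag-dt row of Table 1 (the small difference
from the published 0.9795 comes from rounding of the reported raw values).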
CAL, the metric we use to measure probability calibration,
is unusual in that the baseline model that predicts p
for all cases, where p is the percent of positives in the test
set, has excellent calibration. (Because of this, measures like
CAL typically are not used alone, but are used in conjunction
with other measures such as AUC to ensure that only
models with good discrimination and good calibration are
selected. See Figure 1 for a picture of how unusual CAL's
error surface is compared with other metrics.) This creates a
problem when normalizing CAL scores because the baseline
model and Bayes optimal model have similar CAL scores.
This does not mean CAL is a poor metric; it is effective at
distinguishing poorly calibrated models from well calibrated
models. We address this problem later in the paper.
EXPERIMENTAL DESIGN
The goal of this work is to analyze how the ten metrics
compare to each other. To do this we train many different
kinds of models on seven test problems, and calculate for
each test problem the performance of every model on the
ten metrics.
We train models using seven learning algorithms: Neural
Nets (ANN), SVMs, Bagged Decision Trees (BAG-DT),
Boosted Decision Trees (BST-DT), Boosted Decision Stumps
(BST-STMP), single Decision Trees (DT) and Memory Based
Learning (KNN).

Table 1: Accuracy on ADULT problem

    model       acc      norm score
    bst-stmp    0.8556   1.0000
    bag-dt      0.8534   0.9795
    dt          0.8503   0.9494
    svm         0.8480   0.9267
    bst-dt      0.8464   0.9113
    ann         0.8449   0.8974
    knn         0.8320   0.7731
    baseline    0.7518   0.0000

For each algorithm we train many variants
and many parameter settings. For example, we train ten
styles of decision trees, neural nets of different sizes, SVMs
using many different kernels, etc. A total of 2000 models are
trained and tested on each problem. See Appendix B for a
description of the parameter settings we use for each learning
method. While this strategy won't create every possible
model, and won't create a uniform sample of the space of
possible models, we feel that this is an adequate sample of
the models that often will be trained in practice.
For each problem, the 2000 models are trained on the same
train set of 4000 points. The performance of each model
is measured on the same large test set for each of the ten
performance metrics. In order to put the performances on the
same scale across different metrics and different problems,
we transform the raw performances to normalized scores as
explained in Section 3. In total, across the seven problems,
we have 2000 × 7 = 14,000 models, and for each model we
have its score on each of the 10 performance metrics.
DATA SETS
We compare the algorithms on seven binary classification
problems. ADULT, COVER TYPE and LETTER are from
UCI Repository [1]. ADULT is the only problem that has
nominal attributes. For ANNs, SVMs and KNNs we transform
nominal attributes to boolean. Each DT, BAG-DT,
BST-DT and BST-STMP model is trained twice, once with
the transformed attributes and once with the original attributes.
COVER TYPE has been converted to a binary
problem by treating the largest class as the positive and the
rest as negative. We converted LETTER to boolean in two
ways. LETTER.p1 treats the letter "O" as positive and the
remaining 25 letters as negative, yielding a very unbalanced
binary problem. LETTER.p2 uses letters A-M as positives
and the rest as negatives, yielding a well balanced problem.
HYPER SPECT is the IndianPine92 data set [4] where the
difficult class Soybean-mintill is the positive class. SLAC is
a problem from collaborators at the Stanford Linear Accelerator
and MEDIS is a medical data set. The characteristics
of these data sets are summarized in Table 2.
Table 2: Description of problems

    problem       #attr    train size   test size   % pos.
    adult         14/104   4000         35222       25%
    cover type    54       4000         25000       36%
    letter.p1     16       4000         14000       3%
    letter.p2     16       4000         14000       53%
    medis         63       4000         8199        11%
    slac          59       4000         25000       50%
    hyper spect   200      4000         4366        24%
MDS IN METRIC SPACE
Training 2000 models on each problem using seven learning
algorithms gives us 14,000 models, each of which is evaluated
on ten performance metrics. This gives us 14,000
sample points to compare for each performance metric. We
build a 10x14,000 table where rows represent the performance
metrics, columns represent the models, and each entry
in the table is the score of the model on that metric.
For MDS, we treat each row in the table as the coordinate
of a point in a 14,000 dimension space. The distance between
two metrics is calculated as the Euclidean distance
between the two corresponding points in this space. Because
the coordinates are strongly correlated, there is no
curse-of-dimensionality problem with Euclidean distance in
this 14,000 dimensional space.
We are more interested in how the metrics compare to
each other when models have good performance than when
models have poor performance. Because of this, we delete
columns representing poorer performing models in order to
focus on the "interesting" part of the space where models
that have good performance lie. For the analyses reported
in this paper we delete models that perform below baseline
on any metric (except CAL).
Ten metrics permit 10 × 9/2 = 45 pairwise comparisons.
We calculate Euclidean distance between each pair of metrics
in the sample space, and then perform multidimensional
scaling on these pairwise distances between metrics.
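A compact sketch of this step with SciPy and scikit-learn, assuming scores
is the metrics-by-models array (e.g. 10 x 14,000) already restricted to the
models of interest:

import numpy as np
from scipy.spatial.distance import pdist, squareform
from sklearn.manifold import MDS

def mds_embedding(scores, n_components=2, random_state=0):
    # scores: array of shape (n_metrics, n_models), one row per metric.
    # Returns an embedding of the metrics based on the pairwise Euclidean
    # distances between their rows.
    dist = squareform(pdist(np.asarray(scores), metric="euclidean"))
    mds = MDS(n_components=n_components, dissimilarity="precomputed",
              random_state=random_state)
    return mds.fit_transform(dist)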
MDS is sensitive to how the performance metrics are scaled.
The normalized scores described in Section 3 yield well-scaled
performances suitable for MDS analysis for most metrics.
Unfortunately, as discussed in Section 3, normalized
scores do not work well with CAL. Because of this, we perform
MDS two ways. In the first, we use normalized scores,
but exclude the CAL metric. In the second, we include CAL,
but scale performances to mean 0.0 and standard deviation
1.0 instead of using normalized scores. Scaling by standard
deviation resolves the problem with CAL for MDS, but is
somewhat less intuitive because scores scaled by standard
deviation depend on the full distribution of models instead
of just the performances that fall at the top and bottom of
each scale.
Figure 2 shows the MDS stress as a function of the number
of dimensions in the MDS (when CAL is included). The
ten metrics appear to span an MDS space of about 3 to 5
dimensions. In this section we examine the 2-D MDS plots
in some detail.
Figure 3 shows two MDS plots for the metrics that result
when dimensionality is reduced to two dimensions. The plot
on the left is MDS using normalized scores when CAL is
excluded. The plot on the right is MDS using standard
deviation scaled scores when CAL is included.
Both MDS plots show a similar pattern. The metrics appear
to form 4-5 somewhat distinct groups. In the upper
right hand corner is a group that includes AUC, APR, BEP,
LFT, and SAR. The other groups are RMS and MXE, ACC
(by itself, or possibly with FSC), FSC (by itself, or possibly
with ACC), and CAL (by itself). It is not surprising that
squared error and cross entropy form a cluster. Also, presumably
because squared error tends to be better behaved
than cross entropy, RMS is closer to the other measures than
MXE. We are somewhat surprised that RMS is so centrally
located in the MDS plots. Perhaps this partially explains
why squared error has proved so useful in many applications.
Figure 2: MDS stress vs. number of dimensions
It is somewhat surprising that accuracy does not appear
to correlate strongly with any of the other metrics, except
possibly with FSC. ACC does not fall very close to other
metrics that use thresholds such as Lift and F-Score, even
though F-Score uses the same 0.5 threshold as accuracy in
our experiments. (The threshold for Lift is adjusted dynamically
so that 25% of the cases are predicted as positive.)
Accuracy is surprisingly close to RMS, and closer to RMS
than to MXE, again suggesting that part of the reason why
RMS has been so useful is because of its close relationship
to a metric such as ACC that has been so widely used.
The most surprising pattern in the MDS plot that includes
CAL is that CAL is distant from most other metrics.
There appears to be an axis running from CAL at
one end to the ordering metrics such as AUC and APR
at the other end that forms the largest dimension in the
space. This is surprising because one way to achieve excellent
ordering is to accurately predict true probabilities,
which is measured by the calibration metric. However, one
can achieve excellent AUC and APR using predicted values
that have extremely poor calibration, yet accurately predict
the relative ordering of the cases. The MDS plot suggests
that many models which achieve excellent ordering do so
without achieving good probabilistic calibration. Closer examination
shows that some models such as boosted decision
trees yield remarkably good ordering, yet have extremely
poor calibration.
We believe maximum margin methods such as boosting trade off
calibration for better margin. See Section 9 for further
discussion of this issue. One
also can achieve good calibration, yet have poor AUC and
APR. For example, decision trees with few leaves may be
well calibrated, but the coarse set of values they predict do
not provide a basis for good ordering.
Figure 4 shows 2-D MDS plots for six of the seven test
problems. The seventh plot is similar and is omitted to
save space. (The omitted plot is one of the two LETTER
problems.) Although there are variations between the plots,
the 2-D MDS plots for the seven problems are remarkably
consistent given that these are different test problems. The
consistency between the seven MDS plots suggests that we
have an adequate sample size of models to reliably detect relationships
between the metrics. Metrics such as ACC, FSC,
and LFT seem to move around with respect to each other in
these plots. This may be because they have different sensitivities
to the ratio of positives to negatives in the data sets.

Figure 3: 2D MDS plot using normalized scores (left) and standard deviation scaling (right).
For example, BEP is proportional to LFT (and thus behaves
similarly) when the percentage of positives in the dataset
equals the fraction predicted above threshold (25% in this
paper). Other than this, we have not been able to correlate
differences we see in the individual plots with characteristics
of the problems that might explain those differences,
and currently believe that the MDS plots that combine all
seven problems in Figure 3 represent an accurate summary
of the relationships between metrics. Note that this does
not mean that the performance of the different learning algorithms
exhibits the same pattern on these test problems
(in fact they are very different), only that the relationships
between the ten metrics appear to be similar across the test
problems when all the learning algorithms are considered at
one time.
CORRELATION ANALYSIS
As with the MDS analysis in the previous section, we
used each of the ten performance metrics to measure the
performance of the 2000 models trained with the different
learning methods on each of the seven test problems. In
this section we use correlation analysis on these models to
compare metrics instead of MDS.
Again, to make the correlation analysis easier to interpret
, we first scale performances to the range [0, 1] so that
the best performance we observed with that metric on each
problem with any of the learning methods is performance
1, and baseline performance with that metric and data set
is performance 0. This eliminates the inverse correlation
between measures such as accuracy and squared error, and
normalizes the scale of each metric.
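The rescaling itself is a one-line affine map; a minimal sketch, with hypothetical baseline and best values, is given below.

```python
import numpy as np

def normalize_scores(raw, baseline, best):
    """Map raw metric values so baseline performance -> 0 and the best
    observed performance -> 1. For error-type metrics such as RMS the
    best value is numerically smaller than the baseline, so the same
    formula also flips the sense of the metric."""
    return (np.asarray(raw, dtype=float) - baseline) / (best - baseline)

# Hypothetical example: accuracy, baseline 0.75 (majority class), best observed 0.95.
print(normalize_scores([0.75, 0.85, 0.95], baseline=0.75, best=0.95))  # [0. 0.5 1.]
```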
Ten metrics permit 10 x 9/2 = 45 pairwise correlations.
We do these comparisons using both linear correlation (excluding
CAL) and rank correlation. The results from the
linear and rank correlation analyses are qualitatively similar
. We present the results for non-parametric rank correlation
because rank correlation makes fewer assumptions
about the relationships between the metrics, and because
rank correlation is insensitive to how CAL is scaled.
Table 3 shows the rank correlation between all pairs of
metrics. Each entry in the table is the average rank correlation
across the seven test problems. The table is symmetric
and contains only 45 unique pairwise comparisons.
We present the full matrix because this makes it easier to
scan some comparisons. The final column is the mean of
the rank correlations for each metric. This gives a rough
idea how correlated each metric is on average to all other
metrics.
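A sketch of how one cell of Table 3 could be computed for a single problem is given below; it assumes a hypothetical dictionary of per-model scores and uses scipy's Spearman rank correlation, with the per-problem values then averaged over the seven problems.

```python
import itertools
import numpy as np
from scipy.stats import spearmanr

def pairwise_rank_correlations(scores_by_metric):
    """Spearman rank correlation for every pair of metrics on one problem.
    scores_by_metric: dict of metric name -> array of per-model scores."""
    out = {}
    for m1, m2 in itertools.combinations(scores_by_metric, 2):
        rho, _ = spearmanr(scores_by_metric[m1], scores_by_metric[m2])
        out[(m1, m2)] = rho
    return out

# Hypothetical toy data for three metrics over five models
# (all already rescaled so that higher is better).
scores = {"acc": [0.80, 0.70, 0.90, 0.60, 0.85],
          "auc": [0.85, 0.72, 0.95, 0.65, 0.90],
          "rms": [0.90, 0.80, 0.95, 0.70, 0.88]}
print(pairwise_rank_correlations(scores))
```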
Metrics with pairwise rank correlations near one behave
more similarly than those with smaller rank correlations. Ignoring
the SAR metric which is discussed in the next section,
seven metric pairs have rank correlations above 0.90:
0.96: Lift to ROC Area
0.95: ROC Area to Average Precision
0.93: Accuracy to Break-even Point
0.92: RMS to Cross-Entropy
0.92: Break-Even Point to ROC Area
0.92: Break-Even Point to Average Precision
0.91: Average Precision to Lift
We expected AUC and average precision to behave very
similarly and thus have high rank correlation. But we are
surprised to see that Lift has such high correlation to AUC.
Note that because Lift has high correlation to AUC, and
AUC has high correlation to average precision, it is not surprising
that Lift also has high correlation to average precision
. As expected, break-even point is highly correlated with
the other two ordering metrics, AUC and average precision.
But the high correlation between accuracy and break-even
point is somewhat surprising and we currently do not know
how to explain this.

Figure 4: 2-D MDS plots for six of the seven test problems (COVER_TYPE, ADULT, LETTER.P1, HYPER_SPECT, MEDIS, SLAC). The seventh problem yields a similar plot and is omitted only to save space. The missing plot is for one of the LETTER problems.

Table 3: Average rank correlations between metrics

        acc    fsc    lft    auc    apr    bep    rms    mxe    cal    sar    mean
acc     1.00   0.87   0.85   0.88   0.89   0.93   0.87   0.75   0.56   0.92   0.852
fsc     0.87   1.00   0.77   0.81   0.82   0.87   0.79   0.69   0.50   0.84   0.796
lft     0.85   0.77   1.00   0.96   0.91   0.89   0.82   0.73   0.47   0.92   0.832
auc     0.88   0.81   0.96   1.00   0.95   0.92   0.85   0.77   0.51   0.96   0.861
apr     0.89   0.82   0.91   0.95   1.00   0.92   0.86   0.75   0.50   0.93   0.853
bep     0.93   0.87   0.89   0.92   0.92   1.00   0.87   0.75   0.52   0.93   0.860
rms     0.87   0.79   0.82   0.85   0.86   0.87   1.00   0.92   0.79   0.95   0.872
mxe     0.75   0.69   0.73   0.77   0.75   0.75   0.92   1.00   0.81   0.86   0.803
cal     0.56   0.50   0.47   0.51   0.50   0.52   0.79   0.81   1.00   0.65   0.631
sar     0.92   0.84   0.92   0.96   0.93   0.93   0.95   0.86   0.65   1.00   0.896
The weakest correlations are all between the calibration
metric (CAL) and the other metrics. On average, CAL correlates
with the other metrics only about 0.63. We are surprised
how low the correlation is between probability calibration
and other metrics, and are currently looking at other
measures of calibration to see if this is true for all of them.
Figure 5: MDS using rank correlation
Figure 5 shows an MDS plot for the metrics when distance
between metrics is calculated as 1 - rank correlation, making
MDS insensitive to how the metrics are scaled. (Distances
based on 1 - rank correlation do not respect the
triangle inequality so this is not a proper metric space.)
The overall pattern is similar to that observed in the MDS
plots in Figure 3. CAL is at one end of the space far from
the other metrics. Cross-entropy is closest to RMS, though
not as close as in the other plots. Cross-entropy and RMS
have high rank correlation, but because cross-entropy has
lower rank correlation to most other metrics than RMS, it
is pushed far from RMS which is close to other metrics in
the MDS plot. APR and AUC are at the other end of the
space farthest from CAL. FSC is in the upper left side of
the space. ACC and RMS are near the center of the space.
SAR: A GENERAL PURPOSE METRIC
When applying supervised learning to data, a decision
must be made about what metric to train to and what metric
to use for model selection. Often the learning algorithm
dictates what metrics can be used for training, e.g. it is difficult
to train a neural net for metrics other than RMS or
MXE. But there usually is much more freedom when selecting
the metric to use for model selection, i.e. the metric used
to pick the best learning algorithm and the best parameters
for that algorithm.
If the correct metric for the problem is known, model selection
probably should be done using that metric even if
the learning algorithm cannot be trained to it. What should
be done when the correct metric is not known? The MDS
plots and correlation analysis suggest that RMS is remarkably
well correlated with the other measures, and thus might
serve as a good general purpose metric to use when a more
specific optimization criterion is not known.
We wondered if we could devise a new metric more centrally
located than RMS and with better correlation to the
other metrics. Rather than devise a completely new metric
, we tried averaging several of the well behaved metrics
into a new metric that might be more robust than each one
individually. SAR combines Squared error, Accuracy, and
ROC area into one measure: SAR = (ACC + AUC + (1 - RMS))/3.
We chose these metrics for SAR for three reasons:
1. we wanted to select one metric from each metric group:
the threshold metrics, the ordering metrics, and the
probability metrics
2. ACC, AUC, and RMS seemed to be the most popular
metric from each of these groups, respectively
3. these three metrics are well correlated to the other
metrics in their groups, and in the MDS plots lie closest
to the other metrics in their groups
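A minimal sketch of computing SAR from a model's predicted probabilities is shown below; it assumes scikit-learn helpers for accuracy (at a 0.5 threshold) and ROC area, with RMS computed directly on the [0, 1] predictions.

```python
import numpy as np
from sklearn.metrics import accuracy_score, roc_auc_score

def sar(y_true, p_pred, threshold=0.5):
    """SAR = (ACC + AUC + (1 - RMS)) / 3 for binary labels and [0, 1] predictions."""
    y_true = np.asarray(y_true)
    p_pred = np.asarray(p_pred, dtype=float)
    acc = accuracy_score(y_true, p_pred >= threshold)
    auc = roc_auc_score(y_true, p_pred)
    rms = np.sqrt(np.mean((p_pred - y_true) ** 2))
    return (acc + auc + (1.0 - rms)) / 3.0

# Hypothetical example
print(sar([0, 1, 1, 0, 1], [0.2, 0.8, 0.7, 0.4, 0.9]))
```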
As can be seen from the MDS plots and in the tables,
SAR behaves differently from ACC, AUC, and RMS alone.
In Table 3 SAR has higher mean rank correlation to other
metrics than any other metric. In the MDS plots, SAR tends
to be more consistently centrally located than other metrics.
And in Table 4 it is the metric that best reflects the ordering
by mean performance of the seven learning methods.
These results suggest that of the ten metrics we examined,
SAR is the metric that on average is most correlated
with the other metrics, both separately, and in groups. SAR
is even more representative than RMS (though RMS also is very good).

Table 4: Normalized scores for each learning algorithm by metric (average over seven problems)

model      acc     fsc     lft     auc     apr     bep     rms     mxe     cal     mean    sar
ann        0.9399  0.9486  0.9623  0.9722  0.9538  0.9632  0.9043  0.9009  0.9963  0.9491  0.9516
svm        0.9010  0.9515  0.9642  0.9688  0.9523  0.9635  0.9024  0.9041  0.9881  0.9440  0.9524
bag-dt     0.8796  0.8986  0.9450  0.9765  0.9577  0.9464  0.8763  0.9087  0.9800  0.9299  0.9470
bst-dt     0.9506  0.9443  0.9843  0.9866  0.9779  0.9858  0.6400  0.6427  0.9399  0.8947  0.9171
knn        0.8127  0.9042  0.9248  0.9481  0.9052  0.9252  0.7954  0.7754  0.9871  0.8865  0.9012
dt         0.6737  0.8621  0.8393  0.8897  0.8169  0.8403  0.6292  0.6748  0.9731  0.7999  0.8160
bst-stmp   0.7929  0.8265  0.8721  0.9291  0.8799  0.8724  0.3181  0.3013  0.9477  0.7489  0.6966

In an experiment where SAR was used for model
selection, SAR outperformed eight of the nine metrics in selecting
the models with the best overall performance, and tied with RMS.
We believe our results suggest that SAR is a robust combination
of three popular metrics that may be appropriate
when the correct metric to use is not known, though the
benefit of SAR over RMS is modest at best. Attempts to
make SAR better by optimizing the weights given to ACC,
AUC, and RMS in the SAR average did not significantly improve
SAR compared to equal weights for the three metrics.
We are very impressed at how well behaved RMS alone is
and are currently working to devise a better SAR-like metric
that yields more improvement over RMS alone.
PERFORMANCES BY METRIC
Table 4 shows the normalized performance of each learning
algorithm on the nine metrics. (CAL is scaled so that
the minimum observed CAL score is 0.0 and the maximum
observed CAL score is 1.0.) For each test problem we find
the best parameter settings for each learning algorithm and
compute its normalized score. Each entry in the table averages
these scores across the seven problems. The last two
columns are the mean normalized scores over the nine metrics
, and the SAR performance. Higher scores indicate better
performance. The models in the table are ordered by
mean overall performance. We have written a separate paper
to compare the performance of the learning methods to
each other on these metrics, but there are a few interesting
relationships between learning algorithms and metrics that
are worth discussing in the context of this paper.
Overall, the best performing models are neural nets, SVMs,
and bagged trees. Surprisingly, neural nets outperform all
other model types if one averages over the nine metrics.
ANNs appear to be excellent general purpose learning methods
. This is not to say that ANNs are the best learning
algorithm; they only win on RMS and CAL, but because
they rarely perform poorly on any problem or metric, they
have excellent overall performance.
The SVMs perform almost as well as ANNs. Note that
SVM predictions on (-∞, +∞) are not suitable for measures
like cross entropy, calibration, and squared error. SVMs do
well on these metrics because we use Platt's method [8] to
transform SVM predictions to calibrated probabilities. Like
neural nets, SVMs appear to be a safe, general purpose, high
performance learning method once their predictions have
been calibrated by a method such as Platt scaling.
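A sketch of this calibration step is shown below. Platt's method fits a sigmoid to held-out SVM decision values; here that fit is approximated with a one-dimensional logistic regression, which is an assumption about one convenient implementation rather than the exact procedure used in the experiments.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def fit_platt(decision_values, y_true):
    """Fit a sigmoid mapping raw SVM outputs on (-inf, +inf) to probabilities on [0, 1]."""
    lr = LogisticRegression(C=1e6)   # weak regularization to mimic a plain sigmoid fit
    lr.fit(np.asarray(decision_values, dtype=float).reshape(-1, 1), y_true)
    return lambda f: lr.predict_proba(np.asarray(f, dtype=float).reshape(-1, 1))[:, 1]

# Hypothetical decision values from an SVM on a validation set
calibrate = fit_platt([-2.1, -0.3, 0.4, 1.7, 2.5], [0, 0, 1, 1, 1])
print(calibrate([0.0, 2.0]))
```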
Although single decision trees perform poorly, bagged trees
perform nearly as well as neural nets and SVMs. Bagging
improves decision tree performance on all metrics, and yields
particularly large improvements on the probability metrics.
Like neural nets and SVMs, bagged trees appear to be a
safe, general purpose, high performance learning method.
Boosted trees outperform all other learning methods on
ACC, LFT, ROC, APR, and BEP. Boosting wins 2 of 3
threshold metrics and 3 of 3 rank metrics, but performs
poorly on the probability metrics: squared error, cross entropy
, and calibration. Maximum margin methods such as
boosted trees yield poorly calibrated probabilities. (SVMs
perform well on these because Platt scaling "undoes" the
maximum margin.) Overall, boosting wins 5 of the 6 metrics
for which it is well suited, and would easily be the top
performing learning method if we consider only the 6 threshold
and ordering metrics.
The KNN methods were not competitive with the better
algorithms, but might have done better with larger train sets.
Single decision trees also did not perform as well as most
other methods, probably because recursive partitioning runs
out of data quickly with 4k train sets, and because small
trees are not good at predicting probabilities [9]. We tested
many different kinds of decision trees, including smoothed
unpruned trees, and then picked the best, so the poor performance
of trees here is not due to any one tree type being
inferior, but because all of the many tree types we tested
did not perform as well as other methods.
Interestingly, boosting stump models does not perform
as well as boosting full decision trees. Boosted stumps do
outperform single trees on 5 of the 6 threshold and rank
metrics. Their last-place ranking below decision trees is due
to their extremely poor performance on the three probability
measures.
RELATED WORK
There is not a large literature comparing performance
metrics. The closest work to ours is by Flach [7]. In this
work Flach uses the ROC space to understand and compare
different metrics. He analyzes accuracy, precision, weighted
relative accuracy and several decision tree splitting criteria.
The STATLOG project [6] performed a large scale empirical
evaluation of a number of learning algorithms in 1995.
STATLOG compared the performance of the different algorithms
, and also did an analysis of how the predictions made
by the algorithms compared to each other. STATLOG, however
, did not compare performance using different metrics.
DISCUSSION AND CONCLUSIONS
Our analysis allows us to draw a variety of conclusions
which we summarize here. If the goal is to maximize accuracy
, but the model needs a continuous performance metric
(e.g. using backpropagation to train a neural net), it probably
is better to train the model using squared error instead
of cross entropy because squared error sits closer to accuracy
in metric space. This result is surprising since cross entropy
is the theoretically preferred loss function for binary classification
. We suspect cross entropy is not as robust as squared
error on real data sets because real data sometimes contains
class noise that cross entropy is very sensitive to.
Squared error is a remarkably robust performance metric
that has higher average correlation to the other metrics than
any other metric except SAR. Squared error appears to be
an excellent general purpose metric.
Many models achieve excellent performance on the ordering
metrics AUC, APR, and BEP without making predictions
that yield good probabilities. For example, the k-nearest
neighbor models with the best ROC performance
use values of K that are so large that most of the predictions
are close to p, the fraction of positives in the data.
This yields predictions that are poor when viewed as probabilities
, yet small differences between these predicted values
are sufficient to provide for good ordering.
As expected, maximum margin methods such as boosting
and SVMs yield excellent performance on metrics such as accuracy
for which they are designed. Surprisingly, however,
the maximum margin methods also yield excellent performance
on the ordering metrics. We had not expected that
maximizing distances to decision boundaries would provide a
good basis for ordering cases that fall far from those boundaries
.
Although boosted trees perform well on accuracy and ROC,
they perform poorly on probability metrics such as squared
error and cross entropy. This poor performance on probability
metrics is a consequence of boosting being a maximum
margin method. SVMs do not exhibit this problem
because we scale SVM predictions with Platt's method; linearly
scaling SVM predictions to [0, 1] does not work well.
Neural nets trained with backpropagation have excellent
overall performance because, unlike boosting, they perform
well on all metrics including the probability metrics RMS,
MXE, and CAL. We believe part of the reason why the neural
nets perform so well is that they were trained with backpropagation
on squared error, and as we have seen squared
error is an excellent metric.
The three ordering metrics, AUC, APR, and BEP, cluster
close in metric space and exhibit strong pairwise correlations
. These metrics clearly are similar to each other and
somewhat interchangeable. We originally grouped LFT with
the threshold metrics ACC and FSC, but the results suggest
that LFT behaves more like BEP, an ordering metric. We
now would group LFT with BEP in the ordering metrics
along with AUC and APR.
The metric space for the ten metrics has three or more
significant dimensions. The ten metrics do not all measure
the same thing. Different performance metrics yield different
tradeoffs that are appropriate in different settings. No
one metric does it all, and the metric optimized to or used
for model selection does matter. The SAR metric that combines
accuracy, ROC area, and squared error appears to be
a good, general purpose metric, but RMS is so good that
SAR may not provide much benefit over using RMS alone.
We hope that additional research in this area will enable us
to design better metrics, and will shed more light on which
metrics are most appropriate to use in different settings.
ACKNOWLEDGMENTS
Thanks to Geoff Crew and Alex Ksikes for help running
some of the experiments. Thanks to the creators of XGVIS
and XGOBI for the interactive MDS software used to generate
the MDS plots. Thanks to collaborators at Stanford
Linear Accelerator for the SLAC data, and to Tony Gualtieri
at NASA Goddard for help with the Indian Pines data.
REFERENCES
[1] C. Blake and C. Merz. UCI repository of machine
learning databases, 1998.
[2] M. DeGroot and S. Fienberg. The comparison and
evaluation of forecasters. Statistician, 32(1):12-22,
1982.
[3] P. Giudici. Applied Data Mining. John Wiley and
Sons, New York, 2003.
[4] A. Gualtieri, S. R. Chettri, R. Cromp, and
L. Johnson. Support vector machine classifiers as
applied to AVIRIS data. In Proc. Eighth JPL Airborne
Geoscience Workshop, 1999.
[5] T. Joachims. Making large-scale svm learning
practical. In Advances in Kernel Methods, 1999.
[6] R. King, C. Feng, and A. Shutherland. Statlog:
comparison of classification algorithms on large
real-world problems. Applied Artificial Intelligence,
9(3):259-287, May/June 1995.
[7] P. A. Flach. The geometry of ROC space: understanding
machine learning metrics through ROC isometrics. In
Proc. 20th International Conference on Machine
Learning (ICML'03), pages 194-201. AAAI Press,
January 2003.
[8] J. Platt. Probabilistic outputs for support vector
machines and comparison to regularized likelihood
methods. In A. Smola, P. Bartlett, B. Schoelkopf, and
D. Schuurmans, editors, Advances in Large Margin
Classifiers, pages 61-74, 1999.
[9] F. Provost and P. Domingos. Tree induction for
probability-based rankings. Machine Learning, 52(3),
2003.
[10] F. J. Provost and T. Fawcett. Analysis and
visualization of classifier performance: Comparison
under imprecise class and cost distributions. In
Knowledge Discovery and Data Mining, pages 43-48,
1997.
APPENDIX
A. PERFORMANCE METRICS
accuracy: probably the most widely used performance metric
in Machine Learning. It is defined as the proportion
of correct predictions the classifier makes relative
to the size of the dataset. If a classifier has continuous
outputs (e.g. neural nets), a threshold is set and everything
above this threshold is predicted to be a positive.
root-mean-squared-error (RMSE): widely used in regression,
it measures how much predictions deviate from the true
targets. (RMSE is applicable to binary classification
settings where the classifier outputs predictions on [0, 1]
that are compared with the true target labels on {0, 1}.)
RMSE is defined as:

RMSE = sqrt( (1/N) * sum( (Pred(C) - True(C))^2 ) )    (1)

mean cross entropy (MXE): is used in the probabilistic
setting when interested in predicting the probability
that an example is positive (1). It can be proven that
in this setting minimizing the cross entropy gives the
maximum likelihood hypothesis. mean cross entropy is
defined as:
MXE = -(1/N) * sum( True(C) * ln(Pred(C)) + (1 - True(C)) * ln(1 - Pred(C)) )    (2)

(The assumptions are that Pred(C) is in [0, 1] and True(C) is in {0, 1}.)
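A minimal sketch of equations (1) and (2) for binary targets and [0, 1] predictions (the clipping constant is an added assumption to avoid log(0)):

```python
import numpy as np

def rmse(y_true, p_pred):
    """Root-mean-squared error between [0, 1] predictions and {0, 1} labels."""
    y_true, p_pred = np.asarray(y_true, float), np.asarray(p_pred, float)
    return np.sqrt(np.mean((p_pred - y_true) ** 2))

def mxe(y_true, p_pred, eps=1e-12):
    """Mean cross entropy; predictions are clipped away from 0 and 1."""
    y_true = np.asarray(y_true, float)
    p_pred = np.clip(np.asarray(p_pred, float), eps, 1 - eps)
    return -np.mean(y_true * np.log(p_pred) + (1 - y_true) * np.log(1 - p_pred))

print(rmse([0, 1, 1], [0.1, 0.8, 0.6]), mxe([0, 1, 1], [0.1, 0.8, 0.6]))
```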
receiver operating characteristic (ROC): has its roots in
WWII in the early days of radar where it was difficult
to distinguish between true positives and false positives.
ROC is a plot of sensitivity vs. (1 - specificity)
for all possible thresholds. Sensitivity is defined as
P(Pred = positive | True = positive) and is approximated
by the fraction of true positives that are predicted
as positive (this is the same as recall). Specificity
is P(Pred = negative | True = negative). It is approximated
by the fraction of true negatives predicted as
negatives. AUC, the area under the ROC curve, is
used as a summary statistic. ROC has a number of
nice properties that make it more principled than similar
measures such as average precision. AUC is widely
used in fields such as medicine, and recently has become
more popular in the Machine Learning community.
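AUC can also be computed without explicitly tracing the ROC curve by using its rank-sum (Mann-Whitney) formulation; the sketch below takes that route and handles ties with average ranks, which is an implementation choice rather than something prescribed here.

```python
import numpy as np
from scipy.stats import rankdata

def auc_rank(y_true, p_pred):
    """Area under the ROC curve via the Mann-Whitney U statistic."""
    y_true = np.asarray(y_true)
    ranks = rankdata(p_pred)                  # average ranks for ties
    n_pos = int(np.sum(y_true == 1))
    n_neg = len(y_true) - n_pos
    u = ranks[y_true == 1].sum() - n_pos * (n_pos + 1) / 2.0
    return u / (n_pos * n_neg)

print(auc_rank([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8]))   # 0.75
```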
lift: often used in marketing analysis, Lift measures how
much better a classifier is at predicting positives than a
baseline classifier that randomly predicts positives (at
the same rate observed for positives in the data). The
definition is:
LIFT = (% of true positives above the threshold) / (% of dataset above the threshold)    (3)
Usually the threshold is set so that a fixed percentage
of the dataset is classified as positive. For example,
suppose a marketing agent wants to send advertising to
potential clients, but can only afford to send ads to 10%
of the population. A classifier is trained to predict how
likely a client is to respond to the advertisement, and
the ads are sent to the 10% of the population predicted
most likely to respond. A classifier with optimal lift
will get as many clients as possible that will respond to
the advertisement in this set.
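A sketch of equation (3), with the threshold chosen so that a fixed fraction of cases (25% in this paper) is predicted positive; the helper below is hypothetical, not code from the experiments.

```python
import numpy as np

def lift_at(y_true, p_pred, fraction=0.25):
    """Lift when the top `fraction` of cases (by predicted value) are called positive."""
    y_true, p_pred = np.asarray(y_true), np.asarray(p_pred, dtype=float)
    k = max(1, int(round(fraction * len(y_true))))
    top = np.argsort(-p_pred)[:k]                      # indices predicted most positive
    pct_true_pos_above = y_true[top].sum() / y_true.sum()
    pct_dataset_above = k / len(y_true)
    return pct_true_pos_above / pct_dataset_above

# Hypothetical example: 8 cases, 4 positives, top 25% = 2 cases
print(lift_at([1, 0, 1, 0, 1, 0, 1, 0], [0.9, 0.8, 0.7, 0.6, 0.5, 0.4, 0.3, 0.2]))
```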
precision and recall : These measures are widely used in
Information Retrieval. Precision is the fraction of examples
predicted as positive that are actually positive.
Recall is the fraction of the true positives that are predicted
as positives. These measures are trivially maxi-mized
by not predicting anything, or predicting everything
, respectively, as positive. Because of this these
measures often are used together. There are different
ways to combine these measures as described by the
next 4 metrics.
precision-recall F-score: for a given threshold, the F-score
is the harmonic mean of the precision and recall at that
threshold.
precision at a recall level: as the name suggests, set the
threshold such that you have a given recall and the
precision for this threshold is computed.
precision-recall break-even point: is defined as the precision
at the point (threshold value) where precision and recall
are equal.
average precision: usually is computed as the average of
the precisions at eleven evenly spaced recall levels.
CAL is based on reliability diagrams [2]. It is calculated
as follows: order all cases by their predicted value, and
put cases 1-100 in the same bin. Calculate the percentage
of these cases that are true positives. This
approximates the true probability that these cases are
positive. Then calculate the mean prediction for these
cases. The absolute value of the difference between the
observed frequency and the mean prediction is the calibration
error for this bin. Now take cases 2-101, 3-102,
.... and compute the errors in the same way for each of
these bins. CAL is the mean of these binned calibration
errors.
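A sketch of this sliding-bin calculation (window size 100, as described above; the simulated data is only for illustration):

```python
import numpy as np

def cal(y_true, p_pred, window=100):
    """Calibration error from sliding bins of `window` consecutive cases
    after sorting by predicted value (cases 1-100, 2-101, ...)."""
    order = np.argsort(p_pred)
    y = np.asarray(y_true)[order]
    p = np.asarray(p_pred, dtype=float)[order]
    errors = [abs(p[i:i + window].mean() - y[i:i + window].mean())
              for i in range(len(y) - window + 1)]
    return float(np.mean(errors))

# Hypothetical example on 500 simulated, roughly calibrated predictions
rng = np.random.default_rng(0)
p = rng.random(500)
y = (rng.random(500) < p).astype(int)
print(cal(y, p))
```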
B. PARAMETER SETTINGS
We use the following parameter settings and algorithm
variations for the seven learning methods:
KNN: we use 26 values of K ranging from K = 1 to
K = |trainset|. We use KNN with Euclidean distance and
Euclidean distance weighted by gain ratio. We also use distance
weighted KNN, and locally weighted averaging. The
kernel widths for locally weighted averaging vary from 2^0 to
2^10 times the minimum distance between any two points in
the train set.
ANN: we train nets with gradient descent backprop and
vary the number of hidden units {1, 2, 4, 8, 32, 128} and
the momentum {0, 0.2, 0.5, 0.9}. We don't use validation
sets to do weight decay or early stopping. Instead, for each
performance metric, we examine the nets at many different
epochs.
DT: we vary the splitting criterion, pruning options, and
smoothing (Laplacian or Bayesian smoothing). We use all
of the tree models in Buntine's IND package: Bayes, ID3,
CART, CART0, C4, MML, and SMML. We also generate
trees of type C44 (C4 with no pruning), C44BS (C44 with
Bayesian smoothing), and MMLLS (MML with Laplacian
smoothing). See [9] for a description of C44.
BAG-DT: we bag at least 25 trees of each type. With
BST-DT we boost each tree type. Boosting can overfit,
so we consider boosted DTs after {2, 4, 8, 16, 32, 64, 128,
256, 512, 1024, 2048} steps of boosting. With BST-STMP
we use stumps (single level decision trees) with 5 different
splitting criteria, each boosted {2, 4, 8, 16, 32, 64, 128, 256,
512, 1024, 2048, 4096, 8192} steps.
SVMs: we use most kernels in SVMLight [5] {linear, polynomial
degree 2 & 3, radial with width {0.001, 0.005, 0.01,
0.05, 0.1, 0.5, 1, 2}} and vary the regularization parameter
C by factors of ten from 10^-7 to 10^3. The output range
of SVMs is (-∞, +∞) instead of [0, 1]. To make the SVM
predictions compatible with other models, we use Platt's
method to convert SVM outputs to probabilities by fitting
them to a sigmoid [8]. Without scaling, SVMs would have
poor RMS and it would not be possible to calculate MXE
and CAL.
| Lift;Precision;performance metric;ROC;Supervised Learning;supervised learning;squared error;SVMs;pairwise;Recall;algorithmns;Cross Entropy;ordering metric;Euclidean distance;Performance Evaluation;standard deviation;backpropagation;Metrics |
64 | Database Security Curriculum in InfoSec Program | Database Security course is an important part of the InfoSec curriculum. In many institutions this is not taught as an independent course. Parts of the contents presented in this paper are usually incorporated in other courses such as Network Security. The importance of database security concepts stems from the fact that a compromise of data at rest could expose an organization to a greater security threat than otherwise. Database vulnerabilities exposed recently in several high profile incidents would be a good reason to dedicate a full course to this important topic. In this paper we present key topics such as technologies for database protection, access control, multilevel security, database vulnerabilities and defenses, privacy and legal issues, impact of policies and some well known secure database models. | INTRODUCTION
Information Security curriculum is receiving greater attention
from many institutions, thanks to the standardization efforts by
the Committee on National Security Systems (CNSS). The
CNSS members come from the National Security Agency,
Department of Defense, and the Department of Homeland
Security, among others. The CNSS standardization efforts are
based on the Presidential Decision Directive [24] issued in
1998 for training professionals to protect the nation's critical
infrastructure. To achieve this goal, CNSS has developed five
major standards known as the National Security
Telecommunications Information Systems Security Instruction
(NSTISSI). The NSTISSI standards are numbered 4011, 4012,
4013, 4014 and 4015 [8]. Additional standards under this
sequence are in the offing as well. The relevance of these
standards is that they include a vast number of topics that cover
the entire gamut of information assurance and database security
topics are included in many of these standards. First, we will
briefly outline the main content of each of these standards and
then move onto the main content of this paper.
The 4011 standard covers the information security foundation
topics such as wired and wireless communications basics,
operations security, transmission security, information security
from a policy perspective, cryptography, key management, legal
aspects of security, contingency planning and disaster recovery,
risk management, trust, auditing, and monitoring. At present,
coverage of topics mentioned in this standard is considered
essential by CNSS in every InfoSec curriculum. The 4012
standard is primarily aimed at training Designated Approving
Authority personnel. A quick look at the following topics would
show the relationship of these standards vis-à-vis database
security. The primary topics of this standard include: liabilities,
legal issues, security policy, sensitive data access policy, threats,
vulnerabilities, incident response, life cycle management,
configuration management, and contingency management. The
purpose of 4013 standard is to provide a minimum set of topics
necessary for certifying Systems Administrators in Information
Systems Security. Some of the topics in this category include:
development and maintenance of security policies and
procedures, education, training and awareness of such policies,
development of countermeasures for known attacks as well as
development of safeguards. Also, configuration management is
an important part of 4013 standard. The standard for training
Information Systems Security Officers is 4014. This standard
covers topics such as facilities planning, business continuity, and
password management, access control policies, laws and
regulations related to information security, privacy, encryption
standards, intrusion detection, audit tools, and security reviews.
The last standard currently in place in this series is numbered
4015. This standard is for training System Certifiers. Among
the main topics here are: defining roles and responsibilities for
personnel, certification of systems, identifying process
boundaries, integration, security engineering, and applications
security. These five standards have been in place since 1994
and are constantly getting updated by CNSS.
INFOSEC FOUNDATION COURSES
Traditionally, the following courses are considered as a set of
foundation courses: Network Security, Information Security, and
Cryptography. Usually these courses are augmented by
additional courses such as Operating System Security, Database
Security, Secure E-commerce, and Security Management. In
our curriculum at the University of Louisville we are offering
the three foundation courses listed above and the Database
Security course. The main purpose of this paper is to identify
several topics that could be included in a Database Security
course. In the last quarter of 2004 and the first quarter of 2005,
several incidents of theft or loss of data from databases of large
organizations have brought to light the vulnerabilities in
managing database systems. Every organization depends
heavily on databases, both large and small, in order to manage
inventory, human resources, and business functions on a day to
day basis. Therefore, in order to mitigate risk, every
organization must take adequate steps to protect the data that
they hold. Issues related to technology as well as policies are
important for the protection of data. Such topics form the core
of this Database Security course, which we will discuss in
greater detail in the remaining sections.
INFOSEC AT U OF LOUISVILLE
At the University of Louisville (U of L), InfoSec courses are
offered in two departments. The Computer Information Systems
(CIS) department in the College of Business offers an
undergraduate concentration in InfoSec [36]. The Computer
Science department in the college of Engineering offers graduate
courses in InfoSec at the masters and doctoral levels. Database
security course is offered as the second course in the database sequence, the
first course being the standard database design and management
course. Students taking the database security course are either
juniors or seniors and are expected to have experience with one
of the mainframe commercial databases such as Oracle or SQL
Server 2000. The major course objectives were for students to:
- Learn the fundamental concepts in database security
- Understand how access controls work in a database
- Learn to develop and manage secure database architectures
- Be familiar with the laws governing computer privacy
- Understand the alternatives to encrypting data at rest
- Understand the security implementations and vulnerabilities in commercial database systems such as Oracle and SQL Server 2000
- Learn security audit methods
- Learn about multi-level database security
The course content was covered using material from many
sources, primarily research papers. The Database Security book
by Castano, et al is an out of print book as it was originally
developed in 1994. The Database Security and Auditing book
by Afyouni was printed in April 2005 and so was not available
when the semester started. In the course we used two SQL
Server Security books which were available in print and one
Oracle Security book that was available in electronic form
through Safari books. These books contributed to reinforcing
concepts discussed by testing several attack methods. Another
special feature of teaching the Database Security course was the
availability of a dedicated InfoSec Lab. We will discuss the
contribution of the InfoSec Lab later in this paper.
The initial emphasis in the course was on incorporating database
security concepts during the design phase. The primary reason
for this emphasis was on the need for integration of various
components of an information system. Since database security
is heavily dependent on network security, operating system
security, physical security and other applications security such
an integrated approach is essential for an appreciation of design
decisions. The course content was arranged in such a way that
both technology and policy aspects were equally emphasized.
This emphasis was motivated by the fact that there are several
legal requirements to be met and people's privacy must be
protected. A compromised database endangers the privacy of
individuals by the release of personal information such as social
security number, date of birth, credit card numbers, and health
history.
An important part of database security is access control. There
are several types of access controls that are available for the
database administrator to work with. More importantly,
choosing the proper type of access control enables the allocation
and revocation of privileges to individuals for various
components of the database. The three types of access controls
discussed were Mandatory Access Control (MAC),
Discretionary Access Control (DAC) and Role-based Access
Control (RAC). A simple example of MAC would be that of
using a suitable password for database access. However,
practical uses of databases always require overriding a default
access privilege for a specific event. In such instances one uses
Discretionary Access Control. Since database privileges
sometimes have the inheritance property it becomes essential to
understand how the particular commercial system would handle
DAC. The most important of access controls is Role-based
Access Control. Discussion of this topic showed the various
nuances involved in assigning access privileges in a timely
manner without hindering productivity and at the same time
providing security. All necessary database accesses could be
associated with a specific role that an individual performs in an
organization. Since roles change often and consequently access
needs change as well, it is much easier to manage access control
by associating privileges with roles. It is worth noting that these
three types of access controls are not mutually exclusive but
work in combinations that suit the organizational needs.
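The role-based idea can be illustrated with a small, DBMS-independent sketch (the roles, tables, and privileges below are hypothetical): privileges attach to roles, users attach to roles, and an access check resolves through the roles, so changing a user's role assignment changes their effective privileges in one step.

```python
# Hypothetical role-based access control check: privileges attach to roles,
# users attach to roles, and revoking or changing a role updates access at once.
ROLE_PRIVILEGES = {
    "hr_clerk": {("employees", "SELECT"), ("employees", "UPDATE")},
    "auditor":  {("employees", "SELECT"), ("audit_log", "SELECT")},
    "dba":      {("employees", "SELECT"), ("employees", "UPDATE"),
                 ("audit_log", "SELECT"), ("audit_log", "DELETE")},
}
USER_ROLES = {"alice": {"hr_clerk"}, "bob": {"auditor"}}

def is_allowed(user, table, action):
    """Grant access only if some role assigned to the user carries the privilege."""
    return any((table, action) in ROLE_PRIVILEGES.get(role, set())
               for role in USER_ROLES.get(user, set()))

print(is_allowed("alice", "employees", "UPDATE"))   # True
print(is_allowed("bob", "employees", "UPDATE"))     # False
```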
Another important aspect of database security is authentication.
Since databases provide many views of the data, suitable
privileges for the ability to drill down data requires appropriate
authentication. The authentication aspect of database access
supports the confidentiality in the CIA (Confidentiality-Integrity-Availability)
triangle that is basic to information
security. Authentication models discussed include single-factor,
two-factor, and three-factor authentication and the attendant
performance issues.
Among the many topics covered in this course, one of the
important ones relates to Multi-Level Secure (MLS) databases
[12, 14]. Commercial databases such as Oracle or SQL Server
do not handle the MLS aspects in their software. However, it is
an important aspect to be aware of. For example, in an
organization not everyone has the access rights to view
confidential information. Database queries are designed to pull
all data that satisfy the query condition. In the MLS
environment, the query condition would not necessarily reflect
the security level of the person making the query. Since the
security level authorization of the individual running the query
is known from the login time, the MLS database is supposed to
show all data that is cleared for that level of user. Usually the
security levels could be unclassified, confidential, secret, or top
secret. Not all fields of data in a record would need to be
carrying a classification. Those sensitive data that have an
associated security classification should be able to serve the
needs of users with the appropriate security clearance and hide
the data from others. A major problem to overcome in this
context is known as polyinstantiation [13]. This concept refers
to the fact that if persons with a lower clearance level have
reason to suspect the existence of a record with hidden values,
then they may be able to infer the hidden information. Polyinstantiation could be
addressed to a large extent by allowing certain redundancies in a
database.
Another common problem with MLS databases is the presence
of inference channel. Inference channel leaks information about
data classified at a higher level to users with lower level
clearances [19]. Security policies also play an important role in
protecting against inference channel leaks. A related approach
to this problem is to develop classification constraints on data.
These data classifications are then used at query time and then
the appropriate level of the constraint is applied to the resulting
data before it is presented to the user.
In this context we discussed the security architecture for
databases. This was broadly classified as those systems that use
a Trusted Computing Base (TCB) that is external to the DBMS
and those systems that manage access to data through the DBMS
[22]. In the TCB architecture, the access controls were usually
handled by the operating system or the network. In the DBMS
control architecture, security design involved multi-factor
authentication as well as security clearance and role-based
access. As part of the secure architecture topic, we studied the
Bell-LaPadula Model and the Biba Model [5]. Then we took a
detailed look at the Seaview Model [17]. This is the first paper
that studied in detail the security needs for database systems that
contained data with various levels of security clearances. The
major contribution of this paper was the application-independent
nature of data integrity with particular reference to entity
integrity, referential integrity and polyinstantiation integrity.
We studied additional secure architecture topics with particular
reference to commercial database systems. These topics include
input validation, credential handling and encryption.
Encryption is a major topic in itself in the security context.
Usually encryption is an important tool for data in transit.
However, the recent spate of incidents involving lost or stolen
data [37] shows the need for protecting data at rest from falling
into the wrong hands. One useful tool in this regard is
encryption. We studied the impact of encrypted data with
respect to performance. Usually, encryption of sensitive data at
rest is a desirable feature provided the access to such data is not
frequent. On the other hand, for data that is frequently used the
better alternative to encryption would be to partially secure
storage [31, 33] whereby the data management is handled by an
independent system that works outside the operating system
control. This technique protects the data from hackers as the
access control is under an independent system that is not
manipulated by the operating system, where most of the
vulnerabilities are exploited. In this context we studied the
FARSITE model that discusses the reliable storage aspects in an
incompletely trusted environment such as the Internet [2]. This
research, performed at Microsoft, shows how "to harness the
collective resources of loosely coupled, insecure, and unreliable
machines to provide logically centralized, secure, and reliable
file-storage service."
The next major topic covered was security audit for a database.
The sources used for this topic were Jajodia [15], Andrews [4],
and material from the Congressional Hearing reference provided
in the References section. Audit involves different components
such as login, access and storage. Commercial database systems
such as Oracle and SQL Server facilitate login auditing in a
simple way. For example, in SQL Server the user could set the
login audit level to any one of four levels. Level 0 does not log
any information about the logins, level 1 logs information about
successful logins only, level 2 logs information about
unsuccessful logins only and level 3 logs information about all
attempted logins. This aspect of setting the appropriate level is
related to the security policy of the organization. An
organization might feel that they need to know only those people
who attempted a login and failed as the ones who successfully
logged in are considered authorized users. This is not a good
assumption when it comes to computer forensics where one is
trying to reconstruct an event that happened in the past.
Consequently, organizations must consider the impact of their
policies when it comes to information security. Auditing is also
mandated by certain accreditation bodies. In order to satisfy
certain data security requirements, some organizations might
have to secure C2 level security rating from the National
Computer Security Center (NCSC). The NCSC certification is
measured according to the Department of Defense Trusted
Computer System Evaluation Criteria [4]. We concluded the
course with an analysis of database protection, copyright and
privacy aspects both from a policy and legal perspective. First,
we discussed the Congressional hearing on "Database and
Collections of Information Misappropriation Act of 2003."
This hearing showed the limitations of Copyright laws and how
U.S. courts have interpreted the laws that protect privacy. We
then studied the future of the database protection in U.S. and the
laws to help in this regard. U.S. court rulings, including that of
the Supreme Court, have shown that "sweat of the brow"
argument does not offer protection for databases, rather the
demonstration of some form of "originality" in data collection
and dissemination is essential for database ownership. A court
ruling in 2001 in United Kingdom in the case of the British
Horseracing Board (BHB) has once again brought into focus the
sweat of the brow argument. The U.K. court upheld the BHB's
claim of ownership of data pertaining to horses and jockeys
[10]. It remains to be seen how the U.S. courts would consider
challenges to the sweat of the brow argument when it comes to
protecting large databases from competitors.
EVALUATION TOOLS
In this course we used several different types of evaluation tools.
Students were required to write three individual research reports
on topics provided in class. The topics were:
1. Buffer overflows
2. Security audit
3. Sarbanes-Oxley Act and its impact on Database Security
On the testing side, we used a closed book, closed notes,
midterm and final examinations. All questions were essay type.
The students had access to a dedicated InfoSec lab where they
could perform several different types of hands-on testing for
vulnerabilities [32]. The InfoSec Lab has 16 workstations on a
LAN connected to a Windows 2000 server. First the SQL
Server 2000 was installed on the server. Two stand-alone
computers that were not connected to the network were also
provided to the students for testing. The first assignment
provided a chance for the students to install SQL Server 2000
and choose appropriate security settings for various components
of the database. The students then created new SQL Server
accounts on the stand-alone computers and granted suitable
privileges first and then tested the DENY and REVOKE features
as well. The students had to install the latest SQL Server
patches on the stand-alone computers and test for vulnerabilities.
The dedicated lab environment provided an excellent facility for
us to allow students to understand how a hacker would gain
routine information about the database system. First the SQL
Server 2000 was left unpatched and the students used the SQL
Ping2 utility to gather information about the database system.
This showed the port 1433 in use. Then the SQL Server 2000
was patched with version 3a and the students tried the same
SQL Ping2 utility, this time finding a different type of
information about the SQL Server. Next, the SQL Server was
put in hide mode and the students found out this piece of
information by noticing that the listening port had changed to
2433. We were able to accomplish this testing by making
changes to the SQL Server every two days giving a short time
between changes for testing. This was done as assignment 2.
The third assignment involved testing Bulk Copy / Bulk Insert
features of SQL Server. The fourth assignment involved a
buffer overflow attack. A sample code was given to the students
to try the buffer overflow attack on the patched server. The
patched server foiled the attack. The students were then asked
to test the same buffer overflow attack on the stand-alone
computers where patches were not applied. The last assignment
involved SQL Injection attack. The students were given a series
of codes for the SQL Injection attack testing. The first part
involved logging into a SQL Server database system knowing
the userid of the user but not the password. The second part
involved not knowing the userid or the password. The third part
involved creating a new user and then exploiting the system.
The fourth part involved finding the password of the sa account.
The fifth part involved dropping the SQL Server from the server
and shutting down the SQL Server via SQL Injection attack.
The students were given the challenge in the fourth part of the
SQL Injection attack testing to find out the strong password used
on the server, which had all the latest patches both for the SQL
Server part and the operating system part. This required more
work beyond the SQL knowledge. One of the students
succeeded in finding out the server password, not just the sa
password, which was much easier to get using SQL Injection.
CONCLUSION
Overall, the students enjoyed the content of the course that
involved learning many database security concepts and the
ability to test many aspects of SQL Server installation, suitable
settings, detect vulnerabilities, develop simple countermeasures
and have the ability to use the logs to detect intrusion.
ACKNOWLEDGEMENTS
This research was supported in part by the NSF grant DUE-0416900
and the Kentucky Council on Postsecondary Education
grant GB040955.
REFERENCES
[1]
Abrams, M. D., Jajodia, S., Podell, H. J. 1995.
Information Security: An integrated collection of essays,
IEEE Computer Society Press, CA.
[2]
Adya, A., Bolosky, W.J., Castro, M., Cermak, G.,
Chaiken, R., Douceur, J., Howell, J., Lorch, J.R.,
Theimer, M. and Wattenhofer, R.P., 2002. "FARSITE:
Federated,
Available, and Reliable Storage for an
Incompletely Trusted Environment," Proceedings of the
5th Symposium on Operating Systems Design and
Implementation, Boston, MA, December, 1-14.
[3]
Afyouni, H. A. 2006. Database Security and Auditing,
Course Technology, MA.
[4]
Andrews, C., Litchfield, D., Grindlay, B. 2003. SQL
Server
Security
Fundamentals, McGraw-Hill/Osborne,
NY.
[5]
Castano, S., Fugini, M., Martella, G., Samarati, P. 1994.
Database Security, ACM Press Books (Diane Publishing
Co.), NY.
[6]
Cerrudo, C. "Manipulating Microsoft SQL Server Using
SQL Injection"
http://database.ittoolbox.com/browse.asp?c=DBPeerPubl
ishing&r=%2Fpub%2FSG090202%2Epdf,
Accessed on 07/25/2005
[7] CERT
http://www.cert.org, Accessed on 05/20/2005
[8]
CNSS Stds. "National IA Education Standards,"
http://www.nsa.gov/ia/academia/cnsstesstandards.cfm
[9]
Congressional Hearing, 2003. "Database and Collections
of Information Misappropriation Act of 2003,"
September.
http://www.copyright.gov/docs/regstat092303.html,
Accessed on 04/10/2005
[10] Duke University, 2001. "The Future of Database
Protection in U.S. Copyright Law"
http://www.law.duke.edu/journals/dltr/articles/2001dltr0
017.html, Accessed on 04/15/2005
[11] Hinke, T., 1995. "Multilevel Secure Database
Management Prototypes," in Information Security: An
Integrated Collection of Essays, 1st edition, Edited by
Abrams, M.D., Jajodia, S.G., Podell, H.J., Essay 23,
IEEE Computer Society Press, CA, 542-569.
[12] Jajodia, S. and Sandhu, R., 1995. "Toward a Multilevel
Secure Relational Model," in Information Security: An
Integrated Collection of Essays, 1 edition
st
, Edited by
Abrams, M.D., Jajodia, S.G., Podell, H.J., Essay 20.
IEEE Computer Society Press, CA, 460-492.
[13] Jajodia, S., Sandhu, R. and Blaustein, B.T., 1995.
"Solutions to the Polyinstantiation Problem" in
Information Security: An Integrated Collection of
Essays,
1 edition
st
, Edited by Abrams, M.D., Jajodia,
S.G., Podell, H.J., Essay 21. IEEE Computer Society
Press, CA, 493-529.
[14] Jajodia, S. and Meadows, C. 1995. "Inference problems
in multilevel secure database management systems," in
Information Security: An Integrated Collection of
Essays,
1 edition
st
, Edited by Abrams, M.D., Jajodia,
S.G., Podell, H.J., Essay 24. IEEE Computer Society
Press, CA, 570-584.
[15] Jajodia, S., Gadia, S.K., and Bhargava, G., 1995.
"Logical Design of Audit Information in Relational
Databases" in Information Security: An Integrated
Collection of Essays, 1 edition
st
, Edited by Abrams,
M.D., Jajodia, S.G., Podell, H.J., Essay 25. IEEE
Computer Society Press, CA, 585-595.
[16] Lewis, M. 2004. "SQL Server Security Distilled," 2
nd
edition,
Apress,
CA.
[17] Lunt, T., Denning, D. E., Schell, R. R., Heckman, M.
and Shockley, W. R. 1990. "The Seaview Security
Model," IEEE Transactions on Software Engineering, 16
(#6), June, 593 607.
[18] Mao, W. 2004. "Modern Cryptography," Prentice-Hall,
NJ.
[19] Meadows, C. and Jajodia, S., 1995. "Integrity in
Multilevel Secure Database Management Systems," in
Information Security: An Integrated Collection of
Essays,
1 edition
st
, Edited by Abrams, M.D., Jajodia,
S.G., Podell, H.J., Essay 22. IEEE Computer Society
Press, CA, 530-541.
[20] Nevins, S.C., 2003. "Database security breaches on the
rise"
http://www.snwonline.com/evaluate/database_
security_03-31-03.asp?article_id=224,
Accessed on 04/15/2005.
[21] Nessus
http://www.nessus.org. Accessed on 05/19/2005.
[22] Notargiacomo,
L.
"Architectures for MLS Database
Management Systems" in Information Security: An
Integrated Collection of Essays, 1 edition
st
, Edited by
Abrams, M.D., Jajodia, S.G., Podell, H.J., Essay 19.
IEEE Computer Society Press, CA.
[23] O'Reilly Publishers. Developing a Database Security
Plan
http://www.oreilly.com/catalog/orasec/chapter/ch07.html
Accessed on 03/10/2005.
[24] PDD63,
1998.
http://www.fas.org/irp/offdocs/pdd/pdd63.htm,
Accessed on 05/22/2005.
[25] Pernul, Gunther, 1994. "Database Security" chapter in
`Advances in Computers,' Edited by M.C.Yovits,
vol. 38, Academic Press, NY.
[26] Rob, P. and Coronel, C. 2004. "Design, Implementation
and
Management," 6
th
Edn., Course Technology, MA.
[27] Sandhu, R. and Samarati, P., 1994. "Access Control:
Principles and Practice," IEEE Communications
Magazine, vol. 32, September, 40-48.
[28] Sandhu, R., Coyne, E.J., Feinstein, H. L. and Youman,
C.E., 1996. "Role-based Access Control Models," IEEE
Computer, vol. 29, February, 38-47.
[29] SANS http://www.sans.org, Accessed on 05/19/2005.
[30] Solworth, J. A. 2004. "Integrating Discretionary and
Mandatory Access Controls"
http://parsys.cs.uic.edu/~solworth/integratingMacDac.pd
f. Accessed on 04/15/2005.
[31] Son, S. H., Chaney, C., and Thomlinson, N. P., "Partial
Security Policies to Support Timeliness in Secure Real
time Databases," 1998. Proceedings of the IEEE
Symposium on Security and Privacy, May 3-6,
136 147.
[32] Srinivasan, S. 2005. "Design and Development of an
Information Security Laboratory," Proceedings of the 9
th
Annual Colloquium on Information System Security
Education, Atlanta, GA, June 6-9.
[33] Strunk, J.D., Goodson, G.R., Scheinholtz, M.L., Soules,
C.A.N. and Ganger, G.R., 2003. "Self-Securing Storage:
Protecting Data in Compromised Systems," Foundations
of Intrusion Tolerant Systems, 195 209.
[34] Theriault, M. and Heney, W. 1998. "Oracle Security,"
O'Reilly Publishers, IN.
[35] Tomson, B., 2004. "SQL Server 2000 Security Best
Practices"
http://wp.bitpipe.com/resource/org_1078177630_947/SQ
Lserver2000.pdf. Accessed on 03/20/2005.
[36] UofL InfoSec, 2005. "InfoSec Program website,"
http://www.louisville.edu/infosec
[37] Wall Street Journal, 2005. "ChoicePoint struggles to
gauge how much information fell into wrong hands,"
May 3, Page 1.
| inference channel;access control;buffer overflows;CIA;privacy;polyinstantiation;database;inference;Database;encryption;multilevel security;authentication;policy;security
65 | dBBlue: Low Diameter and Self-routing Bluetooth Scatternet | This paper addresses the problem of scatternet formation for single-hop Bluetooth based ad hoc networks, with minimal communication overhead. We adopt the well-known structure de Bruijn graph to form the backbone of Bluetooth scatternet, hereafter called dBBlue, such that every master node has at most seven slaves, every slave node is in at most two piconets, and no node assumes both master and slave roles. Our structure dBBlue also enjoys a nice routing property: the diameter of the graph is O(log n) and we can find a path with at most O(log n) hops for every pair of nodes without any routing table . Moreover, the congestion of every node is at most O(log n/n), assuming that a unit of total traffic demand is equally distributed among all pair of nodes. We discuss in detail a vigorous method to locally update the structure dBBlue using at most O(log n) communications when a node joins or leaves the network. In most cases, the cost of updating the scatternet is actually O(1) since a node can join or leave without affecting the remaining scatternet. The number of nodes affected when a node joins or leaves the network is always bounded from above by a constant. To facilitate self-routing and easy updating, we design a scalable MAC assigning mechanism for piconet, which guarantees the packet delivery during scatternet updating. The dBBlue scatternet can be constructed incrementally when the nodes join the network one by one. Previously no method can guarantee all these properties although some methods can achieve some of the properties. | INTRODUCTION
Bluetooth [8] is a promising new wireless technology, which enables
portable devices to form short-range wireless ad hoc networks
based on a frequency hopping physical layer. Bluetooth ad-hoc
networking presents some technical challenges, such as scheduling
, network forming and routing. User mobility poses additional
challenges for connection rerouting and QoS services. It has been
widely predicted that Bluetooth will be the major technology for
short range wireless networks and wireless personal area networks.
This paper deals with the problem of building ad hoc networks using
Bluetooth technology.
According to the Bluetooth standard, when two Bluetooth devices
come into each other's communication range, one of them
assumes the role of master of the communication and the other becomes
the slave. This simple one hop network is called a piconet,
and may include more slaves. The network topology resulted by the
connection of piconets is called a scatternet. There is no limit on
the maximum number of slaves connected to one master, although
the number of active slaves at one time cannot exceed . If a master
node has more than
slaves, some slaves must be parked. To
communicate with a parked slave, a master has to unpark it, thus
possibly parking another active slave instead. The standard also
allows multiple roles for the same device. A node can be master
in one piconet and a slave in one or more other piconets. However,
one node can be active only in one piconet. To operate as a member
of another piconet, a node has to switch to the hopping frequency
sequence of the other piconet. Since each switch causes delay (e.g.,
scheduling and synchronization time), an efficient scatternet formation
protocol can be one that minimizes the roles assigned to the
nodes, without losing network connectivity.
While several solutions and commercial products have been introduced
for one-hop Bluetooth communication, the Bluetooth specification
does not indicate any method for scatternet formation. The
problem of scatternet formation has not been dealt with until very
recently. The solutions proposed in literature can be divided into
single-hop and multi-hop solutions. Several criteria could be set
as the objectives in forming scatternet. First of all, the protocol
should create degree limited scatternets, to avoid parking any node.
Secondly, the number of piconets should be minimized to provide
faster routing. Thirdly, the formation and maintenance of scatternet
should have small communication overhead. Fourthly, the diameter
of the scatternet should be small, i.e., the maximum number of hops
between any two devices must be small. In this paper, we focus on
scatternet formation for single-hop ad hoc networks. In a single-hop
ad hoc network, all wireless devices are in the radio vicinity
of each other, e.g., electronic devices in a laboratory, or laptops in
a conference room. A single-hop network can be modeled by a
complete graph.
Previous literature on scatternet formation assumed that devices
are not able to communicate unless they have previously discovered
each other by synchronizing their frequency hopping patterns.
Thus, even if all nodes are within direct communication range of
each other, only those nodes, which are synchronized with the transmitter
, can hear the transmission. Synchronizing the frequency
hopping patterns is apparently a time consuming and pseudo-random
process [13]. In this paper we assume that the problem of discovering
all neighbors within transmission radius of a device is resolved
by separate Bluetooth protocol. One such protocol for discovering
all one hop networks is described in [13, 3], while a protocol that
provides two-hop information to every node is described in [12].
These protocols are applicable as the pre-phase of our scheme.
This paper addresses the problem of scatternet formation for
single-hop Bluetooth based ad hoc networks, with minimal communication
overhead. We adopt the well-known structure de Bruijn
graph to form the backbone of Bluetooth scatternet, hereafter called
dBBlue, such that every master node has at most seven slaves, every
slave node is in at most two piconets, and no node assumes
both master and slave roles. Our structure dBBlue also enjoys a
nice routing property: the diameter of the graph is O(log n) and we can find a path with at most O(log n) hops between every pair of nodes without any routing table. Moreover, the congestion of every node is at most O(log n/n), assuming that a unit of total traffic demand is evenly distributed among all pairs of nodes. We discuss in detail a vigorous method to locally update the structure dBBlue using at most O(log n) communications when a node joins or leaves the network. In most cases, the cost of updating the scatternet is actually O(1) since a node can join or leave without affecting the remaining scatternet. The number of nodes affected when a node joins or leaves the network is always bounded from above by a constant. To facilitate self-routing and easy updating, we design a scalable MAC assigning mechanism for piconets, which can guarantee packet delivery even during updating. Our method can construct the structure dBBlue incrementally when the nodes join the network one by one. In addition, the structure formed by our method can sustain the faults of some nodes and the network is still guaranteed to be connected. If a node detects a fault of some neighboring master node or bridge slave node, it can dynamically re-route the packets and the path traveled by the packet is still at most O(log n) hops. Previously no method can guarantee all these properties,
although some methods can achieve some of the properties.
The rest of the paper is organized as follows. Section 2 presents
our new Bluetooth formation algorithms for single-hop ad hoc networks
. We describe how to build a static scatternet of
nodes based
on de Bruijn graph and assign roles and labels to them. Section 3
proposes a vigorous method to locally and dynamically update the
scatternet topology when node joins or leaves the network. Section
4 describes the routing method for our de Bruijn based scatternet
which efficiently finds the next node need to go without any routing
table. The related works is discussed in section 5. We conclude our
paper in Section 6 by pointing out some possible future research
directions.
DBBLUE SCATTERNET CONSTRUCTION
Our dBBlue scatternet first builds a backbone based on the well-known
de Bruijn graph [5]. The de Bruijn graph, denoted by B(2,m), is a directed graph with 2^m nodes. Assume that each node is assigned a unique label of length m on the alphabet {0,1}. There is an edge in B(2,m) from a node with label x1x2...xm to any node with label x2...xmy, where y is 0 or 1. Figure 1 illustrates B(2,3). It is well-known that the de Bruijn graph enables self-routing intrinsically. The self-routing path from the source with label x1x2...xm to the target with label y1y2...ym is x1x2...xm -> x2...xmy1 -> x3...xmy1y2 -> ... -> y1y2...ym. Observe that we could find a shorter route by looking for the longest sequence that is both a suffix of x1x2...xm and a prefix of y1y2...ym. Suppose that such a longest sequence has length t. The shortest path between the source and the target is obtained by shifting in only the remaining m-t bits of the target. Clearly, the route between any two nodes is at most m hops, i.e., B(2,m) has diameter m = log2 n, where n = 2^m is the number of nodes of the graph.
Figure 1: The de Bruijn graph B(2,3).
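The self-routing rule can be sketched in a few lines (an illustrative reconstruction, not code from the paper): the route shifts in the target bits one at a time, and the shortcut comes from the longest overlap between a suffix of the source label and a prefix of the target label.

```python
def debruijn_route(src: str, dst: str) -> list[str]:
    """Shortest self-routing path between two equal-length labels of a
    balanced binary de Bruijn graph."""
    assert len(src) == len(dst)
    # Longest sequence that is both a suffix of src and a prefix of dst.
    overlap = 0
    for length in range(len(src), 0, -1):
        if src.endswith(dst[:length]):
            overlap = length
            break
    path, cur = [src], src
    for bit in dst[overlap:]:        # shift in the remaining target bits
        cur = cur[1:] + bit
        path.append(cur)
    return path

# Example on B(2,3): never more than 3 hops.
print(debruijn_route("000", "101"))   # ['000', '001', '010', '101']
print(debruijn_route("011", "110"))   # ['011', '110'] via the suffix/prefix overlap
```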
The classical de Bruijn graph is balanced in the sense that the labels of all nodes have the same length. The de Bruijn graph can be generalized to any set of vertices whose labels form a universal prefix set. In [7], Fraigniaud and Gauron proposed a novel method to construct an efficient topology for P2P networks based on the generalized de Bruijn graph defined on a universal prefix set. "A universal prefix set is a set L of labels on an alphabet such that, for any infinite word, there is a unique word in L which is a prefix of it. The empty set is also a universal prefix set." [7] For instance, {0, 10, 11} is a universal prefix set on the alphabet {0,1}, but {0, 10} and {0, 1, 10} are not. There is a directed edge from a node u with label x1x2...xm to another node v in the generalized de Bruijn graph if x2x3...xm is a prefix of the label of node v. A generalized de Bruijn graph is pseudo-balanced if the lengths of the labels differ by at most one. For simplicity, we still denote a pseudo-balanced de Bruijn graph on the alphabet {0,1} by B(2,m) if the node labels have length at least m bits and at most m+1 bits. We also say that a node of B(2,m) is at level m if its label has m bits and at level m+1 if its label has m+1 bits.
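The universal-prefix-set condition is easy to test mechanically; the helper below is an illustrative sketch (not from the paper) that checks the property on the binary alphabet by enumerating all words of the maximum label length.

```python
from itertools import product

def is_universal_prefix_set(labels: set[str]) -> bool:
    """Every sufficiently long binary word must have exactly one label
    of the set as a prefix (the empty set counts as universal by convention)."""
    if not labels:
        return True
    max_len = max(len(x) for x in labels)
    for bits in product("01", repeat=max_len):
        word = "".join(bits)
        if sum(word.startswith(x) for x in labels) != 1:
            return False
    return True

print(is_universal_prefix_set({"0", "10", "11"}))   # True
print(is_universal_prefix_set({"0", "10"}))         # False: words starting 11... match nothing
print(is_universal_prefix_set({"0", "1", "10"}))    # False: words starting 10... match twice
```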
In this paper, we only consider the balanced or pseudo-balanced binary de Bruijn graph B(2,m). Node labels in a pseudo-balanced de Bruijn graph correspond to the leaf nodes of a full binary tree in which the depth difference between any two leaves is at most one and each internal node has two children; Figure 2 illustrates the correspondence between them. In the figure, the pseudo-balanced de Bruijn graph is defined on the leaf nodes and the directed edges.
In a pseudo-balanced de Bruijn graph B(2,m), each node has at most 4 out-neighbors and 2 in-neighbors. Consider routing a packet from a node u with label x1x2...xk to another node v with label y1y2...yl, where k and l are m or m+1. Node u will forward the packet to its neighbor node with label x2...xk, or x2...xky1, or x2...xky1y2. Notice that since the labels of the nodes form a universal prefix set, exactly one of these three labels does exist. The following nodes keep forwarding the packet similarly until it reaches node v. Consequently, the diameter of the pseudo-balanced de Bruijn graph is still O(log n).
In this paper, we propose a scalable scatternet structure based on the pseudo-balanced de Bruijn graph B(2,m).
Figure 2: The correspondence between full binary tree and pseudo-balanced de Bruijn graph.
In a pseudo-balanced de Bruijn graph B(2,m), two nodes are called a critical pair if they differ only in the least significant bit of their labels. Let v1, v2, ..., vn be the sequence of nodes visited by a traversal of all leaf nodes in the corresponding binary tree of B(2,m). A node v(i+1) is called the successor of node vi, and vi is called the predecessor of node v(i+1); here the indices are taken modulo n. For example, in Figure 2, nodes 0000 and 0001 form a critical pair, and node 0010 is the successor of node 0001.
2.2 MAC Address Assignment for Piconet
Our method will construct a balanced (or pseudo-balanced) de Bruijn graph B(2,m) as the backbone of the network. Here the choice of the integer m is discussed later. We will ignore the direction of the edges in the de Bruijn graph B(2,m). Thus, every node will have at most 4 (or 6 for a pseudo-balanced de Bruijn graph) edges incident.
Every node in the backbone of dBBlue scatternet will be assigned
a master role. We will add a bridge slave node for every
pair of master nodes that are connected in the backbone. Thus, every
master node will have at most six bridge slave nodes so far. We
then add some free slave nodes to each master node, and call them
pure slave nodes.
Before we discuss in detail our scatternet construction methods,
we present our novel rule of assigning the MAC address in a piconet
. In our dBBlue scatternet, when we route a packet to a destination node v, we only know the piconet ID of node v, which is the same as the label of its master node, and the MAC address of node v in that piconet. The detailed
routing mechanism will be discussed in Section 4. When some
node joins or leaves the scatternet, we often have to reorganize
some piconets and thus re-assign the MACs of some nodes. Our
method of assigning MAC addresses in a piconet and reorganizing
the piconets guarantees that the new piconet (even the new MAC
address) can be found by simply appending or deleting the least significant bit, which keeps the label prefix of the updated nodes unchanged so that even the delivery of packets on the way to those
updating nodes will not be interrupted.
In a piconet, one MAC address is always reserved for the master node. For simplicity, we omit the MAC address of a master node hereafter while representing its label; i.e., the master node with label x1x2...xm actually carries that label extended with the reserved master MAC if labels consistent with the slave nodes are needed. Remember that, in a pseudo-balanced de Bruijn graph, any node has 2 in-neighbors (except the all-zero and all-one nodes) and at most 4 out-neighbors, so two MAC addresses are always reserved for the two bridge slaves to in-neighbors, four MAC addresses are reserved for the bridge slaves to out-neighbors if they exist, and the remaining MAC address is reserved for the 7th slave (it must be a pure slave) if it exists. Figure 3 illustrates all four possibilities for the piconet MAC address assignment according to the number of out-neighbors in the scatternet backbone. In the figure, for simplicity, a parenthesized symbol denotes the node with the corresponding m-bit or (m+1)-bit label, whichever exists in the network. Notice that a master node in the constructed scatternet based on a pseudo-balanced de Bruijn graph B(2,m) always has two incoming neighbors. For example, a master node x1x2...xm in level m can have incoming neighbor 0x1x2...xm-1 in level m or 0x1x2...xm in level m+1, but not both, since the de Bruijn graph is built upon a universal prefix set; similarly, the other incoming neighbor is 1x1x2...xm-1 or 1x1x2...xm. Analogously, a master node in level m+1 has exactly two incoming neighbors, one whose label begins with 0 and one whose label begins with 1. On the other hand, the number of out-neighbors of a node in the pseudo-balanced de Bruijn graph could be 1, 2, 3 or 4. Only a node at level m can have 3 or 4 out-neighbors and only a node at level m+1 can have 1 out-neighbor (except the all-zero and all-one nodes if they exist).
Figure 3: MAC address assignment for a piconet, for the cases of (a) one, (b) two, (c) three, and (d) four out-neighbors in the scatternet backbone.
Table 1 summarizes the rule for assigning MAC addresses to the bridge slave nodes in a piconet. Their MAC addresses can be decided uniquely according to the label bit difference between the current piconet ID and the neighbor piconet ID. For example, if the master u is labeled x1x2...xm and its out-neighbor w is labeled x2...xmy, then their common bridge slave is assigned one MAC address by u and another by w, each determined by this rule. Remember that every bridge slave has one MAC address in each of the two piconets in which it resides.
Table 1: The rule to assign MAC addresses to bridge slave nodes (columns: Node, In-Neighbor, Out-Neighbor).
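The public label used for routing can be pictured as a tiny record holding the piconet ID together with a 3-bit MAC (an illustrative sketch; the reserved value 000 for the master is our assumption here, and the exact per-neighbor MAC values are what Table 1 specifies).

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Label:
    piconet_id: str   # the master's de Bruijn label, e.g. "0110"
    mac: str          # the node's 3-bit MAC inside that piconet, e.g. "011"

    def full(self) -> str:
        return f"{self.piconet_id},{self.mac}"

MASTER_MAC = "000"                      # assumed reserved MAC of the master

master = Label("0110", MASTER_MAC)
bridge = Label("0110", "011")           # a bridge slave's public label
print(master.full(), bridge.full())     # 0110,000 0110,011
```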
Notice that, in a Bluetooth scatternet, a bridge slave node has two independent piconet IDs and MAC addresses, one in each of its two piconets. However, since the routing mechanism in the de Bruijn graph is directional, only the piconet ID and MAC address assigned by its in-master are public and meaningful for routing; we call this pair the label of the node in the remainder of the paper, while the other pair is only used for communication inside the piconet. Figure 4 illustrates one piconet in the scatternet. Here node u and nodes I1, I2, O1, O2 assume the master role and form part of the backbone of the scatternet; these master nodes are connected in the de Bruijn graph by bridge slaves. Assume that node u has label x1x2...xm. Nodes I1 and I2 denote the two incoming neighbors of node u, and nodes O1 and O2 denote the two outgoing neighbors of node u, whose labels are obtained from x1x2...xm by the shift rule of the de Bruijn graph. The remaining slaves of u are its pure slave nodes. The label of each slave of u is x1x2...xm followed by its MAC address in this piconet; the bridge slaves toward O1 and O2 carry public labels consistent with the prefixes of O1 and O2, respectively. Notice that the MAC addresses that the bridge slaves coming from I1 and I2 hold in the piconet mastered by node u are used only by nodes in this piconet and are not broadcast to the network.
Figure 4: An example of a static piconet (with nodes inside the shaded region) formed by our method. Here a master node is denoted by a square, a pure slave is denoted by a circle, and a bridge slave is denoted by a triangle.
As will be seen later, our labeling rule makes the updating of the
scatternet topology and nodes' labels much easier when new nodes
join the network or some existing nodes leave the network. For incremental
updating of the scatternet, there are two scenarios when
a new node joins the network. The first case is that there is a master
node who has free slot for a pure slave. We then directly assign
the newly joined node as the pure slave of that master node. The
second case is that no master node has free slot for a pure slave. We
then have to split some piconet and in turn create some free slots
for pure slaves. The splitting of a piconet is performed such that
the resulting backbone (formed by master nodes and bridge slaves)
is still a pseudo-balanced de Bruijn graph. When a piconet is split, the labels of some nodes have to be updated. While updating
the topology, it is possible that some packets are already on their
way to the destinations (via or toward this splitting piconet). Our
labeling rule makes sure that the packets can still be routed without
any interruption, only the local nodes are assigned new labels, and
the re-labeling are also conducted locally.
2.3 Static Scatternet Construction
Given n nodes currently distributed in the network, this section gives an efficient algorithm to construct our de Bruijn based scatternet dBBlue, which has the low diameter and bounded node degree properties. In other words, we first study the construction of the scatternet for a static n-node network, which will serve as the base for our dynamic construction.
Our method will construct a balanced de Bruijn graph B(2,m) as the initial backbone of the network. We will choose the integer m based on n so that there are enough bridge slave nodes, which implies that no master node serves as a bridge slave.
Our method does not consider the details of the neighbor discovery process. We assume that every node already knows the existence
of the other nodes.
ALGORITHM 1. Static DeBruijn-Based Scatternet
1. Assume that there is a leader already among these n nodes. The leader could be the node with the smallest ID. We give the token to the leader and call it the token node. The token node randomly selects 2^m nodes (including itself) into the master set M, which assume the master role in the final scatternet topology; the remaining nodes will be assigned as bridge slaves or pure slaves.
2. The token node assigns itself the label 00...0, and assigns each node in M a unique m-bit label in the range from 00...0 to 11...1. The set of nodes M forms a de Bruijn graph B(2,m) as the scatternet backbone.
3. The token node, with label x1x2...xm, selects two nodes from the remaining nodes as its bridge slaves and assigns each of them its own label followed by one of the reserved bridge MAC addresses; these MAC addresses also serve as the Medium Access Code (MAC) of the two slaves in the piconet mastered by the token node. The token node uses one bridge slave to connect with its out-neighbor x2...xm0 and the other to connect with the out-neighbor x2...xm1. (The two special nodes 00...0 and 11...1 have only one out-neighbor other than themselves, so each of them uses just one bridge slave.)
4. The current token node then selects up to three nodes from the remaining nodes as its pure slaves, if enough free nodes remain, and assigns them its own label followed by the remaining free MAC addresses, in order. (Nodes 00...0 and 11...1 may choose up to five pure slaves since they have only one in-neighbor and one out-neighbor.) Then the token is passed to its successor.
5. Repeat the above steps (3) and (4) until all nodes in M are processed. After all nodes have been processed, the current token node passes the token back to node 00...0 again.
Once the initial topology construction is finished, the token node
will be responsible for the following node joining and leaving
issues. Master nodes form the backbone of the Bluetooth scatternet, and a piconet works like a node in the de Bruijn graph.
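As a rough sketch of the backbone that Algorithm 1 produces (our own illustration, leaving out the slave bookkeeping), the masters are simply all m-bit labels and the de Bruijn edges mark where bridge slaves will be placed.

```python
def build_backbone(m: int):
    """Master labels of a balanced B(2, m) backbone plus the out-neighbor
    relation that the bridge slaves realize (self-loops removed)."""
    masters = [format(i, f"0{m}b") for i in range(2 ** m)]
    out_neighbors = {u: sorted({u[1:] + b for b in "01"} - {u}) for u in masters}
    return masters, out_neighbors

masters, out = build_backbone(3)
print(out["000"])   # ['001']          00...0 keeps only one real out-edge
print(out["101"])   # ['010', '011']
```

The two special labels 00...0 and 11...1 lose their self-loops here, matching the remark in Algorithm 1 that they need only one bridge slave toward an out-neighbor.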
Figure 5: dBBlue Bluetooth Scatternet.
Figure 5 illustrates a dBBlue scatternet based on the B(2,3) graph.
THEOREM 1. In the dBBlue scatternet, each master has no more than 7 slaves and each slave serves as a bridge for at most 2 piconets. The number of piconets is at most n/3 and at least n/6. Moreover, the computation cost of the static construction is O(n).
PROOF. From the topology construction, each master carries at most 5 same-prefix slaves and at most 2 different-prefix slaves from its in-neighbors, since each node in the B(2,m) graph has at most 2 in-neighbors; so each master has no more than 7 slaves. And each slave exists as a free slave or as the bridge between its same-prefix master u and one of u's out-neighbors, so the degree of a slave node is at most 2.
Every master node together with its same-prefix slaves accounts for at least 3 and at most 6 nodes, and these groups partition all n nodes. Since the number of masters is 2^m, we have 3·2^m <= n <= 6·2^m, which implies n/6 <= 2^m <= n/3. Consequently, the number of piconets is at least n/6 and at most n/3.
It is obvious that the total computation cost of constructing the static dBBlue scatternet is O(n).
In this paper we always assume a bluetooth piconet consists of at most 7 slaves and 1 master. If future bluetooth technology allows a master to bring more slaves, our scatternet construction method can adapt easily as follows. The scatternet backbone will still be based on a de Bruijn graph, with the parameter chosen to match the larger piconet size. In other words, every master node will carry more pure slaves together with the 4 bridge slaves connecting it to its two out-neighbors and two in-neighbors in the de Bruijn graph. It is not difficult to show that using the binary de Bruijn graph will create a scatternet with fewer piconets than using a higher-degree de Bruijn graph, since each master node carries fewer pure slaves in the latter case. On the other hand, the scatternet based on a higher-degree de Bruijn graph does provide better fault tolerance since the degree of each master node is increased.
DYNAMIC SCATTERNET UPDATING
In this section we describe a vigorous method to locally update
the scatternet topology dynamically when a node joins or leaves the
network. Considering each piconet as an abstract node in the de
Bruijn graph, our goal is to maintain a scalable pseudo-balanced de
Bruijn graph.
3.1 Token Based Updating
First consider the case when a node wants to join the network.
We have to assign a role for this newly joined node. There are several
possible scenarios about the existing scatternet. (1) the existing
scatternet has a master node that has free slave slots, then we can
simply assign this newly joined node as the pure slave of this master
node. (2) All master nodes in the existing scatternet already have 7 slaves; we then have to expand the backbone of the scatternet to
incorporate this newly joined node. In other words, we have to split
some piconet to two such that the two new piconets will have some
free pure slave slots to hold this newly joined node.
Several methods can be used to implement the above scheme. To
make the updating efficient, we should be able to quickly find the
master node with empty slot for pure slave if there is any. One approach
is to keep the current scatternet compact and assign a special
node the token in a way such that all master nodes with label less
than the token node do not have empty slot, and all master nodes
with label larger than the token node do have empty slot. When
a new node joins the network, we can simply assign it the empty
pure slave slot and then update the token node if necessary. This approach
is efficient for node joining but suffers more cost for node
leaving. When a node leaves the network, we have to update the
scatternet to keep the scatternet compact. Thus, we possibly have
to move some nodes to fill the slot emptied by this left node.
The other approach is not to compact the scatternet. When a
node leaves, we do nothing if the backbone of the scatternet is untouched
. However, this approach suffers a large cost when node
joins the network since we have to find where to put the newly
joined node. One method is to use the broadcast method to travel
the whole scatternet to find the master node with free pure slave
slot. This may perform better if only a few of the existing piconets
have free slots. The other method is to randomly select a master
node and check if it has free slot. If it does not, we then select
another random master node until one such master node is found.
This approach performs better if the majority of the piconets have
free slots. We omit the detail of performance analysis here, which
will be presented in the full version of the paper.
In this paper, we will adopt the compact approach. Before we
present the detail of our methods of updating the scatternet, we first
study the possible status of the scatternet, which will be recorded
in the token node.
When a new node requests joining the network, there are three
possible scenarios to be discussed.
1. Current backbone is a balanced de Bruijn graph. Figure 6
illustrates an example. The token is held by the master node
with the smallest label among all master nodes that still have a free slot for a same-prefix slave. In this status, the master node with the token has some free slot for a newly joined node and so do
all master nodes with larger labels.
Figure 6: Token in balanced de Bruijn graph.
2. Current backbone is a pseudo-balanced de Bruijn graph B(2,m) under expanding status, i.e., many nodes join the scatternet. Figure 7 illustrates an example. The token is held by the first master node in level m+1 that still has a free slave slot, if such a node exists; otherwise the first master node in level m holds the token. In this status, all master nodes in level m and level m+1 have no free slots except the last two master nodes in level m+1. In other words, at most two master nodes have free slots.
Figure 7: Token in pseudo-balanced de Bruijn graph under expanding status.
3. Current backbone is a pseudo-balanced de Bruijn graph B(2,m) under shrinking status, i.e., many nodes leave the scatternet. Figure 8 illustrates an example. The token is held by the master node with the smallest label at the boundary between the merged and unmerged piconets. In this status, the master nodes in level m and level m+1 carry only the minimum numbers of same-prefix slave nodes.
Figure 8: Token in pseudo-balanced de Bruijn graph under shrinking status.
These statuses (balanced, expanding, shrinking) will be recorded in the token data structure.
3.2 Node Joining
When a new node joins the network, there are three cases.
1. Token status is balanced, that is to say, current backbone is a
balanced de Bruijn graph. See Figure 6 for an illustration.
(a) The token node has fewer than 7 slaves. Then it simply adds the joining node into its slave set and assigns it a label consisting of the token node's label followed by one of the unassigned MAC addresses. If the token node now has 7 slaves, it passes the token to its successor.
(b) The token node is fully occupied by slaves. This could happen only when all master nodes in the scatternet have 7 slaves. Then the token is passed back to node 00...0 if it is not already there. Change the token status to expanding and call Method 1 to split the current piconet mastered by the token node into two parts and add the joining node as a new pure slave.
2. Token status is expanding, that is to say, current backbone is a pseudo-balanced de Bruijn graph under expanding status. See Figure 7 for an illustration.
(a) If the token node is in level m+1, i.e., with an (m+1)-bit label, then it must have fewer than 7 slaves. It simply adds the joining node into its slave set and assigns it a label consisting of the token node's label followed by one of the unassigned MAC addresses. If the token node now has 7 slaves, it passes the token to its successor.
(b) If the token node is in level m, i.e., with an m-bit label: this could happen only when all master nodes in the scatternet have been fully occupied by slaves. Call Method 1 to split the current piconet mastered by this token node into two piconets, and add the joining node as a new slave.
3. Token status is shrinking, that is to say, current backbone is a pseudo-balanced de Bruijn graph under shrinking status. See Figure 8 for an illustration. In this case, the token node surely has exactly four slaves (see node leaving for more details). We first add the joining node as a slave of the token node and assign it one of the unassigned MAC addresses. Call Method 1 to split the current piconet into two piconets, and pass the token to the successor in level m+1. If the backbone has thereby become balanced again, set the token status to balanced and pass the token to the appropriate master node. In other words, we basically undo the updating (piconet merging) caused by the previous node leaving event.
We then present our algorithm that splits one piconet mastered by node x1x2...xm into two new piconets mastered by nodes with labels x1x2...xm0 and x1x2...xm1 respectively.
METHOD 1. Piconet split due to node joining
1. The token node x1x2...xm promotes one of its slave nodes to be the master of a new piconet. We change the label of a pure slave node or an out-neighbor bridge slave node by simply appending one bit, i.e., the new label still has x1x2...xm as its prefix. The two new piconets have master nodes with labels x1x2...xm0 and x1x2...xm1 respectively: the original master and the promoted slave assume the master roles of the two new piconets, and the remaining slaves are redistributed between them as bridge slaves or pure slaves, each keeping x1x2...xm as the prefix of its new label.
Notice this label extension still preserves the prefix. Thus, after the piconet splitting, message delivery will not be interrupted at all because old addresses are still reachable through the same prefix. In addition, the nodes with new labels and the corresponding MAC addresses will serve the bridge slave role in the two newly created piconets. Figure 9 illustrates the change while a piconet splits.
Figure 9: Piconet splits due to node joining.
2. Then, both new master nodes x1x2...xm0 and x1x2...xm1 need to reselect the bridge slaves connecting them with their in-neighbors and out-neighbors if needed. Simultaneously, the neighbors of both new masters need to reselect their same-prefix bridge slaves to connect with x1x2...xm0 and x1x2...xm1. The selection still follows the rule described in Section 2.2; Figure 3 illustrates all possible scenarios. Since the master nodes of the new piconets are in level m+1, each of them has at most 2 out-neighbors in the pseudo-balanced de Bruijn graph, so we have enough bridge slave nodes for each new piconet. The in-neighbor master nodes of x1x2...xm0 and x1x2...xm1 in the de Bruijn graph have to change one of their pure slaves to a bridge slave to connect with the corresponding new master. Notice this update is restricted to local regions, so the update is totally localized.
3. Finally, the token is still kept by the master node whose previous label was x1x2...xm.
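A minimal sketch of the label arithmetic behind Method 1 (our own illustration): splitting appends one bit to the piconet ID, so both children keep the old ID as a prefix and prefix-based routing keeps working for packets already in flight.

```python
def child_piconets(piconet_id: str) -> tuple[str, str]:
    """The two piconets created by a split; both keep the old ID as a prefix."""
    return piconet_id + "0", piconet_id + "1"

def still_routable(old_id: str, new_id: str) -> bool:
    # Prefix-based routing keeps delivering as long as this holds.
    return new_id.startswith(old_id)

left, right = child_piconets("0110")
print(left, right)                      # 01100 01101
print(still_routable("0110", left))     # True
```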
3.3 Node Leaving
If a node leaves elegantly, it should first notify the token node before leaving. If a master or slave node leaves for an unexpected reason such as power-off, its neighbors will detect it soon and notify the token node. Our method does not consider the details of the exception detection process; we assume the token node can detect the node leaving within a short time.
When the token node detects the node leaving, then there are
three cases to be addressed again:
1. Token status is balanced, that is to say, current backbone is a
balanced de Bruijn graph. Here two cases need be discussed:
(a) If the token node does have a pure slave node, then the token node requests one pure slave to replace the position of the leaving node.
(b) If the token node has no pure slave nodes, then it passes the token to its predecessor, say node w. There are two scenarios, discussed as follows.
i. If node w has pure slaves, then it requests one pure slave to replace the position of the leaving node.
ii. If node w also has no pure slaves: this could happen only when no master node in the scatternet has a pure slave, i.e., all master nodes carry only the 4 slaves serving the bridge slave role. The token node changes the token status to shrinking, calls Method 2 to merge its corresponding critical pair, and then asks one pure slave to replace the position of the leaving node.
2. Token status is expanding, that is to say, current backbone is
a pseudo-balanced de Bruijn graph under expanding status.
(a) If the token node is in level m, i.e., with an m-bit label: this could happen only when all master nodes in the scatternet have been fully occupied by slaves. The token needs to be passed to the predecessor, which will ask one pure slave node to replace the position of the leaving node.
(b) If the token node is in level m+1, i.e., with an (m+1)-bit label: if the token node does have a pure slave node, then the token node requests one pure slave to replace the position of the leaving node; otherwise two cases need to be discussed:
i. If the sibling of the token node's critical pair has already been filled, the token is passed to the predecessor, which will ask one pure slave node to replace the position of the leaving node.
ii. Otherwise, the token node first merges its corresponding critical pair by calling Method 2 and then requests one pure slave to replace the position of the leaving node. Now if the backbone has become balanced again, the token node changes the token status to balanced and passes the token to its predecessor.
3. Token status is shrinking, that is to say, current backbone is
a pseudo-balanced de Bruijn graph under shrinking status.
(a) If there are still unmerged critical pairs, the token node passes the token to its second predecessor in level m+1 (the member of the preceding critical pair with least significant bit 0), which will call Method 2 to merge its critical pair piconet and ask one pure slave to replace the position of the leaving node.
(b) Otherwise, the token node changes the token status to balanced and passes the token to the corresponding master node, which will ask one pure slave to replace the position of the leaving node.
One special case is that the token node itself leaves. In this case, the token node will promote one of its pure slaves to replace it, i.e., to be the master node and the new token node. If no pure slave exists, similarly, we have to ask some pure slave node from its predecessor to take over its role. When the token node does not leave elegantly, the situation is more complicated and we need fault tolerance for the token node, which is out of the scope of this paper.
We then describe our method to merge two piconets that are mastered
by a critical pair.
METHOD 2. Piconet merge due to node leaving
1. Assume that the token node requests merging with its sibling master node (the other member of its critical pair). The new piconet has a master node with label x1x2...xm. Notice that the two sibling nodes each have at most 2 out-neighbors in the de Bruijn graph. The label change is achieved by simply deleting the least significant bit of the piconet ID: the token node becomes the master of the merged piconet, and the remaining slaves of both old piconets keep their roles as pure slaves or bridge slaves under the shortened piconet ID.
Notice this label shrinking still preserves the label prefix. Thus, after the piconets merge, message delivery is not affected at all because the de Bruijn graph uses prefix based routing and old addresses are still reachable by the same prefix. The merge will not cause any routing problem even though the label shrinking is not acknowledged by other nodes. At the same time, the sibling master node leaves its piconet to replace the position of the leaving node. To continue message delivery for that node, the new master node x1x2...xm will keep a record of its new label for a period of time and forward messages targeted to it accordingly. More detail is discussed in Section 4. Figure 10 illustrates the change of labels caused by merging piconets.
Figure 10: Piconets merge due to node leaving.
2. Then, the new master node x1x2...xm needs to reselect the bridge slaves connecting it with its in-neighbors and out-neighbors if needed. Simultaneously, the neighboring master nodes of the two merged piconets need to reselect their same-prefix bridge slaves to connect with x1x2...xm. The selection still follows the same rule described in Section 2.2; see Figure 3 for an illustration of all possible scenarios. Notice this update is totally localized.
3. The token is now kept by the master node x1x2...xm.
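The merge in Method 2 is the mirror operation; the sketch below (again only the label arithmetic, under our own naming) drops the least significant bit, so both old piconet IDs still start with the merged ID and remain reachable by prefix routing.

```python
def merge_critical_pair(id_a: str, id_b: str) -> str:
    """Merge two sibling piconets (a critical pair) into one whose ID is the
    common prefix obtained by deleting the least significant bit."""
    assert len(id_a) == len(id_b) and id_a[:-1] == id_b[:-1], "not a critical pair"
    return id_a[:-1]

merged = merge_critical_pair("01100", "01101")
print(merged)                                                    # 0110
print("01100".startswith(merged), "01101".startswith(merged))    # True True
```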
It is not difficult to prove the following theorem.
THEOREM 2. Our method locally updates the dBBlue scatternet using at most O(log n) communications when a node joins or leaves the network. In most cases, the cost of updating the scatternet is actually O(1) since the node can leave and join without
affecting the remaining scatternet. The number of nodes affected
when a node leaves or joins the network is always bounded from
above by a constant. Our method can construct the structure incrementally
when the nodes join the network one by one.
3.4 Bounded Network Size
The method described so far can incrementally construct the scatternet
when the nodes join the network one by one and can update
the scatternet structure efficiently when nodes leave or join the network
even frequently without affecting the worst case properties of
the scatternet. This method is efficient in most cases, however, it
could generate lots of merging and splitting of piconets in the worst
case: a node joins the scatternet which causes the splitting of a piconet
, then a node leaves which in turn causes the merging of two
piconets, and repeat joining, leaving.
In most applications, the size of the bluetooth network is often stable, varying only within a small constant factor. If this is the case, we can apply the following approach to build the scatternet. First, we use Algorithm 1 to build a scatternet with n nodes. When a new node joins the network, we first try to find an empty pure slave slot for this node from the current token node. If there is no empty slot, we then pass the token to the successor of the current token node. When all master nodes in the scatternet have 7 slaves, we will start to create another piconet to connect to the current backbone. In other words, instead of having up to 3 pure slave nodes, a master node from the scatternet backbone will replace its pure slave nodes by piconets (at maximum, one per pure slave slot). We call such piconets associated with the master node of the backbone. Clearly, a backbone based on a balanced de Bruijn graph B(2,m) can support from about 3·2^m nodes to about 6·2^m nodes without associated piconets. By associating piconets with the master nodes of the backbone, the number of nodes it can support increases accordingly, since each pure slave node can be replaced by a whole piconet of 8 nodes.
One disadvantage of associating piconets to master nodes is that
every master node in the backbone will have to forward more messages
than the scatternet created by the method described previously
. The other disadvantage is that when the network size goes
beyond its supported scope, the updating of the scatternet is more
costly than before. See the full version of the paper for more detail.
ROUTING IN SCATTERNET
We first describe the routing in the dBBlue scatternet with a balanced backbone. If both source and target nodes are masters, we assume the source master node u has label x1x2...xm and the target master node v has label y1y2...ym. According to the routing mechanism described in Section 2.1, node u simply forwards the message to its neighbor master node x2...xmy1, relayed by their common bridge node. That node then forwards the message to its own neighbor master node accordingly. Clearly, the message is guaranteed to reach the target in at most m de Bruijn steps. If the source node is a slave, it first sends the message to its master node. Notice that a pure slave node has only one master node and a bridge slave node has two master nodes; a bridge slave simply picks one of its master nodes at random. Similarly, if the target node is a slave, the message will first be forwarded to its master node. The procedure for routing the message between these two master nodes is the same as described above. Clearly, the routing path from one master node to another master node is at most 2m hops, since every de Bruijn step is relayed by a bridge slave, and the longest path occurs between two slave nodes, which is at most 2m+2 hops. From 3·2^m <= n, we have m <= log2 n. Thus, the diameter of the de Bruijn-based scatternet is O(log n).
THEOREM 3. For any two nodes in the dBBlue scatternet, there is a path with at most O(log n) hops and such a path can be found locally based only on the labels of the source and target.
Notice that, two assumptions are made in our routing scheme
described above: (1) the source node knows the label of the target
node, and (2) the backbone of the scatternet is based on a balanced
de Bruijn graph. We will not try to resolve the first assumption in
this paper, but discuss it briefly here. The labels of nodes can be broadcast to the whole network if nodes do not leave and join frequently, i.e., if the labels of nodes do not change frequently. Or we can adopt a mechanism similar to the Domain Name Service (DNS): the labels are stored in a hierarchical manner and a node can query the label servers to get the labels of the target nodes and then cache them locally. Here, we discuss briefly how to perform a broadcast in the de Bruijn graph such that it is guaranteed to reach each node exactly once. We initiate the broadcast from node 00...0. Each node with label 0x2...xm continues forwarding the message to both of its out-neighbors, while nodes whose most significant bit is 1 will not forward the message. The broadcast basically works the same as breadth first search (BFS) in a binary tree. Clearly, a node will only forward the message to nodes with larger labels. Thus, a node receives the message exactly once, and the communication cost of such a broadcast is linear in the number of backbone nodes.
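The broadcast rule can be checked with a small simulation (an illustration of the rule as reconstructed above: only labels whose most significant bit is 0 forward, so every backbone label is reached exactly once).

```python
def broadcast_counts(m: int) -> dict[str, int]:
    """Simulate the broadcast started at 00...0 over the balanced backbone
    and count how many times each m-bit label receives the message."""
    received = {format(i, f"0{m}b"): 0 for i in range(2 ** m)}
    received["0" * m] = 1
    frontier = ["0" * m]
    while frontier:
        nxt = []
        for u in frontier:
            if u[0] == "1":
                continue                 # most significant bit 1: do not forward
            for b in "01":
                v = u[1:] + b
                if v != u:               # skip the self-loop at 00...0
                    received[v] += 1
                    nxt.append(v)
        frontier = nxt
    return received

print(all(c == 1 for c in broadcast_counts(3).values()))   # True
```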
We then discuss in detail how to route packets when the scatternet backbone is pseudo-balanced. Assume the source master node u has label x1x2...xk and the target master node v has label y1y2...yl, where k and l are m or m+1. Node u will forward the packet to its out-neighbor master node w with label x2...xk, or x2...xky1, or x2...xky1y2. Notice that since the labels of all nodes form a universal prefix set, exactly one of these three labels does exist. Consequently, the diameter of the pseudo-balanced de Bruijn graph is still O(log n). The MAC address of the bridge slave node from u to w is determined by which of the three labels exists, following the assignment rule of Section 2.2; review Section 2.2 for more detail about the rules for labeling nodes and assigning MAC addresses in a piconet. A shorter route is obtained by looking for the longest sequence that is a suffix of x1x2...xk and a prefix of y1y2...yl.
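The per-hop decision, together with the bookkeeping of which target bits remain to be shifted in, can be sketched as follows (our own reconstruction of the forwarding rule, not code from the paper; `labels` is assumed to be the universal prefix set of master labels, here the leaf labels of Figure 2).

```python
def next_hop(cur: str, remaining: str, labels: set[str]) -> tuple[str, str]:
    """One routing step: exactly one candidate exists because the master
    labels form a universal prefix set.  Returns (next label, remaining bits)."""
    base = cur[1:]                               # drop the most significant bit
    candidates = {base: remaining}
    if remaining:
        candidates[base + remaining[:1]] = remaining[1:]
        candidates[base + remaining[:2]] = remaining[2:]
    hits = [(lab, rest) for lab, rest in candidates.items() if lab in labels]
    assert len(hits) == 1
    return hits[0]

labels = {"0000", "0001", "0010", "0011", "010", "011", "100", "101", "110", "111"}
cur, remaining = "0000", "111"                   # route from 0000 toward 111
while cur != "111":
    cur, remaining = next_hop(cur, remaining, labels)
    print(cur)                                   # 0001, 0011, 011, 111
```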
For the purpose of illustration, consider routing a packet between two master nodes in the scatternet based on the de Bruijn graph illustrated in Figure 2. At each step, the current master node checks the labels of all of its out-neighbor master nodes, finds the unique one that matches one of the three candidate labels, and forwards the packet to it via the corresponding bridge slave node. In the last step, the packet can take a shorter path when the remaining suffix already matches the target, instead of passing through another intermediate master node.
Finally, we discuss how to route messages while the scatternet is being updated due to nodes leaving or joining the network. When a node joins the network, the piconet mastered by the token node may be split into two piconets. Clearly, the message can still be routed since the labels of the two newly created piconets are the children of this token node's label. Similarly, when two piconets are merged to create a new piconet, the label-based routing still successfully routes the packets. The remaining case is that when a node leaves, we may need to move a pure slave node w from the current token node's piconet to fill the space emptied by the departed node. When a message targeted to node w reaches the piconet mastered by the token node, node w has already been moved. To remedy this, we apply a mechanism similar to the mail-forwarding service provided by the post office: the master node will keep a record of the nodes moved to other piconets, together with their new labels, within a time window. When a message targeted for w arrives, the master node forwards the message to the new destination and also informs the source node of the new label of w. The source node will then cache the label of node w if it is frequently used. To decrease message forwarding, every master node could record the frequency with which a slave node receives messages from other nodes. When a pure slave node is visited frequently by other nodes, we switch its role with one of the bridge slaves with the same prefix and broadcast the new labels of these two nodes to the network. When we have to move a pure slave node to another piconet to keep the scatternet compact, we choose the least frequently visited pure slave node in the current piconet.
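The post-office style forwarding can be captured by a small time-windowed table kept at the master (an illustrative sketch; the class name and the window length are our own assumptions).

```python
import time

class ForwardingTable:
    """Per-master record of slaves moved to other piconets, kept only for a
    limited time window so in-flight messages can still be delivered."""
    def __init__(self, window_seconds: float = 60.0):
        self.window = window_seconds
        self.moved: dict[str, tuple[str, float]] = {}   # old label -> (new label, time)

    def record_move(self, old_label: str, new_label: str) -> None:
        self.moved[old_label] = (new_label, time.time())

    def resolve(self, label: str) -> str:
        """Current label for a destination; expired entries stop being forwarded."""
        entry = self.moved.get(label)
        if entry is None:
            return label
        new_label, moved_at = entry
        if time.time() - moved_at > self.window:
            del self.moved[label]
            return label
        return new_label      # forward, and the source can cache the new label

table = ForwardingTable()
table.record_move("0110,011", "0111,010")
print(table.resolve("0110,011"))          # 0111,010
```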
RELATED WORK
Zaruba, Basagni and Chlamtac [15] proposed two protocols for
forming connected scatternet. In both cases, the resulting topology
is termed a bluetree. The number of roles each node can assume
is limited to two or three. The first protocol is initiated by a single
node, called the blueroot, which will be the root of the bluetree. A
rooted spanning tree is built as follows. The root will be assigned
the role of master. Every one hop neighbor of the root will be its
slave. The children of the root will be now assigned an additional
master role, and all their neighbors that are not assigned any roles
yet will become slaves of these newly created masters. This procedure
is repeated recursively till all nodes are assigned. Each node is
slave for only one master, the one that paged it first. Each internal
node of the tree is a master on one piconet, and slave of another
master (its parent in the initial tree). In order to limit the number of
slaves, they [15] observed that if a node in unit disk graph has more
than five neighbors, then at least two of them must be connected.
This observation is used to re-configure the tree so that each master
node has no more than 7 slaves. If a master node has more than 7 slaves, it selects two of its slaves that are connected to each other, instructs one to be the master of the other, and then disconnects the latter from itself. Such branch reorganization is carried throughout the network
. However, whether this approach will terminate is not proved
in [15]. Tan et al. [14] proposed a similar method for single-hop
network. In the second protocol [15], several roots are initially selected
. Each of them then creates its own scatternet as in the first
protocol. In the second phase, sub-tree scatternets are connected
into one scatternet spanning the entire network. Notice that the tree
topology suffers from a major drawback: the root is a communication
bottleneck as it will be overloaded by communications between
the different parts of the tree. Obviously, the root node in the
tree-based scatternet is the bottleneck of the network and its congestion
is
, assuming that total traffic demand is a unit and is
uniformly distributed. In addition, dynamic updating that preserves
correct routing is not discussed in these protocols.
Law, Mehta and Siu [9] described an algorithm that creates connected
degree bounded scatternet in single-hop networks. The final
structure is a tree-like scatternet, which limits efficiency and robustness. A single-hop Bluetooth scatternet formation scheme based on
1-factors is described in [1]. However, piconets are not degree limited
in that scheme.
Salonidis et al. [13] proposed another topology construction algorithm
recently. It first collects neighborhood information using
an inquiry procedure, where senders search for receivers on randomly
chosen frequencies, and the detected receivers reply after
random backoff delay. Leader is elected in the process, one for
each connected component. Leader then collects the information
about the whole network, decides the roles for each node, and distributes
back the roles. In other words, basically, it is a centralized
approach. Thus, the solution is not scalable, and not localized.
Moreover, how to assign the roles is not elaborated in [13]. They also assume up to 36 nodes in the network. Another centralized solution
for single-hop networks, where the traffic between any pair
of nodes is known a priori, is described in [10].
Sun, Chang and Lai [11] described a self-routing topology for
single-hop Bluetooth networks. Nodes are organized and maintained
in a search tree structure, with Bluetooth ID's as keys (these
keys are also used for routing). It relies on a sophisticated scatternet
merge procedure with significant communication overhead for
creation and maintenance. Bluerings as scatternets are proposed in
[4]. Ring structure for Bluetooth has simplicity and easy creation as
advantage, but it suffers large diameter (i.e., the maximum number
of hops between any two devices) and large number of piconets.
The works most related to our dBBlue scatternet construction method are [2] and [7].
Barriere, Fraigniaud, Narajanan, and Opatrny [2] described a
connected degree limited and distributed scatternet formation solution
based on projective geometry for single-hop networks. They
assume that only slave nodes can act as bridges. They described
procedures for adding and deleting nodes from the networks and
claimed bounds on the communication cost and the computation cost of these operations in terms of the number of nodes n in the network. The degree of the scatternet can be fixed to any q+1, where q is a power of a prime number. However,
in their method, every node needs to hold information about the projective plane, and the master node who has the "token" needs to know the state of the projective scatternet (which label should be used for the newly arriving master and which existing nodes need to be connected to it). Moreover, the authors did not discuss in detail how to compute the labels for the new master and its slaves, or what happens when the number of nodes reaches that of a complete projective scatternet.
Notice that our dBBlue scatternet can be easily transformed to
support a Bluetooth network in which a piconet has any number
of slaves, while the method in [2] can only support piconets with q+1 slaves where q is a power of a prime number. Moreover, the dynamic updating cost of dBBlue is at most O(log n).
The construction of dBBlue scatternet is inspired by the method
proposed by Fraigniaud and Gauron [7] for constructing a network
topology for P2P environment based on de Bruijn graph. When a
node u joins the P2P network, it [7] randomly selects a node v in the de Bruijn graph and then creates two children nodes of v: one for u and one for v. This random selection of node v cannot be applied to a Bluetooth scatternet since it may create a de Bruijn graph with a node whose degree is larger than 7. It is not difficult to show that for a Bluetooth scatternet, we can only afford a de Bruijn graph whose node label lengths differ by at most 1. In this paper, we proposed
a novel method for assigning MAC addresses to nodes such that a
self-routing is still possible during the updating procedures when
node leaves or joins the network. The de Bruijn graph is used as
backbone of the scatternet in our dBBlue structure.
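To make the self-routing property concrete, the following is a minimal sketch of routing on a classical binary de Bruijn graph with fixed-length labels: each hop is computed locally by shifting one bit of the destination label into the current label, so no routing table is needed. The fixed-length labels and the helper name are illustrative assumptions; the sketch does not reproduce the dBBlue MAC-assignment scheme, which additionally copes with labels whose lengths differ by one and with Bluetooth master/slave roles.

```python
# Illustrative sketch only: self-routing on the binary de Bruijn graph B(2, k),
# where node u1 u2 ... uk has edges to u2 ... uk 0 and u2 ... uk 1.
# Fixed-length binary labels are assumed; dBBlue itself allows label lengths
# that differ by one and adds Bluetooth-specific role assignments.

def de_bruijn_route(src: str, dst: str) -> list[str]:
    """Return the sequence of node labels visited from src to dst,
    obtained by shifting the destination bits in one at a time (at most k hops)."""
    assert len(src) == len(dst), "this sketch assumes equal-length labels"
    path = [src]
    current = src
    for bit in dst:
        current = current[1:] + bit  # drop the oldest bit, append the next destination bit
        path.append(current)
    return path

if __name__ == "__main__":
    # Route from 1011 to 0010 in B(2, 4): at most 4 hops, no routing table needed.
    print(" -> ".join(de_bruijn_route("1011", "0010")))
```

A real implementation would also shortcut the walk when a suffix of the source label already matches a prefix of the destination label, so fewer than k hops are often needed.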
CONCLUSION
In this paper, we addressed the problem of scatternet formation for single-hop Bluetooth-based ad hoc networks with minimal communication overhead. We adopted the well-known de Bruijn graph to form the backbone of the dBBlue scatternet. The diameter of the dBBlue scatternet is O(log n), and we can find a path with at most O(log n) hops between every pair of nodes without using any routing table, where n is the number of nodes. Moreover, the congestion of every node is at most O(log n / n). We discussed in detail the method to locally update the dBBlue structure using at most O(log n) communications when a node joins or leaves the network; in most cases, the cost of updating the scatternet is actually only O(1). Our method can construct the dBBlue structure incrementally as nodes join the network one by one. Previously, no method could guarantee all of these properties, although some methods achieved a subset of them. The dBBlue scatternet also has a lower dynamic updating cost than the structure proposed in [2].
Notice that, instead of having three statuses for the token, we could require that the scatternet always remain in the expanding status. The scenarios for updating the scatternet when nodes join or leave the network then become simpler, but at a possibly higher updating cost: more merging and splitting of piconets will occur. We are currently investigating the tradeoffs among the three approaches described in this paper by conducting simulations with different models of nodes joining and leaving the network. We are also investigating scatternets formed on top of the butterfly structure [6] and comparing their performance with the one described here. Notice that the butterfly structure has a small constant node degree, which maps well to the degree requirement of a Bluetooth piconet.
REFERENCES
[1] S. Baatz, S. Bieschke, M. Frank, P. Martini, C. Scholz, and C. Kuhl.
Building efficient bluetooth scatternet topologies from 1-factors. In
Proc. IASTED Wireless and Optical Communications WOC, 2002.
[2] L. Barriere, P. Fraigniaud, L. Narajanan, and J. Opatrny. Dynamic
construction of bluetooth scatternets of fixed degree and low
diameter. In 14th ACM-SIAM Symp. on Discrete Algorithms (SODA),
pages 781-790, 2003.
[3] S. Basagni, R. Bruno, and C. Petrioli. Device discovery in bluetooth
networks: A scatternet perspective. In Proc. IFIP-TC6 Networking
Conference, Networking 2002, 2002.
[4] F. Cgun-Choong and C. Kee-Chaing. Bluerings - bluetooth
scatternets with ring structure. In Proc. IASTED Wireless and Optical
Communications WOC, 2002.
[5] N. de Bruijn. A combinatorial problem. In Koninklijke Nederlandse
Academie van Wetenschappen, 49, pages 758-764, 1946.
[6] D. Malkhi, M. Naor, and D. Ratajczak. Viceroy: a scalable and
dynamic lookup network. In Proceedings of the 21st ACM
Symposium on Principles of Distributed Computing (PODC), 2002.
[7] Pierre Fraigniaud and Philippe Gauron. The content-addressable
network d2b. Technical Report Technical Report TR-LRI-1349 (also
appeared in 22nd ACM Symp. on Principles of Distributed
Computing (PODC)), 2003.
[8] Jaap C. Haartsen. The bluetooth radio system. IEEE Personal
Communications, 7:28-36, 2000.
[9] C. Law, A.K. Mehta, and K.Y. Siu. Performance of a new bluetooth
scatternet formation protocol. In Proc. ACM Symposium on Mobile
Ad Hoc Networking and Computing MobiHoc, pages 183-192, 2001.
[10] D. Miorandi and A. Zanella. On the optimal topology of bluetooth
piconets: Roles swapping algorithms. In Proc. Mediterranean
Conference on Ad Hoc Networks MedHoc, 2002.
[11] M.T. Sun, C.K. Chang, and T.H. Lai. A self-routing topology for
bluetooth scatternets. In 2002 International Symposium on Parallel
Architectures, Algorithms and Networks (ISPAN '02), 2002.
[12] C. Petrioli and S. Basagni. Degree-constrained multihop scatternet
formation for bluetooth networks. In Proc. IEEE GLOBECOM, 2002.
[13] T. Salonidis, P. Bhagwat, L. Tassiulas, and R. LaMaire. Distributed
topology construction of bluetooth personal area networks. In Proc.
IEEE INFOCOM, 2001.
[14] G. Tan, A. Miu, J. Guttag, and H. Balakrishnan. Forming scatternets
from bluetooth personal area networks. Technical Report
MIT-LCS-TR-826, MIT, 2001.
[15] G.V. Zaruba, S. Basagni, and I. Chlamtac. Bluetrees - scatternet
formation to enable bluetooth based ad hoc networks. In Proc. IEEE
International Conference on Communications (ICC), 2001.
| scalable MAC assignment;scatternet formation;Low Diameter;ad hoc networks;self-routing;const updating cost;de Bruijn graph;Bluetooth;Network topology;Self-routing Scatternet;Bluetooth networks;Bruijn graph;equal traffic;easy updating;low diameter;single-hop |
66 | Development of E-commerce Statistics and the Implications | This paper analyzes the development of e-commerce statistics in some developed countries such as Canada, the U.S.A. and Japan, and puts forward several suggestions on how to set up an e-commerce statistics system in our country, taking our national conditions into account. | INTRODUCTION
Since the 1990s, the rapid development of e-commerce has had an extensive, profound and far-reaching influence on the economies of countries all over the world. E-commerce has become a chief trend of contemporary economic and social development. As a representative of the advanced productivity of the new economic period, its level of development has become an important measure of the modernization and comprehensive strength of countries and cities; it has become an important means of transforming the economic system, changing the mode of economic growth, promoting the upgrading of the industrial structure, raising the modernization level of cities and strengthening international competitiveness. Governments all over the world have therefore paid close attention to the development of e-commerce statistics.
Although informationization is developing very quickly in our country, a great disparity with the developed countries remains because of its relatively late start. Our country is still in a transitional period, facing the double task of informationization and industrialization. Therefore, in order to measure the development level of e-commerce promptly and accurately and to set up a sound system of e-commerce statistics, we must understand, absorb and introduce the theories and methods of e-commerce statistics from the main foreign countries, so that e-commerce statistics can become an effective guarantee for guiding and promoting e-commerce in a healthy way, integrating social resources and strengthening national power.
DEVELOPMENT STATUS OF E-COMMERCE STATISTICS IN THE WORLD
We have chosen some representative countries and analyzed the development of e-commerce statistics in each of them.
2.1 Definitions of e-commerce in the main developed countries
The definition of e-commerce is the standard on which e-commerce statistics is based, but there are various definitions of e-commerce in the world because it is viewed from different angles. It is therefore necessary for each country to adopt a definition that is distinct, standard, practical, broadly meaningful and measurable, suitable for every field and able to stand the test of time.
2.1.1 Definition of e-commerce in the OECD (Organization for Economic Cooperation and Development)
The OECD distinguishes broadly-defined and narrowly-defined e-commerce. Broadly-defined e-commerce covers electronic transactions in goods and services, whether they occur between enterprises, households, governments or other public or individual organizations, with a computer network as the intermediary; goods and services are ordered over the network, but payment and delivery need not take place online. Narrowly-defined e-commerce refers only to trade activity carried out over the Internet.
It is worth pointing out that the OECD definition of e-commerce originates from Canada's official statistical department.
2.1.2 Definition of e-commerce in Canada
The definition used by Canada's official statistical department is: e-commerce consists of transactions based on computer networks, including the transfer of ownership and of the right to use tangible and intangible assets. It covers B2B (business to business), B2C (business to consumer), B2G (business to government) and G2C (government to consumer); transactions taking place within an enterprise are not included in the e-commerce statistics.
2.1.3 Definition of e-commerce in the U.S.A.
The definition of e-commerce in the U.S.A. is set by the U.S. Census Bureau, which divides the field into three parts: e-commerce infrastructure, electronic business processes and e-commerce. E-commerce infrastructure is the economic facilities or equipment used to support electronic business processes or electronic transaction activities. Electronic business processes are the processes managed over computer networks in companies, government and other non-profit organizations. E-commerce refers to transactions in goods or services completed over computer networks.
2.2 Overview and Characteristics of the Main Countries' E-commerce Statistics
2.2.1 Overview of e-commerce statistical surveys
2.2.1.1 Overview of the Canadian e-commerce statistical surveys
E-commerce statistics in Canada is an official activity presided over by the government and implemented by Statistics Canada. To date, Canada has carried out four different e-commerce statistical surveys:
a) the "Net-banking operation and bank service statistical survey on Internet and e-commerce application in the financial sector", an irregular, stand-alone survey whose respondents are enterprises in the financial sector;
b) the "Annual statistical survey on Internet use in households", a fixed annual supplementary survey whose respondents are households;
c) the "Statistical survey on communication technology and e-commerce", an irregular supplementary survey whose respondents are enterprises classified under the North American Industry Classification System;
d) the "Annual statistical survey on e-commerce and related technology", a fixed annual supplementary survey whose respondents are also enterprises classified under the North American Industry Classification System.
2.2.1.2 Overview of the e-commerce statistical survey in the U.S.A.
The U.S.A. is one of the countries in which e-commerce, and e-commerce statistical surveys, were launched earliest. The U.S. Census Bureau is the principal organ responsible for the e-commerce statistical survey. Its annual programme consists of the annual sample surveys of commerce, manufacturing, retail trade and services, all of which use stratified sampling. E-commerce is covered by adding e-commerce questions to the existing questionnaires, except in the annual sample survey of manufacturing, which uses a supplementary questionnaire. The respondents are enterprises, and their e-commerce activity, business processes and sales are surveyed on the foundation of the existing investigations.
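As a hedged illustration of the stratified sampling mentioned above, the sketch below draws the same sampling fraction independently within each industry stratum of a business register. The strata, the fraction and the firm lists are invented for illustration; they are not taken from the actual design of any official survey.

```python
# Hedged illustration of proportional stratified sampling of enterprises.
# The strata, sampling fraction and firm lists are hypothetical.
import random

def stratified_sample(frame, fraction, seed=0):
    """Draw the same sampling fraction independently within each stratum."""
    rng = random.Random(seed)
    sample = {}
    for stratum, firms in frame.items():
        k = max(1, round(len(firms) * fraction))  # keep at least one firm per stratum
        sample[stratum] = rng.sample(firms, k)
    return sample

if __name__ == "__main__":
    frame = {
        "retail": [f"retail_firm_{i}" for i in range(200)],
        "manufacturing": [f"manufacturing_firm_{i}" for i in range(120)],
        "services": [f"service_firm_{i}" for i in range(300)],
    }
    for stratum, firms in stratified_sample(frame, fraction=0.05).items():
        print(stratum, len(firms), "firms sampled")
```

In practice, official surveys typically sample large firms with certainty and smaller firms at lower rates, and weight the results back to the population; this sketch deliberately omits those refinements.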
2.2.1.3 Overview of e-commerce statistical surveys in other countries
2.2.1.3.1 Overview of the e-commerce statistical survey in Japan
In Japan, the department in charge of the e-commerce statistical survey is the Statistics Bureau of the Ministry of Internal Affairs and Communications, but other departments also take part, such as the statistical offices of the Cabinet Office and of the Ministry of Economy, Trade and Industry. As a result, there are more than forty kinds of official survey touching on e-commerce; they involve every aspect of e-commerce but differ greatly in purpose, frequency and content. These surveys are organized around three groups of respondents: enterprises, government and households.
2.2.1.3.2 Overview of the e-commerce statistical survey in South Korea
The statistics bureau of South Korea began official e-commerce statistical surveys in April 2000. The surveys concentrate mainly on B2C (business to consumer) and B2B (business to business) transactions; the survey of B2G (business to government) transactions started slightly later, in the first quarter of 2001.
2.2.2 Characteristics of the e-commerce statistical surveys in each country
a) The organizers of the e-commerce statistical surveys in the above-mentioned countries are all official departments, sometimes working in cooperation with other relevant government departments (as in Japan). Statistical surveys presided over by the government not only gain in fairness and reliability but also carry authority.
b) The surveys are almost never dedicated surveys but supplementary ones. The main reasons are the high cost of dedicated surveys and the fact that the e-commerce statistical systems of these countries are not yet mature enough to support them.
c) The above-mentioned countries determine the content of their surveys not only by consulting the content recommended by the OECD but also by considering the development level and characteristics of their national e-commerce. It is worth pointing out that the survey indicators used in Singapore, Canada and the U.S.A. are comprehensive and cover the priority survey content recommended by the OECD.
d) Most surveys take the annual survey as the core, but there are also monthly, quarterly, census and irregular surveys. The industries covered by the monthly and quarterly surveys are generally few, for example the monthly retail trade sample survey in the U.S.A. and the survey on household consumption trends.
e) Most countries adopt sample surveys, though a census, or a census combined with a sample survey, is also used; sampling is the main method and the other two are used less.
IMPLICATIONS OF E-COMMERCE STATISTICAL SURVEY IN OUR COUNTRY
E-commerce in our country is still at an elementary stage, and the e-commerce statistical survey is only just starting. Some semi-official or unofficial departments and organizations have attempted e-commerce statistical surveys, but there is as yet no formal, comprehensive, official survey of e-commerce in our country. Examples include the "Statistical Reports on the Internet Development in China", the "CII research and calculation of a total e-commerce index system for China", the "Statistical survey on intranet and e-commerce development level" and the "Investigation of e-commerce development in enterprises".
Most of the above-mentioned surveys are irregular, some even one-off; they lack unified planning and do not form a system, except for the "Statistical Reports on the Internet Development in China", which is conducted regularly and has established its own system to a certain extent. Meanwhile, unofficial surveys, being non-mandatory, are prone to systematic bias and utilitarian motives, which can affect the fairness, accuracy and representativeness of the results.
3.2 Implications for the e-commerce statistical survey in China
Drawing on the experience of the foreign countries that carry out e-commerce statistics and on the development of e-commerce statistics in our country, we consider that, to set up a comparatively complete e-commerce statistical survey system, we should accomplish at least the following points:
3.2.1 Attach importance to the definition of e-commerce
The kind, scope and respondents of a survey are all fixed according to the definition of e-commerce, which is the prerequisite of any e-commerce statistical survey. There is no authoritative definition of e-commerce in our country, so the key problem we meet when carrying out an e-commerce statistical survey is how to define e-commerce. We consider that an open principle should be followed when defining e-commerce, given that it appeared late and is growing rapidly, so that the definition can be improved continually as e-commerce develops.
3.2.2 The government should take charge of the e-commerce statistical survey
From e-commerce statistics we can understand the development of e-commerce promptly and accurately, find the problems existing in e-commerce and predict its development trend. E-commerce statistics is clearly important for the sound development of e-commerce: a fair and accurate statistical survey can promote e-commerce, while a one-sided and utilitarian one will mislead or even hamper it.
However, the e-commerce statistical surveys of our country currently lack authority and mandatory force, which affects their fairness and accuracy. The statistical survey of e-commerce in our country should therefore be included in the official statistical development plan as early as possible, and an official e-commerce survey system should be set up, in order to give the e-commerce statistical survey authority and to promote its development.
3.2.3 Accelerate research on e-commerce statistical theory
The first problem to be considered in research on the statistical theory of e-commerce is continuity with traditional statistics. E-commerce statistics did not appear out of nowhere; it is the extension of traditional statistics onto the network, so the basic theories of traditional statistics remain suitable for the e-commerce statistical survey.
Secondly, we should carry out further research on the statistical methods, statistical calibre and statistical scope of e-commerce, and then set up an index system for e-commerce statistics as soon as possible.
Moreover, e-commerce crosses regional boundaries. We should try our best to keep our research on e-commerce statistical theory in harmony with the rest of the world, for a comprehensive and sound system of e-commerce statistics needs the joint efforts of countries all over the world.
3.2.4 The service provided by the e-commerce statistical survey should be both comprehensive and targeted
The e-commerce statistical survey should serve not only the macroscopic strategic policies of the country but also the micro-level operation of enterprises. At the same time, different surveys should be designed for different respondents in order to offer a personalized statistical service. Only in this way can we provide a good development environment for e-commerce and realise the value of the e-commerce statistical survey.
| Development stage;Definition of E-commerce;Survey;Authority;Statistics;Statistical methods;Measurement;Implications;E-commerce Statistics;Statistical Survey;China;E-commerce |
67 | Development through communicative action and information system design: a case study from South Africa | Many authors have recognised the importance of structure in shaping information system (IS) design and use. Structuration theory has been used in IS research and design to assist with the identification and understanding of the structures in which the IS is situated. From a critical theoretical perspective, focusing on Habermas' theory of communicative action, a community-based child health information system was designed and implemented in a municipality in rural South Africa. The structures which shaped and influenced the design of this IS (the restructured health services and social tradition) are explored and discussed. From this case study the implications of using IS design as a developmental tool are raised: namely the development of a shared understanding, the participation of key players and the agreement on joint action. | INTRODUCTION
Many authors [Walsham and Sahay 1996; Walsham and Han 1991; Jones 1997; Rose 1999; Orlikowski 1992;
Orlikowski and Baroudi 1991; Orlikowski and Robey 1991] have recognised the importance of structure in shaping
information system (IS) design and use. Structuration theory has been used in IS research and design to assist
with the identification of the structures in which they are situated. Using this meta-analysis tool, information
systems have been used to redefine and/or reinforce some of these structures. The IS design process is particularly
important, not just in shaping the structures, but also in terms of understanding what structures exist and how
they were formed.
Critical approaches to IS examine those structures with the perspective of questioning and changing some of
them. Critical social researchers seek to emancipate people by finding alternatives to existing social conditions
as well as challenging taken-for-granted conditions. In particular, Habermas [1987] examines communication and
how through striving for an ideal speech situation these structures can be challenged. In the process of IS design
communication is especially important, as is who participates, and how.
In this paper the author explores the existing structures which have contributed to the accessibility, or as the
case may be inaccessibility, of the health services in the Okhahlamba municipality, KwaZulu-Natal, South Africa.
Through the design of the community-based child health information system these structures were explored and
addressed throughout the design process. Communication and participation were integral to the process, as well
as the recognition of the importance of the context in which the system is designed.
The rest of this paper is structured in the following manner. The following section looks at what is meant
by structure, the process of structuration and its application to IS design. The third section looks at critical
social theory in IS design, in particular Habermas' notion of communicative action. The fourth section outlines
the existing structures in a community in KwaZulu-Natal that were important in shaping the IS design process.
The fifth section explores how the process of IS design acknowledged and challenged these structures and the
last section discusses the implications for IS design as a developmental tool.
Author Addresses:
Elaine Byrne, School of Public Health, University of the Western Cape, PBag X17, Bellville, 7535, South Africa,
[email protected]
Figure 1. Dimensions of duality of structure.
IS DESIGN AND STRUCTURATION
In this paper structure is regarded as 'Rules and resources, recursively implicated in the reproduction of social
systems. Structure exists only as memory traces, the organic basis of human knowledgeability, and as instantiated
in action' [Giddens 1993, p. 377]. That is, through action, based on rules and resources in people's minds, structures
in society are produced and reproduced. The rules and resources drawn upon in the production and reproduction
of action are simultaneously the means of system reproduction (this is what Giddens refers to as the 'duality
of structure'). The rules can be viewed as generalised procedures of action and human agents are aware and
knowledgeable of these rules, but may not know what the outcome of that action will be because action can have
both intended and unintended consequences. The resources are both authoritative (coordination of the activity
of human agents) and allocative (control of material aspects of the natural world), so both human and material
resources are included.
The process of structuration involves knowledgeable actions of human agents discursively and recursively
forming the sets of rules, practices and routines which, over time and space constitute structure. Thus agents
and structures are not seen as independent, but as a duality whereby structure is relied upon in human actions,
and in so doing structures are produced or reproduced. Over time these social practices becomes reasonably
stable and routines develop. Giddens [1993, p. 29] breaks down social structure and human interaction into three
dimensions which are interlinked by three modalities as illustrated in Figure 1.
When human actors communicate, they draw on interpretative schemes to help make sense of interactions.
At the same time those interactions reproduce and modify those interpretative schemes which are embedded in
social structure as meaning or signification. Similarly the human actors allocate resources through use of power,
and produce and reproduce social structures of domination. Moral codes or norms help determine what human
agents can sanction and thus produce and reproduce social structures of legitimation. It is useful to separate
structure and interaction into these three dimensions for the analysis of structure, but the dimensions are interlinked [Rose 1999].
The design and use of information systems are shaped by the very structures within which they are situated,
but IS can also be used to help define and redefine these structures. By exploring each of the above dimensions
in the process of IS design, IS design can be used as a tool for development by refining the structures to include
the views and values of those currently disadvantaged by the existing structures. Through a participative and
reflective process in IS design, cultural and traditional norms which influences human action can be explained,
understood and addressed. The design process, and the IS itself, can improve communication and encourage
reflection and change interpretative schemes. Through the process of IS design and reflecting on the situation
the excluded can be empowered, which redefines the power and resource structures. In summary IS design can
define and refine structures by understanding and incorporating all the dimensions of the duality of structure in
the design process.
Structuration theory has been used quite widely in IS. Rose [1998] conceptualises the use of the theory in IS
for three different purposes: analyse; theorise and operationalise. Walsham and Han [1991] analyse literature
under topics of operational studies, meta-theory and specific concepts used, as well as outlining structuration
theory. Jones [1997] analyses the use of structuration theory in an attempt to reconstruct theory to accommodate
technology. He further explores the application of the theory as an analytical tool, the use of the theory as a meta-theory, and the use of concepts from the theory.
In an attempt to theorise aspects of the IS field using structuration theory Orlikowski and Robey [1991], apply
the fundamentals of structuration theory to help understand the relationship between information technology
and organisations. In a later article Orlikowski [1992] developed her structurational model of technology to
understand the relationship between information technology and institutions. She recognises that technology
cannot determine social practices, but can condition them and that technology in conditioning social practices
is both facilitating and constraining.
In terms of empirical studies, Walsham [1993] provides a number of case study analyses which cover issues of
IS strategy, development, implementation and evaluation in three different organisations. Walsham and Sahay
[1996] use structuration theory, with actor-network theory, to investigate problems in developing Geographical
Information Systems in an Indian government department. In a similar manner this paper, from a critical social
perspective, uses structuration theory to highlight two key aspects of existing structure which were addressed in
and affected the process of designing the IS. The meaning of a critical social perspective is provided in the next
section, before section 4 describes the key structural aspects of the case study.
CRITICAL SOCIAL THEORY AND IS DESIGN
Critical social researchers by their very presence influence and are influenced by the social and technological
systems they are studying. 'For critical social theorists, the responsibility of a researcher in a social situation
does not end with the development of sound explanations and understandings of it, but must extend to a critique
of unjust and inequitable conditions of the situation from which people require emancipation' [Ngwenyama and Lee 1997, p. 151]. Critical social theorists seek to emancipate people; they are concerned with finding alternatives
to existing social conditions as well as challenging taken-for-granted conditions. Critical social theorists view
people, not as passive receptacles of whatever data or information that is transported to them, but as intelligent
actors who assess the truthfulness, completeness, sincerity, and contextuality of the messages they receive.
Adopting a critical social theoretical perspective to IS design is not new. In relation to IS research Ngwenyama
[1991] gives an in-depth treatment of critical social theory. Ngwenyama and Lee [1997] approach research on
communication richness in computer mediated communication from a critical social theoretical perspective.
Hirschheim and Klein [1994] deal with a critical approach to qualitative research.
Habermas [1987] suggests that critical social theorists should initiate a process of self-reflection among human
actors, but it is only participants in the community that can select the appropriate political action. His theory
of communicative action notes that all social action assumes a basic set of norms. These norms allow all actors
to express themselves fully and openly. They also imply that all actors accept the outcome of open rational
argument. According to the theory of communicative action, breakdowns in communication occurs when actors
fail to adhere to these norms. There have been numerous studies which refer in particular to the theory of
Habermas. Lyytinen [1992] has explored the theory of Habermas to analyse systems development. Hirschheim
et al. [1996], using Habermas' theory of communicative action, propose a framework for the intellectual trends in
IS development research.
In this study, Habermas' theory of communicative action and the notion of the 'ideal speech situation' are used to
explore how effective striving for its attainment is as a transformation strategy. My study uses aspects of critical
social theory to examine how community action can be strengthened or changed by exploring the structures which
enable or constrain that action. Communication, power and norms are key in trying to grasp an understanding
of that action. Fundamental to this exploration is the belief that as intelligent and knowledgeable agents, human
actors can, within limits, choose to act in accordance with or against societal norms.
SITUATION IN OKHAHLAMBA MUNICIPALITY, UTHUKELA DISTRICT, KWAZULU-NATAL, SOUTH AFRICA
The existing district health information system in South Africa excludes children and adults that cannot, and/or
do not, access the services at the health facilities (clinics, community centres, mobiles and hospitals). Those
who are most vulnerable and socially excluded, and need the health support systems the greatest, are the very
ones not accessing the health services. Policies are formed and resources allocated to the community based
on the information they receive. Since the vulnerable are excluded from the formal IS, they are further and
systematically excluded from these policy and resource decisions.
With the impact of HIV/AIDS children have increasingly become an excluded and more vulnerable group.
This exclusion and vulnerability of children can be tackled on two interconnected levels. The first is through
the creation of awareness of the situation of children and the second through the commitment and action of
government and society to address this situation. The first can be supported by designing an information system
for action - an information system that can be used for advocating and influencing decisions and policies for the
rights of these children. So IS design can be used as a developmental tool.
Since protecting and improving the health of the children of the entire district is the aim of the district health
system, research was conducted on how to develop a community-based health information system that could
support a comprehensive district health information system. The research was conducted in Okhahlamba as a
component of the child health programme of the uThukela district child survival project and the department of
health. Okhahlamba is a municipality of the uThukela district lying in KwaZulu Natal on the eastern coast of
South Africa. The primary objective of developing a community-based information system is to assist community
members in their decision-making regarding the health of their children. On a secondary level it aims to establish
interfaces with the formal health facility information system to enable district managers to use information from
the whole district to make informed decisions and policy changes.
After a review of the district's health information system and a community meeting on monitoring and evaluation, community members, as well as district government staff, recognised their need for a community-based child health information system. To understand what the information needs were, who should be involved in the information system and the format in which the information should be communicated, a total of 10 interviews, 16 focus group discussions and 1 meeting took place between July and September 2002. From the field work there emerged a greater understanding of the meaning of 'well-being' and 'at-risk' for a child, what factors and practices contribute to these situations, how the situations can be measured and, based on what action can be taken, who the information should go to. Consequently a community-based child health information system has been
integrated into the district health information system.
In this section two key aspects of structures which address, or have contributed to, the exclusion of children
are outlined, namely restructuring of health services and status of child health, and social traditions. The first
aspect provided an opportunity for change and reflection on the current role and function of the IS whilst also
providing an understanding of the exclusion of segments of the population. The second aspect again provides an
understanding of the position of women and children in society which impacts on IS design as well as presents
some challenges in the design process. [For more details of the child health programme and the research see
[uThukela District Child Survival Project 2002; 2000a; 2000b; 1999a]]
4.1 Restructuring of health services and status of child health
After 1994 the national Health Plan for South Africa and the Reconstruction and Development Programme
outlined that a Primary Health Care (PHC) approach is the underlying philosophy for the restructuring of
the health system. Crucial to this is the role of the community in the development of a district health system
emphasising the movement from a traditionally vertical curative based health system to a newer client centred and
preventive based health system. In addition, more recently, there has been the move towards the decentralisation
of health service delivery (along with other basic social services) to local authorities from the department of health.
The newly established structures, such as the community health committees and community health forums, have
meant a renegotiation of roles and responsibilities at the district level. This requires active communication
between the parties involved to ensure consent on the new roles and responsibilities of all local government staff
[uThukela District Child Survival Project 2000b; 2000a].
Since 1994 children have benefited from the move to PHC. However the free health care policy is not without
its fair share of problems. Due to the emphasis on PHC, there has been a 30% increase in clinic attendance and
a 2% increase in hospital attendance in the province of KwaZulu-Natal. The additional drugs and personnel
needed for the increased attendance at the clinics was not budgeted correctly. As a result the quality of services
in terms of shortages of personnel and drugs has been compromised as well as putting severe strain on the budget.
Clinics in particular have struggled to accommodate the increased number of clients. Clients also complain that
hospital-based health workers are often unsympathetic to their needs. [uThukela District Child Survival Project
1999b]
Poorer children living in rural areas have poorer access to PHC facilities than children living in the wealthier
more urbanised areas. They have greater distances to walk and fewer health personnel to cater for them.
KwaZulu-Natal is one of two provinces with especially poor client-to-clinic ratios (23,000 clients per clinic) and
in 1995 only 54.3% of households in KwaZulu-Natal were within 5 kms of medical care, the second lowest in the
country.[Crisp and Ntuli 1999]
Child health indicators point to the lingering effects of apartheid's racial, geographic and socio-economic
policies. Just over half of all children aged 12-23 months in KwaZulu-Natal are not immunised, though 62.2%
have their road to health cards. This indicates at least one contact with the health services, but this contact was
not sustained as the immunisation schedule has not been completed. The infant mortality rate for KwaZulu-Natal
has been estimated at 52.1/1000 and the under-five mortality rate at 74.5/1000.[Crisp and Ntuli 1999]
This situation is exacerbated by disparities in access to basic infrastructure. Access to potable (drinkable) water
and sanitation are often critical to improving child health outcomes. The government has however committed
itself to increasing access to water and sanitation. In spite of two major dams and several springs in the area,
a serious shortage of water for agriculture and clean drinking water has impacted nearly every household, and
influenced the health status of the area. The cholera epidemic in 2001 is evidence of this poor access. A situational
analysis for the Okhahlamba municipality completed in July 1998 estimates that only 25% of the population
live within 15 minutes walking distance of safe water, and only 25% have adequate sanitary facilities. Transport
remains poor, particularly during rains when rivers become impassable. [uThukela District Child Survival Project
1999b]
4.2 Social traditions
Strong Zulu cultural and traditional values exist in the Okhahlamba municipality. Traditional leaders are highly
respected, though there is some controversy over the roles and powers being eroded with the formation of the new
local government structures. Grandmothers and traditional healers are often the first persons to be consulted in
times of illness and many locally available remedies and treatments are used and practiced.
Grandmothers can have quite a powerful decision-making influence at household level. However, women in
general tend to be dependent on males for income and have very little access to independent means of livelihood.
Household responsibilities also make women subject to 'time poverty'; that is, it is not uncommon for most women
in this rural area to work ten hours a day, making it a hardship to travel to seek health care for themselves
or their children. Much of each day involves several hours of strenuous manual labour, hauling water and
firewood, and performing agricultural work. Women, including mothers, grandmothers and older 'girl children',
are predominantly responsible for childcare [uThukela District Child Survival Project 1999b]. However if the
health-seeking or care decision involves any financial decisions the head of the household, which is usually a
man, will need to be consulted in order to make the final decision. This process often causes a delay in a child
attending a clinic as money for transport and alternative child care for the siblings would need to be sourced.
Through the existing patriarchal social system women are particularly at risk from HIV/AIDS. Contributing factors include sexual subservience to men; a higher risk of transmission associated with partners' migrant labour in the cities; differential access to information and resources for prevention; and the fact that women often remain with spouses who are HIV positive rather than vice versa. Women in their twenties have the highest rate of HIV infection nationally,
but between 1997 and 1998 the HIV prevalence among teens attending antenatal clinics jumped over 65%, from
12.7% to 21%. With high teenage fertility rates this picture is unlikely to change in the near future. In 1998,
the provincial fertility rate was 3.3%, and the provincial teenage pregnancy rate was 13.8%. In Okhahlamba/
Mtshezi municipalities the average teenage pregnancy rate for young women delivering in facilities in 1999 was
22.9%, significantly higher than the provincial rate [uThukela District Child Survival Project 1999b]. Children
are particularly susceptible to the ravages of the HIV/AIDS epidemic through high rates of mother to child
transmission and an increasing number of AIDS orphans and consequent child headed households.
ASPECTS OF THE PROCESS OF DESIGNING A COMMUNITY-BASED INFORMATION SYSTEM IN OKHAHLAMBA, KWAZULU-NATAL, SOUTH AFRICA
One of the fundamental steps that needed to be addressed before addressing the situation of children, and how
this was reflected in information systems, was a paradigm shift. It required a shift from the older focus on
curative centre based service delivery to the newer health services approach which focuses on prevention, clients
and quality. To support this paradigm shift the project adopted a new approach of transformational thinking, or a future-focused approach, developed in the business sector but now also being integrated into health systems.
The approach focuses on working towards holistic well-being for all, rather than just solving health associated
problems. Through community meetings and discussions the community determined a vision for their children:
'To achieve optimal health, growth, development and well-being of children within the family and community in
the uThukela Health district'.
The implication of the paradigm shift for the IS was that, though it was important to measure children's physical condition, it was also important to measure how far towards our vision we are. So instead of saying 80% of our
children are immunised, we would say that we still need to immunise 20% of our children. This approach reflects
what we still need to do to attain our vision and thus, hopefully, stimulate action. Adopting a forward looking
perspective also stresses the importance of the context we are presently in and the importance of measuring
changes in that context. Monitoring the context and acting based on that information, should lead to a situation
where most children in the future would find themselves in a state of 'well-being'.
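As a small, hypothetical illustration of this forward-looking way of reporting, the sketch below converts conventional coverage figures into the 'distance still to travel' towards the vision. The indicator names and percentages are invented and are not the uThukela figures.

```python
# Hedged illustration: report the gap to the vision rather than the coverage achieved.
# Indicator names and figures are hypothetical.

def gap_to_vision(coverage_percentages):
    """Convert 'percentage achieved' into 'percentage still to reach'."""
    return {name: round(100.0 - pct, 1) for name, pct in coverage_percentages.items()}

if __name__ == "__main__":
    coverage = {"immunised": 80.0, "growth monitored": 65.0, "caregiver present": 90.0}
    for indicator, gap in gap_to_vision(coverage).items():
        print(f"We still need to reach {gap}% of children on '{indicator}'.")
```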
5.2 Sharing of information with key actors
If people are to act or reflect on information received, that information needs to be relevant and communicated in a culturally sensitive and appropriate manner. In terms of a community-based information system for children, an important step in the IS design process was deciding who should participate in the process. The main role players and duty bearers need to be included, as they are in the best position to change or influence the context in which the child is placed. (Role players have a role to play in children's lives, while duty bearers are those people who are responsible and obligated to fulfil children's rights.) In the case study these key people were: the community health workers, parents, family members, early childhood and creche teachers, home-based carers, caretakers, social workers, health facility staff, clinic health committees, councillors, government officials and staff from external organisations. This indicates that a multi-levelled and multi-sectoral group affects the situation of children at community level.
What was also important was a common understanding by all parties on what was meant by 'well-being' and
'at-risk' as the monitoring of these situations and conditions would be important if we were to measure whether
we were on the right track to attaining our vision. Meanings of 'well-being' and 'at-risk' were gathered through
focus-group discussions, interviews and meetings with all the role players and duty bearers. This common
understanding was translated into common data definitions in the community-based, as well as in the health
facility, information system.
A review of the existing data sources and flows was conducted based on the assumption that information flows
are a key element of dialogue between providers and consumers of health services. One important conclusion
from this review was that some of the data collected through the current district health information system is
valid and useful, but is not getting to the people who can act upon or use it. As one project leader mentioned we
need to look at how data is flowing and the possibility of establishing 'feedback pathways' for this data. There
are many of these pathways at different levels, but the one between community based workers and community
forums is core for a community-based health information system. This level of feedback was entirely absent from
the district health information system in Okhahlamba.
It is also interesting to note what was absent from existing data sets, yet what key role players and duty
bearers felt were important in monitoring the situation of their children. Data items relating to the context in
which a child is being reared are mostly excluded. Many of the current indicators focus on the condition the
child is currently in, such as having immunisation or not, and not the context that caused the child to be in that
situation, such as no caregiver to take the child to the clinic. But exclusion is a process and to prevent the child
becoming excluded requires analysing the situation of the child throughout that process. Measures for context,
such as happiness, playfulness and communication are more intangible and therefore difficult to develop as data
items. However through the new observation tools developed by and for the community health worker these
measures are now included. So data items on the presence of a caregiver, drug and alcohol abuse, cleanliness of
the household for example, are now included as indicators of 'at-risk'. This observation tool is used as part of
the dialogue between the health worker, who is a trusted and respected family advisor, and the household. The
results from the aggregated monthly data is shared through role play, song, dance, drawings and histograms in
the community quarterly meetings. The act of sharing information establishes networks of people at community
level who are responsible for the care of the children. These networks form the basis for communication.
5.3 The communication loop
In terms of capacity to act, or to make decisions, most respondents, in the research undertaken, felt that they
could act if given appropriate information and if key role players were included in the communication loop with
one another. The visioning exercise started a communication process, but this needed to be developed into more
formal communication structures. Communication was needed with other levels of government. Building on
the recent development of clinic health committees and the governments' appointment of the community health
workers in the KwaZulu-Natal province, communication loops were developed. These loops are described below
at three levels: household, community, and district.
--Household level: Following on from a discussion on how to measure the more intangible aspects, a standardised observation checklist was developed. The checklist is used as a communication tool with household members.
Based on the community health worker's assessment a number of choices or options to solve any of the problems
identified is given to the household. The community health worker could facilitate the choices, such as contact
with certain services, if requested to do so, but the final decision lies with the household. The assessment is
used as an empowering tool, rather than as a means of inspection. These visits assist the child caregivers in
terms of their knowledge of child care and health seeking behaviour within their household. The visits also
provide the mother or caregiver with a mediator between them and health facilities as well as a mediator
between them and their family. Therefore issues of access to basic social services could be addressed.
--Community level: The community health workers, with the assistance of their supervisors (community health
facilitators), conduct village health days for discussion of broader issues affecting the community served by the
clinic. Bar graphs, role-plays, song, poetry and dance are used as these methods seem to work very well. These
meetings form the quarterly community health meetings, that were suggested in the course of the field work.
Members of the community and the clinic health committee, health facility staff, community health worker,
school children and other key people attend the meetings. More people now have access to the information
they requested and in a format that is easy to understand. The village health days also provide a forum for
reflection and discussion.
--District level: Communication and information flow between community and district involves combining data from various sources to provide a comprehensive database for the district (a minimal aggregation sketch follows this list). Important for the collation of this data is the use of the same data definitions in the different data sources. The collation is done through the district information officer, as her office already receives this data from the different sectors. A summary of the district data is distributed every quarter. The content of the summary sheet is regularly determined in consultation with the clinic health committees and through feedback on the village health days. Existing local government structures, community and clinic committees, have already established clear communication channels with higher levels of local government. The feedback from these meetings is sent through these structures when needed. Thus a comprehensive picture of child health in the district is achieved.
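The aggregation sketch referred to in the district-level item above is given here: monthly community health worker returns that share the same data definitions are rolled up into a quarterly summary per clinic area. The field names and records are hypothetical and do not describe the actual uThukela database.

```python
# Hedged illustration: roll up monthly community health worker (CHW) returns
# into a quarterly district summary. Field names and records are hypothetical.
from collections import defaultdict

INDICATORS = ("children_seen", "children_at_risk", "no_caregiver")

def quarterly_summary(monthly_returns):
    """Sum each shared indicator per clinic catchment area across the quarter."""
    summary = defaultdict(lambda: defaultdict(int))
    for record in monthly_returns:
        area = record["clinic_area"]
        for indicator in INDICATORS:
            summary[area][indicator] += record.get(indicator, 0)
    return {area: dict(counts) for area, counts in summary.items()}

if __name__ == "__main__":
    returns = [
        {"clinic_area": "clinic_A", "month": "2002-07", "children_seen": 120,
         "children_at_risk": 14, "no_caregiver": 3},
        {"clinic_area": "clinic_A", "month": "2002-08", "children_seen": 131,
         "children_at_risk": 11, "no_caregiver": 2},
        {"clinic_area": "clinic_B", "month": "2002-07", "children_seen": 98,
         "children_at_risk": 9, "no_caregiver": 1},
    ]
    print(quarterly_summary(returns))
```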
In summary, with the restructuring of the health services there was the need for a paradigm shift, before
addressing the review and design of a community-based child health information system. This shift was from the
older more curative health service approach to the newer client and service focused approach of primary health
care. With the newly established local clinic and community health committees this offered an opportunity of
new people coming into the health services with a new vision and who were also willing to be involved in the IS
design process. Furthermore the newly formed local government structures have established clear communication
channels with higher levels of local government. The feedback from quarterly community health meetings could
be sent through these channels and forms part of the health information flow and communication loop.
Challenges around the position of women in society impacted on decisions regarding participation. However, as women are the main carers of children, they were involved in the process without question. Furthermore, as some of the key positions in the community are occupied by men, it was felt that they too needed to participate in the design process, since their positions were influential in terms of the situation of children in the community.
The dialogue initiated in the design process continues through the community health quarterly meetings which
provide an opportunity for dialogue to take place at community level. At the household level the community
health workers role is to empower the household in its health seeking and caring practices. This is done through
household visits and providing the appropriate education at the appropriate time, for example if a child has
diarrhoea the conversation would be around what to do for the child with diarrhoea. The community health worker
also plays the role of mediator - mediator between households and the community forum and also between the
caregiver and the rest of the family. Through the supportive role of the community health worker the position
of women and children will not change in society, but their views on health and the care of children will be
supported and heard.
The process of IS design in the case study supported and questioned two key aspects of structure, namely the
restructuring of health services and the status of child health and social traditions and their implications on the
process of the design. The next section explores what implications the use of such an approach has for IS design.
DISCUSSION: IMPLICATIONS FOR IS DESIGN
The implications for IS design have been categorised into three main areas: the need for a shared understanding,
the need for participation of key people and the need for agreement on joint action.
6.1 Shared understanding
If health IS design is to be used in a developmental context, there needs to be agreement between those who deliver health care and those who receive the services on the design and purpose of the health service. From
our case study the importance of having a common vision for the health services was seen as an important
first step in this direction, especially given the restructuring of the health services and the adoption of a PHC
approach. Creation of this vision and shared understanding necessitates communication between the designers of
the system, the users of the IS as well as the users of the health system. The process of IS design is important for
establishing the relationship between the users and providers of health care, as reaching agreement on subsequent
action that needs to take place involves both parties working together.
The objective of communicative action is to achieve mutual understanding. In this case study mutual understanding
was reached on a vision for the children, how to measure progress towards this vision, and who needs to be involved in
that process. IS design should be '. . . concerned with achieving and maintaining mutual understanding
. . . among all those who are involved in a coordinated organizational situation . . . Organizational actors involved
in communicative action depend on a common language and a shared understanding of the organizational context
in order to enact meaning from each other's communicative actions' [Ngwenyama and Lee 1997, pp. 158-159].
IS use in developmental contexts can go beyond communicative action and be an enabler of discursive action.
Discursive action is intended to achieve or restore agreement for collective action. It is 'oriented toward achieving
or restoring agreement and redeeming validity claims. Discursive action is initiated when organizational actors
need to achieve agreement for joint action. In such a situation, the individuals would generally engage each
other in a debate of the issues until they agree on a course of action' [Ngwenyama and Lee 1997, p. 155]. However,
there needs to be a common medium of communication, agreement on roles and responsibilities, and terms and
conditions set for the means of discourse. A common understanding was needed in this case on what was meant by
'at-risk' and 'well-being' children and how to measure the situation of the child.
The process of IS design can create an environment where people can express themselves, where understanding
of various roles can be agreed, where responsibility can be taken, and where action using available information
occurs. However, unless we explore and change the structures in which a person operates, e.g. the position of
children and women in society, it is difficult for an actor to engage either in reflection or in discursive
action.
6.2 Participation of key players
Reaching a common understanding between the users and providers of the health services is impossible without
their joint participation. Participation of the excluded increases transparency and opens officials and other
responsible parties to dialogue and wider scrutiny by the citizens they serve. Underlying power differences
between different actors influence the interaction and negotiation between them (both within the community
and between the community and outside groups), and this can influence whose 'interests' are explored and served
in information systems. The social dynamics and power relationships that underlie and constitute the actual
practice of the information system need to be made explicit.
In this research the unequal nature of social relationships and positions between different actors and
institutions was recognised from the outset. Forums were established that suited the needs of the various groups.
Discussions were facilitated by people who were familiar with the area and who had an understanding
of the norms and values of that society. In the initial stages, because of these differentials in status and roles
within the community, groups comprising, for example, mothers, councillors and facility staff met separately to
discuss what they wanted for their children. These meetings were held in the local language and near the
homes of the individuals. The community health worker played the essential mediating role between the service
providers and the clients. At a later stage, representatives from the various groups met jointly to share the
findings from the research and to discuss the way forward.
Even with community participation, communication does not always work smoothly, or in favour of children.
Communication provides the means for exploring, affirming or denying norms, debating policies and practices,
and discussing old experiences and new ideas. The situation of children will change only when action to improve
that situation is taken, so the next step was to explore what would happen once the information had been shared.
6.3 Agreement on joint action: a multi-leveled and multi-sectoral approach
Once the vision is formulated, the necessary action to attain that vision needs to be agreed upon. Often this
involves a multi-leveled and multi-sectoral approach. It also requires all key role players to be in communication
with one another. It is not easy to challenge or change the established institutions and systems that support the
status quo.
In Okhahlamba the key role players who could act to change the situation of children were all identified. The
most difficult task was getting these role players to agree on their actions. Most of the confusion was
over formal roles and responsibilities, which had changed with the recent decentralization of basic social
services to local authorities, rather than an unwillingness to support one another. With this move the community
health workers had also recently moved from a local non-governmental organization to the Department of Health
and were confused over their reporting structures. The district department of health needs to hand over delivery
of health services to local authorities, but the local authority has neither the human nor the financial capacity
to carry out this function. The volunteer clinic health committees are enthusiastic to support initiatives that
will improve the situation of their children, but have only been formed recently. It was only after the groups met
one another and reached agreement on their roles and responsibilities that agreement was reached on the action, and on
who was responsible for that action. The recent changes have provided an opportunity for the inclusion
of children on the agenda, as many of the structures and systems have not yet been formed, or have only recently been formed.
What was encouraging from the field work was that most people felt that they had the capability to act if they
received the information.
CONCLUSION
It is increasingly recognised that globalisation also produces marginalisation. Castells [2000b; 2000a] argues
that processes of globalisation are extremely selective, and various parts of the globe in both the developing
and the developed world run the risk of being excluded from this process. He uses the term 'fourth world' to
describe this segment of society. Conditions of history and geography shape the access that groups and societies
have to new information and communication technologies, and lack of such access can be exclusionary. Castells
describes these processes as systematic; they can lead to further marginalisation and exclusion of societies.
In information systems, and not just health information systems, the voices of communities - in particular
women, children and youth - are not often heard, whether within communities, between communities or between
the other levels of society. When, where and how do they get the opportunity to express their needs and
aspirations? How do they have the chance to identify and develop the skills and resources they need to address
their problems? Where do they get the opportunity to express themselves or to exchange ideas and experience?
In a sub-district in KwaZulu-Natal these questions have been addressed through a holistic approach to health
information systems development.
One of the challenges for IS design is the need to focus on the process as well as on the output. Attaining a
common vision is fundamental if the system is to be used, but this requires the participation of different sectors
and different levels of actors from the outset. It also offers some opportunities. Clarification of roles and
responsibilities allows recognition and acceptance by duty bearers of the tasks they need to perform. This is
a first step towards action. There was great enthusiasm among these key role players in the design process and a
desire to work together. The community monitoring system in Okhahlamba was based on an understanding that
people are intelligent and know what affects their children's and their own development. There is a need to
co-design systems, processes and tools in IS design and to obtain clarity on what we need to measure. IS design
should be about facilitating a journey of development, rather than measuring the destination.
Communication and participation, as well as the capacity for both, are needed to strive towards Habermas'
ideal speech situation. This is no easy task, as exclusion is built upon a system of norms, interpretative schema
and facilities that systematically excludes segments of the population and of the country from the network society.
Communication will not be improved simply by introducing a new or improved health information system. Even
so, a process of visioning, developing skills and capacity, and constructing a conducive environment can mean
that IS design can be viewed as a development tool, as striving towards this 'ideal speech situation', even if this
situation is not attained.
ACKNOWLEDGMENTS
I wish to thank all the people from uThukela district who assisted with the research. In particular thanks must
go to the staff from the uThukela District Child Survival Project and the Department of Health, who assisted
with the carrying out of the field research, the data analysis and the implementation of the information system.
For support on the formatting and editing of this paper I am grateful for the assistance of Bob Jolliffe. I have
also benefitted from the insightful comments from Sundeep Sahay on the various drafts of this paper. Financial
support for the research was provided by a World Vision/USAID grant to uThukela District Child Survival
Project.
REFERENCES
Castells, M. 2000a. The Information Age: Economy, Society and Culture: The End of the Millennium, 2nd ed. Vol. 2. Blackwell Publishers.
Castells, M. 2000b. The Information Age: Economy, Society and Culture: The Network Society, 2nd ed. Vol. 1. Blackwell Publishers.
Crisp, N. and Ntuli, A., Eds. 1999. South African Health Review. Health Systems Trust, South Africa.
Giddens, A. 1993. The Constitution of Society: Outline of the Theory of Structuration. Polity Press, Oxford.
Habermas, J. 1987. The Theory of Communicative Action. MIT Press.
Hirschheim, R. and Klein, H. 1994. Realizing emancipatory principles in information systems development: The case for ethics. MIS Quarterly 18, 1, 83-109.
Hirschheim, R., Klein, H. K., and Lyytinen, K. 1996. Exploring the intellectual structures of information systems development: A social action theoretical analysis. Accounting, Management and Information Technology 6, 1/2.
Jones, M. 1997. Re-Thinking Management Information Systems. Oxford University Press, Chapter: Structuration and IS.
Lyytinen, K. 1992. Critical Management Studies. Sage Publications, London, Chapter: Information systems and critical theory, 159-180.
Ngwenyama, O. 1991. Information Systems Research: Contemporary Approaches and Emergent Traditions. North Holland, Amsterdam, Chapter: The Critical Social Theory Approach to Information Systems: Problems and Challenges.
Ngwenyama, O. K. and Lee, A. S. 1997. Communication richness in electronic mail: Critical social theory and the contextuality of meaning. MIS Quarterly 21, 2 (June), 145-167.
Orlikowski, W. 1992. The duality of technology: Rethinking the concept of technology in organisations. Organisation Science 3, 3 (August).
Orlikowski, W. and Baroudi, J. J. 1991. Studying information technology in organisations: Research approaches and assumptions. Information Systems Research 2, 1-28.
Orlikowski, W. and Robey, D. 1991. IT and the structuring of organisations. Information Systems Research 2, 2, 143-169.
Rose, J. 1998. Evaluating the contribution of structuration theory to the IS discipline. In Proceedings of the European Conference on Information Systems.
Rose, J. 1999. Towards a structurational theory of IS: Theory development and case study illustrations. In Proceedings of the 7th European Conference on Information Systems. Copenhagen.
uThukela District Child Survival Project. 1999a. Final evaluation report. uThukela District, KwaZulu-Natal, South Africa, unpublished.
uThukela District Child Survival Project. 1999b. Knowledge, practice and coverage survey. uThukela District Child Survival Project, KwaZulu-Natal, South Africa.
uThukela District Child Survival Project. 2000a. CS XV detailed implementation plan. uThukela District, KwaZulu-Natal, South Africa, unpublished.
uThukela District Child Survival Project. 2000b. Integrated management of childhood illness situational analysis. uThukela District, KwaZulu-Natal, South Africa, unpublished.
uThukela District Child Survival Project. 2002. Mid-term evaluation report. uThukela District, KwaZulu-Natal, South Africa, unpublished.
Walsham, G. 1993. Interpreting Information Systems in Organisations. Chichester, John Wiley.
Walsham, G. and Han, C. K. 1991. Structuration theory and information systems research. Journal of Applied Systems Analysis 17, 77-85.
Walsham, G. and Sahay, S. 1996. GIS for district-level administration in India: Problems and opportunities. International Journal of Geographical Information Systems 10, 385-404.
| communicative action;critical social theory;moral codes or norms;community information systems;information system design;Structuration theory;interpretative schemes;critical social theory in IS design;conducive environment;community monitoring system;marginalisation;health information systems;duality of structure;structuration theory;the ideal speech situation |
68 | Diagnosis of TCP Overlay Connection Failures using Bayesian Networks | When failures occur in Internet overlay connections today, it is difficult for users to determine the root cause of failure. An overlay connection may require TCP connections between a series of overlay nodes to succeed, but accurately determining which of these connections has failed is difficult for users without access to the internal workings of the overlay. Diagnosis using active probing is costly and may be inaccurate if probe packets are filtered or blocked. To address this problem, we develop a passive diagnosis approach that infers the most likely cause of failure using a Bayesian network modeling the conditional probability of TCP failures given the IP addresses of the hosts along the overlay path. We collect TCP failure data for 28.3 million TCP connections using data from the new Planetseer overlay monitoring system and train a Bayesian network for the diagnosis of overlay connection failures . We evaluate the accuracy of diagnosis using this Bayesian network on a set of overlay connections generated from observations of CoDeeN traffic patterns and find that our approach can accurately diagnose failures. | INTRODUCTION
When failures occur in Internet overlay connections today, it is
difficult for users to determine the root cause of failure. The proliferation
of TCP overlays such as content distribution networks
and HTTP proxies means that frequently network communication
requires a series of TCP connections between overlay nodes to succeed
. For example, an HTTP request using the CoDeeN[9] content
distribution network first requires a TCP connection to a CoDeeN
node and then a connection from a CoDeeN node to a server or another
CoDeeN node. A failure in any one of the TCP connections
along the overlay path causes the user's HTTP request to fail. If
the user knows which TCP connection failed, then they can take
appropriate action to repair or circumvent the failure. For instance,
if they know that the connection from the proxy to the server failed,
then they can complain to the web server administrator. On the other
hand, if the user/proxy connection fails, perhaps they can try connecting
to the proxy using a different ISP. If multiple overlay paths
exist between the source and destination, nodes and applications
may also use this type of diagnostic information to automatically
recover or route around failures[1].
Unfortunately, accurately determining which TCP connection in
an overlay connection has failed is difficult for end users, who typically
do not have access to the internal workings of the overlay.
Commercial overlay networks such as Akamai typically do not reveal
details of connection failures to users, and the diagnostic tools
available to users today are frequently inadequate. Active probing
techniques such as tulip[7] and Planetseer[11] frequently cannot
provide accurate information due to firewalls and packet filtering.
Furthermore, active probing can be costly both in terms of network
resources and time, and cannot diagnose the many transient TCP
failures that begin and end before one can complete a probe[11].
Additionally, one must take care when using active probing for diagnosis
because probes may concentrate network traffic at points of
failure and trigger intrusion detection systems.
Instead, in our research we consider a passive approach to diagnosis
in which intelligent diagnostic agents use probabilistic inference
to determine the root cause of failure. The reliability of IP
links in the Internet varies widely and hence we expect the probability
of TCP failure to differ between different sets of hosts. Diagnostic
agents in the Internet learn the probability of such failures
for different regions in the Internet based on observations of TCP
traffic. When users or network administrators detect network failures
, they request diagnosis from such diagnostic agents. Agents
then use information about the relative probability of failure of the
TCP connections that make up an overlay connection to identify
the most likely cause of failure when an overlay connection failure occurs
without conducting any additional probes. In addition, diagnostic
agents can also use this Bayesian network to predict the probability
of overlay and TCP connection failure given information about the
path of an overlay connection.
We collect data on TCP failure probabilities in order to determine
whether this data enables diagnostic agents to accurately
diagnose overlay failures in the Internet. To learn the probability
of failure for TCP connections between different points in the network
, we observe TCP traffic on the content distribution network
CoDeeN using an updated version of Planetseer[11]. Next we construct
a Bayesian network for diagnosis using these probabilities.
We then use Bayesian inference to infer the most probable cause of
failure for TCP-based applications.
To evaluate the effectiveness of this approach, we test this Bayesian
network on an artificial set of overlay connections based on the
traffic observed on CoDeeN. We find that when a failure occurs,
knowing only the AS numbers of the source, proxy, and destination
, we can determine which TCP connection has failed with over
80% probability. In addition, the probability of failure between
ASes stays relatively constant over time, and data learned can be
accurately used for diagnosis for many hours into the future. This
suggests that the TCP failure probabilities we learn may be useful
in the diagnosis of future failures as well.
The contribution of this research is to show how inter-AS TCP
failure probabilities can be used for probabilistic diagnosis of failures
in overlay networks such as CoDeeN using Bayesian inference
. We also demonstrate a variety of clustering methods to address
the problem of dataset sparsity for learning TCP failure probabilities
. In this paper we evaluate our system on CoDeeN overlay
connections, but our Bayesian model generalizes to the diagnosis
of other TCP-based applications as well.
RELATED WORK
There has been previous work in passive diagnosis of failures
in the Internet. Padmanabhan, Ramabhadran, and Padhye developed
Netprofiler, which collects network measurements from a set
of end hosts and attempts to identify the cause of failure by examining
the shared dependencies among hosts that experience failures [8].
They show that this approach can provide information useful for
diagnosis, but their paper only provides some preliminary results
and does not provide details of how their system might diagnose real-world
failures in practice.
Shrink probabilistically diagnoses IP link failures based on the
observed status of IP links that share resources[4]. Similarly in our
work we diagnose failures in overlay connections where an overlay
depends on several underlying TCP connections which may share
IP hops. Shrink assumes that one can accurately determine the status
of all IP links at any point in time. This allows one to identify
the shared cause of failure of the failed IP links. Theoretically, we
can also use this approach to diagnose overlay failures. That is, we
can identify the TCP connections that share common IP hops and
observe which overlay connections have failed at any point in time
to identify the failed TCP connections.
Unfortunately, in real-world diagnosis of TCP connections many
of the assumptions made by systems such as Shrink do not hold for
the following reasons.
1. The status of overlay connections may change rapidly, making
it difficult to correlate failures in different overlay connections
over time.
2. In order to construct a Bayesian network that accurately models
the IP hops shared among different TCP connections, we
need an accurate IP-level map of the Internet. As the Skitter
project (http://www.caida.org/tools/measurement/skitter/) demonstrates, accurately constructing such a map is
difficult because routes may change and tools such
as traceroute frequently do not provide accurate information.
3. Determining the status of an inactive overlay connection or a
TCP connection is costly and takes time because it requires
an active probe such as a ping, traceroute, or HTTP connection
. Furthermore such probes are frequently inaccurate because
of the prevalence of packet filtering, network address
translation (NAT), and firewalls in the Internet[3].
4. TCP and IP failures are frequently so transient that by the
time one can test the status of a link, the failure no longer
exists [11].
Therefore in this paper we present an alternative passive diagnosis
approach that does not require simultaneously knowing the
status of all overlay connections. Instead, we cluster TCP failures
based on the Internet autonomous systems (ASes) of their endpoints
and use information about the distribution of TCP failures
to infer the cause of failure. An agent first learns a probabilistic
model of failures based on a training set of observed TCP connections
, and then it uses this model to diagnose future failures when
it does not know the connection status.
Other researchers have developed methods for diagnosing specific
TCP-based applications. Ward et al. infer the presence of
TCP performance failures based on the rate of requests processed at
an HTTP proxy server and TCP connection state [10]. Unlike such
specialized diagnostic systems, our Bayesian approach to diagnosis
can generalize to other applications that rely on TCP connections.
Most previous research in probabilistic diagnosis of Internet failures
evaluates the proposed approach on simulated failures. Steinder and Sethi
model network faults using a bipartite causality graph in which the
failure of individual links causes the failure of end-to-end connectivity,
and then perform fault localization using a belief network [6].
In contrast, in our research we evaluate our approach on real-world
TCP failures using actual data collected on the Internet.
DIAGNOSING OVERLAY CONNECTION FAILURES
In this paper we consider the diagnosis of overlay networks in
which an overlay connection requires a series of TCP connections
between overlay nodes along the path from the source host to the destination
host. For example, Akamai is a content distribution network
in which retrieving a resource from a web server may require
communication among multiple Akamai nodes along multiple TCP
connections. Another example is the content distribution network
CoDeeN on Planetlab, in which overlay nodes act as HTTP proxies.
A request on CoDeeN [9] first requires a TCP connection to a
CoDeeN node and then a connection from a CoDeeN node to a server
or another CoDeeN node. A failure in any one of these TCP connections
causes the user's HTTP connection to fail. The challenge
is to determine which of these TCP connections has failed.
Sometimes users can determine whether a failure has occurred
along the first TCP connection along the overlay path using information
provided by their local TCP stack, but if a failure occurs
beyond the first connection users cannot tell where a failure occurs
without cooperation from the overlay. Depending on the type of
overlay, users may have different amounts of information about the
overlay path. For example, in an HTTP proxy connection, users
know that the proxy is the first hop along the path and that, if the
requested resource is not cached, the web server is the last hop along the
path.
As a first step, in our research we examine a special case of diagnosis
in order to gain insight into how well our approach might
generalize to other types of diagnosis. The question we wish to answer
is, if a two hop overlay connection fails due to a TCP failure,
which TCP connection failed? In this paper we define a TCP failure
as three consecutive TCP retransmits without a response. We
assume that the diagnostic agent only knows that the overlay connection
has failed and does not know which of the TCP connections
has failed. We want to answer this question knowing only the IP address
of the source, the IP address of the first-hop overlay node, and
the IP address of the ultimate overlay destination host. Our model
for probabilistic diagnosis generalizes to overlay connections with
any number of hops, but as a starting point in this paper we only
consider overlay connections with two hops.
[Figure 1: A Bayesian network for TCP overlay path diagnosis. The nodes for the TCP connections A-B and B-C each depend on Src AS, Dst AS, and Hour, and together determine the status of the overlay connection A-C; each TCP connection node carries a conditional probability table giving P(Status = OK).]
3.1 Probabilistic Diagnosis
The reliability of IP links in the Internet varies widely and hence
we expect the probability of TCP failure to differ between different
sets of hosts. Thus if we have knowledge of the relative probability
of failure of the TCP connections that make up an overlay connection
, we can then infer the most likely cause of failure when
an overlay connection failure occurs without conducting any additional
probes. In this paper we show we can use Bayesian networks both
to learn a model of TCP failures and to perform diagnosis.
Bayesian networks compactly represent the conditional probability
of related events and enable efficient inference based on available
evidence[5]. A Bayesian network is a directed acyclic graph
in which nodes represent variables, and edges from parent nodes to
children nodes represent dependence relations. Each node X has a
conditional probability table (CPT) P(X | parents(X)) that encodes
the conditional probability of X given evidence about its parents.
Bayesian networks have several important features that make
them especially suitable for reasoning about failures in the Internet
. Firstly, Bayesian networks can model both deterministic and
probabilistic dependencies among many types of Internet components
and diagnostic tests. For example, an HTTP proxy connection
functions if and only if the user/proxy TCP connection functions
and the proxy/provider TCP connection functions. The probability
that a TCP connection functions depends on the source and
destination IP addresses and the time of the connection. To improve
accuracy, we cluster IP addresses by AS and connection time
by hour (see section 3.2). Figure 1 illustrates a Bayesian network
that encodes the conditional probabilities for diagnosing an overlay
connection from A to B to C. To diagnose an overlay connection
failure from A to C, one can use this Bayesian network to infer the
most probable status of the underlying TCP connections from A to
B and B to C given information about the AS numbers and hour the
connections were made.
The variables in the Bayesian network represent the functional
status of TCP connections and overlay connections. A node in
this Bayesian network represents the functional status of a connection
: OK if functioning, Failed if malfunctioning. Malfunctioning
means that a connection failure occurs along the path; functioning
means that no connection failure occurs. Edges in the Bayesian network
represent dependencies among connections. The CPT for an
overlay connection node represents the probability that it is functioning
given the status of its underlying TCP paths. The CPT for
a TCP path represents the probability that the TCP path functions
given information about the path. In our Bayesian network we assume
that the conditional probability of a TCP connection failure
depends only on the source and destination IP addresses and the
time of failure for each hop of the overlay, and not on which hop
of the overlay connection it is (user/proxy or proxy/server). We
represent this by using parameter tying in this Bayesian network
so that both TCP paths share the same CPT. We also assume that a
diagnostic agent can identify the intermediate hops in the overlay
connection, either through active probing or because it has knowledge
of the overlay topology.
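To make the inference step concrete, the sketch below performs the same kind of reasoning by brute-force enumeration for a single two-hop connection. It is a minimal Python illustration, not the implementation used in the paper, and the AS numbers and failure probabilities are invented placeholders rather than values learned from Planetseer.

    # Hypothetical learned probabilities P(TCP connection fails | source AS, destination AS).
    # The AS numbers and probabilities below are invented for illustration only.
    p_fail = {(100, 200): 0.0028,   # e.g. user AS -> proxy AS
              (200, 300): 0.0065}   # e.g. proxy AS -> provider AS

    def diagnose_two_hop(src_as, proxy_as, dst_as):
        """Posterior P(hop failed | overlay connection failed) for each of the two hops."""
        p1 = p_fail[(src_as, proxy_as)]      # user/proxy TCP connection
        p2 = p_fail[(proxy_as, dst_as)]      # proxy/provider TCP connection
        joint = {}
        for f1 in (True, False):             # enumerate hop statuses (True = failed)
            for f2 in (True, False):
                prior = (p1 if f1 else 1 - p1) * (p2 if f2 else 1 - p2)
                if f1 or f2:                 # keep only cases where the overlay connection fails
                    joint[(f1, f2)] = prior
        z = sum(joint.values())              # P(overlay connection failed)
        hop1 = sum(p for (f1, _), p in joint.items() if f1) / z
        hop2 = sum(p for (_, f2), p in joint.items() if f2) / z
        return hop1, hop2

    print(diagnose_two_hop(100, 200, 300))

The most probable explanation is simply the joint status assignment with the largest posterior weight; when the two hops have unequal failure probabilities, as they typically do, the diagnosis points strongly at the less reliable hop.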
An advantage of modeling network components in terms of Bayesian
networks is that a Bayesian network provides an abstract high-level
representation for diagnostic data suitable for reasoning. Representing
diagnostic data in terms of variables, evidence, and dependencies
rather than passing around low-level measurements such as
packet traces allows an agent to reason about the causes and consequences
of failures without any deep knowledge of the behavior
and characteristics of components and diagnostic tests. In addition,
the conditional independence assumptions of Bayesian inference
reduce the amount of data a diagnostic agent needs to consider for
diagnosis.
3.2 Clustering
To perform diagnosis using this Bayesian network, we need to
learn the conditional probability of failure of a TCP connection
given the properties of the connection. Learning the conditional probability
of failure for each pair of IP addresses is impractical because
it is infeasible to store the probability of failure for the 2^64 combinations
of source and destination IP addresses. More importantly,
for each pair of IP addresses we only have a limited amount of data
with which to train the Bayesian network. For more effective diagnosis,
diagnostic agents need a way to diagnose failures involving
IP addresses they have not previously observed.
Therefore to reduce the size of the conditional probability tables
and to improve the accuracy of the learned probabilities, we
cluster together IP addresses in a way that facilitates learning and
diagnosis. Our hypothesis is that TCP connections that share many
IP links with one another will have similar probabilities of failure
. Thus two TCP connections with topologically nearby sources
and nearby destinations will likely have similar failure probabilities
. Therefore we clustered source and destination IP addresses in
three ways: by the first eight bits of the IP address, the AS number,
and by country.
We also cluster TCP connections based on time. We hypothesize
that the probability of failure changes over multiple time scales.
For instance, if an IP routing change occurs, the probability of failure
for affected TCP connections may change from low to high and
back to low within a few minutes. On the other hand, the average
rate of routing failure over several days may remain relatively
constant. We show how different methods for clustering affect the
accuracy of diagnosis in section 5.
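A sketch of these clustering keys in Python might look like the following. The prefix-to-AS table is a stand-in for a longest-prefix match against real BGP data such as Route Views, and its entries are invented; this is illustrative only, not the paper's implementation.

    import ipaddress

    # Invented prefix-to-AS entries; a real agent would load these from a BGP table.
    PREFIX_TO_AS = {"18.0.0.0/8": 3, "128.112.0.0/16": 88}

    def cluster_ip8(ip):
        """Cluster by the first eight bits of the IP address."""
        return int(ipaddress.ip_address(ip)) >> 24

    def cluster_as(ip):
        """Cluster by origin AS via a (naive) longest-prefix match."""
        addr = ipaddress.ip_address(ip)
        matches = [p for p in PREFIX_TO_AS if addr in ipaddress.ip_network(p)]
        if not matches:
            return None
        best = max(matches, key=lambda p: ipaddress.ip_network(p).prefixlen)
        return PREFIX_TO_AS[best]

    def cluster_hour(timestamp_seconds):
        """Cluster connection times into one-hour buckets."""
        return int(timestamp_seconds // 3600)

    print(cluster_ip8("128.112.7.33"), cluster_as("128.112.7.33"), cluster_hour(3600 * 5 + 42))

Country clustering would follow the same pattern, with a geolocation table in place of the AS table.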
COLLECTING TCP FAILURE DATA
It is difficult to obtain accurate information about the distribution
of TCP failures in the Internet because failed connections make
up only a small percentage of overall TCP traffic and the number
of possible source and destination IP addresses is enormous. To
collect accurate failure probabilities, we need a way to observe the
status of large quantities of TCP connections from many different
source and destination hosts.
In order to obtain such data, we used an updated version of Planetseer
to collect data on TCP connection failures. The new Planetseer
monitors TCP connections in the CoDeeN content distribution
network and provides notifications when TCP sessions begin, end,
and when TCP failures occur. Planetseer runs on over 320 Planetlab
[2] nodes distributed around the world. We used Planetseer to
monitor all the TCP connections made by 196 CoDeeN nodes. We
observed 28.3 million TCP connections and 249,000 TCP failures
over a ten hour period. We observed TCP connections to approximately
17,000 distinct IP addresses per hour on average. In our
dataset, we observed TCP connections to hosts in 2116 unique Internet
autonomous systems.
CoDeeN overlay nodes act as HTTP proxies and establish TCP
connections with web clients, web servers, and other CoDeeN nodes.
In a typical CoDeeN session, a user initiates a TCP connection with
the CoDeeN proxy, the proxy connects to a web server and retrieves
the requested resource, and finally the proxy sends the requested
data back to the user. Note that many requests are cached, and so
the destination of the second hop in the overlay is a CoDeeN node
and not the web server specified in the HTTP request. We found
that 0.28% of user/proxy connections and 0.65% of proxy/server
connections experienced TCP failures. Since Planetseer monitors
TCP connections from the vantage point of the proxy, we cannot
detect those TCP failures in which a user is unable to establish a
TCP connection to the proxy. Therefore the lower percentage of
user/proxy failures may be partly explained by the fact that all observed failures
between the proxy and user occur after the user has successfully
established a TCP connection to the proxy.
We believe that the failure probabilities learned through Planetseer
are representative of typical TCP connections in the Internet
. CoDeeN nodes operate as HTTP proxies, so the pattern of
TCP connections resembles typical web traffic. Though caching at
CoDeeN nodes reduces the number of connections to web servers
we observe, we believe that the average failure probability to web
servers we observe using Planetseer reflects typical failure rates for
HTTP related TCP connections. We are currently examining other
types of overlay connections to determine how well this TCP data
generalizes for the diagnosis of other overlays.
We learn the conditional probability table for TCP connection
failure using the data collected from Planetseer. We cluster source
and destination IP addresses by AS using the Oregon Route Views
BGP tables (http://www.routeviews.org/).
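In outline, estimating this table amounts to counting failures and totals per (source AS, destination AS, hour) cluster, as in the hedged sketch below; the record format and sample data are assumptions, and the add-one smoothing term plays the role of the uniform Dirichlet prior described later in section 5.1.

    from collections import defaultdict

    # Each observed connection: (source AS, destination AS, hour bucket, failed?). Sample data invented.
    observations = [(3, 88, 0, False), (3, 88, 0, True), (3, 88, 0, False),
                    (701, 88, 0, False), (701, 88, 0, False)]

    fails = defaultdict(int)
    totals = defaultdict(int)
    for src_as, dst_as, hour, failed in observations:
        key = (src_as, dst_as, hour)
        totals[key] += 1
        fails[key] += int(failed)

    def p_fail(src_as, dst_as, hour, alpha=1):
        """Estimated P(TCP failure | src AS, dst AS, hour) with add-one smoothing."""
        key = (src_as, dst_as, hour)
        return (fails[key] + alpha) / (totals[key] + 2 * alpha)

    print(p_fail(3, 88, 0), p_fail(9, 9, 0))   # unseen cluster pairs fall back to 0.5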
EVALUATION
Our hypothesis is that Bayesian inference using the conditional
probability of failure for TCP connections given the AS numbers of
the source and destination can accurately diagnose failures in overlay
connections. In order to test this hypothesis, we constructed a
Bayesian network using the probabilities learned from Planetseer
and used it to diagnose failures in CoDeeN connections.
We wanted to answer the following questions in our experiments:
1. Which clustering method produces the most accurate diagnosis
: AS, IP/8 prefix, or country? We expect that clustering
based on AS will produce the most accurate results since it
is most closely correlated with the Internet routing topology.
2. How does diagnostic accuracy change as we increase the
time interval over which we cluster TCP connections? We
expect that as the clustering interval increases, accuracy will
increase at first, but then decrease as the learned probabilities
less accurately reflect the probabilities of new failures.
3. How does the age of the training set affect diagnostic accuracy
? We expect that as the distribution of TCP failures in
the Internet changes over time, diagnostic accuracy will also
decrease.
5.1 Experimental Setup
We train a Bayesian network using the Bayes Net Toolbox (BNT)
for Matlab (http://www.cs.ubc.ca/~murphyk/Software/BNT). In order to diagnose TCP connections between regions
we did not observe in the training set, we initialize the prior probabilities
of failure according to a uniform Dirichlet distribution,
which is equivalent to adding an additional element to the training
set for each combination of source cluster, destination cluster,
and connection status. We test this Bayesian network on an artificial
dataset generated based on the distribution of TCP connections
observed on Planetseer. Since Planetseer does not provide information
about which TCP connections are associated with each
CoDeeN request, we construct a dataset based on the TCP connections
we observed. First we identify user/proxy, proxy/proxy,
and proxy/server connections based on IP address and port number
. Then for each proxy, we count the number of TCP connections
to each server and to each proxy. We assume that the number
of cached requests equals the number of user/proxy connections
minus the number of proxy/server and proxy/proxy connections
. We assign each user/proxy TCP connection a corresponding
proxy/provider connection, where the provider may either be a web
server (if the resource is not cached), another proxy (if the resource
is cached at another proxy), or the same proxy (if the resource is
cached locally). We make these provider assignments according to
the observed distribution of proxy/server and proxy/proxy connections
. Of the 19,700 failures in this dataset, approximately 82%
of requests are cached locally, 7.9% are cached at other CoDeeN
nodes, and 10.6% are uncached.
For each CoDeeN request failure our Bayesian network makes
two diagnoses: one for the status of the user/proxy connection, and
one for the status of the proxy/provider connection. We measure
accuracy in terms of the fraction of correct diagnoses. To evaluate
the accuracy of diagnosis, we compute the most probable explanation
for a TCP failure given evidence that the overlay connection
has failed and the AS numbers of the source, proxy, and destination
, and then compare this diagnosis with the actual status of the
source/proxy and proxy/provider connections. In our experiments
we perform diagnosis without evidence about whether a resource is
cached at a proxy.
Of the CoDeeN requests that failed in the first hour of our dataset,
we found that 62% failed at the user/proxy connection, 31% failed
at the proxy/server connection, and 7% failed at the proxy/proxy
connection. Therefore, knowing only the overall distribution of
TCP failures between users and servers, without using information
about the IP addresses of the user, proxy, and server, one could diagnose
failures with 62% accuracy by diagnosing every failure as
a user/proxy failure. In our experiments we wish to determine if
our Bayesian approach to diagnosis can achieve significantly better
accuracy.
In order to properly compute the accuracy of diagnosis, we separated
the set of TCP connections with which we trained the Bayesian
network from the set of TCP connections associated with the failed
overlay connections under diagnosis.
[Figure 2: Clustering Method Comparison. Diagnostic accuracy (0-100%) for the AS, Country, and IP clustering methods and the baseline.]
[Figure 3: Accuracy vs. Training Interval Length. Diagnostic accuracy (75-83%) against training interval lengths of 1 to 9 hours.]
We collected ten hours of
TCP connection data from Planetseer. In our initial experiments
we choose to learn the average probability of failure over one hour
because we find that clustering over shorter time scales does not
provide enough data for accurate diagnosis.
5.2 Experimental Results
First we compare the accuracy of three IP clustering methods:
by Internet autonomous system number (AS), by the first eight bits
of the IP address (IP), and by the country in which a host resides
(Country). We determine the country of a host using the hostip.info
database (http://www.hostip.info/), which maps the first 24 bits of an IP address to a country
using location information contributed by Internet users. We
train three Bayesian networks corresponding to the three clustering
methods using data from hour 1. Then we test these Bayesian
networks on the proxy connection failures constructed using data
from hours 2-10 and average the results. We use a junction tree
inference engine to compute the most likely status for each TCP
connection and compare the inferred status with the actual status
from the data. Since the Bayesian network we use for inference
has no cycles, we can perform Bayesian learning and junction tree
inference rapidly; in our experiments, inference for a single connection
requires approximately 5 ms.
[Figure 4: Accuracy vs. Training Set Age. Diagnostic accuracy (0-100%) against training set age of 1 to 9 hours.]
Figure 2 compares the diagnostic accuracy of these three clustering
approaches. We define accuracy as the fraction of correct
status inferences. As a baseline, we also plot the accuracy of simply
guessing that every failure is due to a user/proxy connection
failure. Figure 2 shows that all three clustering methods provide
similar degrees of accuracy. Our hypothesis was that clustering
based on AS would produce the most accurate results, but our experiments
show that clustering based on the first 8 bits of the IP
address yields higher accuracy for short time intervals. This may
be because one hour is not enough time to accurately learn inter-AS
TCP failure probabilities, or due to inaccuracies in the Route Views
BGP table.
Next we compute the accuracy of diagnosis as we increase the
time interval over which we cluster TCP connections. If the interval
over which we train is too short, then we will not have enough data
to accurately learn failure probabilities. If the interval is too long,
then it may not accurately reflect changing network conditions. We
train a Bayesian network using AS clustering on the x hours before
hour 10 for values of x from 1 to 9. We then test each Bayesian
network on the data from hour 10. Figure 3 shows how accuracy
changes as the training time interval changes. This plot shows that
accuracy increases as the clustering time interval increases, suggesting
that the training value of incorporating additional data outweighs
the inaccuracy introduced by using older data.
Finally, we compute the accuracy of diagnosis as we increase the
age of the data on which we trained the Bayesian network. We train
a Bayesian network using AS clustering on data from hour 1 and
test it on overlay failures observed during each of the hours from
2 to 10. Figure 4 plots the accuracy of diagnosis over time. Average
accuracy changes over time because the distribution of failures
we observe using Planetseer varies from hour to hour, but overall
diagnostic accuracy diminishes only slightly after nine hours, suggesting
that the distribution of TCP failure probabilities remains
relatively stationary over time.
We also compare the false positive and false negative rates for
each clustering method. The false positive rate is the fraction of
functioning connections that are incorrectly diagnosed as having
failed, while the false negative rate is the fraction of failed connections
that are incorrectly diagnosed as functioning. Table 1 lists the
false positive and false negative rates for each clustering method.
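Stated as code, the two rates can be computed from pairs of (diagnosed, actual) statuses as in the short snippet below; the example labels are invented for illustration.

    # Each pair: (diagnosed, actual); True means the connection failed. Labels invented for illustration.
    results = [(True, True), (True, False), (False, False), (False, True), (True, True)]

    false_pos = sum(d and not a for d, a in results) / sum(not a for _, a in results)
    false_neg = sum(not d and a for d, a in results) / sum(a for _, a in results)
    print(false_pos, false_neg)   # 0.5 and 0.33... on this toy data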
5.3 Analysis
These experiments show that we can diagnose overlay connection
failures knowing only the AS numbers of their TCP endpoints.
                          AS      Country   IP      Baseline
user/proxy false pos.     0.174   0.358     0.426   1.000
user/proxy false neg.     0.219   0.050     0.060   0.000
proxy/server false pos.   0.219   0.101     0.265   0.000
proxy/server false neg.   0.171   0.128     0.100   1.000
Table 1: Diagnosis error rates by type
One reason our approach to diagnosis works is the heavy-tailed
distribution of TCP connection failure probability. The majority
of TCP failures occur among a small number of AS pairs.
Therefore most CoDeeN connection failures involve one TCP connection
with low failure probability and another TCP connection
with high failure probability, so probabilistic inference produces
the correct diagnosis. For example, we find that TCP connections
from hosts in China to hosts in the USA tend to have a much
higher probability of failure than connections within the USA. If
a CoDeeN user in China accesses a proxy in the USA to retrieve
content from a web server in the USA and experiences a failure,
then it is very likely that the failure occurred on the connection between
the user and the CoDeeN node. If the probability of failure
for every pair of ASes were equal, then our probabilistic approach
to diagnosis would not work as well.
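As a purely illustrative calculation (the probabilities here are assumed, not taken from the paper's data), suppose the user/proxy path from a Chinese AS to a US AS fails 5% of the time while the proxy/server path within the USA fails 0.5% of the time. Conditioning on an observed overlay failure then points strongly at the first hop:

    p_user_proxy, p_proxy_server = 0.05, 0.005           # assumed failure probabilities
    p_overlay_fail = 1 - (1 - p_user_proxy) * (1 - p_proxy_server)
    p_first_hop = p_user_proxy / p_overlay_fail          # P(user/proxy hop failed | overlay failed)
    print(round(p_first_hop, 3))                         # about 0.913

If the two hops had equal failure probabilities, the same calculation would give each hop a posterior only slightly above one half, and the inference would carry little diagnostic information.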
Another interesting result is that the accuracy of diagnosis diminishes
relatively slowly over time, implying that the distribution
of TCP failures in the Internet stays relatively stationary over time.
This suggests that diagnostic agents can perform accurate diagnosis
using inter-AS TCP failure probabilities without having to constantly
collect the latest TCP failure data.
CONCLUSION AND FUTURE WORK
Our initial experimental results indicate that our passive probabilistic
approach to diagnosing TCP overlay connection failures can provide useful
diagnostic information. In this paper we show that Bayesian inference over
inter-AS TCP failure probabilities provides a useful framework for diagnosing
two-hop overlay connection failures on CoDeeN, and we view this as one example
of a more general probabilistic approach to Internet fault diagnosis that can
be extended to the diagnosis of other overlays and other TCP-based applications.
We can apply the knowledge we learned from Planetseer to diagnose other classes
of network components and applications by adding new nodes and edges to the
Bayesian network we use for diagnosis.
In this paper we only considered diagnosis without using any additional
evidence about a failure. Typically, however, when failures
occur users may already know the status of certain network components
and can perform diagnostic probes to collect additional evidence
for diagnosing failures. We can improve the accuracy of our
approach by adding variables and edges to the Bayesian network to
take into account this information. For instance, if we know the IP
paths that TCP connections traverse, we can incorporate evidence
of IP link failures into the Bayesian network. We intend to explore
how agents can incorporate such additional evidence into a
Bayesian network to improve diagnostic accuracy.
In future work we will also examine more accurate models for
Internet fault diagnosis that take into account failures at both short
and long time scales. In this paper we only evaluated our algorithm
on ten hours of data from Planetseer; we would like to conduct additional
experiments to more accurately determine the effectiveness
of diagnosis using data from other time periods as well. In addition
we would like to explore other clustering methods, including dynamically
choosing the prefix length on which to cluster based on
how much data an agent has about TCP connections to a particular
IP range.
Finally, though our paper describes a centralized diagnosis approach
, this approach can easily be adapted for distributed diagnosis
. Knowledge of the overlay topology and the conditional probabilities
in the CPTs can be distributed among multiple agents in
the Internet, allowing different agents to collect failure data from
different points in the network. We are currently developing such a
distributed system for the diagnosis of TCP application failures in
the Internet.
REFERENCES
[1] D. G. Andersen, H. Balakrishnan, M. F. Kaashoek, and
R. Morris. Resilient overlay networks. In Proceedings of the
18th ACM Symposium on Operating System Principles
(SOSP), 2001.
[2] B. Chun, D. Culler, T. Roscoe, A. Bavier, L. Peterson,
M. Wawrzoniak, and M. Bowman. Planetlab: an overlay
testbed for broad-coverage services. SIGCOMM Comput.
Commun. Rev., 33(3):3-12, 2003.
[3] S. Guha and P. Francis. Characterization and measurement of
tcp traversal through nats and firewalls. In Internet
Measurement Conference 2005 (IMC '05), 2005.
[4] S. Kandula, D. Katabi, and J.-P. Vasseur. Shrink: A Tool for
Failure Diagnosis in IP Networks. In ACM SIGCOMM
Workshop on mining network data (MineNet-05),
Philadelphia, PA, August 2005.
[5] U. Lerner, R. Parr, D. Koller, and G. Biswas. Bayesian fault
detection and diagnosis in dynamic systems. In Proceedings
of the Seventeenth National Conference on Artificial
Intelligence (AAAI-00), pages 531-537, Austin, Texas,
August 2000.
[6] M. Steinder and A. S. Sethi. Increasing robustness of fault localization
through analysis of lost, spurious, and positive symptoms. In
Proceedings of INFOCOM, 2002.
[7] R. Mahajan, N. Spring, D. Wetherall, and T. Anderson.
User-level internet path diagnosis. In Proceedings of ACM
SOSP, 2003.
[8] V. N. Padmanabhan, S. Ramabhadran, and J. Padhye.
Netprofiler: Profiling wide-area networks using peer
cooperation. In Proceedings of the Fourth International
Workshop on Peer-to-Peer Systems (IPTPS), February 2005.
[9] L. Wang, K. Park, R. Pang, V. Pai, and L. Peterson.
Reliability and security in the codeen content distribution
network. In Proceedings of the USENIX 2004 Annual
Technical Conference, 2004.
[10] A. Ward, P. Glynn, and K. Richardson. Internet service
performance failure detection. SIGMETRICS Perform. Eval.
Rev., 26(3):38-43, 1998.
[11] M. Zhang, C. Zhang, V. Pai, L. Peterson, and R. Wang.
Planetseer: Internet path failure monitoring and
characterization in wide-area services. In Proceedings of
Sixth Symposium on Operating Systems Design and
Implementation (OSDI '04), 2004.
| fault diagnosis;passive diagnosis;NAT;Bayesian networks;Planetseer overlay monitoring system;active probing for diagnosis;inter-AS TCP failure probabilities;TCP overlay connections;Bayesian networks modelling;CoDeeN traffic patterns;TCP overlay path diagnosis;Planetseer;clustering;network address translation |
69 | Digital Asset Management Using A Native XML Database Implementation | Digital Asset Management (DAM), the management of digital content so that it can be cataloged, searched and re-purposed, is extremely challenging for organizations that rely on image handling and expect to gain business value from these assets. Metadata plays a crucial role in their management, and XML, with its inherent support for structural representation, is an ideal technology for this. This paper analyzes the capabilities of a native XML database solution via the development of a "proof of concept" and describes implementation requirements, strategy, and advantages and disadvantages of this solution. | INTRODUCTION
Digital asset creation and management evolved in the late 1990s.
Companies have created massive digital assets in the form of
images, video and audio files, streaming media, Power Point
templates, web pages, and PDF files containing engineering specs,
legal documents, internal memos and more. The World Wide Web
has drastically increased the need for digital information and its
exchange. Ille [8] identifies a digital asset as a strategic asset like
any other form of working capital, and states that its efficient
management is being increasingly recognized as a competitive
lever for companies all over the world.
Developing a model for storing any form of digital object
in a structured format requires a deft combination of asset
analysis, strategic thinking, business planning and appropriate
technology. Companies can achieve early strategic advantage by
implementing management systems for digital assets that can be
repurposed or customized, providing process efficiencies in
collaborative work. Digital Asset Management (DAM) can deliver
competitive advantage for advertising agencies, technical or
engineering documentation departments, designers, producers and
others by reducing time spent locating creative assets.
Enterprises often require reusing or sharing their digital assets. It
is indispensable to store content in an organized way to reuse or
process it for future needs. Global enterprises are facing the
daunting challenge of figuring out how best to address the
growing complexity of creating digital assets and managing the
flow of assets through a single infrastructure [11]. Exacerbating
the challenge is the fact that companies are creating a massive
volume of digital assets but are rarely examining their organized
storage and retrieval methods.
SIGNIFICANCE OF THE PROBLEM
DAM systems are still relatively new, but organizations have
started realizing the importance and need for digital asset
management. The Gartner Group affirms that only a limited
number of technically advanced commercial content providers use
DAM systems today to digitally construct, store and distribute
rich media content in single medium forms [7]. The systems also
have limited corporate use in advertising firms and marketing
departments. Gartner predicts that by 2005 more than 25% of all
the enterprises with commercial internet operations will use DAM
systems. By 2010, more than 45% of all the enterprises with
commercial internet operations will use DAM systems.
Recently reported cases provide evidence that companies have
started investing in technology for DAM. For example, the Coca
Cola company has bought technology from IBM for its digital
advertisement archives, which contain 9,000 graphical images,
7,000 scanned documents and more than 25,000 corporate videos
and television advertisements [13]. The technology includes
search tools for retrieving, updating, managing and disseminating
historical records online, including the company's famous
marketing and advertising icons.
Another case is that of the Shoah Foundation. Steven Spielberg
established the Shoah Foundation with the mission to videotape
and preserve the testimonies of the Holocaust survivors and
witnesses. The foundation has collected more than 50,000
eyewitness testimonies in 57 countries and 32 languages [12]. The
challenge now is to manage, catalog and disseminate the video
and digital collection of testimonies of those survivors. These
digital assets of the Shoah Foundation are cataloged with lists of
keywords, text summaries describing the survivors, related
documentaries focusing on topics such as the ghettos or labor
camps they lived in, and past and present photos of them and their
families.
The following summarizes the major challenges that are faced
during any wide adoption of DAM.
Storage: One of the fundamental problems is physical
deterioration of storage devices that store digital information.
Magnetic media are particularly subject to data loss and
errors. There is also the important question of hardware and
software obsolescence and the future availability of today's
technologies.
Procedural Issues: Technical problems are further complicated
when resources to be preserved need to be transformed into
digital form. Digitization of paper analogs for access and
preservation is a time consuming and labor intensive task.
Beyond this technical problem, there is a host of financial,
legal, and policy problems.
Security: Securing digital assets against misuse, theft, or
damage is an ongoing concern of organizations.
Copyright: One of the major legal issues associated with
electronic assets is the actual scanning or digitizing of
materials. Copyright holders have the exclusive right to
reproduce, distribute, and display their work. Ferullo [3]
points out that digitizing and posting infringes on these
exclusive rights.
Distribution: Digital assets will be utilized to the fullest only
when they can be distributed via proper communication
channels to the intended users.
Infrastructure: DAM requires a robust IT infrastructure to
support creation and distribution.
Human Factors: While the purpose of DAM is to provide
greater efficiency, getting users to adapt to the new
environment can be a challenge. This is an important issue
because many DAM solutions require a change in work
processes before the users see any benefits [6].
The requirement to manage and deliver digital assets is becoming
critical to almost all content-related applications, due to the
evolution of the Internet and growth of digital assets. Enterprises
need plans to benefit from the new world of rich, valuable digital
assets that will affect everything from their internal processes and
customer relations to their web site and telecommunications
infrastructures.
AN XML SOLUTION
Widespread use of rich media has spurred the growth of the DAM
market. Frost and Sullivan, a market analysis firm, claims the
average user in a media enterprise looks for a media file 83 times
a week and fails to find it 35% of the time [5]. Canto Software
research predicts that DAM solutions will drop that figure to 5%
[9]. With the growth of internet publication, document metadata is
increasingly important because it provides contextual information
about digital assets necessary for customization of the content. In
response, Adobe Systems plans to unveil new metadata
technology designed to ease the process of applying content to
multiple types of media. The XMP (Extensible Metadata
Platform) provides a framework for standardizing the creation and
processing of document metadata across publishing workflows,
according to Adobe officials [10].
A review by Doering [2] provides the insight into methods for
digital asset management. According to Doering, a DAM solution
must include the following critical features:
Indexing: As content is generated and stored, it is indexed
according to various possible criteria. Metadata does not
merely describe by whom, when, and how a piece of content
was created; it can also be used to interpret and catalog the
content itself.
Rights Management: Includes handling rights to the content or
restricting the use of the content by the purchaser/end-user.
This might occur, for example, with corporate information or
licensed images from a third party incorporated into the
content.
Reuse: With a viable DAM system in place, the internal
content developer can research and select appropriate content
for reuse in new content. This represents a significant savings
potential for the companies.
Review: A final benefit of an online catalog with a DAM
system is the ability to review older content more easily.
3.2 The XML/Metadata Approach
By incorporating a DAM system, a company gains both the
savings from reusing content as well as revenue from continued
sales of the same elements. According to Fontana [4], XML
databases serve in a complementary role to traditional databases
especially as XML becomes prevalent. Nearly 85% of large
corporations are expected to move all their web-based content to
XML over the next three years.
Fundamentally, two high-level approaches may be adopted for
implementing XML databases.
1) XML-enabled database: In an XML-enabled database, the
documents are stored in constituent fragments. Here the XML
data is stored in object relational form and one can use an XML
SQL Utility (XSU) or SQL functions and packages to generate the
whole document from its object relational instances.
2) Native XML database: In a native XML database approach,
XML is not fragmented but rather is stored as a whole in the
native XML database. This means that documents are stored,
indexed, and retrieved in their original format, with all their
content, tags, attributes, entity references, and ordering preserved.
In this technique, the XML Database is designed to handle XML
in its native format, thereby speeding access to data by eliminating
the need to transform it into the rows and columns of a standard
relational database.
Biggs [1] suggests there are three principal reasons to implement
a native XML database: 1) enterprises today use a mix of data,
such as the data housed in object-oriented databases and
unstructured data that needs to be exchanged with partners (native
XML databases can leverage all of these disparate sources); 2)
XML databases can boost processing performance due to XML-based
transactions; and 3) digital assets described in the XML
format can be used by other applications.
3.3 A Technique Using Native XML
When developing a native XML solution, certain steps should be
followed:
1) Identify the need for DAM
2) Know your assets: Identify various assets, define and
understand their use in the organization
3) Define search needs and key attributes of your assets
4) Capture the objects (digitized) and data about the objects
(metadata)
5) Process: Generate and store XML files associated with each
object
The basis of this technique lies in creation and usage of semi-structured
metadata stored in an XML format to manage digital
assets efficiently. When the structure of the data is irregular and implicitly given by the document itself, not by a global schema, the data is often referred to as semi-structured. The ability of XML to handle dynamic data provides leeway for the applications
to use semi-structured data to describe their digital assets.
XML is becoming the de facto data exchange format for the Web.
The XML enclosing metadata can be either stored in the database
for future retrieval or be easily transferred over the Internet to any
other systems.
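For illustration, the short Python sketch below builds one such metadata document for a digital asset, using the descriptive parameters the POC later stores (File Name, Type, Author, Creation Date, Description, Keywords, Comment); the element names and the helper function are hypothetical, not taken from the POC code.

# Illustrative sketch (not the POC code): build the XML metadata document that
# would be stored in the database, or shipped to another system, for one asset.
import xml.etree.ElementTree as ET

def build_asset_metadata(file_name, asset_type, author, creation_date,
                         description, keywords, comment):
    asset = ET.Element("asset")
    ET.SubElement(asset, "FileName").text = file_name
    ET.SubElement(asset, "Type").text = asset_type
    ET.SubElement(asset, "Author").text = author
    ET.SubElement(asset, "CreationDate").text = creation_date
    ET.SubElement(asset, "Description").text = description
    kw = ET.SubElement(asset, "Keywords")
    for k in keywords:                      # keywords are a multi-valued property
        ET.SubElement(kw, "Keyword").text = k
    ET.SubElement(asset, "Comment").text = comment
    return ET.tostring(asset, encoding="unicode")

print(build_asset_metadata("conveyor.avi", "Video", "PLM Lab", "2002-10-01",
                           "Assembly line conveyor model",
                           ["assembly line", "conveyor"], "Draft capture"))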
The Oracle 9i DBMS (used for the Proof of Concept) provides an
"XMLType" data type that can be used to store and query XML
data in the database. The XMLType has member functions one
can use to access, extract, and query the XML data using XPath
expressions, a standard developed by the World Wide Web
Consortium to traverse XML documents.
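The sketch below gives a flavour of the kind of XPath selection such a query performs; it uses Python's ElementTree (which supports a limited XPath subset) purely as an illustration and does not reproduce Oracle's SQL or XMLType syntax.

# Illustration only: an XPath-style selection over stored asset metadata.
import xml.etree.ElementTree as ET

catalog = ET.fromstring(
    "<catalog>"
    "<asset><FileName>conveyor.avi</FileName><Type>Video</Type></asset>"
    "<asset><FileName>robot_arm.ppt</FileName><Type>Presentation</Type></asset>"
    "</catalog>")

# Select the file names of all video assets.
for a in catalog.findall(".//asset[Type='Video']"):
    print(a.findtext("FileName"))           # -> conveyor.avi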
3.4 Proof of Concept
This study analyzed existing approaches for non-text based digital
asset management and implemented a DAM solution by applying
a native XML Database technique. Meta-data of digital assets is
often semi-structured and the digital files are of varied types.
XML databases are most appropriate for managing diverse types
of semi-structured digital assets.
For this project, the Proof of Concept (POC) was developed
using facilities and resources of the University's Digital
Enterprise Center (DEC) at the School of Technology. The
Product Lifecycle Management (PLM) lab at DEC creates,
simulates and shares digital information related to a company's
products, processes and resources. These digital assets encompass
graphical images, presentations, and video/audio clips of
manufacturing processes representing manufacturing models. A
digital asset produced in the PLM lab is the intellectual property
of the DEC and needs to be managed for future use or research.
Though the demonstration focused on the manufacturing process
models in the PLM lab, the XML Database technique can be
applied to any form of digital asset.
Since the potential file population is very vast and is of varied
types, the POC restricted the sampling to the following categories
of digital assets: Audio files, Video files, Images, and Text based
files (presentation slides, Word documents, Acrobat files). The
POC confined the sample to a limited number of files describing
assembly line parts available from the PLM lab.
Metadata was stored for each of the digital objects of the assembly
line. The global parameters and sub-parameters used to describe
the digital files included the following: File Name, Type (Image,
Audio, Video, Word etc), Author, Creation Date, General
Description, Keywords, and a Comment. A keyword search
capability was added for searching within these and other
parameters.
The Proof of Concept application was developed to provide a
storage, search and retrieval facility to manage the digital assets of
the PLM lab using the School of Technology's software and
hardware resources. The application provides a web-based
interface for convenient management of the digital models and has
an "n-tiered" architecture. The backend database is Oracle 9i with
native XML support. The web interface was developed using JSP
and the middle tier was incorporated using Java Servlets. Analysis
was conducted with the following steps:
Various types of data (heterogeneous data) were collected for
demonstration purposes.
The validity of metadata was checked before entering it into
the system.
Upon validation, data was entered in the system (Oracle 9i
database) through the web-interface.
With the appropriate search parameters, the data entered could
be searched for and retrieved. Depending on the requirement,
this retrieved data could be viewed, reused or repurposed.
Providing different search criteria tested consistency and
reliability in the retrieval of the asset.
The following technologies were used to develop the POC:
Apache Webserver 1.3; Apache JServ 1.1; JDK 1.1; Oracle 9i
(9.0.0.2); XML; Java, JSP, Servlets. Tools used for development
purposes included: ERwin database design software; Borland
Jbuilder; Microsoft Front Page; and Oracle Enterprise Manager.
The POC was developed with a three-tier architecture as shown in
Figure 1. The first tier of the architecture presents an interface to
the user to facilitate access to digital files. The interface is web-enabled
and was developed using Java Server Pages. This tier
provides a user-friendly navigation through the site for managing
digital files, including screens for inserting, deleting and
searching on file data.
Figure 1: 3-Tier Architecture for POC (Tier 1: web-based user interface; Tier 2: web and application server performing XML generation and validation; Tier 3: database holding the XML metadata files for digital assets and the digital asset storage).
The middle tier of the architecture consists of Servlets and Java
helper classes. Servlets direct the flow of the control and provide
database connectivity for the application. In addition, the business
logic is implemented in Servlets. Helper classes are used by
Servlets to create and manage user-defined Java objects. Servlets
and JSPs are deployed on an Apache Web server and Apache
Jserv.
The third tier of the architecture is the database layer itself. As
noted previously, the POC uses the XML capabilities of the
Oracle 9i database management system.
OBSERVATIONS AND EVALUATION
The XML database technique stores attributes of digital assets in
the database in the form of an XML file. While the XML resides
in the database system, digital assets might be files in a file system
or a BLOB in a relational database. This demonstration of the
technique stores digital assets in a file system.
XML databases provide some features that are analogous to
conventional text databases. Common to all native XML
databases are indexes, which allow the query engine to jump to a
particular point in an XML document. This gives such databases a
tremendous speed advantage when retrieving entire documents or
document fragments. This is because the database can perform a
single index lookup and retrieve the entire document or fragment
in a single read.
Normalizing data for an XML database is largely the same as
normalizing it for a relational database. The programmer needs to
design the documents so that no data is repeated. One difference
between native XML databases and relational databases is that
XML supports multi-valued properties while (most) relational
databases do not. This makes it possible to "normalize" data in a
native XML database in a way that is not possible in a relational
database.
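As a small illustration of such a multi-valued property, the fragment below keeps several keywords inside a single asset document, where a normalized relational design would typically need a separate keyword table and a join (the element names are hypothetical):

import xml.etree.ElementTree as ET

asset = ET.fromstring(
    "<asset><FileName>conveyor.avi</FileName>"
    "<Keywords><Keyword>assembly line</Keyword><Keyword>conveyor</Keyword></Keywords>"
    "</asset>")
print([k.text for k in asset.findall("Keywords/Keyword")])
# -> ['assembly line', 'conveyor']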
4.2 Native XML vs. XML-Enabled
A native XML database defines a (logical) model for an XML
document -- as opposed to the data in that document -- and stores
and retrieves documents according to that model. A native XML
database has an XML document as its fundamental unit of
(logical) storage, just as a relational database has a row in a table
as its fundamental unit of (logical) storage. It is not required to
have any particular underlying physical storage model. For
example, it can be built on a relational, hierarchical, or object-oriented
database, or use a proprietary storage format such as
indexed, compressed files.
An XML-enabled database has an added XML mapping layer
provided either by the database vendor or a third party. This
mapping layer manages the storage and retrieval of XML data.
Data that is mapped into the database is mapped into application
specific formats and the original XML meta-data and structure
may be lost. Data retrieved as XML is NOT guaranteed to have
originated in XML form. Data manipulation may occur via either
XML specific technologies (e.g. XPath) or other database
technologies (e.g. SQL). The fundamental unit of storage in an
XML-enabled database is implementation dependent. The XML
solutions from Oracle and Microsoft, as well as many third party
tools, fall into this category.
4.2.1 Advantages of Native XML
Native XML databases have several advantages over relational
databases. Since native XML databases do not have database
schemas, one can store similar documents with more than one
schema in the database at the same time. While one may still need
to redesign queries and convert the existing documents -- a non-trivial
process -- this may ease the transition process.
XML databases make search within the XML documents very
efficient. If the data is parsed into Document Object Model
(DOM) format, XPATH can be used. XML database solutions
usually add a text indexing system so that query performance is
improved.
4.2.2 Disadvantages of Native XML
Currently, only a few native XML databases enforce referential
integrity. The reason for this is that most native XML databases
do not currently support linking mechanisms, so there are no
references for integrity checking. Therefore, applications that rely
on referential integrity mechanisms of databases must enforce
these constraints themselves for XML databases. In the future,
many native XML databases will probably support linking
mechanisms and referential integrity.
Another disadvantage of XML databases is that while query
performance is better, update performance suffers. The reason is
that the index entries for a document must be deleted and new
entries created whenever a document is inserted or updated.
In general, an XML Database approach is better because it
supports the full power of XML. However, a major drawback is
performance degradation, as data must be constantly reparsed into
a DOM tree, wasting cycles and memory. Additionally, update
capabilities are weak, and finally, the lack of automated enforcement of integrity constraints places an unreasonable burden upon application programmers, increasing risks and costs.
CONCLUSION
Digital Asset Management (DAM) is an evolving field with a
great potential. With the evolution of computers and the Internet,
companies have been creating an enormous volume of digital
content in various formats. Managing this content, so that it can
be cataloged, searched and re-purposed, is extremely challenging
for organizations.
XML is a commonly used standard in internet applications.
Hence, representing the metadata of a digital content in an XML
format is, on the surface, a good design decision. XML, by its
very nature, provides for a complex, well-defined and yet
extensible structural representation of the metadata and
interoperability between various applications dealing with digital
assets. A major advantage of having XML natively in the database
is that one can perform the relational manipulation operations
such as insert, update, and delete on the whole or partial XML
and can also perform XML specific operations like XPATH
search and node modification using the power of SQL. This, and
other advantages give native XML databases an edge over
systems that don't use XML or that manage XML externally.
REFERENCES
[1] Biggs, M. (2001, December 3). Two ways to bring XML to
your databases. InfoWorld, Framingham, 23 (49), 20.
[2] Doering, D. (2001, August). Defining the DAM thing: how
digital asset management works. Emedia Magazine, Wilton,
14(8), 28-32. Retrieved February 27, 2002, from
http://proquest.umi.com/pqdweb?Did=000000077222944&F
mt=4&Deli=1&Mtd=1&Idx=5&Sid=12&RQT=309
[3] Ferullo, D. L. (2002). The challenge of e-reserves. Net
Connect, 48 (8), 33-35.
[4] Fontana, J. (2001, November 5). Upgraded database makes
the most of XML. Network World, Framingham, 18 (45), 29.
Retrieved February 27, 2002, from
http://proquest.umi.com/pqdweb?Did=000000088274369&F
mt=4&Deli=1&Mtd=1&Idx=7&Sid=3&RQT=309
[5] Frost & Sullivan. (n.d.). U.S. Digital Asset Management
Markets. Retrieved March 23, 2002, from
http://www.frost.com/prod/servlet/fcom?ActionName=Displa
yReport&id=A192-01-00-00-00&ed=1&fcmseq=1043213084248
[6] Garcia, K. (2001, November). Broadcasters starting to ease
into digital workflow. TVB Europe, 10(11), 22-24.
[7] Gilbert, M., Landers, G., Latham, L. & Lundy, J. (2001,
November 26). What's cool, what's hot: content technology
hype cycle. Gartner Advisory. Retrieved February 9, 2002,
from
http://gartner.adpc.purdue.edu/rasdquest/RESEARCH/RAS/
102700/102760/102760.html
[8] Ille, C. (2001, February 26). Market definitions: digital
document imaging. Gartner Group. Retrieved February 27,
2002, from
http://proquest.umi.com/pqdweb?Did=000000093522513&F
mt=3&Deli=1&Mtd=1&Idx=4&Sid=3&RQT=309
[9] Martin, N. (2001, September). DAM right! Artesia
technologies focuses on digital asset management. EContent,
Wilton, 24(7), 60-62.
[10] Moore, C. (2001, September 24). Adobe touts images.
InfoWorld, Framingham, 23(39), 36. Retrieved February 27,
2002, from
http://proquest.umi.com/pqdweb?Did=000000081956628&F
mt=3&Deli=1&Mtd=1&Idx=2&Sid=12&RQT=309
[11] Moore, C. (2001, September 24). Content management
plays role in cross-media publishing. Computer World.
Retrieved February 9, 2002, from
http://www.computerworld.com/storyba/0,4125,NAV47_ST
O64339,00.html
[12] Solomon, M. (2002, January 14). Managing the memories.
Computer World. Retrieved April 16, 2002, from
http://www.computerworld.com/storyba/0,4125,NAV47_ST
O67304,00.html
[13] Weiss, T. (2001, December 10). Coca-Cola ad legacy gets
help from IBM. Computer World. Retrieved February 9,
2002, from
http://www.computerworld.com/storyba/0,4125,NAV47_ST
O66495,00.html
| keyword search;metadata;multimedia;digital asset management;semi structured data;database;digital images;Digital Asset Management;XML database;storage and retrieval;native XML;DAM;heterogenous data;proof of concept |
7 | A Fair and Traffic Dependent Scheduling Algorithm for Bluetooth Scatternets | mechanisms and algorithms necessary to set up and maintain them. The operation of a scatternet requires some Bluetooth units to be inter-piconet units (gateways), which need to time-division multiplex their presence among their piconets. This requires a scatternet-scheduling algorithm that can schedule the presence of these units in an efficient manner. In this paper, we propose a distributed scatternet-scheduling scheme that is implemented using the HOLD mode of Bluetooth and adapts to non-uniform and changing traffic. Another attribute of the scheme is that it results in fair allocation of bandwidth to each Bluetooth unit. This scheme provides an integrated solution for both intra- and inter-piconet scheduling, i.e., for polling of slaves and scheduling of gateways. | Introduction
The Bluetooth [10] technology was developed as a replacement
of cables between electronic devices and this is perhaps
its most obvious use. But, it is the ability of Bluetooth devices
to form small networks called piconets that opens up a
whole new arena for applications where information may be
exchanged seamlessly among the devices in the piconet. Typically, such a network, referred to as a PAN (Personal Area
Network), consists of a mobile phone, laptop, palmtop, headset
, and other electronic devices that a person carries around
in his every day life. The PAN may, from time to time, also
include devices that are not carried along with the user, e.g.,
an access point for Internet access or sensors located in a
room. Moreover, devices from other PANs can also be interconnected
to enable sharing of information.
The networking capabilities of Bluetooth can be further
enhanced by interconnecting piconets to form scatternets.
This requires that some units be present in more than one piconet
. These units, called gateways, need to time-division
their presence among the piconets. An important issue with
the gateways is that their presence in different piconets needs
to be scheduled in an efficient manner. Moreover, since the
gateway cannot receive information from more than one piconet
at a time, there is a need to co-ordinate the presence of
masters and gateways.
Some previous work has looked at scheduling in a piconet
[2,5] and also in a scatternet. In [4], the authors define a
Rendezvous-Point based architecture for scheduling in a scatternet
, which results in the gateway spending a fixed fraction
of its time in each piconet. Such a fixed time-division of the
gateway may clearly be inefficient since traffic is dynamic.
In [9], the authors propose the Pseudo-Random Coordinated
Scatternet Scheduling (PCSS) scheme in which Bluetooth
nodes assign meeting points with their peers. The sequence
of meeting points follows a pseudo-random process that leads
to unique meeting points for different peers of a node. The
intensity of these meeting points may be increased or decreased
according to the traffic intensity. This work presents
performance results for various cases. In [11], a scatternet-scheduling
algorithm based on the concept of a switch table,
which can be dynamically adjusted based on traffic load, is
presented. In [1], the authors present a credit-based scheduling
scheme based on the SNIFF mode of Bluetooth, where
credits may be reallocated to cater to changing traffic.
Our scheduling scheme addresses the issues of fairness and
utilization of bandwidth. Since Bluetooth is a low-bandwidth
environment, it is important that bandwidth should be efficiently
utilized. Also, since a low bandwidth can easily lead
to starvation of flows, another metric we focus on is fairness.
We propose a distributed scatternet-scheduling algorithm that
is implemented using the HOLD mode [10] of Bluetooth
and adapts to non-uniform and changing traffic. This algorithm
provides an integrated solution for both intra- and inter-piconet
scheduling, i.e., for polling of slaves and scheduling
of gateways. The algorithm leads to a high bandwidth utilization
and results in a fair division of (a) the piconet bandwidth
between the slaves of a piconet and (b) the gateway presence
among different piconets.
In section 2, we discuss the Bluetooth technology. In
section 3, we present a definition of fairness in the context
of Bluetooth scatternets, which takes into account intra- and
inter-piconet max-min fairness. Section 4 describes the algorithm
and proves its fairness property. Section 5 presents
simulation results and section 6 presents the conclusions.
Bluetooth technology
The Bluetooth system [3] operates in the worldwide unlicensed
2.4 GHz Industrial-Scientific-Medical (ISM) frequency
band. To make the link robust to interference, it uses
a Frequency Hopping (FH) technique with 79 radio carriers.
It allows a raw data transmission rate of 1 Mbit/s.
Two or more Bluetooth units sharing the same channel
form a piconet. Each piconet consists of a master unit and
up to seven active slave units. The master unit polls the slave
units according to a polling algorithm and a slave is only allowed
to transmit after the master has polled it. The piconet
capacity is thus, shared among the slave units according to the
polling algorithm.
Furthermore, two or more piconets can be interconnected,
forming a scatternet. This requires a unit, called an inter-piconet
unit (gateway), to be a part of more than one piconet.
Such a unit can simultaneously be a slave member of multiple
piconets, but a master in only one, and can transmit and
receive data in only one piconet at a time; so participation in
multiple piconets has to be on a time-division multiplex basis
. The time of the gateway is, thus, also shared among the
piconets it belongs to. In this work, we assume that the gateway
can only be a slave in its piconets. If a gateway were
to be a master in a piconet, it would lead to the stoppage of
all transmission in the piconet when the gateway visits some
other piconet. Thus, we believe that the use of the gateway as
a slave is the most efficient method of scatternetting.
Fair allocation of bandwidth
As introduced in the previous section, units belonging to a
piconet share the piconet capacity according to the polling
algorithm used by the master. In an analogous manner, gateways
in a scatternet divide their time among their different
piconets, according to the "master-listening" algorithm they
use. It can be noted that there is a duality in this architecture
. On the one hand, a master divides its capacity among
the units of its piconet by using a polling algorithm. On the
other hand, a gateway shares its capacity among the piconets
it belongs to, on the basis of a scheduling algorithm it uses
for listening to the masters. The gateway can, then, be viewed
as a "virtual master" and its masters can be viewed as "virtual
slaves" forming a "virtual piconet", in which the polling cycle
is, actually, the "listening cycle" of the gateway. A graphical
interpretation of this duality is given in figure 1, in which the
solid line shows the actual piconets, and the dotted line shows
the virtual piconet.
Due to this duality, we design our scheduling scheme such
that the same scheduling algorithm is used for fair sharing of
both (a) the piconet capacity among slaves and (b) the gateway
time among piconets.
We now give a definition of max-min fairness [7]. We then
go on to define max-min fairness in the context of Bluetooth
scatternets, by considering (a) intra-piconet fairness, i.e., fairness
in division of piconet bandwidth among slaves (both
gateway and non-gateway) of a piconet and (b) inter-piconet
fairness, i.e., fairness in division of the gateway's presence
among its piconets. We first define a `feasible' rate distribution
since this is used in the definition of max-min fairness.
Definition 1 (Feasible). A rate distribution is feasible if rates
are non-negative, the aggregate rate is not greater than one,
and no unit receives a higher rate than required.
Definition 2 (Max-min fairness). An allocation of rates λ_1, λ_2, . . . , λ_s among s units is max-min fair if it is feasible, and for each unit i, λ_i cannot be increased (while maintaining feasibility) without decreasing λ_j for some other unit j for which λ_j ≤ λ_i.
The distribution of max-min fair rates depends upon the
set of rate demands (traffic generated) of the units. In the
following subsections, we discuss factors that determine the
max-min "fair share" of a slave (gateway or non-gateway).
We call these factors the Piconet Presence Fraction and the
Scatternet Presence Fraction and show how they may be used
to calculate the "fair share" for a slave in a scatternet.
3.1. Piconet presence fraction
Consider a piconet consisting of gateway and non-gateway
slaves in which the master has complete knowledge of the rate
demands of all slaves (an ideal master). Using this knowledge
, the master polls the slaves in a max-min fair manner
such that each slave gets its "fair share" of the master's
polling. We refer to the "fair share" received by a slave as the
"piconet presence fraction" (PPF) of the slave. The gateway
has a PPF for each piconet it belongs to.
Consider the piconets shown in figures 2(a) and 2(b), each
consisting of one gateway and two slaves, with the traffic rates
of each slave as shown. In figure 2(a) (Piconet I), the PPF of
each non-gateway slave is 0.2, while the PPF of the gateway
is 0.6. In figure 2(b) (Piconet II), the PPFs of the slaves are
0.2 and 0.4, while the PPF of the gateway is 0.4.
Figure 2. Piconets with traffic rates between master and each slave shown.
3.2. Scatternet presence fraction
A gateway will, in general, be a slave in multiple piconets and
may have different amounts of traffic to exchange with each
piconet. Consider an ideal gateway that has complete knowledge
of the rate demands of all its masters. The gateway can
then divide its presence among its piconets in a max-min fair
manner, giving each piconet a "fair share" of its presence. We
call this fair share the "scatternet presence fraction" (SPF) of
the gateway for the piconet. The importance of the SPF is that
a fair division of the gateway's presence among its piconets
can be achieved based on the SPF.
Consider the piconets of figure 2 again, but the gateway of
each of the piconets now connects them to form a scatternet,
as shown in figure 3. The traffic requirements are the same as
shown in figure 2. The SPF of the gateway is 0.5 in Piconet I
and 0.5 in Piconet II.
3.3. Fair share
We see that for a gateway to be fair, there are two kinds of
fairness it has to achieve: that dictated by the PPFs, which
achieves fairness between the gateway and the other slaves of
a piconet, and that of the SPFs, which distributes the presence
of the gateway between its piconets in a fair manner. Both
these kinds of fairness may not always be completely achievable
and this can lead to a change in the values of PPF and
SPF, as we now discuss.
We observe that an ideal master (as in section 3.1) does
not give a gateway more than the PPF of its polling. Thus,
if the SPF of a gateway is greater than its PPF for a piconet,
the gateway spends a fraction of its time equal to the PPF
in the piconet. The gateway cannot stay for a fraction equal
to its SPF in the piconet since it is limited by its PPF. Thus,
the extra scatternet presence fraction (the difference of the
SPF and the PPF) is redistributed in a fair manner among
the gateway's other piconets for which the SPF is less than
the PPF. This may increase the SPF of the gateway in the
other piconets. In other words, the gateway behaves as if
its SPF in a particular piconet is reduced to the PPF and
thus, its SPF in the other piconets increases. We refer to this
changed SPF as the "updated SPF" of the gateway in a piconet
.
Table 1
Calculation of fair share of the gateway in the two piconets of figure 3.

                      Piconet I   Piconet II
Actual traffic rate   0.7         0.6
PPF                   0.6         0.4
SPF                   0.5         0.5
Updated PPF           0.6         0.4
Updated SPF           0.6         0.4
Fair share            0.6         0.4

Figure 3. Gateway shared between two piconets; traffic rates between slaves and the master are shown.

Similarly, an ideal gateway does not stay a fraction of time more than the SPF in a piconet. Thus, if the PPF of the gateway in the piconet is greater than the SPF, the gateway spends
a fraction of time equal to the SPF in the piconet. The remaining
PPF of the gateway (the difference of the PPF and the
SPF) is redistributed in a fair manner among the other slaves
of the piconet (if this other slave is a gateway, it is redistributed
to it if its SPF is greater than its PPF in the piconet). This
may increase the PPF of these slaves. We refer to this changed
PPF as the "updated PPF" of the slave in the piconet. In case
there is no such redistribution, the updated PPF is equal to the
PPF and the updated SPF is equal to the SPF.
The fair share can now be calculated from the "updated
PPF" and the "updated SPF" as the minimum of these two
quantities. Note that all these quantities (PPF, SPF, updated PPF, updated SPF and fair share) are dependent on the traffic.
Any change in traffic demand of a unit may lead to a change
in some of these quantities. We explain the calculation of the
fair share using some examples.
An example is given in table 1, which shows the actual traffic
rate, PPF, SPF, Updated PPF, Updated SPF and fair share
of the gateway in the two piconets of figure 3. In Piconet II,
the gateway has a PPF of 0.4, which is less than the SPF. In
Piconet I, the gateway has a PPF of 0.6 and an SPF of 0.5.
Thus, the extra scatternet presence fraction of the gateway in
Piconet II (the difference between the SPF and the PPF) is
given to Piconet I, which has a higher traffic rate than may
be allowed by the SPF. This is reflected in the "updated SPF"
values. Thus, the "fair share" of the gateway in Piconet I is
0.6 and in Piconet II is 0.4. The fair shares of the non-gateway
slaves are equal to their PPF.
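The SPF-side redistribution described above can be sketched as follows. This is an illustrative simplification (the helper name and the termination tolerance are ours, and only the case of surplus SPF being returned to the gateway's other piconets is handled); applied to table 1 it reproduces the updated SPFs and fair shares of 0.6 and 0.4.

def gateway_updated_spf(ppf, spf):
    # Cap the SPF at the PPF in every piconet and hand the surplus to the
    # piconets whose SPF is below their PPF, again without exceeding the PPF.
    upd = dict(spf)
    surplus = 0.0
    for p in spf:
        if upd[p] > ppf[p]:
            surplus += upd[p] - ppf[p]
            upd[p] = ppf[p]
    needy = [p for p in spf if upd[p] < ppf[p]]
    while surplus > 1e-9 and needy:
        inc = surplus / len(needy)
        surplus = 0.0
        for p in list(needy):
            room = ppf[p] - upd[p]
            give = min(room, inc)
            upd[p] += give
            surplus += inc - give            # increment that did not fit
            if give == room:
                needy.remove(p)
    return upd

ppf = {"I": 0.6, "II": 0.4}                  # table 1
spf = {"I": 0.5, "II": 0.5}
upd_spf = gateway_updated_spf(ppf, spf)
fair_share = {p: min(ppf[p], upd_spf[p]) for p in ppf}
print(upd_spf, fair_share)                   # -> roughly {I: 0.6, II: 0.4} for both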
Figure 4. Scatternet with two gateways.

Table 2
Calculation of fair share of the gateways G1 and G2 in the scatternet of figure 4.

Gateway G1            Piconet A   Piconet B   Piconet C
Actual traffic rate   0.4         0.6         0.1
PPF                   0.25        0.5         0.1
SPF                   0.4         0.5         0.1
Updated PPF           0.25        0.6         0.1
Updated SPF           0.25        0.65        0.1
Fair share            0.25        0.6         0.1

Gateway G2            Piconet B   Piconet D   Piconet E
Actual traffic rate   0.7         0.2         0.4
PPF                   0.5         0.2         0.4
SPF                   0.4         0.2         0.4
Updated PPF           0.4         0.2         0.4
Updated SPF           0.4         0.2         0.4
Fair share            0.4         0.2         0.4

As another example, consider the scatternet consisting of 5 piconets with the traffic rates shown in figure 4. As shown in table 2, gateway G2 has a PPF of 0.5 and an SPF of 0.4 in Piconet B. Thus, the "updated PPF" of G2 in Piconet B is 0.4. The extra PPF (= PPF - SPF) is added to the PPF of gateway G1 in Piconet B. The "updated PPF" of G1 in Piconet B is,
thus, 0.6.
Also, gateway G1 has a PPF of 0.25 and an SPF of 0.4
in Piconet A. Thus, the "updated SPF" of G1 in Piconet A is
0.25. The extra SPF (= SPF - PPF) is added to the SPF of G1 in Piconet B. The "updated SPF" of G1 in Piconet B is, thus,
equal to 0.65. The fair shares can now be easily calculated.
A division of the master's polling and the gateway's presence
based on PPF and SPF as described in this section takes
into account the traffic demands of the slaves and the gateways
and leads to fairness in the scatternet. In the next section
, we introduce and describe an algorithm that aims to
achieve such a fair distribution of bandwidth.
Description of algorithm
We first explain how the algorithm works in the case of a single
piconet with no gateway. We then extend the algorithm
to the case of a scatternet and explain how the coordination
between the master and the gateways is achieved. We then
prove the fairness of the algorithm.
4.1. Single piconet with no gateways
The polling algorithm is based on the master estimating the
traffic rate between each slave and itself. This traffic rate is
the sum of the traffic rates from the master to a slave and in
the reverse direction. We assume, in order to simplify the explanation
of the algorithm, that traffic flows only from slaves
to master; masters generate no traffic to slaves. The same algorithm
also applies with little change when traffic flows in
both directions (explained later).
The master uses a Round Robin polling scheme, with the
modification that a slave is skipped if it does not belong to the
"active list" of the master. The slaves are moved in and out
of the active list on the basis of two variables that the master
maintains for each slave. These two variables are:
r
estimate of the rate of traffic generated by the slave;
N
estimate of the queue length of the slave.
When a slave is polled, the master-slave pair gets a chance to exchange a maximum amount of data in each direction, denoted by M. After each such polling phase, the master updates the values of N and r in the following manner:

For the slave just polled:

N = N + r·Δ - x,                                        (1)

r = α·r + (1 - α)·x/T,        if x < M,
r = α·r + (1 - α)·x/T + β,    if x = M.                 (2)

For other slaves:

N = N + r·Δ,                                            (3)

where Δ is the time elapsed since the last update, x is the amount of data exchanged during the poll phase, T is the total time elapsed since the last poll of the same slave, α is a parameter used to smooth the rate estimation and β is a parameter used to probe for more bandwidth. Note that x is the actual
amount of data exchanged, which may be less than or equal
to M, depending upon the number of packets in the slave's
queue. Since N is an estimate of the slave's queue length and
r
is an estimate of the rate at which traffic is generated, N is
increased at the rate of r (as in equations (1) and (3)). Also,
when a slave is polled, N is decreased by the amount of data
exchanged (equation (1)).
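A minimal sketch of this bookkeeping is given below; it applies equations (1)-(3) directly, while the constants, units and class names are assumptions made only for illustration (this is not the authors' implementation; the values of α, β and the initial estimates are those given later in this section and in section 4.3).

ALPHA, BETA = 0.65, 0.15        # smoothing and probing parameters
DH5 = 5                          # one DH5 packet, in an illustrative unit of data
THRESHOLD = 3 * DH5              # "a multiple of a DH5 packet"
M = THRESHOLD                    # maximum data exchanged per poll, in each direction

class Slave:
    def __init__(self):
        self.N = THRESHOLD       # queue-length estimate, initialised to the threshold
        self.r = 0.25            # rate estimate, initialised to 0.25
        self.last_poll = 0.0     # time of the last poll of this slave

def update_estimates(slaves, polled, x, now, delta):
    # delta: time since the last update; now - polled.last_poll: T in equation (2)
    for s in slaves:
        s.N += s.r * delta                       # equations (1) and (3): N grows at rate r
    polled.N -= x                                # equation (1): drained by the data exchanged
    T = now - polled.last_poll
    polled.r = ALPHA * polled.r + (1 - ALPHA) * x / T
    if x == M:
        polled.r += BETA                         # probe for more bandwidth
    polled.last_poll = now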
After updating these values, the master determines the
changes to be made to the active list. A slave is added or
deleted from the active list depending upon whether its value
of N is greater or smaller than a "threshold". The value of
this threshold is the minimum amount of data that the master
would like the slave to have in order to poll it. We choose
a value equal to a multiple of a DH5 packet for the threshold
since this packet incurs least overhead (the selection of
the value of the threshold is discussed further in the next subsection
). Thus, a slave is present in the active list if the master's
estimate of the value of N for the slave is greater than the
threshold. This makes the simple Round Robin polling strategy
adaptive to traffic and enables it to utilize bandwidth efficiently, even when slaves have different rates of traffic. The
maximum amount of data that can be exchanged at each poll,
M
, is also set equal to the threshold. Note that if the amount
of data, x, in the slave's queue is less than the threshold, the
polling of the slave ends after this data has been exchanged.
A FAIR AND TRAFFIC DEPENDENT SCHEDULING
13
If the value of N is less than the threshold for all the slaves,
then the slave whose value of N is estimated to take the smallest
time to reach the threshold is polled, i.e., the slave for
which the value of (Threshold
- N)/r is the smallest.
The master now goes to the next slave according to the
Round Robin ordering of slaves. If the slave is present in the
active list, it is polled. Else, the procedure is repeated for the
next slave in the Round Robin ordering.
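Continuing the sketch above, the active-list decision and the fallback when no slave exceeds the threshold can be written as follows (again an illustrative simplification: a real implementation keeps a rotating Round Robin pointer rather than always taking the first active slave in the list).

def next_to_poll(rr_order, threshold):
    # rr_order: slaves in Round Robin order, starting after the last slave polled.
    active = [s for s in rr_order if s.N > threshold]
    if active:
        return active[0]                         # next active slave in the cycle
    # nobody is above the threshold: pick the slave expected to reach it soonest
    return min(rr_order, key=lambda s: (threshold - s.N) / s.r)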
Also, note that if the amount of data sent by the slave x is equal to M, r is increased by a small amount, β. This is basically an attempt by the slave to probe for more bandwidth if it is able to send data at the present rate. The usefulness of this increase is evident in the proof of fairness in the next section. The value of β chosen is 0.15 and that of α is 0.65. We also discuss the rationale behind choosing these values in the proof of fairness.
If traffic flows in both directions, i.e., from the slaves to
the master and in the reverse direction, x is the average of
the amount of data exchanged in the two directions, r refers
to the average of the rate-estimations of the two directions
and N refers to the average of the queue length estimates of
the two directions. Also, if the number of packets in either
direction is less than the threshold, the polling of the slave
continues till in both directions, (a) there is no more data to
send or (b) amount of data equal to the threshold has been
exchanged.
The initial value of N is set to the threshold (to ensure that
slaves get polled at the beginning) and that of r is set to 0.25
(as a reasonable value). Note that the algorithm converges to
the fair share, but a careful selection of initial values makes
the initial convergence faster.
Another advantage of such a scheme is that it may allow
the master to go into a power-saving mode if it realizes that no
slave has sufficient packets to send, i.e., if N is smaller than
the threshold for all slaves. Though we do not explore this
option in this paper, it may be useful since Bluetooth devices
are expected to work in power-constrained environments.
To improve the algorithm, we add a heuristic to it. The
maximum number of polling cycles that a slave is not polled
is bounded. If a slave generates a large burst of data occasionally
and then does not generate any data for a long time,
the value of r for the slave may be very low. This may cause
the value of N for the slave to be lower than the threshold
for a long time. By limiting the maximum number of cycles
missed by the slave, we make sure that such a behavior of the
slave does not lead to its starvation. In the experiments, this
value is taken to be equal to 5 cycles. We now explain how
the above algorithm works in a scatternet.
4.2. Scatternet
Scheduling of gateways using Rendezvous Points.
Before
describing how the algorithm works in a scatternet, we briefly
discuss the notion of Rendezvous Points (RPs) described
in [4]. A RP is a slot at which a master and a gateway have
agreed to meet, i.e., at this slot, the master will poll the gateway
and the gateway will listen to the master. In [4], RPs are
implemented using the SNIFF mode of Bluetooth, but we implement
RPs using the HOLD mode [10]. In the HOLD mode,
the slave does not have to listen to the master for a certain time
period and may use this time to visit other piconets. Prior to
entering the HOLD mode, the master and the slave agree on
the time duration the slave remains in the HOLD mode. We
implement our algorithm using RPs as described below.
The working of the algorithm in a scatternet is very similar
to its operation in a piconet. The master continues to poll the
non-gateway slaves in the same manner as described in the
previous section with the modification that a gateway is polled
at a Rendezvous Point. Each RP is a slot at which a particular
gateway is polled and a master has different RPs for each of its
gateways. These RPs are always unique (i.e., a master cannot
have the same RP with more than one gateway). Since the
gateway must be polled at the RP, this has implications in the
polling of the other slaves (discussed later). Once a gateway
has been polled, the master continues with the polling of the
other slaves in the same manner as described in the previous
section, i.e., it checks its active list to see if the next slave in
the polling cycle is to be polled and so on.
In order to divide its time among different piconets in a
fair manner, the gateway performs similar calculations as described
in the earlier section for the master. The gateway
maintains values of N and r for each piconet it belongs to and
these values are updated each time a gateway is polled (i.e.,
at each RP). Thus, the calculations performed by a gateway at
each RP are:
For the piconet in which the gateway was just polled:

N = N + r·Δ - x,                                        (4)

r = α·r + (1 - α)·x/T,        if x < M,
r = α·r + (1 - α)·x/T + β,    if x = M.                 (5)

For other piconets:

N = N + r·Δ,                                            (6)

where Δ is the time elapsed since the last update, x is the amount of data exchanged during the poll phase, T is the total time elapsed since the gateway was polled in the same piconet, and α and β are as defined earlier.
Moreover, at each RP, the gateway and the master negotiate
the next RP between them. The assignment of this next
RP takes into account the fairness between (a) the gateway
and other slaves in a piconet and (b) the presence of the gateway
in different piconets. Also, we again employ a heuristic
that improves the algorithm. When the next RP is being negotiated, we keep a bound on the maximum value this can take.
This prevents a piconet from not being visited by a gateway
for a long time. The maximum value of this next RP used in
our experiments is 400 slots.
We now see how the master and the gateway use the information
that they have to achieve fairness in the scatternet.
When a gateway is polled at a RP, the gateway and the master
do the following.
(i) Gateway. The gateway calculates the number of slots, N_thresh, after which N for the piconet will become greater than the threshold; N_thresh = (threshold - N)/r, where threshold is as explained in the previous section, and N and r are values maintained by the gateway for the piconet. The gateway makes use of this value and does not visit a piconet till its estimate of N for the piconet becomes greater than the threshold. This is similar to the algorithm used by the master in which a slave is not polled till the master's estimate of N for the slave becomes greater than the threshold. Thus, the gateway tries to divide its time between the piconets in a fair manner, i.e., according to the SPFs. Note that N_thresh may be negative if N is greater than the threshold. Also, N_thresh is allowed to have a maximum value of 400.
Moreover, each time a gateway visits a piconet, it knows
the RPs for the other piconets it belongs to (except right
at the beginning or when the gateway is added to another
piconet).
(ii) Master. The master calculates the number of slots after
which the gateway can be polled such that the fairness
with other slaves is maintained. It adopts the following
procedure to achieve this:
It maintains a counter, num_slots (which is initialized
to 0) and checks the value of N for each slave, in a cyclic
order, starting from the slave after the current gateway in
the cyclic order to the slave before the current gateway.
The master checks if the value of N for the slave will be
greater than the threshold after num_slots slots. If this
condition is true, num_slots is incremented by twice the
value of the threshold. After incrementing num_slots, the
master also checks to see if it has a RP with any gateway
whose value is equal to num_slots and increments
num_slots by twice the value of the threshold if this is
true. This ensures that the master has a unique RP for
each of its gateways. Note that num_slots is incremented
by twice the value of the threshold since the master expects
to exchange threshold slots of data with a slave in
each direction.
The master uses the above procedure to estimate the number
of slaves who will have their value of N greater than
the threshold when the master polls the slaves in their
cyclic order starting from the gateway just polled. The
value of num_slots determines the number of slots which
the master expects to use in polling the other slaves in
one cycle before polling the gateway again and is thus,
used by the master to maintain fairness between the gateway
and the other slaves in the piconet. Again, note that
num_slots is allowed to have a maximum value of 400.
The master and the gateway now exchange the information
they have to calculate their next RP. This exchange takes
place using the LMP_hold_req PDU of the LMP (Link Manager
Protocol) layer. This PDU carries a hold instant and a
hold time, which are used to specify the instant at which the
hold will become effective and the hold time, respectively.
When the master is sending a packet to a gateway, the value
of num_slots can be sent after hold instant and hold time in
the packet. The master also sends the values of its RPs with
its other gateways in the packet. Similarly, the gateway sends
the master the values of its RPs with other piconets and the
value of N
thresh
also in an LMP_hold_req PDU. The master
now knows all the RPs of the gateway; similarly, the gateway
knows all the RPs of the master.
Note that the above information exchange requires a minimal
change in the Bluetooth specifications that the contents
of the LMP_hold_req PDU need to be enhanced. This PDU is
1-slot in length; thus, some bandwidth of the master is wasted
in sending these PDUs. This wasted bandwidth can be reduced
by increasing the value of threshold, i.e., the maximum
data that a slave and a master may exchange in each direction
during one poll of the slave. On the other hand, a large
value of the threshold will lead to larger delays for packets.
Thus, we have a tradeoff here. We choose a threshold value
equal to three times a DH5 packet. The effect of this wasted
bandwidth can be seen in the experiments section where the
piconet capacity used is slightly less than 1. Note that we
pay a small price here to get perfect coordination between the
master and the gateway and also to get a high degree of fairness
in the system, as the experiments later demonstrate.
Now, the master and the gateway both have complete information
. So, each of them calculates the next RP in the
following manner:
They take the maximum value out of num_slots and N_thresh, and as long as this value is the same as one of the RPs (note that all relevant RPs are known to both the master and the gateway), the value is incremented by 2 × threshold. The value
at the end of this small procedure is the next RP between the
gateway and the master. Since this value takes into account
both N_thresh and num_slots, it incorporates both the fairness
of the master's polling and the gateway's presence.
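The negotiation can be condensed into the following sketch; the helper names, the slave representation (objects with N and r attributes, as in the earlier sketch) and the treatment of RPs as plain slot offsets are assumptions made for illustration, and the LMP exchange itself is not modelled.

MAX_RP = 400                                     # bound on N_thresh, num_slots and the RP

def n_thresh(N, r, threshold):
    # gateway side: slots until its estimate for this piconet crosses the threshold
    return min(int((threshold - N) / r), MAX_RP)

def num_slots_estimate(other_slaves_in_cycle, rps_with_other_gateways, threshold):
    # master side: slots it expects to spend on the other slaves before the next RP
    num_slots = 0
    for s in other_slaves_in_cycle:              # cyclic order after the current gateway
        if s.N + s.r * num_slots > threshold:    # slave expected to be active by then
            num_slots += 2 * threshold
            while num_slots in rps_with_other_gateways:
                num_slots += 2 * threshold       # keep RPs with gateways unique
    return min(num_slots, MAX_RP)

def next_rp(num_slots, nthresh, all_known_rps, threshold):
    rp = max(num_slots, nthresh)
    while rp in all_known_rps:                   # avoid every RP known to either side
        rp += 2 * threshold
    return rp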
Note that the value of num_slots calculated by the master is
just an estimate (the master assumes that each slave included
in the calculation of num_slots will exchange threshold slots
of data with the master in each direction, but this may not be
true). Thus, the master may have polled all the slaves that had
to be polled before the RP of the gateway (according to the
estimate in the calculation of num_slots) and still be left with
some slots before the RP. In this case, the master just continues
polling the slaves in their cyclic order and polls the gateway
when the time for the RP arrives. Note that this means
that the master may have to force a slave to send a packet
smaller than a certain length. For example, if two slots are
left for the RP, then the master will send a 1-slot packet and
ask the slave being polled to do the same. Note that the Bluetooth
header has 4 bits to represent the packet type and these
can represent 16 packet types. For ACL links, 10 (7 data,
3 control packets) of the packet types are defined. We use 2
of the remaining bit sequences to send packets that force the
slave to send packets smaller than or equal to a certain length.
This is shown in table 3.
Table 3
Procedure adopted by the master if slots left for the RP is less than 10.

Slots left for RP   Maximum length of packet   Maximum length of packet
                    sent by master             sent by slave
2                   1                          1
4                   1                          1
6                   3                          3
8                   3                          3

From table 3, we see that this procedure is adopted if the number of slots left for the RP is less than 10 (if the number of slots left for the RP is greater than or equal to 10, then the
slave's packet length does not have to be restricted). Thus,
if the slots left for the RP is 2, the master can send a packet
of maximum length
= 1 and the gateway can send a packet
of maximum length
= 1 and so on. Note that for reasons of
fairness, the maximum packet length for the master and the
gateway is the same. Since the master needs to restrict the
maximum length of the gateway's packet to either 1 or 3 (as
shown in table 3), we need 2 packet types to achieve this. This
procedure effectively suspends the polling of a slave to honor
a RP with a gateway. The polling of the slave continues after
the gateway has been polled.
In addition, a gateway may lose a slot in switching from
one piconet to another. This loss is unavoidable since piconets
are in general, not synchronized in time. In the experiments in
the paper, we set the value of the threshold to three times the
payload of a DH5 packet, which can give a switching loss of
about 3% at heavy loads (every 2 × threshold slots, the gateway
loses about one slot in switching). At light loads, this switching
loss does not lead to inefficiency since the sum of the fair
shares of the gateway in all its piconets is less than 1 and even
after the switching loss, the gateway is able to obtain its fair
share. The simulations in the next section do not take this
switching loss into account and thus, the bandwidth received
by the gateway under heavy loads will be a little smaller than
the one shown in the results.
4.3. Proof of fairness
We now prove that the above algorithm leads to a max-min
fair distribution of the bandwidth of a scatternet among units.
We start by proving this in the case of a piconet. In the next
step, we will extend the proof to the general case of a scatternet
.
4.3.1. Fairness in a piconet
Let us introduce the following notation:
S : number of slave units in the piconet;
g_i : rate-demand of the ith unit;
λ_i : rate achieved by the ith unit;
r_i : rate-estimation of the ith unit (as defined in equation (2)),
where λ_i and r_i are average values.
Slave unit i is referred to as "satisfied" if it achieves its rate demand, i.e., λ_i = g_i; else, the slave unit is referred to as "unsatisfied". Also, in the proof that follows, "slot" refers to "Bluetooth slot"; "unit" and "slave unit" may be used interchangeably.
If there is one slave unit in a piconet, then it will always get
polled and hence, the algorithm is fair. We prove the fairness
when there are two or more slave units.
We first make the following observations:
(a) If a unit has a rate-estimation r ≥ 0.25, it will never
achieve a lesser rate than any other unit.
r is an estimation of the average number of slots of traffic that a master-slave pair will generate per slot in each direction. Thus, a rate of 0.25 means that a master-slave pair generates, on the average, "threshold" slots of traffic in each direction in every 4 × threshold slots. Suppose a piconet has two slaves, and the first has a rate-estimation r ≥ 0.25; then the first slave will be polled at least once in every 4 × threshold slots, i.e., will get on the average at least threshold polling slots out of every 2 × threshold, regardless of the r of the other slave (since N increases at the rate of r, N will increase by at least 0.25 × 4 × threshold = threshold; thus, the slave will enter into the "active list" in 4 × threshold slots). Thus, it will
never achieve a lesser rate than another unit. It is easy to see
that this property would be true if there were more than two
slaves (two slaves is the worst case).
(b) For β ≥ 0.1 and α ≥ 0.6, an unsatisfied slave will tend to a rate-estimation of at least 0.25.

For an unsatisfied slave, the second part of equation (2) (when x = M) is always used for updating the rate. Thus, if r_n is the nth rate-estimation:

r_(n+1) = α·r_n + (1 - α)·M/T_n + β.

This leads to (as n becomes very large):

r = (1 - α)·M·Σ_(k=0..n) α^(n-k)/T_k + β/(1 - α).

Since the first term is non-negative, r tends to at least β/(1 - α), which equals 0.25 for β = 0.1 and α = 0.6. Thus, for β ≥ 0.1 and α ≥ 0.6, for any value of T, the value of r tends to at least 0.25.
(c) As long as there is an unsatisfied unit, the utilization of the system capacity is 1 (for β ≥ 0.15 and α ≥ 0.65).

Consider a piconet consisting of seven slave units, in which the first unit, unit_1, is unsatisfied. From (a) and (b), unit_1 will never achieve a lesser rate than any other unit; this means that it will be polled at least once for each time the other slaves are polled. The value of T (as in equation (2)) for unit_1 is thus, at most, 14 × threshold. For this value of T and for β = 0.15 and α = 0.65, r for unit_1 tends to at least 0.5 (roughly M/T + β/(1 - α) = 1/14 + 0.15/0.35 ≈ 0.5). A value of r = 0.5 for a slave unit means that it can be polled all the time (since N increases at the rate of r, N will increase by at least 0.5 × 2 × threshold = threshold; thus, the slave will enter into the "active list" in 2 × threshold slots, which is also the time of its polling). Thus, the system capacity is totally utilized. If there were less than 7 slave units, the value of T would be smaller (than 14 × threshold), and r would tend to a higher value (than 0.5).
We choose values of α and β to satisfy the above properties, i.e., β = 0.15 and α = 0.65.
The following statements hold.
(i) Units with the same rate-demand achieve the same average rate: g_i = g_j ⇒ λ_i = λ_j.
We prove this by contradiction. Suppose there are two units, unit_1 and unit_2, with rate demands g_1 and g_2, respectively, such that g_1 = g_2. Also, suppose one unit achieves a higher average rate than the other, λ_1 > λ_2.
Now, unit_2 does not achieve its rate-demand (since λ_1 > λ_2). Unit_1 may or may not achieve its rate demand. From property (b), the rate-estimation of unit_2 will always tend to a value at least equal to 0.25, since it is an unsatisfied slave. Using property (a), this implies that λ_2 cannot be less than λ_1. This is a contradiction.
(ii) Units with a higher rate-demand achieve an average rate at least equal to that achieved by units with a lower rate-demand: g_i > g_j ⇒ λ_i ≥ λ_j.
This can be proved by contradiction in the same manner as in
part (i).
Now, without loss of generality, let us partition the slave
units into two sets, S1 and S2, in such a way that units in S1
are satisfied, while units in S2 are not.
If the set S2 is empty, then all the units achieve their rate-demand
and the system is fair.
If the set S2 is not empty, then using statements (i) and (ii),
all units share the bandwidth in a fair manner. Moreover,
since S2 contains at least one unit, the total system capacity
is utilized. Hence, it is not possible to increase the rate
of a unit in S2 without decreasing the rate of some other
unit.
4.3.2. Fairness in a scatternet
The proof of fairness for a scatternet follows trivially from
that for a piconet. We make the following two observations:
(1) The gateway visits a piconet only after the estimation
of N for the piconet becomes greater than the threshold (it
calculates N
thresh
while determining the next RP). In other
words, the "virtual master" (gateway) does not poll (visit) its
"virtual slave" (master) till the estimate of N becomes greater
than the threshold. This is similar to the algorithm used by
the master to poll the slaves in which a slave is not polled till
its estimate of N becomes greater than the threshold. Thus,
the gateway divides its presence among its piconets in a fair
manner, i.e., according to the SPF. Note that if the PPF for
a gateway in a piconet is less than its SPF, the master does
not poll the gateway for more than the PPF. Thus, the apparent
rate demand and SPF for the gateway in the piconet are
reduced. This may increase the SPF of the gateway in other
piconets. In this case, the gateway divides its presence according
to the updated SPFs.
(2) While calculating the next RP for a gateway, the master
calculates the num_slots value which estimates the number
of slaves in one polling cycle (starting from the slave after
the gateway in the polling cycle) who will have their values
of N greater than the threshold at the estimated time of their
poll. This achieves fairness between the gateway and the non-gateway
slaves. Also, the master continues to use the same
algorithm for polling non-gateway slaves in a scatternet as
described for a piconet in section 4.1. This maintains fairness
between non-gateway slaves, i.e., the division is done according
to the PPFs (or the updated PPFs).
4.4. Overhead/limitations of the algorithm
The rate calculations will lead to a higher load on the system.
Also, the algorithm does not take into account SCO links. We
believe (and as has been shown in [6]) that ACL links are
capable of carrying voice with small delays. The controlled
channel access in Bluetooth can ensure good support of voice
using ACL links. Also, scheduling in a scatternet where SCO
links are allowed may not be feasible. Since SCO links require
a periodic reservation of two slots every two, four or
six slots, meeting the demands of such a link with a gateway
may be impossible when the gateway is visiting some other
piconet.
Experiments and results
In this section, we present simulation results, which show that
the algorithm satisfies the fairness criteria described earlier.
We start with simple topologies that illustrate the behavior of
the algorithm and then show that it also works well in more
complex topologies. The experiments focus on three topologies that
demonstrate the behavior of the algorithm: (a) a gateway belonging to
two piconets, (b) a gateway belonging to three piconets, and (c) a
piconet having two gateways. The experiments also show the
adaptivity of the algorithm, i.e., how quickly the algorithm
adapts to changing traffic demands of slaves.
In the experiments, we specify the "rate of a slave", which
refers to the sum of the rates at which a slave generates data
for a master (i.e., the rate demand of a slave) and the master
generates data for the slave. Moreover, unless mentioned
otherwise, we assume that the traffic rate from a slave to a
master is equal to that from the master to the slave. Thus, a
slave having a rate of 0.4 means that the slave generates data
at the rate of 0.2 Bluetooth slots per slot and the master also
has a rate demand of 0.2 towards the slave. As we show in the
section on asymmetric traffic, the algorithm works well even
if these two rates are not the same.
The simulation environment used in our experiments is
NS-2 [8]. We have augmented NS-2 with the Bluetooth
model. The simulator models the Bluetooth baseband, LMP
and L2CAP layers and enables the creation of piconets and
scatternets. The model contains most of the standard features
of Bluetooth like Frequency Hopping, Multi-Slot Packets,
Fast ARQ (Automatic Retransmission Query). Note that as
mentioned earlier, in our simulator, the switching loss associated
with the gateway moving from one piconet to another is not taken into
account. This effect can lead to the gateway losing up to 3% of slots
at heavy loads. The experimental results are thus a slight
overestimate.
Figure 5. Example scatternet.
In the experiments, all traffic generated is CBR. Each experiment
is run for a system time of 32 sec. In the experiments
, the term "slave" refers to a non-gateway slave; a gateway
slave is referred to as "gateway". Also, in experiments
where the PPF and the SPF values (and not the updated PPF
and the updated SPF) are shown, the PPF and the updated PPF
are equal and the SPF and the updated SPF are also equal. In
the graphs, "BW" in the index stands for bandwidth, "GW"
stands for gateway.
5.1. Single gateway in two piconets
We first consider the simple topology shown in figure 5,
which consists of two piconets, numbered I and II, connected
by a single gateway. We consider various cases by changing
the traffic and the number of slaves in the piconets.
Experiment 1. Adaptation between gateway and
non-gateway slave traffic
Each piconet has one non-gateway slave that generates very
high traffic, with rate equal to 1, to the master. The gateway
has equal traffic to both masters. We vary the gateway traffic
to show the fair sharing of the piconet bandwidth between the
gateway and the slave. We show the results for one piconet
since the two piconets are exactly symmetric.
Figure 6(a) shows the sharing of bandwidth between the
gateway and slave for different values of gateway traffic. It
also shows the fair share of the slave and the total fraction
of the bandwidth obtained by the gateway and the slave in
the piconet. It can be seen that the slave obtains a bandwidth
equal to its fair share for different values of gateway traffic.
Moreover, the sum of the bandwidths obtained by the slave
and the gateway is nearly equal to 1. The reason this sum is slightly
less than 1 is that some of the piconet capacity is
used in sending LMP_hold_req PDUs of the LMP layer.
In figure 6(b), the comparison of the fraction of the bandwidth
obtained by the gateway to its SPF (PPF and SPF are
equal) is shown. Figure 6(b) shows that the gateway gets almost
equal to its fair share of the bandwidth for all values
of traffic. Again, the reason that the gateway obtains slightly
less than its fair share is because some of the slots are used
for LMP PDUs. This also explains why the gateway obtains
slightly less than the slave in figure 6(a).
Figure 6. (a) Sharing of bandwidth between gateway and slave. (b) Comparison
of fraction of bandwidth obtained to SPF for the gateway.
Experiment 2. Different traffic to piconets
We use the same topology as in the previous case, but each slave has
a traffic rate of 0.3 to the master. The gateway has a fixed traffic
rate of 0.2 to the master of Piconet I and variable traffic to
the other master. The PPF and SPF of the gateway in the first
piconet are, thus, both equal to 0.2. The traffic in Piconet I
does not change and the gateway and the slave get a constant
fraction of 0.2 and 0.3 of the piconet bandwidth, respectively.
Figure 7(a) shows the sharing of bandwidth between the
gateway and slave for different values of gateway traffic,
while figure 7(b) shows the comparison of the fraction of the
bandwidth obtained by the gateway in Piconet II to its SPF
and PPF. From the graphs, we can see that when the gateway
has different traffic to piconets, it divides its presence
among the piconets according to the traffic offered and in a
fair manner (again, the gateway obtains slightly less than its
fair share due to the LMP PDUs). Also, the gateway makes
use of the lower traffic offered by the slave in Piconet II to
obtain a higher share of the bandwidth in Piconet II.
Experiment 3. Different number of slaves
Piconet I has 3 slaves, while the number of slaves in Piconet II
is variable. Each slave generates traffic to the master at the
rate of 0.2. The gateway has a traffic rate of 0.3 to Piconet I
and 0.8 to Piconet II. The PPF and SPF of the gateway in
Piconet I are, thus, 0.2 and 0.3, respectively. In Piconet II, the
value of PPF changes depending upon the number of slaves.
In Piconet I, the slaves get a bandwidth fraction of 0.2
and the gateway gets 0.3.
Figure 7. (a) Sharing of bandwidth between gateway and slave in Piconet II. (b) Comparison of fraction of bandwidth obtained by the gateway to SPF and PPF in Piconet II.
Figure 8(a) shows the sharing of bandwidth between the gateway and
each slave in Piconet II. Figure 8(b) shows the comparison
of the bandwidth obtained by the gateway in Piconet II to the
SPF and PPF. The gateway receives a fraction of the bandwidth
almost equal to its fair share. Also, as the number of
slaves increases, the fraction of the bandwidth received by
the gateway (and each slave) reduces in a fair manner.
Experiment 4. Asymmetric traffic
We now consider a case where the traffic rates from Master
to Slave and Slave to Master are different (asymmetric traffic
). We consider the same topology as in experiment 2 of the
current section, with the non-gateway slaves having the same
rate as in experiment 2. The gateway has a fixed traffic rate of
0.2 to the master of Piconet I and variable traffic to the other
master. The variable traffic is such that traffic from Master
to Slave has a rate of 0.1 and traffic from Slave to Master
varies.
Figure 9 shows the comparison of bandwidth fraction obtained
by the gateway in this experiment versus that obtained
by the gateway in experiment 2 in Piconet II for different values
of gateway traffic (which is the sum of master to gateway
and gateway to master traffic rates). We see that the fraction
is slightly lower than the fraction obtained in experiment 2.
Asymmetric traffic leads to wastage of slots, since an empty
slot is returned in one direction where there is no data to send.
It can be seen, though, that the gateway still behaves in an
approximately fair manner. All other bandwidth fractions for slaves
and the gateway are the same as in experiment 2.
Figure 8. (a) Sharing of bandwidth between gateway and slave in Piconet II. (b) Comparison of fraction of bandwidth obtained by the gateway to SPF and PPF in Piconet II.
Figure 9. Comparison of fraction of bandwidth obtained by gateway in this experiment with that in experiment 2 in Piconet II.
5.2. Single gateway shared between three piconets
We now consider a topology, where a gateway is shared between
3 piconets, numbered I, II and III. Piconet I has 5, Piconet
II has 1 and Piconet III has 4 slaves. Each slave has
a traffic rate of 0.2. The gateway has a traffic rate of 0.2 to
Piconet I, 0.3 to Piconet III and a variable rate to Piconet II.
All traffic is symmetric (same from master to slave and from
slave to master).
Figure 10. Bandwidth fraction received by gateway in the three piconets.
Figure 11. Example scatternet topology.
Figure 10 shows the fraction of bandwidth obtained by the
gateway in each piconet with increasing gateway traffic rate to
Piconet II. It also shows the PPF and the Updated SPF of the
gateway in Piconet II. We do not show the fair shares of the
gateway in Piconet I and III since they are constant (0.16 and
0.2, respectively). It can be seen that the gateway manages
to get close to its fair share in the 3 piconets. The slaves in
Piconet I get a bandwidth fraction of 0.16 and the slaves in
Piconet II and III get a bandwidth fraction of 0.2 (all these are
equal to their fair shares).
5.3. Piconet with two gateways
We now show the working of the algorithm in a piconet having
2 gateways, as shown in figure 11. Piconets I, II and III
have 6, 2 and 4 non-gateway slaves, respectively. There are
two gateways, GW 1 between Piconets I and II; and GW 2 between
Piconets II and III. All slaves have a traffic rate of 0.2.
GW 1 has a traffic rate of 0.2 in Piconet I and 0.5 in Piconet II.
GW 2 has a traffic rate of 0.2 in Piconet III. We vary the traffic
rate of GW 2 in Piconet II and show the fair sharing of
bandwidth.
Figure 12 shows the fraction of bandwidth obtained by
GW 1 and GW 2 in Piconet II compared to their fair shares.
The x-axis denotes GW 2 traffic in Piconet II. It can be seen
that the bandwidth fractions obtained are very close to the
fair value. The non-gateway slaves of Piconet II receive a
bandwidth fraction of 0.2, which is equal to their fair share
(not shown in the figure). The bandwidth fraction received by
slaves in Piconets I and III does not change for different values
of GW2 traffic in Piconet II. The fair share of each slave
(including the gateway) in Piconet I is 0.14 and in Piconet III is
0.2; the bandwidth fraction received by each slave is very close to
these fair shares.
Figure 12. Fraction of bandwidth and fair share of GW1 and GW2 in Piconet II.
Figure 13. Actual rate estimation of the gateway and its ideal value.
5.4. Adaptivity to changing traffic demands
We now show how quickly the algorithm is able to adapt to
changing traffic. We again consider the scenario of experiment
1 of section 5.1, consisting of two piconets, each having
a non-gateway slave, connected by a single gateway. The
non-gateway slaves have a traffic rate of 1; the gateway has
equal traffic to both the masters. We vary the traffic rate of
the gateway as time progresses: for the first 2.5 seconds, the
gateway's rate is 0.1, for the next 2.5 seconds, it is 0.5 and for
the remaining time, it is 0.3.
Figure 13 shows the actual rate estimation of the gateway
(and its ideal value) versus time. It can be seen that the rate
estimation adapts very quickly to the new rate. For example,
when the rate changes from 0.1 to 0.5, the rate estimation
reaches a value of 0.45 in about half a second after 2.5 sec.
Thus, the algorithm adapts to quickly changing traffic.
Conclusions
This paper proposed a distributed scatternet-scheduling algorithm
that adapts to non-uniform and changing traffic. This
algorithm provides an integrated solution for both intra- and
inter-piconet scheduling and can be implemented using the
HOLD mode of Bluetooth. Through analysis and simulations,
we showed that the algorithm is traffic-adaptive and results in
a fair allocation of bandwidth to units. We explained earlier
that the algorithm may allow a unit to go into a power-saving
mode.
In the future, we would like to explore this option, which is
particularly important since Bluetooth devices will most likely
operate in a power-constrained environment. As future work,
we would also like to evaluate the performance of TCP and
other kinds of traffic on our algorithm. We are also working
towards interfacing the algorithm with requirements of higher
layers. In this respect, we are working towards providing QoS
support using the algorithm.
References
[1] S. Baatz, M. Frank, C. Kehl, P. Martini and C. Scholz, Adaptive scatternet
support for Bluetooth using sniff mode, in: Proc. of IEEE LCN
(2001).
[2] A. Das, A. Ghose, A. Razdan, H. Saran and R. Shorey, Enhancing performance
of asynchronous data traffic over the Bluetooth wireless ad-hoc
network, in: Proc. of IEEE INFOCOM'2001 (2001).
[3] J. Haartsen, BLUETOOTH – the universal radio interface for ad hoc
wireless connectivity, Ericsson Review 3 (1998) 110–117.
[4] P. Johansson, M. Kazantzidis, R. Kapoor and M. Gerla, Bluetooth an
enabler for personal area networking, IEEE Network Magazine, Wireless
Personal Area Network (September 2001).
[5] M. Kalia, D. Bansal and R. Shorey, MAC scheduling and SAR policies
for Bluetooth: A master driven TDD pico-cellular wireless system, in:
Proc. of 6th IEEE International Workshop on Mobile Multimedia Communications
(MOMUC) (1999).
[6] R. Kapoor, L. Chen, Y. Lee and M. Gerla, Bluetooth: carrying voice
over ACL links, in: Proc. of MWCN (2002).
[7] A. Mayer, Y. Ofek and M. Yung, Approximating max-min fair rates via
distributed local scheduling with partial information, in: Proc. of IEEE
INFOCOM (1996).
[8] NS-2 simulator, http://www.isi.edu/nsnam/ns/
[9] A. Racz, G. Miklos, F. Kubinszky and A. Valko, A pseudo-random
coordinated scheduling algorithm for Bluetooth scatternets, in: Proc.
of MobiHoc (2001).
[10] Specifications of the Bluetooth System core, Vol. 1, v. 1.1, www.
Bluetooth.com
[11] W. Zhang and G. Cao, A flexible scatternet-wide scheduling algorithm
for Bluetooth networks, in: Proc. of IEEE IPCCC (2002).
Rohit Kapoor received his Bachelor degree in computer
science in 1999 from the University of Roorkee,
India. He is currently a Ph.D. candidate at the
University of California, Los Angeles (UCLA). His
research focuses on Bluetooth-based personal area
networks. He is a member of the Network Research
Lab at UCLA.
E-mail: [email protected]
Andrea Zanella received the Ph.D. degree in telecommunication
engineering from the University of
Padova, Italy, in 2002.
Prior to that he received
the Dr. Ing. degree (comparable to Master degree)
in computer engineering in 1998, still from the University
of Padova. He spent nine months, in 2001, as
post-doc researcher at the Department of Computer
Science of the University of California, Los Angeles
(UCLA), where he was engaged in research on
Wireless Networks and Wireless Access to Internet
under the supervision of Prof. Mario Gerla. Currently, he is a research fellow
in the Department of Information Engineering of the University of Padova,
Italy. His research interests are mainly focused on topics related to wireless
and mobile networking. In particular, in the last period, he has been working
on the performance aspects of wireless personal area networks based on the
Bluetooth standard.
E-mail: [email protected]
Mario Gerla is a professor in the Computer Science
Department at UCLA. He received his graduate degree
in engineering from the Politecnico di Milano
in 1966, and his M.S. and Ph.D. degrees in engineering
from UCLA in 1970 and 1973, respectively.
He joined the faculty of the UCLA Computer Science
Department in 1977. His current research is
in the area of analysis, design and control of communication
networks. Ongoing projects include the
design and evaluation of QoS routing and multicast
algorithms for IP domains, the design and evaluation of all-optical network
topologies and access protocols, the design of wireless mobile, multimedia
networks for mobile computing applications, and the development of measurement
methods and tools for evaluating the performance of high-speed
networks and applications.
E-mail: [email protected] | scheduling scheme;Round Robin;Distributed algorithm;scheduling;traffic adaptive;Scatternet presence fraction;Fairness;traffic rate;scatternets;Scatternet;Bluetooth;Rendezvous Points;Scheduling algorithm;Information exchange;heuristic;Gateway;Scheduling of gateways;Slaves;Non-uniform traffic;changing traffic;Efficiency;fair share;virtual slave;Rendezvous Point;Blueooth;Piconet presence fraction;HOLD mode;Slave unit;polling algorithm;Gateway slave traffic;Scatternets;Master unit;Allocation of bandwidth;Rendezvous point;fairness;Piconet;piconet;slave;Traffic Dependent Scheduling;Time-division multiplex;scatternet;Fair share;Bluetooth technology;Non-gateway slave traffic;bandwidth utilization;allocation of bandwidth;gateway;master;Round Robin polling |
70 | DirectoryRank: Ordering Pages in Web Directories | ABSTRACT Web Directories are repositories of Web pages organized in a hierarchy of topics and sub-topics. In this paper, we present DirectoryRank , a ranking framework that orders the pages within a given topic according to how informative they are about the topic. Our method works in three steps: first, it processes Web pages within a topic in order to extract structures that are called lexical chains, which are then used for measuring how informative a page is for a particular topic. Then, it measures the relative semantic similarity of the pages within a topic. Finally, the two metrics are combined for ranking all the pages within a topic before presenting them to the users. | INTRODUCTION
A Web Directory is a repository of Web pages that are organized in
a topic hierarchy. Typically, Directory users locate the information
sought simply by browsing through the topic hierarchy, identifying
the relevant topics and finally examining the pages listed under the
relevant topics. Given the current size and the high growth rate of
the Web [10], a comprehensive Web Directory may contain thousands
of pages within a particular category. In such a case, it might
be impossible for a user to look through all the relevant pages
within a particular topic in order to identify the ones that best represent
the current topic. Practically, it would be more time-efficient
for a user to view the Web pages in order of importance for a particular
topic, rather than go through a large list of pages.
One way to alleviate this problem is to use a ranking function which
will order the pages according to how "informative" they are of the
topic that they belong to. Currently, the Open Directory Project [3]
lists the pages within a category alphabetically, while the Google
Directory [1] orders the pages within a category according to their
PageRank [11] value on the Web. While these rankings can work
well in some cases, they do not directly capture the closeness of the
pages to the topic that they belong to.
In this paper, we present DirectoryRank, a new ranking framework
that we have developed in order to alleviate the problem of ranking
the pages within a topic based on how "informative" these pages
are to the topic. DirectoryRank is based on the intuition that the
quality (or informativeness) of a Web page with respect to a particular
topic is determined by the amount of information that the
page communicates about the given topic, relative to the other
pages that are categorized in the same topic. Our method takes as
input a collection of Web pages that we would like to rank along
with a Web Directory's topic hierarchy that we would like to use.
At a high level, our method proceeds as follows: first, we identify
the most important words inside every page and we link them together
, creating "lexical chains". We then use the topic hierarchy
and the pages' lexical chains to compute the "relatedness" (or importance
) of the pages to each of their corresponding topics. Having
determined the pages' topic importance, we measure the relative
semantic similarity among the pages that relate to the same topic.
The semantic similarity indicates the amount of content that important
pages in some topic share with each other. Finally, we employ
our DirectoryRank algorithm that uses the topic importance scores
in conjunction with the semantic similarities of the pages in order to
compute the ranking order of the pages within a Directory topic.
In order to study the effectiveness of DirectoryRank in identifying
the most informative pages within a particular topic, we applied our
method to the ranking of 318,296 Web pages listed in 156 topics in
the Google Directory. We have compared the rankings induced by
DirectoryRank to the rankings induced by PageRank for the pages
listed in those 156 topics. Our comparison reveals that the two
rankings have different merits and thus they are useful in different
tasks. To delve into the two rankings' effectiveness and investigate
which is more useful for ordering pages in Directories' topics, we
conducted a user study, where we asked a group of individuals to
compare the rankings delivered by PageRank to the rankings delivered
by DirectoryRank, and indicate which of the two is deemed as
more useful. Our results show that, in most cases, the users perceived
DirectoryRank to be more topic-informative than PageRank.
The rest of the paper is organized as follows: We start our discussion
in Section 2 with a brief introduction to PageRank, which is
currently employed by the Google Directory in order to rank pages.
In Section 3, we briefly present the topic hierarchy that we use in
our study as well as the process we follow for representing Web
pages into lexical chains. We also show how we explore the topic
hierarchy and the pages' lexical chains for measuring the pages'
topic-importance and semantic similarities values. Finally, we present
how our DirectoryRank metric employs the above values for
measuring how informative Web pages are with respect to some
topics and rank them accordingly. In Section 4, we experimentally
study the effectiveness of DirectoryRank, by comparing its performance
to PageRank. We revise related work in Section 5 and we
conclude the paper in Section 6.
OVERVIEW OF PAGERANK
In this section, we briefly explain the main intuition of PageRank, a
metric that was primarily invented for ranking pages within the
Google Search Engine and that is currently used within the Google
Directory for ordering Web pages. For a more elaborate overview
on PageRank, we refer the reader to the work of [11]. The intuition
of PageRank metric is that a page on the Web is important if there
are a lot of other important pages pointing to it. That is, if a page p
has many links from other important pages, we may conclude that
this page is interesting to many people and that it should be considered
as being "important" or of "good" quality. Similarly, if an
important page has links to other pages, we expect that part of its
quality is transferred to the pages it points to, which in turn become
of increased significance/quality. Roughly, PageRank PR(p) defines
the importance of page p to be the sum of the importance of the
pages that endorse p. At a high level, PageRank is calculating the
probability that a "random surfer" is looking at a given page at a
given point of time. The "random surfer" is a mathematical model
that emulates the behavior of a user that, given a page, follows an
outgoing link from that page at random. Formally, given a page p_i
that has incoming links from the pages p_1, ..., p_n, and letting c_j
be the number of out-links from p_j, the PageRank of p_i is given by:

    PR(p_i) = d + (1 − d) · (PR(p_1)/c_1 + ... + PR(p_n)/c_n),
where d corresponds to the probability that the random surfer will
get bored and his next visit will be a completely random page, and
1-d corresponds to the probability that the page the random surfer
will pick for his next visit is an outgoing link of the current page.
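As a concrete illustration of this recurrence, the following sketch iterates it on a tiny made-up link graph; the graph, the value d = 0.15, the iteration count, and all variable names are illustrative assumptions rather than anything prescribed by the paper, with d used, as in the text above, as the probability of jumping to a random page.

# Fixed-point iteration of the PageRank recurrence on a toy link graph.

links = {            # page -> pages it links to
    "a": ["b", "c"],
    "b": ["c"],
    "c": ["a"],
}
d = 0.15             # teleport probability, in the sense defined above
pr = {p: 1.0 for p in links}          # initial scores

for _ in range(50):                   # a fixed number of iterations suffices for this toy graph
    new = {}
    for p in links:
        in_sum = sum(pr[q] / len(links[q]) for q in links if p in links[q])
        new[p] = d + (1 - d) * in_sum
    pr = new

print({p: round(v, 3) for p, v in pr.items()})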
DIRECTORY RANK
The ranking of a Web page within a particular topic intuitively
depends on two criteria: (i) the importance of the page for the underlying
topic. This criterion helps us identify the most important
pages out of the several ones that may lie within a particular topic.
(ii) the semantic correlation of a page relative to other important
pages in the same topic. This criterion helps us rank pages relative
to each other within a topic. For measuring the importance of a
Web page in some topic, we explore a subject hierarchy that we
have built in the course of an earlier study [13] and we use the lexical
chaining technique for identifying the most important words
inside the page.
We start our discussion with a presentation of the topic hierarchy
that we use in our work (Section 3.1) and we describe the process
we follow for representing Web pages into lexical chains (Section
3.2). We also explain how we utilize the topic hierarchy and the
pages' lexical chains for measuring the pages' importance to the
hierarchy's topics (Section 3.2.1). The contribution of our work lies
in the exploitation of the topic hierarchy and the lexical chains that
we generate for representing Web pages in order to compute the
semantic similarities between the pages that are important in some
topics. Moreover, we have developed a novel framework, which
employs the pages' topic importance and semantic similarity measures
for ranking pages inside Directory topics.
3.1 The Topic Hierarchy
The main intuition in our DirectoryRank metric is that topic relevance
estimation of a Web page relies on the page's lexical coherence
, i.e. having a substantial portion of words associated with the
same topic. To capture this property, we adopt the lexical chaining
approach: for every Web page we generate a sequence of semantically
related terms, known as lexical chain. In our approach of representing
Web pages into lexical chains, we adopt the method reported
in [6], which uses WordNet [5] as the knowledge base for
providing information about the semantic relations that exist between
the words in a text. A detailed description of the lexical
chains' generation process is given in Section 3.2. Before that, we
present the topic hierarchy that we use for determining the topics
that are associated with the contents (i.e. words) of Web pages.
Since we are mainly interested in measuring the Web pages' importance
in the context of Web Directories, we decided to demonstrate
the usefulness of our DirectoryRank metric in ordering Web pages
in the topics currently used in a real Web Directory. To that end, we
applied DirectoryRank to the main topics used in the Google Directory
. Google Directory provides a hierarchical listing of Web pages
categorized by topic and reuses the data maintained by the Open
Directory Project. Moreover, since DirectoryRank relies on the
Web pages' lexical chains rather than their entire contents for
measuring the pages' importance to particular topics and since lexical
chain generation is dependent on WordNet, we decided to enrich
the top level (main) topics of the Google Directory with their
respective WordNet lexical hierarchies.
The first step we took for enriching the Google topics with WordNet
data was to examine the compatibility between these topics and
the topics used to annotate WordNet's concepts with domain information
. Note that the topic information that exists in the labels of
WordNet's contents is taken from the freely available Suggested
Upper Merged Ontology (SUMO) [4] and the MultiWordNet Domains
(MWND) [2]. Due to space limitations, here we present a
summary of our approach into enriching the Google topics with
WordNet hierarchies. A detailed description of the process we followed
for appending to the Google top level topics their corresponding
WordNet hierarchies is given in [13]. In brief, we located
the Google's top level topics among the topics used in either
SUMO or MWND for annotating WordNet concepts. Out of the 17
Google topics, 13 topics (shown in Table 1) are used for labeling
WordNet concepts with topic information. To each of those 13
topics, we integrated their corresponding sub-topics that we acquired
from either SUMO or MWND. The sub-topic integration
was performed automatically, simply by following WordNet's hyper/hyponymy
links. At the end of this process, we came down to a
hierarchy of 489 sub-topics, which are organized into the 13 top
level topics that we used from Google Directory.
Table 1. The Hierarchy's First Level Topics
Arts, News, Sports, Society, Games, Computers, Home, Reference, Shopping, Recreation, Business, Science, Health
In Section 3.4, we will demonstrate how to use our topic hierarchy
for automating the task of ranking pages within topical categories.
3.2 Measuring Web Pages' Topic Importance
The computational model that we adopted for generating lexical
chains is presented in the work of Barzilay [6] and it generates lexical
chains in a three step approach: (i) it selects a set of candidate
terms^1 from a page, (ii) for each candidate term, it finds an appropriate
chain relying on a relatedness criterion among members of
the chains, and (iii) if such a chain is found, it inserts the term in the
chain. The relatedness factor in the second step is determined by the
type of WordNet links that connect the candidate term to the terms
stored in existing lexical chains. Figure 1 illustrates an example of
the lexical chain generated for a text containing the candidate terms:
system, network, sensor, weapon, missile, surface and net. The
subscript si denotes the id of the word's sense within WordNet.
Lexical chain: system_s6, network_s4, system_s6, sensor_s1, system_s6, weapon_s2, missile_s1, system_s6, surface_s1, net_s2
Figure 1. An example of a lexical chain.
Having generated lexical chains, we disambiguate the sense of the
words inside every chain by employing the scoring function f introduced
in [12], which indicates the probability that a word relation is
a correct one.
Given two words, w_1 and w_2, their scoring function f via a relation
r depends on the words' association score, their depth in WordNet and
their respective relation weight. The association score (Assoc) of
the word pair (w_1, w_2) is determined by the words' co-occurrence
frequency in a corpus that has been previously collected. In
practice, the greater the association score between a word pair w_1
and w_2 is, the greater the likelihood that w_1 and w_2 refer to the
same topic. Formally, the Assoc score of the word pair (w_1, w_2) is
given by:

    Assoc(w_1, w_2) = log(p(w_1, w_2) + 1) / (N_s(w_1) · N_s(w_2)),
where p(w_1, w_2) is the corpus co-occurrence probability of the word
pair (w_1, w_2) and N_s(w) is a normalization factor, which indicates
the number of WordNet senses that the word w has. Given a word pair
(w_1, w_2), their DepthScore expresses the words' position in the
WordNet hierarchy and is defined as:

    DepthScore(w_1, w_2) = Depth(w_1)^2 · Depth(w_2)^2,
where Depth (w) is the depth of word w in WordNet. Semantic
relation weights (RelationWeight) have been experimentally fixed
to 1 for reiteration, 0.2 for synonymy and hyper/hyponymy, 0.3 for
antonymy, 0.4 for mero/holonymy and 0.005 for siblings. The scoring
function f of w_1 and w_2 is defined as:

    f_s(w_1, w_2, r) = Assoc(w_1, w_2) · DepthScore(w_1, w_2) · RelationWeight(r).
The value of the function f represents the probability that the relation
type r is the correct one between words w_1 and w_2. In order to
disambiguate the senses of the words within a lexical chain C_i, we
calculate its score by summing up the f_s scores of all the word
pairs (w_{j1}, w_{j2}) (where w_{j1} and w_{j2} are successive words)
within the chain C_i. Formally, the score of lexical chain C_i is
expressed as the sum of the score of each relation r_j in C_i:

    Score(C_i) = Σ_{r_j in C_i} f_s(w_{j1}, w_{j2}, r_j).
^1 As candidate terms, we use nouns and proper names because they convey the vast majority of conceptual information in texts.
Eventually, in order to disambiguate, we pick the relations and
senses that maximize the Score(C_i) for that particular chain. In
estimating the importance of a Web page p_i in some Directory's topic
T_k, our first step is to identify which node within the hierarchy
(see Section 3.1) corresponds to topic T_k of the page.
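To make these quantities concrete, the sketch below evaluates f_s and a chain score on made-up inputs; only the relation weights come from the text above, while the co-occurrence probabilities, sense counts and depths are placeholders.

# Toy computation of the chain-scoring formulas above. All numeric inputs
# except the relation weights are made-up placeholders.
import math

RELATION_WEIGHT = {"reiteration": 1.0, "synonymy": 0.2, "hyper/hyponymy": 0.2,
                   "antonymy": 0.3, "mero/holonymy": 0.4, "sibling": 0.005}

def assoc(p_w1w2, senses_w1, senses_w2):
    return math.log(p_w1w2 + 1) / (senses_w1 * senses_w2)

def depth_score(depth_w1, depth_w2):
    return depth_w1 ** 2 * depth_w2 ** 2

def f_s(p_w1w2, senses_w1, senses_w2, depth_w1, depth_w2, relation):
    return (assoc(p_w1w2, senses_w1, senses_w2)
            * depth_score(depth_w1, depth_w2)
            * RELATION_WEIGHT[relation])

# Chain score = sum of f_s over the relations linking successive chain words.
relations = [
    # (p(w1,w2), N_s(w1), N_s(w2), Depth(w1), Depth(w2), relation type)
    (0.002, 6, 4, 7, 8, "hyper/hyponymy"),   # e.g. system -- network
    (0.001, 6, 1, 7, 9, "mero/holonymy"),    # e.g. system -- sensor
]
score = sum(f_s(*r) for r in relations)
print(round(score, 4))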
3.2.1 Topic-Importance Scoring
Once the topic of a page is located among the hierarchy's topics, we
map the words in the page's lexical chain to the WordNet nodes
under that particular topic. Recall that upon lexical chain generation
, words are disambiguated ensuring that every word inside a
page is mapped to a single word within the WordNet hierarchy. We
then determine the importance of a page p_i to topic T_k by counting
the number of words in the lexical chain of p_i that are subsumed by
T_k in the hierarchy's graph. The topic importance of a page is given
by a Relatedness Score (RScore), which indicates how relevant a page
is for a given topic. Formally, the relatedness score of a page p_i
(represented by the lexical chain C_i) to the hierarchy's topic T_k
is defined as the product of the page's chain Score(C_i) and the
fraction of words in the page's chain that are descendants of T_k.
Formally, the RScore is given by:

    RScore(C_i, T_k) = Score(C_i) · (# elements common to C_i and T_k) / (# elements of C_i).
The denominator is used to remove any effect the length of a lexical
chain might have on
RScore and ensures that the final score is normalized
so that all values are between 0 and 1, with 0 corresponding to no
relatedness at all and 1 indicating that the page is highly
expressive of its topic. The
RScore of a page to a specific
topic captures the importance of the page in the given topic.
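A direct transcription of this definition on placeholder data is sketched below; the chain, its score, and the set of concepts assumed to lie under the topic are all invented for illustration.

# Sketch of the RScore computation; chain contents and the concept set
# under the topic T_k are placeholder values.

def rscore(chain_elements, chain_score, topic_descendants):
    common = [w for w in chain_elements if w in topic_descendants]
    return chain_score * len(common) / len(chain_elements)

chain = ["system", "network", "sensor", "weapon", "missile", "surface", "net"]
under_topic = {"system", "network", "net", "sensor"}   # assumed concepts under T_k
print(round(rscore(chain, chain_score=0.32, topic_descendants=under_topic), 3))
# 4 of the 7 chain words fall under the topic -> RScore = 0.32 * 4/7 ~ 0.183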
3.3 Semantic Similarity Scoring
The relatedness score metric that we have just presented can serve
as a good indicator for identifying the most important pages within
a topic. However, the
RScore metric does not capture the amount of
common content that is shared between the Web pages in a topic.
This is important in the cases where our topic-importance scoring
gives a low score for some pages but, at the same time, these pages
are very similar to other pages with high topic-importance scores.
In order to accommodate for this scenario, we now show how to
compute the semantic similarities among the pages that are listed in
the same Directory topic. Semantic similarity is indicative of the
pages' semantic correlation and helps in determining the ordering
of the pages that are deemed important in some topic. Our DirectoryRank
metric employs the Web page's topic-importance scores
and their semantic similarities to determine their ranking order inside
some Directory topics and is presented in the next section.
In order to estimate the Web pages' semantic similarity, we compare
the elements in a page's lexical chain to the elements in the
lexical chains of the other pages in a Directory topic. We assume
that if the chains of two Web pages have a large number of elements
in common, then the pages are correlated to each other. To
compute similarities between pages p_i and p_j that are categorized
in the same topic, we first need to identify the common elements
between their lexical chains, represented as PC_i and PC_j,
respectively. First, we use WordNet to augment the elements of the
lexical chains PC_i and PC_j with their synonyms. Chain augmentation
ensures that pages of comparable content are not regarded as
unrelated if their lexical chains contain distinct, but semantically
equivalent, elements. The augmented elements of PC_i and PC_j are
defined as:
    AugElements(PC_i) = C_i ∪ Synonyms(C_i)
    AugElements(PC_j) = C_j ∪ Synonyms(C_j)

where Synonyms(C_i) denotes the set of the hierarchy's concepts that
are synonyms of any of the elements in C_i, and Synonyms(C_j) denotes
the set of the hierarchy's concepts that are synonyms of any of the
elements in C_j. The common elements between the augmented lexical
chains PC_i and PC_j are determined as:

    ComElements(PC_i, PC_j) = AugElements(PC_i) ∩ AugElements(PC_j).
We formally define the problem of computing pages' semantic
similarities as follows: if the lexical chains of pages p_i and p_j
share elements in common, we produce a correlation look-up table with
tuples of the form <AugElements(PC_i), AugElements(PC_j),
ComElements>. The similarity measurement between the lexical chains
PC_i, PC_j of the pages p_i and p_j is given by:

    σ_s(PC_i, PC_j) = 2 · |ComElements(PC_i, PC_j)| / (|AugElements(PC_i)| + |AugElements(PC_j)|),

where the degree of semantic similarity is normalized so that all
values are between zero and one, with 0 indicating that the two pages
are totally different and 1 indicating that the two pages talk about
the same thing.
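The following sketch evaluates this similarity for two toy chains; the chains and the one-entry synonym table are invented, and a real system would draw synonyms from WordNet as described above.

# Sketch of the chain-based semantic similarity defined above.
# The synonym lookup is stubbed out; real code would query WordNet.

SYNONYMS = {"net": {"network"}}          # toy synonym table

def aug_elements(chain):
    out = set(chain)
    for w in chain:
        out |= SYNONYMS.get(w, set())
    return out

def similarity(chain_i, chain_j):
    a, b = aug_elements(chain_i), aug_elements(chain_j)
    common = a & b
    return 2 * len(common) / (len(a) + len(b))

print(similarity(["net", "sensor", "system"], ["network", "system", "protocol"]))
# augmented chains share {"network", "system"} -> 2*2 / (4+3) ~ 0.57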
3.4 DirectoryRank Scoring
Pages are sorted in Directory topics on the basis of a DirectoryRank
metric, which defines the importance of the pages with respect to
the particular topics in the Directory. DirectoryRank (
DR) measures
the quality of a page in some topic by the degree to which the page
correlates to other informative/qualitative pages in the given topic.
Intuitively, an informative page in a topic, is a page that has a high
relatedness score to the Directory's topic and that is semantically
close (similar) to many other pages in that topic.
DR defines the
quality of a page to be the sum of its topic relatedness score and its
overall similarity to the fraction of pages with which it correlates in
the given topic. This way, if a page is highly related to topic
D and
also correlates highly with many informative pages in
D, its DR
score will be high. Formally, consider that page
p
i
is indexed in
Directory topic
T
k
with some
RScore (p
i
, T
k
) and let
p
1
,
p
2
, ...,
p
n
be
pages in
T
k
with which
p
i
semantically correlates with scores of
s
(PC
1
, PC
i
),
s
(PC
2
, PC
i
),...,
s
(PC
n
, PC
i
), respectively. Then, the
DirectoryRank (
DR) of p
i
is given by:
2
1
(
)
( ,
)
[
(
,
)
(
,
)
......
(
,
)]
,
i
k
i
k
s
i
s
i
s
n
DR p T
RScore p T
PC PC
PC PC
PC PC
n
i
=
+
+
+
+
/
where n corresponds to the total number of pages in topic T
k
with
which
p
i
semantically correlates.
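Combining the two scores, a sketch of the DR computation on placeholder values is shown below; the score values are invented for illustration.

# Sketch of the DirectoryRank combination of topic importance and
# semantic similarity. All scores below are placeholder values.

def directory_rank(rscore, similarities):
    """DR = RScore + mean of the page's similarities to correlated pages."""
    if not similarities:
        return rscore
    return rscore + sum(similarities) / len(similarities)

# Page with RScore 0.18 that correlates with three other pages in its topic:
print(round(directory_rank(0.18, [0.57, 0.44, 0.61]), 3))   # -> 0.72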
EXPERIMENTAL SETUP
To measure the potential of our DirectoryRank metric in delivering
topic-informative rankings, we conducted an experiment, where we
studied the effectiveness of
DR in prioritizing the most informative
pages in some Directory's topics. To obtain perceptible evidence of
DirectoryRank's efficiency in a practical setting, we applied our
DR
metric to a set of Web pages listed in a number of topics in Google
Directory and we compared the rankings induced by DirectoryRank
to the rankings that Google Directory delivers for the same set of
pages and topics. In Section 4.1 we explain how we selected the
pages for our study, while in Section 4.2 we present the similarity
measure that we used for comparing the rankings induced by DirectoryRank
to the rankings delivered by PageRank, and we give obtained
results. Moreover, to delve into the behavior of DirectoryRank
we carried out a user study, presented in Section 4.3.
4.1 Experimental Dataset
In selecting our experimental data, we picked pages that are categorized
in those topics in Google Directory, which are also present in
our hierarchy. Recall that Google Directory is a replica of the Dmoz
Directory, from which we borrowed our hierarchy's 13 top-level
topics. Out of all the sub-topics organized in those 13 top-level
topics in Google Directory, 156 were represented in our hierarchy.
Having determined the topics, whose set of ranked pages would be
compared, we downloaded a total number of 318,296 pages, categorized
in one of the 156 selected topics, which in turn are organized
into the 13 top-level topics. Table 2 shows the distribution of
the experimental pages in the top level topics in Google Directory.
Table 2. Statistics on the experimental data
Category        # of documents   # of sub-topics
Arts            28,342           18
Sports          20,662           26
Games           11,062           6
Home            6,262            7
Shopping        52,342           15
Business        60,982           7
Health          23,222           7
News            9,462            4
Society         28,662           14
Computers       35,382           13
Reference       13,712           10
Recreation      8,182            20
Science         20,022           9
Total           318,296          156
Since we were interested in comparing DirectoryRank with PageRank
, in the context of ranking Web pages in Directory topics, we
recorded for the downloaded Web pages their relative ranking order
in Google Directory in each of the 156 selected topics. We then
stored the downloaded pages in a secondary index, maintaining
their relative PageRank rankings. To compute the
DR values for
every experimental page, we initially processed the downloaded
pages in order to generate and score their lexical chains. For every
page, we first computed its
RScore to the topic in which it is assigned
in Google Directory, and then we computed the semantic
similarity (
s
) for every pair of pages listed in each topic. Lastly,
using the above two scores (i.e. semantic similarity and topic relatedness
), we computed for every Web page its DirectoryRank (
DR)
value and we sorted the pages listed within each of the topics, so
that pages with higher
DR scores in some topic are prioritized
among the set of topic related pages. Using the above data, we
evaluated the effectiveness of our DirectoryRank metric in ordering
Web pages inside the Directory's topics.
4.2 Overlap of DirectoryRank and PageRank
To investigate whether there is any similarity between the rankings
induced by DirectoryRank and the rankings delivered by PageRank
for our experimental pages in the 156 topics in Google Directory,
we used the
OSim measure, reported in the work of [9], which indicates
the degree of overlap between the top
n URLs of the two
rankings. Formally, the overlap of two ranked lists A and B (each of
size
n) is given by:
    OSim(DR, PR) = |A ∩ B| / n.
Using the above formula, we computed for each of the 156 topics
the overlap between the pages ranked in the top n=10 positions for
that topic by
DR and PR, respectively. Afterwards, we first computed the average
similarity between the two induced rankings for
each of the 156 selected topics, and then the average similarity between
the two induced rankings for each of the 13 top-level topics.
To compute the average similarity between
DR and PR for a top
level topic
T, we summed the average similarity of all sub-topics in
T and we divided by the number of sub-topics that T has. Table 3
gives the average similarity scores between
DR and PR for each of
the 13 top-level topics examined in our experiment.
Table 3. Average similarity of rankings for the top level topics
Category        OSim
Arts 0.038
Sports 0.019
Games 0.030
Home 0.057
Shopping 0.013
Business 0.028
Health 0.057
News 0.100
Society 0.043
Computers 0.046
Reference 0.020
Recreation 0.025
Science 0.044
Obtained results demonstrate that there is little average overlap
between the top 10 results for the two rankings. Note that for some
topics we compared the overlap between
DR and PR for a larger set
of pages (e.g.
n=20 and n=30) and we found that the OSim score of
the two rankings increases, albeit slightly, as the size of
n grows.
For example in the topic Sports, the
OSim between DR and PR for
n=10 is 0.019, whereas for n=20 the OSim score is 0.023 and for
n=30, OSim is 0.028. Our results show that even the pairs with the
greatest similarity among all pairs examined (e.g. the rankings delivered
for the News topic), according to the
OSim measure, have
little in common. Despite the usefulness of the
OSim measure for
making rough estimations about the ability of the two ranking
schemes in identifying the same top pages with respect to some
topics, it cannot directly capture which ranking is more useful for
ordering pages in Directory topics. This is because
OSim does not
indicate the degree to which the relative orderings of the top
n
pages of two rankings are in agreement. Having established that
PageRank and DirectoryRank order Web pages substantially differ-ently
, we proceed to investigate which of these rankings is better for
ordering Web pages in Directory topics. To that end, we carried out
a user study, reported next.
4.3 DirectoryRank Performance
To determine which of the two ranking measures, namely
DR and
PR, is perceived as more useful by Web users for organizing pages
in Web Directories, we carried out a user study. From our sample
data, we picked the top 10 pages listed in 7 randomly selected topics
(out of the 156 topics examined) and we recruited 15 postgraduate
volunteers from our school. Table 4 lists the 7 topics selected.
For each topic, the volunteer was shown 2 result rankings; one consisted
of the top 10 pages for the topic ranked with
DR, and the
other consisted of the top 10 pages for the topic ranked with
PR.
For each topic, the volunteer was asked to read the pages in both
lists and indicate which of the two rankings, in their opinion, is
more "useful" overall for communicating information about the
topic. Volunteers were not told anything about how either of the
rankings was generated. In order to avoid misinterpretations while
analyzing the user's selection preferences, we asked from the users
to indicate their descriptive selections directly. More specifically,
we presented to our participants the following choices and we asked
them to indicate for which of the following reasons they selected
one ranking over the other for each of the topics examined.
Table 4. Experimental Topics
T1  Crime
T2  Photography
T3  Water Sports
T4  Radiology
T5  Mechanics
T6  Econometrics
T7  Collecting
Reason R1. "I prefer this ranking because I obtained significant
information about the topic from most of the pages". In our analysis
, we interpret the ranking preferences established on this reason
as "topic-informative" rankings.
Reason R2: "I prefer this ranking because I have seen most of the
pages before and I liked them". We interpret the ranking preferences
established on this reason as "popular" rankings.
We then compared the participants' descriptive selections for every
topic with the final
DR/ PR choices. This way we ensure that users'
preferences would be accurately evaluated even if two volunteers
had exactly the same descriptive selection, but they ended up casting
that selection into different
DR, PR rankings. As a final note,
we also asked our volunteers to indicate their familiarity with the
experimental topics, by characterizing as "familiar" or "unfamiliar"
each of the topics examined. In our evaluation, we considered that
one ranking was better than the other if at least 50% of the users
selected it as more "useful". Table 5 shows the rankings selected by
our subjects as more useful for each of the 7 examined topics. Every
row corresponds to a separate user. The columns marked as T_i show
what the preference of the user was for the particular topic. Under
the T_i columns, the keyword DR means that the user considered
DirectoryRank as more useful for that topic, while PR means that the
user deemed PageRank as more useful. The column marked as R on the
right of a T_i column indicates the reason for which the user
voted over the specified ranking. Table 6 summarizes the rankings
preferred by the majority of the users for each of the topics.
Table 5. Rankings selected as more useful for each topic
User   T1     T2     T3     T4     T5     T6     T7
#1     DR/1   DR/1   DR/1   DR/1   PR/2   DR/1   PR/2
#2     PR/2   DR/2   PR/2   DR/1   DR/1   DR/1   PR/2
#3     DR/1   DR/1   DR/1   DR/1   DR/2   DR/1   PR/2
#4     PR/1   PR/1   PR/2   DR/2   PR/2   PR/2   PR/1
#5     DR/1   PR/1   PR/2   PR/2   PR/2   DR/2   DR/1
#6     PR/2   DR/1   PR/2   DR/1   DR/1   DR/2   DR/1
#7     DR/2   PR/2   PR/1   DR/1   PR/2   DR/1   DR/1
#8     DR/1   DR/2   DR/1   DR/1   PR/1   DR/1   PR/2
#9     PR/2   DR/1   PR/2   PR/2   PR/2   DR/1   DR/2
#10    DR/1   DR/1   DR/1   DR/1   DR/1   DR/2   DR/2
#11    DR/1   DR/1   DR/1   DR/2   PR/2   PR/2   PR/2
#12    DR/1   DR/1   DR/1   PR/1   PR/2   DR/1   DR/1
#13    DR/2   PR/2   PR/1   DR/1   PR/2   DR/1   DR/1
#14    PR/2   DR/1   PR/2   DR/1   DR/1   DR/1   PR/2
#15    DR/1   DR/2   DR/1   DR/1   PR/1   DR/1   DR/1
(Each cell shows the ranking the user preferred for that topic, followed by the reason R given for the preference.)
Our survey results demonstrate that the majority of the users
perceived DirectoryRank as more useful overall than PageRank for
ordering Web pages in the Directory's topics. This is attested by the
fact that for most of the topics examined (5 out of the 7 topics),
the majority of our subjects preferred DR over PR. A closer look at
the obtained results indicates that the reason on which our
participants based most of their DR selections is Reason 1, which
implies that the rankings delivered by DR are perceived as more
topic-informative. Conversely, most of the users who liked better the
rankings induced by PR based their selection on Reason 2. This
suggests that the usefulness of PR is not determined mainly by how
informative a page is about a topic, but rather is substantially
influenced by the page's popularity.
Table 6. Rankings preferred by the majority of users
Topic                Preferred by majority
T1  Crime            DirectoryRank
T2  Photography      DirectoryRank
T3  Water Sports     PageRank
T4  Radiology        DirectoryRank
T5  Mechanics        PageRank
T6  Econometrics     DirectoryRank
T7  Collecting       DirectoryRank
Moreover, although not reported here due to space limit, our survey
results show that our participants' answers were not generally influenced
by their familiarity or not with the underlying topics. This
implies that our survey does not entail "topic-bias", since both
rankings compared are applied to pages listed in the same topic.
RELATED WORK
There have been a number of studies trying to identify the best
ranking order of the Web pages that are deemed to be relevant to a
given query/topic. The most successful of these studies [8, 11] suggest
the exploitation of the pages' links connectivity on the Web
graph for measuring the pages' importance and rank them accordingly
. The most widely known ranking metric that explores the
pages' links structure for measuring their importance on the Web is
PageRank. Currently, PageRank and its variations are used by most
major Web Search Engines to rank the results that they return to
Web users in response to their search requests. Despite PageRank's
usefulness for ordering pages in the context of Search Engines, it is
designed to measure the global importance of the pages on the
Web, independent of any particular topics. However, the overall
importance of the pages may be not a sufficient measure for ordering
the pages inside Directories' topics, essentially because pages
that are important in some topics may not be important in others,
regardless of the number and structure of the links that may appear
in those pages. To alleviate some of the inherent limitations of PageRank
, a number of researchers designed new ranking metrics,
which mainly rely on modifications of PageRank and are tailored
for specific tasks. For example, [9] studies personalization of the
PageRank metric by giving different weights to pages, [14] examine
the local and the inter-site link structure in order to compute a
global PageRank for Web pages, [7] introduce Hilltop, an algorithm
which generates query-specific authority scores for improving rankings
for popular queries. While most of these works mainly focus
on improving the rankings delivered to Web users by measuring the
Web pages' overall importance, in this paper we are more concerned
about the topic importance of Web pages by measuring the
pages' informativeness with respect to particular topics. In this
scope, we perceive our work to be complementary to previous studies
on personalized rankings [9]. Moreover, there exists prior work
that explores the lexical chaining technique as a means for representing
documents' contents [6, 12]. Recently, we employed the
lexical chaining technique for the automatic classification of Web
documents in topic hierarchies [13]. Our findings indicated the
potential of lexical chains in successfully capturing the thematic
content of Web pages. This motivated our work to use the lexical
chains generated for a number of Web pages as a means for ordering
pages within Directory topics. In the future we plan to investigate
how our approach could benefit from other linguistic approaches
, besides lexical chains.
CONCLUDING REMARKS
In this paper, we introduced DirectoryRank, a practical metric for
determining how informative Web pages are for particular topics
and ranking them accordingly. To evaluate the potential of DirectoryRank
in ordering Web pages inside Directory topics, we conducted
an experiment where we applied our DirectoryRank metric
to order a set of pages listed within 156 topics in Google Directory
and we compared the rankings induced by DirectoryRank to the
rankings that PageRank delivers in Google Directory for the same
set of pages and topics. In our study, we relied on the judgments
made by 15 users to determine which ranking is perceived as more
useful for Web Directories' users. Obtained results indicate that in
overall users preferred DirectoryRank over PageRank for ordering
Web pages inside the Directory's topics. Although it would probably
require additional studies in order to evaluate the applicability
of our method to Web Directories other than Google and assess
DirectoryRank's usefulness to a larger user and categories base, we
believe that our work can serve as the first step towards a topic-informative
ranking metric within directories.
REFERENCES
[1] Google Directory. http://dir.google.com/.
[2] MultiWordNet Domains. http://wndomains.itc.it/.
[3] Open Directory Project. http://dmoz.com/.
[4] Sumo Ontology. http://ontology.teknowledge.com/.
[5] WordNet 2.0. http://www.cogsci.princeton.edu/~wn/.
[6] Barzilay R. Lexical chains for text summarization. Master's Thesis, Ben-Gurion University, 1997.
[7] Bharat K. and Mihaila G. Hilltop: a search engine based on expert documents. http://www.cs.toronto.edu/~georgem/hilltop/.
[8] Kleinberg J. Authoritative sources in a hyperlinked environment. In Journal of the ACM, 46(5), 1999, 604-632.
[9] Haveliwala T. Topic sensitive PageRank. In Proceedings of the 11th WWW Conference, 2002, 517-526.
[10] Ntoulas A., Cho J. and Olston Ch. What's new on the web? The evolution of the web from a search engine perspective. In Proceedings of the 13th WWW Conference, 2004, 1-12.
[11] Page L., Brin S., Motwani R. and Winograd T. The PageRank citation ranking: Bringing order to the web. Available at http://dbpubs.stanford.edu:8090/pub/1999-66.
[12] Song Y.I., Han K.S. and Rim H.C. A term weighting method based on lexical chain for automatic summarization. In Proceedings of the 5th CICLing Conference, 2004, 636-639.
[13] Stamou S., Krikos V., Kokosis P., Ntoulas A. and Christodoulakis D. Web directory construction using lexical chains. In Proceedings of the 10th NLDB Conference, 2005, 138-149.
[14] Wang Y. and DeWitt D. Computing PageRank in a distributed internet search system. In Proceedings of the 30th VLDB Conference, 2004.
| topic hierarchy;semantic similarity;ranking metric;scoring;web directory;ranking;lexical chains;DirectoryRank;topic importance;PageRank;information retrieval;Web Directory;semantic similarities |
71 | Discovering and Ranking Web Services with BASIL: A Personalized Approach with Biased Focus | In this paper we present a personalized web service discovery and ranking technique for discovering and ranking relevant data-intensive web services. Our first prototype called BASIL supports a personalized view of data-intensive web services through source-biased focus. BASIL provides service discovery and ranking through source-biased probing and source-biased relevance metrics. Concretely, the BASIL approach has three unique features: (1) It is able to determine in very few interactions whether a target service is relevant to the given source service by probing the target with very precise probes; (2) It can evaluate and rank the relevant services discovered based on a set of source-biased relevance metrics; and (3) It can identify interesting types of relationships for each source service with respect to other discovered services, which can be used as value-added metadata for each service. We also introduce a performance optimization technique called source-biased probing with focal terms to further improve the effectiveness of the basic source-biased service discovery algorithm. The paper concludes with a set of initial experiments showing the effectiveness of the BASIL system. | INTRODUCTION
Most web services today are web-enabled applications that
can be accessed and invoked using a messaging system, typically
relying on standards such as XML, WSDL, and SOAP [29].
Many companies have latched onto the web services mantra, including
major software developers, business exchanges, eCommerce
sites, and search engines [15, 9, 2, 1, 7]. A large and
growing portion of the web services today can be categorized
as data-intensive web services.
This research is partially supported by NSF CNS CCR, NSF ITR, DoE
SciDAC, DARPA, CERCS Research Grant, IBM Faculty Award, IBM
SUR grant, HP Equipment Grant, and LLNL LDRD.
Data-intensive web services provide access to huge and growing
data stores and support tools for searching, manipulating,
and analyzing those data stores. For example, both Amazon [1]
and Google [7] now provide XML- and SOAP-based web service
interfaces to their underlying data repositories with support
for advanced search operators over, collectively, billions of
items. In the life sciences domain, many bioinformatics services
are transitioning from human-in-the-loop web interfaces to the
web services model [9], providing direct access to unprecedented
amounts of raw data and specialized research tools to provide
high-level analysis and search over these data services.
With the increasing visibility of web services and the Service-Oriented
Computing paradigm [18], there is a growing need
for efficient mechanisms for discovering and ranking services.
Effective mechanisms for web service discovery and ranking are
critical for organizations to take advantage of the tremendous
opportunities offered by web services, to engage in business
collaborations and service compositions, to identify potential
service partners, and to understand service competitors and
increase the competitive edge of their service offerings.
Current web service discovery techniques can be classified
into two types: categorization-based discovery and personalized
relevance-based discovery. The former discovers web services
by clustering and categorizing a collection of web services
into different groups based on certain common properties of the
services. Most of the existing UDDI [28] registry-based service
discovery methods are of this type. They typically discover relevant
services by querying metadata maintained in the common
registries (like the ones offered by Microsoft [16] and IBM [10]).
A typical question is "Which bioinformatics web services offer
BLAST capability?" or "Which commercial services offer on-line
auctions?". The second type of discovery mechanism uses
personalized relevance reasoning and supports questions such
as "Which services offer the same type of content as NCBI",
and "Find the top-ten web services that offer more coverage
than the BLAST services at NCBI". These two types of service
discovery techniques offer different focus and complementary
capabilities. Consider the following examples:
A bioinformatics researcher may be interested in finding all
services similar to NCBI's BLAST service for searching DNA
and protein sequence libraries [17]. Current service registries
may provide pointers to other BLAST services, but they do
not describe how these other sites relate specifically to NCBI's
BLAST service. Which services provide the most similar coverage
with respect to NCBI (e.g.
of similar proteins or organisms
)? Which services are complementary in their coverage
(e.g. of other sequence libraries)? How best should the BLAST
services be ranked relative to the NCBI service?
A health science researcher familiar with the PubMed medical
literature service may be interested in discovering other
related medical digital library services. Given his prior knowledge
of PubMed, he may want to ask certain personalized
(source-biased) discovery requests, which are not supported by
the conventional service-registry-based discovery model. Examples
include: Find and rank all PubMed-related medical literature
sites. Which services have more general coverage than
PubMed? Which medical literature services are more specialized
than PubMed?
These examples highlight two fundamental differences between
the categorization-based and the personalization-based
discovery model: (1) Categorization of web services based on
general descriptions maintained at the service registries are insufficient
and inadequate when a user is interested in discovery
based on a particular service (or a set of services) with which
she has prior experience or knowledge (e.g. NCBI BLAST or
PubMed); and (2) There is a need for service ranking metrics
that capture the relative scope and coverage of the data offered
with respect to a previously known service. Although these two
types of discovery mechanisms are complementary, most existing
proposals on web service discovery fall into the first type.
Surprisingly, there are, to our knowledge, no effective means
to provide such personalized and biased discovery and ranking
support without relying on significant human intervention.
In this paper we present algorithms for discovering and ranking
relevant data-intensive web services. Our first prototype
called BASIL (BiAsed Service dIscovery aLgorithm) supports a personalized view of web services
through source-biased probing and source-biased relevance detection
and ranking metrics. Concretely, our approach is capable
of discovering and ranking web services by focusing on the
nature and degree of the data relevance of the source service
to others. Given a service like NCBI's BLAST called the
source - the BASIL source-biased probing technique leverages
the summary information of the source to generate a series of
biased probes to other services called the targets. This source-biased
probing allows us to determine whether a target service
is relevant to the source by probing the target with very few focused
probes. We introduce the biased focus metric to discover
and rank highly relevant data services and measure relevance
between services. Our initial results on both simulation and
web experiments show that the BASIL system supports efficient
discovery and ranking of data-intensive web services.
MODEL AND PROBLEM STATEMENT
We consider a universe of discourse W consisting of D data-intensive
web services: W = {S_1, S_2, ..., S_D}, where each service
produces one or more XML documents in response to a
particular service request. Hence, we describe each web service
S_i as a set of M_i documents: S_i = {doc_1, doc_2, ..., doc_{M_i}}. For
example, the documents corresponding to the NCBI BLAST
service would consist of genetic sequences and documentation
generated in response to a service request. Similarly, the documents
corresponding to PubMed would consist of the set of
medical journal articles in the PubMed data repository.
There are N terms (t_1, t_2, ..., t_N) in the universe of discourse
W, including both the tags and content of the XML documents,
where common stopwords (like `a', `the', and so on) have been
eliminated. Optionally, the set of N terms may be further
refined by stemming [19] to remove prefixes and suffixes.
Adopting a vector-space model [22, 23] of the service data
repository, we describe each service S_i as a vector consisting of
the terms in the service along with a corresponding weight:

Summary(S_i) = {(t_1, w_{i1}), (t_2, w_{i2}), ..., (t_N, w_{iN})}

A term that does not occur in any documents served by a
service S_i will have weight 0. Typically, for any particular
service S_i, only a fraction of the N terms will have non-zero
weight. We refer to the number of non-zero weighted terms in
S_i as N_i.
We call the vector Summary(S_i) a service summary for the
data-intensive web service S_i. A service summary is a single aggregate
vector that summarizes the overall distribution of terms
in the set of documents produced by the service. In this first
prototype of BASIL, we rely on a bag-of-words model that is
indifferent to the structure inherent in the XML documents. As
we demonstrate in the experiments section, this bag-of-words
approach is quite powerful without the added burden of structural
comparisons. We anticipate augmenting future versions
of BASIL to incorporate structural components (to support
schema matching, leverage existing ontologies, etc.).
To find Summary(S_i), we must first represent each document
doc_j (1 ≤ j ≤ M) as a vector of terms and the frequency of
each term in the document:

doc_j = {(t_1, freq_{j1}), (t_2, freq_{j2}), ..., (t_N, freq_{jN})}

where freq_{jk} is the frequency of occurrence of term t_k in
document j. The initial weight for each term may be based
on the raw frequency of the term in the document and it can
be refined using alternative occurrence-based metrics like the
normalized frequency of the term and the term-frequency inverse
document-frequency (TFIDF ) weight. TFIDF weights
the terms in each document vector based on the characteristics
of all documents in the set of documents.
Given a particular encoding for each document, we may generate
the overall service summary in a number of ways. Initially
, the weight for each term in the service summary may be
based on the overall frequency of the term across all the documents
in the service (called the service frequency, or servFreq):
w_{ik} = servFreq_{ik} = Σ_{j=1}^{M} freq_{jk}. Alternatively, we can also
define the weight for each term based on the number of documents
in which each term occurs (called the document count
frequency, or docCount).
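To make the summary construction concrete, the following Python sketch (our own illustration, not code from the BASIL prototype) builds a service summary from a sample of documents using either the servFreq or docCount weighting just described; the whitespace tokenizer and the tiny stopword list are simplifying assumptions.

from collections import Counter

STOPWORDS = {"a", "the", "and", "of", "to"}  # assumed, minimal stopword list for illustration

def doc_vector(text):
    """Represent one document as {term: frequency}, dropping stopwords."""
    terms = [t.lower() for t in text.split() if t.lower() not in STOPWORDS]
    return Counter(terms)

def service_summary(documents, weighting="servFreq"):
    """Aggregate document vectors into a service summary {term: weight}.

    weighting="servFreq": total frequency of the term across all documents.
    weighting="docCount": number of documents in which the term occurs.
    """
    summary = Counter()
    for doc in documents:
        vec = doc_vector(doc)
        if weighting == "servFreq":
            summary.update(vec)                   # sum raw frequencies
        else:
            summary.update({t: 1 for t in vec})   # count one per document
    return dict(summary)

# Example with two toy documents
docs = ["arthritis and bacteria research", "cancer research on bacteria"]
print(service_summary(docs, "servFreq"))
print(service_summary(docs, "docCount"))

Either weighting produces the term-to-weight vector that the probing and comparison steps below operate on.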
Once we have chosen our service model, to effectively compare
two data-intensive web services and determine the relevance
of one service to another, we need two technical components
: (1) a technique for generating a service summary; and
(2) a metric for measuring the relevance between the two.
2.1
Estimating Service Summaries
Ideally, we would have access to the complete set of documents
belonging to a data-intensive web service. We call a
service summary for S_i built on these documents an actual
service summary, or ASummary(S_i). However, the enormous
size of the underlying repositories for many data-intensive web
services coupled with the non-trivial costs of collecting documents
(through repeated service requests and individual document
transfer) makes it unreasonable to generate an actual
service summary for every service available. As a result, previous
researchers in the context of distributed databases have
introduced several probing techniques for generating representative
summaries based on small samples of document-based
collections [3, 4]. We call such a representative summary an
estimated service summary, or ESummary(S_i):

ESummary(S_i) = {(t_1, w_{i1}), (t_2, w_{i2}), ..., (t_N, w_{iN})}

The number of occurring terms (i.e. those terms that have
non-zero weight) in the estimated summary is denoted by N'_i.
Typically, N'_i will be much less than the number of non-zero
weighted terms N_i in the actual service summary, since only
a fraction of the total documents in a service will be examined.
The goal of a prober is typically to find ESummary(S_i)
such that the relative distribution of terms closely matches the
distribution of terms in ASummary(S_i), even though only a
fraction of the total service documents will be examined.
Current probing techniques for estimating service summaries
aim at estimating the overall summary of the data served by
a web service. We classify them into two categories: random
sampling and query-based sampling.
Random Sampling - No Bias
If we had unfettered access to a data-intensive web service,
we could randomly select terms from the service to generate
the estimated service summary
ESummary(S
i
). Barring that,
we could randomly select documents with which to base the
estimated service summary. We will call such a random selection
mechanism an unbiased prober since all terms (or documents
) are equally likely to be selected. In practice, an unbiased
prober is unrealistic since most services only provide a
request-response mechanism for extracting documents.
Query-based Sampling - Query Bias
As a good approximation to unbiased probing, Callan et al. [3,
4] have introduced a query-based sampling technique for generating
accurate estimates of document-based collections by examining
only a fraction of the total documents. The Callan
technique has been shown to provide accurate estimates using
very few documents (e.g. several hundred). Adapting the
Callan technique to the web services context requires repeatedly
requesting documents from a service using a limited set of
service requests. Since the documents extracted are not chosen
randomly, but are biased by the service request mechanism
through the ranking of returned documents and by providing
incomplete access to the entire data service repository, we say
that the Callan technique displays query bias. There are several
ways to define the limited set of queries, including random
selection from a general dictionary and random selection augmented
by terms drawn from the extracted documents from the
service. In the rest of the paper, when we refer to an estimated
service summary ESummary(S_i), we mean one that has been
produced by a query-biased prober.
2.2
Comparing Service Summaries
In order to determine the relevance of one service S_i to another
service S_j and to assess the nature of their relationship,
we require an appropriate relevance metric. There are a number
of possible relevance metrics to compare two service summaries.
A fairly simple and straightforward approach is based
on a count of the number of common terms in the two services
S_i and S_j:

rel(S_i, S_j) = |ESummary(S_i) ∩ ESummary(S_j)| / max(|ESummary(S_i)|, |ESummary(S_j)|)

Two services with exactly the same terms represented in their
estimated summaries will have rel(S_i, S_j) = 1, indicating the
highest possible degree of relevance. Conversely, two services
with no terms in common will have rel(S_i, S_j) = 0, indicating
the lowest possible degree of relevance.
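A minimal sketch of this common-term relevance measure, assuming the dictionary-based summaries of the previous sketch (the function name rel simply mirrors the formula above):

def rel(summary_i, summary_j):
    """Common-term relevance between two estimated service summaries.

    rel(S_i, S_j) = |terms(S_i) & terms(S_j)| / max(|terms(S_i)|, |terms(S_j)|)
    """
    terms_i, terms_j = set(summary_i), set(summary_j)
    if not terms_i or not terms_j:
        return 0.0
    return len(terms_i & terms_j) / max(len(terms_i), len(terms_j))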
We now use an example to illustrate why the existing service
summary estimation techniques are inadequate for effectively
discovering relevant services, especially in terms of the data
coverage of one (target) in the context of the other (source).
Example:
We collected fifty documents from the Google web
service, the PubMed web service, and ESPN's search site, respectively
, using a query-based sampling technique for service
summary estimation. Using the service summaries constructed,
we find that rel(Google, P ubM ed) = 0.05 and rel(ESP N ,
P ubM ed) = 0.06. In both cases the service summaries share
very few terms in common and hence both Google and ESPN
appear to be irrelevant with respect to PubMed, even though
Google provides considerable health-related content. Based on
these figures, we could incorrectly conclude that: (1) Google
is irrelevant to PubMed; and (2) Relatively speaking, ESPN is
more relevant to PubMed than Google.
This example underlines two critical problems with current
techniques for probing and comparing service summaries:
First, current service summary estimation techniques are concerned
with generating overall (or global ) summaries of the underlying
data repositories. The goal is to generate essentially an
unbiased estimate of the actual service summary. Second, the
current relevance comparison metric fails to serve as a valuable
ranking metric or indicator of interesting relationships between
target services in terms of the data coverage of a target service
with respect to the source.
THE BASIL SYSTEM
Bearing these issues in mind, we now introduce BASIL, an
efficient web service discovery and ranking prototype that relies
on a biased perspective of services rather than on a single
global perspective. BASIL relies on three fundamental steps:
(1) source-biased probing for web service discovery; (2) evaluation
and ranking of discovered services with the biased focus
metric; and (3) leveraging the biased perspective of service
sources and targets to discover interesting relationships.
3.1
Source-Biased Probing
Given a data-intensive web service (the source), the source-biased
probing technique leverages the summary information
of the source to generate a series of biased probes for analyzing
another service (the target). This source-biased probing allows
us to determine in very few interactions whether a target service
is relevant to the source by probing the target with focused
probes.
To help differentiate the source-biased approach from others
discussed in Section 2, in this section we use σ to denote the
source service and τ to denote the target service instead of S_i
and S_j. Given two services σ and τ, the output of the source-biased
probing is a subjective service summary for τ that is
biased towards σ. We define the source-biased summary of the
target service, denoted by ESummary_σ(τ), as follows:

ESummary_σ(τ) = {(t_1, w_1^σ), (t_2, w_2^σ), ..., (t_N, w_N^σ)}

N is the total number of terms used in analyzing the set of
data-intensive web services. w_i^σ (1 ≤ i ≤ N) is the weight of
term t_i, defined using one of the weight functions introduced in
Section 2. To distinguish the term weight w_j in the unbiased summary
from the corresponding term weight in the biased target summary,
we denote the biased weight by w_j^σ. It is important to note that
typically the inequality w_j ≠ w_j^σ holds.
Concretely, the source-biased probing algorithm generates a
source-biased summary for a target as follows: it uses the estimated
service summary of the source σ, denoted by ESummary(σ),
as a dictionary of candidate probe terms and sends a series of
query requests parameterized by probe terms, selected from
ESummary(σ), to the target service τ; for each probe term,
it retrieves the top m matched documents from τ, generates
summary terms and updates ESummary_σ(τ). This process
repeats until a stopping condition is met. Figure 1 illustrates
the source-biased probing process. Note that in this first prototype
of BASIL the service requests are constrained to keyword-based
probes. Note that the source-biased approach can also
be applied to UDDI-directory-based discovery by restricting
the source summary to be generated from the metadata description
maintained at the registries rather than performing
the source-biased probing directly. However, the quality of the
discovery results will be lower due to the lack of richness in the
metadata maintained at the service registries for many services.

SourceBiasedProbing(Source σ, Target τ)
  For target service τ, initialize ESummary_σ(τ) = ∅.
  repeat
    Invoke the probe term selection algorithm to select a one-term query probe q from the source of bias ESummary(σ).
    Send the query q to the target service τ.
    Retrieve the top-m documents from τ.
    Update ESummary_σ(τ) with the terms and frequencies from the top-m documents.
  until the stop probing condition is met.
  return ESummary_σ(τ)
Figure 1: Source-Biased Probing Algorithm
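The loop of Figure 1 can be sketched in Python as follows, under several assumptions of ours: the target service is wrapped as a callable query_target(term, m) returning up to m documents as plain strings, probe terms are taken from the source summary in decreasing order of weight, and probing simply stops after a fixed number of probes (the simplest of the stopping conditions discussed later).

from collections import Counter

def source_biased_probe(source_summary, query_target, max_probes=20, top_m=5):
    """Build a source-biased summary of a target service.

    source_summary: {term: weight} for the source of bias.
    query_target:   callable(term, m) -> list of document strings from the target.
    """
    biased_summary = Counter()
    # candidate probes: source terms in decreasing order of weight
    probes = sorted(source_summary, key=source_summary.get, reverse=True)
    for term in probes[:max_probes]:
        for doc in query_target(term, top_m):
            # update the biased summary with the terms of each retrieved document
            biased_summary.update(w.lower() for w in doc.split())
    return dict(biased_summary)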
Now we use a simple example to illustrate the power of
source-biased probing. For presentation brevity, we are considering
a simplistic world of only very few terms per service
summary. In reality, each service summary would consist of
orders of magnitude more terms:
Example:
Suppose that our goal is to understand the relevance
of Google to PubMed. Suppose ESummary(PubMed) =
{arthritis, bacteria, cancer} (where for simplicity we have
dropped the term weights from the summary). Again for simplicity
suppose that Google provides access to only three types of
information: health, animals, and cars: ASummary(Google) =
{arthritis, bacteria, cancer, dog, elephant, frog, garage, helmet, indycar}.
An unbiased prober could result in ESummary(Google) = {arthritis, frog, helmet},
whereas a source-biased prober could result in
ESummary_PubMed(Google) = {arthritis, bacteria, cancer}. This simple example illustrates the essence
of the source-biased probing and how it accentuates the commonality
between the two services.
The performance and effectiveness of the source-biased probing
algorithm depends upon a number of factors, including the
selection criterion used for choosing source-specific candidate
probe terms, and the type of stop condition used to terminate
the probing process.
Mechanisms to Select Probe Terms
There are several possible ways to select the probes based on
the statistics stored with each service summary, including uniform
random selection and selection based on top-weighted
terms.
In general, the selection criterion will recommend a
query term drawn from the set N_σ of all non-zero weighted
terms in the unbiased source summary ESummary(σ).
Uniform Random Selection: In this simplest of selection techniques,
each term that occurs in ESummary(σ) has an equal
probability of being selected, i.e. Prob(selecting term j) = 1/N_σ.
Weight-Based Selection: Rather than randomly selecting query
terms, we could instead rely on a ranking of the terms by one of
the statistics that are recorded with each service summary. For
example, all terms in ESummary(σ) could be ranked according
to the weight of each term. Terms would then be selected in
descending order of weight. Depending on the type of weight
cataloged (e.g. servFreq, docCount, etc.), several flavors of
weight-based selection may be considered.
Different Types of Stop Probing Conditions
The stop probing condition is the second critical component in
the source-biased probing algorithm. We consider four different
types of conditions that might be used in practice:
Number of Queries: After some fixed number of query probes
(MaxProbes), end the probing. This condition is agnostic to
the number of documents that are examined for each service.
Documents Returned: In contrast to the first technique, the
second condition considers not the number of queries, but the
total number of documents (MaxDocs) returned by the service.
Since some queries may return no documents, this stopping
condition will require more query probes than the first
alternative when MaxProbes = MaxDocs.
Document Thresholding: Rather than treating each document
the same, this third alternative applies a threshold value to
each document to determine if it should be counted toward
MaxDocs.
For each document, we may calculate the relevance
of the document to the source of bias ESummary(σ). If
the document relevance is greater than some threshold value,
then the document is counted. Otherwise, the document is
discarded.
Steady-State: Rather than relying on a count of queries or documents
, this final stopping condition alternative instead relies
on the estimated summary reaching a steady-state. After each
probe, we calculate the difference between the new value of
ESummary_σ(τ) and the old value. If the difference (which
may be calculated in a number of ways) is less than some small
value ε, then we consider the summary stable and stop the
probing.
Due to the space limitation, we refer readers to our technical
report [5] for detailed experiments on the impact of these two
parameters.
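As one concrete illustration of the last alternative, the sketch below implements a steady-state check; the normalized L1 difference between successive summaries and the epsilon default are our own assumptions.

def is_steady_state(old_summary, new_summary, epsilon=0.01):
    """Stop probing when the normalized term distribution changes by less than epsilon."""
    def normalize(summary):
        total = float(sum(summary.values())) or 1.0
        return {t: w / total for t, w in summary.items()}
    old_n, new_n = normalize(old_summary), normalize(new_summary)
    terms = set(old_n) | set(new_n)
    diff = sum(abs(old_n.get(t, 0.0) - new_n.get(t, 0.0)) for t in terms)
    return diff < epsilon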
3.2
Evaluating and Ranking Services
Given a source and a target service, once we generate the
source-biased summary for the target service, we need an efficient
mechanism to evaluate the source-biased relevance of a
target service with respect to the source. Once a set of target
services have been evaluated with the source-biased relevance
metric, we can then rank the target services with respect to the
source of bias. We begin by discussing the necessary components
of the source-biased metric.
Let σ denote a source service modeled by an estimated summary
and τ denote a target service with a σ-biased summary,
and let focus_σ(τ) denote the source-biased focus measure. We
define focus_σ(τ) to be a measure of the topical focus of the
target service τ with respect to the source of bias σ. The focus
metric ranges from 0 to 1, with lower values indicating less
focus and higher values indicating more focus.
In general, focus is not a symmetric relation. We may describe
any two data-intensive web services σ and τ with the
focus in terms of σ by focus_σ(τ) or in terms of τ by focus_τ(σ).
We propose to use the well-known cosine similarity (or normalized
inner product) to approximate the source-biased focus
measure. We define the cosine-based focus as follows:

Cosine focus_σ(τ) = ( Σ_{k=1}^{N} w_k · w_k^σ ) / ( sqrt(Σ_{k=1}^{N} (w_k)^2) · sqrt(Σ_{k=1}^{N} (w_k^σ)^2) )

where w_k is the weight for term k in ESummary(σ) and
w_k^σ is the σ-biased weight for term k in ESummary_σ(τ). The
cosine ranges from 0 to 1, with higher scores indicating a higher
degree of similarity. In contrast, the cosine between orthogonal
vectors is 0, indicating that they are completely dissimilar. The
cosine measures the angle between two vectors, regardless of the
length of each vector. Intuitively, the cosine-based biased focus
is appealing since it reasonably captures the relevance between
two data-intensive web services.
Ranking Relevant Services
Given the biased focus measure, we may probe a group of target
services to identify the most relevant services to the source
of bias. For a single source of bias S_1 from our universe of discourse
W, we may evaluate multiple target services S_2, S_3, ..., S_D.
For each target service, we may evaluate the appropriate
focus measure for each source-target pair (i.e. focus_{S_1}(S_2),
focus_{S_1}(S_3), etc.). We may then rank the target services in
descending order in terms of their source-biased focus with respect
to S_1.
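Both the cosine-based focus and this ranking step can be sketched directly from the definitions above, again assuming the dictionary-based summaries used in the earlier sketches:

import math

def cosine_focus(source_summary, biased_target_summary):
    """Cosine focus of a target with respect to a source of bias."""
    common = set(source_summary) & set(biased_target_summary)
    dot = sum(source_summary[t] * biased_target_summary[t] for t in common)
    norm_s = math.sqrt(sum(w * w for w in source_summary.values()))
    norm_t = math.sqrt(sum(w * w for w in biased_target_summary.values()))
    if norm_s == 0 or norm_t == 0:
        return 0.0
    return dot / (norm_s * norm_t)

def rank_targets(source_summary, biased_summaries):
    """Rank targets (name -> biased summary) in descending order of source-biased focus."""
    scores = {name: cosine_focus(source_summary, s) for name, s in biased_summaries.items()}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

For a source of bias S_1, rank_targets would be called with the summary of S_1 and the S_1-biased summaries of the candidate targets.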
As we will show in our experiments section, source-biased
probing results in the identification of relevant services that
existing approaches may overlook. We also show that source-biased
probing can generate source-biased summaries of good
quality using far fewer documents than existing approaches,
placing significantly less burden on the target services.
3.3
Identifying Interesting Relationships
The critical third component of the BASIL system consists of
the techniques for exploiting and understanding interesting relationships
between services using a source-biased lens. By analyzing
the nature of the relationships between data-intensive
web services, we will provide support for understanding the relative
scope and coverage of one service with respect to another.
The source-biased probing framework and biased focus measure
provide the flexible building blocks for automated identification
of interesting relationships between services, especially
since the framework promotes an asymmetric source-biased
view for any two services. Our relationship discovery module
creates a flexible organization of services, where each service
is annotated with a list of relationship sets. The two typical
relationship types we have identified are similarity-based and
hierarchical-based.
Similarity-Based Relationships
Given the universe of discourse W = {S_1, S_2, ..., S_D}, we identify
three similarity-based relationship sets for a particular service
S_i. These relationship sets are defined in terms of threshold
values λ_high and λ_low, where 0 ≤ λ_low ≤ λ_high < 1.
λ-equivalent: The first relationship says that if both focus_{S_i}(S_j) > λ_high
and focus_{S_j}(S_i) > λ_high hold, then we may conclude
that S_i is sufficiently focused on S_j and S_j is sufficiently
focused on S_i. Hence, the two services are approximately the
same in terms of their data coverage. We call this approximate
equality λ-equivalence. It indicates that the equivalence
is not absolute but is a function of the parameter λ_high. Formally,
λ-equivalent(S_i) = {S_j ∈ W | focus_{S_i}(S_j) > λ_high and focus_{S_j}(S_i) > λ_high}.
λ-complement: If both focus_{S_i}(S_j) < λ_low and focus_{S_j}(S_i) < λ_low
hold, then we can conclude that S_i and S_j are sufficiently
concerned with different topics since neither one is very
focused on the other. We annotate this approximate complementary
nature with the λ prefix. Formally, λ-complement(S_i) =
{S_j ∈ W | focus_{S_i}(S_j) < λ_low and focus_{S_j}(S_i) < λ_low}.
λ-overlap: When two services S_i and S_j are neither λ-equivalent
nor λ-complementary, we say that the two services
λ-overlap. Formally, λ-overlap(S_i) = {S_j ∈ W | S_j ∉ λ-complement(S_i) and S_j ∉ λ-equivalent(S_i)}.
Hierarchical Relationships
In addition to similarity-based relationship sets, we also define
hierarchical relationship sets by measuring the relative coverage
of target services in W with respect to a particular service
S_i (the source). These hierarchical relationship sets are defined in
terms of a parameter λ_diff, where 0 ≤ λ_diff ≤ 1.
λ-superset: If focus_{S_i}(S_j) - focus_{S_j}(S_i) > λ_diff, then a
relatively significant portion of S_i is contained in S_j, indicating
that S_j has a λ-superset relationship with S_i. We use the λ prefix
to indicate that S_j is not a strict superset of S_i, but rather
that the relationship is parameterized by λ_diff. Formally,
λ-superset(S_i) = {S_j ∈ W | focus_{S_i}(S_j) - focus_{S_j}(S_i) > λ_diff}.
λ-subset: Conversely, if focus_{S_j}(S_i) - focus_{S_i}(S_j) > λ_diff,
then a relatively significant portion of S_j is contained in S_i,
indicating that S_j has a λ-subset relationship with S_i. Similarly,
S_j is not a strict subset of S_i, but rather the relationship
is parameterized by λ_diff. Formally, λ-subset(S_i) =
{S_j ∈ W | focus_{S_j}(S_i) - focus_{S_i}(S_j) > λ_diff}.
We note that the determination of the appropriate λ-values
is critical for the correct assignment of services to each relationship
set. In our experiments section, we illustrate how these
relationship sets may be created, but, for now, we leave the
optimization of λ-values as future work.
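A small sketch of how the two focus values for a source-target pair map onto these relationship sets; the default thresholds are the web-collection settings reported in the experiments section and are otherwise placeholders.

def classify_relationship(focus_i_j, focus_j_i, high=0.15, low=0.05, diff=0.10):
    """Classify S_j relative to S_i from the two biased focus values.

    focus_i_j = focus_{S_i}(S_j), focus_j_i = focus_{S_j}(S_i).
    Returns a similarity label and an optional hierarchical label.
    """
    if focus_i_j > high and focus_j_i > high:
        similarity = "equivalent"
    elif focus_i_j < low and focus_j_i < low:
        similarity = "complement"
    else:
        similarity = "overlap"

    hierarchy = None
    if focus_i_j - focus_j_i > diff:
        hierarchy = "superset"   # S_j covers a broader scope than S_i
    elif focus_j_i - focus_i_j > diff:
        hierarchy = "subset"     # S_j is more specialized than S_i
    return similarity, hierarchy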
Using Relationship Sets. Both similarity-based and hierarchy-based
inter-service relationships can be generated automatically,
and used as metadata annotations for each of the services.
These source-biased relevance data provide a flexible foundation
for relationship analysis among services. For any service
S
i
, we need only consult the appropriate relationship set. The
three similarity-based relationship sets provide the basis for
answering queries of the form: "What other services are most
like X? Somewhat like X? Or complementary to X?". The two
hierarchical-based sets provide the basis for answering queries
of the form: "What other services are more general than X?
Or more specialized than X?".
In addition, these relationship sets are useful for routing
service requests to the appropriate services. For example, a
user interested in BLAST data may choose to use both NCBI's
BLAST service and all of the services that have a λ-equivalence
relationship with NCBI BLAST. Alternatively, a user interested
in maximizing coverage of multiple topically-distinct services
, may choose to query both the source service she knows
about and any members in the complementary set of the source
service. The hierarchical relationship sets are particularly helpful
in cases where a user may refine a service request to more
specialized services, or alternatively, may choose to generalize
the scope of the service request by considering services further
up the hierarchy to get more matching answers.
FOCAL TERM PROBING
One of the critical parameters to the success of BASIL's
source-biased probing is the choice of probe terms from the
source of bias σ. We have discussed several selection techniques
as well as different ways to define stop-probing conditions. In
this section we introduce a refinement over these simple selection
techniques whereby the source summary is segmented
into k groups of co-occurring terms. The main idea is to iteratively
select one term from each of the k groups to probe
the target. We call these terms the focal terms of the corresponding
group. When used in conjunction with the general
source-biased probing algorithm, we have an enhanced version
called source-biased probing with focal terms. A unique advantage
of using focal terms is that the biased summaries of target
services can be generated in fewer queries with higher quality.
4.1
Focal Terms and Focal Term Groups
Let σ denote a source service with its unbiased service summary
ESummary(σ). We denote the set of terms with non-zero
weight in ESummary(σ) (i.e. the terms that actually occur in the
service σ) as Terms(σ), where Terms(σ) consists of n terms
t_1, t_2, ..., t_n.
A focal term group is a subset of terms in the set Terms(σ)
that co-occur in the documents of σ. We denote a focal term
group i as FTerms_i. The main idea behind source-biased probing
with focal terms is to partition the set Terms(σ) into k disjoint
term groups such that the terms within each term group
co-occur in documents of σ more frequently than they do with
terms from other term groups.
Table 1: Example Focal Terms for PubMed
Group 1: care, education, family, management, ...
Group 2: brain, gene, protein, nucleotide, ...
Group 3: clinical, noteworthy, taxonomy, ...
Group 4: experimental, molecular, therapy, ...
Group 5: aids, evidence, research, winter, ...
Formally, we need an algorithm that can find a partition of
Terms(σ) into k focal term groups: Terms(σ) = {FTerms_1, ..., FTerms_i, ..., FTerms_k | ∪_{i=1}^{k} FTerms_i = {t_1, ..., t_n} and FTerms_i ∩ FTerms_j = ∅}
In Table 1, we show an example of five focal term groups for
a collection of 100 PubMed documents. Note that k is intended
to be very small since the focal term groups are meant to be
very coarse.
Given k focal term groups, by selecting a focal term from
each term group F T erms
i
as a probing query, we hope to retrieve
documents that also contain many of the other words
in that focal term group. For example, suppose we are using
a frequency-based measure for query probe selection from
PubMed. The top four query terms may be "brain", "gene",
"protein", and "nucleotide". Suppose these four terms tend to
co-occur with each other as indicated in Table 1. By sending
the first query "brain" to a target service, we could reasonably
expect to find the other three terms since our analysis
of the source indicates that these four terms tend to co-occur.
A naive source-biased prober would ignore this co-occurrence
information and, instead, send the other three queries "gene",
"protein", and "nucleotide", even though we might reasonably
expect for those queries to generate documents similar to those
generated by the first query "brain". In essence, we will have
used four queries when a single query would have sufficed at
adequately exploring the term space of the target.
It is important to note that, unlike previous research in
grouping terms for query-expansion [31, 21] or finding similar
terms [24] our goal is not to find close semantic relationships
between terms, but rather to find very coarse co-occurrence associations
among terms to support a more efficient and effective
biased service summary estimation. For example, though we
may discover that "brain" and "protein" tend to co-occur, we
do not claim that there is a close semantic relationship between
the two terms.
4.2
Finding Focal Terms
In this section, we discuss how we may adapt a popular clustering
technique to the problem of focal term discovery. Recall
that in Section 2, we view a service S_i as a set of documents,
each of which is described by a vector of terms and weights.
We now invert our view of a service using the same set of information.
We consider a service S_i as a collection of terms, each
of which is described by a vector of the documents in which the
term occurs and a weight describing the occurrence frequency
of the term in the corresponding document. Hence, we have:
Terms(S_i) = {term_1, term_2, ..., term_N}. For the N terms in
the service, each term_j (1 ≤ j ≤ N) is a vector of documents
and weights:

term_j = {(doc_1, w_{j1}), (doc_2, w_{j2}), ..., (doc_M, w_{jM})}
We can define a segmentation technique for finding focal term
groups by clustering the set Terms(S_i) into k clusters. Given
the term vectors and the similarity function, a number of clustering
algorithms can be applied to partition the set Terms(S_i)
of N terms into k clusters. We choose Simple K-Means since
it is conceptually simple and computationally efficient. The algorithm
starts by generating k random cluster centers. Each
term is assigned to the cluster with the most similar (or least
distant) center. The similarity is computed based on the closeness
of the term and each of the cluster centers. Then the
algorithm refines the k cluster centers based on the centroid
of each cluster. Terms are then re-assigned to the cluster with
the most similar center. The cycle of calculating centroids and
assigning terms in Terms(S_i) to k clusters repeats until the
cluster centroids stabilize. Let C denote a cluster in the form
of a set of terms in the cluster. The centroid of cluster C is:

centroid_C = <(doc_1, (1/|C|) Σ_{j∈C} w_{j1}), (doc_2, (1/|C|) Σ_{j∈C} w_{j2}), ..., (doc_M, (1/|C|) Σ_{j∈C} w_{jM})>

where w_{jl} is the weight of term j in document l, and the formula
(1/|C|) Σ_{j∈C} w_{jl} denotes the average weight of document
l in the cluster C. A sketch of the K-Means term clustering
based on the term vectors of a service is provided in Figure 2.

FocalTerms(Num Clusters k, Input Vectors D)
  Let D = {d_1, ..., d_n} denote the set of n term vectors
  Let M denote the total number of documents in D
  Let d_j = <(doc_1, w_{j1}), ..., (doc_M, w_{jM})> denote a term vector of M elements, where w_{jl} is the TFIDF weight of doc_l in term j (l = 1, ..., M)
  Let C = {C_1, ..., C_k} denote a clustering of D into k clusters
  Let mu_i denote the center of cluster C_i
  foreach cluster C_i:
    Randomly pick a term vector d_j from D
    Initialize the cluster center mu_i = d_j
  repeat
    foreach input term vector d_j in D:
      foreach cluster C_i in C (i = 1, ..., k):
        compute delta_i = sim(d_j, mu_i)
      Let mu_h be the cluster center most similar to d_j (i.e. delta_h is the largest among delta_1, ..., delta_k)
      Assign d_j to the cluster C_h
    // refine cluster centers using centroids
    foreach cluster C_i in C:
      foreach document position l (l = 1, ..., M):
        cw_{il} = (1/|C_i|) Σ_{d_j ∈ C_i} w_{jl}
      mu_i = <(doc_1, cw_{i1}), ..., (doc_M, cw_{iM})>
  until cluster centers no longer change
  return C
Figure 2: Focal Term Clustering Algorithm
The similarity function used in Figure 2 can be defined using
a number of functions. In this paper, we use the cosine
similarity function. Given a set of N terms and a set of M
documents, where w_{ik} denotes the weight for term k in document
i (1 ≤ k ≤ N, 1 ≤ i ≤ M), the cosine function prescribes:

sim(term_i, term_j) = ( Σ_{k=1}^{N} w_{ik} w_{jk} ) / ( sqrt(Σ_{k=1}^{N} (w_{ik})^2) · sqrt(Σ_{k=1}^{N} (w_{jk})^2) )
In Section 5 we report the initial experiments on effectiveness
of using focal terms to optimize the source-biased probing
algorithm, showing that the source-biased algorithm with focal
terms results in more efficient probing for varying numbers of
focal-term groups.
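A compact alternative to the hand-rolled loop of Figure 2 is to cluster the inverted term-by-document matrix with an off-the-shelf K-Means, as in the sketch below; the use of scikit-learn, TF-IDF weighting, and Euclidean (rather than cosine) distance are our simplifying assumptions, adequate only because the groups are meant to be very coarse.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

def focal_term_groups(documents, k=5):
    """Partition the terms of a service into k coarse co-occurrence groups.

    documents: list of document strings sampled from the source service.
    Returns a list of k term lists (the focal term groups).
    """
    vectorizer = TfidfVectorizer(stop_words="english")
    doc_term = vectorizer.fit_transform(documents)   # documents x terms
    term_vectors = doc_term.T.tocsr()                # terms x documents (inverted view)
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(term_vectors)
    terms = vectorizer.get_feature_names_out()
    groups = [[] for _ in range(k)]
    for term, label in zip(terms, labels):
        groups[label].append(term)
    return groups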
4.3
Selecting Focal-Based Probes
Once the k focal term groups have been constructed for a
source, the remaining problem is how to select the best terms
for probing a target service. We propose a simple round-robin
selection technique whereby a single term is selected from each
focal term group in turn. Once a single term has been selected
from each group, the cycle repeats by selecting a second term
from each group, a third term, and so on.
Given this basic strategy, we may use a number of techniques
for determining the order by which to select terms from the
k groups and for selecting probe terms from each focal term
group. One way to determine the order of focal term groups is
based upon the size of each group. We begin with the group
with the most terms and end each cycle with the group that
has the smallest number of terms. For each focal term group,
we may decide which term to select for each cycle by using one
of the selection criteria discussed in Section 3.
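The round-robin selection can be sketched as follows, assuming the terms within each focal term group are already ordered by the chosen selection criterion (e.g., decreasing weight):

def round_robin_probes(focal_groups, max_probes):
    """Yield probe terms by cycling over the focal term groups, largest group first."""
    groups = sorted((list(g) for g in focal_groups), key=len, reverse=True)
    probes, cycle = [], 0
    while len(probes) < max_probes and any(cycle < len(g) for g in groups):
        for group in groups:
            if cycle < len(group) and len(probes) < max_probes:
                probes.append(group[cycle])   # pick the cycle-th term of this group
        cycle += 1
    return probes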
EXPERIMENTS
In this section, we describe four sets of experiments designed
to evaluate the benefits and costs of BASIL. The first set intends
to show the effectiveness of our source-biased probing algorithm
and compare its performance with query-biased probing
and unbiased probing. The second set evaluates the biased
focus measure as an effective tool for ranking services.
The third set shows the efficiency of the biased focus measure
in identifying interesting inter-service relationships. The
fourth set evaluates the efficacy of source-biased probing with
focal terms by comparing the basic source-biased probing versus
source-biased probing with varying number of groups of
focal terms. Our experiments show that focal term probing
can achieve about ten percent performance improvement over
the basic algorithm for source-biased probing.
Since there are no large data-intensive web service collections
for experimentation, we rely on: (1) a large collection of
newsgroups designed to emulate the diversity and scope of real-world
data-intensive web services; and (2) a modest collection
of real-world web sources. Since the services in the web collection
change frequently and are beyond our control, and in an
effort not to overload any one site, we relied on the newsgroup
dataset for rigorous experimental validation.
Newsgroup Collection: We collected articles from 1,000 randomly
selected usenet newsgroups over the period June to July
2003. We eliminated overly small newsgroups containing fewer
than 100 articles, heavily spammed newsgroups, and newsgroups
with primarily binary data. After filtering out these
groups, we were left with 590 single topic newsgroups, ranging
in size from 100 to 16,000 articles. In an effort to match
the heterogeneity and scope inherent in many real-world services
, we constructed 135 additional groups of mixed topics by
randomly selecting articles from anywhere from 4 to 80 single
topic newsgroups, and 55 aggregate topic newsgroups by
combining articles from related newsgroups (e.g. by selecting
random documents from all the subgroups in comp.unix.* into
a single aggregate group). In total, the newsgroup collection
consists of over 2.5GB worth of articles in 780 groups.
Web Collection: For the second collection, we randomly selected
50 sites from the ProFusion [20] directory of web sites
that support queries, in addition to Google and PubMed. We
queried each site with a randomized set of single-word probes
drawn from the standard Unix dictionary, and collected a maximum
of 50 documents per site.
Probing Framework: We built a probing engine in Java
1.4 for use in all of our experiments. For each group in both
datasets, we constructed the estimated service summary based
on the overall term frequency of each term (servFreq). We
eliminated a set of common stopwords (e.g. "a", "the", and
so on) as well as collection-specific stopwords (e.g. "wrote",
"said", and so on for the newsgroup collection).
Figure 3: Probing Efficiency for 100 Pairs (average source similarity vs. documents examined for the Source Bias, Query Bias 1, Query Bias 2, and No Bias probers)
5.1
Effectiveness of Source-Biased Probing
The goal of our first set of experiments is to compare source-biased
probing with existing probing techniques and to evaluate
the efficiency and quality of source-biased probing. Source-biased
probing shows a significant gain in terms of the percentage
of documents probed that are similar to the source.
We first evaluate the efficiency of source-biased probing in
terms of the number of documents required to be extracted
from each target and the percentage of the documents extracted
that are similar to the source. The higher percentage of documents
similar (relevant) to the source, the more effective a
probing algorithm is.
We selected 100 random source-target pairs from the newsgroup
collection.
For each pair, we evaluated four probing
techniques: a source-biased prober (Source Bias) that selects
probe terms from the source summary in decreasing order of
servFreq; a query-biased prober (Query Bias 1) that randomly
selects probes from the standard Unix dictionary of English
terms; a query-biased prober (Query Bias 2) that selects its
initial probe from the Unix dictionary, but once the first document
has been retrieved from the target, all subsequent probes
are selected based on the estimated servFreq of the target's
service summary; and an unbiased prober (No Bias) that selects
documents at random from each target. For each pair, we
evaluated each of the four probing techniques for up to 100 total
documents extracted from each target, collecting a maximum
of 5 documents per probe query from each target.
In Figure 3, we show the average percentage of documents
similar (relevant) to the source (Cosine focus_σ(τ)) over all 100
source-target pairs as a function of the number of documents
examined in each target.
The percentage of the documents
extracted that are similar to the source (biased f ocus measure)
indicates the quality of document being extracted from each
target. We see that the source-biased probing outperforms the
No Bias prober and the Query Bias 1 prober by about 10% and
outperforms the Query Bias 2 prober by about 15%. Clearly,
the higher focus value means the higher success for a probing
algorithm.
Figure 4 shows another experiment where we also identified,
in our set of 100 source-target pairs, all of those pairs that were
a priori similar (e.g. mac.apps and mac.system) or dissimilar
(e.g. textiles.sewing and perl.misc). We show the relative
performance of the Source Bias, Query Bias 1, and No Bias
probers against these similar and dissimilar pairs.
Figure 4: Probing Efficiency Breakdown (average source similarity vs. documents examined for the Source Bias, Query Bias, and No Bias probers, broken down into a priori similar and dissimilar pairs)
Figure 5: Average Document Quality for 100 Pairs (percentage of extracted documents similar to the source vs. documents examined for the Source Bias, Query Bias 1, Query Bias 2, and No Bias probers)
The source-biased
prober requires fewer documents to achieve the same
relevance level as the other probers for all 100 source-target
pairs and for the similar and dissimilar pairs. For example, for
the similar source-target pairs in Figure 4, the source-biased
prober identifies target documents with 0.8 focus after extracting
fewer than 30 documents. In contrast, the other probers
require between two and three times as many documents to
achieve the same quality.
The third experiment is shown in Figure 5. Here we want
to show how quickly a source-biased prober can home in on the
most source-relevant documents in a target by plotting the
percentage of the documents extracted that are similar (relevant
) to the source for each of the four probers. As shown in
Figure 5, the source-biased prober performs nearly two-times
better than other probers: over 70% of the first 10 documents
extracted from a target are source-relevant, whereas the other
probers identify between 25% and 45% source-relevant documents
. As more documents are examined for each target, the
source-biased prober continues to maintain an advantage over
the other probers.
5.2
Ranking Effectiveness with Biased Focus
The second set of experiments intends to evaluate how well
source-biased probing compares with the alternative techniques
when it comes to evaluating and ranking collection of target
services. We use PubMed as the source and examine all 50
web sites as targets. We computed the biased focus score using
Cosine focus_σ(τ) and then ranked all targets relative to
PubMed using the biased focus measure. Since the web sites do
not support random document selection, we are unable to evaluate
an unbiased prober. So this experiment only compares the
source-biased prober with query biased prober 1. Table 2 shows
the top-10 ranked sites relative to PubMed.
Table 2: Identifying Web Sources Relevant to PubMed
Query Bias ranking: 1. AMA, 2. WebMD, 3. Linux Journal, 4. HealthAtoZ, 5. DevGuru, 6. FamilyTree Magazine, 7. Mayo Clinic, 8. Novell Support, 9. Random House, 10. January Magazine.
Source Bias ranking (Query Bias rank in parentheses): 1. Open Directory (13), 2. Google (27), 3. About (11), 4. WebMD (2), 5. AMA (1), 6. HealthAtoZ (4), 7. Monster (22), 8. Mayo Clinic (7), 9. Random House (9), 10. BBC News (12).
Figure 6: Precision for 10 Source Newsgroups (relevance precision of the No Bias, Query Bias, and Source Bias probers for the sources comp.sys.mac.system, comp.unix.misc, gnu.emacs.help, rec.aviation.owning, rec.games.chess.misc, rec.org.sca, rec.pets.cats.misc, sci.physics.research, soc.culture.hawaii, and talk.religion.misc)
In the Source Bias
column we also list in parenthesis the rank of each site assigned
by the Query Bias prober.
The query-biased prober identifies several health-related sites
in the web collection, but it mistakenly lists Linux Journal
ahead of HealthAtoZ, as well as listing a web development site
(DevGuru) and a genealogical magazine (FamilyTree) ahead of
the health-related Mayo Clinic. Overall, only four of the top-ten
sites could be considered topically relevant to PubMed. In
contrast, the source-biased prober's top-eight sites are all relevant
to PubMed. In addition to the health-related sites, the
source-biased prober also identifies three general sites that offer
access to medical literature (Open Directory, Google, and
About) that are ranked significantly lower by the query-biased
prober. Interestingly, the source-biased prober identifies a fair
number of scientific and bioinformatics-related job descriptions
in the Monster jobs site, resulting in its high relevance (similarity
) score to PubMed (high biased focus value).
To validate the quality of source-biased service evaluation, we
next randomly selected 10 sources from the newsgroup collection
to evaluate against the entire set of 780 newsgroups. We
compared the three probers Source Bias, Query Bias 1, and
No Bias. For each of the 10 sources, we measured relevance
(similarity) precision as the percentage of the top-10 ranked
target services that are considered relevant to the source using
Cosine focus_σ(τ). Relevance judgments were determined by
the consensus opinion of three volunteers. Figure 6 shows the
precision for the three probers after extracting 40 documents
per target service. Source Bias results in the highest precision
in nine of ten cases, tying with the next best prober in only
two cases. For the lone failure, Source Bias does succeed after
extracting 80 documents, indicating that the mistake may
be attributable to the error inherent in probing very few documents
. In general, the average precision of the source-biased
prober is nearly double that of the next best prober.
In Figure 7 we show the average precision for the ten sources
when increasingly more documents are extracted per target.
The source-biased approach displays higher precision than both
the query-biased and unbiased probers in all cases considered,
especially when based on very few documents.
Figure 7: Average Relevance Precision (relevance precision vs. documents examined for the No Bias, Query Bias, and Source Bias probers)
5.3
Identifying Interesting Relationships
The third set of experiments is designed to evaluate the effectiveness
of using the source-biased framework to support the
identification of interesting inter-service relationships that the
alternative schemes do not. Unlike the query-biased and unbiased
probers, the asymmetric nature of source-biased probing
allows us to characterize the nature of the relationship beyond
the single relevance ranking using biased focus measure.
We first illustrate relationship sets for PubMed over the web
collection. In Table 3 we show four classes of relationship sets
for λ_high = 0.15, λ_low = 0.05, and λ_diff = 0.10 using the
source-biased prober described above. Again we note that our
interest here is to illustrate the power of the λ-formulation;
we leave the optimization of λ-values to future work. In contrast
to the simple relevance ranking in Table 2, we see how
the source-biased framework can differentiate between the very
similar services (the λ-equivalent sites) and the more general
services (the λ-superset sites) relative to PubMed. In addition,
we can identify sites with some common data (the λ-overlap
sites) and sites concerned with significantly different topics (the
λ-complement sites).
Similarly, we show in Table 4 several interesting relationships
derived from the newsgroup collection for λ_high = 0.70, λ_low = 0.40,
and λ_diff = 0.30 using the Source Bias prober discussed
before. Again, by relying on BASIL's source-biased analysis we
may characterize relationship sets for each source.
As an example, we identify sci.physics.particle as a member
of the λ-subset relationship set of the mixed topic newsgroup
mixed11, which consists of 25% physics-related articles
in addition to articles on backgammon, juggling, and telecommunications
. Interestingly, we can see that there are several
overlapping relationships between newsgroups in related but
slightly different fields (e.g.
volleyball and cricket).
Finally
, we also identify several unrelated newsgroups, including
comp.sys.mac.system relative to misc.immigration.usa.
5.4
Probing with Focal Terms
In our final set of experiments, we consider the impact of focal
term probing on the success rate of source-biased probing. We
evaluate four flavors of focal term probing with 2, 3, 5, and 10
focal term groups from which to draw source-biased probes. In
our initial experiments with focal term probing, we discovered
that there was little impact on either the efficiency of probing
or the quality of target service evaluation when considering
sources from the single-topic newsgroup collection.
[Due to
space limitations, we omit these results here].
Figure 8: Impact of Focal Term Probing (average source similarity vs.
documents examined, for the original source-biased prober and focal term
probing with 2, 3, 5, and 10 focal term groups)
In contrast, we discovered that focal term probing had a
significant impact when used on mixed topic newsgroups, in
which there are documents from several unrelated single topic
newsgroups. In Figure 8, we show the probing efficiency for
the four focal term source-biased probers relative to the best
basic source-biased prober for 10 source-target pairs from the
newsgroup collection. In each case, the sources were drawn
exclusively from the mixed topic newsgroups.
All of the focal term techniques resulted in more efficient
probing versus basic source-biased probing and only minor differences
in ranking precision and relationship set generation
quality, indicating that focal term probing can be advantageous
in certain circumstances. Our intuition is that identifying focal
terms is considerably more important in cases in which there
are clear distinctions in term distributions as would be reflected
in the mixed topic newsgroups in which several groups of documents
are concerned with different topics.
RELATED WORK
Researchers have previously explored different aspects of the
service discovery problem, ranging from discovery in a federated
environment [25], to identifying services that meet certain
quality-of-service guarantees [13], to evaluating services based
on a distributed reputation metric [26], to other quality metrics
like in [32]. In contrast, we focus on the data relationships
between services to power efficient discovery and ranking.
Other researchers have previously studied the problem of repeatedly
querying an unknown database in an effort to generate
a summary of the database internals [11, 3, 30, 4, 8, 27, 14, 6].
The main purpose of these techniques is to generate a representative
content summary of the underlying database. Querying
methods suggested include the use of random queries, queries
learned from a classifier, and queries based on a feedback cycle
between the query and the response.
More recently, Gravano et al. [12] have introduced an extension
to the Callan-style probing technique that relies on a
learned set of queries for database classification. Their probing
method is effective for classifying web sites into a pre-determined
Yahoo!-style hierarchy, but requires the potentially
burdensome and inflexible task of labelling training data for
learning the classifier probes in the first place. Additionally, if
new categories are added or old categories removed from the hierarchy
, new probes must be learned and each source re-probed.
Previous research on grouping terms (as in our source-biased
probing with focal terms) has focussed on finding terms that are
effective for query-expansion [31, 21] or finding similar terms
[24]. Our focal term formulation is similar to that used in [21],
though their goal is to find close semantic relationships between
terms, unlike our coarse-grained groupings.
Table 3: Source-Biased Analysis: Identifying Relationships Relative to PubMed
Service (S)       URL                      Description        focus_PM(S)  focus_S(PM)  Relationship
WebMD             www.webmd.com            Health/Medical     0.23         0.18         λ-equivalent
AMA               www.ama-assn.org         Health/Medical     0.19         0.16         λ-equivalent
HealthAtoZ        www.healthatoz.com       Health/Medical     0.18         0.16         λ-equivalent
Open Directory    dmoz.org                 Web Directory      0.44         0.08         λ-superset
Google            www.google.com           Web Search Engine  0.37         0.10         λ-superset
About             www.about.com            Web Channels       0.25         0.08         λ-superset
Monster           www.monster.com          Jobs               0.14         0.08         λ-overlap
Mayo Clinic       www.mayoclinic.com       Health/Medical     0.12         0.11         λ-overlap
Silicon Investor  www.siliconinvestor.com  Finance            0.03         0.04         λ-complement
Usenet Recipes    recipes2.alastra.com     Recipes            0.02         0.03         λ-complement
Table 4: Source-Biased Analysis: Identifying Relationships in the Newsgroup Collection
A                           B                      focus_A(B)  focus_B(A)  Relationship
comp.sys.mac.apps           comp.sys.mac.system    0.86        0.76        λ-equivalent
comp.sys.mac.system         comp.sys.mac.advocacy  0.79        0.74        λ-equivalent
sci.physics.particle        sci.physics            0.86        0.80        λ-equivalent
sci.physics.particle        mixed45                0.86        0.62        λ-subset/superset
comp.unix.misc              mixed120               0.91        0.56        λ-subset/superset
rec.sport.volleyball        rec.sport.cricket      0.47        0.46        λ-overlap
rec.games.go                rec.games.chess.misc   0.50        0.53        λ-overlap
rec.crafts.textiles.sewing  comp.lang.perl.misc    0.35        0.32        λ-complement
comp.sys.mac.system         misc.immigration.usa   0.23        0.36        λ-complement
CONCLUSIONS
In this paper, we have presented a novel web service discovery
and ranking prototype called BASIL that supports a personalized
view of data-intensive web services through source-biased
focus. BASIL supports personalized discovery requests
and relevance reasoning through efficient source-biased probing
and source-biased relevance metrics. Concretely, we have
shown that BASIL allows us to determine in very few interactions
whether a target service is relevant to the source service
by probing the target with very precise probes. The biased
focus measure allows us to evaluate and rank the services discovered
and to identify interesting types of source-biased relationships
for a collection of services. Additionally, we have
introduced source-biased probing with focal terms as a performance
optimization to further improve the effectiveness of the
basic source-biased algorithm.
REFERENCES
[1] Amazon.com. Amazon.com Web Services.
http://www.amazon.com/gp/aws/landing.html, 2004.
[2] Ariba. http://www.ariba.com, 2003.
[3] J. Callan, M. Connell, and A. Du. Automatic discovery
of language models for text databases. In SIGMOD '99.
[4] J. P. Callan and M. E. Connell. Query-based sampling of
text databases. Information Systems, 19(2):97-130, 2001.
[5] J. Caverlee, L. Liu, and D. Rocco. Discovering and
ranking web services with BASIL: A personalized
approach with biased focus. Technical report, GIT, 2004.
[6] W. W. Cohen and Y. Singer. Learning to query the web.
In AAAI Workshop on Internet-Based Information
Systems. 1996.
[7] Google. Google Web APIs FAQ.
http://www.google.com/apis/, 2003.
[8] D. Hawking and P. Thistlewaite. Methods for
information server selection. ACM Transactions on
Information Systems, 17(1):40-76, 1999.
[9] IBM. Web Services for Life Sciences.
http://www.alphaworks.ibm.com/tech/ws4LS/, 2003.
[10] IBM. IBM UDDI Business Registry.
www.ibm.com/services/uddi/, 2004.
[11] P. G. Ipeirotis, L. Gravano, and M. Sahami. Probe,
count, and classify: Categorizing hidden-web databases.
In SIGMOD '01.
[12] P. G. Ipeirotis, L. Gravano, and M. Sahami. QProber: A
system for automatic classification of hidden-web
databases. ACM TOIS, 21(1):1-41, 2003.
[13] Y. Liu, A. H. Ngu, and L. Zeng. QoS computation and
policing in dynamic web service selection. In WWW '04.
[14] W. Meng, C. T. Yu, and K.-L. Liu. Detection of
heterogeneities in a multiple text database environment.
In CoopIS '99.
[15] Microsoft. .NET. http://www.microsoft.com/net/, 2003.
[16] Microsoft. Microsoft UDDI Business Registry Node.
http://uddi.microsoft.com/, 2004.
[17] National Center for Biotechnology Information. NCBI
BLAST. http://www.ncbi.nih.gov/BLAST/, 2004.
[18] M. P. Papazoglou. Service-oriented computing: Concepts,
characteristics and directions. In WISE '03.
[19] M. F. Porter. An algorithm for suffix stripping. Program,
14(3):130-137, 1980.
[20] ProFusion. http://www.profusion.com/, 2004.
[21] Y. Qiu and H.-P. Frei. Concept-based query expansion.
In SIGIR '93, pages 160-169, Pittsburgh, US.
[22] G. Salton and C. Buckley. Term-weighting approaches in
automatic text retrieval. In Readings in Information
Retrieval. Morgan Kauffman, San Francisco, CA, 1997.
[23] G. Salton, A. Wong, and C. Yang. A vector space model
for automatic indexing. CACM, 18(11):613-620, 1975.
[24] H. Schutze and J. O. Pedersen. A cooccurrence-based
thesaurus and two applications to information retrieval.
Information Processing and Management, 33(3), 1997.
[25] K. Sivashanmugam, K. Verma, and A. Sheth. Discovery
of web services in a federated registry environment. In
ICWS '04.
[26] R. M. Sreenath and M. P. Singh. Agent-based service
selection. Journal on Web Semantics (JWS), 2003.
[27] A. Sugiura and O. Etzioni. Query routing for web search
engines: Architecture and experiments. In WWW '00.
[28] UDDI. http://www.uddi.org/, 2004.
[29] W3C Working Group. Web Services Architecture.
http://www.w3.org/TR/2004/NOTE-ws-arch-20040211/,
February 2004.
[30] W. Wang, W. Meng, and C. Yu. Concept hierarchy based
text database categorization in a metasearch engine
environment. In WISE '00.
[31] J. Xu and W. B. Croft. Query expansion using local and
global document analysis. In SIGIR '96, pages 4-11.
[32] L. Zeng, B. Benatallah, M. Dumas, J. Kalagnanam, and
Q. Z. Sheng. Quality driven web services composition. In
WWW '03.
| focal terms;biased discovery;ranking;data-intensive web services;data-intensive services;query-biased probing;web service discovery;source-biased probing |
72 | Distance Measures for MPEG-7-based Retrieval | In visual information retrieval the careful choice of suitable proximity measures is a crucial success factor. The evaluation presented in this paper aims at showing that the distance measures suggested by the MPEG-7 group for the visual descriptors can be beaten by general-purpose measures. Eight visual MPEG-7 descriptors were selected and 38 distance measures implemented. Three media collections were created and assessed, performance indicators developed and more than 22500 tests performed. Additionally, a quantisation model was developed to be able to use predicate-based distance measures on continuous data as well. The evaluation shows that the distance measures recommended in the MPEG-7-standard are among the best but that other measures perform even better. | INTRODUCTION
The MPEG-7 standard defines among others a set of
descriptors for visual media. Each descriptor consists of a feature
extraction mechanism, a description (in binary and XML format)
and guidelines that define how to apply the descriptor on different
kinds of media (e.g. on temporal media). The MPEG-7 descriptors
have been carefully designed to meet partially complementary
requirements of different application domains: archival, browsing,
retrieval, etc. [9]. In the following, we will exclusively deal with
the visual MPEG-7 descriptors in the context of media retrieval.
The visual MPEG-7 descriptors fall in five groups: colour,
texture, shape, motion and others (e.g. face description) and sum
up to 16 basic descriptors. For retrieval applications, a rule for
each descriptor is mandatory that defines how to measure the
similarity of two descriptions. Common rules are distance
functions, like the Euclidean distance and the Mahalanobis
distance. Unfortunately, the MPEG-7 standard does not include
distance measures in the normative part, because it was not
designed to be (and should not exclusively understood to be)
retrieval-specific. However, the MPEG-7 authors give
recommendations, which distance measure to use on a particular
descriptor. These recommendations are based on accurate
knowledge of the descriptors' behaviour and the description
structures.
In the present study a large number of successful distance
measures from different areas (statistics, psychology, medicine,
social and economic sciences, etc.) were implemented and applied
on MPEG-7 data vectors to verify whether or not the
recommended MPEG-7 distance measures are really the best for
any reasonable class of media objects. From the MPEG-7 tests
and the recommendations it does not become clear, how many and
which distance measures have been tested on the visual
descriptors and the MPEG-7 test datasets. The hypothesis is that
analytically derived distance measures may be good in general but
only a quantitative analysis is capable to identify the best distance
measure for a specific feature extraction method.
The paper is organised as follows. Section 2 gives a minimum of
background information on the MPEG-7 descriptors and distance
measurement in visual information retrieval (VIR, see [3], [16]).
Section 3 gives an overview over the implemented distance
measures. Section 4 describes the test setup, including the test
data and the implemented evaluation methods. Finally, Section 5
presents the results per descriptor and over all descriptors.
BACKGROUND
The visual part of the MPEG-7 standard defines several
descriptors. Not all of them are really descriptors in the sense that
they extract properties from visual media. Some of them are just
structures for descriptor aggregation or localisation. The basic
descriptors are Color Layout, Color Structure, Dominant Color,
Scalable Color, Edge Histogram, Homogeneous Texture, Texture
Browsing, Region-based Shape, Contour-based Shape, Camera
Motion, Parametric Motion and Motion Activity.
Other descriptors are based on low-level descriptors or semantic
information: Group-of-Frames/Group-of-Pictures Color (based on
Scalable Color), Shape 3D (based on 3D mesh information),
Motion Trajectory (based on object segmentation) and Face
Recognition (based on face extraction).
Descriptors for spatiotemporal aggregation and localisation are:
Spatial 2D Coordinates, Grid Layout, Region Locator (spatial),
Time Series, Temporal Interpolation (temporal) and
SpatioTemporal Locator (combined). Finally, other structures
exist for colour spaces, colour quantisation and multiple 2D views
of 3D objects.
These additional structures allow combining the basic descriptors
in multiple ways and on different levels. But they do not change
the characteristics of the extracted information. Consequently,
structures for aggregation and localisation were not considered in
the work described in this paper.
2.2 Similarity measurement on visual data
Generally, similarity measurement on visual information aims at
imitating human visual similarity perception. Unfortunately,
human perception is much more complex than any of the existing
similarity models (it includes perception, recognition and
subjectivity).
The common approach in visual information retrieval is
measuring dis-similarity as distance. Both, query object and
candidate object are represented by their corresponding feature
vectors. The distance between these objects is measured by
computing the distance between the two vectors. Consequently,
the process is independent of the employed querying paradigm
(e.g. query by example). The query object may be natural (e.g. a
real object) or artificial (e.g. properties of a group of objects).
Goal of the measurement process is to express a relationship
between the two objects by their distance. Iteration for multiple
candidates allows then to define a partial order over the
candidates and to address those in a (to be defined)
neighbourhood being similar to the query object. At this point, it
has to be mentioned that in a multi-descriptor environment
especially in MPEG-7 we are only half way towards a statement
on similarity. If multiple descriptors are used (e.g. a descriptor
scheme), a rule has to be defined how to combine all distances to
a global value for each object. Still, distance measurement is the
most important first step in similarity measurement.
Obviously, the main task of good distance measures is to
reorganise descriptor space in a way that media objects with the
highest similarity are nearest to the query object. If distance is
defined minimal, the query object is always in the origin of
distance space and similar candidates should form clusters around
the origin that are as large as possible. Consequently, many well
known distance measures are based on geometric assumptions of
descriptor space (e.g. Euclidean distance is based on the metric
axioms). Unfortunately, these measures do not fit ideally with
human similarity perception (e.g. due to human subjectivity). To
overcome this shortage, researchers from different areas have
developed alternative models that are mostly predicate-based
(descriptors are assumed to contain just binary elements, e.g.
Tversky's Feature Contrast Model [17]) and fit better with human
perception. In the following distance measures of both groups of
approaches will be considered.
DISTANCE MEASURES
The distance measures used in this work have been collected from
various areas (Subsection 3.1). Because they work on differently
quantised data, Subsection 3.2 sketches a model for unification on
the basis of quantitative descriptions. Finally, Subsection 3.3
introduces the distance measures as well as their origin and the
idea they implement.
3.1 Sources
Distance measurement is used in many research areas such as
psychology, sociology (e.g. comparing test results), medicine (e.g.
comparing parameters of test persons), economics (e.g. comparing
balance sheet ratios), etc. Naturally, the character of data available
in these areas differs significantly. Essentially, there are two
extreme cases of data vectors (and distance measures): predicate-based
(all vector elements are binary, e.g. {0, 1}) and quantitative
(all vector elements are continuous, e.g. [0, 1]).
Predicates express the existence of properties and represent high-level
information while quantitative values can be used to measure
and mostly represent low-level information. Predicates are often
employed in psychology, sociology and other human-related
sciences and most predicate-based distance measures were
therefore developed in these areas. Descriptions in visual
information retrieval are nearly ever (if they do not integrate
semantic information) quantitative. Consequently, mostly
quantitative distance measures are used in visual information
retrieval.
The goal of this work is to compare the MPEG-7 distance
measures with the most powerful distance measures developed in
other areas. Since MPEG-7 descriptions are purely quantitative
but some of the most sophisticated distance measures are defined
exclusively on predicates, a model is mandatory that allows the
application of predicate-based distance measures on quantitative
data. The model developed for this purpose is presented in the
next section.
3.2 Quantisation model
The goal of the quantisation model is to redefine the set operators
that are usually used in predicate-based distance measures on
continuous data. The first in visual information retrieval to follow
this approach were Santini and Jain, who tried to apply Tversky's
Feature Contrast Model [17] to content-based image retrieval
[12], [13]. They interpreted continuous data as fuzzy predicates
and used fuzzy set operators. Unfortunately, their model suffered
from several shortcomings they described in [12], [13] (for
example, the quantitative model worked only for one specific
version of the original predicate-based measure).
The main idea of the presented quantisation model is that set
operators are replaced by statistical functions. In [5] the authors
could show that this interpretation of set operators is reasonable.
The model offers a solution for the descriptors considered in the
evaluation. It is not specific to one distance measure, but can be
applied to any predicate-based measure. Below, it will be shown
that the model does not only work for predicate data but for
quantitative data as well. Each measure implementing the model
can
be
used
as
a
substitute
for
the
original
predicate-based measure.
Generally, binary properties of two objects (e.g. media objects)
can exist in both objects (denoted as a), in just one (b, c) or in
none of them (d). The operator needed for these relationships are
UNION, MINUS and NOT. In the quantisation model they are
replaced as follows (see [5] for further details).
a = Σ_k s_k,  with s_k = (x_ik + x_jk)/2 if (x_ik + x_jk)/2 ≥ M − ε_1, and s_k = 0 else
b = Σ_k s_k,  with s_k = x_ik − x_jk if x_ik − x_jk ≥ M − ε_1, and s_k = 0 else
c = Σ_k s_k,  with s_k = x_jk − x_ik if x_jk − x_ik ≥ M − ε_1, and s_k = 0 else
d = Σ_k s_k,  with s_k = M − (x_ik + x_jk)/2 if (x_ik + x_jk)/2 ≤ ε_2, and s_k = 0 else
with:
X_i = (x_ik) and X_j = (x_jk) are the two description vectors with elements
x_ik in [0, M], M = x_max − x_min is the range of the (normalised) descriptor
elements, and ε_1, ε_2 in [0, M] are thresholds controlled through a single
parameter p in (0, 1] (introduced below).
a selects properties that are present in both data vectors (X_i, X_j
representing media objects), b and c select properties that are
present in just one of them and d selects properties that are present
in neither of the two data vectors. Every property is selected by
the extent to which it is present (a and d: mean, b and c:
difference) and only if the amount to which it is present exceeds a
certain threshold (depending on the mean and standard deviation
over all elements of descriptor space).
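As a concrete illustration, the following Python sketch implements the four operators as just reconstructed; the vectorised NumPy form and the default arguments are ours, and the threshold convention follows the formulas above, which are themselves a reconstruction rather than reference code from the paper:

```python
import numpy as np

def quantisation_operators(x_i, x_j, M=1.0, eps1=1.0, eps2=1.0):
    """Statistical replacements for the set operators a, b, c, d.

    x_i, x_j : 1-D arrays of descriptor elements in [0, M]
    eps1, eps2 : thresholds in [0, M]; eps -> 0 yields predicate-like
                 behaviour, eps -> M uses every element (continuous case).
    """
    x_i = np.asarray(x_i, dtype=float)
    x_j = np.asarray(x_j, dtype=float)
    mean = (x_i + x_j) / 2.0
    # a: properties present in both vectors, selected by their mean value
    a = np.where(mean >= M - eps1, mean, 0.0).sum()
    # d: properties present in neither vector
    d = np.where(mean <= eps2, M - mean, 0.0).sum()
    # b, c: properties present in only one vector, selected by the difference
    b = np.where(x_i - x_j >= M - eps1, x_i - x_j, 0.0).sum()
    c = np.where(x_j - x_i >= M - eps1, x_j - x_i, 0.0).sum()
    return a, b, c, d

# Predicate data with eps1 = eps2 = 0 reproduces Table 1, e.g.:
print(quantisation_operators([1.0], [0.0], M=1.0, eps1=0.0, eps2=0.0))  # (0.0, 1.0, 0.0, 0.0)
```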
The implementation of these operators is based on one assumption.
It is assumed that vector elements measure on interval scale. That
means, each element expresses that the measured property is
"more or less" present ("0": not at all, "M": fully present). This is
true for most visual descriptors and all MPEG-7 descriptors. A
natural origin as it is assumed here ("0") is not needed.
Introducing p (called discriminance-defining parameter) for the
thresholds ε_1, ε_2
has the positive consequence that a, b, c, d can
then be controlled through a single parameter. p is an additional
criterion for the behaviour of a distance measure and determines
the thresholds used in the operators. It expresses how accurate
data items are present (quantisation) and consequently, how
accurate they should be investigated. p can be set by the user or
automatically. Interesting are the limits:
1. ε_1, ε_2 → M (high p)
In this case, all elements (=properties) are assumed to be
continuous (high quantisation). In consequence, all properties of a
descriptor are used by the operators. Then, the distance measure is
not discriminant for properties.
2.
0
,
0
2
1
p
In this case, all properties are assumed to be predicates. In
consequence, only binary elements (=predicates) are used by the
operators (1-bit quantisation). The distance measure is then highly
discriminant for properties.
Between these limits, a distance measure that uses the
quantisation model is, depending on p, more or less
discriminant for properties. This means, it selects a subset of all
available description vector elements for distance measurement.
For both predicate data and quantitative data it can be shown that
the quantisation model is reasonable. If description vectors consist
of binary elements only, p should be used as follows (for example,
p can easily be set automatically):
ε_1 = ε_2 = 0 (e.g. p chosen minimal).
In this case, a, b, c, d measure like the set operators they replace.
For example, Table 1 shows their behaviour for two one-dimensional
feature vectors X_i and X_j. As can be seen, the
statistical measures work like set operators. Actually, the
quantisation model works accurate on predicate data for any p.
To show that the model is reasonable for quantitative data the
following fact is used. It is easy to show that for predicate data
some quantitative distance measures degenerate to predicate-based
measures. For example, the L_1 metric (Manhattan metric)
degenerates to the Hamming distance (from [9], without weights):
L_1 = Σ_k |x_ik − x_jk| = b + c = Hamming distance
If it can be shown that the quantisation model is able to
reconstruct the quantitative measure from the degenerated
predicate-based measure, the model is obviously able to extend
predicate-based measures to the quantitative domain. This is easy
to illustrate. For purely quantitative feature vectors, p should be
used as follows (again, p can easily be set automatically):
ε_1 = ε_2 = M (p = 1).
Then, a and d become continuous functions:
a = Σ_k s_k with s_k = (x_ik + x_jk)/2 (the condition (x_ik + x_jk)/2 ≥ M − ε_1
is always true for ε_1 = M), and
d = Σ_k s_k with s_k = M − (x_ik + x_jk)/2 (the condition (x_ik + x_jk)/2 ≤ ε_2 = M
is always true).
b and c can be made continuous for the following expressions:
b = Σ_k s_k with s_k = x_ik − x_jk if x_ik ≥ x_jk, and s_k = 0 else,
c = Σ_k s_k with s_k = x_jk − x_ik if x_jk ≥ x_ik, and s_k = 0 else, and
b + c = Σ_k s_k with s_k = |x_ik − x_jk|.
Table 1. Quantisation model on predicate vectors.
X_i  X_j  a  b  c  d
(1) (1) 1 0 0 0
(1) (0) 0 1 0 0
(0) (1) 0 0 1 0
(0) (0) 0 0 0 1
b − c = Σ_k s_k with s_k = x_ik − x_jk, and c − b = Σ_k s_k with s_k = x_jk − x_ik.
This means, for sufficiently high p every predicate-based distance
measure that is either not using b and c or just as b+c, b-c or c-b,
can be transformed into a continuous quantitative distance
measure. For example, the Hamming distance (again, without
weights):
b + c = Σ_k s_k with s_k = |x_ik − x_jk|, i.e. b + c = Σ_k |x_ik − x_jk| = L_1
The quantisation model successfully reconstructs the L_1 metric
and no distance measure-specific modification has to be made to
the model. This demonstrates that the model is reasonable. In the
following it will be used to extend successful predicate-based
distance measures on the quantitative domain.
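This degeneration can also be checked numerically; the following self-contained snippet (our own illustration with arbitrary random vectors) verifies that b + c equals the L_1 distance in the continuous setting ε_1 = ε_2 = M:

```python
import numpy as np

rng = np.random.default_rng(0)
x_i, x_j = rng.random(314), rng.random(314)   # two normalised descriptor vectors
M = 1.0

# Continuous setting (eps1 = eps2 = M): b and c keep every positive difference
b = np.where(x_i >= x_j, x_i - x_j, 0.0).sum()
c = np.where(x_j > x_i, x_j - x_i, 0.0).sum()

l1 = np.abs(x_i - x_j).sum()                  # Manhattan / L1 distance
assert np.isclose(b + c, l1)                  # b + c reproduces the L1 metric
```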
The major advantages of the quantisation model are: (1) it is
application domain independent, (2) the implementation is
straightforward, (3) the model is easy to use and finally, (4) the
new parameter p allows to control the similarity measurement
process in a new way (discriminance on property level).
3.3 Implemented measures
For the evaluation described in this work, next to predicate-based measures
(based on the quantisation model) and quantitative measures, the
distance measures recommended in the MPEG-7 standard were
implemented (altogether 38 different distance measures).
Table 2 summarises those predicate-based measures that
performed best in the evaluation (in sum 20 predicate-based
measures were investigated). For these measures, K is the number
of predicates in the data vectors X
i
and X
j
. In P1, the sum is used
for Tversky's f() (as Tversky himself does in [17]) and ,
are
weights for element b and c. In [5] the author's investigated
Tversky's Feature Contrast Model and found =1, =0 to be the
optimum parameters.
Some of the predicate-based measures are very simple (e.g. P2,
P4) but have been heavily exploited in psychological research.
Pattern difference (P6), a very powerful measure, is used in the
statistics package SPSS for cluster analysis. P7 is a correlation
coefficient for predicates developed by Pearson.
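For illustration, a minimal sketch (ours, not code from the paper) of two of these predicate-based measures follows, assuming the standard forms b·c/K^2 for the pattern difference and the φ-style coefficient for P7, applied to the operator values a, b, c, d:

```python
import math

def pattern_difference(a, b, c, d, K):
    """P6: pattern difference, b*c / K^2 (a dissimilarity value)."""
    return (b * c) / float(K * K)

def pearson_predicate(a, b, c, d):
    """P7: phi-style coefficient, (a*d - b*c) / sqrt((a+b)(a+c)(b+d)(c+d))."""
    denom = math.sqrt((a + b) * (a + c) * (b + d) * (c + d))
    return (a * d - b * c) / denom if denom > 0 else 0.0

# Example on the predicate counts of two hypothetical 10-element vectors
print(pattern_difference(a=4, b=2, c=1, d=3, K=10))   # 0.02
print(pearson_predicate(a=4, b=2, c=1, d=3))          # ~0.41
```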
Table 3 shows the best quantitative distance measures that were
used. Q1 and Q2 are metric-based and were implemented as
representatives for the entire group of Minkowski distances. The
w_i are weights. In Q5, μ_i and σ_i are the mean and standard deviation
for the elements of descriptor X_i. In Q6, m is M/2 (= 0.5). Q3, the
Canberra metric, is a normalised form of Q1. Similarly, Q4,
Clark's divergence coefficient is a normalised version of Q2. Q6 is
a further-developed correlation coefficient that is invariant against
sign changes. This measure is used even though its particular
properties are of minor importance for this application domain.
Finally, Q8 is a measure that takes the differences between
adjacent vector elements into account. This makes it structurally
different from all other measures.
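As a sketch of the quantitative side (again our own illustration, assuming the forms listed in Table 3), the Canberra metric Q3 and the Meehl index Q8 can be computed directly on two descriptor vectors:

```python
import numpy as np

def canberra(x_i, x_j):
    """Q3: Canberra metric, sum over k of |x_ik - x_jk| / (x_ik + x_jk)."""
    x_i, x_j = np.asarray(x_i, float), np.asarray(x_j, float)
    denom = np.where(x_i + x_j > 0, x_i + x_j, 1.0)   # guard the 0/0 terms
    return float((np.abs(x_i - x_j) / denom).sum())

def meehl_index(x_i, x_j):
    """Q8: Meehl index, squared differences of adjacent element differences."""
    diff = np.asarray(x_i, float) - np.asarray(x_j, float)
    return float(((diff[:-1] - diff[1:]) ** 2).sum())

print(canberra([0.2, 0.4, 0.9], [0.1, 0.4, 0.5]))      # ~0.62
print(meehl_index([0.2, 0.4, 0.9], [0.1, 0.4, 0.5]))   # ~0.17
```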
Obviously, one important distance measure is missing. The
Mahalanobis distance was not considered, because different
descriptors would require different covariance matrices and for
some descriptors it is simply impossible to define a covariance
matrix. If the identity matrix was used in this case, the
Mahalanobis distance would degenerate to a Minkowski distance.
Additionally, the recommended MPEG-7 distances were
implemented with the following parameters: In the distance
measure of the Color Layout descriptor all weights were set to "1"
(as in all other implemented measures). In the distance measure of
the Dominant Color descriptor the following parameters were
used: w_1 = 0.7, w_2 = 0.3, α = 1, T_d = 20 (as recommended). In the
Homogeneous Texture descriptor's distance all normalisation values α(k) were set to
"1" and matching was done rotation- and scale-invariant.
Important! Some of the measures presented in this section are
distance measures while others are similarity measures. For the
tests, it is important to notice, that all similarity measures were
inverted to distance measures.
TEST SETUP
Subsection 4.1 describes the descriptors (including parameters)
and the collections (including ground truth information) that were
used in the evaluation. Subsection 4.2 discusses the evaluation
method that was implemented and Subsection 4.3 sketches the test
environment used for the evaluation process.
4.1 Test data
For the evaluation eight MPEG-7 descriptors were used. All
colour descriptors: Color Layout, Color Structure, Dominant
Color, Scalable Color, all texture descriptors: Edge Histogram,
Homogeneous Texture, Texture Browsing and one shape
descriptor: Region-based Shape. Texture Browsing was used even
though the MPEG-7 standard suggests that it is not suitable for
retrieval. The other basic shape descriptor, Contour-based Shape,
was not used, because it produces structurally different
descriptions that cannot be transformed to data vectors with
elements measuring on interval-scales. The motion descriptors
were not used, because they integrate the temporal dimension of
visual media and would only be comparable, if the basic colour,
texture and shape descriptors would be aggregated over time. This
was not done. Finally, no high-level descriptors were used
(Localisation, Face Recognition, etc., see Subsection 2.1),
because to the author's opinion the behaviour of the basic
descriptors on elementary media objects should be evaluated
before conclusions on aggregated structures can be drawn.
Table 2. Predicate-based distance measures.
No.  Measure                                     Comment
P1   a − α·b − β·c                               Feature Contrast Model, Tversky 1977 [17]
P2   a                                           No. of co-occurrences
P3   b + c                                       Hamming distance
P4   a / K                                       Russel 1940 [14]
P5   a / (b + c)                                 Kulczynski 1927 [14]
P6   b·c / K^2                                   Pattern difference [14]
P7   (a·d − b·c) / sqrt((a+b)(a+c)(b+d)(c+d))    Pearson 1926 [11]
The Texture Browsing descriptions had to be transformed from
five bins to an eight bin representation in order that all elements
of the descriptor measure on an interval scale. A Manhattan metric
was used to measure proximity (see [6] for details).
Descriptor extraction was performed using the MPEG-7 reference
implementation. In the extraction process each descriptor was
applied on the entire content of each media object and the
following extraction parameters were used. Colour in Color
Structure was quantised to 32 bins. For Dominant Color colour
space was set to YCrCb, 5-bit default quantisation was used and
the default value for spatial coherency
was used.
Homogeneous
Texture was quantised to 32 components. Scalable Color values
were quantised to sizeof(int)-3 bits and 64 bins were used. Finally,
Texture Browsing was used with five components.
These descriptors were applied on three media collections with
image content: the Brodatz dataset (112 images, 512x512 pixel), a
subset of the Corel dataset (260 images, 460x300 pixel, portrait
and landscape) and a dataset with coats-of-arms images (426
images, 200x200 pixel). Figure 1 shows examples from the three
collections.
Designing appropriate test sets for a visual evaluation is a highly
difficult task (for example, see the TREC video 2002 report [15]).
Of course, for identifying the best distance measure for a
descriptor, it should be tested on an infinite number of media
objects. But this is not the aim of this study. It is just evaluated if
for likely image collections better proximity measures than
those suggested by the MPEG-7 group can be found. Collections
of this relatively small size were used in the evaluation, because
the applied evaluation methods are above a certain minimum size
invariant against collection size and for smaller collections it is
easier to define a high-quality ground truth. Still, the average ratio
of ground truth size to collection size is at least 1:7. Especially, no
collection from the MPEG-7 dataset was used in the evaluation
because the evaluations should show, how well the descriptors
and the recommended distance measures perform on "unknown"
material.
When the descriptor extraction was finished, the resulting XML
descriptions were transformed into a data matrix with 798 lines
(media objects) and 314 columns (descriptor elements). To be
usable with distance measures that do not integrate domain
knowledge, the elements of this data matrix were normalised to
[0, 1].
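A straightforward way to do this is a column-wise min-max scaling; the sketch below is our own illustration and assumes per-column normalisation, which the text does not specify:

```python
import numpy as np

def normalise_columns(data):
    """Map every column of a (media objects x descriptor elements) matrix to [0, 1]."""
    data = np.asarray(data, dtype=float)
    col_min = data.min(axis=0)
    col_span = data.max(axis=0) - col_min
    col_span = np.where(col_span > 0, col_span, 1.0)   # constant columns stay at 0
    return (data - col_min) / col_span

matrix = np.random.rand(798, 314) * 10.0   # stand-in for the 798 x 314 data matrix
normalised = normalise_columns(matrix)
assert normalised.min() >= 0.0 and normalised.max() <= 1.0
```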
For the distance evaluation next to the normalised data matrix
human similarity judgement is needed. In this work, the ground
truth is built of twelve groups of similar images (four for each
dataset). Group membership was rated by humans based on
semantic criterions. Table 4 summarises the twelve groups and the
underlying descriptions. It has to be noticed, that some of these
groups (especially 5, 7 and 10) are much harder to find with low-level
descriptors than others.
4.2 Evaluation method
Usually, retrieval evaluation is performed based on a ground truth
with recall and precision (see, for example, [3], [16]). In multi-descriptor
environments this leads to a problem, because the
resulting recall and precision values are strongly influenced by the
method used to merge the distance values for one media object.
Even though it is nearly impossible to say, how big the influence
of a single distance measure was on the resulting recall and
precision values, this problem has been almost ignored so far.
In Subsection 2.2 it was stated that the major task of a distance
measure is to bring the relevant media objects as close to the
origin (where the query object lies) as possible. Even in a multi-descriptor
environment it is then simple to identify the similar
objects in a large distance space. Consequently, it was decided to
Table 3. Quantitative distance measures.
No.  Measure                                                                     Comment
Q1   Σ_k w_i·|x_ik − x_jk|                                                       City block distance (L_1)
Q2   sqrt(Σ_k w_i·(x_ik − x_jk)^2)                                               Euclidean distance (L_2)
Q3   Σ_k |x_ik − x_jk| / (x_ik + x_jk)                                           Canberra metric, Lance, Williams 1967 [8]
Q4   sqrt((1/K)·Σ_k ((x_ik − x_jk)/(x_ik + x_jk))^2)                             Divergence coefficient, Clark 1952 [1]
Q5   Σ_k (x_ik − μ_i)(x_jk − μ_j) / sqrt(Σ_k (x_ik − μ_i)^2 · Σ_k (x_jk − μ_j)^2)  Correlation coefficient
Q6   2·Σ_k (x_ik − m)(x_jk − m) / (Σ_k (x_ik − m)^2 + Σ_k (x_jk − m)^2)          Cohen 1969 [2]
Q7   Σ_k x_ik·x_jk / sqrt(Σ_k x_ik^2 · Σ_k x_jk^2)                               Angular distance, Gower 1967 [7]
Q8   Σ_{k=1..K−1} ((x_ik − x_jk) − (x_i,k+1 − x_j,k+1))^2                        Meehl Index [10]
Table 4. Ground truth information.
Coll.    No.  Images  Description
Brodatz  1    19      Regular, chequered patterns
Brodatz  2    38      Dark white noise
Brodatz  3    33      Moon-like surfaces
Brodatz  4    35      Water-like surfaces
Corel    5    73      Humans in nature (difficult)
Corel    6    17      Images with snow (mountains, skiing)
Corel    7    76      Animals in nature (difficult)
Corel    8    27      Large coloured flowers
Arms     9    12      Bavarian communal arms
Arms     10   10      All Bavarian arms (difficult)
Arms     11   18      Dark objects / light unsegmented shield
Arms     12   14      Major charges on blue or red shield
use indicators measuring the distribution in distance space of
candidates similar to the query object for this evaluation instead
of recall and precision. Identifying clusters of similar objects
(based on the given ground truth) is relatively easy, because the
resulting distance space for one descriptor and any distance
measure is always one-dimensional.
Clusters are found by
searching from the origin of distance space to the first similar
object, grouping all following similar objects in the cluster,
breaking off the cluster with the first un-similar object and so
forth.
For the evaluation two indicators were defined. The first measures
the average distance of all cluster means to the origin:
(1 / avg_distance) · (1 / no_clusters) · Σ_{i=1..no_clusters} (1 / cluster_size_i) · Σ_{j=1..cluster_size_i} distance_ij
where distance_ij is the distance value of the j-th element in the i-th cluster,
avg_distance = (Σ_{i=1..no_clusters} Σ_{j=1..cluster_size_i} distance_ij) / (Σ_{i=1..no_clusters} cluster_size_i),
no_clusters is the number of found clusters and cluster_size_i is the size of the i-th
cluster. The resulting indicator is normalised by the distribution
characteristics of the distance measure (avg_distance).
Additionally, the standard deviation is used. In the evaluation
process this measure turned out to produce valuable results and to
be relatively robust against parameter p of the quantisation model.
In Subsection 3.2 we noted that p affects the discriminance of a
predicate-based distance measure: The smaller p is set the larger
are the resulting clusters because the quantisation model is then
more discriminant against properties and less elements of the data
matrix are used. This causes a side-effect that is measured by the
second indicator: more and more un-similar objects come out with
exactly the same distance value as similar objects (a problem that
does not exist for large p's) and become indiscernible from similar
objects. Consequently, they are (false) cluster members. This
phenomenon (conceptually similar to the "false negatives"
indicator) was named "cluster pollution" and the indicator
measures the average cluster pollution over all clusters:
cp = (1 / no_clusters) · Σ_{i=1..no_clusters} Σ_{j=1..cluster_size_i} no_doubles_ij
where no_doubles_ij
is the number of indiscernible un-similar
objects associated with the j-th element of cluster i.
Remark: Even though there is a certain influence, it could be
proven in [5] that no significant correlation exists between
parameter p of the quantisation model and cluster pollution.
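A compact sketch of both indicators follows (our own illustration; function and variable names are hypothetical, and the normalisation by avg_distance follows the reconstruction given above):

```python
def first_indicator(clusters, avg_distance):
    """Average normalised distance of the cluster means to the origin.
    clusters: one list of distance values per cluster of similar objects."""
    cluster_means = [sum(c) / len(c) for c in clusters]
    return (sum(cluster_means) / len(cluster_means)) / avg_distance

def cluster_pollution(doubles_per_cluster):
    """Average number of indiscernible un-similar objects per cluster."""
    return sum(sum(c) for c in doubles_per_cluster) / len(doubles_per_cluster)

# Tiny example: two clusters of similar objects and their distance values
clusters = [[0.10, 0.12], [0.20, 0.25, 0.30]]
avg_distance = sum(sum(c) for c in clusters) / sum(len(c) for c in clusters)
print(first_indicator(clusters, avg_distance))   # ~0.93
print(cluster_pollution([[0, 1], [0, 0, 2]]))    # 1.5
```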
4.3 Test environment
As pointed out above, to generate the descriptors, the MPEG-7
reference implementation in version 5.6 was used (provided by
TU Munich). Image processing was done with Adobe Photoshop
and normalisation and all evaluations were done with Perl. The
querying process was performed in the following steps: (1)
random selection of a ground truth group, (2) random selection of
a query object from this group, (3) distance comparison for all
other objects in the dataset, (4) clustering of the resulting distance
space based on the ground truth and finally, (5) evaluation.
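Put together, a single query of this evaluation loop can be sketched as follows (illustrative Python only; distance_fn, descriptors and ground_truth are hypothetical stand-ins, not the Perl tooling actually used):

```python
import random

def run_query(descriptors, ground_truth, distance_fn):
    """One evaluation query, following steps (1)-(5) described above.
    descriptors: dict object_id -> descriptor vector
    ground_truth: list of sets of object ids judged similar
    distance_fn: callable (vector, vector) -> distance
    """
    group = random.choice(ground_truth)                 # (1) pick a ground truth group
    query_id = random.choice(sorted(group))             # (2) pick a query object
    ranking = sorted(                                   # (3) rank all other objects by distance
        (oid for oid in descriptors if oid != query_id),
        key=lambda oid: distance_fn(descriptors[query_id], descriptors[oid]),
    )
    clusters, current = [], []                          # (4) cluster similar objects along the ranking
    for oid in ranking:
        if oid in group:
            current.append(distance_fn(descriptors[query_id], descriptors[oid]))
        elif current:
            clusters.append(current)
            current = []
    if current:
        clusters.append(current)
    return clusters                                     # (5) feed the clusters into the indicators
```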
For each combination of dataset and distance measure 250 queries
were issued and evaluations were aggregated over all datasets and
descriptors. The next section shows the partially surprising
results.
RESULTS
In the results presented below the first indicator from Subsection
4.2 was used to evaluate distance measures. In a first step
parameter p had to be set in a way that all measures are equally
discriminant. Distance measurement is fair if the following
condition holds true for any predicate-based measure d
P
and any
continuous measure d
C
:
(
) ( )
C
P
d
cp
p
d
cp
,
Then, it is guaranteed that predicate-based measures do not create
larger clusters (with a higher number of similar objects) for the
price of higher cluster pollution. In more than 1000 test queries
the optimum value was found to be p=1.
Results are organised as follows: Subsection 5.1 summarises the
Figure 1. Test datasets.
Left: Brodatz dataset, middle: Corel dataset, right: coats-of-arms dataset.
best distance measures per descriptor, Section 5.2 shows the best
overall distance measures and Section 5.3 points out other
interesting results (for example, distance measures that work
particularly good on specific ground truth groups).
5.1 Best measure per descriptor
Figure 2 shows the evaluation results for the first indicator. For
each descriptor the best measure and the performance of the
MPEG-7 recommendation are shown. The results are aggregated
over the tested datasets.
On first sight, it becomes clear that the MPEG-7
recommendations are mostly relatively good but never the best.
For Color Layout the difference between MP7 and the best
measure, the Meehl index (Q8), is just 4% and the MPEG-7
measure has a smaller standard deviation. The reason why the
Meehl index is better may be that this descriptor generates
descriptions with elements that have very similar variance.
Statistical analysis confirmed that (see [6]).
For Color Structure, Edge Histogram, Homogeneous Texture,
Region-based Shape and Scalable Color by far the best measure is
pattern difference (P6). Psychological research on human visual
perception has revealed that in many situation differences between
the query object and a candidate weigh much stronger than
common properties. The pattern difference measure implements
this insight in the most consequent way. In the author's opinion,
the reason why pattern difference performs so extremely well on
many descriptors is due to this fact. Additional advantages of
pattern difference are that it usually has a very low variance and
because it is a predicate-based measure its discriminance (and
cluster structure) can be tuned with parameter p.
The best measure for Dominant Color turned out to be Clark's
Divergence coefficient (Q4). This is a similar measure to pattern
difference on the continuous domain. The Texture Browsing
descriptor is a special problem. In the MPEG-7 standard it is
recommended to use it exclusively for browsing. After testing it
for retrieval on various distance measures the author supports this
opinion. It is very difficult to find a good distance measure for
Texture Browsing. The proposed Manhattan metric, for example,
performs very bad. The best measure is predicate-based (P7). It
works on common properties (a, d) but produces clusters with
very high cluster pollution. For this descriptor the second
indicator is up to eight times higher than for predicate-based
measures on other descriptors.
5.2 Best overall measures
Figure 3 summarises the results over all descriptors and media
collections. The diagram should give an indication on the general
potential of the investigated distance measures for visual
information retrieval.
It can be seen that the best overall measure is a predicate-based
one. The top performance of pattern difference (P6) proves that
the quantisation model is a reasonable method to extend
predicate-based distance measures on the continuous domain. The
second best group of measures are the MPEG-7
recommendations, which have a slightly higher mean but a lower
standard deviation than pattern difference. The third best measure
is the Meehl index (Q8), a measure developed for psychological
applications but because of its characteristic properties tailor-made
for certain (homogeneous) descriptors.
Minkowski metrics are also among the best measures: the average
mean and variance of the Manhattan metric (Q1) and the
Euclidean metric (Q2) are in the range of Q8. Of course, these
measures do not perform particularly well for any of the
descriptors. Remarkably for a predicate-based measure, Tversky's
Feature Contrast Model (P1) is also in the group of very good
measures (even though it is not among the best) that ends with
Q5, the correlation coefficient. The other measures either have a
significantly higher mean or a very large standard deviation.
5.3 Other interesting results
Distance measures that perform in average worse than others may
in certain situations (e.g. on specific content) still perform better.
For Color Layout, for example, Q7 is a very good measure on
colour photos. It performs as good as Q8 and has a lower standard
deviation. For artificial images the pattern difference and the
Hamming distance produce comparable results as well.
If colour information is available in media objects, pattern
difference performs well on Dominant Color (just 20% worse than Q4)
and in case of difficult ground truth (group 5, 7, 10) the Meehl
index is as strong as P6.
Figure 2. Results per measure and descriptor (Color Layout: Q8 vs. MP7,
Color Structure: P6 vs. MP7, Dominant Color: Q4 vs. MP7, Edge Histogram: P6 vs. MP7,
Homogeneous Texture: P6 vs. MP7, Region-based Shape: P6 vs. MP7,
Scalable Color: P6 vs. MP7, Texture Browsing: P7 vs. Q2).
The horizontal axis shows the best measure and the performance of the MPEG-7
recommendation for each descriptor. The vertical axis shows the values for the first indicator (smaller value = better cluster structure).
Shades have the following meaning: black = μ − σ (good cases), black + dark grey = μ (average) and black + dark grey + light grey = μ + σ (bad).
CONCLUSION
The evaluation presented in this paper aims at testing the
recommended distance measures and finding better ones for the
basic visual MPEG-7 descriptors. Eight descriptors were selected,
38 distance measures were implemented, media collections were
created and assessed, performance indicators were defined and
more than 22500 tests were performed. To be able to use
predicate-based distance measures next to quantitative measures a
quantisation model was defined that allows the application of
predicate-based measures on continuous data.
In the evaluation the best overall distance measures for visual
content as extracted by the visual MPEG-7 descriptors turned
out to be the pattern difference measure and the Meehl index (for
homogeneous descriptions). Since these two measures perform
significantly better than the MPEG-7 recommendations they
should be further tested on large collections of image and video
content (e.g. from [15]).
The choice of the right distance function for similarity
measurement depends on the descriptor, the queried media
collection and the semantic level of the user's idea of similarity.
This work offers suitable distance measures for various situations.
In consequence, the distance measures identified as the best will
be implemented in the open MPEG-7 based visual information
retrieval framework VizIR [4].
ACKNOWLEDGEMENTS
The author would like to thank Christian Breiteneder for his
valuable comments and suggestions for improvement. The work
presented in this paper is part of the VizIR project funded by the
Austrian Scientific Research Fund FWF under grant no. P16111.
REFERENCES
[1] Clark, P.S. An extension of the coefficient of divergence for
use with multiple characters. Copeia, 2 (1952), 61-64.
[2] Cohen, J. A profile similarity coefficient invariant over
variable reflection. Psychological Bulletin, 71 (1969), 281-284.
[3] Del Bimbo, A. Visual information retrieval. Morgan
Kaufmann Publishers, San Francisco CA, 1999.
[4] Eidenberger, H., and Breiteneder, C. A framework for visual
information retrieval. In Proceedings Visual Information
Systems Conference (HSinChu Taiwan, March 2002), LNCS
2314, Springer Verlag, 105-116.
[5] Eidenberger, H., and Breiteneder, C. Visual similarity
measurement with the Feature Contrast Model. In
Proceedings SPIE Storage and Retrieval for Media Databases
Conference (Santa Clara CA, January 2003), SPIE Vol.
5021, 64-76.
[6] Eidenberger, H., How good are the visual MPEG-7 features?
In Proceedings SPIE Visual Communications and Image
Processing Conference (Lugano Switzerland, July 2003),
SPIE Vol. 5150, 476-488.
[7] Gower, J.G. Multivariate analysis and multidimensional
geometry. The Statistician, 17 (1967),13-25.
[8] Lance, G.N., and Williams, W.T. Mixed data classificatory
programs. Agglomerative Systems Australian Comp. Journal,
9 (1967), 373-380.
[9] Manjunath, B.S., Ohm, J.R., Vasudevan, V.V., and Yamada,
A. Color and texture descriptors. In Special Issue on MPEG-7
. IEEE Transactions on Circuits and Systems for Video
Technology, 11/6 (June 2001), 703-715.
[10] Meehl, P. E. The problem is epistemology, not statistics:
Replace significance tests by confidence intervals and
quantify accuracy of risky numerical predictions. In Harlow,
L.L., Mulaik, S.A., and Steiger, J.H. (Eds.). What if there
were no significance tests? Erlbaum, Mahwah NJ, 393-425.
[11] Pearson, K. On the coefficients of racial likeness. Biometrica,
18 (1926), 105-117.
[12] Santini, S., and Jain, R. Similarity is a geometer. Multimedia
Tools and Application, 5/3 (1997), 277-306.
[13] Santini, S., and Jain, R. Similarity measures. IEEE
Transactions on Pattern Analysis and Machine Intelligence,
21/9 (September 1999), 871-883.
[14] Sint, P.P. Similarity structures and similarity measures.
Austrian Academy of Sciences Press, Vienna Austria, 1975
(in German).
[15] Smeaton, A.F., and Over, P. The TREC-2002 video track
report. NIST Special Publication SP 500-251 (March 2003),
available from: http://trec.nist.gov/pubs/trec11/papers/
VIDEO.OVER.pdf (last visited: 2003-07-29)
[16] Smeulders, A.W.M., Worring, M., Santini, S., Gupta, A., and
Jain, R. Content-based image retrieval at the end of the early
years. IEEE Transactions on Pattern Analysis and Machine
Intelligence, 22/12 (December 2000), 1349-1380.
[17] Tversky, A. Features of similarity. Psychological Review,
84/4 (July 1977), 327-351.
Figure 3. Overall results (ordered by the first indicator): P6, MP7, Q8, Q1, Q4,
Q2, P2, P4, Q6, Q3, Q7, P1, Q5, P3, P5, P7.
The vertical axis shows the values for the first indicator (smaller value = better
cluster structure). Shades have the following meaning: black = μ − σ, black + dark grey = μ and black + dark grey + light grey = μ + σ.
| Similarity Perception;MPEG-7 descriptors;Distance Measurement;Content-based Image Retrieval;MPEG-7;distance measure;quantisation;Content-based Video Retrieval;Similarity Measurement;visual information retrieval;Visual Information Retrieval;human similarity perception |
73 | Dogs or Robots: Why do Children see them as Robotic Pets rather than Canine Machines? | In the not too distant future Intelligent Creatures (robots, smart devices, smart vehicles, smart buildings , etc) will share the everyday living environment of human beings. It is important then to analyze the attitudes humans are to adopt for interaction with morphologically different devices, based on their appearance and behavior. In particular, these devices will become multi-modal interfaces, with computers or networks of computers, for a large and complex universe of applications. Our results show that children are quickly attached to the word `dog' reflecting a conceptualization that robots that look like dogs (in particular SONY Aibo) are closer to living dogs than they are to other devices. By contrast, adults perceive Aibo as having stronger similarities to machines than to dogs (reflected by definitions of robot). Illustration of the characteristics structured in the definition of robot are insufficient to convince children Aibo is closer to a machine than to a dog. | Introduction
The play R. U. R. (Rossum's Universal Robots), written
by the Czech author Karel Capek, was produced
in London in 1923. The term robot entered the English
language (in Czech the word `robota' means
`heavy labor'). The robot concept remained science
fiction until 1961 when Unimation Inc. installed the
world's first industrial robot in the US. Unimation
Inc. made Australia's first robot, installed in 1974.
The emergence of legged autonomous robots and their
commercial release (as in Honda's Asimo and Sony's
Aibo) contribute to support the hypothesis that mobile
robots will soon become common in our everyday
environments. The commercial release of the Personal
Computer (PC) occurred just a generation ago,
yet now it is a common household item. This forecast
has prompted some studies into the acceptability
and attitudes these artifacts generate among human
beings (Fong, Nourbakhsh & Dautenhahn 2003).
For example, Kahn et al have recently embarked on
a series of studies with children (Kahn Jr., Friedman
, Freier & Severson 2003) and adults (Kahn Jr.,
Friedman & Hagman 2002) investigating matters such
as humans attributing social and ethical stance to
robots like Sony's Aibo. Earlier Bumby and Dautenhahn
(Bumby & Dautenhahn 1999) explored reactions
of children as they interacted with a robot.
Reports have recently appeared in the media of cases
where humans also build emotional attachments to
robots that do not look like animals, similar to those
they have for home pets. An example is the robotic
vacuum cleaners in (Kahney 2003).
While some experts in childhood development argue
that the use of robot-dolls (machines with realism
) in children's environments `cuts off open-ended
imaginative play', there are some studies showing that
intelligent children will still explore beyond the limitations
of the machine. Similar concerns about other
technologies have also been the subject of debate. For
example, the concerns about children reducing their
play with other children, or increasingly aggressive
behavior because of computer games, can be attributed
more to exposure to the type of game than to
the technology of computer games themselves (Lawry,
Upitis, Klawe, Anderson, Inkpen, Ndunda, Hsu, Leroux
& Sedighian 1995). Researchers have found that
children (in particular boys) will entertain and seek
more challenging and interesting computer games
(not necessarily violent games) and that there is no
observable increase in violent behavior or deterioration
in social behavior (Lawry et al. 1995).
Recent studies have focused on the attitudes generated
by Sony's Aibo in humans; we propose here
to explore the differences between Dog (as in living
animal) and Robot (as in lifeless machine assembled
from parts) in the concepts and language formations
in children. Naturally, the smaller the difference, the
more it is likely that humans will attribute animal
characteristics (as high up as rights) to a robot. The
question is, `what makes a small or a large difference?'
If the difference is very small, perhaps humans will
interact with autonomous robots as they do with animals
. We suggest that in today's children's world,
the issue is not confusion between reality and fantasy
(Aylett 2002). To a child, Sony's Aibo is not a
fantasy but reality.
Identification of what makes children perceive a
robot as a dog (an animal) or as a robot is important,
especially if one wants to design robots stressing the
difference or diluting it. Our research reveals that the
look and feel of Sony's Aibo and its body shape go
a long way into its acceptability as a dog. Its playful
behavior, tail wagging, legged walk, recovery from
falls, sitting and hand shaking are absorbed into the
child's mind. Later, illustration of its robotic features
are repeatedly insufficient to fully convince the children
that this is an artifact (and not a being with
feelings).
Unless Aibo does something unacceptable for a
dog (like speak with a human voice), it remains essentially
a dog. Our findings that human speech in Aibo
reduces its dog-ness and increases its robot-ness may
be attributed to the `uncanny valley' (Scheeff, Pinto,
Rahardja, Snibbe & Tow 2000). Although we are
not measuring emotional response, we have observed
dissatisfaction with Aibo as a dog, since clearly it is
only humans that talk (although children accept talking
toys and talking animals in fantasy or animated
movies).
The rest of this paper is organized as follows. Section
2 will describe our research methods. Section 3
will elaborate on the findings. Section 4 will present
conclusions and final remarks. Our aim is to explore
and contrast the properties currently accepted in the
definition of `mobile autonomous robot'. The International
Federation of Robotics (IFR) and the Australian
Robot Association follow the ISO standard
vocabulary (ISO 8373) to describe `manipulating industrial
robots operated in a manufacturing environment'. A robot
has three essential characteristics:
1. It possesses some form of mobility (formally, a
robot must possess at least three programmable
axes of motion).
2. It can be programmed to accomplish a large variety
of tasks.
3. After being programmed, it operates automatically.
Mobile robots can move away from a fixed location
and come in two varieties: tethered and autonomous;
a tethered robot may have its power supply or its
control unit overboard, possibly relying on a desktop
computer and a wall outlet and a long cord. Autonomous
mobile robots bring everything along (control
unit, body, actuators, sensors and power supply).
The control unit is typically a computer and software.
Our research attempts to find out if children do indeed
notice these properties or fixate more on the
form and behavior of the artifact.
The methods
This research was performed by a series of demonstrations
of Aibo and other robots, toys and models, in particular using Lego Mindstorms (a trademark of the LEGO group) constructions (Knudsen 1999), remote control toy cars and autonomous battery toys.
The demonstrations were conducted with preschool children as well as children from the first 4 years of primary school, across 3 schools, two childcare centers, and a museum in the urban area of Brisbane, Australia. Table 1 summarizes the presentations and the age groups of the children. Whenever consent was given, a video and/or audio recording of the session was made. Alternatively, a secretary took notes of the statements made by the children. Sessions lasted between 30 minutes and approximately one hour.
The sessions consisted of five stages.
1. Establishing the language. In the first minutes of
the session, children are asked to describe what
they see. The goal of this stage is to obtain a
vocabulary from the children to discuss the artifacts
that will be presented later.
2. Demonstration of Aibo. In the next 5 to 7 minutes
, a demonstration of Aibo is performed with
one black model without a memory stick, so the
default simple `dog-like' behaviors are in place.
3. Demonstration of the concept of robot. This stage
illustrates the main features that are common in
accepted definitions of a robot. It also ensures
that the children observe that Sony's Aibo shares
these properties with robots.
4. Investigate animal attributes on Aibo. This stage
questions the children for the existence of animal
properties on Sony's Aibo and the justification
for their belief.
5. Challenge Aibo's animal attributes with the other
artifacts used in the session. Children are asked
to confirm or justify that Aibo is a robot. Attempts
are made to convince them of the artificial
nature of Aibo by showing the same property
in an artifact accepted as lifeless and to compel
them to decide on one side of the Dog vs Robot
debate (or generate a new term to describe it).
The initial phase starts with the projection of
a video from RoboCup-2002 legged league (Veloso,
Uther, Fujita, Asada & Kitano 1998) (the video is the
match between the University of Newcastle and Team
Sweden). Presentations at the Powerhouse Museum
in Brisbane consisted of a live match with 8 dogs programmed
as the Mi-PAL Team Griffith participation
in RoboCup-2003. After two minutes the children are
asked to describe what they see in the video. In the
video, human manipulators are visible, which contributes
to the realization that these are real things
and not an animation film. Children are requested to
indicate what the `dogs' are doing (if that is the word they suggested; if they suggest `puppies' then `puppies' is used). That is, we use the same words the children
themselves have introduced to describe what they see.
Children are then asked to justify why it is a game
of `soccer/rugby' (or whatever word was the most
common or immediate reply).
The phase finishes by bringing an Aibo that is
switched off, placing it on the ground, turning it on,
and waiting. Since Aibo requires some seconds to
`boot' we observe children's reactions and comments.
This phase is obviously different for blind children. It
is replaced by allowing children to explore the robot
with their hands. Blind children still find and recognize
legs, paws, ears, head and tail because of shape,
texture, malleability and movement.
We then proceed to phase two where we illustrate
the default behavior of the Aibo, which includes the
following interactions.
A couple of fast strokes on its back starts it on
a walk while it makes the sounds of a marching
tune.
Hard strokes on its head produce sounds and a
red light on its head LEDs.
Soft strokes on its head produce sounds and a
green light on its LEDs.
Scratching under its chin produces another set of
sounds and lights.
Placing it on the floor on its side produces some
sounds, and then Aibo performs a routine to
stand back on its four legs. After getting up,
Aibo shakes, wagging his tail and making other
sounds.
Presenting a pink ball produces sounds and the
LED on its tail to go pink.
School                             Level                 Children's age   Group size
Boronia Childcare Center           pre-school            4-5              10
Carole Park State School           pre-school            4-5              17
Holland Park State School          3rd year primary      7-8              25
Holland Park State School          1st year primary      5-6              24
Camp Hill State School             1st year primary      5-6              10
Camp Hill State School             1st year primary      5-6              12
Camp Hill State School             1st year primary      5-6              11
Carole Park State School           1st year primary      5-6              22
Carole Park State School           2nd year primary      6-7              24
Carole Park State School           2nd year primary      6-7              28
Carole Park State School           3rd year primary      7-8              26
Carole Park State School           4th year primary      8-9              20
Powerhouse Museum                  pre-school            4-5              10
Narbethong State Special School    pre-school (blind)    4-5              3
Table 1: Presentations conducted and age groups.
Figure 1: A 4-legged walking toy with visible battery,
motor and gears. A flexible tail resembles a dog tail.
Other behaviors that Aibo produces are not directly
triggered by the manipulator. These include
Aibo sitting down, Aibo lying on his stomach
and waving all fours as a synchronized dance,
Aibo waving one leg, Aibo moving his head from
side to side and flapping his ears.
Children are then invited to interact with Aibo directly
. In particular, to show the pink ball, to produce
the green lights or invite him to walk. They are also
invited to explain what Aibo is doing in their own
words. There are a series of questions that the presenter
goes through as the illustration of behaviors is
performed. These questions are as follows:
What is Aibo doing now?
Is he happy or is he sad?
Does he like to be touched like this?
Do you think he can get up by himself?
At the completion of this phase, Aibo is turned off
and focus is transferred to other examples of robotics.
Because the commonly accepted definitions of mobile
robots includes that they have their own control unit,
phase three consists of the following:
A presentation of a 4-legged toy with a tail (made
of a spring) that has a visible battery, motor and
gears (Figure 1). It is illustrated that this toy
needs a battery to operate it and that it has
an on-off-reverse button. Children are asked to
carry out the task of setting it off or stopping
it by taking the battery out. This illustrates
that mobile autonomous robots require power
and carry their source of power.
Figure 2: A model of a humanoid robot.
Figure 3: Remote control car to contrast with the
notion of autonomous control.
A presentation of a model of a humanoid robot
(Figure 2). Although it looks like a robot, it can
be seen that it has no motors, no batteries and
essentially does nothing.
A presentation of a remote control car (radio con-trolled
) (Figure 3). This car is also shown to
have batteries on board but all behavior is de-rived
from the actions on the two-lever remote
control. The first lever produces forward or reverse
motion on the back wheels and the second
lever produces left/right turns on the front
wheels. This illustrates the notion of control (remote
and human).
A Lego Mindstorm construction extremely similar
to `Hank the Bumper Tank' (Knudsen 1999)
(Figure 4). This robot is shown to have a behavior
that allows it to move around and steer away
from obstacles it bumps into (the program is very
similar to the one suggested in (Knudsen 1999,
Chapter 2)). As part of the interactive nature
of the presentation, the children are asked to act
as obstacles for `adapted' Hank. The presenter points out the sensors behind the robot's bumper, and illustrates that disconnecting these sensors makes it `unaware' of obstacles.
Figure 4: A Mindstorm construction with touch and light sensor.
Figure 5: A Mindstorm construction with touch and light sensor, that acts on its environment with a mechanical arm, and plays sounds.
We also added to Hank a program that used a
light sensor to monitor the color of the ground
beneath it.
This program was similar to the
obstacle avoidance previously mentioned, but
rather than avoid objects it would move away
from dark areas. We presented this behavior as
`being afraid of the dark'. By switching between
these modes, we illustrate that the behavior of
the robot changes with the chosen program (a sketch of such a program is given after this list).
Using Lego's ROBO-Lab (a graphical programming
application) to build a very simple program
that makes Hank spin in a circle for four seconds,
we show the children its programmable nature.
The children are taken through the process of
building the program, transferring it onto Hank
via an infrared interface and finally running it.
When the program is running, the children are
encouraged to count along, thus verifying that
the program is indeed the one just built. This is
repeated for at least two other timings (around
10 to 20 seconds).
A Lego construction extremely similar to Minerva
(Knudsen 1999, Chapter 6) was presented
next (Figure 5). The components were shown
to be the same as Hank's and the Lego RCX
is labeled as the `computer control'. A program
similar to the one suggested (Knudsen 1999) produces
the behavior illustrated to the children.
Minerva moves around a white floor until it finds
a black object, uses a robotic arm to pick it up,
then turns around and brings it to another position
close to where it started. It then releases
the object and plays a tune. The presenter ensured that the children observed that Minerva perceives its environment and can act to change it (thus the notion of actuator is illustrated).
A series of pictures (or videos) of autonomous
robots were shown to the children. These images
demonstrate that robots come in all sorts
of shapes and sizes. Among these are pictures of
more Aibos, Honda ASIMO, the Sony humanoid
SDX, MINERVA (Thrun, Bennewitz, Burgard,
Cremers, Dellaert, Fox, Hahnel, Rosenberg, Roy,
Schulte & Schulz 1999) and Kismet (Brooks
2002). It was pointed out that robots can produce
smiles, walk, and be as small as a cockroach.
Pictures of experimental robots were shown to
display the wires, gears and components inside
their packaging.
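For readers interested in the kind of program involved, the sketch below is an illustrative Python-style approximation of the two Hank behaviors described earlier in this list (obstacle backing and `being afraid of the dark'); the sensor and motor calls are hypothetical placeholders, not actual Mindstorms API functions.

```python
# Illustrative sketch only: approximates the demonstrated Hank behaviors.
# read_bumper(), read_light(), drive() and back_and_turn() are hypothetical
# robot-API placeholders; DARK_THRESHOLD is an assumed sensor reading.

DARK_THRESHOLD = 35  # below this light-sensor value the floor is treated as "dark"

def run_behavior(mode, read_bumper, read_light, drive, back_and_turn):
    while True:
        if mode == "avoid_obstacles" and read_bumper():
            back_and_turn()          # bumped into something: back up and steer away
        elif mode == "afraid_of_dark" and read_light() < DARK_THRESHOLD:
            back_and_turn()          # floor too dark: move away from the dark area
        else:
            drive()                  # otherwise keep moving forward
```

Switching the mode argument corresponds to downloading a different program onto the robot, which is the point illustrated to the children.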
Aibo was then brought back as the presenter repeated
the main concepts, namely:
Aibo requires power and carries a battery. Aibo is
turned on and off. Also, it is shown that Aibo's
behaviors are interrupted and stopped if the battery
is removed.
Aibo has motors. The gears on Aibo's joints, and
wires near its neck are pointed out to the children
.
Aibo has sensors. The strokes on head and sensitivity
to the pink ball are illustrated once more.
Using another pink object (perhaps a piece of
clothing, or the memory stick of Aibo), we show
that the behavior is triggered by the color being
noticed by a sensor and not Aibo understanding
the concept of a ball.
Aibo has actuators and a control program. We install
a memory stick and re-start Aibo. With the
new program it kicks a ball as in the RoboCup
video.
We illustrate Aibo's behavior changes
with different memory sticks.
Once this is completed the next phase commences
by the presenter asking one of the following questions.
1. Does Aibo have feelings?
2. Where does Aibo get energy from?
3. Will Aibo have babies?
Responses of several children were collected. We expected that these questions would have distinct answers depending on whether we were referring to a robot or a `living' dog.
We then passed to the final stage. Each time a
child made a response that seemed to indicate animal
essence or animal agency in the Aibo, we chal-lenged
the response. For example, if a child indicated
that Aibo had feelings, we next asked what the child
thinks happens to the feelings when Aibo is turned
off. We found children would continue to support
their point of view. Following the previous example,
many children followed the path that Aibo was just
asleep when turned off. The challenge continued as
the presenter requested children to explain what sort
of feelings Aibo has or if the feelings fade when the
battery runs out. Also, the presenter checked if the
other artifacts, shown before, have feelings and asked
the children to explain why the others do or do not
have those or other feelings.
The sequence of challenges for the 3 questions
above were as follows.
1. Does Aibo have feelings?
What happens to Aibo's feelings when he is
turned off?
Figure 6: The demonstrator with a class of grade 2
children and 3 of the objects: Aibo, Hank and the
4-legged walking toy.
What happens to his feelings when the battery
runs out?
What happens to his feelings if we remove/change/replace the memory stick and Aibo's personality changes?
What happens to his feelings if Aibo is broken
? Will the feelings come back if we glue
him?
What feelings do you know Aibo has? How
do you know he is happy/sad?
Is it possible to pretend to be happy but not
be happy? Do you think Aibo is happy or
just pretending?
2. Where does Aibo get his energy from?
Where do you get your energy from?
Where do the other artifacts get their energy
from?
Does Aibo work without a battery? Do the
others work without a battery?
What do you think Aibo eats/drinks? Do
you think he needs to visit the toilet?
3. Will Aibo have babies?
Is Aibo a baby dog? How do you know?
How will Aibo look after (care for) the babies
?
Does Aibo need to charge/replace the battery
of the babies?
These paths of questioning were not all developed
ahead of the first presentation. They evolved from
one presentation to the next. Their length reflects
the resistance of the children to change their opinion,
even if all other artifacts have opposite responses to
these questions. That is, for each question, before we
progressed to the next, we confirmed that the children
sustained the notion that a difference remains
between Aibo and the other artifacts. For example,
in the last question sequence, children would start by
confirming that none of the other artifacts can have
babies while Aibo can. When progressing to `Is Aibo
a baby?' and contrasting this with `Is Hank a baby?',
most children realized that Aibo is really like Hank
and cannot have babies.
The findings
After an analysis of the transcript of our videos and
notes we summarize the following findings. It is remarkable
that when we queried the children for their
first impressions of the video we obtained the following
results. To the question `What do you see?' all
sessions had children responding that they saw `dogs'.
This is surprising for two main reasons: firstly the
video shows the robots playing soccer, a behavior not
commonly attributed to dogs. And secondly, the children
would have anticipated seeing robots through
prior conversations with parents and teachers. This
may explain why a few children claimed that the adults in the video were robots.
As the age of the pupils involved in the study increased, we noticed that the tendency to regard the robots as `dogs' decreased. The more mature respondents were more likely to label the robots as `robots' or `robotic dogs'. One pupil also gave the more generic answer of `animals', and another thought that they were `cats'. Interestingly, on a few occasions children referred to the humans in the video as the robots. We believe this is due to their anticipation of seeing robots and perhaps the media culture of humanoid robots.
To the question, `What are they doing?' most children
identified the activity as a game of `soccer'. This
is surprising, since the RoboCup has barriers around
the pitch that make the game more similar to ice-hockey
, and although the robots are legged, they do
not kick the ball with one leg. All robots in this
video kick the ball with two legs simultaneously or
head-butt the ball. The ball is also bright orange,
clearly not a soccer ball. Another point is that although soccer is played in Australia, it is not the most commonly played sport there.
Other suggestions included, `they're fighting',
`playing hockey', and one child thought he was watching
a game of tennis.
Justifications for `why is it a game of soccer/football?' included a rounded ball on the ground, goals, two teams, referees and goalies.
When initially presenting the Aibo to the children,
rather than give it the label `it', we found that they
would usually use `him' or `her'. Once again this was
more pronounced with younger subjects. As the presenter
went on to explain the attributes of the Aibo
and show its operation, the children, while probing with questions, would begin to lose the gender label.
The children generally were of the opinion that the
Aibo did have emotions, with a couple of them claiming
that this was so because it had a `mind'. This
opinion was seemingly an accepted one with many
children declaring at certain stages of the proceedings
that it was either happy or sad.
Upon the exhibition of other robots and robot-like
artifacts, the general consensus was that the Aibo did
in fact meet the criteria for being a robot. However,
the most common term used to describe it was that
of `robotic dog', where dog is the noun. This emphasis
on the dog nature of the robot suggests that the
subjects were still willing to consider it animal-like.
The youngest group, however, needed the most
convincing. They insisted that the Aibo was a dog,
even after repeated demonstrations of its robotic nature
, with the presenter even stating in no uncertain
terms that it was a robot. They did come around
eventually, with one child using the `robotic dog' description
, and his peers following suit.
We briefly describe some reactions to the other objects. Although initially enthusiastic, children were
quickly disappointed by the model of the humanoid;
mainly its inaction made it uninteresting. One child
said, "it's just a toy, not a robot". The 4-legged walking
toy caused some laughs because it bumps into
things, but children realized rapidly that it did not
offer any `interesting' interaction besides turning it on
and off (potentially reversing the direction). The remote
control car was appealing and children wanted
to play with it even after the presentation. It was
clear to them they were controlling it.
Hank did cause surprise and children wanted to continue playing
with it, or asked about how to program it. Children
wanted to interact with it and explored different
obstacles for its obstacle-backing behavior. On two
occasions we witnessed children convinced that Hank
also had feelings because it was "afraid of the dark".
The mechanical-arm robot caused amazement. We
believe this greater surprise was because children familiar
with Lego do not expect the action of a mechanical
arm lifting an object.
We also performed a variation in our initial approach
to confirm some of these findings. We approached
a different grade 6 class (12 year-olds) that
had been already working with Mindstorm robots and
had done some research assignments on the Internet
and in the library on topics such as `What is a robot?'.
We did a presentation in which the objects were not
necessarily the focus, but the properties of a robot
were the focus. We also demonstrated different applications
of robotics, like using Miranda to assist a blind
person to read a Web page. The method for collecting
the children's attitudes was a questionnaire of 25
questions asking children to choose between two positions
and to give their reasons for such decisions. We
invited them to reflect on their responses, so they were
asked to answer the questions over a day at school and
at home. The results of 23 answered questionnaires
confirmed that a dog-looking robot rapidly acquires
animalistic properties and values in the minds of children
. In particular, 75% of the children confirmed
that Miranda should be called a `robotic dog' rather
than a `dog-looking robot'. Note that the preferred
noun is dog over robot. The reasons provided in the
questionnaire are illustrative of their thinking: "It has
more dog features than robot features", "Miranda has
characteristics a dog has", "Kinda looks like a real
dog" "It is an automatic dog" and "Just doesn't look
like a dog, she has a dog personality". And on the
question "Does Miranda have feelings?" again 75%
responded positively. Some of the reasons were "She
just isn't a robot. She's almost a real dog", "She can
be happy, unhappy", If you hit her hard, she would
make a noise, but she felt it". Note that in this presentation
we actually changed programs several times,
radically changing the behaviors and personality of
Miranda. Also, real dogs do not talk, but our programs
had a female voice for instructions to kick a
ball and a male voice reading Web pages. Only one
child classified Miranda as a robot because dogs do
not sing.
Discussion and Final Remarks
The blurring of the concept of robotic pet or canine
machine is of interest to us because of the direct applications
of autonomous mobile robots in helping people
. In particular, we foresee that people with disabilities
, the elderly and other groups in need of assistance
, are the first humans that will benefit from
autonomous mobile robots. Naturally, attitudes, acceptability and adequate expectations must be matched for effective human-computer interaction. If the person
expects smarter behavior of the robot (things like
gesture/voice recognition) and the technology does
not deliver, then rather than assisting, we will frustrate
the person. It is also important that anyone who
encounters a person assisted by a robot approaches
with attitudes and gestures that allow the robotic
assistant to facilitate the approach. The main motivation
behind this research is a related project on
using Aibo to assist blind people. While it may seem
straightforward that a robotic assistant for the vision
impaired person should be shaped as a dog, this is
not so. Even with guide dogs, other humans find it
difficult to approach and assist a blind person. Humans
expect a strong bond and loyalty of the animal
to its owner, fearing that dogs may misinterpret help
as interfering with the bond, causing them to react violently.
Our findings agree with those of others (Kahn Jr.
et al. 2002) in that there is a progression of attributes
that humans ascribe to robots like Aibo. This progression
starts from Essence, and advances to Agency,
Social Standing and Moral Standing. Our findings
are that Aibo fulfills biological animistic underpinnings
(children refer to its tail, legs, ears and behaviors
in the same way as for living dogs). It also fulfills
Agency properties (children attribute intentions, feelings
, emotional states, wishes, desires, goals).
We left aside social standing in our methodology,
but strongly suspect that children attribute an emotional
connection and companionship to Aibo. We
observed a clear preference among children for `Do
you want to pat the dog/puppy?' over `Do you want
to touch the robot?'. Many children made unsolicited
comments about how similar it was playing with Aibo
to playing with their dog at home. Similarly, we refrained
from exploring children's attribution of moral
standing to Aibo (for example, should Aibo be punished
for doing something wrong). Nevertheless, we
received unsolicited suggestions that `leaving Aibo
alone or not playing with him would make him sad'
and that `batteries should always be charged, which
may mean more responsibility than for a living dog'.
These types of comments do attribute some rights to
Aibo and a sense that it also deserves some respect.
Our observations indicate that Essence and
Agency are maintained in the child's beliefs even in
the presence and practical illustration of other machines
to which they will not typically attach such
biological or animistic properties, nor psychological
characteristics (although Bob the Builder's cars and
machines talk). In fact, we witnessed arguments and
debates among the children which turned the balance
the other way around, some managing to convince
others that Hank had feelings like `being afraid of the
dark, because afraid is a feeling'.
Also, we found observations that concur with the
writing of anthropologist S. Turkle (Turkle 1999). In
particular, although we did not intend to observe
adults, we witnessed parents and teachers attempting
to convince the children that Aibo was a machine
and not a dog. Some child-carers seem to interpret
our experiment as a lecture on the living versus
the non-living. We believe this reflected some of
Turkle's conclusions about the `thinking about aliveness', with older people interpreting machines and
computers through mechanistic/physical interpretations
while the newer generation interprets beings in
computer games and robots as `sort of alive'. Our
best example of this was witnessing a parent selecting
a particular physical argument to convince her
5-year-old of `the clear difference' as to why Aibo is not a dog. This also pointed out a difference between Aibo
and dogs that we had not observed but that the adult
believed made "the difference". We attempt to illustrate
it with Figure 7. Aibo has one less joint in the
back leg than a dog (the dog, as shown in Figure 7(a),
has hip (1), knee (2), ankle (3) and toes (4)). This
is one degree of freedom less and also the toes bend
back in the dog, while they do not on Aibo. Note
that if we were to choose a physical argument it is
perhaps more obvious that Aibo does not have two
eyes or does not have a wet nose. The point is that
a basic minimum of physical structure is enough to
engage children in a psychological/conceptual interpretation
that then is hard to remove on the basis of
physical evidence.
Figure 7: A dog (a) has one more degree of freedom per leg than Aibo (b) and has more movement in the toes than Aibo.
We believe our results indicate that children are quickly attached to the notion that `robotic dogs' are
closer to living dogs. Although we would not go as far
as S. Turkle to suggest that `living' has a new meaning
for this generation of children, we suggest that
they will see them as robotic pets more than canine
machines. We expect, therefore, that in the future,
humans will adopt more of them as an interface for
human-computer interaction.
Prof. B. Shneiderman is probably the world's leading
authority in Human-Computer Interaction. He
has repeatedly been outspoken about reducing `machine intelligence' and `software agents' for building
computers that are more effective tools (Beardsley
1999). At first, our research seems to contradict some
of his ideas; but, interaction with a robot is interaction
with a computer and we agree that it allows
for direct manipulation, even more realistic and perhaps
more meaningful than on the computer screen.
Also, it is now clear that domestic robots will soon
be around us and computers will not be restricted to
output devices like monitors, nor will computers be
confined to fixed locations. Third, we argue that studies
such as ours advance the possibilities of having a
`controllable, consistent and predictable interaction'
with a robot. Thus, we share the vision of interaction
facilitated by proper design. Finally, our aim is
interaction with people who are blind. In such a case,
visualization (the coloring of pixels in a monitor) for
`insight' cannot be used. Shneiderman also agrees on
this point. We argue that properly designed robots
will offer a multi-modal interface where insight is communicated by embodiment and movement as well as
sound.
Other papers in the literature confirm that people
may develop strong attachments, and even affectionate
relationships with artificial information systems.
Those studies involve human adults on one side and
rather simple emulations of human intelligence on the other. In such cases, the interface has been rather
simple (or at least not multi-modal), like through a
phone conversation. It is interesting that this may
have both positive and negative outcomes. For example
, as reported in the case of a `Health Behavior Advisor System' (Kaplan, Farzanfar & Friedman 1999), some patients felt motivated to follow a healthier lifestyle, while others found it inflicted a sense of guilt
that did not motivate healthier habits. We believe
that understanding people's expectations for robots
is important since these expectations will define the
context for the interactions that may result in effective
use of robotic technology. An example is the potential
attribution of moral standing to robots. This
could eventually regard the robot (and not its manufacturer
) as responsible for its actions. Certainly, this
would have many implications for our society.
Acknowledgments
The authors wish to thank the anonymous referees for
the constructive feedback provided in their reviews.
This work was supported by a Griffith University Research
Grant as part of the project "Robots to guide
the blind - the application of physical collaborative
autonomous intelligent agents".
References
Aylett, B. (2002), ROBOTS -- Bringing intelligent
machines to life?, ABC Books, Sydney NSW,
Australia.
Beardsley, T. (1999), `Humans unite!', Scientific
American March, 35-36. Profile Column.
Brooks, R. (2002), `Humanoid robots', Communications
of the ACM 45(3), 33-38.
Bumby, K. & Dautenhahn, K. (1999), Investigating
children's attitudes towards robots: A case
study, in `Proceedings of the Third Cognitive
Technology Conference, CT'99', M.I.N.D. Lab,
Michigan State University, East Lansing, MI.,
pp. 391-410.
Fong, T., Nourbakhsh, I. & Dautenhahn, K. (2003),
`A survey of socially interactive robots', Robotics
and Autonomous Systems 42, 235-243.
Kahn Jr., P., Friedman, B. & Hagman, J. (2002), I
care about him as a pal: Conceptions of robotic
pets in online Aibo discussion forum, in `Proceedings
of CHI, Interactive Poster: Fun changing
the world, changing ourselves', pp. 632-633.
Kahn Jr., P. J., Friedman, B., Freier, N. & Severson,
R. (2003), Coding manual for children's interactions
with Aibo, the robotic dog -- the preschool
study, Technical Report UW CSE 03-04-03, Department
of Computer Science and Engineering,
University of Washington, Seattle, US.
Kahney, L. (2003), `The new pet craze: Robovacs', Wired Magazine, June 16th; visited September 10th, 2003, www.wired.com/news/technology/0,1282,59249,00.html.
Kaplan, B., Farzanfar, R. & Friedman, R. (1999), Ethnographic interviews to elicit patients' reactions to an intelligent interactive telephone health behavior advisor system, in N. M. Lorenzi, ed., `Proceedings: AMIA Symposium', American Medical Informatics Association, Bethesda, www.amia.org/pubs/symposia/D005604.PDF.
Knudsen, J. (1999), The Unofficial Guide to LEGO
MINDSTORM Robots, O'Reilly, Sebastopol,
CA.
Lawry, J., Upitis, R., Klawe, M., Anderson, A.,
Inkpen, K., Ndunda, M., Hsu, D., Leroux, S.
& Sedighian, K. (1995), `Exploring common conceptions
about boys and electronic games', Journal
of Computer in Math and Science Teaching
14(4), 439-459.
Scheeff, M., Pinto, J., Rahardja, K., Snibbe, S. &
Tow, R. (2000), Experiences with Sparky: A social
robot, in `Proceedings of the Workshop on
Interactive Robot Entertainment'.
Thrun, S., Bennewitz, M., Burgard, W., Cremers, A.,
Dellaert, F., Fox, D., Hahnel, D., Rosenberg, C.,
Roy, N., Schulte, J. & Schulz, D. (1999), MINERVA
: A tour-guide robot that learns, in `KI Kunstliche Intelligenz', pp. 14-26.
Turkle, S. (1999), What are we thinking about when
we are thinking about computers?, in M. Biagioli, ed., `The Science Studies Reader', Routledge,
New York.
Veloso, M., Uther, W., Fujita, M., Asada, M. &
Kitano, H. (1998), Playing soccer with legged
robots, in `Proceedings of IROS-98, Intelligent
Robots and Systems Conference', Victoria,
Canada.
74 | Downloading Textual Hidden Web Content Through Keyword Queries | An ever-increasing amount of information on the Web today is available only through search interfaces: the users have to type in a set of keywords in a search form in order to access the pages from certain Web sites. These pages are often referred to as the Hidden Web or the Deep Web. Since there are no static links to the Hidden Web pages, search engines cannot discover and index such pages and thus do not return them in the results. However, according to recent studies, the content provided by many Hidden Web sites is often of very high quality and can be extremely valuable to many users. In this paper, we study how we can build an effective Hidden Web crawler that can autonomously discover and download pages from the Hidden Web. Since the only "entry point" to a Hidden Web site is a query interface, the main challenge that a Hidden Web crawler has to face is how to automatically generate meaningful queries to issue to the site. Here, we provide a theoretical framework to investigate the query generation problem for the Hidden Web and we propose effective policies for generating queries automatically. Our policies proceed iteratively, issuing a different query in every iteration . We experimentally evaluate the effectiveness of these policies on 4 real Hidden Web sites and our results are very promising. For instance, in one experiment, one of our policies downloaded more than 90% of a Hidden Web site (that contains 14 million documents ) after issuing fewer than 100 queries. | INTRODUCTION
Recent studies show that a significant fraction of Web content
cannot be reached by following links [7, 12]. In particular, a large
part of the Web is "hidden" behind search forms and is reachable
only when users type in a set of keywords, or queries, to the forms.
These pages are often referred to as the Hidden Web [17] or the
Deep Web [7], because search engines typically cannot index the
pages and do not return them in their results (thus, the pages are
essentially "hidden" from a typical Web user).
According to many studies, the size of the Hidden Web increases
rapidly as more organizations put their valuable content online
through an easy-to-use Web interface [7]. In [12], Chang et al.
estimate that well over 100,000 Hidden-Web sites currently exist
on the Web. Moreover, the content provided by many Hidden-Web
sites is often of very high quality and can be extremely valuable
to many users [7]. For example, PubMed hosts many high-quality
papers on medical research that were selected from careful peer-review
processes, while the site of the US Patent and Trademarks
Office (http://www.uspto.gov) makes existing patent documents available, helping potential
inventors examine "prior art."
In this paper, we study how we can build a Hidden-Web crawler (crawlers are the programs that traverse the Web automatically and download pages for search engines) that can automatically download pages from the Hidden Web, so
that search engines can index them. Conventional crawlers rely
on the hyperlinks on the Web to discover pages, so current search
engines cannot index the Hidden-Web pages (due to the lack of
links). We believe that an effective Hidden-Web crawler can have
a tremendous impact on how users search information on the Web:
Tapping into unexplored information:
The Hidden-Web
crawler will allow an average Web user to easily explore the
vast amount of information that is mostly "hidden" at present.
Since a majority of Web users rely on search engines to discover
pages, when pages are not indexed by search engines, they are
unlikely to be viewed by many Web users. Unless users go directly
to Hidden-Web sites and issue queries there, they cannot
access the pages at the sites.
Improving user experience: Even if a user is aware of a number
of Hidden-Web sites, the user still has to waste a significant
amount of time and effort, visiting all of the potentially relevant
sites, querying each of them and exploring the result. By making
the Hidden-Web pages searchable at a central location, we can
significantly reduce the user's wasted time and effort in searching
the Hidden Web.
Reducing potential bias: Due to the heavy reliance of many Web
users on search engines for locating information, search engines
influence how the users perceive the Web [28]. Users do not
necessarily perceive what actually exists on the Web, but what
is indexed by search engines [28]. According to a recent article
[5], several organizations have recognized the importance of
bringing information of their Hidden Web sites onto the surface,
and committed considerable resources towards this effort. Our
Figure 1: A single-attribute search interface
Hidden-Web crawler attempts to automate this process for Hidden
Web sites with textual content, thus minimizing the associated
costs and effort required.
Given that the only "entry" to Hidden Web pages is through
querying a search form, there are two core challenges to implementing
an effective Hidden Web crawler: (a) The crawler has to
be able to understand and model a query interface, and (b) The
crawler has to come up with meaningful queries to issue to the
query interface. The first challenge was addressed by Raghavan
and Garcia-Molina in [29], where a method for learning search interfaces
was presented. Here, we present a solution to the second
challenge, i.e. how a crawler can automatically generate queries so
that it can discover and download the Hidden Web pages.
Clearly, when the search forms list all possible values for a query
(e.g., through a drop-down list), the solution is straightforward. We
exhaustively issue all possible queries, one query at a time. When
the query forms have a "free text" input, however, an infinite number
of queries are possible, so we cannot exhaustively issue all possible
queries. In this case, what queries should we pick? Can the
crawler automatically come up with meaningful queries without
understanding the semantics of the search form?
In this paper, we provide a theoretical framework to investigate
the Hidden-Web crawling problem and propose effective ways of
generating queries automatically. We also evaluate our proposed
solutions through experiments conducted on real Hidden-Web sites.
In summary, this paper makes the following contributions:
We present a formal framework to study the problem of Hidden-Web
crawling. (Section 2).
We investigate a number of crawling policies for the Hidden
Web, including the optimal policy that can potentially download
the maximum number of pages through the minimum number of
interactions. Unfortunately, we show that the optimal policy is
NP-hard and cannot be implemented in practice (Section 2.2).
We propose a new adaptive policy that approximates the optimal
policy. Our adaptive policy examines the pages returned from
previous queries and adapts its query-selection policy automatically
based on them (Section 3).
We evaluate various crawling policies through experiments on
real Web sites. Our experiments will show the relative advantages
of various crawling policies and demonstrate their potential
. The results from our experiments are very promising. In
one experiment, for example, our adaptive policy downloaded
more than 90% of the pages within PubMed (that contains 14
million documents) after it issued fewer than 100 queries.
FRAMEWORK
In this section, we present a formal framework for the study of
the Hidden-Web crawling problem. In Section 2.1, we describe our
assumptions on Hidden-Web sites and explain how users interact
with the sites. Based on this interaction model, we present a high-level
algorithm for a Hidden-Web crawler in Section 2.2. Finally in
Section 2.3, we formalize the Hidden-Web crawling problem.
2.1 Hidden-Web database model
There exists a variety of Hidden Web sources that provide information
on a multitude of topics. Depending on the type of information
, we may categorize a Hidden-Web site either as a textual
database or a structured database. A textual database is a site that mainly contains plain-text documents, such as PubMed and Lexis-Nexis (an online database of legal documents [1]).
Figure 2: A multi-attribute search interface
Since plain-text
documents do not usually have well-defined structure, most
textual databases provide a simple search interface where users
type a list of keywords in a single search box (Figure 1). In contrast
, a structured database often contains multi-attribute relational
data (e.g., a book on the Amazon Web site may have the fields
title=`Harry Potter', author=`J.K. Rowling' and isbn=`0590353403') and supports multi-attribute search interfaces
(Figure 2). In this paper, we will mainly focus on textual
databases that support single-attribute keyword queries. We
discuss how we can extend our ideas for the textual databases to
multi-attribute structured databases in Section 6.1.
Typically, the users need to take the following steps in order to
access pages in a Hidden-Web database:
1. Step 1. First, the user issues a query, say "liver," through the
search interface provided by the Web site (such as the one shown
in Figure 1).
2. Step 2. Shortly after the user issues the query, she is presented
with a result index page. That is, the Web site returns a list of
links to potentially relevant Web pages, as shown in Figure 3(a).
3. Step 3. From the list in the result index page, the user identifies
the pages that look "interesting" and follows the links. Clicking
on a link leads the user to the actual Web page, such as the one
shown in Figure 3(b), that the user wants to look at.
2.2 A generic Hidden Web crawling algorithm
Given that the only "entry" to the pages in a Hidden-Web site
is its search form, a Hidden-Web crawler should follow the three
steps described in the previous section. That is, the crawler has
to generate a query, issue it to the Web site, download the result
index page, and follow the links to download the actual pages. In
most cases, a crawler has limited time and network resources, so
the crawler repeats these steps until it uses up its resources.
In Figure 4 we show the generic algorithm for a Hidden-Web
crawler. For simplicity, we assume that the Hidden-Web crawler
issues single-term queries only. (For most Web sites that assume "AND" for multi-keyword queries, single-term queries return the maximum number of results; extending our work to multi-keyword queries is straightforward.) The crawler first decides which
query term it is going to use (Step (2)), issues the query, and retrieves
the result index page (Step (3)). Finally, based on the links
found on the result index page, it downloads the Hidden Web pages
from the site (Step (4)). This same process is repeated until all the
available resources are used up (Step (1)).
Given this algorithm, we can see that the most critical decision
that a crawler has to make is what query to issue next. If the
crawler can issue successful queries that will return many matching
pages, the crawler can finish its crawling early on using minimum
resources. In contrast, if the crawler issues completely irrelevant
queries that do not return any matching pages, it may waste all
of its resources simply issuing queries without ever retrieving actual
pages. Therefore, how the crawler selects the next query can
greatly affect its effectiveness. In the next section, we formalize
this query selection problem.
(a) List of matching pages for query "liver".
(b) The first matching page for "liver".
Figure 3: Pages from the PubMed Web site.
ALGORITHM 2.1. Crawling a Hidden Web site
Procedure
(1) while ( there are available resources ) do
      // select a term to send to the site
(2)   q_i = SelectTerm()
      // send query and acquire result index page
(3)   R(q_i) = QueryWebSite( q_i )
      // download the pages of interest
(4)   Download( R(q_i) )
(5) done
Figure 4: Algorithm for crawling a Hidden Web site.
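To make the control flow concrete, the following is a minimal Python sketch of the loop in Figure 4. The helpers select_term(), query_web_site(), download() and has_resources() are placeholders for the components discussed in the rest of the paper, not part of the original algorithm.

```python
# A minimal sketch of the generic crawling loop of Figure 4.
# All helper callables are site-specific placeholders.

def crawl_hidden_web_site(select_term, query_web_site, download, has_resources):
    downloaded = set()                      # URLs of pages fetched so far
    while has_resources():                  # step (1): respect the resource budget
        q = select_term()                   # step (2): pick the next query term
        result_links = query_web_site(q)    # step (3): issue query, get result index page
        for url in result_links:            # step (4): fetch the pages of interest
            if url not in downloaded:
                download(url)
                downloaded.add(url)
    return downloaded
```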
Figure 5: A set-formalization of the optimal query selection
problem.
2.3 Problem formalization
Theoretically, the problem of query selection can be formalized as follows: We assume that the crawler downloads pages from a Web site that has a set of pages S (the rectangle in Figure 5). We represent each Web page in S as a point (dots in Figure 5). Every potential query q_i that we may issue can be viewed as a subset of S, containing all the points (pages) that are returned when we issue q_i to the site. Each subset is associated with a weight that represents the cost of issuing the query. Under this formalization, our goal is to find which subsets (queries) cover the maximum number of points (Web pages) with the minimum total weight (cost). This problem is equivalent to the set-covering problem in graph theory [16].
There are two main difficulties that we need to address in this formalization. First, in a practical situation, the crawler does not know which Web pages will be returned by which queries, so the subsets of S are not known in advance. Without knowing these subsets the crawler cannot decide which queries to pick to maximize the coverage. Second, the set-covering problem is known to be NP-Hard [16], so an efficient algorithm to solve this problem optimally in polynomial time has yet to be found.
In this paper, we will present an approximation algorithm that
can find a near-optimal solution at a reasonable computational cost.
Our algorithm leverages the observation that although we do not
know which pages will be returned by each query
q_i that we issue,
we can predict how many pages will be returned. Based on this information
our query selection algorithm can then select the "best"
queries that cover the content of the Web site. We present our prediction
method and our query selection algorithm in Section 3.
2.3.1 Performance Metric
Before we present our ideas for the query selection problem, we
briefly discuss some of our notation and the cost/performance metrics.
Given a query q_i, we use P(q_i) to denote the fraction of pages that we will get back if we issue query q_i to the site. For example, if a Web site has 10,000 pages in total, and if 3,000 pages are returned for the query q_i = "medicine", then P(q_i) = 0.3. We use P(q_1 ∩ q_2) to represent the fraction of pages that are returned from both q_1 and q_2 (i.e., the intersection of P(q_1) and P(q_2)). Similarly, we use P(q_1 ∪ q_2) to represent the fraction of pages that are returned from either q_1 or q_2 (i.e., the union of P(q_1) and P(q_2)).
We also use Cost(q_i) to represent the cost of issuing the query q_i. Depending on the scenario, the cost can be measured either in time, network bandwidth, the number of interactions with the site, or it can be a function of all of these. As we will see later, our proposed algorithms are independent of the exact cost function.
In the most common case, the query cost consists of a number of factors, including the cost for submitting the query to the site, retrieving the result index page (Figure 3(a)) and downloading the actual pages (Figure 3(b)). We assume that submitting a query incurs a fixed cost of c_q. The cost for downloading the result index page is proportional to the number of matching documents to the query, while the cost c_d for downloading a matching document is also fixed. Then the overall cost of query q_i is

    Cost(q_i) = c_q + c_r P(q_i) + c_d P(q_i).    (1)

In certain cases, some of the documents from q_i may have already been downloaded from previous queries. In this case, the crawler may skip downloading these documents and the cost of q_i can be

    Cost(q_i) = c_q + c_r P(q_i) + c_d P_new(q_i).    (2)

Here, we use P_new(q_i) to represent the fraction of the new documents from q_i that have not been retrieved from previous queries. Later in Section 3.1 we will study how we can estimate P(q_i) and P_new(q_i) to estimate the cost of q_i.
Since our algorithms are independent of the exact cost function, we will assume a generic cost function Cost(q_i) in this paper. When we need a concrete cost function, however, we will use Equation 2.
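As a concrete illustration, the cost model of Equation 2 can be written as a small function. The constants below are illustrative placeholders, not values prescribed by the paper.

```python
# A small sketch of the query cost model of Equation 2.
# c_q, c_r, c_d are site-specific constants (query submission cost,
# per-result cost of the result index page, per-document download cost);
# the default values here are illustrative only.

def query_cost(p_qi, p_new_qi, c_q=100.0, c_r=100.0, c_d=10000.0):
    """Estimated cost of issuing query q_i.

    p_qi     -- estimated fraction of the site's pages matching q_i, P(q_i)
    p_new_qi -- estimated fraction of new pages returned by q_i, P_new(q_i)
    """
    return c_q + c_r * p_qi + c_d * p_new_qi

# Example: a query matching 30% of the site, of which 20% of the site is new content.
print(query_cost(p_qi=0.3, p_new_qi=0.2))
```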
Given the notation, we can formalize the goal of a Hidden-Web
crawler as follows:
PROBLEM 1. Find the set of queries q_1, . . . , q_n that maximizes

    P(q_1 ∪ · · · ∪ q_n)

under the constraint

    Σ_{i=1}^{n} Cost(q_i) ≤ t.

Here, t is the maximum download resource that the crawler has.
KEYWORD SELECTION
How should a crawler select the queries to issue? Given that the
goal is to download the maximum number of unique documents
from a textual database, we may consider one of the following options:
Random: We select random keywords from, say, an English dictionary
and issue them to the database. The hope is that a random
query will return a reasonable number of matching documents.
Generic-frequency: We analyze a generic document corpus collected
elsewhere (say, from the Web) and obtain the generic frequency
distribution of each keyword. Based on this generic distribution
, we start with the most frequent keyword, issue it to the
Hidden-Web database and retrieve the result. We then continue
to the second-most frequent keyword and repeat this process until
we exhaust all download resources. The hope is that the frequent
keywords in a generic corpus will also be frequent in the
Hidden-Web database, returning many matching documents.
Adaptive: We analyze the documents returned from the previous
queries issued to the Hidden-Web database and estimate which
keyword is most likely to return the most documents. Based on
this analysis, we issue the most "promising" query, and repeat
the process.
Among these three general policies, we may consider the random
policy as the base comparison point since it is expected to
perform the worst. Between the generic-frequency and the adaptive
policies, both policies may show similar performance if the
crawled database has a generic document collection without a specialized
topic. The adaptive policy, however, may perform significantly
better than the generic-frequency policy if the database has a
very specialized collection that is different from the generic corpus.
We will experimentally compare these three policies in Section 4.
While the first two policies (random and generic-frequency policies) are easy to implement, we need to understand how we can analyze
the downloaded pages to identify the most "promising" query
in order to implement the adaptive policy. We address this issue in
the rest of this section.
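For illustration, a minimal sketch of the generic-frequency policy might look as follows, assuming a pre-computed term-frequency dictionary for a generic corpus and a placeholder issue_query routine; neither is part of the paper's own implementation.

```python
# A sketch of the generic-frequency policy: issue keywords in decreasing
# order of their frequency in some generic corpus until resources run out.
# corpus_term_freqs: dict mapping term -> frequency in the generic corpus.
# issue_query() and has_resources() stand in for the crawler machinery.

def generic_frequency_policy(corpus_term_freqs, issue_query, has_resources):
    ranked_terms = [t for t, _ in sorted(corpus_term_freqs.items(),
                                         key=lambda kv: kv[1], reverse=True)]
    for term in ranked_terms:
        if not has_resources():
            break
        issue_query(term)
```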
3.1 Estimating the number of matching pages
In order to identify the most promising query, we need to estimate how many new documents we will download if we issue the query q_i as the next query. That is, assuming that we have issued queries q_1, . . . , q_{i-1}, we need to estimate P(q_1 ∪ · · · ∪ q_{i-1} ∪ q_i) for every potential next query q_i and compare this value. In estimating this number, we note that we can rewrite P(q_1 ∪ · · · ∪ q_{i-1} ∪ q_i) as:

    P((q_1 ∪ · · · ∪ q_{i-1}) ∪ q_i)
      = P(q_1 ∪ · · · ∪ q_{i-1}) + P(q_i) − P((q_1 ∪ · · · ∪ q_{i-1}) ∩ q_i)
      = P(q_1 ∪ · · · ∪ q_{i-1}) + P(q_i) − P(q_1 ∪ · · · ∪ q_{i-1}) P(q_i | q_1 ∪ · · · ∪ q_{i-1})    (3)

In the above formula, note that we can precisely measure P(q_1 ∪ · · · ∪ q_{i-1}) and P(q_i | q_1 ∪ · · · ∪ q_{i-1}) by analyzing previously-downloaded pages: We know P(q_1 ∪ · · · ∪ q_{i-1}), the union of all pages downloaded from q_1, . . . , q_{i-1}, since we have already issued q_1, . . . , q_{i-1} and downloaded the matching pages. (For exact estimation, we need to know the total number of pages in the site; however, in order to compare only relative values among queries, this information is not actually needed.) We can also measure P(q_i | q_1 ∪ · · · ∪ q_{i-1}), the probability that q_i appears in the pages from q_1, . . . , q_{i-1}, by counting how many times q_i appears in the pages from q_1, . . . , q_{i-1}. Therefore, we only need to estimate P(q_i) to evaluate P(q_1 ∪ · · · ∪ q_i). We may consider a number of different ways to estimate P(q_i), including the following:
1. Independence estimator: We assume that the appearance of the term q_i is independent of the terms q_1, . . . , q_{i-1}. That is, we assume that P(q_i) = P(q_i | q_1 ∪ · · · ∪ q_{i-1}).
2. Zipf estimator: In [19], Ipeirotis et al. proposed a method to estimate how many times a particular term occurs in the entire corpus based on a subset of documents from the corpus. Their method exploits the fact that the frequency of terms inside text collections follows a power law distribution [30, 25]. That is, if we rank all terms based on their occurrence frequency (with the most frequent term having a rank of 1, the second most frequent a rank of 2, etc.), then the frequency f of a term inside the text collection is given by:

    f = α (r + β)^{-γ}    (4)

where r is the rank of the term and α, β, and γ are constants that depend on the text collection. Their main idea is (1) to estimate the three parameters, α, β and γ, based on the subset of documents that we have downloaded from previous queries, and (2) use the estimated parameters to predict f given the ranking r of a term within the subset. For a more detailed description on how we can use this method to estimate P(q_i), we refer the reader to the extended version of this paper [27].
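As an illustration of the Zipf-style estimator, the sketch below fits a power-law curve of the assumed form f = α(r + β)^(−γ) to the term frequencies observed in the downloaded subset, using SciPy's curve_fit. This is a rough approximation of the approach in [19], not the authors' implementation, and the sample frequencies are invented for the example.

```python
# Rough sketch: fit alpha, beta, gamma of f = alpha * (r + beta) ** (-gamma)
# from observed term frequencies, then predict the frequency at a given rank.

import numpy as np
from scipy.optimize import curve_fit

def power_law(r, alpha, beta, gamma):
    return alpha * (r + beta) ** (-gamma)

def fit_term_frequency_curve(observed_freqs):
    """observed_freqs: term frequencies measured in the downloaded subset."""
    freqs = np.array(sorted(observed_freqs, reverse=True), dtype=float)
    ranks = np.arange(1, len(freqs) + 1, dtype=float)
    params, _ = curve_fit(power_law, ranks, freqs,
                          p0=(freqs[0], 1.0, 1.0), bounds=(0, np.inf))
    return params  # (alpha, beta, gamma)

# Example with made-up frequencies: predict the frequency of the rank-10 term.
alpha, beta, gamma = fit_term_frequency_curve([120, 65, 40, 28, 22, 17, 14, 11])
print(power_law(10, alpha, beta, gamma))
```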
After we estimate the P(q_i) and P(q_i | q_1 ∪ · · · ∪ q_{i-1}) values, we can calculate P(q_1 ∪ · · · ∪ q_i). In Section 3.3, we explain how we can efficiently compute P(q_i | q_1 ∪ · · · ∪ q_{i-1}) by maintaining a succinct summary table. In the next section, we first examine how we can use this value to decide which query we should issue next to the Hidden Web site.
3.2 Query selection algorithm
The goal of the Hidden-Web crawler is to download the maximum number of unique documents from a database using its limited download resources. Given this goal, the Hidden-Web crawler has to take two factors into account: (1) the number of new documents that can be obtained from the query q_i, and (2) the cost of issuing the query q_i. For example, if two queries, q_i and q_j, incur the same cost, but q_i returns more new pages than q_j, then q_i is more desirable than q_j. Similarly, if q_i and q_j return the same number of new documents, but q_i incurs less cost than q_j, then q_i is more desirable. Based on this observation, the Hidden-Web crawler may use the following efficiency metric to quantify the desirability of the query q_i:
    Efficiency(q_i) = P_new(q_i) / Cost(q_i)

Here, P_new(q_i) represents the amount of new documents returned for q_i (the pages that have not been returned for previous queries). Cost(q_i) represents the cost of issuing the query q_i.
Intuitively, the efficiency of q_i measures how many new documents are retrieved per unit cost, and can be used as an indicator of
ALGORITHM 3.1. Greedy SelectTerm()
Parameters:
T: The list of potential query keywords
Procedure
(1) Foreach t_k in T do
(2)   Estimate Efficiency(t_k) = P_new(t_k) / Cost(t_k)
(3) done
(4) return t_k with maximum Efficiency(t_k)
Figure 6: Algorithm for selecting the next query term.
how well our resources are spent when issuing q_i. Thus, the Hidden-Web crawler can estimate the efficiency of every candidate q_i, and select the one with the highest value. By using its resources more efficiently, the crawler may eventually download the maximum number of unique documents. In Figure 6, we show the query selection function that uses the concept of efficiency. In principle, this algorithm takes a greedy approach and tries to maximize the "potential gain" in every step.
We can estimate the efficiency of every query using the estimation method described in Section 3.1. That is, the size of the new documents from the query q_i, P_new(q_i), is

    P_new(q_i) = P(q_1 ∪ · · · ∪ q_{i-1} ∪ q_i) − P(q_1 ∪ · · · ∪ q_{i-1})
               = P(q_i) − P(q_1 ∪ · · · ∪ q_{i-1}) P(q_i | q_1 ∪ · · · ∪ q_{i-1})

from Equation 3, where P(q_i) can be estimated using one of the methods described in Section 3. We can also estimate Cost(q_i) similarly. For example, if Cost(q_i) is

    Cost(q_i) = c_q + c_r P(q_i) + c_d P_new(q_i)

(Equation 2), we can estimate Cost(q_i) by estimating P(q_i) and P_new(q_i).
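Putting the pieces together, a sketch of the greedy selection step of Figure 6 might combine the P_new estimate of Equation 3 with the cost model of Equation 2 as follows. The inputs are assumed to come from the query statistics table of Section 3.3 and one of the P(q_i) estimators above; the cost constants are illustrative, not prescribed by the paper.

```python
# Sketch of the greedy SelectTerm() step (Algorithm 3.1).
# p_qi            : estimated P(q_i)
# p_union_prev    : measured P(q_1 u ... u q_{i-1})
# p_qi_given_prev : measured P(q_i | q_1 u ... u q_{i-1})

def p_new(p_qi, p_union_prev, p_qi_given_prev):
    # Equation 3 rearranged: P_new(q_i) = P(q_i) - P(prev) * P(q_i | prev)
    return p_qi - p_union_prev * p_qi_given_prev

def efficiency(p_qi, p_union_prev, p_qi_given_prev,
               c_q=100.0, c_r=100.0, c_d=10000.0):
    new = p_new(p_qi, p_union_prev, p_qi_given_prev)
    cost = c_q + c_r * p_qi + c_d * new          # Equation 2
    return new / cost

def select_term(candidates, p_union_prev):
    """candidates: dict term -> (estimated P(q_i), measured P(q_i | previous))."""
    return max(candidates,
               key=lambda t: efficiency(candidates[t][0], p_union_prev,
                                        candidates[t][1]))
```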
3.3 Efficient calculation of query statistics
In estimating the efficiency of queries, we found that we need to measure P(q_i | q_1 ∪ · · · ∪ q_{i-1}) for every potential query q_i. This calculation can be very time-consuming if we repeat it from scratch for every query q_i in every iteration of our algorithm. In this section, we explain how we can compute P(q_i | q_1 ∪ · · · ∪ q_{i-1}) efficiently by maintaining a small table that we call a query statistics table.
The main idea for the query statistics table is that P(q_i | q_1 ∪ · · · ∪ q_{i-1}) can be measured by counting how many times the keyword q_i appears within the documents downloaded from q_1, . . . , q_{i-1}. We record these counts in a table, as shown in Figure 7(a). The left column of the table contains all potential query terms and the right column contains the number of previously-downloaded documents containing the respective term. For example, the table in Figure 7(a) shows that we have downloaded 50 documents so far, and the term model appears in 10 of these documents. Given this number, we can compute that P(model | q_1 ∪ · · · ∪ q_{i-1}) = 10/50 = 0.2.
We note that the query statistics table needs to be updated whenever we issue a new query q_i and download more documents. This update can be done efficiently as we illustrate in the following example.
EXAMPLE 1. After examining the query statistics table of Figure 7(a), we have decided to use the term "computer" as our next query q_i. From the new query q_i = "computer," we downloaded 20 more new pages. Out of these, 12 contain the keyword "model" and 18 the keyword "disk." The table in Figure 7(b) shows the frequency of each term in the newly-downloaded pages.
We can update the old table (Figure 7(a)) to include this new information by simply adding the corresponding entries of Figures 7(a) and (b). The result is shown in Figure 7(c). For example, keyword "model" exists in 10 + 12 = 22 pages within the pages retrieved from q_1, ..., q_i. According to this new table, P(model | q_1 ∨ ... ∨ q_i) is now 22/70 ≈ 0.3.

(a) After q_1, ..., q_{i-1}:
Term t_k      N(t_k)
model         10
computer      38
digital       50
Total pages:  50

(b) New from q_i = computer:
Term t_k      N(t_k)
model         12
computer      20
disk          18
New pages:    20

(c) After q_1, ..., q_i:
Term t_k      N(t_k)
model         10 + 12 = 22
computer      38 + 20 = 58
disk          0 + 18 = 18
digital       50 + 0 = 50
Total pages:  50 + 20 = 70

Figure 7: Updating the query statistics table.

Figure 8: A Web site that does not return all the results.
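The update illustrated in Example 1 amounts to adding the per-term document counts and the page totals. The following Java sketch of such a query statistics table is ours (the class and method names are hypothetical), not the paper's implementation:

import java.util.HashMap;
import java.util.Map;

// Sketch of the query statistics table of Figure 7: per-term document counts
// plus the total number of pages downloaded so far.
class QueryStatisticsTable {
    private final Map<String, Integer> termCounts = new HashMap<String, Integer>();
    private int totalPages = 0;

    // Merge the counts measured on the newly downloaded pages (Figure 7(b))
    // into the running totals (Figure 7(a) becomes Figure 7(c)).
    void update(Map<String, Integer> newTermCounts, int newPages) {
        for (Map.Entry<String, Integer> e : newTermCounts.entrySet()) {
            Integer old = termCounts.get(e.getKey());
            termCounts.put(e.getKey(), (old == null ? 0 : old) + e.getValue());
        }
        totalPages += newPages;
    }

    // P(term | q_1 v ... v q_i): fraction of downloaded pages containing the term.
    double conditionalProbability(String term) {
        Integer count = termCounts.get(term);
        return totalPages == 0 ? 0.0 : (count == null ? 0 : count) / (double) totalPages;
    }
}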
3.4 Crawling sites that limit the number of results
In certain cases, when a query matches a large number of pages, the Hidden Web site returns only a portion of those pages. For example, the Open Directory Project [2] allows the users to see only up to 10,000 results after they issue a query. Obviously, this kind of limitation has an immediate effect on our Hidden Web crawler. First, since we can only retrieve up to a specific number of pages per query, our crawler will need to issue more queries (and potentially will use up more resources) in order to download all the pages. Second, the query selection method that we presented in Section 3.2 assumes that for every potential query q_i, we can find P(q_i | q_1 ∨ ... ∨ q_{i-1}). That is, for every query q_i we can find the fraction of documents in the whole text database that contain q_i with at least one of q_1, ..., q_{i-1}. However, if the text database returned only a portion of the results for any of the q_1, ..., q_{i-1}, then the value P(q_i | q_1 ∨ ... ∨ q_{i-1}) is not accurate and may affect our decision for the next query q_i, and potentially the performance of our crawler. Since we cannot retrieve more results per query than the maximum number the Web site allows, our crawler has no other choice besides submitting more queries. However, there is a way to estimate the correct value for P(q_i | q_1 ∨ ... ∨ q_{i-1}) in the case where the Web site returns only a portion of the results.
Again, assume that the Hidden Web site we are currently crawling is represented as the rectangle in Figure 8 and its pages as points in the figure. Assume that we have already issued queries q_1, ..., q_{i-1} which returned a number of results less than the maximum number that the site allows, and therefore we have downloaded all the pages for these queries (big circle in Figure 8). That is, at this point, our estimation for P(q_i | q_1 ∨ ... ∨ q_{i-1}) is accurate. Now assume that we submit query q_i to the Web site, but due to a limitation in the number of results that we get back, we retrieve the set q_i' (small circle in Figure 8) instead of the set q_i (dashed circle in Figure 8). Now we need to update our query statistics table so that it has accurate information for the next step. That is, although we got the set q_i' back, for every potential query q_{i+1} we need to find P(q_{i+1} | q_1 ∨ ... ∨ q_i):

P(q_{i+1} | q_1 ∨ ... ∨ q_i) = (1 / P(q_1 ∨ ... ∨ q_i)) [ P(q_{i+1} ∧ (q_1 ∨ ... ∨ q_{i-1})) + P(q_{i+1} ∧ q_i) - P(q_{i+1} ∧ q_i ∧ (q_1 ∨ ... ∨ q_{i-1})) ]     (5)
In the previous equation, we can find P(q_1 ∨ ... ∨ q_i) by estimating P(q_i) with the method shown in Section 3. Additionally, we can calculate P(q_{i+1} ∧ (q_1 ∨ ... ∨ q_{i-1})) and P(q_{i+1} ∧ q_i ∧ (q_1 ∨ ... ∨ q_{i-1})) by directly examining the documents that we have downloaded from queries q_1, ..., q_{i-1}. The term P(q_{i+1} ∧ q_i), however, is unknown and we need to estimate it. Assuming that q_i' is a random sample of q_i, then:

P(q_{i+1} ∧ q_i) / P(q_{i+1} ∧ q_i') = P(q_i) / P(q_i')     (6)

From Equation 6 we can calculate P(q_{i+1} ∧ q_i), and after we replace this value in Equation 5 we can find P(q_{i+1} | q_1 ∨ ... ∨ q_i).
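To illustrate how Equations 5 and 6 combine, the sketch below computes the corrected conditional probability in Java. It is only a sketch of the calculation described above; the parameter names are ours, and each input is assumed to have already been measured on the downloaded documents or estimated as discussed in Section 3.

// Corrected estimate of P(q_{i+1} | q_1 v ... v q_i) when only a partial
// result set q_i' of q_i was retrieved (Equations 5 and 6).
class LimitedResultEstimator {
    static double correctedConditional(
            double pUnionUpToI,       // P(q_1 v ... v q_i), estimated as in Section 3
            double pNextAndPrev,      // P(q_{i+1} ^ (q_1 v ... v q_{i-1})), from downloaded docs
            double pNextAndQiPrime,   // P(q_{i+1} ^ q_i'), measured on the partial result set
            double pNextAndQiAndPrev, // P(q_{i+1} ^ q_i ^ (q_1 v ... v q_{i-1})), from downloaded docs
            double pQi,               // P(q_i), estimated
            double pQiPrime) {        // P(q_i'), the fraction actually retrieved
        // Equation 6: scale the measurement on q_i' up to the full result set q_i,
        // assuming q_i' is a random sample of q_i.
        double pNextAndQi = pNextAndQiPrime * (pQi / pQiPrime);
        // Equation 5.
        return (pNextAndPrev + pNextAndQi - pNextAndQiAndPrev) / pUnionUpToI;
    }
}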
EXPERIMENTAL EVALUATION
In this section we experimentally evaluate the performance of the various algorithms for Hidden Web crawling presented in this paper. Our goal is to validate our theoretical analysis through real-world experiments, by crawling popular Hidden Web sites of textual databases. Since the number of documents that are discovered and downloaded from a textual database depends on the selection of the words that will be issued as queries to the search interface of each site, we compare the various selection policies that were described in Section 3, namely the random, generic-frequency, and adaptive algorithms. (Footnote 5: Throughout our experiments, once an algorithm has submitted a query to a database, we exclude the query from subsequent submissions to the same database from the same algorithm.)
The adaptive algorithm learns new keywords and terms from the documents that it downloads, and its selection process is driven by a cost model as described in Section 3.2. To keep our experiment and its analysis simple at this point, we will assume that the cost for every query is constant. That is, our goal is to maximize the number of downloaded pages by issuing the least number of queries. Later, in Section 4.4, we will present a comparison of our policies based on a more elaborate cost model. In addition, we use the independence estimator (Section 3.1) to estimate P(q_i) from downloaded pages. Although the independence estimator is a simple estimator, our experiments will show that it can work very well in practice. (Footnote 6: We defer the reporting of results based on the Zipf estimation to future work.)
For the generic-frequency policy, we compute the frequency distribution of words that appear in a 5.5-million-Web-page corpus
downloaded from 154 Web sites of various topics [26]. Keywords are selected based on the decreasing frequency with which they appear in this document set, with the most frequent one being selected first, followed by the second-most frequent keyword, etc. (Footnote 7: We did not manually exclude stop words (e.g., the, is, of, etc.) from the keyword list. As it turns out, all Web sites except PubMed return matching documents for the stop words, such as "the.")
Regarding the random policy, we use the same set of words collected from the Web corpus, but in this case, instead of selecting keywords based on their relative frequency, we choose them randomly (uniform distribution). In order to further investigate how the quality of the potential query-term list affects the random-based algorithm, we construct two sets: one with the 16,000 most frequent words of the term collection used in the generic-frequency policy (hereafter, the random policy with the set of 16,000 words will be referred to as random-16K), and another set with the 1 million most frequent words of the same collection as above (hereafter referred to as random-1M). The former set has frequent words that appear in a large number of documents (at least 10,000 in our collection), and therefore can be considered "high-quality" terms. The latter set, though, contains a much larger collection of words, among which some might be bogus and meaningless.
The experiments were conducted by employing each one of the aforementioned algorithms (adaptive, generic-frequency, random-16K, and random-1M) to crawl and download contents from three Hidden Web sites: the PubMed Medical Library (http://www.pubmed.org), Amazon (http://www.amazon.com), and the Open Directory Project [2]. According to the information on PubMed's Web site, its collection contains approximately 14 million abstracts of biomedical articles. We consider these abstracts as the "documents" in the site, and in each iteration of the adaptive policy, we use these abstracts as input to the algorithm. Thus our goal is to "discover" as many unique abstracts as possible by repeatedly querying the Web query interface provided by PubMed. The Hidden Web crawling on the PubMed Web site can be considered topic-specific, due to the fact that all abstracts within PubMed are related to the fields of medicine and biology.
In the case of the Amazon Web site, we are interested in downloading all the hidden pages that contain information on books. The querying to Amazon is performed through the Software Developer's Kit that Amazon provides for interfacing to its Web site, and which returns results in XML form. The generic "keyword" field is used for searching, and as input to the adaptive policy we extract the product description and the text of customer reviews when present in the XML reply. Since Amazon does not provide any information on how many books it has in its catalogue, we use random sampling on the 10-digit ISBN numbers of the books to estimate the size of the collection. Out of the 10,000 random ISBN numbers queried, 46 are found in the Amazon catalogue; therefore the size of its book collection is estimated to be (46/10,000) x 10^10 = 4.6 million books. It is also worth noting here that Amazon poses an upper limit on the number of results (books in our case) returned by each query, which is set to 32,000.
As for the third Hidden Web site, the Open Directory Project (hereafter also referred to as dmoz), the site maintains the links to 3.8 million sites together with a brief summary of each listed site. The links are searchable through a keyword-search interface. We consider each indexed link together with its brief summary as the document of the dmoz site, and we provide the short summaries to the adaptive algorithm to drive the selection of new keywords for querying. On the dmoz Web site, we perform two Hidden Web crawls: the first is on its generic collection of 3.8-million indexed
Figure 9: Coverage of policies for PubMed (cumulative fraction of unique documents downloaded vs. query number for the adaptive, generic-frequency, random-16K, and random-1M policies).

Figure 10: Coverage of policies for Amazon (cumulative fraction of unique documents downloaded vs. query number for the adaptive, generic-frequency, random-16K, and random-1M policies).
sites, regardless of the category that they fall into. The other crawl is performed specifically on the Arts section of dmoz (http://dmoz.org/Arts), which comprises approximately 429,000 indexed sites that are relevant to Arts, making this crawl topic-specific, as in PubMed. Like Amazon, dmoz also enforces an upper limit on the number of returned results, which is 10,000 links with their summaries.
4.1 Comparison of policies
The first question that we seek to answer concerns the evolution of the coverage metric as we submit queries to the sites. That is, what fraction of the collection of documents stored in the Hidden Web site can we download as we continuously query for new words selected using the policies described above? More formally, we are interested in the value of P(q_1 ∨ ... ∨ q_{i-1} ∨ q_i) after we submit q_1, ..., q_i queries, and as i increases.
In Figures 9, 10, 11, and 12 we present the coverage metric for each policy, as a function of the query number, for the Web sites of PubMed, Amazon, general dmoz, and the Arts-specific dmoz, respectively. On the y-axis the fraction of the total documents downloaded from the website is plotted, while the x-axis represents the query number. A first observation from these graphs is that, in general, the generic-frequency and the adaptive policies perform much better than the random-based algorithms. In all of the figures, the graphs for random-1M and random-16K are significantly below those of the other policies.
Figure 11: Coverage of policies for general dmoz (cumulative fraction of unique documents downloaded vs. query number for the adaptive, generic-frequency, random-16K, and random-1M policies).

Figure 12: Coverage of policies for the Arts section of dmoz (cumulative fraction of unique documents downloaded vs. query number for the adaptive, generic-frequency, random-16K, and random-1M policies).

Between the generic-frequency and the adaptive policies, we can see that the latter outperforms the former when the site is topic-specific. For example, for the PubMed site (Figure 9), the adaptive
algorithm issues only 83 queries to download almost 80% of the
documents stored in PubMed, but the generic-frequency algorithm
requires 106 queries for the same coverage. For the dmoz/Arts
crawl (Figure 12), the difference is even more substantial: the adaptive
policy is able to download 99.98% of the total sites indexed in
the Directory by issuing 471 queries, while the frequency-based algorithm
is much less effective using the same number of queries,
and discovers only 72% of the total number of indexed sites. The
adaptive algorithm, by examining the contents of the pages that it
downloads at each iteration, is able to identify the topic of the site as
expressed by the words that appear most frequently in the result-set.
Consequently, it is able to select words for subsequent queries that
are more relevant to the site than those preferred by the generic-frequency
policy, which are drawn from a large, generic collection.
Table 1 shows a sample of 10 keywords out of 211 chosen and submitted
to the PubMed Web site by the adaptive algorithm, but not
by the other policies. For each keyword, we present the number of
the iteration, along with the number of results that it returned. As
one can see from the table, these keywords are highly relevant to
the topics of medicine and biology of the Public Medical Library,
and match against numerous articles stored in its Web site.
Iteration    Keyword      Number of Results
23           department   2,719,031
34           patients     1,934,428
53           clinical     1,198,322
67           treatment    4,034,565
69           medical      1,368,200
70           hospital     503,307
146          disease      1,520,908
172          protein      2,620,938

Table 1: Sample of keywords queried to PubMed exclusively by the adaptive policy

In both cases examined in Figures 9 and 12, the random-based policies perform much worse than the adaptive algorithm and the generic-frequency. It is worth noting, however, that the random-based policy with the small, carefully selected set of 16,000 "quality" words manages to download a considerable fraction of 42.5%
from the PubMed Web site after 200 queries, while the coverage
for the Arts section of dmoz reaches 22.7% after 471 queried keywords. On the other hand, the random-based approach that makes use of the vast collection of 1 million words, among which a large number are bogus keywords, fails to download even a mere 1% of the total collection after submitting the same number of query words.
For the generic collections of Amazon and the dmoz sites, shown
in Figures 10 and 11 respectively, we get mixed results: The generic-frequency
policy shows slightly better performance than the adaptive
policy for the Amazon site (Figure 10), and the adaptive method
clearly outperforms the generic-frequency for the general dmoz site
(Figure 11). A closer look at the log files of the two Hidden Web
crawlers reveals the main reason: Amazon was functioning in a
very flaky way when the adaptive crawler visited it, resulting in
a large number of lost results. Thus, we suspect that the slightly
poor performance of the adaptive policy is due to this experimental
variance. We are currently running another experiment to verify
whether this is indeed the case. Aside from this experimental
variance, the Amazon result indicates that if the collection and the
words that a Hidden Web site contains are generic enough, then the
generic-frequency approach may be a good candidate algorithm for
effective crawling.
As in the case of topic-specific Hidden Web sites, the random-based
policies also exhibit poor performance compared to the other
two algorithms when crawling generic sites: for the Amazon Web
site, random-16K succeeds in downloading almost 36.7% after issuing 775 queries, whereas for the generic collection of dmoz, the fraction
of the collection of links downloaded is 13.5% after the 770th
query. Finally, as expected, random-1M is even worse than random-16K
, downloading only 14.5% of Amazon and 0.3% of the generic
dmoz.
In summary, the adaptive algorithm performs remarkably well in
all cases: it is able to discover and download most of the documents
stored in Hidden Web sites by issuing the least number of queries.
When the collection refers to a specific topic, it is able to identify
the keywords most relevant to the topic of the site and consequently
ask for terms that are most likely to return a large number of results. On the other hand, the generic-frequency policy proves to
be quite effective too, though less than the adaptive: it is able to retrieve
relatively fast a large portion of the collection, and when the
site is not topic-specific, its effectiveness can reach that of adaptive
(e.g. Amazon). Finally, the random policy performs poorly in
general, and should not be preferred.
4.2 Impact of the initial query
Figure 13: Convergence of the adaptive algorithm using different initial queries for crawling the PubMed Web site (fraction of documents downloaded vs. query number for the seed queries "pubmed", "data", "information", and "return").

An interesting issue that deserves further examination is whether the initial choice of the keyword used as the first query issued by the adaptive algorithm affects its effectiveness in subsequent iterations. The choice of this keyword is not done by the selection process of the
adaptive algorithm itself and has to be manually set, since its query
statistics tables have not been populated yet. Thus, the selection is
generally arbitrary, so for purposes of fully automating the whole
process, some additional investigation seems necessary.
For this reason, we initiated three adaptive Hidden Web crawlers
targeting the PubMed Web site with different seed-words: the word
"data", which returns 1,344,999 results, the word "information"
that reports 308,474 documents, and the word "return" that retrieves 29,707 pages, out of 14 million. These keywords represent
varying degrees of term popularity in PubMed, with the first
one being of high popularity, the second of medium, and the third
of low. We also show results for the keyword "pubmed", used in
the experiments for coverage of Section 4.1, and which returns 695
articles. As we can see from Figure 13, after a small number of
queries, all four crawlers roughly download the same fraction of
the collection, regardless of their starting point: Their coverages
are roughly equivalent from the 25th query. Eventually, all four
crawlers use the same set of terms for their queries, regardless of
the initial query. In the specific experiment, from the 36th query onward
, all four crawlers use the same terms for their queries in each
iteration, or the same terms are used off by one or two query numbers
. Our result confirms the observation of [11] that the choice of
the initial query has minimal effect on the final performance. We
can explain this intuitively as follows: Our algorithm approximates
the optimal set of queries to use for a particular Web site. Once
the algorithm has issued a significant number of queries, it has an
accurate estimation of the content of the Web site, regardless of
the initial query. Since this estimation is similar for all runs of the
algorithm, the crawlers will use roughly the same queries.
4.3 Impact of the limit in the number of results
While the Amazon and dmoz sites have the respective limit of
32,000 and 10,000 in their result sizes, these limits may be larger
than those imposed by other Hidden Web sites. In order to investigate
how a "tighter" limit in the result size affects the performance
of our algorithms, we performed two additional crawls to
the generic-dmoz site: we ran the generic-frequency and adaptive
policies but we retrieved only up to the top 1,000 results for every
query. In Figure 14 we plot the coverage for the two policies
as a function of the number of queries. As one might expect, by
comparing the new result in Figure 14 to that of Figure 11 where
the result limit was 10,000, we conclude that the tighter limit requires
a higher number of queries to achieve the same coverage.
Figure 14: Coverage of general dmoz after limiting the number of results to 1,000 (cumulative fraction of unique pages downloaded per query for the adaptive and generic-frequency policies).

For example, when the result limit was 10,000, the adaptive policy could download 70% of the site after issuing 630 queries, while
it had to issue 2,600 queries to download 70% of the site when
the limit was 1,000. On the other hand, our new result shows that
even with a tight result limit, it is still possible to download most
of a Hidden Web site after issuing a reasonable number of queries.
The adaptive policy could download more than 85% of the site after
issuing 3,500 queries when the limit was 1,000. Finally, our
result shows that our adaptive policy consistently outperforms the
generic-frequency policy regardless of the result limit. In both Figure
14 and Figure 11, our adaptive policy shows significantly larger
coverage than the generic-frequency policy for the same number of
queries.
4.4 Incorporating the document download cost
For brevity of presentation, the performance evaluation results
provided so far assumed a simplified cost-model where every query
involved a constant cost. In this section we present results regarding
the performance of the adaptive and generic-frequency algorithms
using Equation 2 to drive our query selection process. As we discussed
in Section 2.3.1, this query cost model includes the cost for
submitting the query to the site, retrieving the result index page,
and also downloading the actual pages. For these costs, we examined
the size of every result in the index page and the sizes of the
documents, and we chose c_q = 100, c_r = 100, and c_d = 10,000 as values for the parameters of Equation 2 for the particular
experiment that we ran on the PubMed website. The values that
we selected imply that the cost for issuing one query and retrieving
one result from the result index page are roughly the same, while
the cost for downloading an actual page is 100 times larger. We
believe that these values are reasonable for the PubMed Web site.
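For illustration, the per-query cost under this model is a direct evaluation of Equation 2 with the chosen constants; the following small Java sketch (names are ours) shows the computation:

// Query cost model of Equation 2 with the values used for the PubMed
// experiment: c_q = 100, c_r = 100, c_d = 10,000.
class QueryCostModel {
    static final double C_Q = 100;    // cost of submitting a query
    static final double C_R = 100;    // cost of retrieving one result from the index page
    static final double C_D = 10000;  // cost of downloading one actual document

    // Cost(q_i) = c_q + c_r * P(q_i) + c_d * P_new(q_i)
    static double cost(double pQi, double pNewQi) {
        return C_Q + C_R * pQi + C_D * pNewQi;
    }
}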
Figure 15: Coverage of PubMed after incorporating the document download cost (fraction of unique pages downloaded vs. total cost, with c_q = 100, c_r = 100, c_d = 10,000, for the adaptive and generic-frequency policies).

Figure 15 shows the coverage of the adaptive and generic-frequency algorithms as a function of the resource units used during the download process. The horizontal axis is the amount of resources used, and the vertical axis is the coverage. As is evident from the graph, the adaptive policy makes more efficient use of the available resources, as it is able to download more articles than the generic-frequency policy using the same amount of resource units. However, the difference in coverage is less dramatic in this case, compared to the graph of Figure 9. The smaller difference is due to the fact that under the current cost metric, the download cost of documents constitutes a significant portion of the cost. Therefore, when both policies downloaded the same number of documents, the saving of the adaptive policy is not as dramatic as before. That is, the savings in the query cost and the result index download cost are only a relatively small portion of the overall cost. Still, we observe noticeable savings from the adaptive policy. At the total cost of 8,000, for example, the coverage of the adaptive policy is roughly 0.5 while the coverage of the frequency policy is only 0.3.
RELATED WORK
In a recent study, Raghavan and Garcia-Molina [29] present an
architectural model for a Hidden Web crawler. The main focus of
this work is to learn Hidden-Web query interfaces, not to generate
queries automatically. The potential queries are either provided
manually by users or collected from the query interfaces. In contrast
, our main focus is to generate queries automatically without
any human intervention.
The idea of automatically issuing queries to a database and examining
the results has been previously used in different contexts.
For example, in [10, 11], Callan and Connell try to acquire an accurate
language model by collecting a uniform random sample from
the database. In [22] Lawrence and Giles issue random queries to
a number of Web Search Engines in order to estimate the fraction
of the Web that has been indexed by each of them. In a similar
fashion, Bharat and Broder [8] issue random queries to a set of
Search Engines in order to estimate the relative size and overlap of
their indexes. In [6], Barbosa and Freire experimentally evaluate
methods for building multi-keyword queries that can return a large
fraction of a document collection. Our work differs from the previous
studies in two ways. First, it provides a theoretical framework
for analyzing the process of generating queries for a database and
examining the results, which can help us better understand the effectiveness
of the methods presented in the previous work. Second,
we apply our framework to the problem of Hidden Web crawling
and demonstrate the efficiency of our algorithms.
Cope et al. [15] propose a method to automatically detect whether
a particular Web page contains a search form. This work is complementary
to ours; once we detect search interfaces on the Web
using the method in [15], we may use our proposed algorithms to
download pages automatically from those Web sites.
Reference [4] reports methods to estimate what fraction of a
text database can be eventually acquired by issuing queries to the
database. In [3] the authors study query-based techniques that can
extract relational data from large text databases. Again, these works
study orthogonal issues and are complementary to our work.
In order to make documents in multiple textual databases searchable
at a central place, a number of "harvesting" approaches have
108
been proposed (e.g., OAI [21], DP9 [24]). These approaches essentially
assume cooperative document databases that willingly share
some of their metadata and/or documents to help a third-party search
engine to index the documents. Our approach assumes uncooperative
databases that do not share their data publicly and whose
documents are accessible only through search interfaces.
There exists a large body of work studying how to identify the
most relevant database given a user query [20, 19, 14, 23, 18]. This
body of work is often referred to as meta-searching or database
selection problem over the Hidden Web. For example, [19] suggests
the use of focused probing to classify databases into a topical
category, so that given a query, a relevant database can be selected
based on its topical category. Our vision is different from this body
of work in that we intend to download and index the Hidden pages
at a central location in advance, so that users can access all the
information at their convenience from one single location.
CONCLUSION AND FUTURE WORK
Traditional crawlers normally follow links on the Web to discover
and download pages. Therefore they cannot get to the Hidden
Web pages which are only accessible through query interfaces. In
this paper, we studied how we can build a Hidden Web crawler that
can automatically query a Hidden Web site and download pages
from it. We proposed three different query generation policies for
the Hidden Web: a policy that picks queries at random from a list
of keywords, a policy that picks queries based on their frequency
in a generic text collection, and a policy which adaptively picks a
good query based on the content of the pages downloaded from the
Hidden Web site. Experimental evaluation on 4 real Hidden Web
sites shows that our policies have a great potential. In particular, in
certain cases the adaptive policy can download more than 90% of a Hidden Web site after issuing approximately 100 queries. Given
these results, we believe that our work provides a potential mechanism
to improve the search-engine coverage of the Web and the
user experience of Web search.
6.1 Future Work
We briefly discuss some future-research avenues.
Multi-attribute Databases
We are currently investigating how
to extend our ideas to structured multi-attribute databases. While
generating queries for multi-attribute databases is clearly a more
difficult problem, we may exploit the following observation to address
this problem: When a site supports multi-attribute queries,
the site often returns pages that contain values for each of the query
attributes. For example, when an online bookstore supports queries
on title, author and isbn, the pages returned from a query
typically contain the title, author and ISBN of corresponding books.
Thus, if we can analyze the returned pages and extract the values
for each field (e.g, title = `Harry Potter', author =
`J.K. Rowling'
, etc), we can apply the same idea that we
used for the textual database: estimate the frequency of each attribute
value and pick the most promising one. The main challenge
is to automatically segment the returned pages so that we can identify
the sections of the pages that present the values corresponding
to each attribute. Since many Web sites follow limited formatting
styles in presenting multiple attributes -- for example, most book
titles are preceded by the label "Title:" -- we believe we may learn
page-segmentation rules automatically from a small set of training
examples.
Other Practical Issues
In addition to the automatic query generation
problem, there are many practical issues to be addressed
to build a fully automatic Hidden-Web crawler. For example, in
this paper we assumed that the crawler already knows all query interfaces
for Hidden-Web sites. But how can the crawler discover
the query interfaces? The method proposed in [15] may be a good
starting point. In addition, some Hidden-Web sites return their results
in batches of, say, 20 pages, so the user has to click on a
"next" button in order to see more results. In this case, a fully automatic
Hidden-Web crawler should know that the first result index
page contains only a partial result and "press" the next button automatically
. Finally, some Hidden Web sites may contain an infinite
number of Hidden Web pages which do not contribute much significant
content (e.g. a calendar with links for every day). In this
case the Hidden-Web crawler should be able to detect that the site
does not have much more new content and stop downloading pages
from the site. Page similarity detection algorithms may be useful
for this purpose [9, 13].
REFERENCES
[1] LexisNexis, http://www.lexisnexis.com.
[2] The Open Directory Project, http://www.dmoz.org.
[3] E. Agichtein and L. Gravano. Querying text databases for efficient information
extraction. In ICDE, 2003.
[4] E. Agichtein, P. Ipeirotis, and L. Gravano. Modeling query-based access to text
databases. In WebDB, 2003.
[5] Article in the New York Times. Old Search Engine, the Library, Tries to Fit Into a Google World. Available at: http://www.nytimes.com/2004/06/21/technology/21LIBR.html, June 2004.
[6] L. Barbosa and J. Freire. Siphoning hidden-web data through keyword-based
interfaces. In SBBD, 2004.
[7] M. K. Bergman. The deep web: Surfacing hidden value. http://www.press.umich.edu/jep/07-01/bergman.html.
[8] K. Bharat and A. Broder. A technique for measuring the relative size and
overlap of public web search engines. In WWW, 1998.
[9] A. Z. Broder, S. C. Glassman, M. S. Manasse, and G. Zweig. Syntactic
clustering of the web. In WWW, 1997.
[10] J. Callan, M. Connell, and A. Du. Automatic discovery of language models for
text databases. In SIGMOD, 1999.
[11] J. P. Callan and M. E. Connell. Query-based sampling of text databases.
Information Systems, 19(2):97-130, 2001.
[12] K. C.-C. Chang, B. He, C. Li, and Z. Zhang. Structured databases on the web:
Observations and implications. Technical report, UIUC.
[13] J. Cho, N. Shivakumar, and H. Garcia-Molina. Finding replicated web
collections. In SIGMOD, 2000.
[14] W. Cohen and Y. Singer. Learning to query the web. In AAAI Workshop on
Internet-Based Information Systems, 1996.
[15] J. Cope, N. Craswell, and D. Hawking. Automated discovery of search
interfaces on the web. In 14th Australasian conference on Database
technologies, 2003.
[16] T. H. Cormen, C. E. Leiserson, and R. L. Rivest. Introduction to Algorithms,
2nd Edition. MIT Press/McGraw Hill, 2001.
[17] D. Florescu, A. Y. Levy, and A. O. Mendelzon. Database techniques for the
world-wide web: A survey. SIGMOD Record, 27(3):59-74, 1998.
[18] B. He and K. C.-C. Chang. Statistical schema matching across web query
interfaces. In SIGMOD Conference, 2003.
[19] P. Ipeirotis and L. Gravano. Distributed search over the hidden web:
Hierarchical database sampling and selection. In VLDB, 2002.
[20] P. G. Ipeirotis, L. Gravano, and M. Sahami. Probe, count, and classify:
Categorizing hidden web databases. In SIGMOD, 2001.
[21] C. Lagoze and H. V. Sompel. The Open Archives Initiative: Building a low-barrier interoperability framework. In JCDL, 2001.
[22] S. Lawrence and C. L. Giles. Searching the World Wide Web. Science,
280(5360):98-100, 1998.
[23] V. Z. Liu, R. C. Luo, J. Cho, and W. W. Chu. DPro: A probabilistic
approach for hidden web database selection using dynamic probing. In ICDE,
2004.
[24] X. Liu, K. Maly, M. Zubair and M. L. Nelson. DP9-An OAI Gateway Service
for Web Crawlers. In JCDL, 2002.
[25] B. B. Mandelbrot. Fractal Geometry of Nature. W. H. Freeman & Co.
[26] A. Ntoulas, J. Cho, and C. Olston. What's new on the web? the evolution of the
web from a search engine perspective. In WWW, 2004.
[27] A. Ntoulas, P. Zerfos, and J. Cho. Downloading hidden web content. Technical
report, UCLA, 2004.
[28] S. Olsen. Does search engine's power threaten web's independence? http://news.com.com/2009-1023-963618.html.
[29] S. Raghavan and H. Garcia-Molina. Crawling the hidden web. In VLDB, 2001.
[30] G. K. Zipf. Human Behavior and the Principle of Least-Effort.
Addison-Wesley, Cambridge, MA, 1949.
| crawler;deep web;hidden web;Hidden Web crawling;query selection;efficiency;Deep Web crawler;coverage;keyword selection;adaptive algorithm;potential bias;adaptive algorithmn;accurate language model;keyword query;keyword queries |
75 | Easy Language Extension with Meta-AspectJ | Domain-specific languages hold the potential of automating the software development process. Nevertheless, the adop-tion of a domain-specific language is hindered by the difficulty of transitioning to different language syntax and employing a separate translator in the software build process. We present a methodology that simplifies the development and deployment of small language extensions, in the context of Java. The main language design principle is that of language extension through unobtrusive annotations. The main language implementation idea is to express the language as a generator of customized AspectJ aspects, using our Meta-AspectJ tool. The advantages of the approach are twofold. First, the tool integrates into an existing software application much as a regular API or library, instead of as a language extension. This means that the programmer can remove the language extension at any point and choose to implement the required functionality by hand without needing to rewrite the client code. Second, a mature language implementation is easy to achieve with little effort since AspectJ takes care of the low-level issues of interfacing with the base Java language | INTRODUCTION AND MOTIVATION
(This material is based upon work supported by the National Science Foundation under Grants No. CCR-0220248 and CCR-0238289. Copyright is held by the author/owner. ICSE'06, May 20-28, 2006, Shanghai, China. ACM 1-59593-085-X/06/0005.)

The idea of extensible languages has fascinated programmers for many decades, as evidenced by the extensibility features in languages as old as Lisp. From a Software Engineering
standpoint, the main advantages of expressing a
concept as a language feature, as opposed to a library API,
are in terms of conciseness, safety, and performance.
A
language feature allows expressing the programmer's intent
much more concisely--in contrast, libraries are limited to
a function- or method-call syntax. A language feature also
allows better static error checking--a library can only check
the static types of arguments of a function call against the
declared types of the formals. Finally, a language feature
can take advantage of context information and employ an
optimized implementation, while a library routine cannot be
customized according to its uses.
Despite these advantages, there are excellent reasons why
full language extensibility is undesirable. Changing the syntax
and semantics of a programming language is confusing
and can lead to incomprehensible code. Furthermore, programming
languages are complex entities, designed to provide
a small number of features but allow them to be combined
as generally as possible.
A new feature can either
increase the complexity of the language implementation significantly
(because of interactions with all existing features),
or will need to be limited in its interactions, which is a bad
language design principle that leads to single-use features
and design bloat.
In past work [3], we have advocated expressing small
language extensions purely through unobtrusive annotations
. Indeed, the introduction of user-defined annotations
in mainstream programming languages, such as C# and
Java, has allowed specialized language extensions (e.g., for
distributed computing, persistence, or real-time programming
) to be added without changing the base syntax.
We believe that the approach of limited language extension
through annotations meshes very well with an implementation
technique that uses our Meta-AspectJ (MAJ)
tool [4] to express the semantics of the language extension.
Specifically, MAJ is a language that allows writing programs
that generate aspects in the AspectJ language [1]. The programmer
can easily express an extension to the Java language
as a program that: a) reads annotations and type
information from an existing program using Java reflection;
b) outputs a customized AspectJ aspect that transforms the
original program according to the information in the annotation
; c) executes the generated aspect by using the standard
AspectJ compiler.
In other words, our approach uses the AspectJ language
as a compiler back-end.
AspectJ code is not written by
the application programmer but generated by the language
extension, for the sole purpose of expressing program transformations
easily and generally. This is appropriate, as AspectJ
embodies the Aspect-Oriented Programming [2] philosophy
of expressing program enhancements orthogonally
and independently of the original source code.
Our approach has the advantage of simplifying the implementation
of the language extension significantly, without
encouraging undisciplined language extension (since the
only extensions allowed are through annotations). Specifically
, the approach leverages the engineering sophistication
of the AspectJ compiler implementation and its provisions
for dealing correctly with different Java language features.
If a programmer were to replicate the same effort by hand,
she would likely need to reproduce much of the AspectJ
compiler complexity.
The purpose of this paper is to support the idea of implementing
small language extensions as programs that produce
aspects. We have recently implemented a number of
such small extensions to Java and they all exhibit a striking
simplicity. Specifically, we did not have to implement (or
extend) a Java parser, we did not need to deal with syntax
tree pattern matching and transformation, and we did not
need to provide special handling for many Java complexities.
The combined annotations-MAJ approach ensured that our
small language extensions were implementable in a few hundred lines of code, without sacrificing generality in their
conditions of use. We discuss two such extensions in detail,
after first introducing the MAJ language.
BACKGROUND MAJ SYNTAX
MAJ is an extension of Java that allows writing programs
that generate AspectJ source code.
MAJ offers
two operators for creating AspectJ code fragments: `[...]
("quote") and #[...] ("unquote").
The quote operator
creates representations of AspectJ code fragments. Parts
of these representations can be variable and are designated
by the unquote operator (instances of unquote can
only occur inside a quoted code fragment).
For example
, the value of the MAJ expression `[call(* *(..))] is
a data structure that represents the abstract syntax tree
for the fragment of AspectJ code call(* *(..)).
Similarly
, the MAJ expression `[!within(#className)] is a
quoted pattern with an unquoted part. Its value depends
on the value of the variable className.
If, for instance,
className holds the identifier "SomeClass", the value of
`[!within(#className)] is the abstract syntax tree for the
expression !within(SomeClass).
MAJ also introduces a new keyword infer that can be
used in place of a type name when a new variable is being
declared and initialized to a quoted expression. For example,
we can write:
infer pct1 = `[call(* *(..))];
This declares a variable pct1 that can be used just like any
other program variable. For instance, we can unquote it:
infer adv1 = `[void around() : #pct1 { }];
This creates the abstract syntax tree for a piece of AspectJ
code defining (empty) advice for a pointcut. Of course, since
AspectJ is an extension of Java, any regular Java program
fragment can be generated using MAJ.
We can now see a full MAJ method that generates a trivial
but complete AspectJ file:
void generateTrivialLogging(String classNm) {
infer aspectCode =
`[
package MyPackage;
aspect #[classNm + "Aspect"] {
before : call(* #classNm.*(..))
{ System.out.println("Method called"); }
}
];
System.out.println(aspectCode.unparse());
}
The generated aspect causes a message to be printed before
every call of a method in a class. The name of the affected
class is a parameter passed to the MAJ routine. This code
also shows the unparse method that MAJ supports for creating a text representation of the generated code.
EXAMPLE 1 FILLING INTERFACE METHODS
Our first language extension is simple but a good example
to our approach, since it can be defined very quickly and it
is hard to implement with alternate means.
The Java language ensures that a class cannot declare
to "implement" an interface unless it provides implementations
for all of its methods. Nevertheless, this often results
in very tedious code. For instance, it is very common in
code dealing with the Swing graphics library to implement
an event-listener interface with many methods, yet provide
empty implementations for most of them because the application
does not care about the corresponding events. The
example code below is representative:
private class SomeListener
implements MouseListener, MouseMotionListener {
public void mousePressed (MouseEvent event) {
... // do something
}
public void mouseDragged (MouseEvent event) {
... // do something
}
// the rest are not needed. Provide empty bodies.
public void mouseReleased (MouseEvent event) {}
public void mouseEntered (MouseEvent event) {}
public void mouseExited (MouseEvent event) {}
public void mouseMoved (MouseEvent event) {}
}
Of course, the programmer could avoid providing the
empty method bodies on a per-interface basis, by associating
each interface with a class that by default provides empty
implementations of all interface methods. Then a client class
can inherit the empty implementations and only provide implementations
for the methods it needs. This pattern is indeed
supported in Swing code (through library classes called
adapters), but it is usually not possible to employ since the
listener class may already have another superclass. Instead,
it would be nice to provide a simple Java language extension
implemented as an annotation. The implementation of
the extension would be responsible for finding the unimplemented
methods and supplying empty implementations by
default (or implementations that just return a default primitive
or null value, in the case of methods that have a return
type). In this case, the above class could be written more
simply as:
@Implements ({"MouseListener","MouseMotionListener"})
public class SomeListener {
public void mousePressed (MouseEvent event) {
... // do something
}
public void mouseDragged (MouseEvent event) {
... // do something
}
}
Of course, this extension should be used carefully since
it weakens the tests of interface conformance performed by
the Java compiler.
We implemented the above Java extension using MAJ.
The code for the implementation was less than 200 lines
long, with most of the complexity in the traversal of Java
classes, interfaces, and their methods using reflection. The
code processes a set of given Java classes and retrieves the
ones with an Implements annotation. It then finds all methods
that are in any of the interfaces passed as arguments to
the Implements annotation and are not implemented by the
current class. For each such method, code is generated in an
AspectJ aspect to add an appropriate method implementation
to the class. For instance, the code to add the method
to the class in the case of a void return type is:
infer newMethod =
`[ public void #methodName (#formals) {} ];
aspectMembers.add(newMethod);
Finally, the class needs to be declared to implement the
interfaces specified by the annotation. This is easily done
by emitting the appropriate AspectJ code:
infer dec = `[declare parents:
#[c.getName()] implements #[iface.getName()]; ];
The final aspect (slightly simplified for formatting reasons
) generated for our earlier listener class example is:
public aspect SomeListenerImplementsAspect1 {
void SomeListener.mouseEntered(MouseEvent e) {}
void SomeListener.mouseExited(MouseEvent e) {}
void SomeListener.mouseMoved(MouseEvent e) {}
void SomeListener.mouseReleased(MouseEvent e) {}
declare parents:
SomeListener implements MouseListener;
declare parents:
SomeListener implements MouseMotionListener;
}
This aspect performs exactly the modifications required
to the original class so that it correctly implements the
MouseListener and MouseMotionListener interfaces.
We invite the reader to consider how else this language
extension might be implemented. Our approach of using
annotations in combination with MAJ yielded a very simple
implementation by letting AspectJ deal with most of the
complexities of Java. Specifically, we did not have to deal
with the low-level complexities of either Java source syntax
or Java bytecode. For instance, we did not have to do any
code parsing to find the class body or declaration that needs
to be modified.
Dealing with Java syntactic sugar, such
as inner classes, was automatic. We did not need to do a
program transformation to add the implements clauses or
the extra methods to the class. Similarly, we did not need
to worry about the valid syntax for adding an implemented
interface if the class already implements another.
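To give a flavor of the reflective traversal mentioned above, the following is a minimal Java sketch (ours, not the actual implementation; the class and method names are hypothetical) of how the unimplemented interface methods could be discovered before the corresponding AspectJ code is generated:

import java.lang.reflect.Method;
import java.util.ArrayList;
import java.util.List;

// Hypothetical helper: collects the methods declared by the given interfaces
// that the annotated class does not already implement.
class MissingMethodFinder {
    static List<Method> findMissing(Class<?> annotated, Class<?>... interfaces) {
        List<Method> missing = new ArrayList<Method>();
        for (Class<?> iface : interfaces) {
            for (Method m : iface.getMethods()) {
                try {
                    // If the class (or a superclass) already provides the method,
                    // nothing needs to be generated for it.
                    annotated.getMethod(m.getName(), m.getParameterTypes());
                } catch (NoSuchMethodException e) {
                    // Otherwise an empty implementation must be added by the
                    // generated aspect.
                    missing.add(m);
                }
            }
        }
        return missing;
    }
}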
EXAMPLE 2 LANGUAGE SUPPORT FOR OBJECT POOLING
Our second example language extension addresses a common
programming need, especially in server-side programming
. Software applications often find the need for pooling
frequently-used objects with high instantiation costs. We
use the following database connection class as a running example
:
public class DBConnection {
public DBConnection(String dbURI,
String userName,
String password ) { ... }
public void open() { ... }
public void close() { ... }
}
The cost of an open() call is very high for a database connection
. In applications concerned with performance, such as
high-volume websites with lots of database requests, one often
finds the need to pool database connections and keep
them open, instead of repeatedly creating new ones and
opening them.
Making a class such as DBConnection into a "pooled"
class involves at the very least creating a pooling manager
class that knows how to manage instances of the class being
pooled.
A different pooling manager class is needed
for each class being pooled, since the manager needs to
have class-specific information such as how to instantiate
a new instance of the class when the pool is running low
(e.g., DBConnection objects are created by a constructor
call, followed by an open() call), and how to uniquely identify
objects of the same class that belong to different pools
(e.g., DBConnection objects of different dbURI, userName,
and password combinations need to be in different pools,
and the pooling manager needs to understand which pool to
fetch objects from when a request arrives).
We expressed the pooling concept as a language feature
that can be used transparently with any Java class, as long
as some broad properties hold regarding its construction
and instantiation interface. The rest of the application will
be completely oblivious to the change. This facilitates the
application of pooling after a large code base which uses
the class in its non-pooled form has been developed. Using
our extension, converting a class to a pooled class involves
only 4 annotations: @pooled, @constructor, @request, and
@release. For example, to convert the DBConnection class
into a "pooled" class, and to adapt an existing code base to
using the pooled functionality, the user only has to add the
following annotations to the code:
@pooled(mgr=pooled.PoolTypes.BASIC, max=10, min=2)
public class DBConnection {
@constructor
public DBConnection(String dbURI,String userName,
String password ) { ... }
@request public void open() { ... }
@release public void close() { ... }
}
The @pooled annotation indicates that class DBConnection
should be pooled. It accepts parameters that can be used to
customize the pooling policy. @constructor annotates the
constructor call whose parameters serve as unique identifiers
for different kinds of DBConnection objects. In this example,
DBConnection objects with different dbURI, userName, and
password combinations should be maintained separately.
@request annotates the method that signals for the request
of a pooled object, and @release annotates the method call
that signals for the return of the pooled object back to the
pooling manager.
The implementation of this extension using MAJ is less
than 400 lines of code.
The MAJ program searches for
classes annotated with @pooled, and generates two Java
classes and one aspect to facilitate converting this class to be
pooled. We next describe the generated code in more detail.
The reader may want to consider in parallel how the same
task could be accomplished through other means. Neither
conventional Java facilities (i.e., classes and generics) nor
AspectJ alone would be sufficient for expressing the functionality
we describe below in a general way, so that it can
be applied with little effort to arbitrary unmodified classes.
For instance, none of these facilities can be used to create a
proxy class with methods with identical signatures as those
of an arbitrary Java class.
First, a pooling manager class, PoolMgrForDBConnection,
is generated for DBConnection.
The pooling manager
class contains methods for requesting and releasing pooled
DBConnection objects, as well as code to manage the expansion
of the pool based on the min and max parameters.
In order to retrofit an existing code base to use
DBConnection as a pooled class, we need to introduce proxy
objects that will be used wherever an object of the original
class would exist in the application code. This is necessary
as different objects from the perspective of the client code
will correspond to the same pooled object. We generate a
proxy class as a subclass of the pooled class. In our example
: DBConnection_Proxy extends DBConnection. All
instances of the proxy class share a static reference to
an instance of PoolMgrForDBConnection.
Each proxy instance
holds (non-static) references to the parameters to
the @constructor constructor call, and the DBConnection
object obtained from the pooling manager. The proxy class
rewrites the @request and @release methods: the @request
method is rewritten to obtain an object of DBConnection
type from the pooling manager, using the unique identifiers
kept from the constructor call, and the @release method
returns the DBConnection method back to the pool, while
setting the reference to this object to null. The MAJ code in
the proxy takes care to exactly replicate the signature of the
original methods, including modifiers and throws clauses.
For instance, the @release method in the proxy is generated
as:
infer meth =
`[ #mods #ret #[m.getName()] (#formals) #throwStmt
{
m_poolMgr.release(m_uniqueId, m_proxiedObj);
m_proxiedObj = null;
}];
All other methods simply delegate to the same method in
the superclass.
The idea is to have variables that were declared to hold a DBConnection object now hold a DBConnection_Proxy object. Therefore, to complete the "proxy" pattern, we need to change all the calls of new DBConnection(...) to new DBConnection_Proxy(...). This is the role of our generated aspect: tedious recoding effort is easily replaced by an aspect that intercepts all the constructor calls of DBConnection and returns an object instantiated by calling new DBConnection_Proxy(...).
In summary, a user can easily turn a class into a pooled
class, and retrofit any existing code base to use this class in
its new, pooled form. The client code does not need to be
hand-edited at all, other than with the introduction of our
4 annotations.
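To make the generated structure more concrete, the following hand-written Java sketch shows roughly what the proxy class could look like for the DBConnection example. It is purely illustrative: the real class is emitted by the MAJ program, and details such as the pooling manager's request method and the encoding of the unique identifier are assumptions here, not the actual generated code.

// Illustrative sketch only; the actual proxy is generated by the MAJ program.
public class DBConnection_Proxy extends DBConnection {
    // All proxies share one pooling manager for DBConnection objects.
    private static PoolMgrForDBConnection m_poolMgr = new PoolMgrForDBConnection();

    private Object m_uniqueId;          // identifies the pool for these constructor arguments
    private DBConnection m_proxiedObj;  // the pooled object currently held, or null

    public DBConnection_Proxy(String dbURI, String userName, String password) {
        super(dbURI, userName, password);
        // Remember the @constructor arguments; they identify the pool (assumed encoding).
        m_uniqueId = dbURI + "|" + userName + "|" + password;
    }

    // The @request method: obtain a pooled object instead of opening a new one.
    public void open() {
        m_proxiedObj = m_poolMgr.request(m_uniqueId);  // assumed manager API
    }

    // The @release method: return the object to the pool, as in the generated code above.
    public void close() {
        m_poolMgr.release(m_uniqueId, m_proxiedObj);
        m_proxiedObj = null;
    }
}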
FUTURE WORK
We believe that the years to come will see the emergence
of a healthy ecology of small language extensions
based on the annotation features of Java and C#. There
are already major examples of such extensions, especially
with distribution- and persistence-related annotations, implemented
in the context of J2EE Application Servers. Such
extensions can be implemented with heavyweight support-e
.g., parsing files, or recognizing annotations in a class loader
and performing bytecode manipulation. In fact, the JBoss
AOP mechanism (in whose early design and implementation
we have played an active role) is the foremost example
of infrastructure used to implement annotation-based language
extensions. Nevertheless, experience from compilers
in general-purpose languages has shown that it is beneficial
to develop a mature back-end language and implement high-level
features by translating to that back-end. Our approach
proposes that AspectJ is well-suited as such a back-end language
for small Java language extensions and that generating
AspectJ code offers significant simplicity benefits. In
the future we plan to support this claim with more examples
and perform a thorough comparison with competing
mechanisms.
REFERENCES
[1] G. Kiczales, E. Hilsdale, J. Hugunin, M. Kersten, J. Palm, and
W. G. Griswold. An overview of AspectJ. In ECOOP '01:
Proceedings of the 15th European Conference on
Object-Oriented Programming, pages 327-353, London, UK,
2001. Springer-Verlag.
[2] G. Kiczales, J. Lamping, A. Menhdhekar, C. Maeda, C. Lopes,
J.-M. Loingtier, and J. Irwin. Aspect-oriented programming. In
M. Ak
sit and S. Matsuoka, editors, Proceedings European
Conference on Object-Oriented Programming, volume 1241,
pages 220242. Springer-Verlag, Berlin, Heidelberg, and New
York, 1997.
[3] Y. Smaragdakis. A personal outlook on generator research. In
C. Lengauer, D. Batory, C. Consel, and M. Odersky, editors,
Domain-Specific Program Generation. Springer-Verlag, 2004.
LNCS 3016.
[4] D. Zook, S. S. Huang, and Y. Smaragdakis. Generating
AspectJ programs with meta-AspectJ. In Generative
Programming and Component Engineering (GPCE), pages
118. Springer-Verlag, October 2004.
| language extensions;annotation;domain-specific language;language extension;Meta-AspectJ;Java;simplicity;domain-specific languages |
76 | Hourly Analysis of a Very Large Topically Categorized Web Query Log | We review a query log of hundreds of millions of queries that constitute the total query traffic for an entire week of a general-purpose commercial web search service. Previously, query logs have been studied from a single, cumulative view. In contrast, our analysis shows changes in popularity and uniqueness of topically categorized queries across the hours of the day. We examine query traffic on an hourly basis by matching it against lists of queries that have been topically pre-categorized by human editors. This represents 13% of the query traffic. We show that query traffic from particular topical categories differs both from the query stream as a whole and from other categories. This analysis provides valuable insight for improving retrieval effectiveness and efficiency. It is also relevant to the development of enhanced query disambiguation, routing, and caching algorithms. | INTRODUCTION
Understanding how queries change over time is critical to
developing effective, efficient search services. We are unaware of
any log analysis that studies differences in the query stream over
the hours in a day; much less how those differences are
manifested within topical categories. We focus on Circadian
changes in popularity and uniqueness of topical categories.
Emphasis on changing query stream characteristics over this
longitudinal (time) aspect of query logs distinguishes this work
from prior static log analysis, surveyed in [7].
We began with the hypothesis that there are very different
characteristics during peak hours and off-peak hours during a day.
After reviewing a week's worth of data (hundreds of millions of
queries), we have found, not surprisingly, that:
The number of queries issued is substantially lower during
non-peak hours than peak hours.
However, we knew little about how often queries are repeated
from one hour of the day to the next. After examining the
behavior of millions of queries from one hour of the day to the
next we have found the less obvious result:
The average number of query repetitions in an hour does not
change significantly on an hourly basis throughout the day.
Most queries appear no more than several times per hour.
These queries consistently account for a large portion of total
query volume throughout the course of the day.
The queries received during peak hours are more similar to
each other than their non-peak hour counterparts.
We also analyze the queries representing different topics using a
topical categorization of our query stream. These cover
approximately 13% of the total query volume. We hypothesized
that traffic behavior for some categories would change over time
and that others would remain stable. For 16 different categories,
we examined their traffic characteristics:
Some topical categories vary substantially more in
popularity than others as we move through an average day.
Some topics are more popular during particular times of the
day, while others have a more constant level of interest over
time.
The query sets for different categories have differing
similarity over time. The level of similarity between the
actual query sets received within topical categories varies
differently according to category.
This leads us to believe that predictive algorithms that are able to
estimate the likelihood of a query being repeated may well be
possible. This could have a significant impact on future cache
management and load-balancing algorithms. Such algorithms
could improve retrieval effectiveness by assisting in query
disambiguation, making it easier to determine what information
need is being expressed by a query at a given time. They could
also assist research in search efficiency that takes into account
query arrival-rates [3].
Our analysis covers the entirety of the tens of millions of queries
each day in the search log from America Online
over a
complete week in December. This represents a population of tens
of millions of users searching for a wide variety of topics. Section
2 reviews the prior work in query log analysis. Section 3
describes our analysis of overall query traffic. Section 4 describes
our analysis of trends in categorized queries. Finally, in Section 5
we present our conclusions and directions for future work.
PRIOR WORK
Examinations of search engine evaluation indicate that
performance likely varies over time due to differences in query
sets and collections [6]. Although the change in collections over
time has been studied (e.g., the growth of the web) [10], analysis
of users' queries has been primarily limited to the investigation of
a small set of available query logs that provide a snapshot of their
query stream over a fixed period of time. Prior work can be
partitioned into static query log analysis and some recent
disclosures by web search engines.
Query log analysis can be partitioned into large-scale log analysis,
small-scale log analysis and some other applications of log
analysis such as categorization and query clustering. Jansen and
Pooch provide a framework for static log analysis, but do not
address analysis of changes in a query stream over time [7].
Given that most search engines receive on the order of between
tens and hundreds of millions of queries a day [22], current and
future log analysis efforts should use increasingly larger query sets
to ensure that prior assumptions still hold.
Previous studies measured overall aspects of users' queries from
static web query logs. In the only large-scale study (all others
involve only a few million queries), Silverstein concludes that
users typically view only the top ten search results and that they
generally enter short queries from a static analysis of an AltaVista
query log from six weeks in 1998 consisting of 575 million non-empty
queries [16]. He also found that only 13.6% of queries
appear more than three times, the top 25 queries represent 1.5% of
the total query volume, and in 75% of sessions users do not revise
their queries. Additionally, co-occurrence analysis of the most
frequent 10,000 queries showed that the most correlated terms are
often constituents of phrases. No time-based or topic-based
analysis of this query load was reported; it does not provide
insight into how or when any usage or topical interest changes
occur. Other studies examine the effect of advanced query
operators on the search service coverage of Google, MSN, and
AOL, finding that in general, they had little effect [4]. These
overall statistics do not provide any insight into temporal changes
in the query log, but do provide some insight into how people use
search services.
Jansen et al. also provide analysis of query frequency [7][19].
Their findings indicate that the majority (57%) of query terms
from the Excite log of more than 51,000 queries are used only
once, and a large majority (78%) occur three times or less. These
studies show that neither queries nor their component terms
follow a Zipfian distribution, as the number of rare, infrequently
repeated queries and terms is disproportionately large. Other
studies have focused on user behavior at the query session level
and found varying results, with some estimating reformulated
queries constituting 40-52% of queries in a log [18][21]. Wang
et al. examined a log of more than 500,000 queries to a university
search engine from 1997-2001 [23]. They find trends in the
number of queries received by season, month, and day. We
extend upon this work by examining the larger community of
general web searchers and analyzing trends corresponding to hour
of day.
Several studies examine query categories in small, static logs.
Spink et al. analyzed logs totaling more than one million queries
submitted to the Excite web search engine during single days in
1997, 1999, and 2001 [18][19][20]. They classified
approximately 2,500 queries from each log into 11 topical
categories and found that although search topics have changed
over the years, users' behaviors have not. Ross and Wolfram
categorized the top 1,000 term pairs from the one million query
Excite log into 30 subject areas to show commonalities of terms in
categories [14]. Jansen et al. used lists of terms to identify image,
audio, and video queries and measure their presence in the one
million query Excite log [9]. In order to examine the differences
in queries from users in different countries, Spink et al. examined
a 500,000 query log from the FAST web search engine during
2001, believed to be used largely by Europeans at that time,
classifying 2,500 queries from it into the same topical categories.
They found differences between FAST and Excite in the topics
searched for [17].
Other work manually grouped queries by task. Broder defines
queries as informational, navigational or transactional and
presents a study of AltaVista users via a popup survey and manual
categorization of 200 queries from a log [2]. Beitzel et al.
implicitly categorized queries from a search log as navigational by
matching them to edited titles in web directories to automatically
evaluate navigational web search [1]. Xie and Wolfram
automatically categorized query terms by using results from web
search engines to assign the terms to broad subject categories [25].
Several studies of query caching examine query frequency
distributions from a static log, focusing on the average likelihood
of an arbitrary query being repeated over the entire, fixed-length
log. Lempel and Moran evaluated the performance of caching
strategies over a log of seven million queries to AltaVista in 2001
and found that the frequencies of queries in their log followed a
power law [11]. Eiron and McCurley compared query vocabulary
from a log of nearly 1.3 million queries posed to a corporate
intranet to the vocabulary of web page anchor text and found that
the frequency of queries and query terms follows a tail-heavy
power law [5]. Xie and O'Hallaron studied query logs from the
Vivisimo meta-search engine of 110,881 queries over one month
in 2001 in comparison to the Excite log of 1.9 million over one
day in 1999 and found that although as in other studies over half
of the queries are never repeated, the frequencies of queries that
are repeated do follow a Zipfian distribution [26]. Saraiva et al.
evaluated a two-level caching scheme on a log of over 100,000
queries to a Brazilian search engine and found that query
frequencies follow a Zipf-like distribution [15]. Markatos
simulated the effect of several types of query caches on an Excite
query log of approximately one million queries and found that
traditional caching methods provide significant improvements in
efficiency [12]. Although traditional MRU-style caches obviously
enhance throughput by exploiting temporal locality at the
minute-to-minute level, these studies do not examine changes in the query
stream according to the hour of the day that may be leveraged in
enhanced cache design.
It is well known that different users represent the same
information need with different query terms, making query
clustering attractive when examining groups of related queries.
However, as Raghavan and Sever have shown, traditional
similarity measures are unsuitable for finding query-to-query
similarity [13]. Wen et al. incorporated click-through to cluster
users' queries [24]. In evaluating their system, they analyzed a
random subset of 20,000 queries from a single month of their
approximately 1-million queries-per-week traffic. They found
that the most popular 22.5% queries represent only 400 clusters of
queries using differing sets of query terms.
Many web search services have begun to offer views of the most
popular and/or changing (becoming drastically more or less
popular) queries: AOL Member Trends, Yahoo - Buzz Index,
Lycos - The Lycos 50 with Aaron Schatz, Google Zeitgeist,
AltaVista - Top Queries, Ask Jeeves, Fast (AllTheWeb). These
views necessarily incorporate a temporal aspect, often showing
popular queries for the current time period and those that are
consistently popular. Some also break down popularity by topical
categories. Systems seeking to display changing queries must
address the issue of relative versus absolute change in a query's
frequency to find queries whose change is "interesting", not
simply a query that went from frequency one to two (a 200%
jump), or one that went from 10,000 to 11,000 (a 1000 absolute
change).
OVERALL QUERY TRAFFIC
We examine a search log consisting of hundreds of millions of
queries from a major commercial search service over the seven-day
period from 12/26/03 through 1/1/04. This log represents
queries from approximately 50 million users. We preprocess the
queries to normalize the query strings by removing any case
differences, replacing any punctuation with white space (stripping
advanced search operators from the approximately 2% of queries
containing them), and compressing white space to single spaces.
The average query length is 1.7 terms for popular queries and 2.2
terms over all queries. On average, users view only one page of
results 81% of the time, two pages 18% and three or more 1% of
the time. First, we examine trends in the query stream as a whole,
and then focus on trends related to queries manually categorized
into topical categories.
We begin our analysis of the overall stream by examining how the
volume of query traffic changes as we move from peak to non-peak
hours. We show the percentage of the day's total and
distinct number of queries for each hour in the day on average
over our seven-day period in Figure 1 (all times in our query log
are Eastern Standard Time). Only 0.75% of the day's total queries
appear from 5-6AM, whereas 6.7% of the day's queries appear
from 9-10PM. Perhaps more interestingly, the ratio of distinct to
total queries in a given hour is nearly constant throughout the day.
This shows that the average number of times a query is repeated is
virtually constant over the hours in a day, remaining near 2.14
with only a 0.12 standard deviation.
Although the average repetition of queries remains nearly
constant, we can examine this in greater detail by measuring the
frequency distribution of queries at various hours in the day, as
seen in Figure 2. From this analysis it is clear that the vast
majority of queries in an hour appear only one to five times and
that these rare queries consistently account for large portions of
the total query volume throughout the course of the day.
Figure 1: Percentage of average daily query traffic at each hour of the day (average total queries vs. average distinct queries).
Although we have shown that the query distribution does not
change substantially over the course of a day, this does not
provide insight into how the sets of queries vary from one hour to
the next. To examine this, we measure the overlap between the
sets of queries entered during those hours. We use traditional set
and bag overlap measures as given in Equation 1 and Equation 2,
respectively. Distinct overlap measures the similarity between the
sets of unique queries from each hour, while overall (bag) overlap
measures the similarity of their frequency distributions by
incorporating the number of times each query appears in an hour,
C(q_i; A). While these measures examine the similarity of the sets
of queries received in an hour and the number of times they are
entered, they do not incorporate the relative popularity or ranking
of queries within the query sets. To examine this, we also
measure the Pearson correlation of the queries' frequencies. As
can be seen from Equation 3 (where \bar{C}(q;A) is the mean number
of query repetitions in period A and s_{C(q;A)} is the standard
deviation of all the query frequencies in period A), this measures
the degree of linear correlation between the frequencies of the
queries in each hour, so two hours that had exactly the same
queries with exactly the same frequencies would have a
correlation of one. Note that this normalizes for the effect of
differing query volume, i.e., the correlation of two hours with
exactly the same underlying query distributions simply scaled by a
constant would also have a correlation of one.
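To make this scale-invariance concrete, here is a short worked check (ours, not from the original text), using the sample standard deviation with an n-1 denominator as in Equation 3. If C(q_i;B) = k · C(q_i;A) for every query q_i and some constant k > 0, then \bar{C}(q;B) = k · \bar{C}(q;A) and s_{C(q;B)} = k · s_{C(q;A)}, so each summand of Equation 3 becomes
(C(q_i;A) - \bar{C}(q;A)) · k · (C(q_i;A) - \bar{C}(q;A)) / ( s_{C(q;A)} · k · s_{C(q;A)} ),
the factor k cancels, and r = (1/(n-1)) Σ_i ( (C(q_i;A) - \bar{C}(q;A)) / s_{C(q;A)} )^2 = 1.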
Figure 2: Frequency distribution of queries for selected hours of 12/26/03: percentage of total queries falling in each frequency range, shown for 12AM-1AM, 6AM-7AM, 12PM-1PM, and 6PM-7PM.
overlap_dist(A, B) = |A ∩ B| / |A ∪ B|
Equation 1: Distinct Overlap of Query Sets from Hours A and B

overlap(A, B) = Σ_{q_i ∈ A∩B} min(C(q_i;A), C(q_i;B)) / ( Σ_{q_i ∈ A} C(q_i;A) + Σ_{q_i ∈ B} C(q_i;B) - Σ_{q_i ∈ A∩B} min(C(q_i;A), C(q_i;B)) )
Equation 2: Overall Overlap of Query Sets from Hours A and B

r = (1/(n-1)) Σ_{i=1}^{n} (C(q_i;A) - \bar{C}(q;A)) (C(q_i;B) - \bar{C}(q;B)) / ( s_{C(q;A)} s_{C(q;B)} )
Equation 3: Pearson Correlation of Query Frequencies from Hours A and B
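As a concrete illustration, the following sketch (ours, not part of the paper) computes the three measures for two hours represented as query-frequency tables. The example counts are made up, and computing the Pearson correlation over all queries seen in either hour is one reasonable reading of Equation 3, not something the paper spells out.

from collections import Counter
import math

def distinct_overlap(a, b):
    """Equation 1: set intersection over set union of the distinct queries."""
    inter = len(a.keys() & b.keys())
    union = len(a.keys() | b.keys())
    return inter / union if union else 0.0

def bag_overlap(a, b):
    """Equation 2: bag intersection over bag union of the hourly query counts."""
    inter = sum(min(a[q], b[q]) for q in a.keys() & b.keys())
    union = sum(a.values()) + sum(b.values()) - inter
    return inter / union if union else 0.0

def pearson(a, b):
    """Equation 3: correlation of per-query frequencies, over queries seen in either hour."""
    qs = sorted(a.keys() | b.keys())
    xs = [a[q] for q in qs]
    ys = [b[q] for q in qs]
    n = len(qs)
    mx, my = sum(xs) / n, sum(ys) / n
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs) / (n - 1))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys) / (n - 1))
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / ((n - 1) * sx * sy)

# Made-up counts for two hours of the log:
hour_a = Counter({"britney spears": 120, "lyrics": 80, "weather": 5})
hour_b = Counter({"britney spears": 110, "lyrics": 90, "maps": 3})
print(distinct_overlap(hour_a, hour_b), bag_overlap(hour_a, hour_b), pearson(hour_a, hour_b))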
Figure 3: Average overlap characteristics of matching queries from 1/2/04: overlap, distinct overlap, and Pearson correlation by hour of day.
In Figure 3 we examine the average level of overlap and
correlation between the query sets received during the same hour
for each day over our week. As measuring overlap over the set of
all queries appearing in our week would be computationally
expensive, we use the set of all the tens of millions of queries in
the day after our seven-day period as an independent sample and
measure overlap at each hour in our week of the queries matching
those in that sample. Although we previously saw that the
frequency distribution of queries does not substantially change
across hours of the day, Figure 3 shows that the similarity between
the actual queries that are received during each hour does in fact
change. This trend seems to follow query volume, which is
apparent if we sort the same overlap data by query volume as is
done in Figure 4. Clearly, as query volume increases the queries
that compose that traffic are more likely to be similar across
samples of those peak time periods.
This finding is consistent with prior analyses of web query caches
showing they significantly improve performance under heavy
load. The more redundancy they are able to detect, the more
caching algorithms are able to enhance throughput. Although the
prior work primarily measures the effect of this redundancy in
cache performance, it is obvious that redundancy must exist and
be detected for caching to succeed. By examining the overall
query stream by hour we are able to infer the effectiveness of
general caching algorithms at those times.
Figure 4: Sorted average overlap characteristics from 1/2/04 that matched each hour: overlap, distinct overlap, and Pearson correlation, with hours sorted by query volume.
QUERY CATEGORIES
In Section 3 we analyzed the entire query log. However, this
blanket view of the query traffic does not provide insight into the
characteristics of particular categories of queries that might be
exploited for enhanced efficiency or effectiveness. For example, a
search provider who returns specialized results for entertainment
queries cannot determine from general query traffic alone whether
a given query is more likely to be referring to entertainment
related content or how to best process and cache that query.
The remainder of our analysis focuses on trends relating to topical
category of queries. Our query set is categorized simply by
exactly matching queries to one of the lists corresponding to each
category. These lists are manually constructed by editors who
categorize real users' queries, generate likely queries, and import
lists of phrases likely to be queries in a category (e.g., cities in the
US for the US Sites category). Queries that match at least one
category list comprise 13% of the total query traffic on average.
This represents millions of queries per day.
Figure 5: Sampled categorized query stream breakdown: Personal Finance 3%, Computing 9%, Research & Learn 9%, Entertainment 13%, Games 5%, Holidays 1%, Home 5%, US Sites 3%, Porn 10%, Shopping 13%, Sports 3%, Travel 5%, Health 5%, Other 16%.
To verify that our defined category lists sufficiently cover the
topics in the query stream, we manually classified a random
sample of queries, assigning them to "Other" if they did not
intuitively fit into an existing category, as can be seen in Figure 5.
To determine the number of queries required to achieve a
representative sample, we calculate the necessary sample size in
queries, ss = (z^2 σ^2) / ε^2, where z is the confidence level value, σ is
the sample standard deviation, and ε
is the error rate. By setting
our confidence level to 99% and error rate to 5%, we require a
sample of 600 queries. The relative percentages for each category
of the approximately 13% of query volume that match any
category list over our week (see Figure 9) are within the error rate
of those from our manually categorized sample. This shows that
our lists are a reasonable representation of these topical categories.
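As a worked check of the sample-size formula (our arithmetic; σ is back-solved from the stated result rather than reported in the text): taking z = 2.576 for 99% confidence, ε = 0.05, and σ ≈ 0.475,
ss = (z^2 σ^2) / ε^2 = (2.576^2 × 0.475^2) / 0.05^2 ≈ 599,
which matches the reported requirement of roughly 600 queries.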
We focus on a subset of these categories and examine music and
movies independent of other entertainment queries. The relative
size of each category list we used is given in Figure 6. Obviously,
not all queries listed actually match those entered by users,
especially when the category contains large imported lists of
phrases.
Figure 6: Relative percentage of categorized queries contributed by each category list.
Although we have shown that our lists are a fair representation of
the topics in the query stream, this does not indicate what portion
of the frequency distribution of that stream they represent. To
determine this, we measured the average proportion of queries
matching any category list that appear at various frequencies each
hour and compared them to the average overall hourly frequency
distribution of the query stream (see Figure 7). Unsurprisingly,
this comparison shows that queries in the category lists represent
more popular, repeated queries than average, although the general
shape of the distributions is similar.
Figure 7: Hourly frequency distribution of matching queries vs. all queries, averaged over 7 days and 16 categories (percentage of queries by frequency range).
4.1 Trends in Category Popularity
We begin our temporal analysis of topical categories by
measuring their relative popularity over the hours in a day. First,
we examine the percent of total query volume matching a selected
group of category lists, as can be seen in Figure 8. It is clear that
different topical categories are more and less popular at different
times of the day. Personal finance, for example, becomes more
popular from 7-10AM, while music queries become less popular.
Although it is difficult to compare the relative level of popularity
shift from one category to another due to the differences in scale
of each of their percentages of the query stream, it is clear that
some categories' popularity changes more drastically throughout
the day than others.
Figure 8: Percentage of query traffic over the hours of the day for selected categories (Entertainment, Games, Health, Personal Finance, Shopping, Music, US Sites, Porn).
In order to quantify this, we calculated the KL-divergence
(Equation 4) between the likelihood of receiving any query at a
particular time and the likelihood of receiving a query in a
particular category, as can be seen in Figure 9. This reveals that
the top three categories in terms of popularity are pornography,
entertainment, and music.
D( p(q|t) || p(q|c,t) ) = Σ_q p(q|t) log( p(q|t) / p(q|c,t) )
Equation 4: KL-Divergence of Query Occurrence Likelihood for Category c and Total Stream at Time t
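A minimal sketch (ours) of computing the divergence above from two estimated distributions; the paper does not describe any smoothing, so skipping zero-probability terms here is an assumption.

import math

def kl_divergence(p_all, p_cat):
    """D( p(q|t) || p(q|c,t) ) as reconstructed in Equation 4."""
    total = 0.0
    for q, p in p_all.items():
        pc = p_cat.get(q, 0.0)
        if p > 0.0 and pc > 0.0:   # zero-probability terms are skipped (assumption)
            total += p * math.log(p / pc)
    return total

# p_all: hourly distribution over queries; p_cat: distribution conditioned on a category
print(kl_divergence({"movies": 0.7, "loans": 0.3}, {"movies": 0.5, "loans": 0.5}))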
Figure 9: Category percentage of the entire query stream (total and distinct) and KL-divergence from the likelihood of any query at each hour.
Comparing these divergences to the proportion of categorized
queries in each category in Figure 6 quickly illustrates that
divergence is not correlated with the number of queries
categorized in each category. Also shown in Figure 9 is the
average percentage of the entire query volume and distinct queries
that match each category. Although the categories that cover the
largest portions of the query stream also have the most relative
popularity fluctuation, this correlation does not continue
throughout all categories.
We drilled down into the highly fluctuating categories and
examined the behavior of the queries with the most highly
fluctuating frequencies in each category. From this we hoped to
gain some insight into the reasons why certain categories
fluctuate, and the effect of terms and queries with very high flux
on those categories. For example, the three most changing queries
for the entertainment category on average over our week were:
Table 1: Top Three Fluctuating Entertainment Queries
gwyneth paltrow
paris hilton
orlando bloom
All three of these queries are specifically related to recent events
in US popular culture; the actress Gwyneth Paltrow recently
married in secret, and the news of her nuptials broke during the
week we analyzed. Hilton Hotel heiress Paris Hilton has been a
popular topic recently; she starred in a prime time reality TV show
entitled "The Simple Life". Also popular is Orlando Bloom, the
actor who portrays a popular character in the "Lord of the Rings"
trilogy. As the final installment of the series was released in US
theatres during the week prior to our query log, it is no surprise to
see his name as a top-changing query.
Drilling down further, we pinpointed some of the specific
instances where these popular queries jumped the most. For
example, in the afternoon of Friday, December 27th, the
popularity of the query "gwyneth paltrow" skyrocketed. From 3-4PM
, it occurred once, from 4-5PM it occurred 67 times, and
from 5PM-6PM it occurred 11,855 times. The top changing (on
average) twenty-five queries, after normalization, in the
Entertainment and Music categories are shown in Table 2.
Table 2: Top 25 Fluctuating Queries from Music and Entertainment
Music: lyrics, music, britney spears, furniture, love, hilary duff, good charlotte, sloppy seconds, jessica simpson, b2k, eminem, christina aguilera, simple plan, justin timberlake, free music, linkin park, michael jackson, beyonce, jennifer lopez, 50 cent, kinky, napster, chic, tupac, blink 182
Entertainment: gwyneth paltrow, paris hilton, orlando bloom, espn, disney, johnny depp, much music, disney channel, hgtv, disneychannel com, www disneychanel com, katie holmes pictures, pamela anderson, cartoon network, hilary duff, fake, chad michael murray, vivica a fox, disneychannel, care bears, sailor moon, www cartoonnetwork com, days of our lives, charmed, tom welling
We also looked at some of the most frequently changing terms to
see how they relate to the change of entire queries containing
those terms. Some excellent examples of this behavior in the
Entertainment category include the terms "pictures" (the tenth-most
changing term) and "duff" (the 17th-most changing term).
We looked at the popularity change (i.e., change in frequency) for
queries containing these terms and found that several of them also
exhibited large changes over time. For example, on the afternoon
of December 28th from noon to 5PM EST, the query "hilary duff"
changed from an initial frequency of 27 from 12-1PM to a peak of
131 (from 3-4PM), and then stabilized around 70 for the rest of
the evening; similar spikes in frequency for this query occurred at
similar times during other days in our period of study.
4.2 Trends in Uniqueness of Queries Within
Categories
Although we have shown that different categories have differing
trends of popularity over the hours of a day, this does not provide
insight into how the sets of queries within those categories change
throughout the day. In order to examine this, we return to the
overlap measures used in Section 3. Overlap, distinct overlap, and
the Pearson correlation of query frequencies for Personal Finance
and Music are shown in Figure 10 and Figure 11.
Figure 10: Personal Finance overlap: overlap, distinct overlap, and Pearson correlation by hour of day.
Although the uniqueness of queries in categories in general
appears to be correlated with that of the entire query stream
(Figure 3), that of particular categories appears to be substantially
different from one to the next. For example, if we compare the
overlap characteristics of personal finance with those of music, we
see they are quite different. Not only does personal finance have
generally higher overlap, but it has a much higher overall overlap
than distinct overlap, whereas they are nearly equal for music.
Other categories with generally high overlap and distinct overlap
are shopping, computing, and travel. Also, the correlation of
frequencies of personal finance queries is very high all day,
indicating searchers are entering the same queries roughly the
same relative number of times; this is clearly not true for music.
Some categories have a high Pearson correlation. This indicates
that a significant portion of the queries in these categories is often
ranked similarly by frequency. These categories are:
pornography, travel, research and learning, and computing, and
their Pearson correlations are illustrated in Figure 12.
Figure 11: Music overlap: overlap, distinct overlap, and Pearson correlation by hour of day.
It is clear that some categories have very similarly ranked queries
by frequency throughout the day, while others vary dramatically
according to query volume. Referring back to Figure 6 and Figure
9, uniqueness of queries in particular categories does not appear to
be correlated with the number of queries in their respective
category lists, the proportion of the query stream they represent, or
the number of distinct queries they match.
Figure 12: Pearson correlations of query frequencies by hour of day for the Personal Finance, Music, Movies, Porn, Computing, Games, Entertainment, and Government categories.
This type of data is potentially of great use to query caching
algorithms. For example, if it is known a priori that queries for
certain categories are similarly ranked throughout the day, they
can be given higher priority in a query-caching scheme.
Similarly, queries in categories whose rankings change vastly over
time might be given low caching priority.
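For illustration only, one way such a priority could be wired into a cache is to scale each entry's time-to-live by the hourly rank stability of its category. The weighting below is our own toy choice, not something proposed or evaluated in the paper.

def cache_ttl(base_ttl_seconds, category_pearson):
    """Toy policy: categories with stable hourly query rankings (Pearson near 1) keep cached results longer."""
    return base_ttl_seconds * (0.5 + category_pearson)

print(cache_ttl(600, 0.95))   # a stable category such as personal finance -> 870.0 seconds
print(cache_ttl(600, 0.40))   # a volatile category -> 540.0 seconds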
CONCLUSIONS AND FUTURE WORK
This study focuses on investigating the nature of changes in the
query stream of a very large search service over time.
Understanding how users' queries change over time is critical to
developing effective, efficient search systems and to engineering
representative test sets and evaluations that drive this
development. In this study we find trends over time that are stable
despite continuing fluctuation in query volume. Although the
average query is repeated only twice during any given hour of the
day, the total query traffic varies both in magnitude from one hour
to the next, and also in degree of overlap and correlation in
popularity of the queries that are received. In addition, we also
find that the frequency distribution of an hour's worth of queries
remains constant throughout the day. Also, at the most general
level, we find that query volume is highest and query sets are most
stable during peak hours of the day.
This study further investigates changes in the query stream over
time by examining the nature of changes in popularity of
particular topical categories. For this we use a set of topical
categories created by human editors that represents approximately
13% of the average query traffic. We show that popularity of
some of these categories fluctuates considerably while other
categories remain relatively stable over the hours in a day.
Additionally, we show that the overlap and correlation in
popularity of the queries within each topical category varies quite
differently over the course of the day.
Extending this analysis to investigate changes in the very rare
queries not often matched by our category lists would provide
insight into whether those are changing similarly to more popular
queries. One method for approaching this might be to incorporate
automatic query classification methods to extend our basic lists.
This study is the gateway to a large and diverse body of future
work. Integrating this knowledge of Circadian changes in the
query stream by category will likely yield improved query
disambiguation, query caching, and load balancing algorithms.
BIBLIOGRAPHY
[1]
Beitzel, S., Jensen, E., Chowdhury, A., and Grossman, D.
Using Titles and Category Names from Editor-driven
Taxonomies for Automatic Evaluation. In Proceedings of
CIKM'03 (New Orleans, LA, November, 2003), ACM Press.
[2]
Broder, A. A Taxonomy of Web Search. SIGIR Forum
36(2) (Fall, 2002).
[3]
Chowdhury, A., G. Pass. "Operational Requirements for
Scalable Search Systems", In Proceedings of CIKM'03 (New
Orleans, LA, November 2003), ACM Press.
[4]
Eastman, C., B. Jansen, "Coverage, Relevance, and Ranking:
The Impact of Query Operators on Web Search Engine
Results", ACM Transactions on Information Systems, Vol.
21, No. 4, October 2003, Pages 383-411.
[5]
Eiron, N., K. McCurley. "Analysis of Anchor Text for Web
Search", In Proceedings of SIGIR'03 (Toronto, Canada, July
2003), ACM Press.
[6]
Hawking, D., Craswell, N., and Griffiths, K. Which Search
Engine is Best at Finding Online Services? In Proceedings
of WWW10 (Hong Kong, May 2001), Posters. Actual poster
available as
http://pigfish.vic.cmis.csiro.au/~nickc/pubs/www10actualposter.pdf
[7]
Jansen, B. and Pooch, U. A review of Web searching studies
and a framework for future research.
Journal of the American Society for Information Science and
Technology 52(3), 235-246, 2001.
[8]
Jansen, B., Spink, A., and Saracevic, T. Real life, real users,
and real needs: a study and analysis of user queries on the
web. Information Processing and Management, 36(2)
(2000), 207-227.
[9]
Jansen, B.J., Goodrum, A., Spink, A. Searching for
multimedia: video, audio, and image Web queries. World
Wide Web 3(4), 2000.
[10]
Lawrence, S. and Giles, C.L. Searching the World Wide
Web. Science 280(5360),
98-100, 1998.
[11]
Lempel, R. and Moran, S. Predictive caching and
prefetching of query results in search engines. In
Proceedings of WWW12 (Budapest, May 2003).
[12]
Markatos, E.P. On Caching Search Engine Query Results. In
the Proceedings of the 5th International Web Caching and
Content Delivery Workshop, May 2000.
[13]
Raghavan, V. and Sever, H. On the Reuse of Past Optimal
Queries. In Proc. of the 1995 SIGIR Conference, 344-350,
Seattle, WA, July 1995.
[14]
Ross, N. and Wolfram, D. End user searching on the
Internet: An analysis of term pair topics submitted to the
Excite search engine. Journal of the American Society for
Information Science 51(10), 949-958, 2000.
[15]
Saraiva, P., Moura, E., Ziviani, N., Meira, W., Fonseca, R.,
Riberio-Neto, B. Rank-preserving two-level caching for
scalable search engines. In Proc. of the 24th SIGIR
Conference, 51-58, New Orleans, LA, September, 2001.
[16]
Silverstein, C., Henzinger, M., Marais, H., and Moricz, M.
Analysis of a very large web search engine query log. SIGIR
Forum 33(1) (Fall, 1999), 6-12.
[17]
Spink, A., Ozmutlu, S., Ozmutlu, H.C., and Jansen, B.J.
U.S. versus European web searching trends. SIGIR Forum
36(2), 32-38, 2002.
[18]
Spink, A., Jansen, B.J., Wolfram, D., and Saracevic, T.
From E-sex to e-commerce: Web search changes. IEEE
Computer, 35(3), 107-109, 2002.
[19]
Spink, A., Wolfram, D., Jansen, B.J. and Saracevic, T.
Searching the Web: The Public and Their Queries. Journal
of the American Society of Information Science 53(2), 226-234, 2001.
[20]
Spink, A., Jansen, B.J., and Saracevic, T. Vox populi: The
public searching of the web. Journal of the American
Society of Information Science 52 (12), 1073-1074, 2001.
[21]
Spink, A., Jansen, B.J., and Ozmultu, H.C. Use of query
reformulation and relevance feedback by Excite users.
Internet Research: Electronic Networking Applications and
Policy 10 (4), 2000.
[22]
Sullivan, D. Searches Per Day. Search Engine Watch,
February, 2003.
http://searchenginewatch.com/reports/article.php/2156461
[23]
Wang, P., Berry, M., and Yang, Y. Mining longitudinal web
queries: Trends and patterns.
Journal of the American Society for Information Science and
Technology 54(8), 743-758, June 2003.
[24]
Wen, J., Nie, J., and Zhang, H. Query Clustering using User
Logs. ACM Transactions on Information Systems, Vol. 20,
No. 1, January 2002, pp. 59-81.
[25]
Wolfram, D. and Xie, H. Subject categorization of query terms
for exploring Web users' search interests. Journal of the
American Society for Information Science 53(8), 617-630, June 2002.
[26]
Xie, Y., O'Hallaron, D. Locality in Search Engine Queries
and Its Implications for Caching. Infocom 2002.
| query traffic;query stream;frequency distribution;topical categories;log analysis;query log;Query Log Analysis;Web Search |
77 | Efficient Multi-way Text Categorization via Generalized Discriminant Analysis | Text categorization is an important research area and has been receiving much attention due to the growth of the on-line information and of Internet. Automated text categorization is generally cast as a multi-class classification problem. Much of previous work focused on binary document classification problems. Support vector machines (SVMs) excel in binary classification, but the elegant theory behind large-margin hyperplane cannot be easily extended to multi-class text classification. In addition, the training time and scaling are also important concerns. On the other hand, other techniques naturally extensible to handle multi-class classification are generally not as accurate as SVM. This paper presents a simple and efficient solution to multi-class text categorization. Classification problems are first formulated as optimization via discriminant analysis . Text categorization is then cast as the problem of finding coordinate transformations that reflects the inherent similarity from the data. While most of the previous approaches decompose a multiclass classification problem into multiple independent binary classification tasks, the proposed approach enables direct multi-class classification. By using Generalized Singular Value Decomposition (GSVD), a coordinate transformation that reflects the inherent class structure indicated by the generalized singular values is identified. Extensive experiments demonstrate the efficiency and effectiveness of the proposed approach. | INTRODUCTION
With the ever-increasing growth of the on-line information and
the permeation of Internet into daily life, methods that assist users
in organizing large volumes of documents are in huge demand.
In particular, automatic text categorization has been extensively
studied recently. This categorization problem is usually viewed
as supervised learning, where the goal is to assign predefined category
labels to unlabeled documents based on the likelihood inferred
from the training set of labeled documents. Numerous approaches
have been applied, including Bayesian probabilistic approaches
[20, 31], nearest neighbor [22, 19], neural networks [33],
decision trees [2], inductive rule learning [4, 9], support vector machines
[18, 14], Maximum Entropy [26], boosting [28], and linear
discriminant projection [3] (see [34] for comparative studies of text
categorization methods).
Although document collections are likely to contain many different
categories, most of the previous work was focused on binary
document classification. One of the most effective binary classification
techniques is the support vector machines (SVMs) [32]. It
has been demonstrated that the method performs superbly in binary
discriminative text classification [18, 34]. SVMs are accurate and
robust, and can quickly adapt to test instances. However, the elegant
theory behind the use of large-margin hyperplanes cannot be
easily extended to multi-class text categorization problems. A number
of techniques for reducing multi-class problems to binary problems
have been proposed, including one-versus-the-rest method,
pairwise comparison [16] and error-correcting output coding [8, 1].
In these approaches, the original problems are decomposed into a
collection of binary problems, where the assertions of the binary
classifiers are integrated to produce the final output. In practice,
which reduction method is best suited is problem-dependent, so it
is a non-trivial task to select the decomposition method. Indeed,
each reduction method has its own merits and limitations [1]. In
addition, regardless of specific details, these reduction techniques
do not appear to be well suited for text categorization tasks with
a large number of categories, because training of a single, binary
SVM requires O(n^α) time for some α between 1.7 and 2.1, where n is the number
of training data [17]. Thus, having to train many classifiers has
a significant impact on the overall training time. Also, the use of
multiple classifiers slows down prediction. Thus, despite its elegance
and superiority, the use of SVM may not be best suited for
multi-class document classification. However, there do not appear
to exist many alternatives, since many other techniques that can
be naturally extended to handle multi-class classification problems,
such as neural networks and decision trees, are not so accurate as
SVMs [34, 35].
In statistics pattern recognition literature, discriminant analysis
approaches are well known to be able to learn discriminative
feature transformations (see, e.g., [12]). For example, Fisher discriminant
analysis [10] finds a discriminative feature transformation as the
eigenvectors associated with the largest eigenvalues of the matrix
T = \hat{\Sigma}_w^{-1} \hat{\Sigma}_b, where \hat{\Sigma}_w is the intra-class covariance matrix and
\hat{\Sigma}_b is the inter-class covariance matrix.¹ Intuitively, T captures
not only compactness of individual classes but separations among
them. Thus, eigenvectors corresponding to the largest eigenvalues
of T are likely to constitute a discriminative feature transform.
However, for text categorization, \hat{\Sigma}_w is usually singular owing to
the large number of terms. Simply removing the null space of \hat{\Sigma}_w
would eliminate important discriminant information when the projections
of \hat{\Sigma}_b along those directions are not zeros [12]. This issue
has stymied attempts to use traditional discriminant approaches in
document analysis.
¹ This is equivalent to using eigenvectors associated with the smallest
eigenvalues of the matrix T = \hat{\Sigma}_b^{-1} \hat{\Sigma}_w. It indicates that traditional
discriminant analysis requires the non-singularity of at least one covariance
matrix. Since the rank of \hat{\Sigma}_w is usually greater than that of \hat{\Sigma}_b, we will
base our discussion on the eigenvalue-decomposition of T = \hat{\Sigma}_w^{-1} \hat{\Sigma}_b.
In this paper we resolve this problem. We extend discriminant
analysis and present a simple, efficient, but effective solution to
text categorization. We propose a new optimization criterion for
classification and cast text categorization as the problem of finding
transformations to reflect the inherent similarity from the data. In
this framework, given a document of unknown class membership,
we compare the distance of the new document to the centroid of
each category in the transformed space and assign it to the class
having the smallest distance to it. We call this method Generalized
Discriminant Analysis (GDA), since it uses generalized singular
value decomposition to optimize transformation. We show that the
transformation derived using
GDA
is equivalent to optimization
via the trace or determinant ratios.
GDA
has several favorable properties: First, it is simple and can
be programed in a few lines in MATLAB. Second, it is efficient.
(Most of our experiments only took several seconds.) Third, the
algorithm does not involve parameter tuning. Finally, and probably
the most importantly, it is very accurate. We have conducted extensive
experiments on various datasets to evaluate its performance.
The rest of the paper is organized as follows: Section 2 reviews the
related work on text categorization. Section 3 introduces our new
criterion for discriminant analysis. Section 4 introduces the basics
of generalized singular value decomposition and gives the solution
of the optimization problem. Section 5 shows that the transformation
derived using
GDA
can also be obtained by optimizing the
trace or determinant ratios. Section 6 presents some illustrating examples
. Section 7 shows experimental results. Finally, Section 8
provides conclusions and discussions.
RELATED WORK
Text categorization algorithms can be roughly classified into two
types: those algorithms that can be naturally extended to handle
multi-class cases and those that require decomposition into binary classification
problems. The first consists of such algorithms as Naive
Bayes [22, 19], neural networks [25, 33], K-Nearest Neighbors [22,
19], Maximum Entropy [26] and decision trees. Naive Bayes uses
the joint distributions of words and categories to estimate the probabilities
that an input document belongs to each document class and
then selects the most probable class. K-Nearest Neighbor finds the
k nearest neighbors among training documents and uses the categories
of the k neighbors to determine the category of the test document
. The underlying principle of maximum entropy is that without
external knowledge, uniform distribution should be preferred.
Based on this principle, it estimates the conditional distribution of
the class label given a document.
The reduction techniques that are used by the second group include
one-versus-the-rest method [29], error-correcting output coding
[8], pairwise comparison [16], and multi-class objective functions
, where the first two have been applied to text categorization
[34, 13].
In the one-versus-the-rest method a classifier separating a class
from the rest is trained for each class. Multi-class classification
is carried out by integrating the predictions of these individual
classifiers with a strategy for resolving conflicts. The method is
sometimes criticized for solving asymmetric problems in a symmetrical
manner and for not considering correlations between classes.
Error-correcting output coding (ECOC) [8] partitions the original
set of classes into two sets in many different ways. A binary
classifier is trained for each partition. The partitions are carefully
chosen so that the outputs of these classifiers assign a unique binary
codeword for each class (with a large Hamming distance between
any pair of them). The class of an input with unknown class membership
is chosen by computing the outputs of the classifiers on
that input and then finding the class with the codeword closest to
the output codeword.
Although SVMs are considered to be very effective in binary
classification, their large training costs may make them unsuitable for
multi-class classification with a large number of classes if the above
decomposition techniques are applied. Also, the lack of a clear
winner among the above techniques makes the reduction task complicated
. Our
GDA
directly deals with multi-class classification
and does not require reduction to binary classification problems.
Other techniques for text categorization exist. Godbole et al.
[14] propose a new multi-class classification technique that exploits
the accuracy of SVMs and the speed of Naive Bayes. It uses a
Naive Bayes classifier to compute a confusion matrix quickly. Then
it uses this matrix to reduce both the number and the complexity
of binary SVMs to be built. Chakrabarti et al. [3] propose a fast
text classification technique that uses multiple linear projections. It
first projects training instances to low-dimensional space and then
builds decision tree classifiers on the projected spaces. Fragoudis
et al. [11] propose a new algorithm that targets both feature and
instance selection for text categorization.
In summary, as pointed out in [34, 26], there is no obvious winner
in multi-class classification techniques. For practical problems,
the choice of approach will have to be made depending on the constraints
, e.g., the desired accuracy level, the time available, and the
nature of the problem.
NEW CRITERION FOR DISCRIMINANT ANALYSIS
Suppose the dataset D has m instances, d_1, ..., d_m, having p features
each. Then D can be viewed as a subset of R^p as well as
a member of R^{m x p}. Suppose D has L classes, D_1, ..., D_L, having
m_1, ..., m_L instances, respectively, where m = Σ_{i=1}^{L} m_i. For each i,
1 ≤ i ≤ L, let J_i be the set of all j, 1 ≤ j ≤ m, such that the j-th
instance belongs to the i-th class, and let c_i be the centroid of the
i-th class, i.e., the component-wise average of the m_i vectors in the
class. Let c be the centroid of the entire dataset. The intra-class
scatter matrix of D, \hat{\Sigma}_w, is defined by
\hat{\Sigma}_w = Σ_{i=1}^{L} Σ_{j ∈ J_i} (d_j - c_i)^T (d_j - c_i)
and its inter-class scatter matrix, \hat{\Sigma}_b, is defined by
\hat{\Sigma}_b = Σ_{i=1}^{L} Σ_{j ∈ J_i} (d_j - c)^T (d_j - c)
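A small numpy sketch (ours) that follows the two definitions exactly as written above; documents are the rows of D and labels holds each row's class index.

import numpy as np

def scatter_matrices(D, labels):
    """Intra-class and inter-class scatter matrices as defined above (D is m x p)."""
    c = D.mean(axis=0)                      # centroid of the entire dataset
    p = D.shape[1]
    S_w = np.zeros((p, p))
    S_b = np.zeros((p, p))
    for k in np.unique(labels):
        Dk = D[labels == k]
        ck = Dk.mean(axis=0)                # centroid c_k of class k
        S_w += (Dk - ck).T @ (Dk - ck)      # sum over j in J_k of (d_j - c_k)^T (d_j - c_k)
        S_b += (Dk - c).T @ (Dk - c)        # sum over j in J_k of (d_j - c)^T (d_j - c), as printed above
    return S_w, S_b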
Let A_w be the m x p matrix constructed by stacking D_1 - e_1^T c_1, ..., D_L - e_L^T c_L,
and let A_b be the p x m matrix whose columns are, from left to right,
m_1 (c_1 - c)^T, ..., m_L (c_L - c)^T. Then
\hat{\Sigma}_w = A_w^T A_w and \hat{\Sigma}_b = A_b^T A_b.
Although there are ways (such as Kernel tricks [24]) for utilizing
non-linear transformations, we will focus on linear transformations.
Given a linear transformation Φ, the covariance matrices in
the transformed space are
(A_b Φ)^T (A_b Φ) = Φ^T A_b^T A_b Φ = Φ^T \hat{\Sigma}_b Φ
and
(A_w Φ)^T (A_w Φ) = Φ^T A_w^T A_w Φ = Φ^T \hat{\Sigma}_w Φ
Fisher's linear discriminant analysis discriminates inter-class distance
and intra-class distance by using their corresponding covariance
matrices. The optimal projection can be obtained by solving
the generalized eigenvalue problem:
\hat{\Sigma}_b Φ = λ \hat{\Sigma}_w Φ    (1)
If \hat{\Sigma}_w is nonsingular, Φ is given by the eigenvectors of the matrix
\hat{\Sigma}_w^{-1} \hat{\Sigma}_b. As we already pointed out, the approach fails if \hat{\Sigma}_w is singular,
which is often the case in document classification.² Usually,
this problem is overcome by using a nonsingular intermediate space
of \hat{\Sigma}_w obtained by removing the null space of \hat{\Sigma}_w and then computing
eigenvectors. However, the removal of the null space of \hat{\Sigma}_w
possibly eliminates some useful information because some of the
most discriminant dimensions may be lost by the removal. In fact,
the null space of \hat{\Sigma}_w is guaranteed to contain useful discriminant
information when the projections of \hat{\Sigma}_b are not zeros along those
directions. Thus, simple removal of the null space of \hat{\Sigma}_w is not an
effective resolution [12].
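For comparison, here is a sketch (ours) of the classical route just described, using scipy's generalized symmetric eigensolver. It requires \hat{\Sigma}_w to be positive definite, which is exactly the assumption that fails for text data, so the small ridge term is a common workaround and is not part of the paper's method.

import numpy as np
from scipy.linalg import eigh

def fisher_directions(S_b, S_w, n_dirs, ridge=1e-6):
    """Solve S_b phi = lambda S_w phi and keep the leading eigenvectors."""
    p = S_w.shape[0]
    evals, evecs = eigh(S_b, S_w + ridge * np.eye(p))   # ridge keeps S_w positive definite
    order = np.argsort(evals)[::-1]                     # largest generalized eigenvalues first
    return evecs[:, order[:n_dirs]]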
Once the transformation Φ has been determined, classification
is performed in the transformed space based on a distance metric,
such as the Euclidean distance
d(x, y) = √( Σ_i (x_i - y_i)^2 )
and the cosine measure
d(x, y) = 1 - ( Σ_i x_i y_i ) / ( √(Σ_i x_i^2) √(Σ_i y_i^2) ).
A new instance z is classified to
argmin_k d(z, x_k)    (2)
where x_k is the centroid of the k-th class.
² In fact, \hat{\Sigma}_w is nonsingular only if there are at least p + L samples. This is
usually impractical.
3.2 The New Criterion
We propose the use of the following criterion for discriminating
inter-class and intra-class distances by inter-class and intra-class
covariance matrices:
min_Φ  ||A_b Φ - I_n||_F^2 + ||A_w Φ||_F^2    (3)
where ||X||_F is the Frobenius norm of the matrix X, i.e., √( Σ_{i,j} x_{ij}^2 ).
The criterion does not involve the inverse of the intra-class matrix
and is similar to Tikhonov regularization of least squares problems.
Intuitively, the first term of (3) is used to minimize the difference
between the projection of x_i - x in the new space and the i-th unit
vector of the new space. The second term is used to minimize the
intra-class covariance.
Equation (3) can be rewritten as
min_Φ  || [A_w; A_b] Φ - [0; I_n] ||_F^2    (4)
and this is a least squares problem with the solution
Φ = (A_w^T A_w + A_b^T A_b)^{-1} A_b^T    (5)
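Because (4) is an ordinary least-squares problem, it can also be solved directly without forming the normal equations in (5). The sketch below (ours) assumes A_b is arranged with one row per class so that the identity block has one column per class; when the stacked matrix has full column rank the result coincides with (5), and lstsq returns the minimum-norm solution otherwise.

import numpy as np

def gda_transform(A_w, A_b):
    """Minimize || [A_w; A_b] Phi - [0; I] ||_F by stacked least squares (equation (4))."""
    n_classes = A_b.shape[0]
    M = np.vstack([A_w, A_b])
    Y = np.vstack([np.zeros((A_w.shape[0], n_classes)), np.eye(n_classes)])
    Phi, *_ = np.linalg.lstsq(M, Y, rcond=None)   # minimum-norm solution if M is rank-deficient
    return Phi                                    # p x n_classes transformation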
GENERALIZED SINGULAR VALUE DECOMPOSITION
Here we will show how to use GSVD to compute efficiently the
solution to the optimization problem formulated in Section 3 and
show that the solution thus obtained is stable.
4.1
The Basics of GSVD
Singular value decomposition (SVD) is a process of decomposing
a rectangular matrix into three other matrices of a very special
form. It can be viewed as a technique for deriving a set of uncorrelated
indexing variables or factors [6]. A Generalized Singular
Value Decomposition (GSVD) is an SVD of a sequence of matrices
. GSVD has played a significant role in signal processing and
in signal identification and has been widely used in such problems
as source separation, stochastic realization and generalized Gauss-Markov
estimation.
The diagonal form of GSVD, shown below, was first introduced
in [21].
THEOREM 1. (GSVD Diagonal Form [21]) If A ∈ R^{m x p},
B ∈ R^{n x p}, and rank([A^T B^T]^T) = k, then there exist two orthogonal
matrices, U ∈ R^{m x m} and V ∈ R^{n x n}, and a non-singular matrix
X ∈ R^{p x p}, such that
[ U^T 0 ; 0 V^T ] [ A ; B ] X = [ C ; S ] [ I_k 0 ]    (6)
where C and S are nonnegative diagonal and of dimension m x k
and n x k, respectively, 1 ≥ S_{11} ≥ ... ≥ S_{min(n,k),min(n,k)} ≥ 0, and
C^T C + S^T S = I_k.
The generalized singular values are defined to be the
component-wise ratios of the diagonal entries of the two diagonal
matrices. In signal processing, A is often the signal matrix and B is
the noise matrix, in which case the generalized singular values are
referred to as signal-noise ratios.
4.2 Stable Solution
By plugging the GSVD matrices of A_w and A_b in (5), we have
Φ = X [ I_k ; 0 ] S^T V^T. Since V is orthogonal, we can drop it without
changing the squared distance. So, we have
Φ = X [ I_k ; 0 ] S^T    (7)
This derivation of Φ holds even if \hat{\Sigma}_w is singular. Thus, by using
GSVD to solve the new criterion, we can avoid removing the null
space, thereby keeping all the useful information. The degree of
linear independence of the original data, rank([A_w^T A_b^T]^T), is equal to
k. Since Φ ∈ R^{p x k}, rank([(A_w Φ)^T (A_b Φ)^T]^T), the degree of linear
independence in the transformed space, is at most k.
We now state a theorem that shows that the solution is stable.
THEOREM 2. (GSVD relative perturbation bound [7]) Suppose A and B are matrices with the same number of columns and B is of full column rank. Let A = A_1 D_1 and B = B_1 D_2 such that D_1 and D_2 have full rank. Let E = E_1 D_1 and F = F_1 D_2 be perturbations of A and B, respectively, such that for all x there exist some \epsilon_1, \epsilon_2 < 1 for which it holds that
\| E_1 x \|_2 \le \epsilon_1 \| A_1 x \|_2,   \| F_1 x \|_2 \le \epsilon_2 \| B_1 x \|_2.
Let \sigma_i and \tilde{\sigma}_i be the i-th generalized singular value of (A, B) and that of (A + E, B + F), respectively. Then either \sigma_i = \tilde{\sigma}_i = 0 or
\frac{ | \sigma_i - \tilde{\sigma}_i | }{ \sigma_i } \le \frac{ \epsilon_1 + \epsilon_2 }{ 1 - \epsilon_2 }.
The above theorem gives a bound on the relative error of the generalized singular values (C_{ii} and S_{ii}) if the difference between the estimated covariance matrices and the genuine covariance matrices is small. This guarantees that the relative error of \Phi is bounded by the relative error of the estimated intra- and inter-class covariance matrices.
GSVD also brings some favorable features, which might improve accuracy. In particular, computation of the cross products A_b^T A_b and A_w^T A_w, which causes roundoff errors, is not required.
4.3 The GDA Algorithm
The pseudo-code of the training and prediction procedures is as follows:
Algorithm 1 Training procedure \Phi = Train(x's)
Input: the training data x_i's
Output: the transformation \Phi;
begin
1. Construct the matrices A_w and A_b;
2. Perform GSVD on the matrix pair;
3. Obtain \Phi as described in equation (7);
4. Return \Phi;
end
Algorithm 2 Prediction procedure T = Predict(\Phi, x)
Input: the transformation \Phi generated by the training procedure, and a new instance x;
Output: the label T of the new instance;
begin
1. Perform prediction as in equation (2);
2. Return T;
end
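A compact Python sketch of Algorithms 1 and 2, under simplifying assumptions that are ours and not the paper's: A_w and A_b are produced by a hypothetical user-supplied helper (their construction is defined earlier in the paper and not repeated here), and \Phi is obtained from the least-squares form (4) rather than via GSVD, so the numerical-stability benefits of Section 4 are not reproduced.

import numpy as np

def train(X, y, build_scatter_factors):
    # build_scatter_factors is an assumed helper returning (A_w, A_b) for the data.
    X, y = np.asarray(X, dtype=float), np.asarray(y)
    A_w, A_b = build_scatter_factors(X, y)
    n = A_b.shape[0]
    M = np.vstack([A_w, A_b])
    rhs = np.vstack([np.zeros((A_w.shape[0], n)), np.eye(n)])
    Phi, *_ = np.linalg.lstsq(M, rhs, rcond=None)        # least-squares form (4), not GSVD
    centroids = {k: X[y == k].mean(axis=0) @ Phi for k in np.unique(y)}
    return Phi, centroids

def predict(Phi, centroids, z):
    zt = np.asarray(z, dtype=float) @ Phi
    # Nearest class centroid under Euclidean distance, as in equation (2).
    return min(centroids, key=lambda k: np.linalg.norm(zt - centroids[k]))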
CONNECTIONS
Here we show that the above transformation derived using our new criterion can also be obtained by optimizing the trace or determinant ratios.
5.1 Optimizing the determinant ratio
Fisher's criterion is to maximize the ratio of the determinant of the inter-class scatter matrix of the projected samples to the determinant of the intra-class scatter matrix of the projected samples:
J(\Phi) = \frac{ | \Phi^T \hat{\Sigma}_b \Phi | }{ | \Phi^T \hat{\Sigma}_w \Phi | }     (8)
One way to overcome the non-singularity requirement of Fisher's criterion is to look for solutions that simultaneously maximize | \Phi^T \hat{\Sigma}_b \Phi | and minimize | \Phi^T \hat{\Sigma}_w \Phi |. Using GSVD, A_b and A_w are decomposed as A_w = U C [\, I_k \; 0 \,] X^{-1} and A_b = V S [\, I_k \; 0 \,] X^{-1}.
To maximize J(\Phi), | \Phi^T \hat{\Sigma}_b \Phi | should be increased while decreasing | \Phi^T \hat{\Sigma}_w \Phi |. Let \tilde{C} = C [\, I_k \; 0 \,] and \tilde{S} = S [\, I_k \; 0 \,]. Then we have \hat{\Sigma}_b = A_b^T A_b = X^{-T} \tilde{S}^T \tilde{S} X^{-1} and \hat{\Sigma}_w = A_w^T A_w = X^{-T} \tilde{C}^T \tilde{C} X^{-1}. This implies
| \Phi^T \hat{\Sigma}_b \Phi | = | \Phi^T X^{-T} \tilde{S}^T \tilde{S} X^{-1} \Phi | = | \tilde{S} X^{-1} \Phi |^2
and
| \Phi^T \hat{\Sigma}_w \Phi | = | \Phi^T X^{-T} \tilde{C}^T \tilde{C} X^{-1} \Phi | = | \tilde{C} X^{-1} \Phi |^2.
Thus, a matrix \Phi satisfying X^{-1} \Phi = [\, I_k \; 0 \,]^T would simultaneously maximize | \Phi^T \hat{\Sigma}_b \Phi | and minimize | \Phi^T \hat{\Sigma}_w \Phi | (since the diagonal of S is decreasing). So, we have \Phi = X [\, I_k \; 0 \,]^T. In the case where we must weight the transformation with the generalized singular values, \Phi = X [\, I_k \; 0 \,]^T S^T is the transformation we want.
5.2 Optimizing the trace ratio
The same transformation can also be obtained by optimizing the trace ratio. Using GSVD, we have
trace(\Phi^T \hat{\Sigma}_b \Phi) = trace(\tilde{S}^T \tilde{S} (X^{-1}\Phi)(X^{-1}\Phi)^T) = trace(\tilde{S}^T \tilde{S} G G^T) = \sum_{i=1}^{k} S_{ii}^2 g_{ii}
and
trace(\Phi^T \hat{\Sigma}_w \Phi) = trace(\tilde{C}^T \tilde{C} (X^{-1}\Phi)(X^{-1}\Phi)^T) = trace(\tilde{C}^T \tilde{C} G G^T) = \sum_{i=1}^{k} C_{ii}^2 g_{ii}
where G = X^{-1}\Phi and g_{ii} is the i-th diagonal entry of G G^T. Since C^T C + S^T S = I_k, we have
trace(\Phi^T \hat{\Sigma}_b \Phi) + trace(\Phi^T \hat{\Sigma}_w \Phi) = \sum_{i=1}^{k} S_{ii}^2 g_{ii} + \sum_{i=1}^{k} C_{ii}^2 g_{ii} = \sum_{i=1}^{k} g_{ii}.
If we force trace(\Phi^T \hat{\Sigma}_b \Phi) = 1, the optimization is formulated as the minimization of trace(\Phi^T \hat{\Sigma}_w \Phi) = \sum_{i=1}^{k} g_{ii} - 1. Here the g_{ii}'s are diagonal elements of a positive semi-definite matrix, so they are nonnegative. Also, for all i, g_{ii} = 0 implies that g_{ij} = g_{ji} = 0 for all j. Note that G G^T is a p \times p matrix. Since only the first k diagonal entries, \{ g_{ii} \}_{i=1}^{k}, appear in the formula trace(\Phi^T \hat{\Sigma}_w \Phi) = \sum_{i=1}^{k} g_{ii} - 1, the other p - k diagonal entries do not affect the optimization. Thus, we may set all of them to 0, thereby obtaining \Phi = X [\, I_k \; 0 \,]^T. In the case when we want to weight the transformation with the generalized singular values, we obtain \Phi = X [\, I_k \; 0 \,]^T S^T.
TEXT CLASSIFICATION VIA GDA: EXAMPLES
A well-known transformation method in information retrieval is Latent Semantic Indexing (LSI) [6], which applies Singular Value Decomposition (SVD) to the document-term matrix and computes the eigenvectors having the largest eigenvalues as the directions related to the dominant combinations of the terms occurring in the dataset (latent semantics). A transformation matrix constructed from these eigenvectors projects a document onto the latent semantic space. Although LSI has been proven extremely useful in information retrieval, it is not optimal for text categorization because LSI is completely unsupervised. In other words, LSI deals with the data without paying any particular attention to the underlying class structure. It only aims at optimally transforming the original data into a lower dimensional space with respect to the mean squared error, which has nothing to do with the discrimination of the different classes. Our GDA approach possesses the advantages of both discriminant analysis and latent semantic analysis. By explicitly taking the intra-class and inter-class covariance matrices into the optimization criterion, GDA deals directly with discrimination between classes. Furthermore, by employing GSVD to solve the optimization problem, GDA tries to identify the latent concepts indicated by the generalized singular values.
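To make the LSI baseline concrete, here is a small, purely illustrative scikit-learn sketch (our own assumption of tooling, not the software used in the paper): the document-term matrix is factored with a truncated SVD and the documents are projected onto the latent semantic space.

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import TruncatedSVD

docs = ["human interface for user response",
        "a survey of user opinion of computer system response time",
        "the intersection graph of paths in trees",
        "a survey of distributed shared memory system"]   # toy documents
X = CountVectorizer().fit_transform(docs)    # document-term matrix
lsi = TruncatedSVD(n_components=2, random_state=0).fit(X)
Z = lsi.transform(X)                         # documents in the 2-D latent semantic space
print(Z.shape)                               # (4, 2)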
To illustrate how well GDA can perform, we present here two examples. In the first example, we compare GDA against LDA and LSI. Figure 1 shows a small dataset consisting of nine phrases in three topics: user interaction, graph theory, and distributed systems.
No.  Class  Phrase
1    1      Human interface for user response
2    1      A survey of user opinion of computer system response time
3    1      Relation of user-perceived response time to error measurement
4    2      The generation of random, binary, unordered trees
5    2      The intersection graph of paths in trees
6    2      Graph Minors IV: Widths of trees and well-quasi-ordering
7    3      A survey of distributed shared memory system
8    3      RADAR: A multi-user distributed system
9    3      Management interface tools for distributed computer system
Figure 1: Nine example sentences
After removing words (terms) that occur only once, we have the document-term matrix shown in Figure 2.
[Figure 2: Document-term matrix for the nine example sentences. Rows are the 14 terms occurring more than once (a, computer, distributed, for, graph, interface, of, response, survey, system, the, time, trees, user); columns are documents 1-9; entries are within-document term counts.]
The first and second samples in each class are used for training. GDA, LDA, and LSI are run on the training data to obtain transformation matrices. Figure 3 shows the plot of the distances/similarities between document pairs in the transformed space using each of the three methods.
[Figure 3: Pairwise document similarity via (a) GDA, (b) LDA, and (c) LSI. The darker the cell is, the more similar the documents are. GDA is a clear winner.]
The second example illustrates differences between GDA and LSI. Distinguishing among three newsgroups in 20NG is attempted by selecting from each newsgroup twenty documents for training and twenty for testing. Figure 4 shows plots of the sixty testing articles using the two dominant directions as the axes. GDA gives a clear separation, while the LSI plot shows an L-shaped concentration of the data points. The confusion matrices of these methods are shown in Table 1. GDA clearly performed better than LSI.
         GDA (prediction)            LSI (prediction)
actual    1     2     3     actual    1     2     3
  1      20     0     0       1      20     0     0
  2       0    19     1       2       0     3    17
  3       0     0    20       3       7     5     8
Table 1: The confusion matrices. Left: GDA. Right: LSI.
EXPERIMENTS
[Figure 4: Document plots of the sixty test articles along the two dominant directions, for (a) GDA and (b) LSI; points belong to Group 1, Group 2, or Group 3. The three groups are separated significantly better with GDA than with LSI.]
For our experiments we used a variety of datasets, most of which are frequently used in information retrieval research. The range of the number of classes is from four to 105, and the range of the number of documents is from 476 to 20,000, which seems varied enough to obtain good insights as to how GDA performs. Table 2 summarizes the characteristics of the datasets.
20Newsgroups
The 20Newsgroups (20NG) dataset contains
approximately 20,000 articles evenly divided among 20 Usenet
newsgroups. The raw text size is 26MB. All words were stemmed
using a porter stemming program, all HTML tags were skipped,
and all header fields except subject and organization of the posted
article were ignored.
WebKB
The WebKB dataset
3
contains Web pages collected
from university computer science departments. There are approximately
8,300 documents in the set and they are divided into seven
categories: student, faculty, staff, course, project, department, and
other. The raw text size of the dataset is 27MB. Among the seven
categories, student, faculty, course, and project are the four most
populous. The subset consisting only of these categories is also
used here, which is called WebKB4. In neither of the datasets, we
used stemming or stop lists.
Industry Sector
The Industry Sector dataset^4 is based on the data made available by Market Guide, Inc. (www.marketguide.com). The set consists of company homepages that are categorized in a hierarchy of industry sectors, but we disregarded the hierarchy. There were 9,637 documents in the dataset, which were divided into 105 classes. We tokenized the documents by skipping all MIME and HTML headers and using a standard stop list. We did not perform stemming.
Reuters
The Reuters-21578 Text Categorization Test Collection
contains documents collected from the Reuters newswire in
1987. It is a standard text categorization benchmark and contains
135 categories. We used its subsets: one consisting of the ten most
frequent categories, which we call Reuters-Top10, and the other
consisting of documents associated with a single topic, which we
call Reuters-2. Reuters-2 had approximately 9,000 documents and
50 categories.
TDT2
TDT2 is the NIST Topic Detection and Tracking text corpus version 3.2, released on December 6, 1999 [30]. This corpus contains news data collected daily from nine news sources in two languages (American English and Mandarin Chinese) over a period of six months (January-June 1998). We used only the English news texts, which were collected from New York Times Newswire Service, Associated Press Worldstream Service, Cable News Network, Voice of America, American Broadcasting Company, and Public Radio International. The documents were manually annotated using 96 target topics. We selected the documents having annotated topics and removed the brief texts. The resulting dataset contained 7,980 documents.
^3 Both 20NG and WebKB are available at http://www-2.cs.cmu.edu/afs/cs/project/theo-11/www/wwkb.
^4 Available at http://www.cs.cmu.edu/ TextLearning/datasets.html
K-dataset
This dataset was obtained from the WebACE project [15]. It contained 2,340 documents consisting of news articles from Reuters News Service made available on the Web in October 1997. These documents were divided into 20 classes. They were processed by eliminating stop words and HTML tags, and stemming the remaining words using Porter's suffix-stripping algorithm.
CSTR
This is the dataset of the abstracts of technical reports published in the Department of Computer Science at the University of Rochester between 1991 and 2002^5. The dataset contained 476 abstracts, which were divided into four research areas: Symbolic-AI, Spatial-AI, Systems, and Theory. We processed the abstracts by removing stop words and applying stemming operations on the remaining words.
Datasets           # documents   # classes
20NG               20,000        20
WebKB4             4,199         4
WebKB              8,280         7
Industry Sector    9,637         105
Reuters-Top10      2,900         10
Reuters-2          9,000         50
CSTR               476           4
K-dataset          2,340         20
TDT2               7,980         96
Table 2: Data Sets Descriptions
7.2 Data Preprocessing
In all experiments, we randomly chose 70% of the documents for training and assigned the rest for testing. It is suggested in [35] that information gain is effective for term removal and that it can remove up to 90% or more of the unique terms without performance degradation. So, we first selected the top 1,000 words by information gain with class labels. The feature selection is done with the Rainbow package [23].
Here we use classification accuracy for evaluation. Different measures, such as precision-recall graphs and the F1 measure [34], have been used in the literature. However, since the datasets used in our experiments are relatively balanced and single-labeled, and our goal in text categorization is to achieve low misclassification rates and high separation between different classes on a test set, we consider accuracy the best measure of performance. All of our experiments were carried out on a P4 2GHz machine with 512M memory running Linux 2.4.9-31.
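For illustration only (the paper performs this step with the Rainbow toolkit), a comparable top-1,000 term selection could be sketched with scikit-learn. Here mutual information is used as a stand-in for information gain, and the tiny docs/labels lists are made-up placeholders for the real corpus.

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_selection import SelectKBest, mutual_info_classif

# Toy stand-ins for the preprocessed documents and their class labels.
docs = ["graph of paths in trees", "graph minors and trees",
        "user interface response time", "survey of user opinion"]
labels = [0, 0, 1, 1]

X = CountVectorizer().fit_transform(docs)     # document-term counts
k = min(1000, X.shape[1])                     # the paper keeps the top 1,000 terms
X_top = SelectKBest(mutual_info_classif, k=k).fit_transform(X, labels)
print(X_top.shape)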
7.3 Experimental Results
Now we present and discuss the experimental results. Here we compare GDA against Naive Bayes (NB for short), K-Nearest Neighbor (KNN for short), Maximum Entropy (ME for short), LDA, and SVM on the same datasets with the same training and testing data. Recall that the first three of the methods we compare against are commonly-used direct methods for multi-class classification (in the sense that they do not require reduction to binary classification problems). For experiments involving SVM we used SVMTorch [5]^6, which uses the one-versus-the-rest decomposition.
Table 3 and Figure 5 show the performance comparisons. GDA outperformed all the other five methods on 20NG, WebKB4, WebKB, and Industry Sector. SVM performed the best on Reuters-2, K-dataset, and TDT2. GDA outperformed LDA in all the experiments, and the improvement was significant (more than 10%) when the sample size was relatively small (in the case of CSTR, Reuters-Top10, and K-dataset).
On 20NG, the performance of GDA is 95.03%, which is approximately 10% higher than that of NB, 6% higher than that of ME, and 4% higher than that of SVM. On the WebKB4 dataset, GDA beats NB by approximately 5%, and both ME and SVM by approximately 2%. On the WebKB dataset, GDA beats NB by approximately 16% and ME by 6%. The performance of GDA is about 8% higher than that of NB and 6% higher than that of ME on Industry Sector. The results with GDA and with SVM are almost the same on WebKB, Industry Sector, Reuters-Top10, and CSTR. On Reuters-2, K-dataset, and TDT2, SVM performs slightly better than GDA, by 3%. ME achieves the best results on the CSTR dataset, while NB is the winner on Reuters-Top10: on CSTR, the performance of GDA is 2% lower than that of NB and 4% lower than that of ME, and on Reuters-Top10, GDA is beaten by NB by approximately 1%. In total, the performance of GDA is always either the winner or very close to the winner: it is ranked first four times, second three times, and third in the remaining two. Although there is no single winner over all datasets, GDA seems to outperform the rest on most counts. We can say that GDA is a viable, competitive algorithm for text categorization.
^5 The TRs are available at http://www.cs.rochester.edu/trs.
^6 Downloadable at http://old-www.idiap.ch/learning/SVMTorch.html.
Datasets          GDA     NB      KNN     ME      LDA     SVM
20NG              95.03   85.60   50.70   89.06   93.90   91.07
WebKB4            94.01   85.13   37.29   91.93   90.72   92.04
WebKB             79.02   61.01   44.81   71.30   77.35   78.89
Industry Sector   66.57   56.32   39.48   58.84   66.49   65.96
Reuters-Top10     81.98   83.33   74.07   81.65   71.46   81.13
Reuters-2         89.82   87.88   73.22   88.56   88.65   92.43
CSTR              88.50   90.85   82.53   92.39   68.29   88.71
K-dataset         88.44   86.14   58.26   86.19   77.69   91.90
TDT2              90.54   91.59   86.63   89.18   88.41   93.85
Table 3: Performance comparisons. For KNN we set k to 30.
[Figure 5: Performance comparison (accuracy) of GDA, NB, KNN, ME, LDA, and SVM across the nine datasets.]
GDA is also very efficient, and most experiments are done in several seconds. Table 4 summarizes the running time for all the experiments of GDA and SVM. Figure 6 and Figure 7 present the comparisons of training and prediction time, respectively. The time savings of GDA are obvious. In summary, these experiments have shown that GDA provides an alternative choice for fast and efficient text categorization.
                  GDA        GDA         SVM        SVM
Datasets          Training   Prediction  Training   Prediction
20NG              171.80     6.86        270.20     64.28
WebKB4            63.4       0.20        114.67     54.72
WebKB             94.64      0.43        1108.17    103.03
Industry Sector   88.23      6.45        423.54     79.82
Reuters-Top10     61.23      0.15        94.28      18.65
Reuters-2         96.19      1.13        566.53     85.10
CSTR              3.65       0.02        7.50       2.77
K-dataset         62.88      0.18        84.56      47.70
TDT2              21.69      5.14        89.91      26.76
Table 4: Time Table in seconds.
[Figure 6: Training time comparison (seconds) between GDA and SVM across the nine datasets.]
[Figure 7: Prediction time comparison (seconds) between GDA and SVM across the nine datasets.]
DISCUSSIONS AND CONCLUSIONS
In this paper, we presented GDA, a simple, efficient, and yet accurate, direct approach to multi-class text categorization. GDA utilizes GSVD to transform the original data into a new space, which could reflect the inherent similarities between classes based on a new optimization criterion. Extensive experiments clearly demonstrate its efficiency and effectiveness.
Interestingly enough, although traditional discriminant approaches have been successfully applied in pattern recognition, little work has been reported on document analysis. As we mentioned earlier, this is partly because the intra-class covariance matrix is usually singular for document-term data, which restricts the usage of discriminant analysis. Our new criterion avoids the problem while still preserving the discriminative power of the covariance matrix.
Another big barrier to the application of discriminant analysis in document classification is its large computation cost. Traditional discriminant analysis requires a large amount of computation on matrix inversion, SVD, and eigenvalue analysis. The costs of these operations are extremely large in document analysis because the matrices have thousands of dimensions. Our approach makes use of effective feature selection via information gain, with which we can remove up to 90% or more of the unique terms without significant performance degradation [35]. One of our future plans is to explore how the performance correlates with different feature selection methods and the number of words selected. There are also other possible extensions, such as using random projection to reduce the dimensionality before applying discriminant analysis [27].
Acknowledgments
This work is supported in part by NSF grants EIA-0080124, DUE-9980943
, and EIA-0205061, and NIH grant P30-AG18254.
REFERENCES
[1] Allwein, E. L., Schapire, R. E., & Singer, Y. (2000). Reducing
multiclass to binary: A unifying approach for margin
classifiers. ICML-00 (pp. 916).
[2] Apte, C., Damerau, F., & Weiss, S. (1998). Text mining with
decision rules and decision trees. Proceedings of the Workshop
with Conference on Automated Learning and Discovery:
Learning from text and the Web.
[3] Chakrabarti, S., Roy, S., & Soundalgekar, M. V. (2002). Fast
and accurate text classification via multiple linear discriminant
projections. Proceedings of the 28th International Conference
on Very Large Databases (pp. 658669).
[4] Cohen, W. W., & Singer, Y. (1996). Context-sensitive learning
methods for text categorization. Proceedings of the 19th Annual
International ACM SIGIR Conference on Research and
Development in Information (pp. 307315).
[5] Collobert, R., & Bengio, S. (2001). SVMTorch: Support
vector machines for large-scale regression problems. Journal of
Machine Learning Research, 1, 143160.
[6] Deerwester, S. C., Dumais, S. T., Landauer, T. K., Furnas,
G. W., & Harshman, R. A. (1990). Indexing by latent semantic
analysis. Journal of the American Society of Information
Science, 41, 391407.
[7] Demmel, J., & Veselic, K. (1992). Jacobi's method is more accurate than QR. SIAM Journal on Matrix Analysis and Applications, 13, 1019.
[8] Dietterich, T. G., & Bakiri, G. (1995). Solving multiclass
learning problems via error-correcting output codes. Journal of
Artificial Intelligence Research, 2, 263286.
[9] Dumais, S., Platt, J., Heckerman, D., & Sahami, M. (1998).
Inductive learning algorithms and representations for text
categorization. CIKM-98 (pp. 148155).
[10] Fisher, R. (1936). The use of multiple measurements in
taxonomic problems. Annals of Eugenics, 7, 179188.
[11] Fragoudis, D., Meretakis, D., & Likothanassis, S. (2002).
Integrating feature and instance selection for text classification.
SIGKDD-02 (pp. 501506).
[12] Fukunaga, K. (1990). Introduction to statistical pattern
recognition. Academic Press.
[13] Ghani, R. (2000). Using error-correcting codes for text
classification. ICML-00 (pp. 303310).
[14] Godbole, S., Sarawagi, S., & Chakrabarti, S. (2002). Scaling
multi-class support vector machine using inter-class confusion.
SIGKDD-02 (pp. 513518).
[15] Han, E.-H., Boley, D., Gini, M., Gross, R., Hastings, K.,
Karypis, G., Kumar, V., Mobasher, B., & Moore, J. (1998).
WebACE: A web agent for document categorization and
exploration. Agents-98 (pp. 408415).
[16] Hastie, T., & Tibshirani, R. (1998). Classification by
pairwise coupling. Advances in Neural Information Processing
Systems. The MIT Press.
[17] Joachims, T. (1998). Making large-scale support vector
machine learning practical. In Advances in kernel methods:
Support vector machines.
[18] Joachims, T. (2001). A statistical learning model of text
classification with support vector machines. SIGIR-01 (pp.
128136).
[19] Lam, W., & Ho., C. (1998). Using a generalized instance set
for automatic text categorization. SIGIR-98 (pp. 8189).
[20] Lewis, D. D. (1998). Naive (Bayes) at forty: The
independence assumption in information retrieval. ECML-98.
[21] Loan, C. V. (1976). Generalizing the singular value
decomposition. SIAM J. Num. Anal., 13, 7683.
[22] Masand, B., Linoff, G., & Waltz., D. (1992). Classifying
news stories using memory based reasoning. SIGIR-92 (pp.
5964).
[23] McCallum, A. K. (1996). Bow: A toolkit for statistical
language modeling, text retrieval, classification and clustering.
http://www.cs.cmu.edu/ mccallum/bow.
[24] Mika, S., Ratsch, G., Weston, J., Scholkopf, B., & Muller,
K.-R. (1999). Fisher discriminant analysis with kernels. Neural
Networks for Signal Processing IX (pp. 4148). IEEE.
[25] Ng, H. T., Goh, W. B., & Low, K. L. (1997). Feature
selection, perceptron learning, and a usability case study for
text categorization. Proceedings of the 20th Annual
International ACM SIGIR Conference on Research and
Development in Information (pp. 6773).
[26] Nigam, K., Lafferty, J., & McCallum, A. (1999). Using
maximum entropy for text classification. In IJCAI-99 Workshop
on Machine Learning for Information Filtering (pp. 6167).
[27] Papadimitriou, C. H., Tamaki, H., Raghavan, P., & Vempala,
S. (1998). Latent semantic indexing: A probabilistic analysis.
Proceedings of the Symposium on Principles of Database
Systems (pp. 159168).
[28] Schapire, R. E., & Singer, Y. (2000). Boostexter: A
boosting-based system for text categorization. Machine
Learning, 39, 135168.
[29] Scholkopf, B., & J.Smola, A. (2002). Learning with kernels.
MIT Press.
[30] TDT2 (1998). Nist topic detection and tracking corpus.
http://www.nist.gove/speech/tests/tdt/tdt98/index.htm.
[31] Tzeras, K., & Hartmann, S. (1993). Automatic indexing
based on Bayesian inference networks. SIGIR-93 (pp. 2234).
[32] Vapnik, V. N. (1998). Statistical learning theory. Wiley, New
York.
[33] Wiener, E. D., Pedersen, J. O., & Weigend, A. S. (1995). A
neural network approach to topic spotting. 4th Annual
Symposium on Document Analysis and Information Retrieval
(pp. 317332).
[34] Yang, Y., & Liu, X. (1999). A re-examination of text
categorization methods. SIGIR-99 (pp. 4249).
[35] Yang, Y., & Pederson, J. O. (1997). A comparative study on
feature selection in text categorization. ICML-97 (pp. 412420).
| multi-class classification;text categorization;GSVD;Discriminant Analysis;Multi-class Text Categorization;SVMs;GDA;discriminant analysis |
78 | Efficient Phrase Querying with an Auxiliary Index | Search engines need to evaluate queries extremely fast, a challenging task given the vast quantities of data being indexed. A significant proportion of the queries posed to search engines involve phrases. In this paper we consider how phrase queries can be efficiently supported with low disk overheads. Previous research has shown that phrase queries can be rapidly evaluated using nextword indexes, but these indexes are twice as large as conventional inverted files. We propose a combination of nextword indexes with inverted files as a solution to this problem. Our experiments show that combined use of an auxiliary nextword index and a conventional inverted file allow evaluation of phrase queries in half the time required to evaluate such queries with an inverted file alone, and the space overhead is only 10% of the size of the inverted file. Further time savings are available with only slight increases in disk requirements. Categories and Subject Descriptors | INTRODUCTION
Search engines are used to find data in response to ad hoc
queries. On the Web, most queries consist of simple lists
of words. However, a significant fraction of the queries include
phrases, where the user has indicated that some of the
query terms must be adjacent, typically by enclosing them
in quotation marks. Phrases have the advantage of being
unambiguous concept markers and are therefore viewed as
a valuable retrieval technique [5, 6, 7, 10].
In this paper, we explore new techniques for efficient evaluation
of phrase queries.
A standard way to evaluate phrase queries is to use an
inverted index, in which for each index term there is a list
of postings, and each posting includes a document identifier,
an in-document frequency, and a list of offsets. These offsets
are the ordinal word positions at which the term occurs in
the document. Given such a word-level inverted index and a
phrase query, it is straightforward to combine the postings
lists for the query terms to identify matching documents.
This does not mean, however, that the process is fast. Even
with an efficient representation of postings [16], the list for
a common term can require several megabytes for each gigabyte
of indexed text. Worse, heuristics such as frequency-ordering
[13] or impact-ordering [1] are not of value, as the
frequency of a word in a document does not determine its
frequency of participation in a particular phrase.
A crude solution is to use stopping, as is done by some
widely-used web search engines (the Google search engine,
for example, neglects common words in queries), but this
approach means that a small number of queries cannot be
evaluated, while many more evaluate incorrectly [12]. Another
solution is to index phrases directly, but the set of
word pairs in a text collection is large and an index on such
phrases difficult to manage.
In recent work, we proposed nextword indexes as a way
of supporting phrase queries and phrase browsing [2, 3, 15].
In a nextword index, for each index term or firstword there
is a list of the words or nextwords that follow that term,
together with the documents and word positions at which
the firstword and nextword occur as a pair. The disadvantage
of a nextword index is its size, typically around half
that of the indexed collection. Also, as described originally,
nextword index processing is not particularly efficient, as the
nextwords must be processed linearly and (compared to an
standard inverted index) for rare firstwords the overhead of
the additional layer of structure may outweigh the benefits.
In this paper we propose that phrase queries be evaluated
through a combination of an inverted index on rare
words and a form of nextword index on common words. We
explore the properties of phrase queries and show experimentally
that query evaluation time can be halved if just
the three most common firstwords are supported through
a nextword index. While phrase browsing is not possible
with such an arrangement, the disk overheads of the partial
nextword index are small and the benefits are substantial.
We have observed that many ordinary queries -- those
without quotation marks -- nonetheless resolve successfully
if processed as a phrase query, a phenomenon that search
engine users are familiar with, as the most popular engines
highly rank matches in which the query terms are adjacent.
This suggests that phrase querying is a potential method for
a fast "first cut" evaluation method, as it allows more rapid
identification of documents in which the terms occur as a
phrase.
PROPERTIES OF QUERIES
With large web search engines being used daily by millions
of users, it has become straightforward to gather large
numbers of queries and see how users are choosing to express
their information needs. Some search engine companies have
made extracts of their query logs freely available. In our research
, we have made extensive use of query logs provided
by Excite dating to 1997 and 1999, as well as more recent
logs from other sources.
These logs have similar properties
(with regard to our purposes), and we report primarily
on the Excite logs in this work. In the Excite log, after
sanitizing to remove obscenity there are 1,583,922 queries
(including duplicates). Of these, 132,276 or 8.3% are explicit
phrase queries, that is, they include a sequence of two
or more words enclosed in quotes. Amongst the rest of the
queries--those without a phrase-- about 5% contain a word
that does not occur at all in the 21.9 gigabytes (Gb) of
data we use. However, almost exactly 41% of the remaining
non-phrase queries actually match a phrase in the 21.9 Gb
dataset we use in our experiments.
A surprising proportion of the phrases include a common
term. Amongst the explicit phrase queries, 11,103 or 8.4%
include one of the three words that are commonest in our
dataset, "the", "to", and "of". 14.4% of the phrase queries
include one of the 20 commonest terms. In some of these
queries the common word has a structural function only, as
in tower of london, and can arguably be safely neglected
during query evaluation. In other queries, however, common
words play an important role, as in the movie title end
of days or the band name the who, and evaluation of these
queries is difficult with the common words removed, especially
when both "the" and "who" happen to be common
terms [12].
Taken together, these observations suggest that stopping
of common words will have an unpredictable effect. Stopping
may yield efficiency gains, but means that a significant
number of queries cannot be correctly evaluated. We experimented
with a set of 122,438 phrase queries that between them match 309 × 10^6 documents. Stopping of common words means that a query such as tower of london must be evaluated as tower -- london: the query evaluation engine knows that the two remaining query terms must appear with a single term between them. If the commonest three words are stopped, there are 390 × 10^6 total matches for all queries extracted from the log. However, these are distributed extremely unevenly amongst the queries: for some queries the great majority of matches are incorrect. The figure rises to 490 × 10^6 for the commonest 20 words, and 1693 × 10^6 for the commonest 254 words, while a significant number of queries, containing only stopped words, cannot be evaluated at all.
It can be argued that stopwords are often insignificant,
and that even a document that is technically a mismatch-due
to the wrong stopword being present--may be just as
likely to be relevant as a document where the match is correct
. However, it is worth emphasising that there are many
queries in which stopwords do play an important role. The
words "to" and "from" are often stopped, for example, but
mismatches to the query flights to london are likely to
be incorrect. Another instance is that the word "the" often
forms part of a description, thus the moon should not match
websites about a moon of Jupiter, Keith Moon, or a book
publisher.
Amongst the phrase queries, the median number of words
in a phrase is 2, and the average is almost 2.5. About 34%
of the queries have three words or more, and 1.3% have six
words or more. A few queries are much longer, such as titles:
the architect of desire beauty and danger in
the stanford white family by suzannah lessard.
Another point of interest is where in a phrase the common
words occur.
In English, the common words rarely
terminate a phrase query. Only 0.4% of phrase queries with
"the", "to", or "of" have these words at the end. Almost all
of these queries are short: virtually no queries of four words
or more terminate with one of the commonest terms. In
the short queries ending in a common term, the other query
terms are themselves usually common. We take advantage
of these trends in the methods for phrase query evaluation
proposed in this paper.
INVERTED INDEXES
Inverted indexes are the standard method for supporting
queries on large text databases; there are no practical alternatives
to inverted indexes that provide sufficiently fast
ranked query evaluation. An inverted index is a two-level
structure. The upper level is all the index terms for the collection
. For text databases, the index terms are usually the
words occurring in the text, and all words are included. The
lower level is a set of postings lists, one per index term. Following
the notation of Zobel and Moffat [17], each posting is a triple of the form
⟨ d, f_{d,t}, [o_1, ..., o_{f_{d,t}}] ⟩
where d is the identifier of a document containing term t, the frequency of t in d is f_{d,t}, and the o values are the positions in d at which t is observed. An example inverted file is shown in Figure 1. In this example, there is a vocabulary of five words, each of which has a postings list.
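As a concrete (hypothetical) in-memory analogue of this structure, one could hold an inverted file as a Python mapping from each term to its posting triples; the terms and numbers below are made up for illustration and real systems keep these lists compressed on disk.

# term -> list of (d, f_{d,t}, [o_1, ..., o_{f_{d,t}}]) triples.
inverted_file = {
    "hampshire": [(9, 1, [7])],
    "historic":  [(4, 2, [5, 34]), (12, 1, [88])],
    "railroads": [(9, 2, [4, 1001])],
}
for d, f, offsets in inverted_file["historic"]:
    print(d, f, offsets)   # document id, in-document frequency, word offsets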
It is straightforward to use an inverted index to evaluate
a phrase query. Consider the query magdalene sue
prentiss. Of these terms, "magdalene" is the rarest, and
its inverted list is fetched first. The postings are decoded
and a temporary structure is created, recording which documents
contain this word and the ordinal word positions in
each document at which it occurs. The term "prentiss" is
the next rarest, and is processed next. For each document
identifier and word offset in the temporary structure created
earlier, a posting is sought to see whether "prentiss"
is in the document two words later. If the search fails, that
word position is discarded from the temporary structure, as
is the document identifier if no word positions for that document
remain. As both the structure and the postings are
sorted, this process is a linear merge. Then the postings list for "sue" is fetched and decoded, and used to further delete entries from the temporary structure. The remaining entries are documents and word positions at which the phrase occurs.
[Figure 1: An inverted file for a collection with a vocabulary of five words (new, in, historic, hampshire, railroads); each in-memory vocabulary entry points to an on-disk postings list of ⟨d, f_{d,t}, [offsets]⟩ triples.]
Summarizing, phrase queries are evaluated as follows.
1. Sort the query terms from rarest to commonest, keeping
note of their original position in the phrase.
2. Fetch the postings list for the first (rarest) query term.
Decode this list into a temporary structure of document
identifiers and word offset positions.
3. For each remaining query term, decode its postings
list, merging it with the temporary data; this merge
process discards from the temporary structure all document
identifiers and word offsets that do not match
any entry in the postings list.
In this query evaluation model, processing of the first
query term establishes a superset of the possible locations
of the complete phrase, which are maintained in a temporary
structure; as the subsequent query terms are evaluated,
this structure is pruned, never added to. It is thus essential
to begin processing with the rarest query term, to avoid
creation of an excessively large temporary structure (or of
having to process the inverted lists in stages to stay within
a memory limit).
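Purely as an illustration of steps 1-3 (not the paper's implementation, which works over compressed on-disk lists), the evaluation can be sketched over a toy in-memory index of the form term -> {doc_id: [word offsets]}.

def phrase_query(index, terms):
    # Step 1: process terms from rarest to commonest (by document frequency).
    order = sorted(range(len(terms)), key=lambda i: len(index.get(terms[i], {})))
    # Step 2: candidate phrase-start positions from the rarest term.
    first = order[0]
    candidates = {(d, off - first)
                  for d, offs in index.get(terms[first], {}).items()
                  for off in offs}
    # Step 3: prune candidates with each remaining term's postings.
    for i in order[1:]:
        postings = index.get(terms[i], {})
        candidates = {(d, start) for d, start in candidates
                      if start + i in postings.get(d, [])}
    return sorted(candidates)

index = {"magdalene": {7: [3]}, "sue": {7: [4], 9: [1]}, "prentiss": {7: [5]}}
print(phrase_query(index, "magdalene sue prentiss".split()))   # -> [(7, 3)]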
A simple heuristic to address this problem is to directly
merge the inverted lists rather than decode them in turn. On
the one hand, merging has the disadvantage that techniques
such as skipping [11] cannot be as easily used to reduce processing
costs (although as we discuss later skipping does not
necessarily yield significant benefits). On the other hand,
merging of at least some of the inverted lists is probably the
only viable option when all the query terms are moderately
common.
Whether the lists are merged or processed in turn, the
whole of each list needs to be fetched (unless query processing
terminates early due to lack of matches). For ranked
query processing it is possible to predict which postings
in each inverted list are most likely to be of value, and
move these to the front of the inverted list; techniques for
such list modification include frequency-ordering [13] and
impact-ordering [1]. With these techniques, only the first of
the inverted lists need be fetched during evaluation of most
queries, greatly reducing costs.
In contrast, for phrase querying it is not simple to predict
which occurrences of the term will be in a query phrase, and
thus such reordering is unlikely to be effective. Offsets only have to be decoded when there is a document match, but they still have to be retrieved.
Table 1: Size of inverted index (Mb) after stopping of common words.
Number of words stopped    Index size (Mb)
0      2350
3      2259
6      2195
10     2135
20     2089
254    1708
Other techniques do have the potential to reduce query
evaluation time, in particular skipping [11], in which additional
information is placed in inverted lists to reduce the
decoding required in regions in the list that cannot contain
postings that will match documents that have been identified
as potential matches. On older machines, on which CPU
cycles were relatively scarce, skipping could yield substantial
gains. On current machines, however, disk access costs
are the more important factor, and in other experiments we
have observed that the increase in length of lists required
by skipping outweighs the reduction in decoding time. We
therefore do not use skipping in our experiments.
We have implemented a phrase query evaluator based on
inverted lists, using compression techniques similar to those
employed in MG [16] to reduce costs, and have used it to
test the efficiency of phrase query evaluation. Our test data
is 21.9 Gb of HTML containing about 8.3 Gb of text (drawn
from the TREC large web track [9]).
Table 1 shows the size of the index with a range of levels
of stopping. As can be seen, the three commonest words
account for around 4% of the index size, and only small space
savings are yielded by stopping. However, as Table 2 shows,
the impact of stopping on query evaluation time is dramatic.
Just removing the three commonest words reduces average
time by about 60%, and by a factor of 3 for longer queries.
For these longer queries, the savings continue to increase as
more common words are stopped. It is the scale of these
savings that make stopping attractive, despite the fact that
they are at the cost of inaccurate query results.
Table 2: Times for phrase query evaluation (seconds) on an inverted index after stopping of common words. Results are shown for all queries; 2-word queries only; and 5-word queries only.
Number of words stopped   Overall time (sec)   2-word queries   5-word queries
0      1.56   0.49   6.41
3      0.66   0.30   1.94
6      0.45   0.29   1.07
10     0.40   0.28   0.81
20     0.37   0.28   0.70
254    0.18   0.16   0.26
NEXTWORD INDEXES
Inverted indexes allow evaluation of phrase queries, but
faster evaluation is possible with phrase-oriented indexes.
One possibility is to use a conventional inverted index in
which the terms are word pairs. Another way to support
phrase based query modes is to index and store phrases
directly [8] or simply by using an inverted index and approximating
phrases through a ranked query technique [5,
10]. Greater efficiency, with no additional in-memory space
overheads, is possible with a special-purpose structure, the
nextword index [15], where search structures are used to accelerate
processing of word pairs. The nextword index takes
the middle ground by indexing pairs of words and, therefore,
is particularly good at resolving phrase queries containing
two or more words. As noted above and observed elsewhere,
the commonest number of words in a phrase is two [14].
A nextword index is a three-level structure. The highest
level is of the distinct index terms in the collection, which we
call firstwords. At the middle level, for each firstword there
is a data structure (such as a front-coded list, or for fast
access a structure such as a tree) of nextwords, which are
the words observed to follow that firstword in the indexed
text. For example, for the firstword "artificial", nextwords
include "intelligence", "insemination", and "hip". At the
lowest level, for each nextword there is a postings list of the
positions at which that firstword-nextword pair occur.
An example nextword index is shown in Figure 2. In this
example, there are two firstwords, "in" and "new". Some
of the nextwords for "in" are "all", "new", and "the". For
each firstword-nextword pair, there is a postings list. (A
nextword index is of course a form of inverted index, but for
consistency with other work we use "inverted index" solely
to refer to a standard word-level inverted file.)
In nextword indexes, the postings lists are typically short,
because most pairs only occur infrequently. For example,
the postings list for the firstword-nextword pair "the"-"who" is orders of magnitude smaller than the postings lists for
these words in an inverted file. It follows that phrase query
evaluation can be extremely fast.
Nextword indexes also have the benefit of allowing phrase
browsing or phrase querying [4, 15]; given a sequence of
words, the index can be used to identify which words follow
the sequence, thus providing an alternative mechanism
for searching text collections. However, we do not consider
phrase browsing further in this paper.
For phrase queries of more than two words, multiple postings lists must be fetched from the nextword index to resolve the query. Selection of which lists to fetch requires a little care. For example, the query
boulder municipal employees credit union
can be resolved by fetching the postings lists for the firstword-nextword pairs "boulder"-"municipal", "employees"-"credit", and "credit"-"union". Alternatively, it would be possible to get the lists for "boulder"-"municipal", "municipal"-"employees", and "credit"-"union". Which is most efficient depends on which is shorter: the list for "employees"-"credit" or the list for "municipal"-"employees". Unfortunately, establishing which is shorter requires two disk accesses, to retrieve the nextwords for "employees" and "municipal". However, we have observed that the frequency of a firstword closely correlates with the lengths of its nextword lists. Thus in the query
historic railroads in new hampshire
we can with confidence choose "railroads"-"in" in preference to "in"-"new", because "railroads" is much less common than "in". We have considered algorithms for choosing the order of evaluation elsewhere [3]. An efficient algorithm for evaluating phrase queries with a nextword index is as follows (a small code sketch follows the list).
1. If the number of query terms n is even, the query can be resolved with n/2 disjoint firstword-nextword pairs. If the number of query terms n is odd, at least ⌈n/2⌉ firstword-nextword pairs must be chosen. However, in both cases it is more efficient to choose more than the minimum number of pairs, if doing so avoids choice of a common word as a firstword.
2. The method we use is to choose all n - 1 firstword-nextword pairs, sort them by increasing firstword frequency, and then discard from the list the pairs that are completely covered by preceding selections. This approach can lead to processing of more than n/2 pairs, but experimentally was shown to reduce costs overall.
3. The selected word pairs are sorted by increasing frequency of their firstwords, then their postings lists are processed as for inverted file phrase query processing.
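A toy, self-contained sketch of the nextword-index idea (not the authors' implementation): the on-disk layout, compression, and the pair-pruning heuristics of steps 1-2 are omitted, and for simplicity every adjacent pair of the query is intersected.

from collections import defaultdict

def build_nextword_index(docs):
    # docs: {doc_id: list of tokens} -- an assumed input format, not from the paper.
    # firstword -> nextword -> {doc_id: [positions of the firstword]}.
    index = defaultdict(lambda: defaultdict(lambda: defaultdict(list)))
    for doc_id, words in docs.items():
        for i in range(len(words) - 1):
            index[words[i]][words[i + 1]][doc_id].append(i)
    return index

def phrase_match(index, query):
    words = query.split()
    candidates = None
    for i in range(len(words) - 1):
        postings = index[words[i]][words[i + 1]]
        # Normalise positions to where the whole phrase would start.
        current = {(d, p - i) for d, ps in postings.items() for p in ps}
        candidates = current if candidates is None else candidates & current
    return sorted(candidates or [])

docs = {1: "historic railroads in new hampshire".split(),
        2: "new hampshire railroads".split()}
idx = build_nextword_index(docs)
print(phrase_match(idx, "in new hampshire"))   # -> [(1, 2)]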
The nextword index for our Web collection is 4487 Mb in
size, almost exactly twice that of an inverted file. For phrase
queries, the savings in query evaluation time are dramatic.
Average query evaluation time is reduced to 0.06 seconds,
faster than inverted files by a factor of 25. For two-word
queries, the time falls to 0.01 seconds, which is faster by a
factor of 50. The time for 5-word queries is 0.32 seconds.
An interesting possibility suggested by these results is
that--given space for a nextword index--all queries be evaluated
as if they were phrases. We observed above that a
significant fraction of all queries successfully evaluate, and
indeed on browsing the query logs it is obvious that many
of the queries without quotation marks are nonetheless intended
to be phrases. Spink et al. [14] suggest that most
two-word queries should be treated as a phrase query even
if they were entered as a ranked query. Given that search
engines return as highest matches the pages in which the
query words appear in sequence, use of a nextword index provides a rapid mechanism for finding these pages.
[Figure 2: A nextword index with two firstwords, "in" and "new". For each firstword there is an on-disk list of nextwords (for example "all", "new", "the", ... under "in", and "age", "hampshire", "house", ... under "new"), and each firstword-nextword pair has its own on-disk postings list.]
Much of the speed improvement for phrase queries yielded
by nextword indexes is for queries involving a non-rare word.
Indeed, for queries of rare words there may be little gain, as
query processing with nextword indexes involves more complex
structures than does processing with inverted indexes.
As the two approaches to phrase query processing appear,
then, to have complementary advantages, it is attractive to
try to combine their strengths.
COMBINED QUERY EVALUATION
We have observed above that inverted indexes are the
least efficient for phrases involving common words, the case
where nextword indexes yield the greatest speed advantage.
We therefore propose that common words only be used as
firstwords in a stripped-down nextword index, and that this
new index be used where possible in evaluation of phrase
queries. We call this a top frequency based scheme, since
only the most frequent words are indexed in the nextword
index. We have explored other schemes based on the frequency
of words in the indexed collection, or based on the
frequency of words in the query log. None of the investigated
schemes offered a better space and time trade-off, so
we report only results from the top frequency scheme.
An example of a top frequency combined index is shown
in Figure 3. At the left there is a vocabulary of five words.
Each word has an inverted list, together constituting a complete
inverted file for these words. In addition, the common
words "in" and "new" have a nextword index.
With a combined index, processing involves postings lists from both the inverted index and the nextword index. Consider again the query:
historic railroads in new hampshire
Neither "historic" nor "railroads" is a common word, so establishing that these terms occur in the phrase involves fetching their postings lists from the inverted index and processing them in the usual way. However, "in" and "new" are both common. The postings list for the firstword-nextword pair "in"-"new" must be fetched from the nextword index and processed. Then there is a choice. On the one hand, the nextword index postings list for "new"-"hampshire" cannot be longer than the inverted index postings list for "hampshire", and in all likelihood is a great deal shorter. On the other hand, compared to the inverted index, an extra disk access is required to fetch a postings list from the nextword index. In our implementation, we process using the nextword index if possible, and resort to the inverted index only for terms that are not in an indexed firstword-nextword pair.
In summary, we use the following process:
1. Identify all pairs in the list in which the first term is
an indexed firstword. Sort these terms, and prune the
list as for standard evaluation of phrase queries via a
nextword index.
2. For all terms not in a firstword-nextword pair, sort.
3. Process the postings lists in increasing order of firstword
frequency, so that processing of nextword index
lists and of inverted file lists is interleaved.
In this model, a common word need only be evaluated via its
postings list in the inverted file if it occurs as the last word
in a query, which in the Excite query log is a rare event.
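An illustrative partitioning of a phrase query under this combined scheme (our sketch, not the paper's code): pairs whose first word is a nextword-indexed common term are routed to the auxiliary index, and uncovered terms fall back to the inverted file. The COMMON set below is an assumed stand-in, and the actual system additionally sorts the selected lists by firstword frequency before processing.

COMMON = {"the", "to", "of", "in", "new"}     # assumed set of indexed firstwords

def plan_query(terms):
    covered = set()
    pair_ops, single_ops = [], []
    for i in range(len(terms) - 1):
        if terms[i] in COMMON:
            pair_ops.append((i, terms[i], terms[i + 1]))   # nextword-index lookup
            covered.update({i, i + 1})
    for i, t in enumerate(terms):
        if i not in covered:
            single_ops.append((i, t))                      # inverted-file lookup
    return pair_ops, single_ops

print(plan_query("historic railroads in new hampshire".split()))
# -> ([(2, 'in', 'new'), (3, 'new', 'hampshire')], [(0, 'historic'), (1, 'railroads')])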
We have tested other query resolution methods that involved
term sorting based on nextword frequency (or NWF,
the number of nextwords for a firstword), inverted document
frequency (or IDF, the number of documents in which
a word occurs), or both. We also experimented with resolving
nextword entries of a given query always first, or always
last. We found overall that these different resolution methods
did not significantly vary in query speed and behaved
almost identically to sorting by IDF only. We therefore sort
inverted index terms and nextword terms based on IDF, since we do not need to keep another statistical value per index term and sorting is straightforward.
EXPERIMENTAL RESULTS
All experiments were run on an Intel 700 MHz Pentium
III-based server with 2 Gb of memory, running the Linux
operating system under light load. In Table 3 we show sizes
of nextword indexes in which only the commonest terms are
allowed as firstwords. The table shows that a nextword index
that contains only the three commonest terms consumes
254 Mb, that is, just over 10% of the space of the inverted
index or around 1% of the size of the original HTML collection
.
Query evaluation time with a combined index is shown
in Table 4. (The "0" line is repeated from Table 2.) As
can be seen, use of a nextword index allows evaluation of all
phrase queries, and much more rapidly than was previously
possible. Use of a partial nextword index of 1% of the HTML
collection halves query evaluation time; a partial nextword
index of less than 3% of the size of the collection cuts time to a third.
[Figure 3: A combined inverted file and nextword index. Every vocabulary word (in, hampshire, historic, new, railroads) has an inverted list; in addition, the common words "in" and "new" also have on-disk nextword lists whose firstword-nextword pairs carry their own postings.]
Table 3: Size of nextword index (Mb) containing only common firstwords.
Number of common words    Index size (Mb)
3      254
6      427
10     520
20     657
254    1366
These are substantial savings at low cost. Phrase query
processing time with a nextword index is only slightly greater
than with a stopped inverted file, and no answers are lost.
Such combined processing can be integrated with other
heuristics for phrase query evaluation. For example, a strategy
that is likely to be successful in the context of a web
search engine is to maintain indexes (perhaps for a limited
time only) on phrases, or word pairs from phrases,
that are commonly posed as queries. Amongst our 132,276
queries, 72,184 are distinct. The commonest phrase query
(thumbnail post) occurs 205 times and involves no common
words. The queries themselves contain 92,846 distinct
word pairs; the commonest pair occurs 683 times. Indexing
of common query pairs has the potential to yield significant
further savings.
CONCLUSIONS
We have proposed that phrase queries on large text collections
be supported by use of a small auxiliary index. In
this approach, all words in the text are indexed via an inverted
file; in addition, the commonest words are indexed
via an auxiliary nextword index, which stores postings lists
for firstword-nextword pairs. We have shown that the cost
of evaluating phrase queries can be cut by a factor of three,
with an auxiliary index that is only 3% of the size of the
indexed data.
Table 4: Times for phrase query evaluation (seconds) on a combined index, with different numbers of common words used in the nextword index. Results are shown for all queries; 2-word queries only; and 5-word queries only.
Number of common words   Overall time (sec)   2-word queries   5-word queries
0      1.56   0.49   6.41
3      0.76   0.31   2.99
6      0.57   0.31   2.28
10     0.53   0.30   2.10
20     0.50   0.30   1.98
254    0.46   0.27   1.83
These results show that there is no need to use stopping
in phrases. Indeed, the statistics on the number of matches
indicate that such stopping leads to significant error rates.
While it can be argued that mistakes in matching due to
stopping of common words are in many cases unimportant,
there are numerous queries where the stopwords are significant
; moreover, we have demonstrated that there is no reason
to make such mistakes at all.
Our schemes have scope for improvement. In particular,
choosing of pairs during query evaluation requires further
exploration, and we are further investigating structures for
representing nextword lists. However, our results show that
evaluation of phrase queries can be dramatically accelerated
with only a small additional index, and that stopping of
phrases leads to errors and is not necessary for efficiency.
Acknowledgements
This research was supported by the Australian Research
Council. We thank Amanda Spink, Doug Cutting, Jack Xu,
and Excite Inc. for providing the query log.
REFERENCES
[1] V. N. Anh, O. Kretser, and A. Moffat. Vector-Space
ranking with effective early termination. In W. B.
Croft, D. J. Harper, D. H. Kraft, and J. Zobel, editors, Proc. ACM-SIGIR International Conference on Research and Development in Information Retrieval, pages 35-42, New York, Sept. 2001.
[2] D. Bahle, H.E. Williams, and J. Zobel. Compaction
techniques for nextword indexes. In Proc. 8th
International Symposium on String Processing and
Information Retrieval (SPIRE2001), pages 3345, San
Rafael, Chile, 2001.
[3] D. Bahle, H. E. Williams, and J. Zobel. Optimised
phrase querying and browsing in text databases. In
M. Oudshoorn, editor, Proc. Australasian Computer
Science Conference, pages 1119, Gold Coast,
Australia, Jan. 2001.
[4] P. Bruza, R. McArthur, and S. Dennis. Interactive
internet search: keyword, directory and query
reformulation mechanisms compared. In N. J. Belkin,
P. Ingwersen, and M.-K. Leong, editors, Proc.
ACM-SIGIR International Conference on Research
and Development in Information Retrieval, pages
280287, Athens, 2000.
[5] C. L. Clarke, G. V. Cormack, and E. A. Tudhope.
Relevance ranking for one- to three-term queries. In
Proc. of RIAO-97, 5th International Conference
"Recherche d'Information Assistee par Ordinateur",
pages 388400, Montreal, CA, 1997.
[6] W. B. Croft, H. R. Turtle, and D. D. Lewis. The use
of phrases and structured queries in information
retrieval. In A. Bookstein, Y. Chiaramella, G. Salton,
and V. V. Raghavan, editors, Proc. ACM-SIGIR
International Conference on Research and
Development in Information Retrieval, pages 3245,
Chicago, 1991.
[7] E. F. de Lima and J. O. Pedersen. Phrase recognition
and expansion for short, precision-biased queries based
on a query log. In Proc. ACM-SIGIRInternational
Conference on Research and Development in
Information Retrieval, pages 145152, Berkeley, 1999.
[8] C. Gutwin, G. Paynter, I. Witten, C. NevillManning,
and E. Frank. Improving browsing in digital libraries
with keyphrase indexes. Decision Support Systems,
27(1/2):81104, 1998.
[9] D. Hawking, N. Craswell, P. Thistlewaite, and
D. Harman. Results and challenges in web search
evaluation. In Proc. of the Eighth International
World-Wide Web Conference, volume 31, pages
13211330, May 1999.
[10] D. D. Lewis and W. B. Croft. Term clustering of
syntactic phrases. In J.-L. Vidick, editor, Proc.
ACM-SIGIRInternational Conference on Research
and Development in Information Retrieval, pages
385404, 1990.
[11] A. Moffat and J. Zobel. Self-indexing inverted files for
fast text retrieval. ACM Transactions on Information
Systems, 14(4):349379, 1996.
[12] G. W. Paynter, I. H. Witten, S. J. Cunningham, and
G. Buchanan. Scalable browsing for large collections:
A case study. In Proc. of the 5th ACM International
Conference on Digital Libraries, pages 215223, San
Antonio, 2000.
[13] M. Persin, J. Zobel, and R. Sacks-Davis. Filtered
document retrieval with frequency-sorted indexes.
Journal of the American Society for Information
Science, 47(10):749764, 1996.
[14] A. Spink, D. Wolfram, B. J. Jansen, and T. Saracevic.
Searching the web: The public and their queries.
Journal of the American Society for Information
Science, 52(3):226234, 2001.
[15] H. Williams, J. Zobel, and P. Anderson. What's next?
index structures for efficient phrase querying. In
M. Orlowska, editor, Proc. Australasian Database
Conference, pages 141152, Auckland, New Zealand,
1999.
[16] I. H. Witten, A. Moffat, and T. C. Bell. Managing
Gigabytes: Compressing and Indexing Documents and
Images. Morgan Kaufmann, San Francisco, California,
second edition, 1999.
[17] J. Zobel and A. Moffat. Exploring the similarity
space. SIGIRForum, 32(1):1834, 1998.
| common words;evaluation efficiency;stopping;Indexing;nextword index;index representation;phrase query evaluation;query evaluation;phrase query;inverted index |
79 | Efficient retrieval of similar shapes | We propose an indexing technique for the fast retrieval of objects in 2D images based on similarity between their boundary shapes. Our technique is robust in the presence of noise and supports several important notions of similarity including optimal matches irrespective of variations in orientation and/or position. Our method can also handle size-invariant matches using a normalization technique, although optimality is not guaranteed here. We implemented our method and performed experiments on real (hand-written digits) data. Our experimental results showed the superiority of our method compared to search based on sequential scanning, which is the only obvious competitor. The performance gain of our method increases with any increase in the number or the size of shapes. | Introduction
There is an increasing interest in storing and retrieving non-textual
objects in databases. For example, this kind of data can be stored in the
form of extenders in DB2, DataBlades in Informix, and cartridges in Oracle.
Non-textual objects are frequently in the form of images or shapes. In cases
where the key information for description or classification of an object can
be found in its boundary, it is natural to store only the boundary and do
retrieval based on that. Among the areas of application for boundary shape
matching are industrial inspection, object recognition in satellite images,
character recognition, classification of chromosomes, and target recognition.
For example, consider the following query:

Query 1
Find all shapes similar to a given shape.

A basic question here is how we judge whether two shapes (for example the two
shown in Fig. 1) are similar. There is a large body of work in the area of
pattern recognition and computer vision on extracting boundary features of a
shape and doing shape matching based on those features. The boundary of an
object can be described in terms of simple descriptors such as length,
diameter, and curvature ([MM86]), chain codes ([BG80,Bri81]), Fourier
descriptors ([PF77,ZR72]) or moments ([BSA91]). Among these features, we use
Fourier descriptors as our shape features. Theoretical and experimental
evidence in favor of Fourier descriptors can be found in the literature
[PF77,KSP95].

Fig. 1. Two shape boundaries both representing the character `9'
Similar shapes often have differences in size and orientation. For example,
consider the two shapes shown in Fig. 1. The Euclidean distance between their
Fourier descriptors is 22.88. If we rotate the shape on the right by 30° in
the clockwise (cw) direction, the Euclidean distance between their Fourier
descriptors drops to zero. A simple approach to remove differences due to
shifting, scaling, and rotation is to normalize Fourier descriptors before
storing them in a database. However, there are still two problems with
normalization. First, normalization is not guaranteed to minimize the
distance between two arbitrary shapes. Second, normalization is not always
desirable; for example, the shapes `9' and `6' should not be treated as
similar if we are doing character recognition. A solution is to rewrite the
query as follows:

Query 2
Find all shapes that become similar to a given shape after being rotated by
an angle in [-30°, 30°].

If our shape collection includes, for example, shapes of airplanes, we may
write our query instead as follows:

Query 3
Find all shapes similar to a given shape irrespective of rotation.
In this paper, we study the issue of efficiently processing these queries.
We show how to organize Fourier descriptors in a multidimensional index, and
how to efficiently use the index in processing a broad range of similarity
queries. Our goal is to develop an access method that can handle shapes of
various sizes and orientations, is much faster than sequential scanning, and
does not miss any qualifying data objects in the answers (false positives are
acceptable if they can be eliminated in a post-processing step without much
performance degradation).

The organization of the rest of the paper is as follows. Section 2 provides
some background material on related work, shape representation using Fourier
descriptors, and shape matching. In Sect. 3, we propose our technique for
indexing shapes and processing similarity queries. Section 4 presents
experimental results. We conclude in Sect. 5.
Background
2.1 Related work
The following relevant methods for multidimensional indexing and search have
been proposed:

Jagadish [Jag91] proposes a technique for storing and retrieving shape
descriptions in a multidimensional index. He maps shapes into their
constituent rectangles, keeps a few larger rectangles in a multidimensional
index, and uses the area difference between the constituent rectangles of
shapes as a measure of similarity. Due to a normalization process, the shape
description is invariant under translation and scaling. A problem with this
approach is that a shape can normally be covered by multiple sets of
rectangles. This can lead to ambiguity or to storing multiple representations
of the same shape. Furthermore, it is not possible to do matching in the
presence of rotation; for example, two identical shapes may not match if one
is rotated by 45°.
Mehrotra and Gary [MG93] decompose a shape into several components and use
fixed-sized segments of each component as the shape features. Based on a
normalization process, the shape description is made invariant under
translation, scaling, and rotation. A problem with this approach is that
since a shape is broken down into pieces, the overall shape of the boundary
is lost. In addition, each shape is described in terms of multiple feature
vectors, and this introduces extra overhead during insertions and retrievals.
Berchtold et al. [BKK97] study the issue of storing polygons so that they can
be retrieved based on partial similarity matches. They extract almost all
possible boundary segments of polygons, transform each segment into a
sequence of slope changes, and map the resulting sequences into their first
few Fourier coefficients. Thus, each polygon is represented using a set of
feature points, and the minimum bounding rectangle of these points for each
polygon is stored in a multidimensional index. Due to a normalization, the
shape representation is invariant to translation, scaling, and rotation, but
it is not invariant to the starting point. This problem is handled by storing
multiple descriptions of a polygon, each associated to a starting point.
Again, representing a polygon in terms of multiple points introduces extra
overhead during insertions and retrievals.
The QBIC (Query By Image Content) system [FBF+94] contains a component for
approximate shape matching. The system keeps a 20-D feature vector to
describe the shape of every object identified in an image. Features, for
example, include the area and the circularity, i.e., whether the object is
circular or not. To allow fast retrieval, it is suggested to transform
feature vectors using the Karhunen-Loeve (KL) transform and keep a few
important features (those associated with the few largest eigenvalues) in a
multidimensional index. However, the choice of proper features and their
weighting for each application is not an easy task. Some features are
abstract quantities which may not easily fit in a distance function
computation. In addition, the use of the KL transform makes the
multidimensional index rather static.

Fig. 2. A boundary and its representation as a complex sequence
The aforementioned methods are less general than ours because the notion of
similarity is fixed before query evaluation; this notion cannot be changed
unless a new index structure is created. Our method, instead, provides a set
of transformations to express the notion of similarity in a query; yet, the
resulting queries are evaluated using the same index, without prior knowledge
of the specific transformations used. Therefore we have not compared the
performance of our method with theirs, but with sequential scanning instead.
Related work on time series data includes the work of Agrawal et al. [AFS93]
on using the discrete Fourier transform for retrieving similar time series,
and extensions and improvements over this approach [GK95,RM97,RM00]. Similar
to our framework, Goldin and Kanellakis [GK95] show that the similarity
retrieval will be roughly invariant to simple translations and scales if
sequences are normalized before being stored in the index. The authors store
in the index both the translation and the scale factors, in addition to
normalized sequences, and also allow those factors to be queried using range
predicates (see Goldin's Ph.D. thesis [Gol97] for implementation details).

A general framework for composing similarity queries is proposed by Jagadish,
Mendelzon, and Milo [JMM95]. Our work here can be seen as a special case of
this framework over shapes. Our shape matching can also be incorporated
within a multimedia query language such as MOQL [LOSO97], where multiple
features of images are simultaneously queried.
2.2 Shape representation using Fourier descriptors
Given the figure of an object in the complex plane, its boundary can be
traced, producing a 1-D complex function b_t of time. For example, a point
moving along the boundary shown in Fig. 2 generates the complex function
b_t = x_t + j y_t for t = 0, ..., N - 1, which is periodic with period N.
That is, the x-axis of the figure is treated as the real axis and the y-axis
as the imaginary axis of a sequence of complex numbers. Further information
on tracing the boundary of a shape and possible alternatives in representing
it can be found in any image processing textbook such as Gonzalez and Woods
[GW92].
It should be noted that the description is solely based on the shape of the
boundary; objects can still have holes in them, but this is not reflected in
the description. Given a boundary function b_t, its Fourier transform can be
written as

B_f = (1/N) Σ_{t=0}^{N-1} b_t e^{-j2πtf/N}    (1)

where f ∈ {-(N-1)/2, ..., 0, ..., (N-1)/2} and j = √(-1) is the imaginary
unit. The coefficients B_0, B_1, ..., called Fourier descriptors, describe
the shape of the object in the frequency domain. The transformation is
loss-less since the energy in the frequency domain is the same as the energy
in the spatial domain (due to Parseval's theorem) and also the inverse
Fourier transform gives the original boundary function.
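As an illustration (not part of the original paper), the descriptors of Eq. 1
can be obtained directly from a traced boundary with a standard FFT. The
sketch below uses numpy and assumes the boundary is already available as an
ordered list of (x, y) samples; the helper name fourier_descriptors is ours.

```python
import numpy as np

def fourier_descriptors(points):
    """Eq. 1: Fourier descriptors of a closed boundary given as ordered (x, y) samples."""
    b = np.array([x + 1j * y for x, y in points])   # b_t = x_t + j*y_t
    N = len(b)
    B = np.fft.fft(b) / N                           # B_f = (1/N) sum_t b_t e^{-j 2 pi t f / N}
    return np.fft.fftshift(B)                       # reorder so frequencies run negative..0..positive

# Example: descriptors of a circle traced counter-clockwise with 33 samples.
ts = np.arange(33)
circle = [(np.cos(2 * np.pi * s / 33), np.sin(2 * np.pi * s / 33)) for s in ts]
B = fourier_descriptors(circle)
```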
2.3 Shape matching using Fourier descriptors
Consider two boundary functions b_t = x_t + j y_t and b'_t = x'_t + j y'_t
(for t = 0, ..., N - 1). A typical measure of similarity between the two
boundaries is the Euclidean distance, which corresponds to the mean-square
error and which is also directly related to the cross-correlation [Raf98]:

D^2(b, b') = Σ_{t=0}^{N-1} |b_t - b'_t|^2    (2)

However, the distance computation becomes ambiguous if the two boundaries
have different numbers of samples. A solution to avoid this problem is to
find the Fourier descriptors B and B', respectively, for b and b', and use a
fixed number of lower frequency descriptors (say, 2M + 1) to compute the
Euclidean distance, i.e.,

D^2(B, B') = Σ_{f=-M}^{M} |B_f - B'_f|^2 .    (3)
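A minimal sketch of Eq. 3, assuming both descriptor arrays are ordered from
negative to positive frequency (as returned by the fftshift-based helper
above) and contain at least 2M + 1 coefficients; the function name is ours.

```python
import numpy as np

def truncated_distance(B, Bp, M):
    """Eq. 3: Euclidean distance restricted to the 2M+1 lowest-frequency descriptors."""
    def low(C):
        mid = len(C) // 2                     # index of the f = 0 coefficient
        return C[mid - M: mid + M + 1]
    return np.sqrt(np.sum(np.abs(low(B) - low(Bp)) ** 2))
```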
Our proposal
The general overview of the proposed method is as follows:
1. Obtain the Fourier descriptors of every shape boundary in the database.
2. Compute a fingerprint for every shape, as discussed in Sect. 3.1, and
build a multidimensional index using the fingerprints. Each fingerprint is
stored as a point in the multidimensional index.
3. For basic similarity queries (proximity, nearest neighbours and
all-pairs), use the index to retrieve candidate shapes. The qualifying shapes
are identified after retrieving their full database records and examining
them.
4. For queries that use transformations in their expressions of similarities,
if necessary, apply the transformations to the index, as discussed in
Sect. 3.4, and retrieve candidate shapes. The full database record of every
candidate shape is examined to find out if it qualifies.

We use Fourier descriptors as our shape features. Given a set of shape
boundaries, for each boundary b we find its Fourier transform and retain only
a fixed number of lower frequency descriptors. This number, which we denote
by 2M + 1, can be chosen, for example, to be the average length of a boundary
in the spatial domain. If the number of Fourier descriptors happens to be
less than 2M + 1, we store zero for higher frequency descriptors.
3.1 Computing a fingerprint
To aid in the retrievals that we intend to perform, we apply a few
transformations to the descriptors, rather than storing them directly. First,
we set B_0 to 0. B_0 is the only descriptor that carries information about
the shape location. This setting minimizes the distance function (Eq. 3) with
respect to translation. Next, the scale normalization is achieved by dividing
every coefficient B_f by the amplitude of B_1, often called the fundamental
frequency. |B_1| turns out to be the largest amplitude when the boundary is
traced in the counter-clockwise (ccw) direction and the boundary does not
cross itself [WW80]. After the normalization, B_0 is 0, so we do not need to
store it. Instead, we store the original value of B_0 before the
normalization. It should be noted that the real and the imaginary parts of
the initial value of B_0 represent the shift factors, respectively, along the
X and the Y coordinates; the amplitude of the initial value of B_1 represents
the scale factor. To totally get rid of B_1, which already has an amplitude
of 1 for all shapes, we do an additional normalization. We shift the starting
point such that the phase of B_1 becomes zero.
Definition 3.1. Given the Fourier descriptors B_{-M}, ..., B_M of a shape,
denote the real part of B_0 by sh_x, the imaginary part of B_0 by sh_y, the
amplitude of B_1 by sc, and the phase of B_1 by p. The shape description is
defined as the sequence

(sh_x, sh_y, sc, S_{-1}, S_2, S_{-2}, S_3, S_{-3}, ..., S_M, S_{-M}),    (4)

where S_i = ((B_i - (sh_x + sh_y j))/sc) e^{-ipj} (a complex number) for
i = -1, 2, -2, 3, -3, ....
The Euclidean distance between two shape descriptions, irrespective of
variations in location and size, can be computed as follows:

D^2(S, S') = Σ_{f=-M, f≠0,1}^{M} |S_f - S'_f|^2 .    (5)
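The normalization of Definition 3.1 and the distance of Eq. 5 translate into
a few lines of code; the sketch below is ours (a direct transcription, not
the authors' implementation), with descriptors kept in a dict keyed by
frequency f and the names shape_description and description_distance assumed.

```python
import numpy as np

def shape_description(B):
    """Definition 3.1.  `B` maps frequency f -> descriptor B_f for f = -M..M."""
    sh_x, sh_y = B[0].real, B[0].imag          # shift factors (original B_0)
    sc = abs(B[1])                             # scale factor |B_1|
    p = np.angle(B[1])                         # phase of B_1
    S = {f: ((Bf - (sh_x + 1j * sh_y)) / sc) * np.exp(-1j * f * p)
         for f, Bf in B.items() if f not in (0, 1)}   # S_i as stated in Definition 3.1
    return sh_x, sh_y, sc, S

def description_distance(S, Sp):
    """Eq. 5: distance over the stored descriptors (f != 0, 1)."""
    return np.sqrt(sum(abs(S[f] - Sp[f]) ** 2 for f in S))
```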
Such a description is still sensitive to changes in orientation and starting
point of the tracing. We can assume that every data or query shape has a
fixed starting point, if we encode its boundary using the same tracing
algorithm and perform the same normalization. For example, a tracing
algorithm may always start from the top right corner of a shape and trace it
in the ccw direction. In this way, the starting point for two identical
shapes will always be the same. Two similar shapes may still have small
variations in their starting points, but those variations can be easily
resolved by allowing some variations in starting points. This is discussed in
Sect. 3.4.3.
There are sophisticated techniques to do phase normalization [PF77,WW80]. For
example, Wallace et al. [WW80] suggest making the phases of the two
coefficients of largest amplitude equal to zero. This is believed to shift
the starting point over the axis of symmetry and also rotate the axis of
symmetry such that it coincides with the real axis. However, it should be
noted that none of these techniques are perfect, in the sense that a shape
can have two or more different phase normalizations, each as good as the
others; or equivalently, two fairly similar shapes may have descriptors which
are far from each other.
For the purpose of indexing, important features of the description need to be
identified and placed in the fingerprint. First, changing the orientation or
the starting point of a boundary only affects the phases of descriptors. To
insulate the index from such changes, the information about the phases of
descriptors is not stored in a fingerprint. Second, as is shown in Fig. 3,
the lower frequency descriptors contain information about the general shape,
and the higher frequency descriptors contain information about smaller
details. There are strong reasons to believe that for a large class of
boundary functions, the lower frequency descriptors contain most of the
energy. For example, for continuous piece-wise smooth functions, the
amplitude spectrum |S_f| decreases at a rate proportional to f^{-2}
[RH74, page 373]. Thus, we can define a fingerprint of a shape as follows:
Definition 3.2. Given a shape description (sh_x, sh_y, sc, S_{-1}, S_2,
S_{-2}, ..., S_M, S_{-M}), the fingerprint of the shape is defined as
(sh_x, sh_y, sc, |S_{-1}|, |S_2|, |S_{-2}|, ..., |S_k|, |S_{-k}|), where
k (≤ M) is the cut-off frequency.
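A sketch of Definition 3.2 on top of the description helper above; the
ordering of the amplitudes follows the definition, and the default cut-off
k = 2 mirrors the value used later in the experiments. The function name is
ours.

```python
import numpy as np

def fingerprint(sh_x, sh_y, sc, S, k=2):
    """Definition 3.2: shift and scale factors plus the amplitudes of the
    descriptors up to the cut-off frequency k; phases are deliberately dropped."""
    order = [-1] + [f for i in range(2, k + 1) for f in (i, -i)]
    return np.array([sh_x, sh_y, sc] + [abs(S[f]) for f in order])
```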
Next we show the completeness of the feature extraction.
3.2 Using fingerprints for indexing
The completeness of the indexing method is based on the following lemma:

Lemma 3.3. The use of a fingerprint, in place of a full shape description,
for shape matching always returns a superset of the answer set.
Proof: For every pair of boundaries S and S' of length 2M + 1 and for every
k ≤ M, we have

Σ_{f=-M, f≠0,1}^{M} |S_f - S'_f|^2  ≥  Σ_{f=-k, f≠0,1}^{k} ||S_f| - |S'_f||^2    (6)

This is due to the fact that for every term ||S_f| - |S'_f||^2 on the right
side of the inequality, there is a term |S_f - S'_f|^2 on the left side, and
|S_f - S'_f| ≥ ||S_f| - |S'_f||.

Thus, storing the fingerprints of shapes in the index does not affect the
correctness, since the index returns a superset of the answer set.
Furthermore, the distance function on the right side of Eq. 6 is invariant to
changes in the starting point of the boundary and rotation.
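The inequality of Eq. 6 can also be checked numerically; the snippet below
(ours, not from the paper) draws random descriptor sets and verifies that the
fingerprint distance never exceeds the full description distance.

```python
import numpy as np

rng = np.random.default_rng(0)
M, k = 15, 2
freqs = [f for f in range(-M, M + 1) if f not in (0, 1)]
S  = {f: complex(rng.normal(), rng.normal()) for f in freqs}
Sp = {f: complex(rng.normal(), rng.normal()) for f in freqs}

full = sum(abs(S[f] - Sp[f]) ** 2 for f in freqs)                         # left side of Eq. 6
fp   = sum((abs(S[f]) - abs(Sp[f])) ** 2 for f in freqs if -k <= f <= k)  # right side of Eq. 6
assert fp <= full    # the index can only return a superset of the answer set
```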
However, the index will not be effective if the choice of k results in a
large number of false hits or high index dimensionality (the curse of
dimensionality). Our experiments in Sect. 4.2 show that the value of k can be
chosen as low as 2, which results in storing 5 Fourier amplitudes in the
index.

There are a large number of multidimensional data structures which can be
used for indexing (see the survey by Gaede and Gunther [GG98] for details).
We use the R*-tree as it is expected to work well for up to 20 dimensions,
and the length of a fingerprint is expected to be less than 20.
3.3 Basic similarity queries
Within this section, we assume that the shapes being compared have the
correct sizes, positions, and orientations. Such a match can also be useful,
for example before insertions, to prevent storing two replicas of the same
image. We consider the three basic similarity queries over a shape database:
(a) proximity query (often also referred to as a range query [AFS93,LJF94]);
(b) all-pairs query; and (c) nearest-neighbours query.
In a proximity query, we are given a query shape and a threshold ε, and we
would like to find all database shapes that are within distance ε of the
query shape. To perform a proximity query, both the shape description and its
fingerprint are computed as described in Sect. 3.1, in the same way as each
data shape has been. The fingerprint is then used as a search key into the
shape index, to retrieve all data shapes that are located in its proximity.
Note that the index retrieves a superset of the answer set since it only
keeps the fingerprints of shape descriptions. The actual result is obtained
in an additional step where the Euclidean distance between the full database
record of every matching shape and the query shape is computed.
In an all-pairs query, we are given two data sets and a threshold ε, and we
want to find all pairs of shapes such that one shape is within distance ε of
the other. To perform an all-pairs query, we do a spatial join between the
corresponding indices of the two data sets. This is followed by an additional
step where the Euclidean distances between the full database records of
matching shapes are computed.
In a nearest-neighbours query, we are given a query shape, and we wish to
find the data shapes which are the closest to the query shape in distance. To
perform a nearest-neighbours query, both the shape description and its
fingerprint are computed (as discussed in Sect. 3.1), and the fingerprint is
used as a search key over the index. Since the index employs the distance
between fingerprints for its pruning, and this distance is an underestimate
of the real distance between descriptions, a nearest neighbour identified
through searching the index may not be the real nearest neighbour. For
example, of two shapes a and b, a could be the closest to the query shape
based on the distance between full descriptions, but the index will return b
if b is the closest based on the distance between fingerprints.
Fig. 3. Example of reconstructions from Fourier descriptors (original
boundary, N = 34; reconstructions using 4, 6, 8, 10, and 12 descriptors)

To fix the problem, we pick the nearest neighbour(s) identified through the
index and compute the distances between the full descriptions of the
retrieved shapes and the query shape. If we denote the minimum distance over
all retrieved shapes with ε, the distance from the real nearest neighbours
cannot be greater than ε; otherwise the shapes identified through the index
are the nearest neighbours. The full algorithm is as follows:
Algorithm 1:
1. Using a nearest-neighbours search algorithm (such as [RKV95]), retrieve
the nearest neighbour(s) from the index.
2. For every candidate returned in Step 1, retrieve its full database record
and compute its real distance from the query shape. Let NN be the set of all
data shapes at the minimum real distance from the query shape; let ε be this
minimum distance.
3. Using ε as an initial threshold, pose an incremental proximity query to
the index (results are returned one at a time and the threshold can be
tightened during the process).
4. Get the next data shape within distance ε of the query shape. If the
distance between the data shape and the query shape is less than ε, then set
NN to be the new data shape and ε to be the new distance; if the distance
between the new data shape and the query shape is ε, then add the new data
shape to NN. Repeat this step until there are no more qualifying data shapes.
Algorithm 1 is a refinement of the nearest-neighbours algorithm given by Korn
et al. [KSF+96]. The refinement is in the form of tightening the proximity
query threshold in Step 4 as more data shapes are retrieved. There is another
incremental refinement of the same algorithm, proposed by Seidl and Kriegel
[SK98], which can also be used.
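The sketch below restates Algorithm 1 in Python. The index interface
(nearest() and an incremental range() search over fingerprints) is assumed,
e.g. a thin wrapper around an R*-tree, and description_distance is the Eq. 5
helper sketched earlier; none of these names come from the paper.

```python
def nearest_neighbours(index, records, query_desc, query_fp):
    """Sketch of Algorithm 1 (multi-step nearest-neighbour search)."""
    # Steps 1-2: approximate neighbours from the index, then their real distances.
    candidates = index.nearest(query_fp)
    dists = {i: description_distance(records[i], query_desc) for i in candidates}
    eps = min(dists.values())
    NN = [i for i, d in dists.items() if d == eps]
    # Steps 3-4: incremental proximity query with a shrinking threshold.
    for i in index.range(query_fp, eps):            # results arrive one at a time
        d = description_distance(records[i], query_desc)
        if d < eps:
            NN, eps = [i], d
        elif d == eps and i not in NN:
            NN.append(i)
    return NN, eps
```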
3.4 Queries with transformations
A natural way of doing shape matching is to remove certain differences before
running a comparison. We can think of this process as applying certain
transformations to images before doing a matching. We consider the following
four kinds of transformations:
1. Shifting and scaling.
2. Rotation.
3. Change of starting point.
4. Smoothing.
In this section, we center our discussion on proximity queries, but the same
techniques are applicable to nearest-neighbours and all-pairs queries.
Transformations 1 to 3 can be supported in a multidimensional index by
providing a function that computes the distance between a data shape and a
query shape; transformations can be applied to shape descriptions inside the
function. Transformation 4 can be supported by registering an additional
function that checks if an index entry overlaps with a query entry. The
transformation can then be applied to either the index entry or the query
entry (or both) before checking for an overlap. Most multidimensional index
structures allow users to define such a function.

The next four subsections respectively discuss the evaluation of queries that
use individual transformations 1 to 4 in their expressions of similarities.
More details on evaluating queries that use a combination of transformations
in their expressions of similarities can be found elsewhere [RM00].
3.4.1 Match with shifting or scaling
In many cases we do not care about the position of a shape within a
coordinate system or about its size for matching purposes.

To match with shifting or scaling, a fingerprint is computed for the query
shape, as described in Sect. 3.1, and this fingerprint is used as a search
key for the index. If we are interested in a match invariant under shifting,
we simply discard the shift factor of the query point and permit any value
for the shift factor. Similarly, for scaling-invariant matching, we discard
the scale factor of the query point and permit any value for the scale
factor.
3.4.2 Match with rotation
We often wish to match shapes irrespective of small variations in
orientation. For example, the two shapes shown in Fig. 1 make a perfect match
if one shape is rotated by 30°. To achieve this, we state in our query the
range of the rotation we wish to perform before doing a shape matching.
Query 2, for instance, retrieves all database shapes that match a given query
shape after one shape is rotated by some angle in [-30°, 30°].

Sometimes, we would like to do matching totally invariant to rotation. For
example, we may not care about the orientation at all if we are doing
airplane recognition. This can be accomplished by simply allowing a rotation
of [-180°, 180°] before matching.
To perform a match with rotation, a fingerprint is computed for the query
shape and is used as a search key to the index. The search key is used to
retrieve all candidates from the index. These candidates include all data
points that match the query point irrespective of the rotation factor. They
also include false positives, i.e., data points that are not in the proximity
of the query point for any rotation factor. To discard false positives, we
need to retrieve the full database record of every candidate and check
whether it actually falls in the proximity (say within distance ε) of the
query shape after being rotated by some θ ∈ [θ_1, θ_2]. On the other hand,
rotating a shape boundary by θ is equivalent to multiplying every descriptor
S_f by e^{jθ}. We can thus rewrite Eq. 5 to make it reflect the rotation.
D^2(S, S') = Σ_{f=-M, f≠0,1}^{M} |S_f - e^{jθ} S'_f|^2    (7)
Lemma 3.4. The minimum and the maximum of Eq. 7 take place at
θ = arctan(-X/Y) + cπ, where c is an integer, X = Σ_f α_f sin φ_f,
Y = Σ_f α_f cos φ_f, and S_f · (S'_f)^* = α_f e^{jφ_f} (where ^* denotes
complex conjugation: the conjugate of z = x + yj is z^* = x - yj).
Since we are interested in the minimum of Eq. 7 when θ ∈ [θ_1, θ_2] and
-π ≤ θ_1, θ_2 ≤ π, the minimum must take place either at an endpoint (i.e.,
θ_1 or θ_2) or at any point
θ ∈ {arctan(-X/Y) - π, arctan(-X/Y) + π, arctan(-X/Y)} which is inside the
region. It is straightforward to compute the distance function for these
values and find out the optimal rotation factor that results in the minimum
distance.
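For completeness, the rotation angle that minimizes Eq. 7 over an interval
can be computed directly: the unconstrained optimum is the phase of
Σ_f S_f · conj(S'_f), which is an equivalent closed form of the critical
points in Lemma 3.4, and the interval endpoints are checked as well. This is
our sketch, assuming the interval lies within [-π, π].

```python
import numpy as np

def best_rotation(S, Sp, theta_range=(-np.pi, np.pi)):
    """Minimize Eq. 7 over the rotation angle.  S, Sp map frequency -> descriptor."""
    freqs = sorted(S)
    a = np.array([S[f] for f in freqs])
    b = np.array([Sp[f] for f in freqs])
    lo, hi = theta_range
    # Interior critical point: the angle of the cross-correlation sum, plus the endpoints.
    candidates = [np.angle(np.sum(a * np.conj(b))), lo, hi]
    best = min((np.sum(np.abs(a - np.exp(1j * t) * b) ** 2), t)
               for t in candidates if lo <= t <= hi)
    return best      # (minimum squared distance, optimal rotation angle)
```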
3.4.3 Match with changing starting point
When we compare two boundaries, we do not care about their starting points.
If we use the same tracing algorithm for every boundary, there cannot be
large variations in the starting point (though small variations are still
possible). However, we may not have much control over the tracing algorithm,
and as a result two similar shapes may have different starting points; or
even if we use the same tracing algorithm for all boundaries, we may want to
remove small variations (if any) before doing a comparison.
Shifting the starting point of a boundary by φ (for example, φ = 2πs_0/N for
a boundary of length N means shifting its starting point by s_0 points in the
ccw direction) is equivalent to multiplying every descriptor S_f by e^{jfφ}.
This operation, similar to rotation, only affects the phases of the Fourier
descriptors. Thus, we can still use the index to retrieve all
candidates. To discard false positives, we need to retrieve the full database
record of every candidate and check whether it still qualifies after the
starting point is shifted by some φ ∈ [φ_1, φ_2]. We can again rewrite Eq. 5
to make it reflect the shift in starting point.
D^2(S, S') = Σ_{f=-M, f≠0,1}^{M} |S_f - e^{jfφ} S'_f|^2    (8)
The optimal value for φ can be obtained by equating the derivative of the
above equation to zero and finding the roots. This can be done using
numerical techniques up to the machine precision [PTVF92].
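As a simple stand-in for the root finding suggested above, the shift can also
be found by densely sampling the interval; the vectorized sketch below is
ours, and the function name and sampling resolution are assumptions.

```python
import numpy as np

def best_start_shift(S, Sp, phi_range, samples=3600):
    """Minimize Eq. 8 over the starting-point shift phi by dense sampling."""
    freqs = np.array(sorted(S))
    a = np.array([S[f] for f in freqs])
    b = np.array([Sp[f] for f in freqs])
    phis = np.linspace(phi_range[0], phi_range[1], samples)
    # Squared distance for every candidate phi at once: sum_f |a_f - e^{j f phi} b_f|^2
    d = np.abs(a[None, :] - np.exp(1j * np.outer(phis, freqs)) * b[None, :]) ** 2
    totals = d.sum(axis=1)
    i = int(np.argmin(totals))
    return totals[i], phis[i]
```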
3.4.4 Match with smoothing
Occasionally, we wish to do matching based on the overall shape, irrespective
of small variations in details and noise. In such cases, we would like to
smooth out sharp edges and small variations before doing the comparison. To
achieve this, we can apply a moving average transformation to shape
boundaries. When an l-point moving average is applied to a boundary, every
point is replaced with the average of its l surrounding points. On the other
hand, applying a moving average to a boundary in the spatial domain
corresponds to a vector multiplication in the frequency domain. For example,
to apply a 2-point moving average to a boundary with 10 points, we can
equivalently multiply its Fourier descriptors by the Fourier transform of the
vector m_2 = (1/2, 1/2, 0, 0, 0, 0, 0, 0, 0, 0). This gives us the Fourier
descriptors of the smoothed boundary.
A distinguishing feature of smoothing, compared to the other transformations
discussed in this paper, is that its effect on a shape depends on the
characteristics of the shape. This is unlike rotation, for instance, where
the effect of rotating a data shape by θ before a comparison is the same as
that of rotating the query shape by -θ.
Given a query shape and a desired moving average for smoothing, the matching
can be performed as follows:
1. Find the Fourier transform of the desired moving average (as demonstrated
for the 2-point moving average); let us denote this by M.
2. Transforming the query shape: Apply the transformation to the query shape
description (sh_x, sh_y, sc, Q) by replacing Q with Q', where Q'_i = Q_i M_i
for i = -1, -2, 2, ..., -k, k.
3. Construct a search key by computing the fingerprint of the new shape
description.
4. Transforming the index: Apply M to data entries stored in the index before
checking for an overlap between a data entry and the search key; this is done
inside the function that checks if a data entry from the index overlaps the
search key.
5. For every candidate, retrieve its full database record, apply M to it, and
check if the resulting shape falls in the proximity of Q'.
The transformation can be applied to the index on the fly as the index is
being traversed. The issue of applying single or multiple transformations to
an index on the fly is studied in the domain of time series data [RM97,RM00].
The same techniques can be applied to the domain of shapes.
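A sketch of the frequency-domain smoothing step, under the assumption that
descriptors are kept in a dict keyed by frequency and that |f| < N/2 for all
stored frequencies; the function name is ours.

```python
import numpy as np

def smooth_descriptors(S, l, N):
    """Multiply each descriptor by the DFT of an l-point moving average of length N."""
    m = np.zeros(N)
    m[:l] = 1.0 / l                         # e.g. m_2 = (1/2, 1/2, 0, ..., 0)
    M_f = np.fft.fftshift(np.fft.fft(m))    # filter spectrum, ordered by frequency
    mid = N // 2                            # index of the f = 0 component
    return {f: S[f] * M_f[mid + f] for f in S}
```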
Fig. 4. Query shapes, shown in the top two rows, and their nearest
neighbours, shown in the bottom two rows (distances of the matches: a D=0,
b D=0.05, c D=0.10, d D=0.15, e D=0.20, f D=0.25, g D=0.30, h D=0.40,
i D=0.10, j D=0.20)
Experiments
To determine the effectiveness of our proposed technique, we implemented our
method and ran experiments on a dataset of 11,000 real hand-written digits.
The data was obtained from the CEDAR CDROM dataset, which was gathered from
scanned ZIP codes at the Buffalo Post Office [Hul94]. For every digit, the
dataset held 1,100 images. Each image was originally in the form of a 16 x 16
gray-scale image which was converted into a binary image (by thresholding)
and was traced to identify a shape boundary. Then, the boundary was encoded
using the 30 lower Fourier descriptors. For boundaries with length less than
30, zero was padded at the end. For each shape, both its description and its
fingerprint are computed, as outlined in Sect. 3.1, and used for the purpose
of indexing. As our index, we used Norbert Beckmann's implementation of the
R*-tree [BKSS90]. For the nearest-neighbours search, we implemented the
algorithm developed by Roussopoulos et al. [RKV95] as part of Algorithm 1
over the R*-tree. We stored 10,000 shapes (1,000 samples of each digit) in
the index and used the 1,000 remaining samples as queries. We ran each query
10 times and averaged the execution times from these runs. All our
experiments were conducted on a 168 MHz Ultrasparc station.
We investigated the following questions:
- How effective and practical is our technique in classifying shapes in a
real data domain?
- How many Fourier coefficients should we store in the index? Storing a
larger number of coefficients reduces the number of false positives but
increases the index dimensionality, and as a result the search time.
- How does our technique compare to sequential scanning?
4.1 Shape classification
To verify the effectiveness of our proposed technique in classifying shapes,
we tried to classify all 1,000 query shapes by assigning every query shape to
the class of its nearest neighbours. When there was more than one nearest
neighbour for a shape, we picked one randomly. The result was interesting:
96.4% of shapes were classified correctly. Some of those query shapes are
shown, in their gray scale and binary representations, in the two top rows of
Fig. 4 along with their nearest neighbours shown in the two bottom rows of
the same figure.
Table 1. Various ranges of rotations and their effects in correctly
classifying the shapes of hand-written digits

Rotation factor   Fraction of query shapes classified correctly (%)
[0°, 0°]          96.4%
[-10°, 10°]       96.5%
[-20°, 20°]       96.4%
[-30°, 30°]       96.4%
[-40°, 40°]       96.3%
[-50°, 50°]       96.3%
As is shown, the query shapes shown in Figs. 4a to 4h are classified
correctly, with their Euclidean distances from their nearest neighbours
varying from 0 to 0.40. The query shape shown in Fig. 4i is not classified
correctly, but its binary representation looks quite similar to that of its
nearest neighbour. The query shape shown in Fig. 4j looks different from its
nearest neighbour, though their boundaries still look similar.

In another experiment, we used Query 2 and tried to identify for each query
shape its nearest neighbour irrespective of a rotation factor in
[-30°, 30°]. This did not change the overall classification rate, i.e., only
96.4% of shapes were classified correctly. However, allowing a rotation
factor in general did retrieve better matches. Figure 5 shows six query
shapes (in the top two rows), their original nearest neighbours (in the
middle two rows) and their optimal nearest neighbours (in the bottom two
rows) when the rotation factor varied from -30° to 30°. As is shown, for
example, rotating the data shape shown at the bottom of Fig. 5a by 11° in the
ccw direction reduces its Euclidean distance from the query shape to 0.30;
this is less than the Euclidean distance between the same query shape and its
original nearest neighbour. Table 1 summarizes the effect of various
rotations in correctly classifying shapes. As is shown, applying a small
rotation (in [-10°, 10°]) to data shapes before matches slightly improves the
classification rate of hand-written digits; larger rotations, on the other
hand, either have no effect or deteriorate the rate of correct
classifications. This is because the digit data is generally sensitive to
orientation, and allowing larger rotations can potentially retrieve more
non-identical digits.
We later picked 1,000 shapes among those stored in the database, applied to
each shape a random rotation in the range [-π, π] and used it as a query
shape. We only specified the rotation interval in our query. As expected, for
each query shape, only the shape itself was retrieved from the database.

Fig. 5. Query shapes (the top two rows), their original nearest neighbours
(the middle two rows) and their optimal nearest neighbours (the bottom two
rows), varying the rotation factor in [-30°, 30°] (a: D=0.34, improved to
D=0.30 at R=11°; b: D=0.25 to D=0.22 at R=-11°; c: D=0.36 to D=0.35 at R=9°;
d: D=0.60 to D=0.57 at R=15°; e: D=0.45 to D=0.35 at R=27°; f: D=0.48 to
D=0.41 at R=-12°)

Fig. 6. The fraction of query shapes classified correctly, varying the number
of Fourier amplitudes used for classification
4.2 Varying the cut-off frequency
The effectiveness of the index mainly depends on the concentration of the key
shape information within a few descriptors of the fingerprints. To measure
this effectiveness, we ran some experiments varying the number of Fourier
descriptors stored in a fingerprint. Figure 6 shows the ratio of query shapes
that are classified correctly (according to the criteria outlined in
Sect. 4.1) to all query shapes, varying the number of Fourier amplitudes used
for classification. As the number of amplitudes increases up to 6, the ratio
of shapes that are classified correctly increases accordingly up to 0.778.
This ratio remains the same despite increasing the number of Fourier
amplitudes from 6 to 10. Compared to a full shape description, which consists
of both the amplitudes and the phases of the 30 lower Fourier coefficients
and classifies 96.4% of the shapes correctly, a fingerprint does a pretty
good job: using only 6 amplitudes, which make up only 10% of a full shape
description, it still classifies 77.8% of the shapes correctly.
Figure 7a shows the average execution time of Algorithm 1 for 1,000
nearest-neighbours queries, broken into: (1) the search time in Step 1 to
identify the initial approximate nearest neighbours; and (2) the search time
in Step 3 to find the real nearest neighbours. Figure 7b shows the fraction
of index nodes accessed, averaged over 1,000 nearest-neighbours queries,
again broken into the fractions accessed in Step 1 and Step 3.
As the number of Fourier amplitudes increases, the index selectivity
improves, i.e., the index gives fewer false hits. The number of false hits,
as is depicted in Fig. 8 for a proximity query, mainly depends on the number
of Fourier amplitudes used in fingerprints and the output size of the query.
Due to the high similarity between different shapes of the same digit, a
large fraction of our false hits (for example, 62% when the output size was
11 and the number of Fourier amplitudes was 6) were other shapes of the same
digit depicted by the query shape which were not within the specified
distance of the query shape.
The reduction in false hits reduces the search time since less time is needed
to remove those false hits. However, increasing the number of Fourier
amplitudes after some point, often called the cut-off frequency, either does
not reduce the number of false hits or reduces it only slightly. This is
because higher frequency amplitudes carry less of the energy than lower
frequency ones. On the other hand, the search time increases with the index
dimensionality, because the tree becomes deeper. Furthermore, the pruning
becomes harder, as is shown in Fig. 7 with the ratio of index nodes that are
accessed, because the probability of an arbitrary data bounding rectangle
being close to the query point increases with the dimensionality.

Given the trade-off between the tree search time and the time spent removing
false hits, it is natural to expect that there is an optimal cut-off
frequency. Based on our experiments, as illustrated in Figs. 6 and 7, the
optimal cut-off frequency occurs for as few as 6 Fourier amplitudes.
4.3 Comparison to sequential scanning
Figure 9 shows the average execution time of our proposed method compared to
sequential scanning for 1,000 nearest-neighbours queries. To get its best
performance, we used buffered input for sequential scanning, in a system with
a buffer size of 8,192 bytes. For the experiment shown in Fig. 9a, the border
length was fixed to 30 while the database size varied from 10,000 to 30,000
shapes. Since the size of the dataset was limited, we doubled or tripled the
size by adding one or two randomly rotated copies of each shape to the
database. This doubling did not affect the performance of sequential
scanning, which was linear in the input size, but we expected the doubling to
deteriorate the performance of our method since high similarity among
database shapes would increase the number of false hits. For the experiment
shown in Fig. 9b, the number of shapes was fixed at 10,000 while the number
of Fourier descriptors used to represent a boundary varied from 20 to 50. As
shown in the figure, increasing either the number of shapes or the border
length increases the relative advantage of our method, making it more
attractive for large databases.

Fig. 7. Break up of a) the execution time and b) the fraction of index nodes
accessed, for nearest-neighbours queries, varying the number of Fourier
amplitudes

Fig. 8. The average number of false positives for every qualifying shape,
a) varying the number of Fourier amplitudes and fixing the average output
size of the query to 11, and b) varying the average output size and fixing
the number of Fourier amplitudes to 6

Fig. 9. a) Time per query varying the number of shapes, for
nearest-neighbours queries. b) Time per query varying the border length, for
nearest-neighbours queries
Conclusions
We have proposed an indexing technique that can efficiently retrieve images
of objects based on similarity between their boundary shapes. We have used
Fourier descriptors as our shape features and have developed an index
organization such that similar shapes can be easily retrieved irrespective of
their differences in size, position and orientation. The highlight of our
contribution is an index structure that helps find optimal matches between
shapes irrespective of various differences between them. Our technique has
the following desirable properties:
- It uses a shape matching mechanism which is well-studied in the area of
pattern recognition.
- It exploits the fact that important features of a large class of shapes
are concentrated within only a few Fourier descriptors.
- It can handle shapes of various sizes.
- It guarantees efficient retrieval of all qualifying shapes.
Furthermore, we have presented a refinement of an earlier nearest-neighbours
search algorithm for feature vectors that are truncated, due to the
significance of some features over others, before being stored in an R-tree
index.
Acknowledgement. We thank the Natural Sciences and Engineering Research
Council of Canada and the Institute for Robotics and Intelligent Systems for
their support of this work.
References
[AFS93]  Agrawal R, Faloutsos C, Swami A (1993) Efficient similarity search in sequence databases. In: Proc. 4th International Conference on Foundations of Data Organization and Algorithms (FODO '93), pp 69-84, Chicago
[BG80]   Bribiesca E, Guzman A (1980) How to describe pure form and how to measure differences in shape using shape numbers. Pattern Recognition 12(2):101-112
[BKK97]  Berchtold S, Keim DA, Kriegel HP (1997) Using extended feature objects for partial similarity retrieval. VLDB J 6(4):333-348
[BKSS90] Beckmann N, Kriegel HP, Schneider R, Seeger B (1990) The R*-tree: an efficient and robust index method for points and rectangles. In: Proc. ACM SIGMOD International Conference on Management of Data, pp 322-331, Atlantic City
[Bri81]  Bribiesca E (1981) Arithmetic operations among shapes using shape numbers. Pattern Recognition 13(2):123-138
[BSA91]  Belkasim SO, Shridhar M, Ahmadi M (1991) Pattern recognition with invariants: a comprehensive study and new results. Pattern Recognition 24:1117-1138
[FBF+94] Faloutsos C, Barber R, Flickner M, Niblack W, Petkovic D, Equitz W (1994) Efficient and effective querying by image content. J Intell Inf Syst 3(3/4):231-262
[GG98]   Gaede V, Gunther O (1998) Multidimensional access methods. ACM Comput Surv 30(2):170-231
[GK95]   Goldin DQ, Kanellakis PC (1995) On similarity queries for time-series data: constraint specification and implementation. In: 1st Int. Conference on the Principles and Practice of Constraint Programming, Lecture Notes in Computer Science, vol. 976. Springer, Berlin Heidelberg New York, pp 137-153
[Gol97]  Goldin DQ (1997) Constraint query algebras. PhD thesis, Brown University, www.cs.brown.edu/people/dgk/Papers/thesis.ps
[GW92]   Gonzalez RC, Woods RE (1992) Digital image processing. Addison-Wesley, Reading, Mass., USA
[Hul94]  Hull JJ (1994) A database for handwritten text recognition research. IEEE Trans Pattern Anal Mach Intell 16(5):550-554
[Jag91]  Jagadish HV (1991) A retrieval technique for similar shapes. In: Proc. ACM SIGMOD International Conference on Management of Data, pp 208-217, Denver, Colo., USA
[JMM95]  Jagadish HV, Mendelzon AO, Milo T (1995) Similarity-based queries. In: Proc. 14th ACM SIGACT-SIGMOD-SIGART Symposium on Principles of Database Systems, pp 36-45, San Jose, Calif., USA
[KSF+96] Korn F, Sidiropoulos N, Faloutsos C, Siegel E, Protopapas Z (1996) Fast nearest neighbor search in medical image databases. In: Proc. 22nd International Conference on Very Large Data Bases, pp 215-226, Mumbai, India
[KSP95]  Kauppinen H, Seppanen T, Pietikainen M (1995) An experimental comparison of autoregressive and Fourier-based descriptors in 2D shape classification. IEEE Trans Pattern Anal Mach Intell 17(2):201-207
[LJF94]  Lin KI, Jagadish HV, Faloutsos C (1994) The TV-tree - an index structure for high-dimensional data. VLDB J 3(4):517-542
[LOSO97] Li JZ, Ozsu MT, Szafron D, Oria V (1997) MOQL: a multimedia object query language. In: Proc. 3rd International Workshop on Multimedia Information Systems, pp 19-28
[MG93]   Mehrotra R, Gary JE (1993) Feature-based retrieval of similar shapes. In: Proc. 9th International Conference on Data Engineering, pp 108-115, Vienna, Austria
[MM86]   Mokhtarian F, Mackworth A (1986) A scale-based description and recognition of planar curves and two dimensional shapes. IEEE Trans Pattern Anal Mach Intell 8(1):34-43
[PF77]   Persoon E, Fu KS (1977) Shape discrimination using Fourier descriptors. IEEE Trans Syst Man Cybern 7(2):170-179
[PTVF92] Press WH, Teukolsky SA, Vetterling WT, Flannery BP (1992) Numerical recipes in C. Cambridge University, Cambridge, UK
[Raf98]  Rafiei D (1998) Fourier-transform based techniques in efficient retrieval of similar time sequences. PhD thesis, University of Toronto
[RH74]   Richard CW, Hemami H (1974) Identification of three-dimensional objects using Fourier descriptors of the boundary curve. IEEE Trans Syst Man Cybern 4:371-378
[RKV95]  Roussopoulos N, Kelley S, Vincent F (1995) Nearest neighbor queries. In: Proc. ACM SIGMOD International Conference on Management of Data, pp 71-79, San Jose, Calif., USA
[RM97]   Rafiei D, Mendelzon AO (1997) Similarity-based queries for time series data. In: Proc. ACM SIGMOD International Conference on Management of Data, pp 13-24, Tucson, Ariz., USA
[RM00]   Rafiei D, Mendelzon AO (2000) Querying time series data based on similarity. IEEE Trans Knowl Data Eng 12(5):675-693
[SK98]   Seidl T, Kriegel HP (1998) Optimal multi-step nearest neighbour search. In: Proc. ACM SIGMOD International Conference on Management of Data, pp 154-165, Seattle, Wash., USA
[WW80]   Wallace TP, Wintz PA (1980) An efficient three-dimensional aircraft recognition algorithm using normalized Fourier descriptors. Comput Graph Image Process 13:99-126
[ZR72]   Zahn CT, Roskies RZ (1972) Fourier descriptors for plane closed curves. IEEE Trans Comput 21(3):269-281 | fingerprint;Shape retrieval Similarity retrieval Fourier descriptors;non textual objects;efficiency;database;handwriting recognition;Fourier descriptors;Image databases;search;queries;shape classification;indexing techniques;Similarity queries
8 | A Flexible 3D Slicer for Voxelization Using Graphics Hardware | In this paper we present a simple but general 3D slicer for voxelizing polygonal models. Instead of voxelizing a model by projecting and rasterizing triangles with clipping planes, the distance field is used for more accurate and stable voxelization. A distance transform is applied with the triangles on the voxels of each slice. A voxel is marked with opacity only when the shortest distance between it and the triangles is judged as an intersection. With advanced programmable graphics hardware assistance, surface and solid voxelization are feasible and more efficient than on a CPU. | Introduction
Object representation is a broad topic in research. In computer
graphics, polygons play a dominant role in 3D graphics because
they approximate arbitrary surfaces by meshes. In games and animations
, surface representation is the main technique used in rendering
and visualization. However, volumetric representation, an alternative method
to traditional geometric representation, has been well known since the 1980s.
It provides a simple and uniform description to measure and model volumetric
objects and has established the research field of volumetric graphics.
Voxelization is the process of constructing a volumetric representation of an
object. Voxelizing a polygonal object is not only a shift of representation;
it also gives an opportunity to manipulate mesh objects with volumetric
operations such as morphological operations and solid modeling. Many
applications, for example, CSG modeling, virtual medicine, haptic rendering,
visualization of geometric models, collision detection, 3D spatial analysis,
and model fixing, work on volumetric representations or use them as an
intermediate medium.
In this paper, we calculate an accurate distance field on the GPU to compute
the coverage of each voxel in the voxelization of polygonal models. Our
method works for arbitrary triangulated models without any preprocessing of
the models, except organizing meshes slab by slab in order to prune
unnecessary computation while voxelizing complex models. By using the power
of the GPU, the Hausdorff distance is guaranteed between each sampled voxel
and the polygonal model. Surface voxelization with a distance field on a GPU
works well and runs faster than on a PC with a CPU. Our method is a reliable
and accurate solution for a polygonal model under a given distribution of
sampling voxels and a specific opacity criterion. In addition, the error
tolerance in voxelization is easy to manipulate by adjusting the threshold of
the opacity criterion, which also dominates the smoothness of features and
the thickness of the surface voxelization.
The rest of the paper is organized as follows. Some related works are
surveyed in Section 2. In Section 3, we present the computation of the
Hausdorff distance and our framework. The experimental results are
illustrated in Section 4. Finally, we conclude the proposed approach and
point out some future work.
Related Works
Volume construction approaches are often referred to as scan-conversion
or voxelization methods. Researchers mainly focused
on modeling aspects such as robustness and accuracy. Wang and
Kaufman [Wang and Kaufman 1993] used a method that samples
and filters the voxels in 3D space to produce alias-free 3D volume
models. They used filters to produce final density from the support
of the region that polygons lie inside. Schroeder and Lorensen [Schroeder et
al. 1994] created a volumetric model by finding the closest polygon from a
distance map and classifying the opacity of voxels. Huang et al. [Huang et
al. 1998] described separability and minimality as two desirable features of
a discrete surface representation and extended the 2D scan-line algorithm to
perform voxelization. Dachille and Kaufman [Dachille IX and Kaufman 2000]
presented an incremental method for voxelizing a triangle with pre-filtering
to generate a multivalued voxelization. Widjaya et al. [Widjaya et al. 2003]
presented voxelization in common sampling lattices, such as general 2D
lattices including hexagonal lattices and 3D body-centered cubic lattices. Ju
[Ju 2004] constructed an octree grid for recording edges intersected with the
model and expanded nodes to scan-convert a polygon on an octree, and then
generated signs from the boundary edges and faces of the octree.
In recent years, attention to the performance of voxelization has risen. More
and more studies explore the benefits of graphics hardware for more efficient
rendering. Chen and Fang [Chen and Fang 1999] presented a slicing-based
voxelization algorithm that generates slices of the underlying model in the
frame buffer by setting appropriate clipping planes and extracting each slice
of the model, which was later extended and published in [Chen and Fang 2000].
Karabassi and Theoharis [Karabassi and Theoharis 1999] projected an object to
the six faces of its bounding box through the standard graphics system for
the outermost parts and read back the information from the depth buffer.
However, it works well only on convex objects. Dong et al. [Dong et al. 2004]
proposed a real-time voxelization method using GPU acceleration to rasterize
and texelize an object into three directional textures and then synthesize
the textures back into the final volume.
Hausdorff Distance Computation and Voxelization
In this section, we first discuss the computation of the Hausdorff distance
between a given triangle and a point. Then we explain how we use the GPU to
compute the distance field of triangles and how we modify the rendering
pipeline.
3.1 Distance Field Computation
For a given 3D point P(x, y, z) and a triangle T(V_0, V_1, V_2), the
Hausdorff distance is the shortest distance between the point P and any point
v on the triangle. A point on triangle T can be parametrically defined by two
linearly independent vectors with two weights (s, t) by

T(s,t) = B + s e_0 + t e_1,    (1)

where (s,t) ∈ D = {(s,t) : s ∈ [0,1], t ∈ [0,1], s + t ≤ 1}, and B = V_0,
e_0 = V_1 - V_0 and e_1 = V_2 - V_0.
For any point on triangle T, the distance from T to P is

‖T(s, t) − P‖,    (2)

or we can use the squared-distance function instead,

Q(s, t) = ‖T(s, t) − P‖²,    (3)

where a point p = (s̄, t̄) exists which makes Q(s̄, t̄) minimal. Therefore, the
computation of the distance can be reduced to a minimization problem. For an
efficient computation, we can expand Q(s, t) as

Q(s, t) = a·s² + 2b·s·t + c·t² + 2d·s + 2e·t + f,    (4)
where

a = e_0 · e_0,  b = e_0 · e_1,  c = e_1 · e_1,
d = e_0 · (B − P),  e = e_1 · (B − P),  f = (B − P) · (B − P).    (5)
From analyzing the gradient of Q(s, t), the minimum (s̄, t̄) occurs only where
the gradient of Q vanishes, namely

s̄ = (b·e − c·d) / (a·c − b²),  t̄ = (b·d − a·e) / (a·c − b²).    (6)
If (s̄, t̄) ∈ D, the minimum distance is the distance between p and P;
otherwise, according to the signs of s̄ and t̄, there are six possible regions
of triangle T in which the shortest-distance point p may lie, as shown in
Figure 1. Efficient CPU solutions based on simple calculation and logical
classification are well addressed in the book [Schneider and Eberly 2003].
However, on the GPU there is no efficient dynamic flow control to determine
the shortest point on a triangle. Therefore, instead of directly computing
the point of shortest distance on a triangle, we compute the distance from
the 3D point to four candidate points, which may be inside the triangle or
on the three boundaries, and the minimum of these distances is the shortest
distance. These four points are

(s_0, t_0) = ((b·e − c·d) / (a·c − b²), (b·d − a·e) / (a·c − b²)),
(s_1, t_1) = (0, −e/c),
(s_2, t_2) = (−d/a, 0),
(s_3, t_3) = ((c + e − b − d) / (a − 2b + c), (a + d − b − e) / (a − 2b + c)),    (7)
Figure 1: Six regions in the (s, t) parameter plane. The space is partitioned by
the ranges of the parameters s and t for efficient classification and calculation
of the shortest position.
Figure 2: Rendering pipeline for generating the distance field of a triangle.
A quad is rendered instead of the triangle. Six channels of each vertex of the
quad (position, normal, and four texture coordinates) are filled with the
position of the quad, the position of the voxel, and the triangle data: v_0,
e_0, e_1, and the normal N, respectively.
where position (s_0, t_0) assumes point p is inside the triangle, and positions
(s_1, t_1), (s_2, t_2), and (s_3, t_3) assume point p is on the boundaries
s = 0, t = 0, and s + t = 1, respectively. All calculated parameters are clamped
to the range [0, 1], so that the three end vertices of triangle T are also taken
into consideration and the candidate points are guaranteed to lie on the triangle.
Therefore, the minimum of the four distances is the shortest distance from the
point P to the triangle T.
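To make the candidate-point evaluation concrete, the following CPU-side Python sketch (our own illustration, not the authors' HLSL fragment program; the function name and the NumPy dependency are assumptions) evaluates the four (s, t) pairs of Equation (7), clamps them onto the triangle, and returns the minimum distance of Equations (2)-(3).

import numpy as np

def point_triangle_distance(P, V0, V1, V2):
    """Shortest distance from point P to triangle (V0, V1, V2),
    following Equations (4)-(7): evaluate four candidate (s, t)
    pairs, clamp them onto the triangle, and keep the minimum."""
    B, e0, e1 = V0, V1 - V0, V2 - V0
    a, b, c = e0 @ e0, e0 @ e1, e1 @ e1
    d, e = e0 @ (B - P), e1 @ (B - P)
    eps = 1e-12                              # guards against degenerate triangles
    det = a * c - b * b

    candidates = [
        ((b * e - c * d) / (det + eps), (b * d - a * e) / (det + eps)),  # interior minimum
        (0.0, -e / (c + eps)),                                           # edge s = 0
        (-d / (a + eps), 0.0),                                           # edge t = 0
        ((c + e - b - d) / (a - 2 * b + c + eps),
         (a + d - b - e) / (a - 2 * b + c + eps)),                       # edge s + t = 1
    ]

    best = np.inf
    for s, t in candidates:
        # Clamp the parameters so the candidate point stays on the triangle.
        s = min(max(s, 0.0), 1.0)
        t = min(max(t, 0.0), 1.0)
        if s + t > 1.0:                      # push back onto the diagonal edge
            s, t = s / (s + t), t / (s + t)
        best = min(best, np.linalg.norm(B + s * e0 + t * e1 - P))
    return best

# Example: distance from a point one unit above a unit right triangle.
print(point_triangle_distance(np.array([0.5, 0.5, 1.0]),
                              np.array([0.0, 0.0, 0.0]),
                              np.array([1.0, 0.0, 0.0]),
                              np.array([0.0, 1.0, 0.0])))   # ≈ 1.0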
3.2 Geometry Rendering for Voxelization
Voxelization by projection and rasterization faces the difficulty of non-uniform
sampling of polygons, because polygons with arbitrary orientations are not parallel
to the projection plane that would maximize their projected area. Even when polygons
are classified and projected onto individual best-fit planes, there is still no
guarantee of valid rasterization. A distance field, however, is omni-directional,
i.e., insensitive to the projection plane, and makes no assumption about the input
geometry, so no extra preprocessing is required.
Our approach is a slice-based approach for distance field generation and voxelization.
Figure 2 shows the rendering process for generating the distance field of a triangle.
For each triangle T_i = {T_i(s, t) = v_0 + s·e_0 + t·e_1 | s ≥ 0, t ≥ 0, s + t ≤ 1},
a full-filled quad Q_i = {q_i0, q_i1, q_i2, q_i3} is rendered and rasterized to generate
the distance field from the voxels on a slice to the triangle. Triangle data and voxel
positions are associated with the rendered quads: voxel positions are stored in the
vertex normal channel, and the triangle data (base vertex B, vectors e_0 and e_1, and
the normal N) are stored separately in the texture coordinate channels and transmitted
to the GPU. Voxel positions are linearly interpolated during rasterization in the
rendering pipeline, and pairs of triangle data and voxel positions are sent to the
pixel processors for Hausdorff distance computation. After the distance computation,
the shortest distance between a triangle and a
voxel is stored in the pixel depth, and the pixel color is assigned for identification
depending on the application. For example, binary surface voxelization uses the color
information to identify whether a voxel intersects the geometry, e.g., 0 for empty and
1 for opaque; distance visualization uses the color information to display the distance
from the geometry, etc.
The distance field of a polygonal object is constructed incrementally by rendering the
quads of its triangles. Each pixel of the depth buffer keeps the shortest distance from
its voxel to the triangles rendered so far, and the depth buffer is updated whenever a
triangle is rendered. Unless the distance is recalculated on a different slice or the
rendered objects deform, quads that have been rendered need not be re-rendered, even
when new geometry is added to the scene. The depth buffer of the viewport is initialized
to infinity.
The rendering pseudo code is abstracted as follows:

for each triangle t on slice i {
    Create a quad Q for the triangle t
    for k = 0 to 3 {
        // assign a full-filled quad
        // q is the set of end vertices of the quad
        Q.q[k].position = ScreenBoundary.q[k];
        // assign voxel position and triangle data
        Q.q[k].normal = Slice[i].q[k];
        Q.q[k].tex0 = t.B;
        Q.q[k].tex1 = t.e0;
        Q.q[k].tex2 = t.e1;
        Q.q[k].tex3 = t.N;   // triangle normal, the fourth channel in Figure 2
    }
    RenderQuad(Q);
}
3.3 Surface Voxelization
We use a local distance field to reduce the workload of the GPU, because the distance
field far away from a triangle is meaningless for surface voxelization. For each
triangle, we extend its parallel-projected bounding rectangle by a small scalar for
effective rasterization, especially for triangles perpendicular to the projection
plane. For coherence and precision in interpolating voxel positions, triangles are
rendered with these extended bounding rectangles. When a pixel is rasterized by a quad,
the Hausdorff distance is calculated from the interpolated voxel position, i.e., the
voxel center, and the triangle data. Only if the distance is less than a given
threshold, e.g., the distance from a uniform voxel center to its corner, is the pixel
marked as opaque. Using a local distance field guarantees the distance field only in a
small region but greatly improves the performance of surface voxelization.
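As a concrete illustration of the opacity test (our own sketch; the voxel-size parameter and function names are assumptions), the threshold used above is the half-diagonal of a voxel, and a voxel is marked opaque when its center lies within that distance of some triangle.

import math

def opacity_threshold(voxel_size: float) -> float:
    """Distance from a voxel center to one of its corners:
    half of the voxel's space diagonal."""
    return 0.5 * math.sqrt(3.0) * voxel_size

def is_opaque(shortest_distance: float, voxel_size: float) -> bool:
    """A voxel is marked opaque if the shortest distance from its
    center to the surface is below the threshold."""
    return shortest_distance < opacity_threshold(voxel_size)

# Example: a unit-sized voxel whose center is 0.6 units from the surface.
print(opacity_threshold(1.0))      # ~0.866
print(is_opaque(0.6, 1.0))         # True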
For a more efficient voxelization process on the GPU, triangles can be culled early by
space partitioning. We construct an active triangle list for each slice; currently we
define slabs along the Z-axis. According to the partitioning planes, triangles are
filtered and rearranged slab by slab, so many triangles can be pruned while rendering a
slice. This is significantly helpful when voxelizing very complex models. Because the
distance field is insensitive to the projection directions of triangles, the selection
of the partitioning plane has no influence on the effectiveness of the voxelization.
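The slab-based culling can be prototyped as follows (a minimal CPU-side sketch under our own assumptions about the data layout: triangles are given as lists of three 3D vertices, and slabs are uniform along Z; the function name and margin parameter are ours).

from collections import defaultdict

def build_active_triangle_lists(triangles, z_min, z_max, num_slabs, margin=0.0):
    """Assign each triangle to every Z-slab its Z-extent overlaps,
    optionally enlarged by a margin (e.g., the opacity threshold),
    so that rendering a slice only touches triangles in its slab."""
    slab_height = (z_max - z_min) / num_slabs
    active = defaultdict(list)                      # slab index -> triangle indices
    for idx, tri in enumerate(triangles):
        zs = [v[2] for v in tri]
        lo = max(0, int((min(zs) - margin - z_min) / slab_height))
        hi = min(num_slabs - 1, int((max(zs) + margin - z_min) / slab_height))
        for slab in range(lo, hi + 1):
            active[slab].append(idx)
    return active

# Example: two triangles, four slabs over z in [0, 4].
tris = [[(0, 0, 0.2), (1, 0, 0.3), (0, 1, 0.1)],
        [(0, 0, 2.5), (1, 0, 3.6), (0, 1, 2.9)]]
print(dict(build_active_triangle_lists(tris, 0.0, 4.0, 4)))
# -> {0: [0], 2: [1], 3: [1]}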
Experimental Results
We implemented our fragment program using HLSL on a Pentium 4 3.0 GHz PC with 1 GB RAM
and an nVidia GeForce FX5800 graphics card running Windows XP with DirectX 9.0c. We use
Vertex Shader 1.1 and Pixel Shader 2.0 to implement the fragment program for scattering
pairs of voxel positions and triangle data and for distance calculation and
visualization. Table 1 shows the performance
Figure 4: Rendering from the results of voxelization (512³): dragon of 870K faces in 512³ voxels.
of surface voxelization on different models and at different voxel resolutions. Figure 3
and Figure 4 demonstrate the quality of the voxelization results. In the experiment, the
opacity threshold is set to the distance from a voxel center to its corner; that is, if
the shortest distance between a voxel and a triangle is less than the threshold, the
voxel is marked as opaque. Note that voxels are normalized to cubes in rendering, so the
scale of the output may differ from that of the original polygonal model.
In Table 1, we list the average surface voxelization time per slice, per voxel, and per
triangle. At the same resolution, the voxelization time is proportional to the
complexity of the polygonal model. The processing time per voxel is always less than
0.1 ms. Even when the voxel resolution increases, the GPU can still handle voxelization
of complex objects at a stable throughput, whereas the cost would grow much more on a
CPU. Thanks to the speedup from the local distance field and the culling of unrelated
geometry, voxelization by distance field can be displayed slice by slice interactively
at a 128³ voxel resolution. When the voxel resolution is low, the voxelization time
depends strongly on the complexity of the model; when the voxel resolution increases,
even a low-complexity model needs more time to voxelize than at a lower resolution. On
average, a resolution of 256³ provides reliable voxelization in terms of both quality
and time cost.
The rendering cost per triangle remains stable even when the volume resolution
increases, whereas on a CPU it grows linearly. Voxelization with the proposed method is
still slower than methods using traditional projection and rasterization on graphics
hardware. However, our method is stable, correct, and flexible because the opacity of
each voxel is determined by thresholding the distance field of the objects.
Conclusion
In this paper, we propose a GPU-based 3D slicer approach for voxelizing polygonal
models. We calculate the minimum distance between pairs of sampled voxels and triangles
of arbitrary models with a guarantee on the Hausdorff distance. With programmable
vertex/pixel processors, efficient surface voxelization, solid voxelization, and
visualization of the distance field are all feasible on the proposed 3D slicer.
However, in the current implementation, the performance of the pixel shader is the
bottleneck of the overall processing speed. The area of rasterization also has a
significant influence on the load of the pixel shader.
Model      Faces     Res.  Time (s)  Time/Slice (s)  Time/Voxel (µs)  Time/Tri. (ms)
Beethoven  5027      128   13.94     0.11            6.65             2.77
Beethoven  5027      256   85.86     0.34            5.12             17.08
Teapot     6320      128   14.31     0.11            6.82             2.26
Teapot     6320      256   87.62     0.34            5.22             13.86
Cup        7494      128   15.24     0.12            7.27             2.03
Cup        7494      256   93.65     0.37            5.58             12.50
Bunny      10000     128   16.24     0.13            7.75             1.62
Bunny      10000     256   93.98     0.37            5.60             9.40
Bunny      69451     128   43.21     0.34            20.60            0.62
Bunny      69451     256   231.85    0.91            13.82            3.34
Dragon     871414    128   84.21     0.66            40.15            0.10
Dragon     871414    256   325.87    1.27            19.42            0.37
Dragon     871414    512   1748.11   3.41            13.02            2.01
Buddha     1087716   128   170.44    1.33            81.27            0.16
Buddha     1087716   256   347.65    1.36            20.72            0.32
Buddha     1087716   512   1825.47   3.57            13.60            1.68

Table 1: Surface voxelization on different models and in different voxel resolutions (Time/Slice in seconds, Time/Voxel in µs, Time/Tri. in ms).
Figure 3: Rendering from the results of voxelization (256³): (a) Beethoven in 256³ voxels, (b) Teapot in 256³ voxels, (c) Cup in 256³ voxels, (d) Bunny of 10000 faces in 256³ voxels, (e) dragon of 10000 faces in 256³ voxels, (f) Bunny of 69451 faces in 256³ voxels, (g) dragon of 870K faces in 256³ voxels, and (h) Buddha of 1M faces in 256³ voxels.
Therefore, in the near future, searching for a better computational methodology for the
GPU is one direction for improving the performance of distance field computation. In
addition, sophisticated culling for error-free distance computation will be a technique
in demand. To improve the quality of voxelization, adaptive dense voxelization and a
mechanism for quality measurement and guidance on the GPU are other interesting topics.
References
CHEN, H., AND FANG, S. 1999. Fast voxelization of 3D synthetic objects. ACM Journal of Graphics Tools 3, 4, 33–45.
CHEN, H., AND FANG, S. 2000. Hardware accelerated voxelization. Computers and Graphics 24, 3, 433–442.
DACHILLE IX, F., AND KAUFMAN, A. E. 2000. Incremental triangle voxelization. In Graphics Interface, 205–212.
DONG, Z., CHEN, W., BAO, H., ZHANG, H., AND PENG, Q. 2004. Real-time voxelization for complex polygonal models. In Proceedings of Pacific Graphics '04, 43–50.
HUANG, J., YAGEL, R., FILIPPOV, V., AND KURZION, Y. 1998. An accurate method for voxelizing polygon meshes. In Proceedings of IEEE Symposium on Volume Visualization, 119–126.
JU, T. 2004. Robust repair of polygonal models. ACM Transactions on Graphics 23, 3, 888–895.
KARABASSI, E.-A., PAPAIOANNOU, G., AND THEOHARIS, T. 1999. A fast depth-buffer-based voxelization algorithm. ACM Journal of Graphics Tools 4, 4, 5–10.
SCHNEIDER, P., AND EBERLY, D. H. 2003. Geometric Tools for Computer Graphics. Morgan Kaufmann.
SCHROEDER, W. J., LORENSEN, W. E., AND LINTHICUM, S. 1994. Implicit modeling of swept surfaces and volumes. In Proceedings of IEEE Visualization, 40–45.
WANG, S. W., AND KAUFMAN, A. E. 1993. Volume sampled voxelization of geometric primitives. In Proceedings of IEEE Visualization, 78–84.
WIDJAYA, H., MUELLER, T., AND ENTEZARI, A. 2003. Voxelization in common sampling lattices. In Proceedings of Pacific Graphics '03, 497–501.
288 | Graphics hardware;Hausdorff distance;Voxelization;Distance field;voxelization;local distance field;Object representation;rasterization;Polygonal object;GPU-based 3D slicer approach;GPU;3D slicer;slice-based approach;Rendering;rendering;adaptive dense voxelization;Volumetric representation;pixel shader;opacity;Surface voxelization;polygonal model;Surface representation;Rendering cost;GPU computation;Hausforff distance;object representation;polygonal objects;volumetric representation;triangles;Rendering pipeline;distance transform;volume construction;Modeling;Computational Geometry;geometric representation;hausdorff distance;distance field;Computer Graphics;Polygonal model;3D modelling;infinitude |
80 | ρ-Queries: Enabling Querying for Semantic Associations on the Semantic Web | This paper presents the notion of Semantic Associations as complex relationships between resource entities. These relationships capture both a connectivity of entities as well as similarity of entities based on a specific notion of similarity called -isomorphism. It formalizes these notions for the RDF data model, by introducing a notion of a Property Sequence as a type. In the context of a graph model such as that for RDF, Semantic Associations amount to specific certain graph signatures. Specifically, they refer to sequences (i.e. directed paths) here called Property Sequences, between entities, networks of Property Sequences (i.e. undirected paths), or subgraphs of ρ-isomorphic Property Sequences. The ability to query about the existence of such relationships is fundamental to tasks in analytical domains such as national security and business intelligence, where tasks often focus on finding complex yet meaningful and obscured relationships between entities. However, support for such queries is lacking in contemporary query systems, including those for RDF. This paper discusses how querying for Semantic Associations might be enabled on the Semantic Web, through the use of an operator ρ. It also discusses two approaches for processing ρ-queries on available persistent RDF stores and memory resident RDF data graphs, thereby building on current RDF query languages. | INTRODUCTION
The Semantic Web [13] proposes to explicate the meaning of
Web resources by annotating them with metadata that have been
described in an ontology. This will enable machines to
"understand" the meaning of resources on the Web, thereby
unleashing the potential for software agents to perform tasks on
behalf of humans. Consequently, significant effort in the
Semantic Web research community is devoted to the development
of machine processible ontology representation formalisms.
Some success has been realized in this area in the form of W3C
standards such as the eXtensible Markup Language (XML) [16]
which is a standard for data representation and exchange on the
Web, and the Resource Description Framework (RDF) [42], along
with its companion specification, RDF Schema (RDFS) [17],
which together provide a uniform format for the description and
exchange of the semantics of web content. Other noteworthy
efforts include OWL [25], Topic Maps [53], DAML+OIL [31].
There are also related efforts in both the academic and
commercial communities, which are making available tools for
semi-automatic [30] and automatic [49][29] semantic (ontology-driven
and/or domain-specific) metadata extraction and
annotation.
With the progress towards realizing the Semantic Web, the
development of semantic query capabilities has become a
pertinent research problem. Semantic querying techniques will
exploit the semantics of web content to provide superior results
than present-day techniques which rely mostly on lexical (e.g.
search engines) and structural properties (e.g. XQuery [24]) of a
document. There are now a number of proposals for querying
RDF data including RQL [40], SquishQL [45], TRIPLE [49],
RDQL [48]. These languages offer most of the essential features
for semantic querying such as the ability to query using
ontological concepts, inferencing as part of query answering, and
some allow the ability to specify incomplete queries through the
use of path expressions. One key advantage of this last feature is
that users do not need to have in-depth knowledge of schema and
are not required to specify the exact paths that qualify the desired
resource entities. However, even with such expressive
capabilities, many of these languages do not adequately support a
query paradigm that enables the discovery of complex
relationships between resources. The pervasive querying
paradigm offered by these languages is one in which queries are
of the form: "Get all entities that are related to resource A via a
relationship R" where R is typically specified as possibly a join
condition or path expression, etc. In this approach, a query is a
specification of which entities (i.e. resources) should be returned
in the result. Sometimes the specification describes a relationship
that the qualifying entities should have with other entities, e.g. a
join expression or a path expression indicating a structural
relationship. However, the requirement that such a relationship be
specified as part of the query is prohibitive in domains with
analytical or investigative tasks such as national/homeland
security [11] and business intelligence, where the focus is on
trying to uncover obscured relationships or associations between
entities and very limited information about the existence and
nature of any such relationship is known to the user. In fact, in this scenario the
relationship between entities is the subject of the user's query and should be returned
as the result of the query, as opposed to being specified as part of it. That is,
queries would be of the form "How is Resource A related to Resource B?". For example, a
security agent may want to find any
relationship between a terrorist act and a terrorist organization or
a country known to support such activities.
One major challenge in dealing with queries of this nature is that
it is often not clear exactly what notion of a relationship is
required in the query. For example, in the context of assessing
flight security, the fact that two passengers on the same flight are
nationals of a country with known terrorist groups and that they
have both recently acquired some flight training, may indicate an
association due to a similarity. On the other hand, the fact that a
passenger placed a phone call to someone in another country that
is known to have links to terrorist organizations and activities
may indicate another type of association characterized by
connectivity. Therefore, various notions of "relatedness" should
be supported.
This paper intends to make two main contributions. First, we formalize a set of complex
relationships for the RDF data model, which we call Semantic Associations. Second, we
outline two possible approaches for processing queries about Semantic Associations
through the use of an operator ρ (ρ-Queries). One of the two approaches is based on
processing ρ-queries on persistent RDF data systems such as RDFSuite [8], while the
other is based on processing these queries on a main-memory-based representation of an
RDF model such as JENA [56].
The rest of the paper is organized as follows: Section 2 discusses some background and
motivates our work with the help of an example. Section 3 presents the formal framework
for Semantic Associations; Section 4 discusses implementation strategies for the ρ
operator; Section 5 reviews some related work; and Section 6 concludes the paper.
BACKGROUND & MOTIVATION
Although there are various knowledge modeling languages that
may be used on the Semantic Web such as Topic Maps [55],
UML [47], DAML+OIL [31], OWL [25], etc., in this paper we
have chosen to formalize Semantic Associations for the RDF data
model. It should be clear that we are not suggesting that the
notion of Semantic Associations only applies to RDF. On the
contrary, the notion is very general and is applicable to any data
model that can be represented as a graph. The choice of RDF for
formalization does not pose serious problems, however. In the
first place, some of these other models, e.g. DAML+OIL, build
upon RDF. Secondly, there is work on mappings from other
formalisms to RDF [20][41].
Next, we will briefly summarize the RDF data model and then
motivate our work with an example.
2.1 RDF
RDF [42] is a standard for describing and exchanging semantics
of web resources. It provides a simple data model for describing
relationships between resources in terms of named properties and
their values. The rationale for the model is that by describing
what relationships an entity has with other entities, we somehow
capture the meaning of the entity. Relationships in RDF, or
Properties as they are called, are binary relationships
between two resources, or between a resource and a literal value.
An RDF Statement, which is a triple of the form (Subject,
Property, Object), asserts that a resource, the Subject, has
a Property whose value is the Object (which can be either
another resource or a literal). This model can be represented as a
labeled directed graph, where nodes represent the resources
(ovals) or literals (rectangles) and arcs representing properties
whose source is the subject and target is the object, and are
labeled with the name of the property. For example, in the bottom
part of Figure 1, we can see a node &r1 connected by a paints
arc to the node &r2, which reflects the fact that &r1 (a painter
with first name Pablo, and last name Picasso) painted another
resource &r2 (a painting). The meaning of the nodes and arcs is
derived from the connection of these nodes and arcs to a
vocabulary (the top part of the figure). The vocabulary describes the types of entities,
i.e. classes (e.g. Museum), and the types of properties (e.g. creates) for the domain.
The vocabulary description is done using the companion specification to RDF, called the
RDF Schema specification [17]. For example, in Figure 1, classes like Painter and Museum
and properties such as paints are defined. Resources are connected to classes using
an rdf:typeof property indicating an instantiation relationship.
2.2 MOTIVATING EXAMPLE
Although the focus of our current evaluations involves scenarios
in the National Security domain, for brevity and pedagogical
reasons, for this paper we have chosen to use a modified version
of the example from [40]. We will now illustrate Semantic
Associations by way of a simple example shown in Figure 1. The
figure shows an RDF model base containing information to be
used in the development of a cultural portal, given from two
perspectives, reflected in two different schemas (the top part of
the figure). The top left section of the picture is a schema that
reflects a museum specialist's perspective of the domains using
concepts like Museum, Artist, Artifact, etc. The top right
section is a schema that reflects a Portal administrator's
perspective of the domains using administrative metadata
concepts like file-size, mime-type, etc. to describe
resources. The lower part of the figure is the model base (or
description base in the linguo of [40]), that has
descriptions about some Web resources, e.g., museum websites
(&r3, &r8), images of artifacts (&r2, &r5, &r7) and for
resources that are not directly present on the Web, e.g., people,
nodes representing electronic surrogates are created (&r1, &r4,
&r6 for the artists Pablo Picasso, Rembrandt, and Rodin August
respectively).
Figure 1: Cultural Portal Information in RDF
Typically, a query language allows you to find all entities that are
related by a specific relationship. For example, we may ask a
query to retrieve all resources related to resource &r1 via a
paints relationship, or via a paints.exhibited
relationship, and get &r2 as a result for the first query and &r3 as
the answer for the second query. However, we are unable to ask queries such as "How are
resources &r1 and &r3 related?" Such a query should return, for example, that "&r1
paints &r2, which is exhibited in &r3", indicating a path connecting the two entities.
With a query such as this one, the user is trying to
determine if there is a relationship between entities, and what
the nature of the relationship(s) is(are). It should be possible to
ask such a query without any type of specification as to the nature
of the relationship, such as using a path expression to give
information about the structure of the relationship. For example,
the following example RQL query
select * from
{;Artist}@P{X}.{;Sculpture}@Q{Y}.@R{Z}
finds all data paths that traverse the class hierarchies Artist and
Sculpture, containing three schema properties, one for each
property variable (@variable). However, we notice that the query
requires that a property variable be added for every edge in the
required path. That is, the user is required to have some idea of at
least the structure e.g. length, of the relationship. One approach
that some of these systems offer to alleviate this problem is that
they provide mechanisms for browsing or querying schemas to
allow users to get the information they need. While this may be a
reasonable requirement when querying specific domains with a
few schemas involved, on the Semantic Web, many schemas may
be involved in a query, and requiring a user to browse them all
would be a daunting task for the user. In fact, in some cases, such
information may not be available to all users (e.g., classified
information) even though the data may be used indirectly to
answer queries. Furthermore, browsing schemas do not always
give the complete picture, especially in the case of RDFS
schemas, because, entities may belong to different schemas,
creating links between entities that are not obvious from just
looking at the schemas. For example in Figure 1, the relationship
paints.exhibited.title connecting &r1 to "Reina Sofia Museum", is not apparent by just
looking at either schema.
So far, we have talked about relationships in terms of a directed
path connecting two entities. However, there are some other
interesting types of relationships. Let us take for example,
resources &r4 and &r6. Both resources could be said to be related
because they have both created artifacts (&r5, and &r7) that are
exhibited at the same museum (&r8). In this case, having some
relationship to the same museum associates both resources. This
kind of connectivity is an undirected path between the entities.
Another closely related kind of association is class membership.
For example, &r1 and &r6 are both Artists, even though of a
different kind, and therefore are somewhat associated. Also, &r1
and &r6 could be said to be associated because they both have
creations (&r2, and &r7) that are exhibited by a Museum (&r3
and &r8 respectively). In this case, the association is that of a
similarity. So, in the first three associations the relationships
capture some kind of connectivity between entities, while the last
association captures a similarity between entities. Note that the
notion of similarity used here is not just a structural similarity, but
a semantic similarity of paths (nodes and edges) that the entities
are involved in. Nodes are considered similar, if they have a
common ancestor class. For example in the relationship involving
&r1 and &r6, although one case involves a painting and the other
a sculpture, we consider them similar because sculptures and
paintings are kinds of Artifacts and sculpting and painting are
both kinds of creative activities (the notion of similarity is
extended to properties as well).
The Semantic Associations shown in this example are fairly
simple involving only short paths and are useful only for the
purpose of illustration. However, environments that support information analytics and
knowledge discovery involve longer paths, especially undirected paths, which are not
easily detectable by users in fast-paced settings. For example, at airport
security portals, agents may want to quickly determine if a
passenger has any kind of link to terrorist organizations or
activities.
FRAMEWORK
The framework described in this section provides a formal basis
for Semantic Associations. It builds on the formalization for the
RDF data model given in [40], by including a notion of a
Property Sequence. A Property Sequence allows us to
capture paths in the RDF model and forms the basis for
formalizing Semantic Associations as binary relations on Property
Sequences. Secondly, we define some complex queries, called ρ-queries, for querying
about Semantic Associations.
3.1 Formal Data Model
In section 2.1, we describe the RDF data model informally as a
labeled directed graph. To recap, the RDF Schema specification
[17] provides a special vocabulary for describing classes and
properties in a domain. A Property is defined by specifying its
domain (the set of classes that it applies to), and its range
(either a Literal type e.g. String, Integer, etc, or the classes whose
entities it may take as values). Classes are defined in terms of
their relationship to other classes using the rdfs:sublassOf
property to place them at the appropriate location in a class
hierarchy, as well as other user specified properties that may
include them in their range or domain thereby linking them to
other classes. Properties may also be organized in a hierarchy
using the rdfs:subPropertyOf property.
The formalization in [40] defines a graph data model along with a
type system that connects the RDF Model & Syntax specification
with the RDFS schema specification using an interpretation
mechanism. It forms the basis for a typed RDF query language
called RQL [40]. RQL is fairly expressive and supports a broad
range of queries. Its type system T is the set of all possible types
that can be constructed from the following types:
τ := τ_C | τ_P | τ_M | τ_U | τ_L | {τ} | [1 : τ_1, 2 : τ_2, ..., n : τ_n] | (1 : τ_1 + 2 : τ_2 + ... + n : τ_n)

where τ_C indicates a class type, τ_P a property type, τ_M a metaclass type, τ_L a
literal type in the set L of literal type names (string, integer, etc.), and τ_U is the
type for resource URIs. For the RDF multi-valued types, {.} is the Bag type, [.] is the
Sequence type, and (.) is the Alternative type. The set of values that can be
constructed using the resource URIs, literals, and class and property names is called V.
The interpretation of the types in T is given naturally by the interpretation function
[[ ]], which is a mapping from the types in T to the set of values in V. For example, a
class C is interpreted as a unary relation of type {τ_U}, namely the set of resources
(i.e. of type τ_U) that have an rdf:typeOf property with range C, including the
interpretations of the subclasses of C. For a property p, [[p]] is given by

[[p]] = {[v_1, v_2] | v_1 ∈ [[p.domain]], v_2 ∈ [[p.range]]} ∪ ⋃ {[[p']] | p' is a subPropertyOf p}
It defines an RDF schema as a 5-tuple RS = (V_S, E_S, ψ, λ, H), where V_S is the set of
nodes and E_S is the set of edges; ψ is an incidence function ψ : E_S → V_S × V_S; and
λ is a labeling function that maps class and property names to one of the types in T,
i.e. λ : V_S ∪ E_S → T. H = (N, <), where N = C ∪ P, and C and P are the sets of class
and property names in RS, respectively. H is a well-formed hierarchy, i.e., < is a
smallest partial ordering such that if p_1, p_2 ∈ P and p_1 < p_2, then p_1.domain ⊆
p_2.domain and p_1.range ⊆ p_2.range. It also formalizes an instance of an RDFS schema,
called a description base, which contains all the asserted instances of classes and
properties of an RDF schema. We generalize these definitions to sets of RDF schemas and
description bases as the basic notion of context for a ρ-query.
3.1.1 Definition 1
An RDFS schema set is a set of RDFS schemas RSS = {RS_i : 1 ≤ i ≤ n}. Let
C = C_S1 ∪ C_S2 ∪ ... ∪ C_Sn, where C_Si is the set of class names in schema RS_i, and
P = P_S1 ∪ P_S2 ∪ ... ∪ P_Sn, where P_Si is the set of property names in RS_i; then
N = C ∪ P.
[40] defines a description base RD which is an instance of an
RDFS schema RS containing all the asserted instances of the
classes and properties in RS. We generalize that definition here to
the union of instances of the set of schemas in an RDFS schema
set.
3.1.2 Definition 2
An instance of an RDF schema set RSS = {RS_1, RS_2, ..., RS_n} is a description base RDS
defined as a 5-tuple (V_DS, E_DS, ψ, ν, λ), where V_DS = V_D1 ∪ V_D2 ∪ ... ∪ V_Dn and
V_Di is the set of nodes in the description base of the schema RS_i, and E_DS is defined
similarly. ψ is the incidence function ψ : E_DS → V_DS × V_DS; ν is a value function
that maps the nodes to the values in V, i.e. ν : V_DS → V; and λ is a labeling function
that maps each node either to one of the container type names (Seq, Bag, Alt) or to a
set of class names from the schema set RSS whose interpretations contain the value of
the node, and maps each edge e = [v_1, v_2] to a property name p in RSS such that the
interpretation of p contains the pair [ν(v_1), ν(v_2)], i.e., the values of v_1 and v_2.
Formally, λ : V_DS ∪ E_DS → 2^N ∪ {Bag, Seq, Alt} in the following manner:
i. For a node n in RDS, λ(n) = {c | c ∈ C and ν(n) ∈ [[c]]}.
ii. For an edge e from node n_1 to n_2, λ(e) = p ∈ P such that the values of n_1 and n_2
belong to the interpretation of p: [ν(n_1), ν(n_2)] ∈ [[p]].
In order to capture paths in the RDF graph model, we define the notion of a Property
Sequence, represented in the graph as a sequence of edges (i.e. properties). There is a
choice to be made in how such a notion is realized in a query language such as RQL. One
option is to add paths or property sequences as types in the query language, making them
first-class citizens. The second option is to realize them as complex operations, such
as Cartesian products, on property types. We choose the latter approach because
attempting to make paths first-class citizens brings up additional issues such as
defining path subsumption. We will now define the notion of a Property Sequence.
3.1.3 Definition 3 (Property Sequence)
A Property Sequence PS is a finite sequence of properties [P_1, P_2, P_3, ..., P_n],
where each P_i is a property defined in an RDF schema RS_j of a schema set RSS. The
interpretation of PS is given by

[[PS]] ⊆ [[P_1]] × [[P_2]] × ... × [[P_n]], where for ps ∈ [[PS]], called an instance of PS, ps[i] ∈ [[P_i]] for 1 ≤ i ≤ n and ps[i][1] = ps[i+1][0].

Here ps[i][1] refers to the second element of the i-th ordered pair and ps[i+1][0]
refers to the first element of the (i+1)-th ordered pair. We define a function
NodesOfPS() which returns the set of nodes of a Property Sequence PS, i.e.
PS.NodesOfPS() = {C_1, C_2, C_3, ..., C_k}, where C_i is a class in either the domain or
range of some property P_j in PS, 1 ≤ j ≤ n. For example, in Figure 1, for
PS = creates.exhibited.title, PS.NodesOfPS() = {Artist, Artifact, Museum, Ext. Resource,
String}.

Let PS = [P_1, P_2, P_3, ..., P_n]. A description base RDS is said to satisfy or be a
model of PS (RDS |= PS) if there exists a sequence of edges e_1, e_2, e_3, ..., e_n in
RDS such that for all i, λ(e_i) = P_i and ψ(e_i) = (v_i, v_{i+1}), and the sequence of
pairs (ν(v_i), ν(v_{i+1})), i = 1, ..., n, is some ps ∈ [[PS]].

We define a function PSNodesSequence() on Property Sequence instances that returns the
sequence of nodes of an instance, i.e. ps.PSNodesSequence() = [v_1, v_2, v_3, ..., v_{n+1}].
The node v_1 is called the origin of the sequence and v_{n+1} is called the terminus.
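To make the semantics concrete, the following Python sketch (our own illustration; the triple-list representation and the function names are assumptions, not part of the paper) checks whether a description base, given as a list of (subject, property, object) statements, contains an instance of a Property Sequence connecting an origin x to a terminus y, which is the check underlying ρ-pathAssociated(x, y) for a known sequence.

def satisfies_sequence(triples, prop_seq, origin, terminus):
    """Return True if the description base (a list of (s, p, o) triples)
    contains an instance of the Property Sequence prop_seq whose
    origin is `origin` and whose terminus is `terminus`."""
    frontier = {origin}                       # nodes reachable so far
    for prop in prop_seq:
        frontier = {o for (s, p, o) in triples if p == prop and s in frontier}
        if not frontier:                      # sequence broken
            return False
    return terminus in frontier

# Example modeled on Figure 1 of the paper.
base = [
    ("&r1", "paints", "&r2"),
    ("&r2", "exhibited", "&r3"),
    ("&r3", "title", "Reina Sofia Museum"),
]
print(satisfies_sequence(base, ["paints", "exhibited"], "&r1", "&r3"))           # True
print(satisfies_sequence(base, ["paints", "exhibited", "title"], "&r1", "&r3"))  # False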
Next, we define a set of binary relations on Property Sequences.
3.1.4 Definition 4 (Joined Property Sequences)
PS_1 ⋈ PS_2 is true if NodesOfPS(PS_1) ∩ NodesOfPS(PS_2) ≠ ∅.
The Property Sequences PS_1 and PS_2 are then called joined, and every
C ∈ (NodesOfPS(PS_1) ∩ NodesOfPS(PS_2)) is called a join node. For example, in Figure 2,
the sequences creates.exhibited and paints.exhibited are joined because they have the
join node Museum.
Figure 2: Isomorphic Property Sequences
3.1.5 Definition 5 (ρ-Isomorphic Property Sequences)
Two property sequences PS_1 = [P_1, P_2, P_3, ..., P_m] and PS_2 = [Q_1, Q_2, Q_3, ..., Q_m]
are called ρ-isomorphic (PS_1 ≅ PS_2) if for all i, 1 ≤ i ≤ m: P_i = Q_i, or P_i ⊑ Q_i,
or Q_i ⊑ P_i (where ⊑ means subPropertyOf).
For example, in Figure 2, the sequences paints.exhibited and creates.exhibited are
ρ-isomorphic because paints is considered to be similar to creates, since paints is a
subproperty of creates. Note that the example used here is somewhat misleading, because
the example shown for Joined Property Sequences also happens to be ρ-isomorphic.
However, the two notions are quite different, because Joined Property Sequences are not
required to be similar.
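A direct way to test this definition is sketched below in Python (our own illustration; the representation of the property hierarchy as a child-to-parents dictionary and the function names are assumptions).

def is_subproperty(p, q, sub_property_of):
    """True if p equals q or p is a (transitive) subPropertyOf q,
    given a dict mapping each property to its direct parents."""
    stack, seen = [p], set()
    while stack:
        cur = stack.pop()
        if cur == q:
            return True
        if cur in seen:
            continue
        seen.add(cur)
        stack.extend(sub_property_of.get(cur, []))
    return False

def rho_isomorphic(ps1, ps2, sub_property_of):
    """Definition 5: sequences of equal length whose i-th properties
    are equal or related by subPropertyOf in either direction."""
    return len(ps1) == len(ps2) and all(
        is_subproperty(p, q, sub_property_of) or is_subproperty(q, p, sub_property_of)
        for p, q in zip(ps1, ps2)
    )

# Example from Figure 2: paints is a subproperty of creates.
hierarchy = {"paints": ["creates"], "sculpts": ["creates"]}
print(rho_isomorphic(["paints", "exhibited"], ["creates", "exhibited"], hierarchy))  # True
print(rho_isomorphic(["paints", "exhibited"], ["sculpts", "exhibited"], hierarchy))  # False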
3.1.6 Definition 6 (Length)
The length of a Property Sequence is the number of properties in it. In the case of a
Joined Property Sequence, its length is the total number of properties in its
constituent Property Sequences, i.e. the length of the undirected path from the origin
of one Property Sequence to the origin of the other. For example, in Figure 2, the
Joined Property Sequence [creates.exhibited, paints.exhibited] has a length of 4.
3.2 Semantic Associations
We can now define some binary relations on the domain of
entities i.e. resources, based on the different types of Property
Sequences.
3.2.1 Definition 7 (ρ-pathAssociated)
ρ-pathAssociated(x, y) is true if there exists a Property Sequence PS with ps ∈ [[PS]]
such that either x and y are the origin and terminus of ps, respectively, or vice versa,
i.e. y is the origin and x is the terminus. Then ps is said to satisfy
ρ-pathAssociated(x, y), written as ps |= ρ-pathAssociated(x, y).
3.2.2 Definition 8 (ρ-joinAssociated)
Let PS_1 and PS_2 be two Property Sequences such that PS_1 ⋈ PS_2 with a join node C,
and let there exist ps_1 and ps_2 such that ps_1 ∈ [[PS_1]], ps_2 ∈ [[PS_2]], and
n ∈ ps_1.PSNodesSequence() ∩ ps_2.PSNodesSequence() with n ∈ [[C]]. Then
ρ-joinAssociated(x, y) is true if either of the following conditions is satisfied:
1) x is the origin of ps_1 and y is the origin of ps_2, or
2) x is the terminus of ps_1 and y is the terminus of ps_2.
This means that either ps_1.PSNodesSequence() = [x, b, c, ..., n, ..., r] and
ps_2.PSNodesSequence() = [y, b', c', ..., n, ...], or
ps_1.PSNodesSequence() = [a, b, c, ..., n, ..., r, x] and
ps_2.PSNodesSequence() = [a', b', c', ..., n, ..., y], where n ∈ [[C]]. We say that
(ps_1, ps_2) |= ρ-joinAssociated(x, y).
3.2.3 Definition 9 (ρ-cpAssociated)
This is a special case of Definition 8 that captures an inclusion or sibling
relationship (i.e. a common parent) between resources. ρ-cpAssociated(x, y) is true if
there exist two Property Sequences PS_1 and PS_2 with PS_1 ⋈ PS_2 which satisfy
ρ-joinAssociated(x, y), and both PS_1 and PS_2 are of the form
rdf:typeOf.(rdfs:subClassOf)*. This relation is used to capture the notion that entities
are related if they belong either to the same class or to sibling classes. For example,
&r1 and &r6 are related because they are both Artists. We say that
(ps_1, ps_2) |= ρ-cpAssociated(x, y). However, in order to reduce the possibility of
meaningless associations, e.g. both x and y belonging to rdfs:Resource, we make further
restrictions. We say that ρ-cpAssociated(x, y) is strong if
1) for the join node j of the Joined Property Sequence inducing the association (i.e.
the common parent of x and y), j is at or below the ceiling, where the ceiling refers to
the most general class in the hierarchy that is to be considered and is usually
user-specified;
2) the length of the Joined Property Sequence inducing the association is minimal. By
minimal we mean that it is less than a specific value indicated by context or specified
by the user.
The first restriction ensures that we do not go too far up the hierarchy looking for a
common parent, while the second ensures that the relationship is not too distant to be
meaningful in the user's context.
3.2.4 Definition 10 (ρ-IsoAssociated)
ρ-IsoAssociated(x, y) is true if there exist two property sequences PS_1 and PS_2 such
that PS_1 ≅ PS_2, and there exist ps_1 and ps_2 such that ps_1 ∈ [[PS_1]] and
ps_2 ∈ [[PS_2]], where x is the origin of ps_1 and y is the origin of ps_2. We say that
(ps_1, ps_2) |= ρ-IsoAssociated(x, y).
We say that x and y are semantically associated if either ρ-pathAssociated(x, y),
ρ-cpAssociated(x, y), ρ-IsoAssociated(x, y), or ρ-joinAssociated(x, y) holds.
3.3 ρ-Queries
A ρ-Query Q is defined as a set of operations that map from a pair of keys (e.g.
resource URIs) to the set of Property Sequences PS in the following manner:
1. ρ : U^(2) → 2^PS
2. ρ : U^(2) → 2^(PS^(2))
3. ρ : U^(2) → 2^(PS^(2))
Here U^(2) = {{x, y} : x, y ∈ U and x ≠ y}, and similarly PS^(2) is the set of pairs of
Property Sequences. In 1., we map from a pair of keys x and y to a set of Property
Sequences that induce a ρ-pathAssociation of x and y. In 2., we map from (x, y) to a set
of binary tuples of Property Sequences that induce either a ρ-joinAssociation or a
strong ρ-cpAssociation of x and y, and in 3., we map from (x, y) to a set of binary
tuples of Property Sequences that induce a ρ-isoAssociation.
STRATEGIES FOR PROCESSING ρ-QUERIES
Our implementation strategy involves investigating alternative approaches to
implementing the ρ operator and evaluating their merits and demerits. We consider two
major categories. The first category, for which we have developed a partial
implementation, involves leveraging existing RDF persistent data storage technologies.
Here, a ρ-query processing layer is developed above the RDF data storage layer; it
performs some of the computation and relegates a specific portion of the computation to
the data store layer. In the second approach, the implementation uses a memory-resident
graph representation of the RDF model along with efficient graph traversal algorithms.
We will outline how query processing is done using both approaches.
4.1 Exploiting RDF Data Management Systems
In this approach, we leverage existing RDF data storage technologies such as RDFSuite
[8] and SESAME [18] and develop a ρ-query processing layer which performs some of the
computation and relegates a specific portion of it to the data store layer. Figure 3
gives an illustration of this approach (although this is somewhat of an
oversimplification, it is adequate for the purposes of this discussion). Here the
processing of a ρ-query is broken down into four phases: phases 2 and 4 occur at the
data store layer, and phases 1 and 3 occur at the ρ-query processing layer.
Phase 1 captures the query, i.e. the resources and the context (the schema set). In the
second phase, the resources are classified, i.e., the classes that the entities belong
to within the given context are identified. This involves a query to the data store
layer, which exploits the rdf:typeOf statements to answer it. Much of the processing is
done in the third phase, where potential paths involving the entities in the query are
discovered by querying a PathGuide (a combination of index structures that stores
information about paths that exist between resource classes).
There are two kinds of paths that are kept in the PathGuide. The
first kind of path is that which is obvious from the schema. The
second kind is those paths that exist at the data level but are not
evident at the schema level. This is because of the fact that the
RDF data model allows multiple classifications of entities.
Consequently, every instance of a multiple classification induces
a connection between two classes that does not exist at the
schema level, and thus is not general to all entities of those classes. Therefore, a
query to the PathGuide yields potential Property Sequence paths between entities, since
some of the paths are not general to entire classes but specific to the entities that
are multiply classified. For example, in Figure 1, the
paints.exhibited.title sequence is not a sequence in
either the left or right schema, but is present in the description
base (i.e. between &r1 and the literal node "Reina Sofia
Museum"). The reason for this is &r3`s membership in both the
Museum and the Ext.Resource classes, this can be seen as
having created an intermediate class node that collapses Museum
and the Ext.Resource classes, and consequently links the
paints.exhibited sequence to the title property.
The fourth stage of query processing involves the validation of
the paths found in the PathGuide for the specific entities in the
query, by executing queries on the underlying data store. The
output of this stage selects only those paths whose queries do not
return an empty result.
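A PathGuide-style index can be prototyped as below (a minimal sketch under our own assumptions; the paper does not specify the index layout, so the dictionary keyed by (origin class, terminus class) pairs, the class names, and the function names are all hypothetical).

from collections import defaultdict

class PathGuide:
    """Maps (origin class, terminus class) pairs to the schema-level
    property paths known to connect them (phase 3 lookup)."""
    def __init__(self):
        self.index = defaultdict(set)

    def add_path(self, origin_class, terminus_class, prop_path):
        self.index[(origin_class, terminus_class)].add(tuple(prop_path))

    def candidate_paths(self, classes_x, classes_y):
        """Candidate paths between entities whose classes are known;
        these still need validation against the description base (phase 4)."""
        paths = set()
        for cx in classes_x:
            for cy in classes_y:
                paths |= self.index.get((cx, cy), set())
        return paths

# Example using the schema of Figure 1.
guide = PathGuide()
guide.add_path("Artist", "Museum", ["creates", "exhibited"])
guide.add_path("Painter", "Museum", ["paints", "exhibited"])
print(guide.candidate_paths({"Painter", "Artist"}, {"Museum"}))
# -> {('creates', 'exhibited'), ('paints', 'exhibited')}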
Figure 3: Illustration of ρ-Query Processing. The four phases are: 1. Query Entities;
2. Classification of Entities; 3. Identification of Candidate Paths; 4. Pruning of
Invalid Paths.
4.1.1 Issues
Two challenges arise from storing all potential paths between
classes in the PathGuide indexes. The first is that it causes the size
of indexes to be quite large. Second, the potential paths found in
the PathGuide in response to a query, could generate a large
number of RQL queries that need to be validated at the data store
layer, which slows down processing time significantly. However,
heuristics could be employed to minimize these problems. For
example, to reduce the size of the indices, we could choose to
avoid adding every single potential path between classes in the
index, but include only those whose support value is at least as
large as a user supplied threshold value, where the support value
represents the percentage of resources that are involved in
multiple classification for any given pair of classes. This means
that if very few resources induce a connection between two
otherwise unconnected schema classes because of a multiple
classification, then we do not include in the indexes, those
additional paths created due to the multiple classification, thereby
reducing the size of the indices. The rationale for this is that the
probability of those paths being involved in the result of a query
is low, therefore the additional cost of storing the paths in the
indices may not be worth it. A second heuristic is to try to prune
the number of paths that need to be validated at the data storage
layer. This could be done by assigning weights to Semantic
Associations based on the contextual relevance and then
validating only those associations with a high relevance weight.
Our work in this area is still in progress.
An additional problem with processing ρ-queries on existing RDF storage systems is that
some of these systems represent each property as a separate relation in a relational
database model. Therefore, the computation of a Property Sequence results in a large
number of joins, which has a negative impact on the speed of query processing.
Currently, we do not see any easy solution to this problem.
4.2 Using Graph Algorithms
This approach involves the computation of Semantic Associations on a memory-resident
graph representation of the RDF model, such as that provided by JENA [56], or on the
memory representation of the schema set as in SESAME [18], to which graph traversal
algorithms can be applied. In the case of a ρ-pathAssociation we can search for paths
between entities, and in the case of a ρ-joinAssociation we check whether the two
entities belong to the same connected component. One issue with this approach is that
trying to find all paths between entities can easily lead to an exponential-time
algorithm. However, [52] provides
be employed for such computations. In particular, it offers near-linear
time algorithms for computing a path expression
representing the set of all paths between nodes in a graph. Such a
representation may then be processed using contextual
information to prune paths that are not very relevant in the given
context. In addition, other heuristics may be added. For example,
a user may be asked to characterize the desired result set, e.g.
shortest paths or longest paths, which will also help to reduce the
result set. Similar heuristics to those discussed in the first
approach that use context to prune paths based on degree of
relevance can also be used here. In that case, the complexity of the problem can be
analyzed in terms of the number of semantic paths retrieved:

Complexity = Σ_{l=1}^{n−1} (# paths of length l) × (probability of keeping a path of length l).
Another issue is the complexity of the graph isomorphism problem, for which no
polynomial-time algorithm is known in general. However, certain classes of graphs have
properties making them amenable to efficient manipulation. For example, [12] describes a
polynomial-time algorithm for detecting isomorphism in rooted directed path graphs,
which includes exactly the class of graphs required for checking ρ-isomorphism. We are
currently working on a prototype implementation of this approach.
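As a small illustration of the graph-based strategy (our own sketch, not the authors' implementation; the adjacency representation, the length bound, and the function names are assumptions), the following Python code enumerates undirected property paths between two resources in an RDF triple graph up to a bounded length, which covers the directed paths of ρ-pathAssociation as a special case.

def find_paths(triples, x, y, max_len=4):
    """Enumerate undirected paths (as lists of (node, property, node) steps)
    between resources x and y in a list of (s, p, o) triples."""
    # Build an undirected adjacency list that remembers edge direction.
    adj = {}
    for s, p, o in triples:
        adj.setdefault(s, []).append((p, o, "->"))
        adj.setdefault(o, []).append((p, s, "<-"))

    results = []
    def dfs(node, path, visited):
        if node == y and path:
            results.append(list(path))
            return
        if len(path) >= max_len:
            return
        for prop, nxt, direction in adj.get(node, []):
            if nxt not in visited:
                visited.add(nxt)
                path.append((node, prop + direction, nxt))
                dfs(nxt, path, visited)
                path.pop()
                visited.remove(nxt)
    dfs(x, [], {x})
    return results

# Example modeled on Figure 1: &r4 and &r6 are associated through museum &r8.
base = [("&r4", "creates", "&r5"), ("&r5", "exhibited", "&r8"),
        ("&r6", "creates", "&r7"), ("&r7", "exhibited", "&r8")]
for p in find_paths(base, "&r4", "&r6"):
    print(p)
# -> one undirected path of length 4 passing through &r8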
RELATED WORK
There is some relationship between our work and that on querying
object-oriented and semi-structured data using path expressions
[2][3][19][22][23][24][34]. Although, these systems provide
powerful and expressive capability, allowing users to query for
data without having in-depth schema knowledge, most of them
work on the premise that the goal of a query is to find data entities
but not complex relationships such as Semantic Associations.
Some of these systems [19][22] support paths as first class entities
and allow for path variables to be used outside of the FROM
clause, i.e. to be returned as a result of a query, which suggests that queries for
ρ-pathAssociations could be supported. However, they typically assume a simpler data
model, a rooted directed graph without the nuances of RDF such as multiple
classification and property hierarchies. Furthermore, the more complex Semantic
Associations such as the ρ-joinAssociation and ρ-Isomorphism are not supported, even in
systems like [22] which
provide some functions that range over path variables, e.g., the
difference function which returns the difference in the set of paths
that originate from two nodes.
With respect to RDF, the current generation of RDF query
languages RQL [40], SquishQL [45], RDQL [48], do not support
path variables as first class entities and so cannot even be used for
querying for path relationships. In the case of the logic-based
RDF query languages such as TRIPLE [51], the inference rules
required to reason about the full range of the Semantic
Associations discussed here, would require functionality beyond
FOL.
The DISCOVER system [38] provides a keyword proximity
search capability over relational databases, which return
associations called Joining Sequences. Joining Sequences
represent the paths connecting keywords in the query, obtained by
traversing foreign key links. However, the semantics associated
with these associations is not explicit, but is implicit in the
database schema. Thus, the interpretation of the meaning and
usefulness of the associations must be done by users.
Furthermore, other more complex Semantic Associations such as
the ρ-Isomorphism are not captured.
There is a common intuition underlying our work and some of the
tasks related to data mining, in that they both involve discovering
relationships. However, there are significant differences in the
goals, methods and results produced by the both kinds of systems.
The first difference is articulated in a statement made in [32],
where data mining is said to be opportunistic while information
access techniques (such as ours) are goal-driven. Traditional data
mining [21][26] focuses on discovering patterns and relationships
in data that can be used to develop models of the data. In
association analysis [7], rules that associate attribute-value pairs
are learned from patterns of co-occurrences of attribute values in
data, which capture co-occurrence relationships between
attributes. On the contrary, we do not try to learn patterns from
data rather, we provide specific rules for inferring relationships
between entities by looking at property value dependencies, and
focus on providing methods for verifying whether these kinds of
associations exist between entities. That is, we identify
meaningful sequences of binary predicates while data mining
association rules involve sets of attribute value pairs. Therefore,
we view data mining as a complementary technology. For
example, the association rules learnt from patterns in data can
provide knowledge that can be used to guide the search for
Semantic Associations or to rank resulting Semantic Associations
based on how closely they follow the patterns.
An initial discussion on Semantic Associations is made in [10].
CONCLUSION & FUTURE WORK
Most RDF query systems do not provide adequate querying
paradigms to support querying for complex relationships such as
Semantic Associations. Support for such querying capabilities is
highly desirable in many domains. We have presented a formal
framework for these Semantic Associations for the RDF data
model, and reviewed some implementation strategies for
computing them. There are many open research issues that we
plan to focus on in the near future. First, it may be necessary to
develop data organization techniques for data that will adequately
support the kinds of queries discussed here. Such techniques
should eliminate the need for an excessive number of expensive
computations such as joins during query processing. Secondly, we
plan to develop techniques for dealing with the space complexity
problem of the indices used in the PathGuide. For example we
may use encoding schemes that compress path information, or
heuristics for managing the size of the indices. Another top
priority is the development of context-sensitive ranking
algorithms that assign higher weights to results that are most
relevant in the query context. Finally, we will perform a
comparative study of the two implementation strategies discussed
in section 4 over a testbed consisting of large amount of
automatically extracted metadata generated using the SCORE
system [41].
ACKNOWLEDGMENTS
Our thanks to Drs. Bob Robinson, John Miller, Krys Kochut, and
Budak Arpinar for the illuminating discussions and insightful
contributions, and to Boanerges Aleman-Meza for his revision
comments. We are also indebted to Dr. Vassilis Christophides
whose comments and suggestions were invaluable in preparing
the final version of the paper.
This work is funded by NSF-ITR-IDM Award # 0219649 titled
"
Semantic Association Identification and Knowledge Discovery
for National Security Applications
."
REFERENCES
[1]
S. Abiteboul, P. Buneman, and D. Suciu. Data on the Web:
From Relations to Semistructured Data and XML. Morgan
Kaufmann, 1999.
[2]
S. Abiteboul. Querying Semi-Structured data. In Proc. of
ICDT, Jan 1997.
http://citeseer.nj.nec.com/abiteboul97querying.html
[3]
S. Abiteboul, D. Quass, J. McHugh, J. Widom, and J.
Wiener. The Lorel Query Language for Semistructured Data.
International Journal on Digital Libraries, 1(1):68--88, April
1997.
[4]
S. Abiteboul, R. Hull, and V. Vianu. Foundations of
Databases. Addison-Wesley, 1995.
[5]
R. Agrawal. Alpha: An Extension of Relational Algebra to
Express a Class of Recursive Queries. IEEE Transactions on
Software Engineering. 14(7):879-- 885, July 1988.
[6]
R. Agrawal, A. Borgida, and H.V. Jagadish. Efficient
Management of Transitive Relationships in Large Data
Bases. In SIGMOD'89, pages 253--262, Portland, Oregon,
USA, 1989.
[7]
R. Agrawal, T. Imielinski and A. Swami. Mining
Association Rules between Sets of Items in Large Databases.
Proc. Conf. On Management of Data. Washington, DC,
USA, 207--216. ACM Press, New York, NY USA 1993.
[8]
S. Alexaki, G. Karvounarakis, V. Christophides, D.
Plexousakis, and K. Tolle. The ICS-FORTH RDFSuite:
Managing Voluminous RDF Description Bases. In 2nd
International Workshop on the Semantic Web, pages 1--13,
Hong Kong, 2001.
[9]
S. Alexaki, G. Karvounarakis, V. Christophides, D.
Plexousakis, and K. Tolle. On Storing Voluminous RDF
descriptions: The case of Web Portal Catalogs. In 4th
International Workshop on the Web and Databases
(WebDB), Santa Barbara, CA, 2001. Available at
http://139.91.183.30:9090/RDF/publications/webdb2001.pdf
[10]
K. Anyanwu, A. Sheth. The ρ Operator: Discovering and
Ranking Associations on the Semantic Web. SIGMOD
Record (Special Issue on the Amicalola Workshop), December
2002.
[11]
D. Avant, M. Baum, C. Bertram, M. Fisher, A. Sheth, Y.
Warke, "Semantic Technology Applications for Homeland
Security," Proc. of the 11
th
Intl Conf. on Information and
Knowledge Management (CIKM 2002), McLean, VA,
November 4-9, 2002, pp. 611--613.
[12]
L. Babel, I. Ponomarenko, G. Tinhofer. The Isomorphism
Problem for Directed Paths and For Rooted Directed Path
Graphs. Journal of Algorithms, 21:542--564, 1996.
[13]
T. Berners-Lee, J. Hendler, and O. Lassila. The Semantic
Web. Scientific American, May 2001.
[14]
B. Berendt, A. Hotho, G. Stumme. Towards Semantic Web
Mining. In Proceedings of the International Semantic Web
Conference, pp. 264--278, Sardinia, Italy. June 2002.
[15]
A. Brandstädt, V. B. Le, J. P. Spinrad. Graph Classes: A
Survey. SIAM, 1999.
[16]
T. Bray, J. Paoli, and C.M. Sperberg-McQueen. Extensible
Markup Language (XML) 1.0. W3C Recommendation,
February 1998.
[17]
D. Brickley and R.V. Guha. Resource Description
Framework (RDF) Schema Specification 1.0, W3C
Candidate Recommendation. 2000.
[18]
J. Broekstra, A. Kampman, and F. van Harmelen. SESAME:
An Architecture for Storing and Querying RDF Data and
Schema Information. In D. Fensel, J. Hendler, H. Lieberman,
and W. Wahlster, editors, Semantics for the WWW. MIT
Press, 2001.
[19]
P. Buneman, M. Fernandez, D. Suciu. UnQL: A Query
Language and Algebra for Semistructured Data Based on
Structural Recursion. VLDB Journal, 9(1):76--110, 2000.
[20]
W. W. Chang, A Discussion of the Relationship Between
RDF-Schema and UML. A W3C Note, NOTE-rdf-uml-19980804.
[21]
M. Chen, J. Han and P. Yu. Data Mining: An Overview from
the Database Perspective. IEEE Trans. On Knowledge and
Data Engineering. Vol. 8. No. 6. December 1996.
[22]
V. Christophides, S. Abiteboul, S. Cluet, and M. Scholl.
From Structured Documents to Novel Query Facilities. In
Proc. of ACM SIGMOD Conf. on Management of Data, pp.
313--324, Minneapolis, Minnesota, May 1994.
[23]
V. Christophides, S. Cluet, and G. Moerkotte. Evaluating
Queries with Generalized Path Expressions. In Proc. of ACM
SIGMOD, pp. 413--422, 1996.
[24]
D. Chamberlin, D. Florescu, J. Robie, J. Simeon, and M.
Stefanescu. XQuery: A Query Language for XML. Working
draft, World Wide Web Consortium, June 2001.
http://www.w3.org/TR/xquery/
[25]
M. Dean, D. Connolly, F. Harmelen, J. Hendler, I. Horrocks,
D. McGuinness, P. F. Patel-Schneider, and L. Stein. OWL
Web Ontology Language 1.0 Reference, W3C Working
Draft 29 July 2002. http://www.w3.org/TR/owl-ref/.
[26]
U. M. Fayyad, G. Piatetsky-Shapiro, P. Smyth and R.
Uthurusany. Advances in Knowledge Discovery and Data
Mining.. AAAI/MIT Press 1996.
[27]
R. Fikes. DAML+OIL query language proposal, August
2001. http://www.daml.org/listarchive/joint-committee/0572
.html.
[28]
R. H. Guting. GraphDB: Modeling and querying graphs in
databases. In Proceedings of the International Conference on
Very Large Data Bases, pp. 297--308, 1994.
[29]
B. Hammond, A. Sheth, and K. Kochut, Semantic
Enhancement Engine: A Modular Document Enhancement
Platform for Semantic Applications over Heterogeneous
Content, in Real World Semantic Web Applications, V.
Kashyap and L. Shklar, Eds., IOS Press, December 2002, pp.
29--49.
[30]
S. Handschuh and S. Staab. Authoring and annotation of web
pages in CREAM. In The Eleventh International World
Wide Web Conference (WWW2002), Honolulu, Hawaii,
USA, 7-11, May, 2002
[31]
F. Harmelen, P. F. Patel-Schneider, I. Horrocks, eds.
Reference Description of the DAML+OIL (March 2001)
ontology markup language.
[32]
M. Hearst. Distinguishing between Web Data Mining and
Information Access. Position statement for Web Data Mining
KDD 97.
[33]
Y. E. Ioannidis, R. Ramakrishnan, L. Winger: Transitive
Closure Algorithms Based on Graph Traversal. TODS 18(3):
512--576 (1993).
[34]
Y. E. Ioannidis, Y. Lashkari. Incomplete Path Expressions
and their Disambiguation, In Proc. of the 1994 ACM
SIGMOD, International Conference on Management of Data.
p.138-149, May 24-27, 1994, Minneapolis, Minnesota,
United States.
[35]
P. Hayes. RDF Model Theory. W3C Working Draft,
September 2001.
[36]
I. Horrocks, S. Tessaris. The Semantics of DQL.
http://www.cs.man.ac.uk/~horrocks/Private/DAML/DQL-semantics
.pdf
[37]
I. Horrocks and S. Tessaris. A Conjunctive Query Language
for Description Logic Aboxes. In Proc. of AAAI-00, 2000.
[38]
V. Hristidis and Y. Papakonstantinou. DISCOVER:
Keyword Search in Relational Databases. In Proc. of VLDB,
Aug. 2002.
[39]
ICS-FORTH. The ICS-FORTH RDFSuite web site.
Available at http://139.91.183.30:9090/RDF, March 2002.
[40]
G. Karvounarakis, S. Alexaki, V. Christophides, D.
Plexousakis, M. Scholl, RQL: A Declarative Query
Language for RDF, WWW2002, May 7-11, 2002, Honolulu,
Hawaii, USA.
[41]
M. S. Lacher and S. Decker. On the Integration of Topic
Maps and RDF Data. In Proc. of Semantic Web Working
Symposium. Palo Alto. California. August 2001.
[42]
O. Lassila and R. Swick. Resource Description Framework
(RDF) Model and Syntax Specification, W3C
Recommendation. 1999.
[43]
M. Mannino, L. Shapiro. Extensions to Query Languages
for Graph Traversal Problems. TKDE 2(3): 353--363, 1990.
[44]
A. O. Mendelzon and P. T. Wood. Finding Regular Simple
Paths in Graph Databases. SIAM J. Comput., 24(6):1235--1258, 1995.
[45]
L. Miller, A. Seaborne, A. Reggiori. Three Implementations
of SquishQL, a Simple RDF Query Language. In Proc. of the
1st International Semantic Web Conference (ISWC2002),
June 9-12, 2002, Sardinia, Italy.
[46]
A. Nanopoulos. Y. Manolopoulos. "Mining Patterns from
Graph Traversals", Data and Knowledge Engineering,
Vol.37, No.3, pp.243-266, 2001.
[47]
J. Rumbaugh, I. Jacobson, and G. Booch. The Unified
Modeling Language Reference Manual. Addison-Wesley,
1999.
[48]
A. Seaborne. RDQL: A Data Oriented Query Language for
RDF Models. 2001.
http://www.hpl.hp.com/semweb/rdql-grammar
.html
[49]
A. Sheth, C. Bertram, D. Avant, B. Hammond, K. Kochut,
Y. Warke. Semantic Content Management for Enterprises
and the Web, IEEE Internet Computing, July/August 2002,
pp. 80--87.
[50]
A. Sheth, S. Thacker and S. Patel. Complex Relationship and
Knowledge Discovery Support in the InfoQuilt System.
VLDB Journal, September 25, 2002.
[51]
M. Sintek and S. Decker. TRIPLE---A Query, Inference,
and Transformation Language for the Semantic Web.
International Semantic Web Conference (ISWC), Sardinia,
June 2002. http://www.dfki.uni-kl.de/frodo/triple/
[52]
Tarjan, R. Fast Algorithms for Solving Path Problems. J.
ACM, Vol. 28, No. 3, July 1981, pp. 594--614.
[53]
DQL: DAML Query Language.
http://www.daml.org/2002/08/dql/
[54]
Inkling: RDF query using SquishQL, 2001.
http://swordfish.rdfweb.org/rdfquery/.
[55]
ISO/IEC 13250: 2000 Topic Maps, Jan, 2000.
http://www.topicmaps.org/
[56]
JENA: A Java API for RDF.
[57]
Whitepaper on National Security and Intelligence, Semagix
Inc. 2002.
http://www.semagix.com/pdf/national_security.pdf
699 | AI;analysis;isomorphism;Complex Data Relationships;RDF;Rooted Directed Path;Semantic Associations;automation;graph traversals;semantic association;Semantic Web Querying;relationship;semantic web;query processing;Property Sequence |
81 | Energy Management Schemes for Memory-Resident Database Systems | With the tremendous growth of system memories, memory-resident databases are increasingly becoming important in various domains. Newer memories provide a structured way of storing data in multiple chips, with each chip having a bank of memory modules. Current memory-resident databases are yet to take full advantage of the banked storage system, which offers a lot of room for performance and energy optimizations. In this paper, we identify the implications of a banked memory environment in supporting memory-resident databases, and propose hardware (memory-directed) and software (query-directed) schemes to reduce the energy consumption of queries executed on these databases. Our results show that high-level query-directed schemes (hosted in the query optimizer) better utilize the low-power modes in reducing the energy consumption than the respective hardware schemes (hosted in the memory controller), due to their complete knowledge of query access patterns. We extend this further and propose a query restructuring scheme and a multi-query optimization . Queries are restructured and regrouped based on their table access patterns to maximize the likelihood that data accesses are clustered. This helps increase the inter-access idle times of memory modules, which in turn enables a more effective control of their energy behavior. This heuristic is eventually integrated with our hardware optimizations to achieve maximum savings. Our experimental results show that the memory energy reduces by 90% if query restructuring method is applied along with basic energy optimizations over the unoptimized version. The system-wide performance impact of each scheme is also studied simultaneously. | INTRODUCTION
Memory-resident databases (also called in-memory databases
<A href="81.html#10">[6]) are emerging to be more significant due to the current era of
memory-intensive computing. These databases are used in a wide
range of systems ranging from real-time trading applications to IP
routing. With the growing complexities of embedded systems (like
real-time constraints), use of a commercially developed structured
memory database is becoming very critical [5].
Consequently,
device developers are turning to commercial databases, but existing
embedded DBMS software has not provided the ideal fit.
Embedded databases emerged well over a decade ago to support
business systems, with features including complex caching logic
and abnormal termination recovery. But on a device, within a
set-top box or next-generation fax machine, for example, these
abilities are often unnecessary and cause the application to exceed
available memory and CPU resources. In addition, current
in-memory database support does not consider embedded system
specific issues such as energy consumption.
Memory technology has grown tremendously over the years,
providing larger data storage space at a cheaper cost.
Recent
memory designs have more structured and partitioned layouts
in the form of multiple chips, each having memory banks [30].
Banked memories are energy efficient by design, as per-access
energy consumption decreases with decreasing memory size (and
a memory bank is typically much smaller compared to a large
monolithic memory). In addition, these memory systems provide
low-power operating modes, which can be used for reducing the
energy consumption of a bank when it is not being used.
An
important question regarding the use of these low-power modes
is when to transition to one once an idleness is detected. Another
important question is whether the application can be modified to
take better advantage of these low-power modes.
While these
questions are slowly being addressed in architecture, compiler, and
OS communities, to our knowledge, there has been no prior work
that examines the energy and performance behavior of databases
under a banked memory architecture. Considering increasingly
widespread use of banked memories, such a study can provide
us with valuable information regarding the behavior of databases
under these memories and potential modifications to DBMSs
for energy efficiency. Since such banked systems are also being
employed in high-end server systems, banked memory friendly
database strategies can also be useful in high-end environments to
help reduce energy consumption.
Our detailed energy characterization of a banked memory architecture
that runs a memory-resident DBMS showed that nearly
59% of overall energy (excluding input/output devices) in a typical
query execution is spent in the memory, making this component
an important target for optimization (see Figure 1). Moreover, for
any system, memory power and energy consumption have become
critical design parameters besides cost and performance. Based on
these observations, this paper evaluates the potential energy benefits
that memory-resident database queries can achieve by making
use of banked memory architectures supported with low-power operating
modes. Since each memory bank is capable of operating
independently, this opens up abundant avenues for energy and performance
optimizations.
In this paper, we focus on a banked memory architecture and
study potential energy benefits when database queries are executed.
Specifically, we focus on two important aspects of the problem:
Characterizing energy benefits of banked memories using hardware
and software techniques: To see whether query execution can
make use of available low-power modes, we study both hardware
and software techniques. The hardware techniques detect the idleness
of memory banks and switch the inactive (idle) banks (during
Figure 1: Breakup of the energy consumption for various system
components (Memory 59%, Cache 16%, ALU 14%, Bus 1%, Others 10%).
The results are based on the average energy consumption of TPC-H
benchmarks [35] executed on a memory-resident DBMS.
query execution) to low-power operating modes. We also present
a query-based memory energy optimization strategy, wherein the
query plan is augmented by explicit bank turn-off/on instructions
that transition memory banks into appropriate operating modes during
the course of execution based on the query access pattern. We
experimentally evaluate all the proposed schemes and obtain energy
consumptions using an energy simulator. Our experiments
using TPC-H queries [35] and a set of queries suitable for handheld
devices clearly indicate that both hardware-based and query-directed
strategies save significant memory energy.
Query restructuring for memory energy savings: We propose a
query restructuring scheme and a multi-query optimization strategy
to further increase energy benefits coming from using low-power
operating modes. The idea behind these schemes is to increase
bank inter-access times so that more aggressive low-power modes
can be employed and a memory bank can stay in a low-power mode
longer once it is transitioned. Our experimental evaluation indicates
that this query restructuring strategy does not only reduce
energy consumption, but also helps improve overall performance
(execution cycles).
Apart from providing useful input for database designers, our results
can also be used by hardware designers to tune the behavior
of low-power modes so that they handle query access patterns better
. Similar to the observation that creating a lightweight version
of a disk-based database will not serve as a suitable in-memory
database, our belief is that taking an in-memory database system
and using it on a banked architecture without any modification may
not generate the desired results. Therefore, the results presented in
this work also shed light on how database design and memory architecture
design interact with each other.
The remainder of this paper is organized as follows. Section 2
presents related work. Section 3 elaborates on the memory database
that we built and also on the memory banking scheme that we employ
for our experiments. Section 4 presents in detail the proposed
hardware and query-directed energy optimization techniques. The
results of our energy evaluation of these schemes are discussed in
Section 5. Our experiments also account for the performance overhead
incurred in supporting our schemes. Section 6 presents our
query restructuring and regrouping scheme, and Section 7 discusses
its energy/performance benefits within the context of our banked
memory architecture. Finally, Section 8 summarizes the results.
RELATED WORK
In the past, memory has been redesigned, tuned or optimized
to suit emerging fields. Need for customized memory structures
and allocation strategies form the foundation for such studies.
Copeland et al proposed SafeRAM [11], a modified DRAM model
for safely supporting memory-resident databases alike disk-based
systems, and for achieving good performance. In PicoDBMS [27],
Pucheral et al present techniques for scaling down a database to
a smart card. This work also investigates some of the constraints
involved in mapping a database to an embedded system, especially
memory constraints and the need for a structured data layout.
Anciaux et al [3] explicitly model the lower bound of the memory
space that is needed for query execution. Their work focuses on
light weight devices like personal organizers, sensor networks, and
mobile computers. Boncz et al show how memory accesses form a
major bottleneck during database accesses [7]. In their work, they
also suggest a few remedies to alleviate the memory bottleneck.
An et al analyze the energy behavior of mobile devices when spatial
access methods are used for retrieving memory-resident data
<A href="81.html#10">[2]. They use a cycle accurate simulator to identify the pros and
Figure 2: DBMS architecture (parser, rewrite system, system catalog,
query optimizer hosting the energy and performance optimizations
using the cost plan, query execution engine, and the memory database
targeted by the hardware optimizations; queries enter the system and
data/results flow between the engine and the database).
cons of various indexing schemes. In [1], Alonso et al investigate
the possibility of increasing the effective battery life of mobile
computers by selecting energy efficient query plans through
the optimizer. Although the ultimate goal seems the same, their
cost plan and the optimization criterion are entirely different from
our scheme. Specifically, their emphasis is on a client-server model
optimizing the network throughput and overall energy consumption
. Gruenwald et al propose an energy-efficient transaction management
system for real-time mobile databases in ad-hoc networks
<A href="81.html#10">[16]. They consider an environment of mobile hosts. In <A href="81.html#10">[22],
Madden et al propose TinyDB, an acquisitional query processor
for sensor networks. They provide SQL-like extensions to sensor
networks, and also propose acquisitional techniques that reduce the
power consumption of these networks. It should be noted that the
queries in such a mobile ad-hoc network or a sensor environment
is different from those in a typical DBMS. This has been shown
by Imielinski et al in [19]. In our model, we base our techniques
on a generic banked memory environment and support complex,
memory-intensive typical database operations. There are more opportunities
for energy optimizations in generic memory databases,
which have not yet been studied completely. The approach proposed
in this paper is different from prior energy-aware database
related studies, as we focus on a banked memory architecture, and
use low-power operating modes to save energy.
Gassner et al review some of the key query optimization techniques
required by industrial-strength commercial query optimizers
, using the DB2 family of relational database products as examples
<A href="81.html#10">[15]. This paper provides insight into design of query cost
plans and optimization using various approaches. In <A href="81.html#10">[23], Manegold
studies the performance bottlenecks at the memory hierarchy
level and proposes a detailed cost plan for memory-resident
databases. Our cost plan and optimizer mimics the PostgreSQL
model [12, 14]. We chose it due to its simple cost models and open
source availability.
A query restructuring algorithm is proposed by Hellerstein
in [18]. This algorithm uses predicate migration to optimize
expensive data retrievals. In [10], Chaudhuri et al extend this
approach to study user-defined predicates and also guarantee an
optimal solution for the migration process. Sarawagi et al present
a query restructuring algorithm that reduces the access times of
data retrieval from tertiary databases [32]. Monma et al develop
the series-parallel algorithm for reordering primitive database operations
<A href="81.html#10">[24]. This algorithm optimizes an arbitrarily constrained
stream of primitive operations by isolating independent modules.
This work forms the basic motivation for our query restructuring
algorithm. However, our paper is different from all of the above
work in the sense that we reorder queries for reducing energy
consumption. Moreover, our database is memory-resident, with
the presence of banked memory that gives more freedom for
optimizations.
SYSTEM ARCHITECTURE
For our work, we modified the PostgreSQL DMBS to work with
memory-resident data sets as its workload. The block diagram for
our setup is shown in Figure 2. The core components are derived
from PostgreSQL. The flow of our model is similar to PostgreSQL
except that the database is memory resident. A query is parsed for
syntax and then sent to the rewrite system. The rewrite system uses
the system catalog to generate the query tree, which is then sent to
the optimizer. The query optimizer derives the cost of the query in
Figure 3: Banked memory architecture (a memory controller with
configuration registers and self-monitoring/prediction hardware sits
between the CPU and the memory bus, which connects an array of banks
and modules).
multiple ways using the query tree and issues the best suited plan
to the query execution engine. We incorporate our software-based
techniques at the optimizer stage of the DBMS. These optimizations
are based on the cost that is derived for each of the query plans
(the discussion pertaining to the modified cost model is deferred till
Section 4). Based on the final query execution plan, the execution
engine executes the query by using the database. The database is
entirely memory resident and the memory is organized in a banked
format (elaborated in the following section). The executor recur-sively
iterates the query plan and uses a per-tuple based strategy
(pipelined execution, and not bulk processing) to project the output
results. The proposed hardware optimizations are at the computer
architecture level of the system. Since the base DBMS model is
similar to PostgreSQL, we do not elaborate each component in detail
([26] provides an elaborate discussion). Instead, we highlight
our contributions, and modifications to DBMS (shown in blue in
Figure 2) in the following sections. Overall, our strategies require
modification to the query optimizer, memory hardware, and system
software components.
3.2
Memory Model
We use a memory system that contains a memory array organized
as banks (rows) and modules (columns), as is shown picto-rially
in Figure 3 for a 4 × 4 memory module array. Such banked
systems are already being used in high-end server systems [30] as
well as low-end embedded systems [31]. The proposed optimizations
will, however, apply to most bank-organized memory systems
. Accessing a word of data would require activating the corresponding
modules of the shown architecture. Such an organization
allows one to put the unused banks into a low-power operating
mode. To keep the issue tractable, this paper bases the experimental
results on a sequential database environment and does not consider
a multiprocessing environment (like transaction processing which
requires highly complex properties to be satisfied). We assume in
our experiments that there is just one module in a bank; hence, in
the rest of our discussion, we use the terms "bank" and "module"
interchangeably.
3.3
Operating Modes
We assume the existence of five operating modes for a memory
module: active, standby, nap, power-down, and disabled (see footnote 1).
Each mode is characterized by its energy consumption and the
time that it takes to transition back to the active mode (termed
resynchronization time or resynchronization cost). Typically, the
lower the energy consumption, the higher the resynchronization
time [30]. Figure 4 shows possible transitions between the various
low-power modes (the dynamic energy consumed in a cycle, see footnote 2, is
given for each node) in our model. The resynchronization times
in cycles (based on a cycle time of 3.3ns) are shown along the
arrows (we assume a negligible cost
for transitioning to a lower
power mode). The energy and resynchronization values shown
in this figure have been obtained from the RDRAM memory data
sheet (512MB, 2.5V, 3.3ns cycle time, 8MB modules) [30]. When
a module in standby, nap, or power-down mode is requested to
perform a memory transaction, it first goes to the active mode, and
Footnote 1: Current DRAMs [30] support up to six energy modes of operation,
with a few of them supporting only two modes. One may choose to
vary the number of modes based on the target memory.
Footnote 2: We exclusively concentrate on dynamic power consumption that
arises due to bit switching, and do not consider the static (leakage)
power consumption [28] in this paper.
Figure 4: Available operating modes and their resynchronization
costs (per-cycle dynamic energies: full power 2.063 nJ, standby 0.743 nJ,
nap 0.035 nJ, power-down 0.025 nJ, disabled 0 nJ; resynchronization
costs of 1, 16, and 9000 cycles label the transitions back to active).
then performs the requested transaction. While one could employ
all possible transitions given in Figure 4 (and maybe more), our
query-directed approach only utilizes the transitions shown by
solid arrows. The runtime (hardware-based) approaches, on the
other hand, can exploit two additional transitions: from standby to
nap, and from nap to power-down.
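For reference in the sketches below, the Figure 4 parameters can be encoded as a small table. This is only an illustrative Python encoding, not part of the system; it assumes the 1-, 16-, and 9000-cycle resynchronization costs belong to standby, nap, and power-down, respectively, following the rule stated above that lower energy comes with higher resynchronization time.

    # Per-cycle dynamic energy (nJ) and resynchronization cost (cycles), from Figure 4.
    MODES = {
        "active":     {"energy": 2.063, "resync": 0},
        "standby":    {"energy": 0.743, "resync": 1},
        "nap":        {"energy": 0.035, "resync": 16},
        "power_down": {"energy": 0.025, "resync": 9000},
        "disabled":   {"energy": 0.0,   "resync": None},  # holds no data; never woken up
    }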
3.4
System Support for Power Mode Setting
Typically, several of the memory modules (that are shown in Figure
<A href="81.html#3">3) are controlled by a memory controller which interfaces with
the memory bus. For example, the operating mode setting could
be done by programming a specific control register in each memory
module (as in RDRAM [30]). Next is the issue of how the
memory controller can be told to transition the operating modes
of the individual modules. This is explored in two ways in this
paper: hardware-directed approach and software-directed (query-directed
) approach.
In the hardware-directed approach, there is a Self-Monitoring
and Prediction Hardware block (as shown in Figure 3), which
monitors all ongoing memory transactions. It contains some prediction
hardware (based on the hardware scheme) to estimate the
time until the next access to a memory bank and circuitry to ask the
memory controller to initiate mode transitions (limited amount of
such self-monitored power down is already present in current memory
controllers, for example: Intel 82443BX and Intel 820 Chip
Sets).
In the query-directed approach, the DBMS explicitly requests
the memory controller to issue the control signals for a specific
module's mode transitions. We assume the availability of a set of
configuration registers in the memory controller (see Figure
<A href="81.html#3">3) that are mapped into the address space of the CPU (similar
to the registers in the memory controller in <A href="81.html#10">[20]). These registers
are then made available to the user space (so that the DBMS application
can have a control) through operating system calls.
Regardless of which strategy is used, the main objective of employing
such strategies is to reduce the energy consumption of a
query when some memory banks are idle during the query's execution
. That is, a typical query only accesses a small set of tables
, which corresponds to a small number of banks. The remaining
memory banks can be placed into a low-power operating mode
to save memory energy. However, it is also important to select
the low-power mode to use carefully (when a bank idleness is detected
), as switching to a wrong mode either incurs significant performance
penalties (due to large resynchronization costs) or prevents
us from obtaining maximum potential energy benefits.
Note that energy optimization in our context can be performed
from two angles. First, suitable use of low-power operating modes
can reduce energy consumption of a given query execution. Second
, the query plan can be changed (if it is possible to do so) to further
increase energy benefits. In this work, we explore both these
angles.
POWER MANAGEMENT SCHEMES
In a banked architecture, the memory can be managed through
either of the following two approaches: (1) a runtime approach
wherein the hardware is in full control of operating mode transitions
; and (2) a query-directed scheme wherein explicit bank turn-on/off
instructions are inserted in the query execution plan to invoke
mode transitions. One also has the option of using both the
approaches simultaneously (which we illustrate in later sections).
Figure 5: Dynamic threshold scheme (a module moves from full power
to standby, nap, and power-down after idle_stndby, idle_nap, and
idle_down cycles of idleness, respectively, and pays the corresponding
resynchronization cost when it is referenced again).
4.1
Hardware-Directed Schemes
We explore two hardware-directed approaches that allow the
memory system to automatically transition the idle banks to an
energy conserving state. The problem then is to detect/predict
bank idleness and transition idle banks into appropriate low-power
modes.
4.1.1
Static Standby Scheme
The first approach is a per-access optimization. Most of the recent
DRAMs allow the chips to be put to standby mode immediately
after each reference [30]. After a read/write access, the memory
module that gets accessed can be placed into the standby mode
in the following cycle. We refer to this scheme as the static standby
mode in the rest of our discussion. Note that, while this scheme is
not very difficult to implement, it may lead to frequent
resynchronizations, which can be very harmful as far as execution cycles are
concerned.
4.1.2
Dynamic Threshold Scheme
Our second hardware-guided approach is based on runtime dynamics
of the memory subsystem. The rationale behind this approach
is that if a memory module has not been accessed in a while,
then it is not likely to be needed in the near future (that is, inter-access
times are predicted to be long). A threshold is used to determine
the idleness of a module after which it is transitioned to a
low-power mode. More specifically, we propose a scheme where
each memory module is put into a low-power state with its idle
cycles as the threshold for transition.
The schematic of our dynamic threshold scheme is depicted in
Figure 5. After idle_stndby cycles of idleness, the corresponding
module is put in the standby mode. Subsequently, if the module
is not referenced for another idle_nap cycles, it is transitioned to the
nap mode. Finally, if the module is not referenced for a further
idle_down cycles, it is placed into the power-down mode. Whenever
the module is referenced, it is brought back into the active mode incurring
the corresponding resynchronization costs (based on what
low-power mode it was in). It should be noted that even if a single
bank experiences a resynchronization cost, the other banks will also
incur the corresponding delay (to ensure correct execution). Implementing
the dynamic mechanism requires a set of counters (one for
each bank) that are decremented at each cycle, and set to a threshold
value whenever they expire or the module is accessed. A zero
detector for a counter initiates the memory controller to transmit
the instructions for mode transition to the memory modules.
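The per-bank predictor can be sketched as follows (Python). The thresholds are the defaults later given in Section 5.1.3; the actual hardware uses a decrementing counter with a zero detector, but an incrementing idle counter expresses the same policy and is easier to read.

    # Sketch of the dynamic threshold scheme for a single memory bank.
    IDLE_STNDBY, IDLE_NAP, IDLE_DOWN = 10, 100, 10_000   # threshold cycles

    class BankPredictor:
        def __init__(self):
            self.idle = 0                  # cycles since the last reference to this bank

        def tick(self, accessed):
            """Advance one cycle and return the mode the controller should request."""
            if accessed:
                self.idle = 0              # a reference resynchronizes the bank to active
                return "active"
            self.idle += 1
            if self.idle >= IDLE_STNDBY + IDLE_NAP + IDLE_DOWN:
                return "power_down"
            if self.idle >= IDLE_STNDBY + IDLE_NAP:
                return "nap"
            if self.idle >= IDLE_STNDBY:
                return "standby"
            return "active"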
4.2
Software-Directed Scheme
It is to be noted that a hardware-directed scheme works well
independent of the DBMS and the query optimizer used. This is
because the idleness predictors are attached to the memory banks
and monitor idleness from the perspective of banks. In contrast,
a query-directed scheme gives the task of enforcing mode transitions
to the query. This is possible because the query optimizer,
once it generates the execution plan, has a complete information
about the query access patterns (i.e., which tables will be accessed
and in what order, etc). Consequently, if the optimizer also knows
the table-to-bank mappings, it can have a very good idea about the
bank access patterns. Then, using this information, it can proac-tively
transition memory banks to different modes. In this section,
we elaborate on each step in the particular query-directed approach
that we implemented, which includes customized bank allocation,
query analysis, and insertion of bank turn-on/off (for explicit power
mode control) instructions.
4.2.1
Bank Allocation
In the case of software-directed scheme, the table allocation is
handled by the DBMS. Specifically, the DBMS allocates the newly-created
tables to the banks, and keeps track of the table-to-bank
mappings. When a "create table" operation is issued, the DBMS
first checks for free space. If there is sufficient free space available
in a single bank, the table is allocated from that bank. If a bank is
not able to accommodate the entire table, the table is split across
multiple banks. Also, while creating a new table, the DBMS tries
to reuse the already occupied banks to the highest extent possible;
that is, it does not activate a new bank unless it is necessary. Note
that the unactivated (unused) banks (i.e., the banks that do not hold
any data) can remain in the disabled mode throughout the execution.
However, it also tries not to split tables excessively. In more
detail, when it considers an already occupied bank for a new table
allocation, the table boundaries are checked first using the available
space in that bank. If a bank is more than two-thirds full with the
table data, the rest of the bank is padded with empty bits and the
new table is created using pages from a new bank. Otherwise, the
table is created beginning in the same bank. Irrespective of whether
the table is created on a new bank or not, the DBMS creates a new
table-to-bank mapping entry after each table creation.
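A minimal sketch of this allocation policy is given below (Python). The bank_free list, the table_to_bank map, and the two-thirds test are simplified stand-ins for the DBMS bookkeeping described above, not the actual implementation.

    BANK_SIZE = 8 * 1024 * 1024   # 8MB banks, as in the default configuration

    def allocate_table(table, size, bank_free, table_to_bank):
        """Allocate `size` bytes for `table`, reusing an occupied bank when possible."""
        banks = []
        partial = [b for b, free in enumerate(bank_free) if 0 < free < BANK_SIZE]
        if partial and bank_free[partial[0]] <= BANK_SIZE // 3:
            # The candidate bank is more than two-thirds full: pad the remainder with
            # empty bits and start the new table in a completely free bank instead.
            bank_free[partial[0]] = 0
            partial = []
        empty = [b for b, free in enumerate(bank_free) if free == BANK_SIZE]
        for b in partial[:1] + empty:            # reuse at most one occupied bank
            if size <= 0:
                break
            used = min(size, bank_free[b])       # split the table across banks if needed
            bank_free[b] -= used
            size -= used
            banks.append(b)
        table_to_bank[table] = banks             # mapping consulted by the query optimizer
        return banks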
In hardware-directed schemes, we avoid these complexities involved
in bank allocation as we assume that there is absolutely no
software control. Consequently, in the hardware-directed schemes,
we use the sequential first touch placement policy. This policy allocates
new pages sequentially in a single bank until it gets completely
filled, before moving on to the next bank. Also, the table-to
-bank mapping is not stored within the DBMS since the mode
control mechanism is handled by the hardware.
4.2.2
Estimating Idleness and Selecting the Appropriate Low-Power Mode
It should be emphasized that the main objective of our query-directed
scheme is to identify bank idleness. As explained above,
in order to achieve this, it needs table-to-bank mapping. However
, this is not sufficient as it also needs to know when each table
will be accessed and how long an access will take (i.e., the
query access pattern). To estimate this, we need to estimate the duration
of accesses to each table, which means estimating the time
taken by the database operations. Fortunately, the current DBMSs
already maintain such estimates for query optimization purposes
<A href="81.html#10">[12, 15, 29, 33, 34]. More specifically, given a query, the optimizer
looks at the query access pattern using the generated query plan.
The inter-access times are calculated using the query plan. A query
plan elucidates the operations within a query and also the order in
which these operations access the various tables in the database.
Even in current databases, the query plan generator estimates access
costs using query plans [12]. We use the same access cost estimation
methodology. These access costs are measured in terms of
page (block) fetches. In our memory-resident database case, a page
is basically the block that is brought from memory to the cache. For
instance, the cost of sequential scan is defined as follows (taken
from [12]):
Cost_seq_scan = N_blocks + CPU × N_tuples
Here, N_blocks is the number of data blocks retrieved, N_tuples is the
number of output tuples, and CPU is the fudge factor that adjusts
the system tuple-read speed with the actual memory hierarchy data-retrieval
speed. Usually, optimizers use the above cost metric to
choose between multiple query plan options before issuing a query.
We attach a cost to each page (block) read/write operation to obtain
an estimate of the access cost (time) in terms of execution cycles.
For instance, the above scan operation is modified as follows:
Cost_block_fetch = T cycles
Cost_seq_scan = N_blocks × T + CPU × (N_tuples / N_block_tuples) × T
In these expressions, T is the delay in cycles to fetch a block from
the memory. Thus, our cost plan is projected in terms of access
cycles. We extend this to other database operations like JOIN and
AGGREGATE based on the cost models defined in [14, 12].
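As a concrete illustration, the cycle-projected scan cost can be computed as in the sketch below (Python). The names are placeholders, and the per-tuple term follows our reading of the formula above, in which the CPU fudge factor is scaled to cycles by the fraction of a block-fetch time that one tuple represents.

    def seq_scan_cycles(n_blocks, n_tuples, tuples_per_block, T, cpu_fudge):
        """Sequential-scan access cost projected in execution cycles (cf. Cost_seq_scan)."""
        block_fetches = n_blocks * T                               # N_blocks fetches, T cycles each
        per_tuple = cpu_fudge * (n_tuples / tuples_per_block) * T  # CPU fudge-factor adjustment
        return block_fetches + per_tuple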
Given a query, we break down each operation within the plan (including
sub-plans) and estimate the access cost (in cycles) for each
Figure 6: Example application of the query-directed scheme.
(i) The original execution plan. (ii) The augmented execution plan.
(P1 and P2 mark the points referred to in the text: P1 follows the
first scan of A, and P2 is where A is reactivated.)
(i):
-> scan A (9000 cycles)
-> aggregate (20 cycles)
-> scan B (9000 cycles)
-> scan A (9000 cycles)
(ii):
-> scan A
-> Put A=ON (B is already OFF)
-> aggregate
-> Put B=OFF
-> scan B
-> Put B=ON
-> Put A=OFF
-> scan A
-> Put A=ON
primitive operation. Our objective in estimating the per-operation
time in cycles is to eventually identify the inter-access times of operations
in the query (and hence, to put the banks that hold unused
tables to low-power modes). There are table accesses associated
with each operation, and bank inter-access times depend on the table
inter-access times. A query has information of the tables that
it accesses. Thus, knowing the inter-access time for each operation
leads to the inter-access times for each table as well. A table is
mapped to certain banks, and the table-to-bank mapping is available
in the query optimizer.
Consequently, if the table inter-access time is T and the resynchronization
time is T_p (assuming T_p < T), then the optimizer can transition the
associated modules into a low-power mode (with a unit-time energy of E_p)
for the initial T - T_p period (which would consume a total of
[T - T_p] × E_p energy), and activate the module to bring it back to the
active mode at the end of this period, following which the module
resynchronizes before it is accessed again (consuming T_p × E_a energy
during the transition, assuming that E_a is the unit-time energy for the
active mode as well as during the transition period). As a result, the
total energy consumption with this transitioning is
[T - T_p] × E_p + T_p × E_a without any resynchronization overheads,
while the consumption would have been T × E_a if there had been no
transitioning (note that this calculation considers only the idle period).
The DBMS optimizer evaluates all possible choices (low-power modes) based
on the corresponding per-cycle energy costs, resynchronization times, and
the table inter-access time to pick the best choice.
modes for different idle periods of the same module depending on
the duration of each idle period. Specifically, we use the most energy
saving low-power mode without increasing the original query
execution time (i.e., when the original idleness is over, the module
should be up in the active mode ready for the operation).
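A minimal sketch of this selection step is shown below (Python), reusing the MODES table sketched in Section 3.3; it is an illustration rather than the optimizer's actual code. For a predicted idle period of T cycles it compares [T - T_p] × E_p + T_p × E_a against T × E_a for every mode whose resynchronization fits inside the idle period, and returns the cheapest choice.

    def best_mode(T, modes=MODES):
        """Pick the low-power mode that minimizes energy over an idle period of T cycles."""
        E_a = modes["active"]["energy"]
        best, best_energy = "active", T * E_a          # energy if no transition is made
        for name, m in modes.items():
            if name in ("active", "disabled"):
                continue                               # disabled banks hold no data
            T_p, E_p = m["resync"], m["energy"]
            if T_p >= T:
                continue                               # wake-up would not fit in the idle period
            energy = (T - T_p) * E_p + T_p * E_a       # low-power stretch + preactivation
            if energy < best_energy:
                best, best_energy = name, energy
        return best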
4.2.3
Inserting Bank-On/Off Instructions
The last part of the software-directed scheme is to insert explicit
(operating) mode transitioning instructions in the final query execution
plan. For this, we introduce place-markers (mapped to system
calls) which are interpreted at the low-level (interpreted later by
our memory controller, which actually sets the corresponding low-power
modes). This is done so that the query execution engine can
issue the query without much performance overhead, and with the
same transparency.
As an example, consider the following. Let tables A and B each
have 1000 records, each record being 64 bytes. Consider the query
plan depicted in Figure 6(i), taken from PostgreSQL. The query
plan reads from bottom to top (P2 follows P1). A scan of table
A is done first, followed by a scan of table B. The result of these
operations are then used by an aggregate operation. Another (independent
) scan operation on table A follows the aggregate operation.
The per step access costs are also shown. From the generated query
plan, it is evident that table A is not accessed between point P1 and
point P2. Once the results are extracted after the scan at point P1,
the banks that hold table A can be put to a low-power mode, and the
banks that hold table B can be activated for data extraction. This is
illustrated in Figure 6(ii) using place-markers for tables A and B.
Banks holding Table A are reactivated at point P2 (banks of Table
B remain off).
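The marker-insertion step can be sketched as follows (Python). The plan is assumed to be flattened into an execution-ordered list of (operation, tables, cost-in-cycles) entries, with the aggregate treated as touching no base table, and the markers name tables rather than banks; the table-to-bank mapping would translate them as described above. Running this on the Figure 6 plan with any gap threshold below 9020 cycles reproduces the augmented plan of Figure 6(ii).

    def place_markers(plan, gap_threshold):
        """Insert Put <table>=ON/OFF place-markers around long idle gaps (sketch)."""
        markers = {i: [] for i in range(len(plan) + 1)}
        tables = {t for _, ts, _ in plan for t in ts}
        for t in sorted(tables):
            uses = [i for i, (_, ts, _) in enumerate(plan) if t in ts]
            markers[uses[0]].append(f"Put {t}=ON")             # activate before the first use
            for prev, nxt in zip(uses, uses[1:]):
                gap = sum(c for _, _, c in plan[prev + 1:nxt]) # cycles between consecutive uses
                if gap > gap_threshold:                        # long enough to pay off a transition
                    markers[prev + 1].append(f"Put {t}=OFF")
                    markers[nxt].append(f"Put {t}=ON")
            if uses[-1] + 1 < len(plan):
                markers[uses[-1] + 1].append(f"Put {t}=OFF")   # not needed for the rest of the query
        out = []
        for i, (op, _, _) in enumerate(plan):
            out.extend(markers[i])
            out.append(op)
        out.extend(markers[len(plan)])
        return out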
EXPERIMENTAL EVALUATION OF HARDWARE-DIRECTED AND QUERY-DIRECTED SCHEMES
In this section, we study the potential energy benefits of our hardware
and software-directed schemes. We first explain the experimental
setup that we used in our simulations. Then, the set of
queries that we used to study our schemes is introduced. After that,
we present energy consumption results. While we discuss the energy
benefits of using our schemes, we also elaborate the overheads
associated with supporting each of our schemes.
5.1
Setup
5.1.1
Simulation Environment
As mentioned before, the query-directed schemes are implemented
in the query optimizer of the memory database model
elaborated in Section 3.1. We interface this DBMS to an enhanced
version of the SimpleScalar/Arm simulator [4] to form a complete
database system.
The intermediate interface (invoked by
DBMS) provides a set of operating system calls (on Linux kernel
2.4.25), which in turn invokes the SimpleScalar simulator. The
SimpleScalar simulator models a modern microprocessor with a
five-stage pipeline: fetch, decode, issue, write-back, and commit.
We implemented our hardware techniques within the framework of
the sim-outorder tool from the SimpleScalar suite, extended with
the ARM-ISA support [4]. Specifically, we modeled a processor
architecture similar to that of Intel StrongARM SA-1100. The
modeled architecture has a 16KB direct-mapped instruction cache
and a 8KB direct-mapped data cache (each of 32 byte-length). We
also model a 32-entry full associative TLB with a 30-cycle miss
latency. The off-chip bus is 32 bit-wide. For estimating the power
consumption (and hence, the energy consumption), we use the
Wattch simulator from Princeton University [8].
Our banked memory model is based on [13, 21], as shown in Figure
4. We use values from Figure 4 for modeling the delay (transition
cycles) in activation and resynchronization of various power-states
. Our simulations account for all performance and energy
overheads incurred by our schemes. In particular, the energy numbers
we present include the energy spent in maintaining the idleness
predictors (in the hardware-directed scheme) and the energy spent
in maintaining the table-to-bank mappings (in the query-directed
scheme), and in fetching and executing the bank turn-on/off instructions
(in the query-directed scheme). The predictors were implemented
using decrementing counters (equal to the number of
banks) and zero detector based on the discussion in Section 4.1.
The predictors are synchronized with the system cycles to maintain
consistency of operation, and to minimize the overheads. The
query optimizer maintains the table-bank mappings, which is modeled
as an array list for instant access. The bank turn-on/off instructions
are executed by setting hardware registers, and hence,
these instructions are modeled as register operations using the existing
instruction set architecture. We present two important statistics
in our experimental results. Energy consumption corresponds
to the energy consumed in the memory system (including the above
mentioned overheads). We also present statistics about the performance
overhead (i.e., increase in execution cycles) for each of our
schemes. This overhead includes the cycles spent in resynchronization
(penalty cycles are modeled based on values in Figure 4)
as well as the cycles spent (in the CPU datapath) in fetching and executing
the turn-on/off instructions (in the query-directed scheme).
5.1.2
Queries
To evaluate our scheme for memory-resident databases, we
considered two classes of queries. The first class is a subset of
queries from the Transaction Processing Council (TPC-H) benchmark
<A href="81.html#10">[35]. TPC-H involves complex queries with a large amount of
data accesses. Operations in decision support benchmarks (TPC-D
evolved to TPC-H) have good spatial locality with abundant data
intensive operations [9]. This assists us to perform a rigorous test
of our schemes. The top part of Table 1 gives details of the TPC-H
queries we used and the corresponding database parameters. The
selected operations represent a good mix and could be used to
build a variety of complicated queries.
Table 1: The two classes of queries considered for our experiments.
Source: TPC-H (PART, CUSTOMER, ORDERS, and LINEITEM tables generated using DBGEN with scale 1.0)
  Q6: Simple query
  Q3: Complex query involving JOIN
  Q4: Complex query involving NEST
  Q17: Complex query involving JOIN and NEST
Source: Queries targeting a simple organizer (ADDRESSBOOK populated with 1.3 million entries, 50% subset of FRIENDS and 25% subset of COLLEAGUES)
  P1: Simple name and address lookup
  P2: Lookup in directory of friends
  P3: Lookup in directory of colleagues and friends
Memory-resident databases run queries that are different from
the typical database queries as seen in TPC-H. The second set of
queries that we consider are representative of applications that execute
on handheld devices. The typical operations that are performed
on an organizer were imitated on our setup (we name the
queries P1, P2, P3). The first query involves a simple address
lookup using a `NAME' as input. The SQL for query P1 is shown
on the left section of Table 2. Recent organizers [17, 25] provide an
ordered view of the underlying addressbook database. For instance,
organizers provide the creation of folders. A "friends" folder can
be a collection of personnel with a tag set as "friend" in the addressbook
. We defined folder as a restrained/customized view of
the same database (address book). Intuitively, query P2 strives to
do a lookup of friends living in a particular city. The "friends"
view and hence the query P2 is defined on the right section of Table
<A href="81.html#6">2. Query P3 combines views (folders). For this we defined a
new folder called "colleagues". P3 aims to find friends and/or colleagues
whose names start with an `a', living in a particular `CITY'.
Since P3 is very similar to P2 with some extra fields, we do not
present the SQL for P3. The intermediate tables and results during
query execution are also stored in the memory.
5.1.3
Default Parameters
For our experiments, we populate our database using the DBGEN
software from TPC-H benchmark suite with a scale factor of 1.0.
Our organizer database is populated with 1.3 million records.
For the dynamic threshold scheme, we use 10, 100, and 10,000 cycles
as idle_stndby, idle_nap, and idle_down, respectively. For all schemes,
the banks are in power-down mode before their first access. On/Off
instructions are inserted based on the inter-access times of tables.
We use the same idle_stndby, idle_nap, and idle_down cycle values for
inserting instructions. As an example, consider a table inter-access
time (T) of 25 cycles, which lies between 10 (idle_stndby) and 100
(idle_nap) cycles. We insert an On/Off instruction at the beginning
of T to put the table's banks into standby mode for 24 cycles, taking into
consideration the resynchronization period of 1 cycle as well. A similar
technique is applied for inter-access times that fall between other
power modes.
A single page transfer time is needed for access cost calculation
in software-directed scheme. We derive this by executing the TPC-H
queries on the SimpleScalar simulator (with the SA-1100 model)
and by studying the cycle times for transferring a data block from
memory to the cache. For all experiments, the default configuration
is the 512MB RDRAM memory with 8MB banks. In the following
section, we study the energy implications of our hardware and software
schemes using this setup. We then present the performance
overheads.
5.2
Query Energy Evaluation
Figure 7 shows the normalized memory energy consumption for
our hardware-directed schemes. While presenting our results, we
normalize all values with respect to the base case, which is the version
with no query optimizations. "Static Standby" in Figure 7 indicates
the static standby scheme. We see that, by simply putting the
modules to standby mode after each access, this scheme is able to
achieve an average 55% reduction in memory energy consumption
of TPC-H queries when compared to the unoptimized case. The
energy improvements are less pronounced in the case of handheld
Table 2: SQL for organizer queries
Query P1:
    SELECT a_name, a_address, a_office_phone, a_home_phone,
           a_mobile_phone, a_email, a_web, a_specialnotes
    FROM addressbook
    WHERE a_name = '[NAME]';
Query P2 (defined over the "friends" view):
    CREATE VIEW friends AS
        SELECT a_name, a_address, a_city, a_home_phone, a_mobile_phone
        FROM addressbook
        WHERE a_tag = '[FRIEND]'
        GROUP BY a_name;
    P2:
    SELECT a_name, a_address, a_city, a_home_phone, a_mobile_phone
    FROM friends
    WHERE a_city = '[CITY]'
    GROUP BY a_name;
queries (37% reduction on the average). This is mainly because
of the different number of tables manipulated by these two types
of queries. In the TPC-H case, multiple tables are scattered across
various banks and hence, there is a potential of placing more memory
banks into low-power modes. In the case of handheld queries,
there is just one table scattered across multiple banks, which makes
putting modules to a low-power mode more difficult as modules are
tightly connected, as far as query access patterns are concerned. We
also observe from Figure 7 that the dynamic threshold scheme further
extends these improvements through its ability to put a bank
into any of the possible low-power modes. On an average, there is
a 60% (43%) energy improvement in TPC-H (handheld) queries.
Figure 7 also shows the normalized energy behavior of our
query-directed scheme (denoted On/Off Instr). It is evident that
this scheme outperforms the best hardware-directed scheme (by
an average of 10%) in saving the memory energy consumption.
This is because of two main reasons. First, when a bank idleness is
estimated, the query-directed scheme has a very good idea about
its length (duration). Therefore, it has a potential of choosing the
most appropriate low-power mode for a given idleness. Second,
based on its idleness estimate, it can also preactivate the bank. This
Figure 7: Energy consumption (normalized energy) of the hardware- and
software-directed modes (Static Standby, Dynamic Threshold, On/Off Instr)
for queries Q6, Q3, Q4, Q17, P1, P2, and P3. The values shown are
normalized to the version with no energy optimizations.
eliminates the time and energy that would otherwise have been spent
in resynchronization. Consequently, the average memory energy
consumption of the query-directed scheme is just 32% of the unoptimized
version for TPC-H queries, and 44% in case of organizer
(handheld) queries [i.e., an additional 8% (13%) improvement over
the hardware schemes for TPC-H (handheld) queries].
5.3
Performance Overhead Analysis
Our techniques are very effective in reducing the memory energy
consumption. As mentioned earlier, transitions from the low-power
modes to the active mode come with an overhead of resynchronization
(in terms of both performance and energy). The energy values
reported in previous section take into consideration the extra energy
needed to activate the modules as well. In this part, we quantify
the basic performance overheads that are faced in supporting our
schemes.
Figure 8 shows the performance overheads for both the hardware
and software-directed schemes. The static standby scheme has the
maximum overhead, which is expected. This is especially the case
when queries generate frequent memory accesses. The memory is
brought down to the standby mode after each access, and is resynchronized
in another access that follows immediately. As a result,
the performance worsens by as much as 28% for the static standby case.
On the other hand, for the dynamic threshold scheme, the performance
overhead is slightly better since the banks are not blindly
put to a low-power mode after each access. This verifies our prediction
that when a module goes to low-power mode, it would either
remain for a while in that mode or may even be transitioned
into a lower power mode. The query-directed scheme has the least
overhead (<2%). The main reason for this is the ability of pre-activating
a bank before it is actually accessed. Therefore, considering
both performance and energy results, one may conclude
that the query-directed scheme is better than the hardware-directed
schemes. However, it is also to be noted that the query-directed
scheme requires access to the query optimizer. In comparison,
the hardware-based schemes can work with any query optimizer.
Therefore, they might be better candidates when it is not possible/profitable
to modify the query plan.
QUERY RESTRUCTURING
The approaches presented above mainly try to optimize energy
consumption without modifying the queries themselves (except
maybe for the query-directed scheme where we insert turn on/off
instructions in the query plan). In this section, we go one step
further, and demonstrate that even larger energy savings are possible
if one has the flexibility of reorganizing query operations. We
show how this can be achieved in the context of both individual
queries and multiple queries (optimized simultaneously). Our main
objective in restructuring queries is to increase memory bank inter-access
times. Note that when bank inter-access time is increased,
we can either remain in a given low-power operating mode longer,
thereby feeling the potential impact of resynchronization less (i.e.,
amortizing the cost of resynchronization); or we can switch to
a more energy saving mode (as we now have a longer idleness),
which means more energy savings. We present different query
restructuring strategies for achieving this.
When considering a single query, the bank inter-access times can
be increased by re-ordering query operations. On the other hand,
the primary goal of the heuristic that targets multiple queries is to cluster the usage of tables from multiple queries together, so that the overall table accesses are localized. That is, assuming that we have multiple queries to optimize, our objective is to interleave these query executions in such a way that the reuse of individual tables (or of table portions) is maximized. In other words, when a table is accessed, we want to execute all other query operations (potentially coming from different queries) on that table (one
after another), before we move to the next table. This also tends
to cluster accesses to the same bank, and tends to increase the
bank inter-access times (which is very important from an energy
perspective as explained above). In the following, we first study
intra-query restructuring and then inter-query restructuring. After
these two steps, bank turn-on/off instructions are inserted at the
relevant points, depending on the bank access patterns.
Figure 8: The performance overhead involved in supporting our schemes. There is an average overhead of 15%, 8%, and 2% for the standby, dynamic, and on/off schemes, respectively, over the unoptimized version.
Step 1 (intra-query optimization):
A query is first examined to see
if there are any potential reuse regions. If there are any reusable
regions, their accesses are grouped together.
We achieve this by examining the query execution plan. The query
plan is studied to see if there are any advantages in rearranging
the operations (primitives) in a query based on its table usage.
Operations that require the same (set of) table(s) are then grouped
together (i.e., they are scheduled to be executed one after another).
The detailed procedure is shown in Figure 9. Each operation in the
query plan is first scanned and placed into a table group based on
the table(s) that it accesses. Then, the operations are rearranged
in the query plan (taking into account the dependencies between
them) based on their corresponding table groups. For this, we look
at the query plan tree. The path from each leaf node to the root,
called stream, is investigated. The ultimate goal is to schedule
operations (nodes in the plan tree) based on their table groups.
We try to schedule operations within one table group (which is
currently active) before scheduling the operations from another
table group (which is not active) in an attempt to increase the bank
inter-access times. That is, a stream is traversed from bottom to
top, and each node within the stream is put to the schedule queue
(as they are encountered) based on its table group. It should be
emphasized that we preserve the original semantics of the operations
(constraints) in the algorithm. This procedure is repeated for each stream in the tree until all streams have the most energy-efficient schedule based on their table accesses. At the end of this step, an energy-aware schedule queue is generated for the considered query (saved in schedule_list).
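The flavour of this grouping can be captured in a few lines of Python; the sketch below keeps only the table_group bookkeeping and the preference for staying on the currently active table, takes the plan as a flat list of (operation, tables, dependencies) records rather than the plan tree walked by the procedure of Figure 9, and assumes the dependencies are acyclic. The dependency sets in the small example are assumptions made for illustration.

def intra_query_schedule(ops):
    # ops: list of (name, tables_touched, names_it_depends_on)
    remaining = {name: (set(tables), set(deps)) for name, tables, deps in ops}
    done, schedule, active = set(), [], set()
    while remaining:
        ready = [n for n, (_, deps) in remaining.items() if deps <= done]
        # prefer a ready operation that touches the currently active table(s)
        nxt = next((n for n in ready if active & remaining[n][0]), ready[0])
        active, _ = remaining.pop(nxt)
        done.add(nxt)
        schedule.append(nxt)
    return schedule

q1 = [("scan A", ["A"], []),
      ("scan B", ["B"], []),
      ("function(A)", ["A"], []),
      ("hash join", ["A", "B"], ["scan A", "scan B"])]
print(intra_query_schedule(q1))
# ['scan A', 'function(A)', 'scan B', 'hash join'] -- accesses to A grouped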
Step 2 (inter-query optimization):
Tables are examined to optimize
multiple queries simultaneously. For each table that is accessed,
all accesses arising from multiple queries to the particular table
are grouped together.
In this step, the schedule lists from multiple queries are grouped together. Each list is scanned to identify nodes that access a given table. The nodes that access the same table are then scheduled to execute together (without disturbing the dependency constraints). In fact, the nodes from multiple queries are just grouped (combined), not reordered. Thus, in this step, the constraint flow of each schedule list (taken care of in Step 1) is automatically maintained. Additional constraint flow checks could be enforced at this stage if desired. Figure 10 shows the regrouping procedure. final_schedule_list stores the final consolidated schedule of operations from all the queries.
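A compressed Python sketch of this regrouping is given below. It interleaves the per-query schedule lists so that operations on the same table run back to back and tags pulled-in operations as complete, much as Figure 10 does, but for brevity it omits the additional constraint checks of the full procedure. The operation names follow the running example; treating the join and the aggregate as touching no base table directly is an assumption made for illustration.

def group_multiple_queries(schedules):
    # schedules: one list of (operation, table) pairs per query, each already
    # ordered by the intra-query step
    final, done = [], set()
    for qi, sched in enumerate(schedules):
        for op, table in sched:
            if (qi, op) in done:
                continue
            final.append((qi, op))
            done.add((qi, op))
            if table is None:
                continue
            # pull not-yet-scheduled operations on the same table from the
            # other queries right behind this one
            for qj, other in enumerate(schedules):
                if qj == qi:
                    continue
                for op2, table2 in other:
                    if table2 == table and (qj, op2) not in done:
                        final.append((qj, op2))
                        done.add((qj, op2))
    return final

q1 = [("scan A", "A"), ("function(A)", "A"), ("scan B", "B"), ("hash join", None)]
q2 = [("scan A", "A"), ("scan B", "B"), ("aggregate", None)]
for step in group_multiple_queries([q1, q2]):
    print(step)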
Step 3 (energy optimizations):
Include energy optimizations by inserting On/Off instructions into the final_schedule_list.
In this step, the access costs are calculated for each operation in the final_schedule_list as shown in Section 4.2.2. Each operation is tagged with an access cost, and the turn-on/off instructions are inserted based on the table inter-access times. The methodology used for adding these instructions to the final_schedule_list is the same as in Section 4.2.2, and the on/off markers are placed as elaborated in Section 4.2.3.
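A rough Python sketch of the marker insertion follows; the access-cost model is collapsed to "estimated cycles until the table is touched again", and the break-even threshold is an assumed constant standing in for the calibrated cost model of Section 4.2.2. Operation names and cycle counts loosely echo the running example.

BREAK_EVEN = 5000   # assumed: idle cycles needed before a turn-off pays for itself

def insert_on_off(schedule):
    # schedule entries: (operation, table, estimated_cost_in_cycles);
    # table is None for operations that touch no base table directly
    out, off = [], set()
    for i, (op, table, _) in enumerate(schedule):
        if table in off:                        # pre-activate before the access
            out.append("Put %s=ON" % table)
            off.discard(table)
        out.append(op)
        if table is None:
            continue
        # estimated idle cycles before this table is needed again
        idle, reused = 0, False
        for _, table2, cost2 in schedule[i + 1:]:
            if table2 == table:
                reused = True
                break
            idle += cost2
        if not reused or idle > BREAK_EVEN:
            out.append("Put %s=OFF" % table)
            off.add(table)
    return out

merged = [("scan A (Q1)", "A", 9000), ("scan A (Q2)", "A", 9000),
          ("function(A) (Q1)", "A", 9000), ("scan B (Q1)", "B", 9000),
          ("scan B (Q2)", "B", 9000), ("hash join (Q1)", None, 100),
          ("aggregate (Q2)", None, 20)]
for line in insert_on_off(merged):
    print(line)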
As an example, consider two queries Q1 and Q2.
table_group is a table-to-operations mapping list.
schedule_list stores the final schedule of operations.
/* identify the group to which an
* operation belongs */
operation_rearrangement() {
for (each operation in query i) {
identify the table(s) in i;
for (each table j in i) {
add operation to table_group[j];
}
}
schedule_operations();
}
/* schedule operations */
schedule_operations() {
schedule_list = empty;
do {
for (each stream in query plan tree) {
start from leaf node;
for (each node in stream) {
identify its constraint nodes that follow;
/* the rest are independent nodes */
group(constraint nodes);
group(independent nodes);
check for new violations;
add new constraints if necessary;
save the schedule_list;
move up a node in the stream;
}
move to the next stream;
}
} until no more changes
}
/* group nodes */
group(node_list) {
if(node_list is constraint node list)
{
for (each node in node_list) {
lookup table_group of node;
add node to schedule_list based on table_group;
/* preserve the dependency order */
preserve flow of node_list in schedule_list;
}
}
else
{ /* set of independent nodes */
add node to schedule_list based on table_group;
/* no need to preserve constraint flow */
regroup to put all table_group nodes together;
  }
}
Figure 9: Reorganizing operations within a query to optimize
for energy (Step 1). The query tree is investigated from bottom to top, grouping operations based on their table accesses.
group_multiple_queries {
for (each schedule_list) {
do {
pick an unscheduled node i in schedule_list;
/* i.e. pick a node without a "complete" tag */
for (other schedule_lists) {
if (node j has same table_group as node i) {
schedule node j after node i;
mark node j as "complete";
/* with respect to multi-query schedule */
}
}
} until all nodes in schedule_list are "complete"
}
}
Towards the end of the procedure, final_schedule_list stores the entire "complete" schedule.
Figure 10: Grouping schedule lists from multiple queries (Step
2). Operations from multiple queries are grouped based on
their table accesses using their corresponding schedule lists.
Their original query plan is shown in Figure 11(i). Q1 is revised as the table accesses are optimizable. Figure 11(ii) shows the result after applying Step 1. Step 2 results in the output depicted in Figure 11(iii). Finally, in Step 3, we insert on/off instructions in appropriate places (see Figure 11(iv)).
EXPERIMENTAL EVALUATION OF QUERY RESTRUCTURING
In this section, we evaluate our query restructuring approach
by extending our database and queries discussed in Section 5.1.2.
As before, our focus is on memory energy consumption. We also
study the impact of the technique on the overall performance.
Towards the end, other alternative options are also elaborated.
7.1 Multi-Query Setup
Since simultaneous processing of multiple queries is needed to validate our approach, we considered combinations of queries, which we term scenarios in the rest of this paper. Among the queries considered in Section 5.1.2, there can be multiple combinations of queries that arrive sequentially and that are optimizable using our technique. The various combinations (scenarios) of organizer queries and their naming scheme are shown in Table 3. For instance, P12 indicates that P1 is processed sequentially along with P2. The combination scenarios for TPC-H queries are shown in Table 4. The combinations shown in these tables are the prominent ones; the behavior of other combinations is very similar, hence they are not included in this paper.
Table 3: Scenarios for organizer queries.
Type                       Legend   Combination
Two query combinations     P11      P1 + P1
                           P12      P1 + P2
                           P23      P2 + P3
Three query combination    P123     P1 + P2 + P3

Table 4: Scenarios for TPC-H queries.
Type                       Legend   Combination
Two query combinations     S11      Q6 + Q6
                           S12      Q6 + Q3
                           S13      Q6 + Q4
                           S14      Q6 + Q17
                           S23      Q3 + Q4
                           S24      Q3 + Q17
                           S34      Q4 + Q17
Three query combinations   S222     Q3 + Q3 + Q3
                           S123     Q6 + Q3 + Q4
Four query combinations    S1111    Q6 + Q6 + Q6 + Q6
                           S1234    Q6 + Q3 + Q4 + Q17
7.2 Query Energy Evaluation
In this section, we evaluate the query energy of the various scenarios
we presented in the previous section. We first study the improvements
obtained from our query restructuring heuristic, and
further extend our study to combine query restructuring with various
hardware and software-directed schemes (of Section 4) meant
to improve the energy consumption.
Figure 12 shows the sole contribution of the query restructuring scheme to improving the energy consumption. The energy is reduced by an average of 55% relative to the unoptimized version when our query restructuring scheme is used. By just grouping similar accesses (to ensure data reuse), query restructuring can achieve a significant reduction in the energy consumption of multiple queries.
In order to identify the benefits coming solely from Step 1 (intra-query optimization) in our query restructuring scheme, we also combined Step 1 and Step 3, and compared the result with our query-directed scheme (studied in Section 4.2, which is simply Step 3 of our query restructuring). Figure 13 shows the results. There is up to a 19% improvement in energy when operations are shuffled within a query based on their table usage.
When the query restructuring scheme is combined with
hardware-directed schemes, there is further improvement in
energy savings (Figure 14). The static standby scheme works only
for small queries that have a uniform access pattern, but when
complex queries are encountered, the dynamic runtime scheme
outperforms the static standby scheme due to its good prediction of
(i) Original queries:
Q1: -> function(A) (9000 cycles)
    -> hash join
    -> scan B (9000 cycles)
    -> scan A (9000 cycles)
Q2: -> aggregate (20 cycles)
    -> scan B (9000 cycles)
    -> scan A (9000 cycles)

(ii) After applying Step 1:
Q1: -> hash join
    -> scan B (9000 cycles)
    -> function(A) (9000 cycles)
    -> scan A (9000 cycles)
Q2: -> aggregate (20 cycles)
    -> scan B (9000 cycles)
    -> scan A (9000 cycles)

(iii) After applying Step 2 (Q1 + Q2):
    -> aggregate (from Q2)
    -> hash join (from Q1)
    -> scan B (from Q2)
    -> scan B (from Q1)
    -> function(A) (from Q1)
    -> scan A (from Q2)
    -> scan A (from Q1)

(iv) After applying Step 3 (Q1 + Q2):
    -> aggregate (from Q2)
    -> Put B=OFF (A is already OFF)
    -> hash join (from Q1)
    -> scan B (from Q2)
    -> scan B (from Q1)
    -> Put B=ON
    -> Put A=OFF
    -> function(A) (from Q1)
    -> scan A (from Q2)
    -> scan A (from Q1)
    -> Put A=ON (B is already OFF)

Figure 11: Example of query restructuring and regrouping based on energy behavior.
the application behavior. This can be seen in Figure 14, where the
dynamic threshold scheme performs better in the TPC-H scenarios
than for the handheld query scenarios.
The savings obtained
by putting a module into multiple low-power modes for longer
periods are more than the savings obtained by periodically putting
a module to just standby mode.
The software-directed schemes perform similarly to the dynamic runtime threshold strategy when combined with the query restructuring algorithm. In Figure 14, the insertion of explicit turn-on/off instructions improves the energy by an average of 78% when compared to the unoptimized version. This result is comparable to
the improvements obtained using the dynamic threshold scheme.
In fact, the dynamic threshold scheme performs slightly better for
some TPC-H queries (e.g., S12, S13, and S14). This situation occurs
due to the following factor. When multiple queries are combined
using query restructuring, it becomes difficult to predict the
inter-access times since each query has a varying access pattern,
and combining random access patterns complicates the job of the
predictor (and requires a more sophisticated predictor). The runtime
schemes work at the hardware instruction level without any
knowledge of the DBMS application. But, this illustrates how a
simple software technique implemented at the query optimizer (by
just analyzing the high-level query structure) is able to achieve improvements
as good as an equivalent but expensive hardware technique
.
As mentioned earlier in the paper, when queries are restructured
and grouped, the memory access pattern changes.
The bank turn-on/off instructions can be inserted only in prominent "hot" and "cold" access regions, respectively. There are, however, a few module-level effects that are beyond the control of software. For instance, we insert turn-on/off instructions based on tables, and a given table could be scattered across many modules. Our predictor estimates the inter-access time for which the entire table needs to be put into a low-power mode. However, even during a table access, there are regions (modules) that are hardly used. The dynamic runtime scheme is extremely good at handling this situation through its ability to put individual modules into a low-power state based on just that module's accesses. This implies that the combination of hardware and software schemes forms the best strategy when query restructuring is deployed.
Figure 14 also shows the case when both the dynamic runtime scheme and the turn-on/off instructions are used in tandem after
Figure 12: Contribution of query restructuring towards energy improvements. The energy values shown are normalized to the version with no optimizations.
query restructuring. The benefits obtained from such hardware-software interaction are substantial. There is an average 90% reduction in memory energy consumption across the applications; in some cases, there is up to a 95% improvement in energy consumption. These results clearly show that query restructuring combined with the use of low-power operating modes can lead to significant energy savings.
7.3 Performance Overhead Analysis
Query restructuring combined with the use of low-power modes
has an impact on the performance. In Figure 15, we present the normalized system-wide performance of our query restructuring
scheme. It is evident that the performance improves by an average
of 48% when multiple queries are restructured and grouped. The
improvement in performance is mainly due to the improved locality
utilization in the memory hierarchy. That is, the data brought
to the cache by one query is reused by other queries (as a result
of restructuring). We do not present here detailed cache behavior
statistics due to lack of space.
Figure 16 shows the normalized performance for the combination schemes as well. When the static standby scheme is used with query restructuring, the performance improvements obtained from query restructuring are negated by the resynchronization overhead of the standby mode for each access. Thus, the performance
Figure 13: Benefits obtained by restructuring operations within a query (contribution of Step 1).
Figure 14: Energy consumption reduces significantly when low-power modes are utilized along with the query restructuring scheme. Values shown are normalized to the unoptimized version. The best energy savings come from a hybrid hardware-software scheme.
Figure 15: Performance improvement obtained from basic query restructuring over the unoptimized version.
Figure 16: Overall performance after applying energy optimizations along with query restructuring. Values shown are normalized to the unoptimized version.
worsens in some cases by as much as 65% for complicated queries. Overall, however, there is still a 10% performance improvement across all applications on average. The turn-on/off instructions have the least performance overhead, and hence preserve the performance improvements obtained from query restructuring. From Figure 16, this combination shows a 47% improvement in performance (negating the improvements obtained from basic query restructuring by a mere 1%). The dynamic runtime threshold, on the other hand, negates the performance improvements from query restructuring by an average of 6%. Combining turn-on/off instructions with the dynamic runtime threshold shows an average performance improvement of 45% across applications, which implies a 3% overhead added by the low-power schemes on top of query restructuring. Thus, it is clear that query restructuring with both turn-on/off instructions and the runtime threshold forms the best alternative from both the energy consumption and performance perspectives.
CONCLUDING REMARKS
This paper is an attempt to study the potential of employing
low-power operating modes to save memory energy during query
execution. We propose hardware-directed and software-directed
(query-directed) schemes that periodically transition the memory
to low-power modes in order to reduce the energy consumption
of memory-resident databases. Our experimental evaluations using
two sets of queries clearly demonstrate that query-directed schemes
perform better than hardware-directed schemes since the query optimizer
knows the query access pattern prior to query execution,
and can make use of this information in selecting the most suitable
mode to use when idleness is detected. This scheme brings about a 68% reduction in energy consumption. In addition, the query-directed scheme can also preactivate memory banks before they are actually needed to reduce the potential performance penalty. Our query restructuring scheme based on memory bank accesses provides further scope for optimization. One can re-order operations
within a query to increase bank inter-access times. It is also
possible to go beyond this, and consider the access patterns of multiple
queries at the same time. Multiple queries are optimized based
on their table accesses, i.e., all accesses to a table are clustered as
much as possible. This scheme is able to put memory banks to low-power
operating modes for longer periods of time due to fewer table
activations. There is up to 90% improvement in energy and 45%
improvement in performance when queries are restructured and regrouped
based on their table accesses. Overall, we can conclude
that a suitable combination of query restructuring and low-power
mode management can bring large energy benefits without hurting
performance.
REFERENCES
[1] R. Alonso and S. Ganguly. Query optimization for energy efficiency in mobile
environments. In Proc. of the Fifth Workshop on Foundations of Models and
Languages for Data and Objects, 1993.
[2] N. An, S. Gurumurthi, A. Sivasubramaniam, N. Vijaykrishnan, M. Kandemir,
and M.J. Irwin. Energy-performance trade-offs for spatial access methods on
memory-resident data. The VLDB Journal, 11(3):179-197, 2002.
[3] N. Anciaux, L. Bouganim, and P. Pucheral. On finding a memory lower bound
for query evaluation in lightweight devices. Technical report, PRiSM Laboratoire
de recherche en informatique, 2003.
[4] T. M. Austin. The simplescalar/arm toolset. SimpleScalar LLC.
http://www.simplescalar.com/.
[5] Birdstep Technology. Database Management In Real-time and Embedded
Systems - Technical White Paper, 2003. http://www.birdstep.com/collaterals/.
[6] Bloor Research Ltd. Main Memory Databases, November 1999.
[7] P.A. Boncz, S. Manegold, and M.L. Kersten. Database architecture optimized
for the new bottleneck: Memory access. In The VLDB Journal, pages 54-65,
1999.
[8] D. Brooks, V. Tiwari, and M. Martonosi. Wattch: a framework for
architectural-level power analysis and optimizations. In Proc. International
Symposium on Computer Architecture, 2000.
[9] Q. Cao, P. Trancoso, J.-L Larriba-Pey, J. Torrellas, R. Knighten, and Y. Won.
Detailed characterization of a quad pentium pro server running tpc-d. In Proc.
of the International Conference on Computer Design, 1999.
[10] S. Chaudhuri and K. Shim. Optimization of queries with user-defined
predicates. ACM Transactions on Database Systems, 24(2):177-228, 1999.
[11] G.P. Copeland, T. Keller, R. Krishnamurthy, and M. Smith. The case for safe
ram. In Proc. of the Fifteenth International Conference on Very Large Data
Bases, pages 327-335, 1989.
[12] Database Management System, The PostgreSQL Global Development Group.
PostgreSQL 7.2, 2001. http://www.postgresql.org/.
[13] V. Delaluz, M. Kandemir, N. Vijaykrishnan, A. Sivasubramaniam, and M.J.
Irwin. Dram energy management using software and hardware directed power
mode control. In Proc. of the International Symposium on High-Performance
Computer Architecture, 2001.
[14] Z. Fong. The design and implementation of the postgres query optimizer.
Technical report, University of California, Berkeley.
http://s2k-ftp.cs.berkeley.edu:8000/postgres/papers/.
[15] P. Gassner, G.M. Lohman, K.B. Schiefer, and Y. Wang. Query optimization in
the IBM DB2 family. Data Engineering Bulletin, 16(4):4-18, 1993.
[16] Le Gruenwald and S.M. Banik. Energy-efficient transaction management for
real-time mobile databases in ad-hoc network environments. In Proc. of the
Second International Conference on Mobile Data Management, 2001.
[17] Handspring. Handspring Organizers, 2004.
http://www.handspring.com/products/.
[18] J.M. Hellerstein. Optimization techniques for queries with expensive methods.
ACM Transactions on Database Systems, 23(2):113-157, 1998.
[19] T. Imielinski, S. Viswanathan, and B.R. Badrinath. Energy efficient indexing on
air. In Proc. of ACM SIGMOD Conference, 1994.
[20] Intel Corporation. Intel 440BX AGPset: 82443BX Host Bridge/Controller Data
Sheet, April 1998.
[21] A.R. Lebeck, X. Fan, H. Zeng, and C.S. Ellis. Power aware page allocation. In
Proc. of the International Conference on Architectural Support for
Programming Languages and Operating Systems, 2000.
[22] S. Madden, M.J. Franklin, J.M. Hellerstein, and W. Hong. The design of an
acquisitional query processor for sensor networks. In Proc. of the ACM
SIGMOD International Conference on Management of Data, pages 491-502.
ACM Press, 2003.
[23] S. Manegold. Understanding, Modeling, and Improving Main-Memory
Database Performance. Ph.d. thesis, Universiteit van Amsterdam, Amsterdam,
The Netherlands, December 2002.
[24] C.L. Monma and J.B. Sidney. Sequencing with series-parallel precedence
constraints. Mathematics of Operations Research, 4:215-224, 1979.
[25] Palm Inc. Palm Handhelds, 2004. http://www.palm.com/products/.
[26] The PostgreSQL Global Development Group. PostgreSQL 7.2 Developers
Guide, 2002. http://www.postgresql.org/docs/.
[27] P. Pucheral, L. Bouganim, P. Valduriez, and C. Bobineau. Picodbms: Scaling
down database techniques for the smartcard. The VLDB Journal,
12(1):120-132, 2001.
[28] J.M. Rabaey, A. Chandrakasan, and B. Nikolic. Digital Integrated Circuits.
Prentice Hall, second edition, 2002.
[29] R. Ramakrishnan and J. Gehrke. Database Management Systems. McGraw-Hill
publishers, third edition, 2002.
[30] Rambus Inc. Rambus RDRAM 512MB Datasheet, 2003.
[31] Samsung Microelectronics. Mobile 512MB DRAM Chip Series.
http://www.samsung.com/Products/Semiconductor/.
[32] S. Sarawagi and M. Stonebraker. Reordering query execution in tertiary
memory databases. In The VLDB Journal, pages 156-167, 1996.
[33] A. Silberschatz, H.F. Korth, and S. Sudarshan. Database System Concepts.
McGraw-Hill, fourth edition, 2001.
[34] Sleepycat Software. Berkeley DB V4.2, 2004.
http://www.sleepycat.com/docs/index.html.
[35] Transaction Processing Performance Council. TPC-H Benchmark Revision
2.0.0, 2003.
 | hardware energy scheme;query-directed energy management;power consumption;memory-resident databases;database;energy;low-power modes;query-directed scheme;banked memory;multi-query optimization;DRAM;energy optimization;query restructuring
82 | Enforcing Security and Safety Models with an Information Flow Analysis Tool | Existing security models require that information of a given security level be prevented from "leaking" into lower-security information. High-security applications must be demonstrably free of such leaks, but such demonstration may require substantial manual analysis. Other authors have argued that the natural way to enforce these models automatically is with information-flow analysis, but have not shown this to be practicable for general purpose programming languages in current use. Modern safety-critical systems can contain software components with differing safety integrity levels, potentially operating in the same address space. This case poses problems similar to systems with differing security levels; failure to show separation of data may require the entire system to be validated at the higher integrity level. In this paper we show how the information flow model enforced by the SPARK Examiner provides support for enforcing these security and safety models. We describe an extension to the SPARK variable annotations which allows the specification of a security or safety level for each state variable, and an extension to the SPARK analysis which automatically enforces a given information flow policy on a SPARK program. | INTRODUCTION
Software is often used to develop security-critical applications
. Some of these applications are required to manage information
of different classification levels, ensuring that each
user may only access the data for which he or she has adequate
authorisation. This requirement has two components;
the developer must produce a design and implementation
which supports this model and its constraints, and the implementation
must support verification that the constraints
are never violated. It is not enough for the implementation
to be correct; for it to be secure, it must be seen to be
correct.
In this paper we examine the problem of developing and
verifying a security-critical application containing this security
structure (often termed multi-level security). We will
analyse recent work on information flow and security and
show how the information flow analysis of the SPARK Examiner
tool is appropriate for solving this problem. We will
show how it is also important for the efficient analysis of
safety-critical systems. We will then describe modifications
to the current definition of the SPARK Ada language[2]
and the Examiner which permit complete validation of Ada
programs against defined security models such as the Bell-LaPadula
model[3].
We intend that the additional flow analysis features described
here will appear in a future commercial Examiner
release. Although we anticipate possible minor changes to
the syntax and analysis performed, we expect the released
version to implement the analysis described here in all substantial
aspects.
EXISTING WORK
In this section we identify typical standards which will inform
the development and verification of a security-critical
or safety-critical application. We then analyse a survey paper
on information flow and security, and compare its conclusions
against the information flow model of the SPARK
annotated subset of Ada95.
2.1 Standards
The Common Criteria for IT Security [5] specify the development
and verification activities that are suitable for
applications at differing levels of security. These Evaluation
Assurance Levels (EALs) range from EAL-1 (lowest) to
EAL-7 (highest). As the EAL number rises, the required
activities become more rigorous; at the higher levels, formal
specification and analysis of the software becomes required.
For safety-related applications there are similar concepts
for the safety criticality of software; RTCA DO-178B[10]
for civil avionics software defines criticality levels E (lowest
) through to A (most critical). As one would expect,
the required development and verification activities become
increasingly more onerous (and expensive) with increasing
criticality level. As a result, if an avionics system were to
contain a "core" of critical functionality at Level A and a
larger body of utility code at Level D then either the entire
system would have to be developed and verified at Level A
or a rigorous argument would have to be applied that the
Level D code could not in any way affect the integrity of the
Level A code.
Another notation for safety criticality is the Safety Integrity
Level (SIL), described in IEC 61508[8]. SIL-1 is the
lowest level of safety integrity, and SIL-4 the highest; DO-178B
level A approximates to SIL-3/SIL-4 when comparing
the development and verification activities required.
In this paper we assume that we are attempting to validate
systems at EAL-5 or greater (for security) and RTCA
Level B or greater (for safety). We are therefore required to
provide a rigorous and comprehensive justification for any
statements which we make about the separation of data.
Therefore we now look at how such statements may be expressed
.
2.2 Information Flow
"Information flow" as it applies to conventional imperative
computer programs has a range of definitions, and is
often confused with "data flow"; we shall take the definition
as expressed in Barnes[2] p.13:
data flow analysis is concerned with the direction of
data flow; whereas
information flow analysis also considers the coupling
between variables.
An example of the difference between data flow and information
flow comes from the following (Ada) code:
while (X < 4) loop
A := A + 2 * B;
X := X * 2;
end loop;
Here, data flow analysis would state that data flows from
X to X and from A and B to A. Information flow analysis
would note additionally that the final value of A is affected
by the initial value of X, and hence that there is information
flow from X to A.
Sabelfeld and Myers[11] recently surveyed the use of information
flow analysis in software. They viewed the fundamental
problem of maintaining security as one of tracking
the flow of information in computing systems. They examined
the various ways that secure information could leak into
less secure information, overtly and covertly. They identify
in particular the concept of implicit information flows, an
example of which is given in the while-loop example above.
Their survey characterised language-based information flow
as requiring:
1. semantics-based security, so that rigorous argument
could be made about a variation of a high-security
value not affecting a low-security value; and
2. a security type system, so that a program or expression
can be assigned security values (types) and the type
changes of the program or expression can be characterised
.
They concluded that "standard" security practices do not
and cannot enforce the end-to-end confidentiality required
by common security models. They characterised the existing
work in security and information flow analysis, but notably
did not address the work of Bergeretti and Carre[4].
2.3 Security models
The Bell-LaPadula (BLP) model of computer security[3]
enforces two properties:
1. no process may read data from a higher security level;
and
2. no process may write data to a lower security level.
Such multi-level security has a number of problems; Anderson
[1] provides a list of them including:
"blind write-up": the inability to inform low-security
data whether a write to high-security data has happened
correctly;
"downgrading": moving information from a high security
level to a lower level is sometimes desirable; and
"TCB bloat": a large subset of the operating system
may end up in the Trusted Computing Base (TCB).
The Dolev-Yao security model[6] makes secret information
indivisible; it cannot be leaked in part but only in total.
2.4 Information Flow in Ada
Bergeretti and Carre wrote a seminal paper[4] describing
a practical implementation of information flow analysis
in the SPADE Pascal language (although the principles
were applicable to most conventional imperative programming
languages). This was managed by the composition
of matrices representing information flow dependencies between
variable imports and exports. Notably, conditional
and infinite loops were permissible and analysable within
the framework of such a language; these were managed by
computing the transitive closure of the information flow matrix
corresponding to one execution of the loop.
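The idea can be made concrete with a few lines of Python that compute the information-flow relation for the earlier while-loop example as the transitive closure of the single-iteration dependencies. The dependency sets are written out by hand here, whereas SPADE and SPARK derive them from the program text.

# deps[v] = variables whose initial values may influence v after one loop
# iteration; X appears in deps["A"] because it controls whether the
# assignment to A executes at all
deps = {"A": {"A", "B", "X"},
        "B": {"B"},
        "X": {"X"}}

def transitive_closure(deps):
    closed = {v: set(srcs) for v, srcs in deps.items()}
    changed = True
    while changed:
        changed = False
        for v, srcs in closed.items():
            extra = set().union(*(closed.get(s, set()) for s in srcs)) - srcs
            if extra:
                srcs |= extra
                changed = True
    return closed

print(transitive_closure(deps))
# {'A': {'A', 'B', 'X'}, 'B': {'B'}, 'X': {'X'}} -- X flows into A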
This information flow model was implemented in the SPARK
annotated Ada subset[2]. The subset requires each subprogram
to be annotated with the required information flow
("derives" annotations) if information flow analysis is required
. The SPARK subset is enforced by the SPARK Examiner
tool which checks the required information flow of
each subprogram against the actual information flow. The
annotations (and other SPARK rules such as the ban on circular
dependency) are necessary to make this information
flow analysis tractable.
The result of this is that it is possible, in a fully-analysed
SPARK program, to be certain that a given exported variable
is independent of a given imported variable. This is a
key step towards supporting multi-level security in SPARK
Ada, and makes it easier to write demonstrably secure and
correct code, but is not a complete solution. In Section 3
we will describe how SPARK Examiner analysis may be extended
better to implement these checks.
We now examine case studies of the use of SPARK, and
the utility of information flow in real applications.
2.5 Security and Safety Applications
An example of a high-security application which was partly
developed in SPARK is the MULTOS CA[7]. This was developed
and verified at a high level of integrity, approximating
to the Common Criteria EAL-5. The delivered system
had a defect rate of 0.04 errors per thousand lines of
code (KLOC). This showed that SPARK was a practical
language for implementing high-security applications. Part
of the analysis required during development and verification
of the CA was to show that secret data could not leak out
directly or indirectly in unclassified outputs.
An example of a safety-critical application with mixed integrity
levels where information flow analysis was helpful
was the SHOLIS information system described by King et
al.[9]. This application mixed SIL-4 and non-critical code,
and was written in the SPARK subset of Ada83. Static
verification was used, including information flow analysis
and partial program proof, to verify significant properties
of SHOLIS. There was a successful argument based partly
on information flow analysis that the high-SIL code was not
compromised by the low-SIL code; however, this argument
had to be made manually, based on the validated information
flow of each subprogram, as the Examiner did not provide
such tracing facilities at the program level.
IMPROVING SPARK ANALYSIS
The outstanding need in SPARK is to be able to mark
state variables in packages with a (numerical) level of security
and / or safety. This has been implemented by allowing
an optional aggregate after a package ("own") variable declaration
.
3.1 Marking security levels
Suppose that a package Classify was defined as follows
to create various secrecy levels:
package Classify is
   -- Security levels
   UNCLASSIFIED : constant := 0;
   RESTRICTED   : constant := 1;
   CONFIDENTIAL : constant := 2;
   SECRET       : constant := 3;
   TOPSECRET    : constant := 4;
end Classify;
Then we define a package KeyStore to store and manage
a symmetric encryption key SymmetricKey, designed to mutate
after each encryption according to a rotation parameter
RotorValue
. The key is clearly a high-security data item,
with the rotor value requiring less security.
KeyStore marks its state variables with the field Integrity (a name chosen to make sense for both security and safety applications) thus:
--# inherit Classify;
package KeyStore
--# own SymmetricKey(Integrity => Classify.SECRET);
--#     RotorValue(Integrity => Classify.RESTRICTED);
is
   procedure Rotate;
   --# global in     RotorValue;
   --#        in out SymmetricKey;
   --# derives SymmetricKey from
   --#         SymmetricKey, RotorValue;

   procedure Encrypt(C : in MessageBlock;
                     E : out MessageBlock);
   --# global in SymmetricKey;
   --# derives E from C, SymmetricKey;
   ...
end KeyStore;
Any security analysis must show, in the case of this code, that SymmetricKey data cannot leak into RotorValue or other lower-classification data; i.e. there must be no subprogram (or main program) whose information flow annotation shows RotorValue as being derived from the import SymmetricKey.
Note that in this case package Classify will need to be inherited by all relevant packages, but will never be withed and so never compiled.
3.2 Implementation
We now define how the integrity levels are marked in the
SPARK language, and how they are enforced by the SPARK
Examiner.
The extra information flow checking is invoked using the
/policy=X
command line switch to the Examiner. Current
valid policy values are security and safety.
3.2.1 Variable declaration
The above Integrity property on package own variables
is a static property; at any point in static analysis of the
code, the actual and required integrity level of an own variable
is known. Own variables without Integrity levels are
taken to have a default integrity level of Natural'Last (i.e.
very highly classified) if imported under a security policy,
and Natural'First (i.e. unclassified) if exported under a
security policy. This gives the most paranoid checking so
that data may not leak from an input to an output through
an intermediate variable with unspecified integrity.
If the analysis policy is safety then the default integrity
values for unspecified own variables are reversed.
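The default rule is small enough to restate as code; the Python fragment below is only a paraphrase of it, with stand-in numeric bounds for Natural'First and Natural'Last.

NATURAL_FIRST, NATURAL_LAST = 0, 2**31 - 1   # stand-ins for the Ada bounds

def default_integrity(is_import, policy):
    # own variables with no Integrity annotation get the most pessimistic
    # level for the chosen policy, so unannotated state cannot launder data
    if policy == "security":
        return NATURAL_LAST if is_import else NATURAL_FIRST
    return NATURAL_FIRST if is_import else NATURAL_LAST   # policy == "safety"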
3.2.2 Integrity checks
Information flow is declared explicitly in derives annotations
for subprograms; an example from our cryptographic
KeyStore
is the Rotate subprogram which changes the key
based on the value of RotorValue:
procedure Rotate;
--# global in     RotorValue;
--#        in out SymmetricKey;
--# derives SymmetricKey from
--#         SymmetricKey, RotorValue;
Using policy security the Examiner will then check that
the integrity level of the export SymmetricKey (SECRET) is
no less than the integrity levels of any import (SECRET and
RESTRICTED
). With policy safety the checks would be that
the integrity of the export is no more than the integrity levels
of any import, i.e. that high-safety data exports cannot
be contaminated by low-safety data imports.
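The check itself can be stated compactly. The Python sketch below applies it to a single derives clause using the levels from package Classify; it illustrates the rule rather than reproducing the Examiner's implementation.

integrity = {"SymmetricKey": 3, "RotorValue": 1}   # SECRET = 3, RESTRICTED = 1

def violations(derives, integrity, policy):
    # derives maps each exported variable to the set of imports it is
    # derived from; returns the (import, export) pairs that break the policy
    bad = []
    for export, imports in derives.items():
        for imp in imports:
            if policy == "security" and integrity[export] < integrity[imp]:
                bad.append((imp, export))   # classified data would leak down
            if policy == "safety" and integrity[export] > integrity[imp]:
                bad.append((imp, export))   # low-integrity data contaminates
    return bad

print(violations({"SymmetricKey": {"SymmetricKey", "RotorValue"}},
                 integrity, "security"))    # [] -- Rotate is acceptable
print(violations({"RotorValue": {"SymmetricKey"}}, integrity, "security"))
# [('SymmetricKey', 'RotorValue')] -- the flow the Examiner must reject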
Information flow is also checked at each procedure call by
substituting in actual parameters (which may be own variables
) for formal parameters and rechecking the procedure's
known information flow for integrity flows. As examples we
give the procedures which get and set the rotor and key
values:
procedure GetKey(This_Key : out Key);
--# global in SymmetricKey;
--# derives This_Key from SymmetricKey;
procedure GetRotor(This_Rotor : out Rotor);
--# global in RotorValue;
--# derives This_Rotor from RotorValue;
procedure SetKey(New_Key : in Key);
--# global out SymmetricKey;
--# derives SymmetricKey from New_Key;
procedure SetRotor(New_Rotor : in Rotor);
--# global out RotorValue;
--# derives RotorValue from New_Rotor;
Because RotorValue is RESTRICTED and SymmetricKey is
SECRET
, the analysis requires a check at each invocation of
these subprograms that the actual parameters mapped to
formal parameters do not violate integrity flow checks.
3.2.3 Analysis techniques not adopted
One analysis technique which we considered (but rejected)
was to track the integrity flows within each subprogram
body and raise warnings at each individual statement where
a violation occurs. We now explain how this would have
worked and why we rejected it.
In the following code, the programmer generates a key
and rotor, encrypts a message with it, then tries to create a
new rotor based on the old key.
-- Key and rotor generation
R1 := KeyStore.MakeRotor(34,56,22,55);
KeyStore.SetRotor(R1);
K1 := KeyStore.MakeKey(66,11,2,4);
KeyStore.SetKey(K1);
-- Encrypt a message
KeyStore.Encrypt(C => Clear, E => Encrypted);
-- Get a copy of the (changed) key and break it down into data
KeyStore.GetKey(K1);
KeyStore.DecomposeKey(K1,I1,I2,I3,I4);
-- Build a new rotor
R1 := KeyStore.MakeRotor(I1,I2,I3,I4);
KeyStore.SetRotor(R1);
The statement-by-statement information flow would proceed
as shown in Table 1, where C denotes Clear, RV denotes
RotorValue, SK denotes SymmetricKey, and E denotes
Encrypted
.
The final step causes RotorValue to exceed its assigned
integrity level, and would generate a static integrity flow
error.
The problem with this technique arises from the need to
track the integrity levels of local variables. It quickly becomes
clear that for practical analysis reasons each local
variable involved needs to be assigned an integrity level.
This is possible, and is done by declaring them as own variables
with integrity levels in a package embedded in the
subprogram in question, but is cumbersome. It also requires
substantial rework of any existing code which we may want
to retro-analyse.
R1  K1  SK  RV  I1  C   E    Instruction
0   0   1   0   0   1        R1 :=
0   0   1   1                SetRotor
0   0   1   1                K1 :=
0   0   3   1   1            SetKey
0   0   3   1   1   3        Encrypt
0   3   3   1   1   3        GetKey
0   3   3   1   3   1   3    Decompose
3   3   3   1   3   1   3    R1 :=
3   3   3   3   3   1   3    SetRotor
Table 1: Information flow for example
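For completeness, the rejected technique amounts to the small interpreter sketched below in Python: each statement raises the tracked level of its exports to the maximum of its imports and flags any variable that exceeds its declared ceiling. The statement list mirrors the code above; the starting levels and the handling of the local variables are assumptions made for illustration.

declared = {"RotorValue": 1, "SymmetricKey": 3, "Encrypted": 3}   # ceilings
level = {"Clear": 1, "SymmetricKey": 3, "RotorValue": 1}          # assumed start

statements = [                                   # (exports, imports)
    (["R1"], []),                                # R1 := MakeRotor(literals)
    (["RotorValue"], ["R1"]),                    # SetRotor
    (["K1"], []),                                # K1 := MakeKey(literals)
    (["SymmetricKey"], ["K1"]),                  # SetKey
    (["Encrypted"], ["Clear", "SymmetricKey"]),  # Encrypt
    (["K1"], ["SymmetricKey"]),                  # GetKey
    (["I1", "I2", "I3", "I4"], ["K1"]),          # DecomposeKey
    (["R1"], ["I1", "I2", "I3", "I4"]),          # R1 := MakeRotor(I1..I4)
    (["RotorValue"], ["R1"]),                    # SetRotor -- the leak
]

for exports, imports in statements:
    incoming = max([level.get(i, 0) for i in imports], default=0)
    for e in exports:
        level[e] = max(level.get(e, 0), incoming)
        if e in declared and level[e] > declared[e]:
            print("integrity flow error: %s raised to level %d" % (e, level[e]))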
3.2.4 Example of checking
We placed the code described above into procedure Operate
in a package Crypto and annotated the declaration with the
correct derives annotation thus:
--# inherit KeyStore, Classify, BitString;
package Crypto
--# own Clear,
--#     Encrypted(Integrity => Classify.SECRET);
is
   procedure Operate;
   --# global out    KeyStore.RotorValue, Encrypted;
   --#        in out KeyStore.SymmetricKey;
   --#        in     Clear;
   --# derives KeyStore.SymmetricKey,
   --#         KeyStore.RotorValue
   --#             from KeyStore.SymmetricKey
   --#         &
   --#         Encrypted
   --#             from Clear,
   --#                  KeyStore.SymmetricKey
   --#         ;
end Crypto;
Analysis using /policy=security gives the following static
semantic error:
Unit name:  Crypto
Unit type:  package specification
Unit has been analysed, any errors are listed below
1 error(s) or warning(s)

Line
  30    --#         KeyStore.SymmetricKey
                    ^1
*** (  1)  Semantic Error :175: Information flow from
           KeyStore.SymmetricKey to KeyStore.RotorValue violates the
           selected information flow policy.
which correctly identifies the potential leak.
3.3 Case study
SHOLIS is the Royal Navy's Ship Helicopter Operating
Limits Information System [9] designed to assist landing of
helicopters on Royal Navy Type 23 frigates. Failure of this
system could result in the death of helicopter pilots and passengers
, loss of a helicopter and damage to the ship. This
is intolerable for normal operation, hence SIL-4 reliability is
required to give sufficient confidence that such an accident
will not happen during the in-service lifetime of the system.
SHOLIS is an on-demand system rather than a continuously
operating system, and so has a required probability of failure
to function on demand of approximately 10^-4; a more
precise probability would be specified in the system safety
case.
However, the bulk of the SHOLIS code does not relate
to critical system functionality. The code specific to SIL-4
must be analysed at a deep level; the rest of the code can
be analysed less deeply as long as it can be shown not to
affect the SIL-4 data adversely.
3.3.1 Original analysis
The original SHOLIS code is Ada 83 and consists of 75
source files and shadow files amenable to SPARK analysis;
26.5KLoC of non-blank non-comment non-annotation code.
It is a substantial program and therefore a suitable test to
see if integrity level checking scales.
Without any own variables annotated, a full SPARK analysis
by an Examiner with policy=safety generated no integrity
errors, as we would expect.
3.3.2 Identifying critical outputs
To enable easy marking of variable safety criticality we
added a single package Safety:
package Safety is
   NSC : constant := 0;
   SC  : constant := 1;
end Safety;
We took as an example the safety-critical output Alarm2Z
in a package RMR which represents an alarm signal on the
front panel. This was annotated:
--# own Alarm2Z : BasicTypes.OkFailT
--#               (Integrity => Safety.SC);
A re-analysis of package RMR raised no integrity errors.
The other package that used Alarm2Z was EVENT which was
the main event handler. A SPARK of this package raised
integrity flow errors in subprogram Sync, where Alarm2Z depended
on a large range of other inputs, which had not yet
been marked as safety-critical. This was what we would expect
so far and confirmed that basic safety integrity analysis
was working.
3.3.3 Extending analysis
The next phase of work started in the SENSOR package
which was near the middle of the package hierarchy. We set
the package variables representing current speed, heading,
roll, pitch and wind velocity state to be safety-critical and
then ran a trial SPARK analysis. This indicated many own
variables in this and other packages which caused integrity
flow errors since they had no explicit integrity level.
For each of these packages in turn, we:
1. marked all of the package own variables as non-critical
(Safety.NSC);
2. re-analysed the package specification;
3. analysed the package body to ensure that there were
no integrity flow errors at subprogram call points; and
4. if necessary, transformed NSC variables to SC status and
re-ran.
Eventually we converged on a stable SC/NSC partition of
the variables.
3.3.4 Declassification
The DisplayBuffers state variable in I/O package sio5
was a point where safety-critical and non-safety critical data
merged. It was necessary during the actual project to produce
a manual justification that the buffer would never be
filled with non-safety critical data when there was safety-critical
data to be added and displayed to a user. We therefore
set its integrity to NSC and ignored all integrity errors
relating to flows from SIO5.DisplayBuffers.
3.3.5 Results
There were 1282 integrity flow errors, but every single
one of these referred to a flow from SIO5.DisplayBuffers
to Fault.Faults, as expected. Therefore only one manual
argument is needed to validate the separation of SC and NSC
data at this level.
Of the 233 package specification variables which were given
integrity levels, 110 were NSC and 123 were SC.
3.3.6 Lessons learned
SHOLIS was developed using a now out-of-date version
of the SPARK Examiner which did not support proof work
involving abstract state; as a consequence there were many
more public own variables than you would expect in a well-designed
modern SPARK program. This made the conversion
work slower; at the top level, as noted above, it increased
the time required beyond what was available for the
study.
The "TCB bloat" problem noted in Section 2.3 did not
seem to be a problem. While working up the calling hierarchy
there was a small amount of returning to lower levels to
make NSC variables into SC, but this did not spread out of
control.
There is a clear need for a declassification mechanism, as
discussed in more detail in Section 4.3. Being able to suppress
the integrity flow errors from DisplayBuffers would
have made the transformation process easier.
3.4 Possible tactical extensions
Given the preceding work, it is relatively simple to extend
the own variable annotation to allow other fields in the
aggregate. Within the security domain, there are considerations
which mean that security cannot be considered on a
linear scale.
An example is a set of documents on an international military
aviation development where markings might include
NATO RESTRICTED, UK RESTRICTED and ICELAND
RESTRICTED. They are all at the same level of security,
but apply to different nationalities in different ways. A UK
citizen could receive the first two, a German could receive
the first one only, and a Russian could not receive any. The
nationality information could be represented by an additional
field, which might be an array of booleans mapping
each country code to an Allowed/Forbidden code.
3.5 Security policies
So far we have considered enforcing the Bell-LaPadula security
policy. However, there are other policies which may
be desirable for enforcement. One example is a policy where
information at security level N may only flow into other information
at security level N; there is no concept of ordering
on these security levels, they may simply not be mixed.
There is further work to be done on investigating whether
other information flow policies are desirable and useful for
security-critical or safety-critical code.
In principle they
should not be complicated to enforce. The /policy=X command
line switch provides a hook to specify different policies
in future.
ISSUES FOR FUTURE RESEARCH
In this section we discuss the limitations of the existing
work and examine how the analysis techniques may be extended
in the future.
4.1 Difficulties with analysis
The concept of "label creep" as identified by Sabelfeld and
Myers refers to the tendency of data to increase in security
level as program execution progresses; assigning a
SECRET
value to one field of a large
CONFIDENTIAL
record will
make the entire record
SECRET
. It remains to be seen
how SPARK security programs should be designed in order
to minimise label creep.
Concurrency is more complex because there is the possibility
that security information may leak from the observable
scheduling order of the processes. This is addressed to some
extent because SPARK analysis of Ravenscar programs performs information flow analysis across tasks.
4.2 Wider analysis
One extension suggestion that has come from a software
development project within the company is the idea of subprogram
integrity level. The motivation is similar to that
for the SHOLIS analysis; that only part of a given program
is safety-critical, and that verification activities (proof, unit
testing levels, coverage analysis etc.) may be better focused
on the safety-critical parts. Subprograms are a more useful
unit of division for these activities than package state.
The algorithm for identifying a subprogram's actual integrity
level is to examine its exports. If it only exports own
variables then the subprogram integrity level is the maximum
of all exported own variable integrity levels. If some
exported variables are formal parameters then each invocation
of the subprogram must be examined for own variables
that may be actual parameters, and the maximum integrity
level taken over all possible exported own variables. Functions
are taken to be equivalent to a procedure with a single
exported parameter.
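A whole-program flavour of this computation is sketched below in Python; it presumes that the set of own variables which can appear as exported actual parameters at each call site is already known, which is precisely the information that only becomes available late in development.

own_integrity = {"Alarm2Z": 1, "DisplayBuffers": 0}   # SC = 1, NSC = 0 (assumed)

def subprogram_integrity(exported_globals, exported_actuals_per_call):
    # the subprogram's integrity is the maximum integrity over every own
    # variable it can export, directly or through a formal parameter
    exported = set(exported_globals)
    for actuals in exported_actuals_per_call:
        exported.update(actuals)
    levels = [own_integrity[v] for v in exported if v in own_integrity]
    return max(levels, default=0)

# exporting Alarm2Z directly makes the subprogram SC, whatever its callers do
print(subprogram_integrity(["Alarm2Z"], [["DisplayBuffers"]]))   # 1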
There are two clear choices for implementing this strategy:
1. whole-program analysis, determining subprogram integrity
level once all invocations are known; or
2. partial-program analysis, annotating each critical subprogram
declaration with its integrity level and checking
at declaration and each invocation that the maximum
integrity level is not violated.
The first choice is minimal-impact, but does not admit
analysis until the whole program is written which is likely to
prove troublesome; the integrity level of many subprograms
will not be known until very late in the development process,
by which time testing should already be ramped up.
The second choice is higher impact; it requires an extra
annotation to be added to the SPARK language and checked
by the Examiner, and requires developers to add the annotation
to each potentially critical subprogram as it is specified
. However, the benefits of the partial program analysis
are likely to outweigh these drawbacks.
4.3 Subverting the analysis
Declassification is occasionally necessary in security programs
; this is when the assigned security level of information
is deliberately reduced. An example would be a security filter
which took SECRET information, stripped out sensitive fields, and output information which was no more than CONFIDENTIAL level. This can be done in SPARK by hiding
the body of a declassifying subprogram from the Examiner,
but this is clearly not an optimal solution. A better solution
should be found.
4.4 Considerations for certification
In very high security applications it may be necessary to
certify the object code as well as the source code. It remains
to be seen whether and how the information known from the
SPARK source code analysis can be carried over to inform
an object code analysis.
There are other ways by which secure information can be
observed, such as covert or timing channels. A full implementation
of multi-level security information analysis should
be followed by an analysis of how much information could
be leaked this way.
CONCLUSIONS
In this paper we have described recent work on applying
information flow analysis techniques to enforcing multi-level
security in a single software application. We have shown
how the requirements listed by Sabelfeld and Myers[11] are
partially satisfied by the information flow analysis possible
with SPARK Ada and the SPARK Examiner. We have further
shown that the existing SPARK language and analysis
may be extended to enforce the Bell-LaPadula security
model with relatively little change.
SPARK Ada has already proven itself in high-security and
safety-critical application development. It now appears to
be an effective choice of language to partition data of differing
criticality, and provide a low-cost but robust argument
of safety or security for an application. SPARK already
provides the semantics-based security required by Sabelfeld
and Myers; the extensions to own variable annotations now
provide the complementary security type system.
For end-to-end confidentiality in a secure system, we believe
that SPARK Ada's extended information flow analysis
provides a hard-to-refute justification of data security.
5.1 Acknowledgements
The authors are grateful to Peter Amey, Neil White and
Will Ward from Praxis Critical Systems for their feedback
on this paper and the prototype integrity checking facilities
of the Examiner.
REFERENCES
[1] R. J. Anderson. Security engineering: a guide to
building dependable distributed systems. Wiley
Computer Publishing, 2001. ISBN 0-471-38922-6.
[2] J. Barnes. High Integrity Software: The SPARK
Approach to Safety And Security. Addison Wesley,
April 2003.
[3] D. E. Bell and L. LaPadula. Secure computer systems.
Technical Report ESR-TR-73-278, Mitre Corporation,
November 1973. v. I and II.
[4] J.-F. Bergeretti and B. A. Carre. Information-flow and
data-flow analysis of while-programs. ACM
Transactions on Programming Languages and
Systems, 7(1):37-61, January 1985.
[5] Common Criteria. Common Criteria for Information
Technology Security Evaluation, August 1999.
[6] D. Dolev and A. Yao. On the security of public-key
protocols. IEEE Transactions on Information Theory,
2(29):198-208, August 1983.
[7] A. Hall and R. Chapman. Correctness by
construction: Developing a commercial secure system.
IEEE Software, pages 18-25, Jan/Feb 2002.
[8] International Electrotechnical Commission. IEC
Standard 61508, Functional Safety of Electrical /
Electronic / Programmable Electronic Safety-Related
Systems, March 2000.
[9] S. King, J. Hammond, R. Chapman, and A. Pryor. Is
proof more cost effective than testing? IEEE
Transactions on Software Engineering, 26(8):675-686,
August 2000.
[10] RTCA / EUROCAE. RTCA DO-178B / EUROCAE
ED-12: Software Considerations in Airborne Systems
and Equipment Certification, December 1992.
[11] A. Sabelfeld and A. C. Myers. Language-based
information-flow security. IEEE Journal on Selected
Areas in Communications, 21(1), January 2003.
| security level;SPARK Ada;integrity;Information flow;Dolev-Yao;subprogram;SPARK;information flow;safety;security;Bell-LaPadula |
83 | Entropy and Self-Organization in Multi-Agent Systems | Emergent self-organization in multi-agent systems appears to contradict the second law of thermodynamics. This paradox has been explained in terms of a coupling between the macro level that hosts self-organization (and an apparent reduction in entropy), and the micro level (where random processes greatly increase entropy). Metaphorically, the micro level serves as an entropy "sink," permitting overall system entropy to increase while sequestering this increase from the interactions where self-organization is desired. We make this metaphor precise by constructing a simple example of pheromone-based coordination, defining a way to measure the Shannon entropy at the macro (agent) and micro (pheromone) levels, and exhibiting an entropy-based view of the coordination. | INTRODUCTION
Researchers who construct multi-agent systems must cope with
the world's natural tendency to disorder. Many applications
require a set of agents that are individually autonomous (in the
sense that each agent determines its actions based on its own state
and the state of the environment, without explicit external
command), but corporately structured. We want individual local
decisions to yield coherent global behavior.
Self-organization in natural systems (e.g., human culture, insect
colonies) is an existence proof that individual autonomy is not
incompatible with global order. However, widespread human
experience warns us that building systems that exhibit both
individual autonomy and global order is not trivial.
Not only agent researchers, but humans in general, seek to impose
structure and organization on the world around us. It is a universal
experience that the structure we desire can be achieved only
through hard work, and that it tends to fall apart if not tended.
This experience is sometimes summarized informally as
"Murphy's Law," the observation that anything that can go
wrong, will go wrong and at the worst possible moment. At the
root of the ubiquity of disorganizing tendencies is the Second Law
of Thermodynamics, that "energy spontaneously tends to flow
only from being concentrated in one place to becoming diffused
and spread out." [9]
Adding energy to a system can overcome the Second Law's
"spontaneous tendency" and lead to increasing structure.
However, the way in which energy is added is critical. Gasoline in
the engines of construction equipment can construct a building
out of raw steel and concrete, while the same gasoline in a bomb
can reduce a building to a mass of raw steel and concrete.
Agents are not immune to Murphy. The natural tendency of a
group of autonomous processes is to disorder, not to organization.
Adding information to a collection of agents can lead to increased
organization, but only if it is added in the right way. We will be
successful in engineering agent-based systems just to the degree
that we understand the interplay between disorder and order.
The fundamental claim of this paper is that the relation between
self-organization in multi-agent systems and thermodynamic
concepts such as the second law is not just a loose metaphor, but
can provide quantitative, analytical guidelines for designing and
operating agent systems. We explain the link between these
concepts, and demonstrate by way of a simple example how they
can be applied in measuring the behavior of multi-agent systems.
Our inspiration is a model for self-organization proposed by
Kugler and Turvey [7], which suggests that the key to reducing
disorder in a multi-agent system is coupling that system to another
in which disorder increases. Section 2 reviews this model and
relates it to the problem of agent coordination. Section 3 describes
a test scenario that we have devised, inspired by self-organization
in pheromone systems, and outlines a method for measuring
entropy in this scenario. Section 4 reports our experimental
results. Section 5 summarizes our conclusions.
AN ENTROPY MODEL FOR SELF-ORGANIZATION
In the context of biomechanical systems, Kugler and Turvey [7]
suggest that self-organization can be reconciled with second-law
tendencies if a system includes multiple coupled levels of
dynamic activity. Purposeful, self-organizing behavior occurs at
the macro level. By itself, such behavior would be contrary to the
second law. However, the system includes a micro level whose
dynamics generate increasing disorder. Thus the system as a
whole is increasingly disordered over time. Crucially, the
behavior of elements at the macro level is coupled to the micro
level dynamics. To understand this model, we begin with an
example, then abstract out the underlying dynamics, and finally
comment on the legitimacy of identifying processes at this level
with principles from thermodynamics.
2.1 An Example: Pheromones
The parade example of such a system is the self-organization of an
insect colony (such as the construction of minimal spanning tree
networks among nests and food sources by ants, or the erection of
multi-storied structures with regularly spaced pillars and floors by
tropical termites), through pheromone-based coordination [1, 11].
Pheromones are scent markers that insects use in two ways. First,
they deposit pheromones in the environment to record their state.
For example, a foraging ant just emerging from the nest in search
of food might deposit nest pheromone, while an ant that has found
food and is carrying it will deposit food pheromone. ([15]
documents use of multiple pheromones by insects.) Second, they
orient their movements to the gradient of the pheromone field. In
the example of foraging ants, those seeking food climb the
gradient of the food pheromone, while those carrying food climb
the gradient of the nest pheromone. The most realistic models of
the ants' pheromone-climbing behavior incorporate a stochastic
element in their movement. That is, they do not follow the
gradient deterministically, but use its strength to weight a roulette
wheel from which they determine their movement.
The environment in which pheromones are deposited plays a
critical role in such a system. It is not passive, but active, and
performs three information-processing functions with the
pheromones.
1. It aggregates deposits of the same flavor of pheromone from
different ants, thus providing a form of data fusion across
multiple agents at different times, based on their traversal of
a common location.
2. It evaporates pheromones over time, thus forgetting obsolete
information. This dynamic is usefully viewed as a novel
approach to truth maintenance. Conventional knowledge
bases remember every assertion unless there is cause to
retract it, and execute truth maintenance processes to detect
and resolve the conflicts that result when inconsistent
assertions coexist. Insect systems forget every assertion
unless it is regularly reinforced.
3. Evaporation provides a third function, that of disseminating
information from the location at which it was deposited to
nearby locations. An ant does not have to stumble across the
exact location at which pheromone was deposited in order to
access the information it represents, but can sense the
direction from its current location to the pheromone deposit
in the form of the gradient of evaporated pheromone
molecules.
2.2 The Model in Detail
In the Kugler-Turvey model, ants and their movements constitute
the macro level of the system, while pheromone molecules
constitute the micro level. The purposeful movement of ants,
constructing minimal paths among their nests and food sources,
achieves a reduction in disorder at the macro level, made possible
because the agents at this level are coupled to the micro level,
where the evaporation of pheromone molecules under Brownian
motion results in an overwhelming growth in disorder. As a result,
the disorder of the overall system increases, in keeping with the
Second Law, in spite of the emergence of useful order at the
macro level.
Figure 1 illustrates the interplay among these processes, and how
this model of agent coordination differs from more classical
views. Classically, agents are considered to perceive one another
directly, reason about this perception, and then take rational
action. The Kugler-Turvey model views coordination as being
mediated by an environment that agents change by their actions
(e.g., depositing pheromones), a process known as "stigmergy"
[4]. Processes in the environment generate structures that the
agents perceive, thus permitting ordered behavior at the agent
level. At the same time, these processes increase disorder at the
micro level, so that the system as a whole becomes less ordered
over time.
Research in synthetic pheromones [2, 12, 13] draws directly on
this model of coordination, but the model is of far broader
applicability. In a multi-commodity market, individual agents
follow economic fields generated by myriad individual
transactions, and self-organization in the demand and supply of a
particular commodity is supported by an environment that
distributes resources based on the other transactions in the system.
The movement of currency in such a system provides similar
functions to those of pheromones in insect systems. More broadly,
we hypothesize that a coupling of ordered and disordered systems
is ubiquitous in robust self-organizing systems, and that the lack
of such a coupling correlates with architectures that do not meet
their designers' expectations for emergent cohesiveness.
2.3 A Caveat
At this point, readers with a background in physics and chemistry
may be uneasy. These disciplines formulated the Second Law
within a strict context of processes that result in energy changes.
The fundamental physical measures associated with the second
law are temperature T, heat Q, and (thermodynamic) entropy S,
related by the definition
Equation 1
dS = dQ / T
Statistical mechanics identifies this macroscopic measure with the
number Ω of microscopically defined states accessible to the
system by the relation
Equation 2
S = k ln Ω
where k is Boltzmann's constant, 1.4E-16 erg/deg.
[Figure 1 diagram: in the pheromone-based model, a macro level (non-Newtonian flow field, "negentropy") is coupled through perception and rational action to a micro level (Newtonian force field) whose entropy increases; in the traditional model, Agent 1 and Agent 2 interact directly through perception and rational action.]
Figure 1. Comparison of Conventional and Pheromone-Based
Models of Coordination
Thus defined, thermodynamic entropy has strong formal
similarities [10] to information entropy [14]
Equation 3
S = -Σ_i p_i log p_i
where i ranges over the possible states of the system and p_i is the
probability of finding the system in state i. These formal
similarities have led to a widespread appropriation of the notion
of "entropy" as a measure of macro-level disorder, and of the
Second Law as describing a tendency of systems to become more
chaotic. Our approach participates in this appropriation.
It has been objected [8] that such appropriation completely
ignores the role of energy intrinsic to both thermodynamic
definitions (via T and dQ in the macro definition and k in the
micro definition). Such an objection implicitly assumes that
energy is logically prior to the definition, and that unless
information processes are defined in terms of energy changes, it is
illegitimate to identify their changes in entropy with those of
thermodynamics. An alternative approach to the question would
argue that in fact the prior concept is not ergs but bits, the
universe is nothing but a very large cellular automaton with very
small cells [3, 6], and physics and chemistry can in principle be
redefined in terms of information-theoretic concepts. Our
approach is sympathetic with this view. While we are not prepared
at this point to define the precise correspondence between ergs
and bits, we believe that physical models are an under-exploited
resource for understanding computational systems in general and
multi-agent systems in particular. The fact that the thermodynamic
and information approaches work in different fundamental units
(ergs vs. bits) is not a reason to separate them, but a pole star to
guide research that may ultimately bring them together.
EXPERIMENTAL SETUP
We experiment with these concepts using a simple model of
pheromone-based behavior. In this section we describe the
experiment and how one measures entropy over it.
3.1 The Coordination Problem
Consider two agents, one fixed and one mobile, who desire to be
together. Neither knows the location of the other. The mobile
agent, or walker, could travel to the destination of the stationary
one, if it only knew where to go. The stationary agent, or target,
deposits pheromone molecules at its location. As the pheromone
molecules diffuse through the environment, they create a gradient
that the walker can follow.
Initially, the walker is at (30,30) and the target is at (50,50) in a
100x100 field. Every time step, the target deposits one molecule
at (50,50). Both the walker and the pheromone molecules move
by computing an angle θ ∈ [0, 2π] relative to their current heading
and taking a step of constant length (1 for the walker, 2 for the
pheromone molecule) in the resulting direction. Thus both
molecules and walkers can be located at any real-valued
coordinates in the field. Molecules move every cycle of the
simulation and the walker every five cycles, so altogether the
molecules move ten times as fast as the walker. Molecules fall off
of the field when they reach the edge, while the walker bounces
off the edges.
Molecules choose the heading for their next step from a uniform
random distribution, and so execute an unbiased random walk.
The walker computes its heading from two inputs.
1. It generates a gradient vector G from its current location to
each molecule within a specified radius ρ, with magnitude
Equation 4
|G| = Σ_{r_i < ρ} g / r_i²
where r_i is the distance between the walker and the ith
molecule and g is a "gravitational constant" (currently 1).
2. It generates a random vector R with random heading and
length equal to a temperature parameter T.
The vector sum G + R, normalized to the walker's step length
(1 in these experiments), defines the walker's next step. Including
R in the computation permits us to explore the effectiveness of
different degrees of stochasticity in the walker's movement,
following the example of natural pheromone systems.
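To make the walker update concrete, the sketch below implements one step in Python. It is a minimal illustration under our own assumptions: the function and variable names are ours, and we take each molecule's contribution in Equation 4 to point from the walker toward that molecule, since the text specifies only the magnitude.

```python
import numpy as np

def walker_step(walker_pos, molecules, rho=20.0, T=0.0, g=1.0, step_len=1.0):
    """One movement step of the walker (sketch of the model in Section 3.1).

    The gradient term sums a contribution g / r_i**2 from each molecule
    within radius rho; we assume each contribution points from the walker
    toward that molecule (Equation 4 gives only the magnitude).
    """
    grad = np.zeros(2)
    for mol in molecules:
        diff = np.asarray(mol, dtype=float) - walker_pos
        r = np.linalg.norm(diff)
        if 0.0 < r < rho:
            grad += (g / r**2) * (diff / r)

    # Random vector R with uniform heading and length T (the "temperature").
    theta = np.random.uniform(0.0, 2.0 * np.pi)
    rand = T * np.array([np.cos(theta), np.sin(theta)])

    total = grad + rand
    norm = np.linalg.norm(total)
    if norm == 0.0:                       # no guidance at all: pure random step
        total, norm = np.array([np.cos(theta), np.sin(theta)]), 1.0
    return walker_pos + step_len * total / norm

# Example: walker at (30, 30); molecules spread around the target at (50, 50).
molecules = np.random.normal(loc=50.0, scale=5.0, size=(100, 2))
pos = np.array([30.0, 30.0])
for _ in range(5):
    pos = walker_step(pos, molecules)
```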
The state of the walker defines the macro state of the system,
while the states of the molecules define the micro state. This
model can easily be enhanced in a number of directions, including
adding multiple walkers and multiple targets, and permitting
walkers and targets to deposit pheromone molecules of various
flavors. The simple configuration is sufficient to demonstrate our
techniques and their potential for understanding how the walker
finds the target.
3.2 Measuring Entropy
Computing the Shannon or Information Entropy defined in
Equation 3 requires that we measure
1. the set of states accessible to the system and
2. the probability of finding the system in each of those states.
3.2.1 Measuring the Number of System States
In most computational systems, the discreteness of digital
computation makes counting system states straightforward
(though the number of possible states is extremely high). We have
purposely defined the movement of our walker and molecules in
continuous space to highlight the challenge of counting discrete
system states in an application embedded in the physical world
(such as a robotic application). Our approach is to superimpose a
grid on the field, and define a state on the basis of the populations
of the cells of the grid.
We can define state, and thus entropy, in terms either of location
or direction. Location-based state is based on a single snapshot of
the system, while direction-based state is based on how the system
has changed between successive snapshots. Each approach has an
associated gridding technique.
For location-based entropy, we divide the field with a grid. Figure
2 shows a 2x2 grid with four cells, one spanning each quarter of
the field. The state of this system is a four-element vector
reporting the number of molecules in each cell (in the example,
reading row-wise from upper left, <1,1,3,2>). The number of
possible states in an nxn grid with m particles is n^(2m). The
parameters in location-based gridding are the number of divisions
in each direction, their orientation, and the origin of the grid.
Rectangular grids are easiest to manage computationally, but one
could also tile the plane with hexagons.
For direction-based entropy, we center a star on the previous
location of each particle and record the sector of the star into
which the particle is found at the current step. Figure 3 shows a
four-rayed star with two particles. The state of the system is a
vector with one element for each particle in some canonical order.
Counting sectors clockwise from the upper left, the state of this
example is <2,3>. The number of possible states with an n-pointed
star and m particles is n^m. The parameters in direction-based
gridding are the number of rays in the star and the rotation
of the star about its center.
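As a concrete illustration of the two gridding techniques, the Python sketch below encodes both state definitions as hashable tuples; the helper names, the clamping at the field boundary, and the default parameters are our own choices, not part of the original experiments.

```python
import numpy as np

def location_state(positions, n, field=100.0, origin=(0.0, 0.0)):
    """Location-based state: particle counts in each cell of an n x n grid,
    returned as a tuple so it can serve as a dictionary key."""
    counts = np.zeros((n, n), dtype=int)
    cell = field / n
    for x, y in positions:
        col = min(max(int((x - origin[0]) // cell), 0), n - 1)
        row = min(max(int((y - origin[1]) // cell), 0), n - 1)
        counts[row, col] += 1
    return tuple(counts.ravel())

def direction_state(prev_positions, positions, n_rays, rotation=0.0):
    """Direction-based state: for each particle, the sector of an n_rays-pointed
    star (centered on its previous location) containing its new position."""
    sectors = []
    for (x0, y0), (x1, y1) in zip(prev_positions, positions):
        angle = (np.arctan2(y1 - y0, x1 - x0) - rotation) % (2.0 * np.pi)
        sectors.append(int(angle // (2.0 * np.pi / n_rays)))
    return tuple(sectors)
```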
In both techniques, the analysis depends critically on the
resolution of the grid (the parameter n) and its origin and
orientation (for location) or rotation (for direction).
To understand the dependency on n, consider two extremes. If n is
very large, the chance of two distributions of particles on the field
having the same state is vanishingly small. For N distributions,
each will be a distinct state, each state will have equal probability
1/N, and the entropy will be log(N). This state of affairs is clearly
not informative. At the other extreme, n = 1, all distributions
represent the same state, which therefore occurs with probability
1, yielding entropy 0, again not informative. We choose the
gridding resolution empirically by observing the length scales
active in the system as it operates.
To understand the dependency on origin/orientation or rotation,
consider two particles in the same cell. After they move, will they
still be in the same cell (keeping entropy the same) or in different
cells (increasing entropy)? Exactly the same movements of the
two particles could yield either result, depending on how the grid
is registered with the field. We follow Gutowitz's technique [5] of
measuring the entropy with several different origins and taking the
minimum, thus minimizing entropy contributions resulting from
the discrete nature of the grid.
3.2.2 Measuring the Probabilities
In principle, one could compute the probability of different
system states analytically. This approach would be arduous even
for our simple system, and completely impractical for a more
complex system. We take a Monte Carlo approach instead. We
run the system repeatedly. At each step in time, we estimate the
probability of each observed state by counting the number of
replications in which that state was observed. The results reported
here are based on 30 replications.
Shannon entropy has a maximum value of log(N) for N different
states, achieved when each state is equally probable. To eliminate
this dependence on N, we normalize the entropies we report by
dividing by log(N) (in our case, log(30)), incidentally making the
choice of base of logarithms irrelevant.
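A minimal sketch of this normalized-entropy estimate, assuming the state encodings above and N = 30 replications:

```python
import math
from collections import Counter

def normalized_entropy(states):
    """Normalized Shannon entropy of the states observed at one time step
    across the replications: H / log(N), so the value lies in [0, 1]."""
    n_reps = len(states)
    counts = Counter(states)
    h = -sum((c / n_reps) * math.log(c / n_reps) for c in counts.values())
    return h / math.log(n_reps)

# states_at_t[r] would be the (hashable) state of replication r at one time
# step, e.g. [location_state(molecules[r], n=5) for r in range(30)].
```

The minimization over grid origins described above would simply wrap this computation in a loop over candidate origins and keep the smallest value.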
EXPERIMENTAL RESULTS
We report the behavior of entropy first in the micro system, then
in the unguided and guided macro system, and finally in the
complete system.
4.1 Entropy in the Micro System
Figure 4 shows locational entropy in the micro system (the
pheromone molecules), computed from a 5x5 grid. Entropy
increases with time until it saturates at 1. The more molecules
enter the system and the more they disperse throughout the field,
the higher the entropy grows. Increasing the grid resolution has no
effect on the shape of this increase, but reduces the time to
saturation, because the molecules must spread out from a single
Figure 2. Location-based gridding.
Figure 3. Direction-based gridding.
Figure 4. Micro Entropy x Time (5x5 Grid)
location and the finer the grid, the sooner they can generate a
large number of different states.
Directional entropy also increases with time to saturation. This
result (not plotted) can be derived analytically. The molecule
population increases linearly with time until molecules start
reaching the edge. Then the growth slows, and eventually reaches
0. Let M be the population of the field at equilibrium, and
consider all M molecules being located at (50,50) through the
entire run. Initially, all are stationary, and each time step one
additional molecule is activated. Then the total number of
possible system states for a 4-star is 4^M, but the number actually
sampled during the period of linear population growth is 4^t, since
the stationary molecules do not generate any additional states.
Thus the entropy during the linear phase is log(4^t)/log(4^M) = t/M. As
the growth becomes sublinear, the entropy asymptotically
approaches 1, as with locational entropy.
4.2 Entropy in the Unguided Macro System
Figure 5 shows the path of a walker uncoupled to the micro
system (when the target is emitting no pheromone molecules).
With no coupling to the micro field, the walker is just a single
molecule executing a random walk. Figure 6 shows that locational
entropy for this walker increases over time, reflecting the
increased number of cells accessible to the walker as its random
walk takes it farther from its base. The grid size (15 divisions in
each direction) is chosen on the basis of observations of the
guided walker, discussed in the next section.
The directional entropy (not plotted) is constant at 1, since the
walker chooses randomly at each step from all available
directions.
4.3 Entropy in Guided Macro System
Now we provide the walker with a micro field by emitting
pheromone molecules from the target. Figure 7 shows the path
followed by a typical walker with radius ρ = 20 and T = 0. This
path has three distinct parts.
Initially, the walker wanders randomly around its origin at
(30,30), until the wavefront of molecules diffusing from
(50,50) encounters its radius. In this region, the walker has
no guidance, because no molecules are visible.
Once the walker begins to sense molecules, it moves rather
directly from the vicinity of (30,30) to (50,50), following the
pheromone gradient.
When it arrives at (50,50), it again receives no guidance from
the molecules, because they are distributed equally in all
directions. So it again meanders.
The clouds of wandering near the start and finish have diameters
in the range of 5 to 10, suggesting a natural grid between 20x20
and 10x10. We report here experiments with a 15x15 grid.
Because of their initial random walk around their origin, walkers
in different runs will be at different locations when they start to
move, and will follow slightly different paths to the target (Figure
8).
Figure 5. Unguided Walker Path. Axes are location in the
(100x100) field.
Figure 6. Unguided Walker Locational Entropy (15x15 Grid)
Figure 7. Guided Walker Path (ρ = 20, T = 0)
Figure 8. Ensemble of Guided Walkers (ρ = 20, T = 0)
The dots in Figure 9 and Figure 10 show the directional and
locational entropies across this ensemble of guided walkers as a
function of time. The solid line in each case plots the normalized
median distance from the walkers to the target (actual maximum
28), while the dashed line plots the normalized median number of
molecules visible to the walkers (actual maximum 151). The lines
show how changes in entropy and reduction in distance to the
target are correlated with the number of molecules that the walker
senses at any given moment.
At the beginning and end of the run, when the walkers are
wandering without guidance, directional entropy is 1,
corresponding to a random walk. During the middle portion of the
run, when the walker is receiving useful guidance from the micro
level, the entropy drops dramatically. As the temperature
parameter T is increased in the range 50 to 100, the bottom of the
entropy well rises, but the overall shape remains the same (plot
not shown).
The locational entropy presents a different story. The
minimization method for avoiding discreteness artifacts has the
effect of selecting at each time step the offset that best centers the
cells on the walkers. At the beginning of the run and again at the
end, most walkers are close together, and fall within the same cell
(because we chose a cell size comparable to these clouds).
Walkers leave the starting cloud at different times, since those
closer to the target sense the pheromones sooner, and follow
different paths, depending on where they were when the
pheromone reached them. Thus they spread out during this
movement phase, and cluster together again once they reach the
target. The effect of raising T to 100 on locational entropy is that
the right end of the curve rises until the curve assumes a similar
shape (plot not shown) to Figure 6.
Comparison of Figure 6 and Figure 10 shows that though the
directed portion of the walker's movement has higher entropy
than the undirected portions, coupling the walker to the micro
level does reduce the walker's overall entropy. Even at its
maximum, the entropy of the guided walker is much lower than
that of the random one, demonstrating the basic dynamics of the
Kugler-Turvey model.
The different behavior of locational and directional entropy is
instructive. Which is more orderly: a randomly moving walker, or
one guided by pheromones? The expected location of a random
walker is stationary (though with a non-zero variance), while that
of a guided walker is non-stationary. In terms of location, the
random walker is thus more regular, and the location entropy
reflects this. However, the movement of the guided walker is more
orderly than that of the random walker, and this difference is
reflected in the directional entropy. This difference highlights the
importance of paying attention to dynamical aspects of agent
behavior. Our intuition that the guided walker is more orderly
than the random one is really an intuition about the movement of
this walker, not its location.
4.4 Entropy in the Overall System
Central to the Kugler-Turvey model is the assertion that entropy
increase at the micro level is sufficient to ensure entropy increase
in the overall system even in the presence of self-organization and
concomitant entropy reduction at the macro level. Our experiment
illustrates this dynamic. As illustrated in Figure 4, by time 60,
normalized entropy in the micro system has reached the maximum
level of 1, indicating that each of the 30 replications of the
experiment results in a distinct state. If each replication is already
distinct on the basis of the locations of the pheromone molecules
alone, adding additional state elements (such as the location of the
walker) cannot cause two replications to become the same. Thus
by time 60 the normalized entropy of the entire system must also
be at a maximum. In particular, decreases in macro entropy, such
as the decrease in locational entropy from time 80 on seen in
Figure 10, do not reduce the entropy of the overall system.
One may ask whether the reduction in macro (walker) entropy is
causally related to the increase in micro entropy, or just
coincidental. After all, a static gradient of pheromone molecules
would guide the walker to the target just as effectively, but would
be identical in every run, and so exhibit zero entropy. This
argument neglects whatever process generates the static gradient
in the first place. An intelligent observer could produce the
gradient, but then the behavior of the system would hardly be
"self-organizing." In our scenario, the gradient emerges as a
natural consequence of a completely random process, the random
walk of the pheromone molecules emerging from the target. The
gradient can then reduce the entropy of a walker at the macro
level, but the price paid for this entropy reduction is the increase
in entropy generated by the random process that produces and
maintains the gradient.
Figure 9. Guided walker: dots = directional entropy (4 star),
solid line = median distance to target (max 28), dashed line =
median visible molecules (max 151).
Figure 10. Guided walker: dots = locational entropy (15x15
grid), solid line = median distance to target (max 28), dashed
line = median visible molecules (max 151).
One may also ask whether our hypothesis requires a quantitative
relation between entropy loss at the macro level and entropy gain
at the micro level. A strict entropy balance is not required; the
micro level might generate more entropy than the macro level
loses. In operational terms, the system may have a greater capacity
for coordination than a particular instantiation exploits. What is
required is that the entropy increase at the micro level be
sufficient to cover the decrease at the macro level, and this we
have shown.
SUMMARY
To be effective, multi-agent systems must yield coordinated
behavior from individually autonomous actions. Concepts from
thermodynamics (in particular, the Second Law and entropy) have
been invoked metaphorically to explain the conditions under
which coordination can emerge. Our work makes this metaphor
more concrete and derives several important insights from it.
This metaphor can be made quantitative, through simple state
partitioning methods and Monte Carlo simulation.
These methods show how coordination can arise through
coupling the macro level (in which we desire agent self-organization
with a concomitant decrease in entropy) to an
entropy-increasing process at a micro level (e.g., pheromone
evaporation). Our demonstration focuses on synthetic
pheromones for the sake of expositional simplicity, but we
believe that the same approach would be fruitful for
understanding self-organization with other mechanisms of
agent coordination, such as market systems.
This confirmation of the Kugler-Turvey model encourages us
as agent designers to think explicitly in terms of macro and
micro levels, with agent behaviors at the macro level coupled
in both directions (causally and perceptually) to entropy-increasing
processes at the micro level.
Some form of pheromone or currency is a convenient
mechanism for creating such an entropy-increasing process.
Researchers must distinguish between static and dynamic
order in a multi-agent system. We have exhibited a system
that appears intuitively to be self-organizing, and shown that
the measure of order underlying this intuition is dynamic
rather than static.
ACKNOWLEDGMENTS
This work is supported in part by the DARPA JFACC program
under contract F30602-99-C-0202 to ERIM CEC. The views and
conclusions in this document are those of the authors and should
not be interpreted as representing the official policies, either
expressed or implied, of the Defense Advanced Research Projects
Agency or the US Government.
REFERENCES
[1] E. Bonabeau, M. Dorigo, and G. Theraulaz. Swarm
Intelligence: From Natural to Artificial Systems. New York,
Oxford University Press, 1999.
[2] S. Brueckner. Return from the Ant: Synthetic Ecosystems for
Manufacturing Control. Thesis at Humboldt University
Berlin, Department of Computer Science, 2000.
[3] E. Fredkin. Finite Nature. In Proceedings of The XXVIIth
Rencontre de Moriond, 1992.
[4] P.-P. Grassé. La Reconstruction du nid et les Coordinations
Inter-Individuelles chez Bellicositermes Natalensis et
Cubitermes sp. La théorie de la Stigmergie: Essai
d'interprétation du Comportement des Termites Constructeurs.
Insectes Sociaux, 6:41-80, 1959.
[5] H. A. Gutowitz. Complexity-Seeking Ants. In Proceedings of
Third European Conference on Artificial Life, 1993.
[6] B. Hayes. Computational Creationism. American Scientist,
87(5):392-396, 1999.
[7] P. N. Kugler and M. T. Turvey. Information, Natural Law,
and the Self-Assembly of Rhythmic Movement. Lawrence
Erlbaum, 1987.
[8] F. L. Lambert. Shuffled Cards, Messy Desks, and Disorderly
Dorm Rooms - Examples of Entropy Increase? Nonsense!
Journal of Chemical Education, 76:1385, 1999.
[9] F. L. Lambert. The Second Law of Thermodynamics. 2000.
Web Page, http://www.secondlaw.com/.
[10] J. Lukkarinen. Re: continuing on Entropy. 2000. Email
Archive, http://necsi.org:8100/Lists/complex-science/Message/2236.html.
[11] V. D. Parunak. 'Go to the Ant': Engineering Principles
from Natural Agent Systems. Annals of Operations Research,
75:69-101, 1997.
[12] V. D. Parunak and S. Brueckner. Ant-Like Missionaries and
Cannibals: Synthetic Pheromones for Distributed Motion
Control. In Proceedings of Fourth International Conference
on Autonomous Agents (Agents 2000), pages 467-474, 2000.
[13] Peeters, P. Valckenaers, J. Wyns, and S. Brueckner.
Manufacturing Control Algorithm and Architecture. In
Proceedings of Second International Workshop on Intelligent
Manufacturing Systems, pages 877-888, K.U. Leuven, 1999.
[14] E. Shannon and W. Weaver. The Mathematical Theory of
Communication. Urbana, IL, University of Illinois, 1949.
[15] Smithsonian Institution. Encyclopedia Smithsonian:
Pheromones in Insects. 1999. Web Page,
http://www.si.edu/resource/faq/nmnh/buginfo/pheromones.htm.
| thermodynamic;Pheromones;entropy;Entropy;coordination;autonomy;pheromones;multi-agent system;self-organization;Self-Organization
84 | Entropy-based Sensor Selection Heuristic for Target Localization | We propose an entropy-based sensor selection heuristic for localization. Given 1) a prior probability distribution of the target location, and 2) the locations and the sensing models of a set of candidate sensors for selection, the heuristic selects an informative sensor such that the fusion of the selected sensor observation with the prior target location distribution would yield on average the greatest or nearly the greatest reduction in the entropy of the target location distribution. The heuristic greedily selects one sensor in each step without retrieving any actual sensor observations. The heuristic is also computationally much simpler than the mutual-information-based approaches. The effectiveness of the heuristic is evaluated using localization simulations in which Gaussian sensing models are assumed for simplicity. The heuristic is more effective when the optimal candidate sensor is more informative. | INTRODUCTION
The recent convergence of micro-electro-mechanical systems
(MEMS) technology, wireless communication and networking
technology, and low-cost low-power miniature digital
hardware design technology has made the concept of
wireless sensor networks viable and a new frontier of research
[2, 1]. The limited on-board energy storage and the limited
wireless channel capacity are the major constraints of wireless
sensor networks. In order to save precious resources,
a sensing task should not involve more sensors than necessary
. From the information-theoretic point of view, sensors
are tasked to observe the target in order to increase the information
(or to reduce the uncertainty) about the target
state. The information gain attributable to one sensor may
be very different from that attributable to another when sensors
have different observation perspectives and sensing uncertainties
. Selective use of informative sensors reduces the
number of sensors needed to obtain information about the
target state and therefore prolongs the system lifetime. In
the scenario of localization or tracking using wireless sensor
networks, the belief state of the target location can be gradually
improved by repeatedly selecting the most informative
unused sensor until the required accuracy (or uncertainty)
level of the target state is achieved.
There have been several investigations into information-theoretic
approaches to sensor fusion and management. The
idea of using information theory in sensor management was
first proposed in [8]. Sensor selection based on expected information
gain was introduced for decentralized sensing systems
in [12]. The mutual information between the predicted
sensor observation and the current target location distribution
was proposed to evaluate the expected information gain
about the target location attributable to a sensor in [11, 6].
On the other hand, without using information theory, Yao
et al. [16] found that the overall localization accuracy depends
on not only the accuracy of individual sensors but
also the sensor locations relative to the target location during
the development of localization algorithms. We propose
a novel entropy-based heuristic for sensor selection based on
our experiences with target localization. It is computationally
more efficient than mutual-information-based methods
proposed in [11, 6].
We use the following notations throughout this paper:
1. S is the set of candidate sensors for selection, i ∈ S is the sensor index;
2. x is the realization of the random vector that denotes the target location;
3. x_t is the actual target location;
4. x̂ is the maximum likelihood estimate of the target location;
5. x_i is the deterministic location of sensor i;
6. z_i is the realization of the random variable that denotes the observation of sensor i about the target location;
7. z_ti is the actual observation of sensor i about the target location;
8. z^v_i is the realization of the random variable that denotes the view of sensor i about the target location.
The rest of this paper is organized as follows. Section 2
describes the heuristic in detail.
Section 3 evaluates the
heuristic using simulations.
Section 4 discusses the discrepancy
between the heuristic and the mutual information
based approaches. Section 5 outlines future work. Section 6
concludes the paper. Section 7 acknowledges the sponsors.
SENSOR SELECTION HEURISTIC
This Sect. formulates the sensor selection problem in localization
, presents the details of the entropy-based sensor
selection heuristic, and discusses the relation between the
entropy difference proposed in this paper and mutual information
used in previous work about sensor selection.
2.1
Sensor Selection Problem in Localization
There are several information measures. In this paper, we
use Shannon entropy [14] to quantify the information gain
(or uncertainty reduction) about the target location due to
sensor observation. We adopt the greedy sensor selection
strategy used in mutual-information-based approaches [11,
6]. The greedy strategy gradually reduces the uncertainty
of the target location distribution by repeatedly selecting
the currently unused sensor with maximal expected information
gain. The observation of the selected sensor is incorporated
into the target location distribution using sequential
Bayesian filtering [3, 7]. The greedy sensor selection and the
sequential information fusion continue until the uncertainty
of the target location distribution is less than or equal to
the required level. The core problem of the greedy sensor
selection approach is how to efficiently evaluate the expected
information gain attributable to each candidate sensor without
actually retrieving sensor data.
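To picture the fusion step, the following Python sketch performs one grid-based sequential Bayesian update of the target location belief; the sensor position, observation value, and noise level are hypothetical, chosen only for illustration, and the helper name is ours.

```python
import numpy as np

def bayes_update(prior, likelihood):
    """Grid-based sequential Bayesian update: posterior is prior times
    likelihood, renormalized to sum to 1 over the grid."""
    posterior = prior * likelihood
    return posterior / posterior.sum()

# Illustration on a 1x1-cell grid of a 400x400 field:
xs, ys = np.meshgrid(np.arange(400.0), np.arange(400.0))
prior = np.full(xs.shape, 1.0 / xs.size)          # uniform initial belief
# Hypothetical range observation z = 120 from a sensor at (50, 60), sigma = 8:
r = np.hypot(xs - 50.0, ys - 60.0)
likelihood = np.exp(-(120.0 - r) ** 2 / (2.0 * 8.0 ** 2))
posterior = bayes_update(prior, likelihood)
```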
The sensor selection problem is formulated as follows.
Given
1. the prior target location distribution: p(x),
2. the locations of candidate sensors for selection: x_i, i ∈ S,
3. the sensing models of candidate sensors for selection: p(z_i|x), i ∈ S,
the objective is to find the sensor î whose observation z_î
minimizes the expected conditional entropy of the posterior
target location distribution,
î = arg min_{i∈S} H(x|z_i) .   (1)
Equivalently, the observation of sensor î maximizes the expected
target location entropy reduction,
î = arg max_{i∈S} (H(x) - H(x|z_i)) .   (2)
H(x) - H(x|z_i) is one expression of I(x; z_i), the mutual
information between the target location x and the predicted
sensor observation z_i,
I(x; z_i) = ∫∫ p(x, z_i) log [ p(x, z_i) / (p(x) p(z_i)) ] dx dz_i ,   (3)
where p(x, z_i) = p(z_i|x) p(x) and p(z_i) = ∫ p(x, z_i) dx. Thus,
the observation of sensor î maximizes the mutual information
I(x; z_i),
î = arg max_{i∈S} I(x; z_i) .   (4)
Sensor selection based on (4) is the maximal mutual information
criterion proposed in [11, 6]. The target location x could have up
to three dimensions. The sensor observation z_i (e.g. the direction
to a target in a three-dimensional space) could have up to two
dimensions. Therefore I(x; z_i) is a complex integral over the joint
state space (x, z_i) of up to five dimensions. The complexity of
computing I(x; z_i) could exceed what low-end sensor nodes are
capable of. If the observation z_i is related to the target location x
only through the sufficient statistic z(x), then
I(x; z_i) = I(z(x); z_i) .   (5)
If z(x) has fewer dimensions than x, then I(z(x); z_i) is less complex
to compute than I(x; z_i). In this special scenario, I(z(x); z_i) has
been proposed to replace I(x; z_i) to reduce the complexity of
computing mutual information in [11]. In this paper, we propose an
alternative entropy-based sensor selection heuristic. In general, the
entropy-based sensor selection heuristic is computationally much
simpler than the mutual-information-based approaches. However,
the observation of the sensor selected by the heuristic would still
yield on average the greatest or nearly the greatest entropy
reduction of the target location distribution.
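For reference, once x and z_i are discretized, equation (3) reduces to a double sum over the joint grid; the sketch below (our own discretization, not the authors' implementation) makes the cost of this evaluation explicit.

```python
import numpy as np

def mutual_information(p_x, p_z_given_x):
    """I(x; z_i) on a grid.

    p_x:          shape (Nx,), prior over the target-location grid points.
    p_z_given_x:  shape (Nx, Nz), discretized sensing model; each row sums to 1.
    The double sum over the joint grid is what grows quickly with the
    dimensionality of (x, z_i)."""
    p_xz = p_x[:, None] * p_z_given_x            # joint p(x, z_i)
    p_z = p_xz.sum(axis=0)                       # marginal p(z_i)
    denom = p_x[:, None] * p_z[None, :]
    mask = p_xz > 0
    return float(np.sum(p_xz[mask] * np.log(p_xz[mask] / denom[mask])))
```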
2.2
Entropy-based Sensor Selection Heuristic
During the development of wireless sensor networks for
localization, we have observed that the localization uncertainty
reduction attributable to a sensor is greatly affected
by the difference of two quantities, namely, the entropy of
the distribution of that sensor's view about the target location
, and the entropy of that sensor's sensing model for the
actual target location.
A sensor's view about the target location is the geometric
projection of the target location onto that sensor's observation
perspective. For example, a direction-of-arrival (DOA)
sensor's view of the target location is the direction from the
sensor to the target. The view of sensor i about the target
location is denoted as z^v_i, which is a function of the target
location x and the sensor location x_i,
z^v_i = f(x, x_i) .   (6)
z^v_i usually has fewer dimensions than x. The probability
distribution of the view of sensor i about the target location,
p(z^v_i), is the projection of the target location distribution
p(x) onto the observation perspective of sensor i,
p(z^v_i) dz^v_i = ∫_{z^v_i ≤ f(x, x_i) ≤ z^v_i + dz^v_i} p(x) dx .   (7)
Alternatively, p(z^v_i) can be regarded as the 'noise free' prediction
of the sensor observation distribution p(z_i) based on
the target location distribution p(x).
Figure 1: A DOA sensor's view about the target location. The
state space of the target location is gridded in 1×1 cells. The
image depicts the probability distribution of the target location
(axes: East and North). The actual target location is (200, 200),
denoted by marker +. From the perspective of the DOA sensor
denoted by the square, only the direction to the target is
observable. The view of the DOA sensor about the target is in
the interval [36°, 38°] if and only if the target is inside the sector
delimited by the 36° line and the 38° line.
In practice, the state space of the target location and the
sensor view can be discretized by gridding for numerical
analysis. The discrete representation of p(z^v_i) can be computed
as follows.
1. Let X be the grid set of the target location x;
2. Let Z be the grid set of the sensor view z^v_i;
3. For each grid point z^v_i ∈ Z, initialize p(z^v_i) to zero;
4. For each grid point x ∈ X, determine the corresponding grid point z^v_i ∈ Z using equation (6), and update its probability as p(z^v_i) = p(z^v_i) + p(x);
5. Normalize p(z^v_i) so that the total probability of the sensor view is 1.
The numerical computation of p(z^v_i) for a DOA sensor is
illustrated in Fig. 1 and Fig. 2.
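A minimal Python sketch of this projection for a DOA sensor follows; the Gaussian prior, the sensor position, and the 2° view bins are illustrative assumptions in the spirit of Figs. 1 and 2, and the function name is ours.

```python
import numpy as np

def doa_view_distribution(p_x, xs, ys, sensor_xy, n_bins=180):
    """Discrete p(z_i^v) for a DOA sensor: project the gridded prior p(x)
    onto the bearing from the sensor to each grid point (steps 1-5 above)."""
    bearings = np.degrees(np.arctan2(ys - sensor_xy[1], xs - sensor_xy[0])) % 360.0
    bins = (bearings // (360.0 / n_bins)).astype(int)         # grid set Z
    p_view = np.zeros(n_bins)
    for b, p in zip(bins.ravel(), p_x.ravel()):               # step 4: accumulate
        p_view[b] += p
    return p_view / p_view.sum()                              # step 5: normalize

# Example in the spirit of Figs. 1 and 2: 1x1 cells, 2-degree view intervals.
xs, ys = np.meshgrid(np.arange(400.0), np.arange(400.0))
p_x = np.exp(-((xs - 200.0) ** 2 + (ys - 200.0) ** 2) / (2.0 * 30.0 ** 2))
p_x /= p_x.sum()                                              # assumed Gaussian prior
p_view = doa_view_distribution(p_x, xs, ys, sensor_xy=(330.0, 80.0), n_bins=180)
```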
The entropy of the probability distribution of the view of
sensor i, H^v_i, is
H^v_i = -∫ p(z^v_i) log p(z^v_i) dz^v_i .   (8)
Given the discrete representation of p(z^v_i) with a grid size
of Δz^v_i, H^v_i can be numerically computed as
H^v_i = -Σ p(z^v_i) log p(z^v_i) Δz^v_i .   (9)
The sensing model of sensor i for the actual target location
x_t is p(z_i|x_t), which describes the probability distribution of
the observation of sensor i given that the target is at x_t. The
sensing model incorporates observation uncertainty from all
sources, including the noise corruption to the signal, the signal
modeling error of the sensor estimation algorithm, and
the inaccuracy of the sensor hardware. For a single-modal
target location distribution p(x), we can use the maximum
Figure 2: The discrete probability distribution of a DOA
sensor's view (horizontal axis: DOA in degrees; vertical axis:
probability). The state space of the DOA sensor view is gridded
in 2° intervals. The target location distribution and the DOA
sensor location are illustrated in Fig. 1. Marker X denotes the
probability of the DOA view interval [36°, 38°], which is the
summation of the probability of all target locations inside the
sector delimited by the 36° line and the 38° line in Fig. 1. Please
note that the sensor view distribution does not depend on the
sensing uncertainty characteristics at all.
likelihood estimate x̂ of the target location to approximate
the actual target location x_t. Thus the entropy of the sensing
model of sensor i for the actual target location x_t is
approximated as
H^s_i = -∫ p(z_i|x̂) log p(z_i|x̂) dz_i .   (10)
For a multi-modal target location distribution p(x) with M
peaks x̂^(m), where m = 1, . . . , M, the entropy of the sensing
model of sensor i for the actual target location x_t can be
approximated as a weighted average of the entropy of the
sensing model for all modes,
H^s_i = -Σ_{m=1}^{M} p(x̂^(m)) ∫ p(z_i|x̂^(m)) log p(z_i|x̂^(m)) dz_i .   (11)
Given a target location distribution p(x), the target location
with maximum likelihood or local maximum likelihood can
be found using standard search algorithms.
We have repeatedly observed that the incorporation of
the observation of sensor i with larger entropy difference
H^v_i - H^s_i yields on average larger reduction in the uncertainty
of the posterior target location distribution p(x|z_i).
Therefore, given a prior target location distribution and the
location and the sensing uncertainty model of a set of candidate
sensors for selection, the entropy difference H^v_i - H^s_i
can sort candidate sensors into nearly the same order as mutual
information I(x; z_i) does. Specifically, the sensor with
the maximal entropy difference H^v_i - H^s_i also has the maximum
or nearly the maximal mutual information I(x; z_i).
Hence we propose to use the entropy difference H^v_i - H^s_i as
an alternative to mutual information I(x; z_i) for selecting
the most informative sensor. The entropy-based heuristic is
to compute H^v_i - H^s_i for every candidate sensor i ∈ S and
then to select sensor î such that
î = arg max_{i∈S} (H^v_i - H^s_i) .   (12)
In Sect. 3, the validity of the heuristic is evaluated using
simulations and the complexity of the heuristic is analyzed
for two-dimensional localization.
The entropy-based sensor
selection heuristic works nearly as well as the mutual-information
-based approaches. In addition, the heuristic is
computationally much simpler than mutual information.
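One way to realize the heuristic in code is sketched below for DOA sensors, reusing the view-projection helper and grid from the earlier sketch. We approximate H^v_i by the differential entropy of the binned view distribution and use the closed-form Gaussian entropy 0.5·log(2πeσ²) for H^s_i; the candidate layout and σ values are illustrative assumptions, not values from the paper.

```python
import numpy as np

def view_entropy(p_view, bin_width_deg=2.0):
    """Histogram approximation of H_i^v (equation 8): the discrete entropy of
    the binned view distribution plus log of the bin width, which makes it
    comparable to the Gaussian differential entropy below."""
    p = p_view[p_view > 0]
    return float(-(p * np.log(p)).sum() + np.log(bin_width_deg))

def gaussian_sensing_entropy(sigma_deg):
    """H_i^s for an unbiased Gaussian sensing model: 0.5 * log(2*pi*e*sigma^2)."""
    return 0.5 * np.log(2.0 * np.pi * np.e * sigma_deg ** 2)

def select_sensor(candidates, p_x, xs, ys):
    """Greedy choice of equation (12): maximize H_i^v - H_i^s.

    candidates is a list of ((x, y), sigma) pairs -- an illustrative layout."""
    def score(cand):
        sensor_xy, sigma = cand
        p_view = doa_view_distribution(p_x, xs, ys, sensor_xy)
        return view_entropy(p_view) - gaussian_sensing_entropy(sigma)
    return max(candidates, key=score)

# Example: two hypothetical DOA sensors with different noise levels.
candidates = [((330.0, 80.0), 4.0), ((60.0, 350.0), 16.0)]
best = select_sensor(candidates, p_x, xs, ys)
```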
2.3
Relation of Entropy Difference
and Mutual Information
A brief analysis of the relation between the entropy difference
H^v_i - H^s_i and mutual information I(x; z_i) helps to reveal
fundamental properties of our sensor selection heuristic.
Mutual information I(x; z_i) has another expression, namely,
H(z_i) - H(z_i|x). The entropy difference H^v_i - H^s_i is closely
related to H(z_i) - H(z_i|x).
H(z_i) is the entropy of the predicted sensor observation
distribution p(z_i),
H(z_i) = -∫ p(z_i) log p(z_i) dz_i .   (13)
The predicted sensor observation distribution p(z_i) becomes
the sensor's view distribution p(z^v_i) when the sensing model
p(z_i|x) is deterministic without uncertainty. The uncertainty
in the sensing model p(z_i|x) makes H(z_i) larger than
the sensor's view entropy H^v_i defined in (8). H^v_i closely
approximates H(z_i) when the entropy of the sensing model
p(z_i|x) is small relative to H^v_i.
H(z_i|x) is actually the expected entropy of the sensing
model averaged over the target location distribution p(x),
H(z_i|x) = -∫∫ p(x, z_i) log p(z_i|x) dx dz_i
         = ∫ p(x) { -∫ p(z_i|x) log p(z_i|x) dz_i } dx .   (14)
When p(x) is a single-modal distribution, H^s_i is defined in
(10), which is the entropy of the sensing model for the most
likely target location estimate x̂. When p(x) is a multi-modal
distribution, H^s_i is defined in (11), which is the average
entropy of the sensing model for all target locations with
local maximal likelihood. When the entropy of the sensing
model, -∫ p(z_i|x) log p(z_i|x) dz_i, changes gradually with x,
H^s_i can reasonably approximate H(z_i|x).
The entropy difference H^v_i - H^s_i reasonably approximates
the mutual information H(z_i) - H(z_i|x) when H^s_i is small
relative to H^v_i and the entropy of the sensing model changes
gradually with x. However, selection of the most informative
sensor does not require an exact evaluation of sensor
information utility. Instead, an order of sensors in terms of
information utility is needed. H^v_i - H^s_i could sort sensors
into approximately the same order as mutual information
does. Therefore, a sensor with the maximal entropy difference
H^v_i - H^s_i also has the maximal or nearly the maximal
mutual information. The correlation between the entropy
difference H^v_i - H^s_i and mutual information I(x; z_i) is analyzed
using simulations in Sect. 3. Section 4 discusses the
discrepancy between the heuristic and the mutual-information-based
approaches.
HEURISTIC EVALUATION
This Sect. presents the evaluation of the entropy-based
sensor selection heuristic using simulations.
The computational
complexity of the heuristic is also analyzed. The
Gaussian noise model has been widely assumed for sensor
observations in many localization and tracking algorithms,
e.g. the Kalman filter [9]. Successes of these algorithms
indicate that the Gaussian sensing model is a reasonable
first-order-approximation of the reality. As a starting point,
we assume Gaussian sensing models in the evaluative simulations
for simplicity. The simple Gaussian sensing models assumed
here are not accurate especially when sensors are very
close to the target. To avoid the problem of over-simplified
sensing models in the simulations, we only analyze sensors
with some middle distance range to the target. The heuristic
will be evaluated further under more realistic sensing
models in the future. Four scenarios of sensor selection for
localization have been studied. Three of them involve DOA
sensors, range sensors, or time-difference-of-arrival (TDOA)
sensors respectively. One of them involves all of the above
sensors mixed together. In every sensor selection scenario,
both the entropy difference H^v_i - H^s_i and mutual information
I(x; z_i) are evaluated and compared for all candidate
sensors. In all sensor selection scenarios, the entropy difference
H^v_i - H^s_i can sort all candidate sensors into nearly the
same order as mutual information I(x; z_i) does. Therefore,
the sensor with the maximal entropy difference H^v_i - H^s_i
selected by the heuristic always has the maximum or nearly
the maximal mutual information I(x; z_i). The larger the
entropy difference H^v_i - H^s_i and mutual information I(x; z_i)
are, the more consistent their sensor selection decisions are.
3.1
Selection of DOA Sensors
Consider now entropy-based sensor selection when all candidate
sensors are DOA sensors, as depicted in Fig. 3. The
prior probability distribution p(x) of the target location x is
non-zero in a limited area. We assume the unbiased Gaussian
sensing models for DOA sensors in some middle distance
range to the target. Specifically, given a target location such
that 10 ≤ ||x - x_i|| ≤ 600, the probability distribution of
the DOA observation z_i is assumed to be
p(z_i|x) = (1 / (√(2π) σ)) exp( -(z_i - z^v_i)² / (2σ²) ) ,   (15)
where z^v_i = f(x, x_i) is the direction from sensor i to the
target location x. For many DOA estimation algorithms
like the approximate maximum likelihood (AML) algorithm
[4], DOA estimation usually becomes much more uncertain
when the candidate sensor is either very near or very far
from the target. In this scenario, we exclude sensors that
are either outside the study area or within a distance of 10
to the area of non-zero p(x).
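For completeness, the Gaussian DOA sensing model of equation (15) and the distance-based exclusion rule can be written as below; the angular wrapping and the helper names are our own simplifications, not the authors' implementation.

```python
import numpy as np

def doa_likelihood(z_deg, xs, ys, sensor_xy, sigma_deg):
    """Equation (15) evaluated on the target grid, with the angular error
    wrapped to [-180, 180) degrees before applying the Gaussian."""
    view = np.degrees(np.arctan2(ys - sensor_xy[1], xs - sensor_xy[0]))
    err = (z_deg - view + 180.0) % 360.0 - 180.0
    return np.exp(-err ** 2 / (2.0 * sigma_deg ** 2)) / (np.sqrt(2.0 * np.pi) * sigma_deg)

def eligible(sensor_xy, nonzero_xy, min_gap=10.0):
    """Exclusion rule above: drop candidates closer than min_gap to the region
    where p(x) is non-zero (the study-area check is omitted for brevity)."""
    d = np.hypot(nonzero_xy[:, 0] - sensor_xy[0], nonzero_xy[:, 1] - sensor_xy[1])
    return d.min() >= min_gap
```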
The entropy difference H^v_i - H^s_i and mutual information
I(x; z_i) of DOA sensors are evaluated and compared in five
cases. In each case, Gaussian sensing models of the same
standard deviation are assumed for all 100 candidate sensors.
However, the standard deviation varies with the case. As
shown in Fig. 4, mutual information I(x; z_i) vs the entropy
difference H^v_i - H^s_i is plotted for all candidate sensors in all
cases. Mutual information I(x; z_i) increases nearly monotonically
with the entropy difference H^v_i - H^s_i. The larger the entropy
difference H^v_i - H^s_i and mutual information I(x; z_i) are, the
more correlated they are. Therefore,
Figure 3: Scenario of sensor selection for localization using
DOA sensors exclusively. The image depicts the prior probability
distribution p(x) of the target location x. p(x) is zero outside the
solid rectangle. The actual target location is (200, 200), denoted
by marker +. The squares denote candidate DOA sensors for
selection. 100 DOA sensors are uniformly randomly placed
outside the dotted rectangle. The gap between the solid rectangle
and the dotted rectangle is 10.
the entropy difference H^v_i - H^s_i sorts DOA sensors in nearly
the same order as mutual information I(x; z_i) does, especially
when the entropy difference H^v_i - H^s_i is large. The candidate
DOA sensor selected by the proposed heuristic has the maximal
entropy difference H^v_i - H^s_i, and also has the maximal mutual
information I(x; z_i).
3.2
Selection of Range Sensors
and TDOA Sensors
This Subsect. evaluates the entropy-based sensor selection
heuristic for range sensors and TDOA sensors respectively.
Fig. 5 shows the sensor selection scenario in which all
candidate sensors can only measure the range to the target
. The prior probability distribution p(x) of the target
location x is non-zero in a limited area. We assume the
unbiased Gaussian sensing models p(z_i|x) for range sensors
used in [13]. When the actual range is small relative to the
standard deviation of the Gaussian sensing model, p(z_i|x)
is significantly greater than zero even for negative values
of range observation z_i. Because a range of negative value
has no physical meaning, the above Gaussian sensing model
is not valid for short ranges. To avoid the above difficulty
of the Gaussian sensing model, we only consider candidate
sensors in some middle distance range to the target. Specifically
, in this range sensor selection scenario, we exclude
sensors that are either outside the study area or within a
distance of 32 to the area of non-zero p(x).
Fig. 6 shows the sensor selection scenario in which only
TDOA sensors are used. The prior probability distribution
p(x) of the target location x is non-zero in a limited area. As
in [15], the signal arrival time difference observed by every
TDOA sensor is relative to a common reference sensor.
Figure 4: Mutual information I(x; z_i) vs entropy difference H_{v_i} - H_{s_i} of DOA sensors (x-axis: entropy difference in bits; y-axis: mutual information in bits; legend: σ = 32, 16, 8, 4, 2). Each symbol denotes the (H_{v_i} - H_{s_i}, I(x; z_i)) pair evaluated for one candidate sensor. The prior target location distribution and the candidate sensor placements are shown in Fig. 3. Five cases with different standard deviation σ of Gaussian sensing models are studied. In each case, all candidate sensors are assumed to have the same σ value.
We also assume the unbiased Gaussian sensing models p(z_i | x) for TDOA sensors. In order to be comparable with the scenarios of DOA sensors and range sensors, we only consider TDOA sensors in the middle distance range to the target. Specifically, we exclude TDOA sensors that are either outside the study area or within a distance of 10 to the area of non-zero p(x).
Following the same approach as the heuristic evaluation for DOA sensors, the entropy difference H_{v_i} - H_{s_i} and mutual information I(x; z_i) of every candidate sensor are evaluated and compared for the range sensor selection scenario in Fig. 5 and for the TDOA sensor selection scenario in Fig. 6, respectively. Mutual information I(x; z_i) vs the entropy difference H_{v_i} - H_{s_i} is plotted in Fig. 7 for all range sensors and in Fig. 8 for all TDOA sensors. In both scenarios, mutual information I(x; z_i) increases nearly monotonically with the entropy difference H_{v_i} - H_{s_i}. The larger the entropy difference H_{v_i} - H_{s_i} and mutual information I(x; z_i) are, the more correlated they are. Using the proposed heuristic, both the selected range sensor and the selected TDOA sensor have the maximal entropy difference H_{v_i} - H_{s_i}, and also have nearly the maximal mutual information I(x; z_i).
3.3
Selection of Mixed Sensors
In order to evaluate the entropy-based sensor selection
heuristic across different sensing modalities, this Subsect. is
devoted to the sensor selection scenario in which candidate
sensors are a mixture of DOA sensors, range sensors and
TDOA sensors.
Fig. 9 shows the sensor selection scenario for mixed candidate
sensors. Each candidate sensor is randomly assigned
one of three sensing modalities, namely, DOA, range, and
TDOA. Gaussian sensing models are assumed for all candidate
sensors with middle range distance to the target.
Figure 5: Scenario of sensor selection for localization using range sensors. The image depicts the prior probability distribution p(x) of the target location x. p(x) is zero outside the solid rectangle. The actual target location is (200, 200), denoted by marker +. The circles denote candidate range sensors for selection. 100 range sensors are uniformly randomly placed outside the dotted rectangle. The gap between the solid rectangle and the dotted rectangle is 32.
Each candidate sensor is also randomly assigned one of five values
of the standard deviation of the sensing model, namely,
2, 4, 8, 16, and 32. 100 candidate sensors are uniformly
randomly placed in the vicinity of the prior target location
estimation. In order to avoid the difficulties of Gaussian
sensing models for DOA sensors and range sensors close to
the target, we exclude sensors either outside the study area
or within a distance of 32 to the non-zero area of the prior
target location distribution p(x).
The entropy difference H_{v_i} - H_{s_i} and mutual information I(x; z_i) of every candidate sensor are evaluated and plotted in Fig. 10. The correlation between H_{v_i} - H_{s_i} and I(x; z_i) of mixed sensors is very similar to the correlation between H_{v_i} - H_{s_i} and I(x; z_i) of sensors with a single modality. Across various sensing modalities, mutual information I(x; z_i) increases nearly monotonically with the entropy difference H_{v_i} - H_{s_i}. Therefore, across various sensing modalities, the candidate sensor with the maximal entropy difference H_{v_i} - H_{s_i}, selected by the proposed heuristic, has the maximal mutual information I(x; z_i).
3.4
Computational Complexity
Computational complexity analysis is an important part
of the evaluation of the heuristic. We will analyze the complexity
of the heuristic and compare it to the complexity of
the mutual-information-based approaches.
For two-dimensional localization, the target location x is two-dimensional. The sensor's view z_{v_i} of the target location x is one-dimensional. The sensor observation z_i is one-dimensional. We assume that all random variables are gridded for numerical computation. Specifically, the area with non-trivial p(x) is gridded into n × n cells.
Figure 6: Scenario of sensor selection for localization using TDOA sensors. The image depicts the prior probability distribution p(x) of the target location x. p(x) is zero outside the solid rectangle. The actual target location is (200, 200), denoted by marker +. The triangles denote candidate TDOA sensors for selection. Every TDOA observation is relative to a common reference sensor denoted by a distinct marker. 100 TDOA sensors are uniformly randomly placed outside the dotted rectangle. The gap between the solid rectangle and the dotted rectangle is 10.
The interval with non-trivial p(z_i) or p(z_{v_i}) is also gridded into n points. We assume there are K candidate sensors for selection. K is usually a small number.
The proposed heuristic evaluates the entropy difference H_{v_i} - H_{s_i} of all sensors and then selects the one with the maximal H_{v_i} - H_{s_i}. As shown in (7), p(z_{v_i}) can be computed from p(x) with cost O(n^2). As shown in (8), H_{v_i} can be computed from p(z_{v_i}) with cost O(n). As shown in (10) and (11), H_{s_i} can be computed from p(z_i | x) with cost O(n). The cost to compute H_{v_i} - H_{s_i} for one candidate sensor is O(n^2). Therefore, the total cost for the heuristic to select one out of K candidate sensors is O(n^2).
The mutual-information-based approaches evaluate the mutual information I(x; z_i) of all sensors and then select the one with the maximal I(x; z_i). As shown in (3), I(x; z_i) can be directly computed from p(x) and p(z_i | x) with a cost of O(n^3). Therefore, the total cost to select one out of K candidate sensors is O(n^3). As we mentioned earlier in Subsect. 2.1, the computational cost of mutual information I(x; z_i) could be reduced in some special scenarios. In general, however, the heuristic is computationally much simpler than the mutual-information-based approaches.
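The sketch below illustrates the gridded computations behind this complexity argument for a single DOA sensor: the entropy-difference heuristic needs an n x n pass over the prior (O(n^2)), while mutual information builds an n^2 x n likelihood table (O(n^3)). It is not the authors' code; the grid resolution, prior, sensor position, and noise level are illustrative assumptions.

    import numpy as np

    n = 64                                   # grid resolution (n x n target grid, n observation bins)
    sigma = 8.0                              # assumed DOA noise standard deviation (degrees)

    # Prior p(x) over an n x n grid (a Gaussian blob here, purely for illustration).
    xs, ys = np.meshgrid(np.arange(n) * 5.0, np.arange(n) * 5.0)
    px = np.exp(-((xs - 160.0) ** 2 + (ys - 160.0) ** 2) / (2 * 40.0 ** 2))
    px /= px.sum()

    # Direction from an assumed sensor at (-50, -50) to every grid cell, in degrees.
    zv = np.degrees(np.arctan2(ys + 50.0, xs + 50.0))       # sensor's view z_{v_i}(x)

    # --- Entropy-difference heuristic: O(n^2) per sensor ---
    zgrid = np.linspace(zv.min(), zv.max(), n)
    pzv, _ = np.histogram(zv, bins=n, range=(zv.min(), zv.max()), weights=px)   # push p(x) through the view
    pzv = pzv / pzv.sum()
    H_v = -np.sum(pzv[pzv > 0] * np.log2(pzv[pzv > 0]))                          # H_{v_i}, O(n)
    zml = zv.flat[np.argmax(px)]                     # sensing-model entropy at the ML target estimate
    pz_ml = np.exp(-(zgrid - zml) ** 2 / (2 * sigma ** 2))
    pz_ml /= pz_ml.sum()
    H_s = -np.sum(pz_ml[pz_ml > 0] * np.log2(pz_ml[pz_ml > 0]))                  # H_{s_i}, O(n)
    print("entropy difference H_v - H_s =", H_v - H_s)

    # --- Mutual information: O(n^3) per sensor ---
    pz_given_x = np.exp(-(zgrid[None, :] - zv.reshape(-1, 1)) ** 2 / (2 * sigma ** 2))
    pz_given_x /= pz_given_x.sum(axis=1, keepdims=True)     # p(z_i | x) for every cell and bin
    pz = px.reshape(-1) @ pz_given_x                         # marginal p(z_i)
    MI = np.sum(px.reshape(-1, 1) * pz_given_x *
                np.log2(np.maximum(pz_given_x, 1e-300) / np.maximum(pz[None, :], 1e-300)))
    print("mutual information I(x; z_i) =", MI)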
DISCREPANCY BETWEEN HEURISTIC AND MUTUAL INFORMATION
As shown in Sect. 3, when the mutual information I(x; z_i) is close to 0 bit, the entropy difference H_{v_i} - H_{s_i} might not sort candidate sensors into exactly the same order as the mutual information does. Such discrepancy is caused by the dispersion of the correlation between the entropy difference H_{v_i} - H_{s_i} and the mutual information I(x; z_i) when the mutual information is small.
Figure 7: Mutual information I(x; z_i) vs entropy difference H_{v_i} - H_{s_i} of range sensors (x-axis: entropy difference in bits; y-axis: mutual information in bits; legend: σ = 32, 16, 8, 4, 2). Each symbol denotes the (H_{v_i} - H_{s_i}, I(x; z_i)) pair evaluated for one candidate sensor. The prior target location distribution and the candidate sensor placements are shown in Fig. 5. Five cases with different standard deviation σ of Gaussian sensing models are studied. In each case, all candidate sensors are assumed to have the same σ value.
In this Sect., we examine such correlation dispersion and evaluate its impact on the discrepancy of sensor selection decisions between the entropy-based heuristic and the mutual-information-based approaches.
4.1
Dispersion
In this Subsect., we describe the dispersion of the correlation between the entropy difference H_{v_i} - H_{s_i} and the mutual information I(x; z_i) when the mutual information is small. We also examine possible sources of such correlation dispersion.
Close examination of the convex part of the mutual information vs. entropy difference curve in Fig. 7 and Fig. 8 reveals that the correlation between the mutual information I(x; z_i) and the entropy difference H_{v_i} - H_{s_i} is not strictly monotonic. Instead, there is obvious dispersion of the correlation. The convex part corresponds to the situation in which candidate sensors are not very informative because the mutual information between the target location distribution and the sensor observation is close to 0 bit. In other words, when candidate sensors are not very informative, the entropy difference H_{v_i} - H_{s_i} might not sort candidate sensors into the same order as the mutual information I(x; z_i) does. Given a set of candidate sensors whose observations could only reduce a small amount of uncertainty of the target location distribution, the sensor selected on the basis of the maximum entropy difference H_{v_i} - H_{s_i} might not have the maximum mutual information I(x; z_i). Thus, there might be a discrepancy between the sensor selection decision of the entropy-based heuristic and that of the mutual-information-based approaches if no candidate sensor is very informative.
Figure 8: Mutual information I(x; z_i) vs entropy difference H_{v_i} - H_{s_i} of TDOA sensors (x-axis: entropy difference in bits; y-axis: mutual information in bits; legend: σ = 32, 16, 8, 4, 2). Each symbol denotes the (H_{v_i} - H_{s_i}, I(x; z_i)) pair evaluated for one candidate sensor. The prior target location distribution and the candidate sensor placements are shown in Fig. 6. Five cases with different standard deviation σ of Gaussian sensing models are studied. In each case, all candidate sensors are assumed to have the same σ value.
There might be multiple causes of such correlation dispersion between the entropy difference H_{v_i} - H_{s_i} and the mutual information I(x; z_i). As pointed out in Subsect. 2.3, the entropy difference H_{v_i} - H_{s_i} can be viewed as an approximation of the mutual information I(x; z_i). Thus, the order of sensors sorted by the entropy difference H_{v_i} - H_{s_i} is intrinsically an approximation of that given by the mutual information I(x; z_i). In practice, the discretization of the state space of the target location random variable and the sensor view random variable might also introduce inaccuracy into the evaluation of H_{v_i}. Besides, as shown in (10) and (11), the maximum likelihood estimate of the target location is used to approximate the actual target location when evaluating the entropy of the sensing model for the actual target location.
4.2
Impact
In this Subsect., we examine the impact of the dispersion of the correlation between the entropy difference H_{v_i} - H_{s_i} and the mutual information I(x; z_i) when the mutual information is small. The analysis shows that such correlation dispersion causes very little degradation to the quality of the sensor selection decision of the entropy-based heuristic.
As shown by the convex part of the mutual information vs. entropy difference curve in Fig. 7 and Fig. 8, there is dispersion of the correlation between the entropy difference H_{v_i} - H_{s_i} and the mutual information I(x; z_i) when candidate sensors are not very informative. We model such dispersion using a uniform distribution bounded by a parallelogram, illustrated in Fig. 11. A candidate sensor could assume any position (H_{v_i} - H_{s_i}, I(x; z_i)) within the parallelogram with uniform probability. As shown in Fig. 11, the geometry of the parallelogram is defined by parameters a, b, and c.
Figure 9: Scenario of sensor selection for localization using sensors with various modalities. The image depicts the prior probability distribution p(x) of the target location x. p(x) is zero outside the solid rectangle. The actual target location is (200, 200), denoted by marker +. The squares, the circles, and the triangles denote DOA sensors, range sensors, and TDOA sensors, respectively. Every TDOA observation is relative to a common reference sensor denoted by a distinct marker. Each sensor is randomly chosen to be a DOA sensor, a range sensor, or a TDOA sensor. Each sensor is also randomly assigned one of five values of the standard deviation σ of Gaussian sensing models, namely, 2, 4, 8, 16, and 32. The size of a symbol indicates the magnitude of σ. 100 sensors of various sensing modalities and σ values are uniformly randomly placed outside the dotted rectangle. The gap between the solid rectangle and the dotted rectangle is 32.
The parameter a is the variation scope of the entropy difference H_{v_i} - H_{s_i} among the set of candidate sensors. c indicates the variation scope of the mutual information I(x; z_i) among the set of candidate sensors. b describes the magnitude of dispersion of the correlation between the entropy difference H_{v_i} - H_{s_i} and the mutual information I(x; z_i). Although the bounded uniform distribution is not accurate, it captures the major features of the correlation dispersion revealed by the simulations in Sect. 3. We choose this dispersion model for simplicity. As a first-order approximation, the simple dispersion model does help to reveal some major characteristics of the impact of the correlation dispersion on heuristic-based sensor selection.
A typical dispersion scenario is illustrated in Fig. 11. The mutual information I(x; z_i) of candidate sensors varies from 0 bit to 1 bit. Correspondingly, the entropy difference H_{v_i} - H_{s_i} of candidate sensors changes from -2 bit to 0 bit. For any value of the entropy difference H_{v_i} - H_{s_i}, the dispersion of the mutual information I(x; z_i) is 0.1 bit. Given the above scenario, we run 10,000 simulations. In each simulation, 8 candidate sensors randomly assume their (H_{v_i} - H_{s_i}, I(x; z_i)) pairs within the specified dispersion range.
Figure 10: Mutual information I(x; z_i) vs entropy difference H_{v_i} - H_{s_i} of mixed sensors (x-axis: entropy difference in bits; y-axis: mutual information in bits; legend: TDOA sensor, DOA sensor, range sensor). Each symbol denotes the (H_{v_i} - H_{s_i}, I(x; z_i)) pair evaluated for one candidate sensor. The prior target location distribution and the candidate sensor placements are shown in Fig. 9.
In each simulation, we identify both the sensor with the maximum entropy difference H_{v_i} - H_{s_i} and the sensor with the maximum mutual information I(x; z_i). With 87.8% chance, the sensor selected by the entropy-based heuristic also has the maximum mutual information. Even when the heuristic fails to select the sensor with the maximum mutual information, the mutual information of the selected sensor is on average only about 0.026 bit less than the maximum mutual information. Overall, the mutual information of the sensor selected by the entropy-based heuristic is about 0.026 × (1 - 87.8%) = 0.0032 bit less than the maximum mutual information. Therefore, most of the time, the correlation dispersion does not cause a discrepancy between the sensor selection decisions of the entropy-based heuristic and the mutual-information-based approaches. Overall, the entropy-based heuristic introduces very little degradation to the quality of the sensor selection decision even when candidate sensors are not very informative.
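A minimal Monte Carlo sketch of this experiment is given below. It is not the authors' code: the exact parameterization of the parallelogram (entropy-difference range of width a, linear mutual-information trend of scope c, uniform dispersion of width b) is an assumption, so the numbers it produces should only land in the neighborhood of the values reported above and in Tables 1-4.

    import numpy as np

    def dispersion_experiment(a=2.0, b=0.1, c=1.0, n_sensors=8, n_trials=10_000, seed=0):
        rng = np.random.default_rng(seed)
        successes = 0
        degradation_on_failure = []
        for _ in range(n_trials):
            # Entropy differences spread uniformly over a range of width a (here [-a, 0]).
            h = rng.uniform(-a, 0.0, size=n_sensors)
            # Mean MI grows linearly with h over a scope of c bits, plus uniform dispersion of width b.
            mi = c * (h + a) / a + rng.uniform(-b / 2, b / 2, size=n_sensors)
            picked = np.argmax(h)          # entropy-difference heuristic
            best = np.argmax(mi)           # mutual-information criterion
            if picked == best:
                successes += 1
            else:
                degradation_on_failure.append(mi[best] - mi[picked])
        p_success = successes / n_trials
        per_failure = float(np.mean(degradation_on_failure)) if degradation_on_failure else 0.0
        return p_success, per_failure, per_failure * (1 - p_success)

    print(dispersion_experiment())         # roughly: chance of success, degradation per failure, overall degradation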
We have analyzed the impact of the correlation dispersion
for different configurations of a, b, c, and the number
of candidate sensors. In Table 1, a = 2 bit, b = 0.1 bit and
c = 1 bit are fixed. We only change the number of candidate
sensors. The chance for the heuristic to successfully
select the sensor with the maximum mutual information decreases
as the number of candidate sensors increases. When
the heuristic fails to select the sensor with the maximum
mutual information, the degradation of sensor selection decision
based on the heuristic compared to that based on
the mutual information does not change with the number of
candidate sensors. Thus, the overall degradation of sensor
selection decision based on the heuristic compared to that
based on mutual information also increases as the number
of candidate sensors increases.
In Table 2, a = 2 bit and c = 1 bit are fixed, and the number of candidate sensors is fixed at 8. We only change the dispersion width b.
Figure 11: Discrepancy between the entropy-based sensor selection heuristic and the mutual-information-based approaches when candidate sensors are not very informative. The dispersion of the correlation between the entropy difference H_{v_i} - H_{s_i} and the mutual information I(x; z_i) is modeled by a uniform distribution bounded by a parallelogram. The geometry of the parallelogram is defined by parameters a, b, and c. Candidate sensors are denoted by markers whose coordinates are (H_{v_i} - H_{s_i}, I(x; z_i)). The entropy-based heuristic selects the rightmost sensor, which has the maximum entropy difference H_{v_i} - H_{s_i} and is enclosed by a square marker. The mutual-information-based approaches select the top sensor, which has the maximum mutual information I(x; z_i) and is enclosed by a diamond-shaped marker. The above two selected sensors might not be the same. In the scenario of this figure, a = 2 bits, b = 0.1 bit, c = 1 bit, and 8 candidate sensors are available for selection.
The chance for the heuristic to successfully select the sensor with the maximum mutual information decreases as the dispersion width b increases.
When the heuristic fails to select the sensor with the maximum
mutual information, the degradation of sensor selection
decision based on the heuristic compared to that based
on the mutual information increases as the dispersion width
b increases. Thus, the overall degradation of sensor selection
decision based on the heuristic compared to that based on
mutual information also increases as the dispersion width b
increases.
In Table 3, a = 2 bit and b = 0.1 bit are fixed, and the number of candidate sensors is fixed at 8. We only change the mutual information variation scope c. The chance for the heuristic to successfully select the sensor with the maximum mutual information increases as the mutual information variation scope c increases.
Table 1: Impact Change with Number of Sensors
  Number of Candidate Sensors:     4       8       16
  Chance of Success (%):           93.6    87.8    78.2
  Degradation per Failure (bit):   0.026   0.026   0.026
  Overall Degradation (bit):       0.0016  0.0032  0.0058
Table 2: Impact Change with Dispersion Width
  Dispersion Width b (bit):        0.05    0.1     0.2
  Chance of Success (%):           93.6    87.8    78.1
  Degradation per Failure (bit):   0.013   0.026   0.054
  Overall Degradation (bit):       0.0008  0.0032  0.012
Table 3: Impact Change with Mutual Info. Scope
  Mutual Info. Scope c (bit):      0.5     1       2
  Chance of Success (%):           78.2    87.8    93.6
  Degradation per Failure (bit):   0.027   0.026   0.025
  Overall Degradation (bit):       0.0058  0.0032  0.0016
When the heuristic fails to select the sensor with the maximum mutual information
, the degradation of sensor selection decision based on
the heuristic compared to that based on the mutual information
does not change much with the mutual information
variation scope c. Thus, the overall degradation of sensor
selection decision based on the heuristic compared to that
based on mutual information decreases as the mutual information
variation scope c increases.
In Table 4, b = 0.1 bit is fixed and the number of candidate sensors is fixed at 8. We proportionally change
the entropy difference variation scope a and the mutual information
variation scope c so that c/a = 1/2 is fixed. The
chance for the heuristic to successfully select the sensor with
the maximum mutual information increases as the entropy
difference variation scope a and the mutual information variation
scope c proportionally increase. When the heuristic
fails to select the sensor with the maximum mutual information
, the degradation of sensor selection decision based
on the heuristic compared to that based on the mutual information
does not change. Thus, the overall degradation of
sensor selection decision based on the heuristic compared to
that based on mutual information decreases as the entropy
difference variation scope a and the mutual information variation
scope c proportionally increase.
FUTURE WORK
When a sensor is selected for tracking a temporally continuous
source, the prior target location distribution at time
t + 1 can be obtained from the posterior target location distribution
at time t by using the target dynamic model as described
in [11]. However, when the sensor selection heuristic
is applied to locate a temporally discontinuous source such
as a bird call, it is not straightforward to obtain the prior
target location distribution used in the sequential Bayesian
fusion. One possible solution to the above problem could be
as follows. First, all sensors buffer the signal
Table 4: Impact Change with Entropy Diff. Scope a and Mutual Info. Scope c in Proportion
  Entropy Diff. Scope a (bit):     1       2       4
  Mutual Info. Scope c (bit):      0.5     1       2
  Chance of Success (%):           78.2    87.8    93.6
  Degradation per Failure (bit):   0.026   0.026   0.026
  Overall Degradation (bit):       0.0058  0.0032  0.0016
once an event such as a bird call is detected. Then, all triggered sensors
elect a leader that received the strongest signal intensity using
a protocol similar to that described in [10]. Finally, the
leader can pick a few sensors to generate an initial prior target
location distribution assuming a certain sensing model.
With the initial prior target location distribution, we can
apply the sensor selection heuristic to incrementally reduce
the uncertainty of the target location distribution. We plan
to implement and test the above mechanism in the future.
5.2
Discretization of State Space
There is a trade-off between computational efficiency and numerical accuracy in the discretization of the state space of random variables such as the target location and the sensor view. The bigger the grid size is, the fewer grid cells are involved in the computation. However, a bigger grid size also introduces more inaccuracy into the evaluation of the entropy difference heuristic. In the future, we must study this trade-off in more detail in order to choose a proper grid size.
5.3
Sensing Uncertainty Model
We have assumed Gaussian sensing models in the simulations
as the first step to evaluate the heuristic. Inaccuracy
of sensing models diminishes the effectiveness of any
sensor selection criterion. We plan to construct a more realistic
sensing model for the AML-based DOA estimation.
We have implemented the AML algorithm for real-time DOA
estimation on a wireless sensor network testbed [5].
We
will first analyze the sensing uncertainty characteristic of
the AML algorithm, and then experimentally validate and
refine it using the testbed. We will also evaluate the effectiveness
of the entropy-based sensor selection heuristic using
realistic sensing models and implement the heuristic on the
real-time wireless sensor network testbed for localization.
CONCLUSION
We have proposed an entropy-based sensor selection heuristic
for localization. The effectiveness of the heuristic has
been evaluated using simulations in which Gaussian sensing
models are assumed for simplicity. Simulations have shown
that the heuristic selects the sensor with nearly the maximal
mutual information between the target location and the sensor
observation. Given the prior target location distribution,
the sensor locations, and the sensing models, on average,
the sensor selected by the heuristic would yield nearly the
greatest reduction in the entropy of the posterior target location
distribution. The heuristic is more effective when the
optimal candidate sensor is more informative. As mutual-information-based sensor selection approaches [11, 6] do, the
heuristic greedily selects one sensor in each step without retrieving
any actual sensor observations. In addition, in general
, our heuristic is computationally much simpler than the
mutual-information-based approaches.
ACKNOWLEDGMENTS
This material is based upon work partially supported by
the National Science Foundation (NSF) under Cooperative
Agreement #CCR-0121778, and DARPA SensIT program
under contract AFRL/IFG 315 330-1865 and AROD-MURI
PSU 50126.
REFERENCES
[1] I. F. Akyildiz, W. Su, Y. Sankarasubramaniam, and E. Cayirci. Wireless sensor networks: a survey. Computer Networks, 38(4):393–442, March 2002.
[2] G. Asada, M. Dong, T. Lin, F. Newberg, G. Pottie, W. Kaiser, and H. Marcy. Wireless integrated network sensors: low power systems on a chip. In Proc. the European Solid State Circuits Conference, The Hague, Netherlands, 1998.
[3] J. M. Bernardo and A. F. M. Smith. Bayesian theory. Wiley, New York, 1996.
[4] J. Chen, R. Hudson, and K. Yao. Maximum-likelihood source localization and unknown sensor location estimation for wideband signals in the near-field. IEEE T. Signal Proces., 50(8):1843–1854, August 2002.
[5] J. Chen, L. Yip, J. Elson, H. Wang, D. Maniezzo, R. Hudson, K. Yao, and D. Estrin. Coherent acoustic array processing and localization on wireless sensor networks. Proc. the IEEE, 91(8):1154–1162, August 2003.
[6] E. Ertin, J. Fisher, and L. Potter. Maximum mutual information principle for dynamic sensor query problems. In Proc. IPSN'03, Palo Alto, CA, April 2003.
[7] S. Haykin. Adaptive filter theory. Prentice Hall, New Jersey, USA, 1996.
[8] K. Hintz and E. McVey. A measure of the information gain attributable to cueing. IEEE T. Syst. Man Cyb., 21(2):434–442, 1991.
[9] R. E. Kalman. A new approach to linear filtering and prediction problems. Trans. of the ASME, Journal of Basic Engineering, 82(Series D):35–45, 1960.
[10] J. Liu, J. Liu, J. Reich, P. Cheung, and F. Zhao. Distributed group management for track initiation and maintenance in target localization applications. In Proc. International Workshop on Information Processing in Sensor Networks (IPSN), Palo Alto, CA, April 2003.
[11] J. Liu, J. Reich, and F. Zhao. Collaborative in-network processing for target tracking. EURASIP JASP: Special Issues on Sensor Networks, 2003(4):378–391, March 2003.
[12] J. Manyika and H. Durrant-Whyte. Data fusion and sensor management: a decentralized information-theoretic approach. Ellis Horwood, New York, 1994.
[13] A. Savvides, W. Garber, S. Adlakha, R. Moses, and M. B. Srivastava. On the error characteristics of multihop node localization in ad-hoc sensor networks. In Proc. IPSN'03, Palo Alto, CA, USA, April 2003.
[14] C. E. Shannon. A mathematical theory of communication. Bell Systems Technical Journal, 27(6):379–423 and 623–656, 1948.
[15] T. Tung, K. Yao, C. Reed, R. Hudson, D. Chen, and J. Chen. Source localization and time delay estimation using constrained least squares and best path smoothing. In Proc. SPIE'99, volume 3807, pages 220–223, July 1999.
[16] K. Yao, R. Hudson, C. Reed, D. Chen, and F. Lorenzelli. Blind beamforming source localization on a sensor array system. In AWAIRS project presentation at UCLA, USA, December 1997.
| Shannon entropy;entropy;target localization;localization;target tracking;wireless sensor networks;mutual information;information-directed resource management;sensor selection;heuristic;information fusion |
85 | Estimating the Global PageRank of Web Communities | Localized search engines are small-scale systems that index a particular community on the web. They offer several benefits over their large-scale counterparts in that they are relatively inexpensive to build, and can provide more precise and complete search capability over their relevant domains. One disadvantage such systems have over large-scale search engines is the lack of global PageRank values. Such information is needed to assess the value of pages in the localized search domain within the context of the web as a whole. In this paper, we present well-motivated algorithms to estimate the global PageRank values of a local domain. The algorithms are all highly scalable in that, given a local domain of size n, they use O(n) resources that include computation time, bandwidth, and storage. We test our methods across a variety of localized domains, including site-specific domains and topic-specific domains. We demonstrate that by crawling as few as n or 2n additional pages, our methods can give excellent global PageRank estimates. | INTRODUCTION
Localized search engines are small-scale search engines
that index only a single community of the web. Such communities
can be site-specific domains, such as pages within
the cs.utexas.edu domain, or topic-related communities, for example, political websites. Compared to the web graph
crawled and indexed by large-scale search engines, the size
of such local communities is typically orders of magnitude
smaller. Consequently, the computational resources needed
to build such a search engine are also similarly lighter. By
restricting themselves to smaller, more manageable sections
of the web, localized search engines can also provide more
precise and complete search capabilities over their respective
domains.
One drawback of localized indexes is the lack of global
information needed to compute link-based rankings. The
PageRank algorithm [3], has proven to be an effective such
measure. In general, the PageRank of a given page is dependent
on pages throughout the entire web graph. In the
context of a localized search engine, if the PageRanks are
computed using only the local subgraph, then we would expect
the resulting PageRanks to reflect the perceived popularity
within the local community and not of the web as a
whole. For example, consider a localized search engine that
indexes political pages with conservative views. A person
wishing to research the opinions on global warming within
the conservative political community may encounter numerous
such opinions across various websites. If only local PageRank
values are available, then the search results will reflect
only strongly held beliefs within the community. However, if
global PageRanks are also available, then the results can additionally reflect outsiders' views of the conservative community (those
(those documents that liberals most often access within
the conservative community).
Thus, for many localized search engines, incorporating
global PageRanks can improve the quality of search results.
However, the number of pages a local search engine indexes
is typically orders of magnitude smaller than the number of
pages indexed by their large-scale counterparts. Localized
search engines do not have the bandwidth, storage capacity,
or computational power to crawl, download, and compute
the global PageRanks of the entire web. In this work, we
present a method of approximating the global PageRanks of
a local domain while only using resources of the same order
as those needed to compute the PageRanks of the local
subgraph.
Our proposed method looks for a supergraph of our local
subgraph such that the local PageRanks within this supergraph
are close to the true global PageRanks. We construct
this supergraph by iteratively crawling global pages on the
current web frontier--i.e., global pages with inlinks from
pages that have already been crawled. In order to provide
a good approximation to the global PageRanks, care must
be taken when choosing which pages to crawl next; in this
paper, we present a well-motivated page selection algorithm
that also performs well empirically. This algorithm is derived
from a well-defined problem objective and has a running
time linear in the number of local nodes.
We experiment across several types of local subgraphs,
including four topic related communities and several site-specific
domains. To evaluate performance, we measure the
difference between the current global PageRank estimate
and the global PageRank, as a function of the number of
pages crawled. We compare our algorithm against several
heuristics and also against a baseline algorithm that chooses
pages at random, and we show that our method outperforms
these other methods. Finally, we empirically demonstrate
that, given a local domain of size n, we can provide good
approximations to the global PageRank values by crawling
at most n or 2n additional pages.
The paper is organized as follows.
Section 2 gives an
overview of localized search engines and outlines their advantages
over global search. Section 3 provides background
on the PageRank algorithm. Section 4 formally defines our
problem, and section 5 presents our page selection criteria
and derives our algorithms. Section 6 provides experimental
results, section 7 gives an overview of related work, and,
finally, conclusions are given in section 8.
LOCALIZED SEARCH ENGINES
Localized search engines index a single community of the
web, typically either a site-specific community, or a topic-specific
community. Localized search engines enjoy three
major advantages over their large-scale counterparts: they
are relatively inexpensive to build, they can offer more precise
search capability over their local domain, and they can
provide a more complete index.
The resources needed to build a global search engine are
enormous. A 2003 study by Lyman et al. [13] found that
the `surface web' (publicly available static sites) consists of
8.9 billion pages, and that the average size of these pages is
approximately 18.7 kilobytes. To download a crawl of this
size, approximately 167 terabytes of space is needed. For a
researcher who wishes to build a search engine with access
to a couple of workstations or a small server, storage of this
magnitude is simply not available. However, building a localized
search engine over a web community of a hundred
thousand pages would only require a few gigabytes of storage
. The computational burden required to support search
queries over a database this size is more manageable as well.
We note that, for topic-specific search engines, the relevant
community can be efficiently identified and downloaded by
using a focused crawler [21, 4].
For site-specific domains, the local domain is readily available
on their own web server. This obviates the need for
crawling or spidering, and a complete and up-to-date index
of the domain can thus be guaranteed. This is in contrast
to their large-scale counterparts, which suffer from several
shortcomings. First, crawling dynamically generated
pages--pages in the `hidden web'--has been the subject of
research [20] and is a non-trivial task for an external crawler.
Second, site-specific domains can enable the robots exclusion
policy. This prohibits external search engines' crawlers
from downloading content from the domain, and an external
search engine must instead rely on outside links and anchor
text to index these restricted pages.
By restricting itself to only a specific domain of the internet
, a localized search engine can provide more precise
search results.
Consider the canonical ambiguous search
query, `jaguar', which can refer to either the car manufacturer
or the animal. A scientist trying to research the habitat
and evolutionary history of a jaguar may have better
success using a finely tuned zoology-specific search engine
than querying Google with multiple keyword searches and
wading through irrelevant results. A method to learn better
ranking functions for retrieval was recently proposed by
Radlinski and Joachims [19] and has been applied to various
local domains, including Cornell University's website [8].
PAGERANK OVERVIEW
The PageRank algorithm defines the importance of web
pages by analyzing the underlying hyperlink structure of a
web graph. The algorithm works by building a Markov chain
from the link structure of the web graph and computing its
stationary distribution. One way to compute the stationary
distribution of a Markov chain is to find the limiting
distribution of a random walk over the chain. Thus, the
PageRank algorithm uses what is sometimes referred to as
the `random surfer' model. In each step of the random walk,
the `surfer' either follows an outlink from the current page
(i.e. the current node in the chain), or jumps to a random
page on the web.
We now precisely define the PageRank problem. Let U be an m × m adjacency matrix for a given web graph such that U_{ji} = 1 if page i links to page j and U_{ji} = 0 otherwise. We define the PageRank matrix P_U to be:
$$P_U = \alpha U D_U^{-1} + (1 - \alpha) v e^T, \qquad (1)$$
where D_U is the (unique) diagonal matrix such that U D_U^{-1} is column stochastic, α is a given scalar such that 0 ≤ α ≤ 1, e is the vector of all ones, and v is a non-negative, L_1-normalized vector, sometimes called the `random surfer' vector. Note that the matrix D_U^{-1} is well-defined only if each column of U has at least one non-zero entry, i.e., each page in the web graph has at least one outlink. In the presence of such `dangling nodes' that have no outlinks, one commonly used solution, proposed by Brin et al. [3], is to replace each zero column of U by a non-negative, L_1-normalized vector.
The PageRank vector r is the dominant eigenvector of the PageRank matrix, r = P_U r. We will assume, without loss of generality, that r has an L_1-norm of one. Computationally, r can be computed using the power method. This method first chooses a random starting vector r^(0), and iteratively multiplies the current vector by the PageRank matrix P_U; see Algorithm 1. In general, each iteration of the power method can take O(m^2) operations when P_U is a dense matrix. However, in practice, the number of links in a web graph will be of the order of the number of pages. By exploiting the sparsity of the PageRank matrix, the work per iteration can be reduced to O(km), where k is the average number of links per web page. It has also been shown that the total number of iterations needed for convergence is proportional to α and does not depend on the size of the web graph [11, 7]. Finally, the total space needed is also O(km), mainly to store the matrix U.
Algorithm 1: A linear time (per iteration) algorithm for computing PageRank.
ComputePR(U)
Input: U: Adjacency matrix.
Output: r: PageRank vector.
Choose (randomly) an initial non-negative vector r^(0) such that ||r^(0)||_1 = 1.
i ← 0
repeat
    i ← i + 1
    r^(i) ← α U D_U^{-1} r^(i-1)   {α is the random surfing probability}
    r^(i) ← r^(i) + (1 - α) v      {v is the random surfer vector}
until ||r^(i) - r^(i-1)||_1 < δ    {δ is the convergence threshold}
r ← r^(i)
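A minimal NumPy sketch of Algorithm 1 follows. It is an illustration under simplifying assumptions (dense matrix, uniform random surfer vector, crude handling of dangling nodes); a real implementation would use a sparse matrix so that each iteration costs O(km) rather than O(m^2).

    import numpy as np

    def compute_pagerank(U, alpha=0.85, v=None, tol=1e-10, max_iter=1000):
        """U[j, i] = 1 if page i links to page j (column i holds i's outlinks)."""
        m = U.shape[0]
        v = np.full(m, 1.0 / m) if v is None else v
        out = U.sum(axis=0).astype(float)        # outlink counts o[i]
        out[out == 0] = 1.0                      # dangling nodes handled crudely here
        r = np.full(m, 1.0 / m)                  # L1-normalized starting vector
        for _ in range(max_iter):
            r_next = alpha * (U @ (r / out)) + (1 - alpha) * v
            r_next /= r_next.sum()               # keep ||r||_1 = 1
            if np.abs(r_next - r).sum() < tol:
                return r_next
            r = r_next
        return r

    # Tiny example: a 4-page graph.
    U = np.array([[0, 1, 1, 0],
                  [1, 0, 0, 1],
                  [0, 1, 0, 1],
                  [1, 0, 1, 0]], dtype=float)
    print(compute_pagerank(U))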
PROBLEM DEFINITION
Given a local domain L, let G be an N × N adjacency matrix for the entire connected component of the web that contains L, such that G_{ji} = 1 if page i links to page j and G_{ji} = 0 otherwise. Without loss of generality, we will partition G as:
$$G = \begin{pmatrix} L & G_{out} \\ L_{out} & G_{within} \end{pmatrix}, \qquad (2)$$
where L is the n × n local subgraph corresponding to links inside the local domain, L_out is the subgraph that corresponds to links from the local domain pointing out to the global domain, G_out is the subgraph containing links from the global domain into the local domain, and G_within contains links within the global domain. We assume that when building a localized search engine, only pages inside the local domain are crawled, and the links between these pages are represented by the subgraph L. The links in L_out are also known, as these point from crawled pages in the local domain to uncrawled pages in the global domain.
As defined in equation (1), P_G is the PageRank matrix formed from the global graph G, and we define the global PageRank vector of this graph to be g. Let the n-length vector p* be the L_1-normalized vector corresponding to the global PageRank of the pages in the local domain L:
$$p^* = \frac{E_L\, g}{\|E_L\, g\|_1},$$
where E_L = [ I | 0 ] is the restriction matrix that selects the components from g corresponding to nodes in L. Let p denote the PageRank vector constructed from the local domain subgraph L. In practice, the observed local PageRank p and the global PageRank p* will be quite different. One would expect that as the size of local matrix L approaches the size of global matrix G, the global PageRank and the observed local PageRank will become more similar. Thus, one approach to estimating the global PageRank is to crawl the entire global domain, compute its PageRank, and extract the PageRanks of the local domain.
Typically, however, n ≪ N, i.e., the number of global pages is much larger than the number of local pages. Therefore, crawling all global pages will quickly exhaust all local resources (computational, storage, and bandwidth) available to create the local search engine. We instead seek a supergraph F̂ of our local subgraph L with size O(n).
Algorithm 2: The FindGlobalPR algorithm.
FindGlobalPR(L, L_out, T, k)
Input: L: zero-one adjacency matrix for the local domain, L_out: zero-one outlink matrix from L to the global subgraph as in (2), T: number of iterations, k: number of pages to crawl per iteration.
Output: p̂: an improved estimate of the global PageRank of L.
F ← L
F_out ← L_out
f ← ComputePR(F)
for (i = 1 to T)
    {Determine which pages to crawl next}
    pages ← SelectNodes(F, F_out, f, k)
    Crawl pages, augment F and modify F_out
    {Update PageRanks for new local domain}
    f ← ComputePR(F)
end
{Extract PageRanks of original local domain and normalize}
p̂ ← E_L f / ||E_L f||_1
Our goal is to find such a supergraph F̂ with PageRank f̂, so that f̂ when restricted to L is close to p*. Formally, we seek to minimize
$$GlobalDiff(\hat f) = \left\| \frac{E_L \hat f}{\|E_L \hat f\|_1} - p^* \right\|_1. \qquad (3)$$
We choose the L_1 norm for measuring the error as it does not place excessive weight on outliers (as the L_2 norm does, for example), and also because it is the most commonly used distance measure in the literature for comparing PageRank vectors, as well as for detecting convergence of the algorithm [3].
We propose a greedy framework, given in Algorithm 2, for constructing F̂. Initially, F is set to the local subgraph L, and the PageRank f of this graph is computed. The algorithm then proceeds as follows. First, the SelectNodes algorithm (which we discuss in the next section) is called and it returns a set of k nodes to crawl next from the set of nodes in the current crawl frontier, F_out. These selected nodes are then crawled to expand the local subgraph, F, and the PageRanks of this expanded graph are then recomputed. These steps are repeated for each of T iterations. Finally, the PageRank vector p̂, which is restricted to pages within the original local domain, is returned. Given our computation, bandwidth, and memory restrictions, we will assume that the algorithm will crawl at most O(n) pages. Since the PageRanks are computed in each iteration of the algorithm, which is an O(n) operation, we will also assume that the number of iterations T is a constant. Of course, the main challenge here is in selecting which set of k nodes to crawl next. In the next section, we formally define the problem and give efficient algorithms.
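A minimal sketch of this greedy framework and of the objective (3) is shown below. It reuses the compute_pagerank sketch above; the select_nodes and crawl callables and the convention that the original local pages occupy the first rows/columns are assumptions for illustration, not part of the paper's code.

    import numpy as np

    def global_diff(f_hat, local_ids, p_star):
        """Objective (3): L1 distance between the renormalized local part of f_hat and p*."""
        local = f_hat[local_ids]
        return np.abs(local / local.sum() - p_star).sum()

    def find_global_pr(F, F_out, select_nodes, crawl, T=3, k=100):
        """Greedy expansion: crawl k frontier pages per iteration, then recompute PageRank."""
        n_local = F.shape[0]                      # pages of the original local domain come first
        f = compute_pagerank(F)
        for _ in range(T):
            pages = select_nodes(F, F_out, f, k)  # pick frontier pages to crawl next
            F, F_out = crawl(F, F_out, pages)     # augment the subgraph with the crawled pages
            f = compute_pagerank(F)
        p_hat = f[:n_local]
        return p_hat / p_hat.sum()                # estimate of the global PageRank of L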
NODE SELECTION
In this section, we present node selection algorithms that
operate within the greedy framework presented in the previous
section. We first give a well-defined criterion for the page selection problem and provide experimental evidence that this criterion can effectively identify pages that optimize our problem objective (3). We then present our main algorithmic contribution of the paper, a method with linear running time that is derived from this page selection criterion
. Finally, we give an intuitive analysis of our algorithm in
terms of `leaks' and `flows'. We show that if only the `flow'
is considered, then the resulting method is very similar to a
widely used page selection heuristic [6].
5.1
Formulation
For a given page j in the global domain, we define the expanded local graph F_j:
$$F_j = \begin{pmatrix} F & s \\ u_j^T & 0 \end{pmatrix}, \qquad (4)$$
where u_j is the zero-one vector containing the outlinks from F into page j, and s contains the inlinks from page j into the local domain. Note that we do not allow self-links in this framework. In practice, self-links are often removed, as they only serve to inflate a given page's PageRank.
Observe that the inlinks into F from node j are not known until after node j is crawled. Therefore, we estimate this inlink vector as the expectation over inlink counts among the set of already crawled pages,
$$s = \frac{F^T e}{\|F^T e\|_1}. \qquad (5)$$
In practice, for any given page, this estimate may not reflect the true inlinks from that page. Furthermore, this expectation is sampled from the set of links within the crawled domain, whereas a better estimate would also use links from the global domain. However, the latter distribution is not known to a localized search engine, and we contend that the above estimate will, on average, be a better estimate than the uniform distribution, for example.
Let the PageRank of F be f. We express the PageRank f_j^+ of the expanded local graph F_j as
$$f_j^+ = \begin{pmatrix} (1 - x_j) f_j \\ x_j \end{pmatrix}, \qquad (6)$$
where x_j is the PageRank of the candidate global node j, and f_j is the L_1-normalized PageRank vector restricted to the pages in F.
Since directly optimizing our problem goal requires knowing the global PageRank p*, we instead propose to crawl those nodes that will have the greatest influence on the PageRanks of pages in the original local domain L:
$$influence(j) = \sum_{k \in L} \left| f_j[k] - f[k] \right| = \left\| E_L (f_j - f) \right\|_1. \qquad (7)$$
Experimentally, the influence score is a very good predictor of our problem objective (3). For each candidate global node j, Figure 1(a) shows the objective function value GlobalDiff(f_j) as a function of the influence of page j. The local domain used here is a crawl of conservative political pages (we will provide more details about this dataset in section 6); we observed similar results in other domains. The correlation is quite strong, implying that the influence criterion can effectively identify pages that improve the global PageRank estimate. As a baseline, Figure 1(b) compares our objective with an alternative criterion, outlink count. The outlink count is defined as the number of outlinks from the local domain to page j. The correlation here is much weaker.
Figure 1: (a) The correlation between our influence page selection criterion (7) and the actual objective function (3) value is quite strong. (b) This is in contrast to other criteria, such as outlink count, which exhibit a much weaker correlation. (Axes: influence or outlink count on the x-axis, objective value on the y-axis.)
5.2
Computation
As described, for each candidate global page j, the influence score (7) must be computed. If f_j is computed exactly for each global page j, then the PageRank algorithm would need to be run for each of the O(n) such global pages j we consider, resulting in an O(n^2) computational cost for the node selection method. Thus, computing the exact value of f_j will lead to a quadratic algorithm, and we must instead turn to methods of approximating this vector.
The algorithm we present works by performing one power method iteration used by the PageRank algorithm (Algorithm 1). The convergence rate for the PageRank algorithm has been shown to equal the random surfer probability α [7, 11]. Given a starting vector x^(0), if k PageRank iterations are performed, the current PageRank solution x^(k) satisfies:
$$\|x^{(k)} - x^*\|_1 = O\!\left(\alpha^k \|x^{(0)} - x^*\|_1\right), \qquad (8)$$
where x* is the desired PageRank vector. Therefore, if only one iteration is performed, choosing a good starting vector is necessary to achieve an accurate approximation.
We partition the PageRank matrix P_{F_j}, corresponding to the (ℓ + 1) × (ℓ + 1) subgraph F_j, as:
$$P_{F_j} = \begin{pmatrix} \tilde F & \tilde s \\ \tilde u_j^T & w \end{pmatrix}, \qquad (9)$$
where
$$\tilde F = \alpha F (D_F + \mathrm{diag}(u_j))^{-1} + (1 - \alpha) \frac{e}{\ell + 1} e^T,$$
$$\tilde s = \alpha s + (1 - \alpha) \frac{e}{\ell + 1}, \qquad \tilde u_j = \alpha (D_F + \mathrm{diag}(u_j))^{-1} u_j + (1 - \alpha) \frac{e}{\ell + 1}, \qquad w = \frac{1 - \alpha}{\ell + 1},$$
and diag(u_j) is the diagonal matrix with the (i, i)-th entry equal to one if the i-th element of u_j equals one, and zero otherwise. We have assumed here that the random surfer vector is the uniform vector, and that L has no `dangling links'. These assumptions are not necessary and serve only to simplify discussion and analysis.
A simple approach for estimating f_j is the following. First, estimate the PageRank f_j^+ of F_j by computing one PageRank iteration over the matrix P_{F_j}, using the starting vector [f^T 0]^T. Then, estimate f_j by removing the last component from our estimate of f_j^+ (i.e., the component corresponding to the added node j), and renormalizing.
The problem with this approach is in the starting vector. Recall from (6) that x_j is the PageRank of the added node j. The difference between the actual PageRank f_j^+ of P_{F_j} and the starting vector is
$$\left\| \begin{pmatrix} f \\ 0 \end{pmatrix} - f_j^+ \right\|_1 = x_j + \left\| f - (1 - x_j) f_j \right\|_1 \;\ge\; x_j + \left|\, \|f\|_1 - (1 - x_j)\|f_j\|_1 \right| = x_j + |x_j| = 2 x_j.$$
Thus, by (8), after one PageRank iteration, we expect our estimate of f_j^+ to still have an error of about 2x_j. In particular, for candidate nodes j with relatively high PageRank x_j, this method will yield more inaccurate results. We will next present a method that eliminates this bias and runs in O(n) time.
5.2.1
Stochastic Complementation
Since f_j^+, as given in (6), is the PageRank of the matrix P_{F_j}, we have:
$$\begin{pmatrix} f_j (1 - x_j) \\ x_j \end{pmatrix} = \begin{pmatrix} \tilde F & \tilde s \\ \tilde u_j^T & w \end{pmatrix} \begin{pmatrix} f_j (1 - x_j) \\ x_j \end{pmatrix} = \begin{pmatrix} \tilde F f_j (1 - x_j) + \tilde s\, x_j \\ \tilde u_j^T f_j (1 - x_j) + w\, x_j \end{pmatrix}.$$
Solving the above system for f_j can be shown to yield
$$f_j = \left(\tilde F + (1 - w)^{-1}\, \tilde s\, \tilde u_j^T\right) f_j. \qquad (10)$$
The matrix S = F̃ + (1 - w)^{-1} s̃ ũ_j^T is known as the stochastic complement of the column stochastic matrix P_{F_j} with respect to the sub-matrix F̃. The theory of stochastic complementation is well studied, and it can be shown that the stochastic complement of an irreducible matrix (such as the PageRank matrix) is unique. Furthermore, the stochastic complement is also irreducible and therefore has a unique stationary distribution as well. For an extensive study, see [15].
It can be easily shown that the sub-dominant eigenvalue of S is at most $\frac{\ell}{\ell+1}\alpha$, where ℓ is the size of F. For sufficiently large ℓ, this value will be very close to α. This is important, as other properties of the PageRank algorithm, notably the algorithm's sensitivity, are dependent on this value [11].
In this method, we estimate the length-ℓ vector f_j by computing one PageRank iteration over the ℓ × ℓ stochastic complement S, starting at the vector f:
$$f_j \approx S f. \qquad (11)$$
This is in contrast to the simple method outlined in the previous section, which first iterates over the (ℓ + 1) × (ℓ + 1) matrix P_{F_j} to estimate f_j^+, and then removes the last component from the estimate and renormalizes to approximate f_j. The problem with the latter method is in the choice of the (ℓ + 1)-length starting vector. Consequently, the PageRank estimate given by the simple method differs from the true PageRank by at least 2x_j, where x_j is the PageRank of page j. By using the stochastic complement, we can establish a tight lower bound of zero for this difference. To see this, consider the case in which a node k is added to F to form the augmented local subgraph F_k, and that the PageRank of this new graph is [(1 - x_k) f; x_k]. Specifically, the addition of page k does not change the PageRanks of the pages in F, and thus f_k = f. By construction of the stochastic complement, f_k = S f_k, so the approximation given in equation (11) will yield the exact solution.
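A minimal dense sketch of this estimate for a single candidate page is given below. It is an illustration, not the paper's optimized code: dense algebra is used for clarity, the inlink estimate and the no-dangling-pages assumption follow the simplifications stated above, and Algorithm 3 obtains the same quantities for all candidates in O(n) total time.

    import numpy as np

    def estimate_fj(F, u_j, f, alpha=0.85):
        """F: current subgraph adjacency (F[k', k] = 1 if page k links to k'; no dangling pages),
        u_j: 0/1 outlinks from F into candidate j, f: current PageRank of F (L1-normalized)."""
        ell = F.shape[0]
        e = np.ones(ell)
        s = F.sum(axis=1).astype(float)                     # expected inlink profile, cf. eq. (5)
        s = s / s.sum()
        D_inv = 1.0 / (F.sum(axis=0) + u_j)                 # (D_F + diag(u_j))^{-1} as a vector
        F_tilde = alpha * F * D_inv[None, :] + (1 - alpha) * np.outer(e, e) / (ell + 1)
        s_tilde = alpha * s + (1 - alpha) * e / (ell + 1)
        u_tilde = alpha * D_inv * u_j + (1 - alpha) * e / (ell + 1)
        w = (1 - alpha) / (ell + 1)
        S = F_tilde + np.outer(s_tilde, u_tilde) / (1 - w)  # stochastic complement of P_{F_j}
        return S @ f                                        # one-iteration estimate of f_j, eq. (11)

    # The influence estimate (7) for candidate j is then np.abs(estimate_fj(F, u_j, f) - f)[:n_local].sum().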
Next, we present the computational details needed to efficiently compute the quantity ||f_j - f||_1 over all known global pages j. We begin by expanding the difference f_j - f, where the vector f_j is estimated as in (11),
$$f_j - f \approx S f - f = \alpha F (D_F + \mathrm{diag}(u_j))^{-1} f + (1 - \alpha) \frac{e}{\ell + 1} e^T f + (1 - w)^{-1} (\tilde u_j^T f)\, \tilde s - f. \qquad (12)$$
Note that the matrix (D_F + diag(u_j))^{-1} is diagonal. Letting o[k] be the outlink count for page k in F, we can express the k-th diagonal element as:
$$(D_F + \mathrm{diag}(u_j))^{-1}[k, k] = \begin{cases} \frac{1}{o[k] + 1} & \text{if } u_j[k] = 1 \\ \frac{1}{o[k]} & \text{if } u_j[k] = 0 \end{cases}$$
Noting that $(o[k] + 1)^{-1} = o[k]^{-1} - (o[k](o[k] + 1))^{-1}$ and rewriting this in matrix form yields
$$(D_F + \mathrm{diag}(u_j))^{-1} = D_F^{-1} - D_F^{-1} (D_F + \mathrm{diag}(u_j))^{-1} \mathrm{diag}(u_j). \qquad (13)$$
We use the same identity to express
$$\frac{e}{\ell + 1} = \frac{e}{\ell} - \frac{e}{\ell(\ell + 1)}. \qquad (14)$$
Recall that, by definition, we have $P_F = \alpha F D_F^{-1} + (1 - \alpha) \frac{e}{\ell} e^T$. Substituting (13) and (14) in (12) yields
$$f_j - f \approx (P_F f - f) - \alpha F D_F^{-1} (D_F + \mathrm{diag}(u_j))^{-1} \mathrm{diag}(u_j) f - (1 - \alpha) \frac{e}{\ell(\ell + 1)} + (1 - w)^{-1} (\tilde u_j^T f)\, \tilde s = x + y + (\tilde u_j^T f)\, z, \qquad (15)$$
noting that, by definition, f = P_F f, and defining the vectors x, y, and z to be
$$x = -\alpha F D_F^{-1} (D_F + \mathrm{diag}(u_j))^{-1} \mathrm{diag}(u_j) f \qquad (16)$$
$$y = -(1 - \alpha) \frac{e}{\ell(\ell + 1)} \qquad (17)$$
$$z = (1 - w)^{-1}\, \tilde s. \qquad (18)$$
The first term x is a sparse vector, and takes non-zero values only for local pages k that are siblings of the global page j. We define (i, j) ∈ F if and only if F[j, i] = 1 (equivalently, page i links to page j) and express the value of the component x[k'] as:
$$x[k'] = -\alpha \sum_{k : (k, k') \in F,\; u_j[k] = 1} \frac{f[k]}{o[k](o[k] + 1)}, \qquad (19)$$
where o[k], as before, is the number of outlinks from page k in the local domain. Note that the last two terms, y and z, are not dependent on the current global node j. Given the function $h_j(f) = \|y + (\tilde u_j^T f)\, z\|_1$, the quantity ||f_j - f||_1
can be expressed as
$$\|f_j - f\|_1 = \sum_k \left| x[k] + y[k] + (\tilde u_j^T f)\, z[k] \right| = \sum_{k : x[k] = 0} \left| y[k] + (\tilde u_j^T f)\, z[k] \right| + \sum_{k : x[k] \ne 0} \left| x[k] + y[k] + (\tilde u_j^T f)\, z[k] \right|$$
$$= h_j(f) - \sum_{k : x[k] \ne 0} \left| y[k] + (\tilde u_j^T f)\, z[k] \right| + \sum_{k : x[k] \ne 0} \left| x[k] + y[k] + (\tilde u_j^T f)\, z[k] \right|. \qquad (20)$$
If we can compute the function h_j in linear time, then we can compute each value of ||f_j - f||_1 using an additional amount of time that is proportional to the number of non-zero components in x. These optimizations are carried out in Algorithm 3. Note that (20) computes the difference between all components of f and f_j, whereas our node selection criterion, given in (7), is restricted to the components corresponding to nodes in the original local domain L.
Let us examine Algorithm 3 in more detail. First, the algorithm computes the outlink counts for each page in the local domain. The algorithm then computes the quantity ũ_j^T f for each known global page j. This inner product can be written as
$$\tilde u_j^T f = (1 - \alpha) \frac{1}{\ell + 1} + \alpha \sum_{k : (k, j) \in F_{out}} \frac{f[k]}{o[k] + 1},$$
where the second term sums over the set of local pages that link to page j. Since the total number of edges in F_out was assumed to have size O(ℓ) (recall that ℓ is the number of pages in F), the running time of this step is also O(ℓ).
The algorithm then computes the vectors y and z, as given in (17) and (18), respectively. The L1NormDiffs method is called on the components of these vectors which correspond to the pages in L, and it estimates the value of ||E_L(y + (ũ_j^T f) z)||_1 for each page j. The estimation works as follows. First, the values of ũ_j^T f are discretized uniformly into c values {a_1, ..., a_c}. The quantity ||E_L(y + a_i z)||_1 is then computed for each discretized value a_i and stored in a table. To evaluate ||E_L(y + a z)||_1 for some a ∈ [a_1, a_c], the closest discretized value a_i is determined, and the corresponding entry in the table is used. The total running time for this method is linear in ℓ and the discretization parameter c (which we take to be a constant). We note that if exact values are desired, we have also developed an algorithm that runs in O(ℓ log ℓ) time that is not described here.
In the main loop, we compute the vector x, as defined in equation (16). The nested loops iterate over the set of pages in F that are siblings of page j. Typically, the size of this set is bounded by a constant. Finally, for each page j, the scores vector is updated over the set of non-zero components k of the vector x with k ∈ L. This set has size equal to the number of local siblings of page j, and is a subset of the total number of siblings of page j. Thus, each iteration of the main loop takes constant time, and the total running time of the main loop is O(ℓ). Since we have assumed that the size of F will not grow larger than O(n), the total running time for the algorithm is O(n).
Algorithm 3: Node Selection via Stochastic Complementation.
SC-Select(F, F_out, f, k)
Input: F: zero-one adjacency matrix of size corresponding to the current local subgraph, F_out: zero-one outlink matrix from F to the global subgraph, f: PageRank of F, k: number of pages to return
Output: pages: set of k pages to crawl next

{Compute outlink sums for local subgraph}
foreach (page j ∈ F)
    o[j] ← Σ_{k : (j,k) ∈ F} F[j, k]
end
{Compute scalar ũ_j^T f for each global node j}
foreach (page j ∈ F_out)
    g[j] ← (1 - α)/(|F| + 1)
    foreach (page k : (k, j) ∈ F_out)
        g[j] ← g[j] + α f[k]/(o[k] + 1)
    end
end
{Compute vectors y and z as in (17) and (18)}
y ← vector given by (17)
z ← vector given by (18)
{Approximate ||y + g[j] z||_1 for all values g[j]}
norm_diffs ← L1NormDiffs(g, E_L y, E_L z)
foreach (page j ∈ F_out)
    {Compute sparse vector x as in (19)}
    x ← 0
    foreach (page k : (k, j) ∈ F_out)
        foreach (page k' : (k, k') ∈ F)
            x[k'] ← x[k'] - f[k]/(o[k](o[k] + 1))
        end
    end
    x ← α x
    scores[j] ← norm_diffs[j]
    foreach (k : x[k] ≠ 0 and page k ∈ L)
        scores[j] ← scores[j] - |y[k] + g[j] z[k]| + |x[k] + y[k] + g[j] z[k]|
    end
end
Return k pages with highest scores
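For readers who prefer code to pseudocode, the following Python sketch mirrors the scoring loop above. The data layout (dictionaries of out-edges, per-page dictionaries for f, o, y, z) and all names are our own choices, and the α placement follows our reconstruction of (19); the exact-norm sum marked below would, in the actual algorithm, come from the precomputed L1NormDiffs table rather than being recomputed per page:

```python
def sc_select(F_adj, Fout_adj, f, o, y, z, alpha, local_pages, k):
    """Score each known global page j and return the k highest-scoring pages.

    F_adj[p]    : set of local pages that local page p links to
    Fout_adj[j] : set of local pages with an edge into global page j
    f, o, y, z  : per-local-page PageRank, outlink count, and the (17)/(18) vectors
    """
    n_F = len(f)
    # g[j] approximates u~_j^T f: uniform-jump term plus weighted in-links.
    g = {}
    for j, inlinks in Fout_adj.items():
        g[j] = (1.0 - alpha) / (n_F + 1)
        for p in inlinks:
            g[j] += alpha * f[p] / (o[p] + 1)

    scores = {}
    for j, inlinks in Fout_adj.items():
        # Sparse leak vector x: non-zero only at local siblings of j.
        x = {}
        for p in inlinks:
            for sib in F_adj.get(p, ()):
                x[sib] = x.get(sib, 0.0) - alpha * f[p] / (o[p] * (o[p] + 1))
        # Start from ||y + g[j] z||_1 over L (here recomputed; in the paper it is
        # read from the precomputed lookup table), then correct the non-zero x entries.
        score = sum(abs(y[p] + g[j] * z[p]) for p in local_pages)
        for sib, xv in x.items():
            if sib in local_pages:
                score += -abs(y[sib] + g[j] * z[sib]) + abs(xv + y[sib] + g[j] * z[sib])
        scores[j] = score
    return sorted(scores, key=scores.get, reverse=True)[:k]
```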
5.2.2
PageRank Flows
We now present an intuitive analysis of the stochastic
complementation method by decomposing the change in PageRank
in terms of `leaks' and `flows'. This analysis is motivated
by the decomposition given in (15). PageRank `flow' is
the increase in the local PageRanks originating from global
page j. The flows are represented by the non-negative vector
(ũ_j^T f) z (equations (15) and (18)). The scalar ũ_j^T f can be
thought of as the total amount of PageRank flow that page
j has available to distribute. The vector z dictates how the
flow is allocated to the local domain; the flow that local
page k receives is proportional to (within a constant factor
due to the random surfer vector) the expected number of its
inlinks.
The PageRank `leaks' represent the decrease in PageRank
resulting from the addition of page j.
The leakage can
be quantified in terms of the non-positive vectors x and
y (equations (16) and (17)). For vector x, we can see from
equation (19) that the amount of PageRank leaked by a
local page is proportional to the weighted sum of the PageRanks of its siblings. Thus, pages that have siblings with
higher PageRanks (and low outlink counts) will experience
more leakage. The leakage caused by y is an artifact of the
random surfer vector.
We will next show that if only the `flow' term, (ũ_j^T f) z,
is considered, then the resulting method is very similar to
a heuristic proposed by Cho et al. [6] that has been widely
used for the "Crawling Through URL Ordering" problem.
This heuristic is computationally cheaper, but as we will see
later, not as effective as the Stochastic Complementation
method.
Our node selection strategy chooses global nodes that
have the largest influence (equation (7)). If this influence is
approximated using only `flows', the optimal node j* is:

j* = argmax_j ||E_L (ũ_j^T f) z||_1
   = argmax_j (ũ_j^T f) ||E_L z||_1
   = argmax_j ũ_j^T f
   = argmax_j ⟨ α(D_F + diag(u_j))^{-1} u_j + (1 - α) e/(|F| + 1), f ⟩
   = argmax_j f^T (D_F + diag(u_j))^{-1} u_j.
The resulting page selection score can be expressed as a sum
of the PageRanks of each local page k that links to j, where
each PageRank value is normalized by o[k]+1. Interestingly,
the normalization that arises in our method differs from the
heuristic given in [6], which normalizes by o[k].
The algorithm
PF-Select, which is omitted due to lack of space,
first computes the quantity f^T (D_F + diag(u_j))^{-1} u_j for each
global page j, and then returns the pages with the k largest
scores. To see that the running time for this algorithm is
O(n), note that the computation involved in this method is
a subset of that needed for the SC-Select method (Algorithm
3), which was shown to have a running time of O(n).
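Because the flow-only score reduces to a weighted sum of in-linking local PageRanks, PF-Select is only a few lines; a sketch under the same assumed data layout as above:

```python
def pf_select(Fout_adj, f, o, k):
    """Rank known global pages by the PageRank 'flow' they would receive from L."""
    scores = {j: sum(f[p] / (o[p] + 1) for p in inlinks)
              for j, inlinks in Fout_adj.items()}
    return sorted(scores, key=scores.get, reverse=True)[:k]
```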
EXPERIMENTS
In this section, we provide experimental evidence to verify
the effectiveness of our algorithms. We first outline our
experimental methodology and then provide results across
a variety of local domains.
6.1
Methodology
Given the limited resources available at an academic institution
, crawling a section of the web that is of the same
magnitude as that indexed by Google or Yahoo! is clearly
infeasible. Thus, for a given local domain, we approximate
the global graph by crawling a local neighborhood around
the domain that is several orders of magnitude larger than
the local subgraph. Even though such a graph is still orders
of magnitude smaller than the `true' global graph, we contend
that, even if there exist some highly influential pages
that are very far away from our local domain, it is unrealistic for any local node selection algorithm to find them. Such
pages also tend to be highly unrelated to pages within the
local domain.
When explaining our node selection strategies in section
5, we made the simplifying assumption that our local graph
contained no dangling nodes.
This assumption was only
made to ease our analysis. Our implementation efficiently
handles dangling links by replacing each zero column of our
adjacency matrix with the uniform vector. We evaluate the
algorithm using the two node selection strategies given in
Section 5.2, and also against the following baseline methods:
Random: Nodes are chosen uniformly at random among
the known global nodes.
OutlinkCount: Global nodes with the highest number
of outlinks from the local domain are chosen.
At each iteration of the FindGlobalPR algorithm, we evaluate performance by computing the difference between the current PageRank estimate of the local domain, E_L f / ||E_L f||_1, and the global PageRank of the local domain, E_L g / ||E_L g||_1. All PageRank calculations were performed using the uniform random surfer vector. Across all experiments, we set the random surfer parameter α to be .85, and used a convergence threshold of 10^{-6}. We evaluate the difference between the local and global PageRank vectors using three different metrics: the L_1 and L_∞ norms, and Kendall's tau. The L_1 norm measures the sum of the absolute value of the differences between the two vectors, and the L_∞ norm measures the absolute value of the largest difference. Kendall's tau metric is a popular rank correlation measure used to compare PageRanks [2, 11]. This metric can be computed by counting the number of pairs of pages that agree in ranking, and subtracting from that the number of pairs of pages that disagree in ranking. The final value is then normalized by the total number of such pairs, n(n - 1)/2, resulting in a [-1, 1] range, where a negative score signifies anti-correlation among rankings, and values near one correspond to strong rank correlation.
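All three metrics are easy to reproduce; a sketch using NumPy and SciPy (scipy.stats.kendalltau computes a tie-corrected variant of the statistic, which differs only slightly from the pair-counting normalization described above):

```python
import numpy as np
from scipy.stats import kendalltau

def compare_pageranks(est, true):
    """est, true: normalized PageRank vectors restricted to the local domain."""
    l1 = np.abs(est - true).sum()
    linf = np.abs(est - true).max()
    tau, _ = kendalltau(est, true)
    return l1, linf, tau

est = np.array([0.40, 0.30, 0.20, 0.10])
true = np.array([0.35, 0.35, 0.20, 0.10])
print(compare_pageranks(est, true))
```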
6.2
Results
Our experiments are based on two large web crawls and
were downloaded using the web crawler that is part of the
Nutch open source search engine project [18]. All crawls
were restricted to only `http' pages, and to limit the number
of dynamically generated pages that we crawl, we ignored
all pages with urls containing any of the characters
`?', `*', `@', or `='. The first crawl, which we will refer to
as the `edu' dataset, was seeded by homepages of the top
100 graduate computer science departments in the USA, as
rated by the US News and World Report [16], and also by
the home pages of their respective institutions. A crawl of
depth 5 was performed, restricted to pages within the `.edu'
domain, resulting in a graph with approximately 4.7 million
pages and 22.9 million links. The second crawl was seeded
by the set of pages under the `politics' hierarchy in the dmoz
open directory project[17]. We crawled all pages up to four
links away, which yielded a graph with 4.4 million pages and
17.3 million links.
Within the `edu' crawl, we identified the five site-specific
domains corresponding to the websites of the top five graduate
computer science departments, as ranked by the US
News and World Report. This yielded local domains of various
sizes, from 10,626 (UIUC) to 59,895 (Berkeley). For each
of these site-specific domains with size n, we performed 50
iterations of the FindGlobalPR algorithm to crawl a total
of 2n additional nodes. Figure 2(a) gives the L_1 difference
from the PageRank estimate at each iteration to the global
PageRank, for the Berkeley local domain.
The performance of this dataset was representative of the
typical performance across the five computer science site-specific
local domains. Initially, the L
1
difference between
the global and local PageRanks ranged from .0469 (Stanford
) to .149 (MIT). For the first several iterations, the
[Figure 2 consists of three line plots of "Global and Local PageRank Difference (L1)" versus "Number of Iterations", each comparing the Stochastic Complement, PageRank Flow, Outlink Count, and Random selection methods.]
Figure 2: L_1 difference between the estimated and true global PageRanks for (a) Berkeley's computer science website (www.cs.berkeley.edu), (b) the site-specific domain, www.enterstageright.com, and (c) the `politics' topic-specific domain. The stochastic complement method outperforms all other methods across various domains.
three link-based methods all outperform the random selection
heuristic.
After these initial iterations, the random
heuristic tended to be more competitive with (or even outperform
, as in the Berkeley local domain) the outlink count
and PageRank flow heuristics. In all tests, the stochastic
complementation method either outperformed, or was competitive
with, the other methods. Table 1 gives the average
difference between the final estimated global PageRanks and
the true global PageRanks for various distance measures.
Table 1: Average final performance of various node selection strategies for the five site-specific computer science local domains. Note that Kendall's Tau measures similarity, while the other metrics are dissimilarity measures. Stochastic Complementation clearly outperforms the other methods in all metrics.

Algorithm      L_1     L_∞      Kendall
Stoch. Comp.   .0384   .00154   .9257
PR Flow        .0470   .00272   .8946
Outlink        .0419   .00196   .9053
Random         .0407   .00204   .9086
Within the `politics' dataset, we also performed two site-specific
tests for the largest websites in the crawl: www.adamsmith.org, the website for the London based Adam Smith
Institute, and www.enterstageright.com, an online conservative
journal. As with the `edu' local domains, we ran our
algorithm for 50 iterations, crawling a total of 2n nodes. Figure
2 (b) plots the results for the www.enterstageright.com
domain. In contrast to the `edu' local domains, the Random
and OutlinkCount methods were not competitive with either
the SC-Select or the PF-Select methods. Among all
datasets and all node selection methods, the stochastic complementation
method was most impressive in this dataset,
realizing a final estimate that differed only .0279 from the
global PageRank, a ten-fold improvement over the initial local
PageRank difference of .299. For the Adam Smith local
domain, the initial difference between the local and global
PageRanks was .148, and the final estimates given by the
SC-Select
, PF-Select, OutlinkCount, and Random
methods were .0208, .0193, .0222, and .0356, respectively.
Within the `politics' dataset, we constructed four topic-specific
local domains.
The first domain consisted of all
pages in the dmoz politics category, and also all pages within
each of these sites up to two links away. This yielded a local
domain of 90,811 pages, and the results are given in figure 2
(c). Because of the larger size of the topic-specific domains,
we ran our algorithm for only 25 iterations to crawl a total
of n nodes.
We also created topic-specific domains from three political
sub-topics: liberalism, conservatism, and socialism. The
pages in these domains were identified by their corresponding
dmoz categories. For each sub-topic, we set the local
domain to be all pages within three links from the corresponding
dmoz category pages.
Table 2 summarizes the
performance of these three topic-specific domains, and also
the larger political domain.
To quantify a global page j's effect on the global PageRank
values of pages in the local domain, we define page
j's impact to be its PageRank value, g[j], normalized by the
fraction of its outlinks pointing to the local domain:
impact(j) = (o_L[j] / o[j]) · g[j],

where o_L[j] is the number of outlinks from page j to pages in the local domain L, and o[j] is the total number of j's outlinks.
of page j is the probability that the random surfer (1) is
currently at global page j in her random walk and (2) takes
an outlink to a local page, given that she has already decided
not to jump to a random page.
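The impact score itself is a one-line computation once the outlink structure is known; a sketch with our own (hypothetical) data layout:

```python
def impact(j, g, outlinks, local_pages):
    """g[j]: global PageRank of page j; outlinks[j]: set of pages that j links to."""
    out = outlinks[j]
    if not out:
        return 0.0
    o_L = sum(1 for p in out if p in local_pages)
    return (o_L / len(out)) * g[j]
```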
For the politics local domain, we found that many of the
pages with high impact were in fact political pages that
should have been included in the dmoz politics topic, but
were not. For example, the two most influential global pages
were the political search engine www.askhenry.com, and the
home page of the online political magazine, www.policyreview.com. Among non-political pages, the home page of
the journal "Education Next" was most influential.
The
journal is freely available online and contains articles regarding
various aspect of K-12 education in America. To provide
some anecdotal evidence for the effectiveness of our page selection
methods, we note that the SC-Select method chose
11 pages within the www.educationnext.org domain, the
PF-Select
method discovered 7 such pages, while the OutlinkCount
and Random methods found only 6 pages each.
For the conservative political local domain, the socialist
website www.ornery.org had a very high impact score. This
Table 2: Final performance among node selection strategies for the four political topic-specific crawls. Note that Kendall's Tau measures similarity, while the other metrics are dissimilarity measures.

All Politics:
Algorithm      L_1     L_∞       Kendall
Stoch. Comp.   .1253   .000700   .8671
PR Flow        .1446   .000710   .8518
Outlink        .1470   .00225    .8642
Random         .2055   .00203    .8271

Conservativism:
Algorithm      L_1     L_∞       Kendall
Stoch. Comp.   .0496   .000990   .9158
PR Flow        .0554   .000939   .9028
Outlink        .0602   .00527    .9144
Random         .1197   .00102    .8843

Liberalism:
Algorithm      L_1     L_∞       Kendall
Stoch. Comp.   .0622   .001360   .8848
PR Flow        .0799   .001378   .8669
Outlink        .0763   .001379   .8844
Random         .1127   .001899   .8372

Socialism:
Algorithm      L_1     L_∞       Kendall
Stoch. Comp.   .04318  .00439    .9604
PR Flow        .0450   .004251   .9559
Outlink        .04282  .00344    .9591
Random         .0631   .005123   .9350
was largely due to a link from the front page of this site
to an article regarding global warming published by the
National Center for Public Policy Research, a conservative
research group in Washington, DC. Not surprisingly, the
global PageRank of this article (which happens to be on the
home page of the NCCPR, www.nationalresearch.com),
was approximately .002, whereas the local PageRank of this
page was only .00158. The SC-Select method yielded a
global PageRank estimate of approximately .00182, the PF-Select
method estimated a value of .00167, and the Random
and OutlinkCount methods yielded values of .01522
and .00171, respectively.
RELATED WORK
The node selection framework we have proposed is similar
to the url ordering for crawling problem proposed by Cho
et al. in [6]. Whereas our framework seeks to minimize the
difference between the global and local PageRank, the objective
used in [6] is to crawl the most highly (globally) ranked
pages first. They propose several node selection algorithms,
including the outlink count heuristic, as well as a variant of
our PF-Select algorithm which they refer to as the `PageRank ordering metric'. They found this method to be most
effective in optimizing their objective, as did a recent survey
of these methods by Baeza-Yates et al. [1]. Boldi et al. also
experiment within a similar crawling framework in [2], but
quantify their results by comparing Kendall's rank correlation
between the PageRanks of the current set of crawled
pages and those of the entire global graph. They found that
node selection strategies that crawled pages with the highest
global PageRank first actually performed worse (with
respect to Kendall's Tau correlation between the local and
global PageRanks) than basic depth first or breadth first
strategies. However, their experiments differ from our work
in that our node selection algorithms do not use (or have
access to) global PageRank values.
Many algorithmic improvements for computing exact PageRank
values have been proposed [9, 10, 14]. If such algorithms
are used to compute the global PageRanks of our
local domain, they would all require O(N ) computation,
storage, and bandwidth, where N is the size of the global
domain. This is in contrast to our method, which approximates
the global PageRank and scales linearly with the size
of the local domain.
Wang and Dewitt [22] propose a system where the set of
web servers that comprise the global domain communicate
with each other to compute their respective global PageRanks
. For a given web server hosting n pages, the computational
, bandwidth, and storage requirements are also
linear in n. One drawback of this system is that the number
of distinct web servers that comprise the global domain
can be very large. For example, our `edu' dataset contains
websites from over 3,200 different universities; coordinating
such a system among a large number of sites can be very
difficult.
Gan, Chen, and Suel propose a method for estimating the
PageRank of a single page [5] which uses only constant bandwidth
, computation, and space. Their approach relies on the
availability of a remote connectivity server that can supply
the set of inlinks to a given page, an assumption not used in
our framework. They experimentally show that a reasonable
estimate of the node's PageRank can be obtained by visiting
at most a few hundred nodes. Using their algorithm for our
problem would require that either the entire global domain
first be downloaded or a connectivity server be used, both
of which would lead to very large web graphs.
CONCLUSIONS AND FUTURE WORK
The internet is growing exponentially, and in order to navigate
such a large repository as the web, global search engines
have established themselves as a necessity. Along with
the ubiquity of these large-scale search engines comes an increase
in search users' expectations. By providing complete
and isolated coverage of a particular web domain, localized
search engines are an effective outlet to quickly locate content
that could otherwise be difficult to find. In this work,
we contend that the use of global PageRank in a localized
search engine can improve performance.
To estimate the global PageRank, we have proposed an
iterative node selection framework where we select which
pages from the global frontier to crawl next. Our primary
contribution is our stochastic complementation page selection
algorithm. This method crawls nodes that will most
significantly impact the local domain and has running time
linear in the number of nodes in the local domain. Experimentally
, we validate these methods across a diverse set of
local domains, including seven site-specific domains and four
topic-specific domains. We conclude that by crawling an additional
n or 2n pages, our methods find an estimate of the
global PageRanks that is up to ten times better than just
using the local PageRanks. Furthermore, we demonstrate
that our algorithm consistently outperforms other existing
heuristics.
124
Research Track Paper
Often times, topic-specific domains are discovered using
a focused web crawler which considers a page's content in
conjunction with link anchor text to decide which pages to
crawl next [4]. Although such crawlers have proven to be
quite effective in discovering topic-related content, many irrelevant
pages are also crawled in the process. Typically,
these pages are deleted and not indexed by the localized
search engine. These pages can of course provide valuable
information regarding the global PageRank of the local domain
. One way to integrate these pages into our framework
is to start the FindGlobalPR algorithm with the current
subgraph F equal to the set of pages that were crawled by
the focused crawler.
The global PageRank estimation framework, along with
the node selection algorithms presented, all require O(n)
computation per iteration and bandwidth proportional to
the number of pages crawled, T · k. If the number of iterations
T is relatively small compared to the number of pages
crawled per iteration, k, then the bottleneck of the algorithm
will be the crawling phase. However, as the number of iterations
increases (relative to k), the bottleneck will reside in
the node selection computation. In this case, our algorithms
would benefit from constant factor optimizations. Recall
that the FindGlobalPR algorithm (Algorithm 2) requires
that the PageRanks of the current expanded local domain be
recomputed in each iteration. Recent work by Langville and
Meyer [12] gives an algorithm to quickly recompute PageRanks
of a given webgraph if a small number of nodes are
added. This algorithm was shown to give speedup of five to
ten times on some datasets. We plan to investigate this and
other such optimizations as future work.
In this paper, we have objectively evaluated our methods
by measuring how close our global PageRank estimates are
to the actual global PageRanks. To determine the benefit
of using global PageRanks in a localized search engine,
we suggest a user study in which users are asked to rate
the quality of search results for various search queries. For
some queries, only the local PageRanks are used in ranking
, and for the remaining queries, local PageRanks and the
approximate global PageRanks, as computed by our algorithms
, are used. The results of such a study can then be
analyzed to determine the added benefit of using the global
PageRanks computed by our methods, over just using the
local PageRanks.
Acknowledgements. This research was supported by NSF
grant CCF-0431257, NSF Career Award ACI-0093404, and
a grant from Sabre, Inc.
REFERENCES
[1] R. Baeza-Yates, M. Marin, C. Castillo, and
A. Rodriguez. Crawling a country: better strategies
than breadth-first for web page ordering. World-Wide
Web Conference, 2005.
[2] P. Boldi, M. Santini, and S. Vigna. Do your worst to
make the best: paradoxical effects in pagerank
incremental computations. Workshop on Web Graphs,
3243:168–180, 2004.
[3] S. Brin and L. Page. The anatomy of a large-scale
hypertextual web search engine. Computer Networks
and ISDN Systems, 33(1–7):107–117, 1998.
[4] S. Chakrabarti, M. van den Berg, and B. Dom.
Focused crawling: a new approach to topic-specific
web resource discovery. World-Wide Web Conference,
1999.
[5] Y. Chen, Q. Gan, and T. Suel. Local methods for
estimating pagerank values. Conference on
Information and Knowledge Management, 2004.
[6] J. Cho, H. Garcia-Molina, and L. Page. Efficient
crawling through url ordering. World-Wide Web
Conference, 1998.
[7] T. H. Haveliwala and S. D. Kamvar. The second
eigenvalue of the Google matrix. Technical report,
Stanford University, 2003.
[8] T. Joachims, F. Radlinski, L. Granka, A. Cheng,
C. Tillekeratne, and A. Patel. Learning retrieval
functions from implicit feedback.
http://www.cs.cornell.edu/People/tj/career.
[9] S. D. Kamvar, T. H. Haveliwala, C. D. Manning, and
G. H. Golub. Exploiting the block structure of the
web for computing pagerank. World-Wide Web
Conference, 2003.
[10] S. D. Kamvar, T. H. Haveliwala, C. D. Manning, and
G. H. Golub. Extrapolation methods for accelerating
pagerank computation. World-Wide Web Conference,
2003.
[11] A. N. Langville and C. D. Meyer. Deeper inside
pagerank. Internet Mathematics, 2004.
[12] A. N. Langville and C. D. Meyer. Updating the
stationary vector of an irreducible markov chain with
an eye on Google's pagerank. SIAM Journal on
Matrix Analysis, 2005.
[13] P. Lyman, H. R. Varian, K. Swearingen, P. Charles,
N. Good, L. L. Jordan, and J. Pal. How much
information 2003? School of Information Management
and System, University of California at Berkely, 2003.
[14] F. McSherry. A uniform approach to accelerated
pagerank computation. World-Wide Web Conference,
2005.
[15] C. D. Meyer. Stochastic complementation, uncoupling
markov chains, and the theory of nearly reducible
systems. SIAM Review, 31:240–272, 1989.
[16] US News and World Report. http://www.usnews.com.
[17] Dmoz open directory project. http://www.dmoz.org.
[18] Nutch open source search engine.
http://www.nutch.org.
[19] F. Radlinski and T. Joachims. Query chains: learning
to rank from implicit feedback. ACM SIGKDD
International Conference on Knowledge Discovery and
Data Mining, 2005.
[20] S. Raghavan and H. Garcia-Molina. Crawling the
hidden web. In Proceedings of the Twenty-seventh
International Conference on Very Large Databases,
2001.
[21] T. Tin Tang, D. Hawking, N. Craswell, and
K. Griffiths. Focused crawling for both topical
relevance and quality of medical information.
Conference on Information and Knowledge
Management, 2005.
[22] Y. Wang and D. J. DeWitt. Computing pagerank in a
distributed internet search system. Proceedings of the
30th VLDB Conference, 2004.
| node selection;Experimentation;global PageRank;Algorithms;crawling;site specific domain;localized search engines |
86 | Evaluating Similarity Measures: A Large-Scale Study in the Orkut Social Network | Online information services have grown too large for users to navigate without the help of automated tools such as collaborative filtering, which makes recommendations to users based on their collective past behavior. While many similarity measures have been proposed and individually evaluated, they have not been evaluated relative to each other in a large real-world environment. We present an extensive empirical comparison of six distinct measures of similarity for recommending online communities to members of the Orkut social network. We determine the usefulness of the different recommendations by actually measuring users' propensity to visit and join recommended communities. We also examine how the ordering of recommendations influenced user selection, as well as interesting social issues that arise in recommending communities within a real social network. | INTRODUCTION
The amount of information available online grows far faster
than an individual's ability to assimilate it. For example,
consider "communities" (user-created discussion groups) within
Orkut, a social-networking website (http://www.orkut.com)
affiliated with Google. The original mechanisms for users
to find communities were labor-intensive, including searching
for keywords in community titles and descriptions or
browsing other users' memberships. Four months after its
January 2004 debut, Orkut had over 50,000 communities,
providing the necessity and opportunity for data-mining for
automated recommendations. There are now (May 2005)
over 1,500,000 communities.
While there are many forms of recommender systems [3],
we chose a collaborative filtering approach [13] based on
overlapping membership of pairs of communities. We did
not make use of semantic information, such as the description
of or messages in a community (although this may be an
area of future work). Our recommendations were on a per-community
, rather than a per-user basis; that is, all members
of a given community would see the same recommendations
when visiting that community's page. We chose this
approach out of the belief, which was confirmed, that community
memberships were rich enough to make very useful
recommendations without having to perform more compu-tationally
intensive operations, such as clustering of users or
communities or computing nearest neighbor relations among
users. Indeed, Sarwar et al. have found such item-based algorithms
to be both more efficient and successful than user-based
algorithms [13]. By measuring user acceptance of recommendations
, we were able to evaluate the absolute and
relative utility of six different similarity measures on a large
volume of data.
MEASURES OF SIMILARITY
The input data came from the membership relation M =
{(u, c) | u U , c C} , where C is the set of communities
with at least 20 members and U the set of users belonging
to at least one such community. When we began our
experiment in May 2004, |C| = 19,792, |U| = 181,160, and |M| = 2,144,435. Table 1 summarizes the distribution.
Table 1: Distribution of community memberships
                        min    max    median    mean
Users per community      20    9077       50    230.5
Communities per user      1    4173        6     28.0

All of our measures of community similarity involve the overlap between two communities, i.e., the number of common users. If a base community b and a (potentially) related
community r are considered as sets of users, the overlap is
|B ∩ R|, where we use capital letters to represent the set containing
a community's members. Note that overlap cannot
be the sole factor in relatedness, as the size of communities
varies greatly. If we only considered overlap, practically every
community would be considered related to the "Linux"
community, which was the most popular, with 9,077 members
. The similarity measures in the next section normalize
the overlap in different ways.
2.1
Similarity Measure Functions
Each similarity measure we consider is presented as a (possibly
asymmetric) function of b and r indicating how appropriate
the related community r is as a recommendation for
the base community b.
We do not use the result of the
function as an absolute measure of similarity, only to rank
recommendations for a given base community.
2.1.1 L1-Norm
If we consider the base and related communities to be vectors b and r, where the i-th element of a vector is 1 if user i is a member and 0 if not, we can measure the overlap as the product of their L1-norms:

L1(b, r) = (b · r) / (||b||_1 ||r||_1)

This quantity can also be expressed in set notation, where we use a capital letter to represent the set containing a community's members:

L1(B, R) = |B ∩ R| / (|B| · |R|)

Note that this evaluates to the overlap between the two groups divided by the product of their sizes. When the base community is held constant (as when we determine the base community's recommendations), this evaluates to the overlap divided by the size of the related community, favoring small communities. Kitts et al. [9] reported this to be a successful measure of similarity in their recommender system.
2.1.2 L2-Norm
Similarly, we can measure the overlap with the product of the L2-norms ("cosine distance" [3, 6, 12]) of b and r:

L2(b, r) = (b · r) / (||b||_2 ||r||_2)

In set notation:

L2(B, R) = |B ∩ R| / √(|B| · |R|)

Note that the square-root in the denominator causes L2 to penalize large communities less severely than L1. Observe that the L2-norm presented here is equivalent to the widely used cosine coefficient applied to binary data. Moreover, while Pearson correlation has been used previously in recommender systems where ranking data is available, we did not use this measure here since it is generally considered inappropriate for binary data.
2.1.3 Pointwise Mutual-Information: positive correlations (MI1)
Information theory motivates other measures of correlation, such as "mutual information" [2]. We chose pointwise mutual information where we only count "positive" correlations (membership in both B and R). Such a formulation essentially focuses on how membership in one group is predictive of membership in another (without considering how non-membership in a group affects membership in another group), yielding:

MI1(b, r) = P(r, b) lg [ P(r, b) / (P(r) P(b)) ]

2.1.4 Pointwise Mutual-Information: positive and negative correlations (MI2)
Similarly, we can compute the pointwise mutual information with both positive and negative correlations (e.g., membership in both B and R, or non-membership in both groups). Again, we don't compute the full expected mutual information, since we believe cross-correlations (e.g., how membership in B affects non-membership in R) tend to be distortive with the recommendation task since such cross-correlations are plentiful but not very informative. This yields:

MI2(b, r) = P(r, b) lg [ P(r, b) / (P(r) P(b)) ] + P(r̄, b̄) lg [ P(r̄, b̄) / (P(r̄) P(b̄)) ]
2.1.5 Salton (IDF)
Salton proposed a measure of similarity based on inverse document frequency scaling (tf-idf) [12]:

IDF(b, r) = P(r|b) · (-lg P(r))

IDF(B, R) = (|B ∩ R| / |B|) · (-lg(|R| / |U|))
2.1.6 Log-Odds
We first considered the standard log-odds function, which measures the relative likelihood that presence or absence in a base community predicts membership in a related community:

LogOdds0(b, r) = lg [ P(r|b) / P(r|b̄) ]

Empirically, we found this generated the exact same rankings as using the L1-Norm, which makes sense because:
1. Logarithm is monotonic and, while affecting scores, does not affect rankings.
2. Constant factors, such as |B|, do not affect rankings.
3. For |B| ≪ |U|, P(r|b̄) ≈ P(r).

We formulated a different log-odds metric, which measures whether membership in the base community is likelier to predict membership or absence in the related community:

LogOdds(b, r) = lg [ P(r|b) / P(r̄|b) ]
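For reference, the six scores can be computed directly from raw membership sets; the sketch below uses our own function names, takes B and R as Python sets of user ids and U as the total number of users, and uses log base 2 as in the formulas above (any base yields the same rankings):

```python
from math import log2, sqrt

def similarity_scores(B, R, U):
    """Return the six similarity scores for base community B and related community R."""
    overlap = len(B & R)
    p_b, p_r = len(B) / U, len(R) / U
    p_rb = overlap / U                    # P(r, b)
    p_r_given_b = overlap / len(B)        # P(r | b)
    p_nrnb = (U - len(B | R)) / U         # P(not r, not b)

    l1 = overlap / (len(B) * len(R))
    l2 = overlap / sqrt(len(B) * len(R))
    mi1 = p_rb * log2(p_rb / (p_r * p_b)) if p_rb > 0 else 0.0
    mi2 = mi1 + (p_nrnb * log2(p_nrnb / ((1 - p_r) * (1 - p_b))) if p_nrnb > 0 else 0.0)
    idf = p_r_given_b * -log2(p_r)
    log_odds = (log2(p_r_given_b / (1 - p_r_given_b))
                if 0 < p_r_given_b < 1 else float("inf"))
    return {"L1": l1, "L2": l2, "MI1": mi1, "MI2": mi2, "IDF": idf, "LogOdds": log_odds}
```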
Table 2: Average size of top-ranked community for each measure
measure    rank 1   rank 2   rank 3
L1           332      482      571
L2           460      618      694
MI1          903      931      998
MI2          966     1003     1077
IDF          923      985     1074
LogOdds      357      513      598

Table 3: Agreement in top-ranked results between measures. For example, MI1 and IDF rank the same related community first for 98% of base communities. Correlations greater than 85% are in bold.
           L1     L2     MI1    MI2    IDF
L2        .70
MI1       .41    .60
MI2       .39    .57    .96
IDF       .41    .59    .98    .97
LogOdds   .88    .79    .46    .44    .46
2.2
Discussion
For a given measure, we refer to the related community
yielding the highest value to be the top-ranked related community
relative to a base community.
The average size
of top-ranked communities for each measure, which varies
greatly, is shown in Table 2. Table 3 shows how often two
functions yield the same top-ranking result. Table 4 shows
the top recommendations for the "I love wine" community.
Note that MI1, MI2, and IDF favor very large communities,
while L1 and LogOdds favor small communities.
Note that in addition to the obvious correlations between
the two mutual-information functions (96%), there is a very
strong correlation between IDF and the mutual-information
functions (97-98%). Manipulation of the formulas for MI1
and IDF shows:
MI1(b, r) = P(r, b) lg [ P(r, b) / (P(r) P(b)) ]
          = P(r|b) P(b) lg P(r|b) - P(r|b) P(b) lg P(r)
          = P(r|b) P(b) lg P(r|b) - P(r|b) [1 - P(b̄)] lg P(r)
          = P(r|b) [ P(b) lg P(r|b) + P(b̄) lg P(r) ] - P(r|b) lg P(r)

Substituting IDF(b, r) = -P(r|b) lg P(r), we get:

MI1(b, r) = P(r|b) [ P(b) lg P(r|b) + P(b̄) lg P(r) ] + IDF(b, r)

Since for virtually all communities b, P(b) ≪ P(b̄), we can approximate:

MI1(b, r) ≈ IDF(b, r) + P(r|b) P(b̄) lg P(r)

Thus, MI1 yields a ranking that can be thought of as starting with the ranking of IDF and perturbing the score of each element in the ranking by P(r|b) P(b̄) lg P(r), which generally is not great enough to change the relative ranking of the top scores, leading to MI1 and IDF often giving the same ranking to top-scoring communities. (Note that this perturbation quantity is given only to explain the high correlation between MI1 and IDF. Statistically, it is meaningless, since b and b̄ cannot simultaneously hold.)
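The exact identity above is easy to sanity-check numerically; a small sketch with made-up probabilities (the specific values are illustrative only):

```python
from math import log2

p_b, p_r_given_b, p_r = 0.001, 0.3, 0.02   # illustrative values
p_rb = p_r_given_b * p_b

mi1 = p_rb * log2(p_rb / (p_r * p_b))
idf = -p_r_given_b * log2(p_r)
pert = p_r_given_b * (p_b * log2(p_r_given_b) + (1 - p_b) * log2(p_r))

print(mi1, idf + pert)   # the two printed values agree
```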
EXPERIMENT DESIGN
We designed an experiment to determine the relative value
of the recommendations produced by each similarity measure
. This involved interleaving different pairs of recommendations
and tracking user clicks. Specifically, we measured
the efficacy of different similarity measures using pair-wise
binomial sign tests on click-through data rather than using
traditional supervised learning measures such as preci-sion/recall
or accuracy since there is no "true" labeled data
for this task (i.e., we do not know what are the correct communities
that should be recommended to a user). Rather,
we focused on the task of determining which of the similarity
measures performs best on a relative performance scale
with regard to acceptance by users.
3.1
Combination
When a user viewed a community page, we hashed the
combined user and community identifiers to one of 30 values, specifying an ordered pair of similarity measures to compare. Let S and T be the ordered lists of recommendations for the two measures, where S = (s_1, s_2, ..., s_|S|) and T = (t_1, t_2, ..., t_|T|) and |S| = |T|. The recommendations of each measure are combined by Joachims' "Combined Ranking" algorithm [7], restated in Figure 1. The resulting list is guaranteed to contain the top k_S and k_T recommendations for each measure, where k_T ≤ k_S ≤ k_T + 1 [7, Theorem 1].
3.2
Measurements
Whenever a user visited a community, two measures were
chosen and their recommendations interleaved, as discussed
above. This was done in a deterministic manner so that
a given user always saw the same recommendations for a
given community. To minimize feedback effects, we did not
regenerate recommendations after the experiment began.
A user who views a base community (e.g., "I love wine") is
either a member (denoted by "M") or non-member (denoted
by "n"). (We capitalize "M" but not "n" to make them eas-ier
to visually distinguish.) In either case, recommendations
are shown. When a user clicks on a recommendation, there
are three possibilities: (1) the user is already a member of
the recommended community ("M"), (2) the user joins the
recommended community ("j"), or (3) the user visits but
does not join the recommended community ("n"). The combination
of base and related community memberships can be
combined in six different ways. For example "Mj" denotes
a click where a member of the base community clicks on a
recommendation to another community to which she does
not belong and joins that community. Traditionally, analyses
of recommender systems focus on "Mj", also known
informally as "if you like this, you'll like that" or formally
as "similarity" or "conversion". "Mn" recommendations
are considered distracters, having negative utility, since they
waste a user's time with an item not of interest. Before running
the experiment, we decided that the measures should
be judged on their "Mj" performance.
Other interpretations are possible: "Mn" links could be
considered to have positive utility for any of the following
Table 4: Top recommendations for each measure for the "I love wine" community, with each recommended
community's overlap with the base community and size. The size of "I love wine" is 2400.
L1
L2
MI1
MI2
IDF
LogOdds
1
Ice Wine
(Eiswein)
(33/51)
Red Wine
(208/690)
Japanese
Food/Sushi
Lovers
(370/3206)
Japanese
Food/Sushi
Lovers
(370/3206)
Japanese
Food/Sushi
Lovers
(370/3206)
Japanese
Food/Sushi
Lovers
(370/3206)
2
California
Pinot Noir
(26/41)
Cheeses of the
World
(200/675)
Red Wine
(208/690)
Red Wine
(208/690)
Photography
(319/4679)
Photography
(319/4679)
3
Winery
Visitor Worldwide
(44/74)
I love red
wine!
(170/510)
Cheeses of the
World
(200/675)
Cheeses of the
World
(200/675)
Red Wine
(208/690)
Linux
(299/9077)
Figure 1: Joachims' "Combine Rankings" algorithm [7]
Input:
ordered recommendation lists S = (s
1
, s
2
, . . . , s
|S|
) and T = (t
1
, t
2
, . . . , t
|T |
) where |S| = |T |
Call:
combine (S, T, 0, 0, )
Output:
combined ordered recommendation list D
combine(S, T, k
s
, k
t
, D){
if (k
s
< |S| k
t
< |T |)
if (k
s
= k
t
) {
if (S[k
s
+ 1] /
D){D := D + S[k
s
+ 1]; }
combine(S, T, k
s
+ 1, k
t
, D);
} else {
if (T [k
t
+ 1] /
D){D := D + T [k
t
+ 1]; }
combine(S, T, k
s
, k
t
+ 1, D);
}
}
}
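An equivalent iterative rendering in Python (a sketch: the recursion of Figure 1 is unrolled into a loop, and already-seen items are skipped exactly as above):

```python
def combine(S, T):
    """Interleave two ranked lists while preserving each list's top-k prefix."""
    D, seen = [], set()
    ks = kt = 0
    while ks < len(S) and kt < len(T):
        if ks == kt:
            item, ks = S[ks], ks + 1
        else:
            item, kt = T[kt], kt + 1
        if item not in seen:
            seen.add(item)
            D.append(item)
    return D

print(combine(["a", "b", "c"], ["b", "d", "c"]))   # ['a', 'b', 'd', 'c']
```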
Table 5: Clicks on recommendations, by membership status in the base and recommended communities, as counts and as percentages of total clicks. The last column shows the conversion rate, defined as the percentage of non-members clicking on a related community who then joined it (j/(n+j)).

                                          membership in recommended community
membership in base community              M (member)   n (non-member)   j (join)   total     conversion rate
M (member): number of clicks              36353        184214           212982     433549    54%
            percent of total clicks       4%           20%              24%        48%
n (non-member): number of clicks          8771         381241           77905      467917    17%
            percent of total clicks       1%           42%              9%         52%
total: number of clicks                   45124        565455           290887     901466    34%
            percent of total clicks       5%           63%              32%        100%
reasons:
1. As the user found the link sufficiently interesting to
click on, it was of more utility than a link not eliciting
a click.
2. The user is genuinely interested in the related community
but does not want to proclaim her interest, as
membership information is public and some communities
focus on taboo or embarrassing topics. For example
, a recommendation given for the popular "Choco-late"
community is "PMS". Note that this effect is specific
to social networks and not, for example, Usenet
groups, where the user's list of communities is not revealed
to other users.
Similarly, it is unclear how to value clicks from a base community
that the user does not belong to. Does an "nj"
click indicate failure, since the base community was not
joined by the user, but the recommended community was,
indicating a degree of dissimilarity?
Or is it of positive
utility, since it helped a user find a community of interest?
For these reasons, we tracked all clicks, recording the user's
membership status in the base and recommended communities
for later analysis. (We did not track whether users
returned to communities in the future because of the logging
overhead that would be required.)
3.3
User Interface
On community pages, our recommendations were provided
in a table, each cell of which contained a recommended
community's name, optional picture, and link (Figure 2).
Recommendations were shown by decreasing rank from left
to right, top to bottom, in up to 4 rows of 3. For aesthetic
reasons, we only showed entire rows; thus, no recommendations
were displayed if there were fewer than 3. We also
provided a control that allowed users to send us comments
on the recommendations.
RESULTS
We analyzed all accesses from July 1, 2004, to July 18,
2004, of users who joined Orkut during that period. The system
served 4,106,050 community pages with recommendations
, which provides a lower bound on the number of views.
(Unfortunately, we could not determine the total number of
views due to browser caching.) There were 901,466 clicks on
recommendations, 48% by members of the base community,
52% by non-members (Table 5). Clicks to related communities
to which the user already belonged were rare, accounting
for only 5% of clicks. The most common case was for a non-member
of a base community to click through and not join
a related community (42%).
We defined conversion rate (also called precision) as the
percentage of non-members who clicked through to a community
who then joined it. The conversion rate was three
times as high (54%) when the member belonged to the base
community (from which the recommendation came) than
not (17%).
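Given a click log annotated with the viewer's membership status in the base and recommended communities, these conversion rates are a simple tally; a sketch (the field encoding is ours):

```python
from collections import Counter

def conversion_rates(clicks):
    """clicks: iterable of (base_status, rec_status) pairs, e.g. ('M', 'j') or ('n', 'n')."""
    counts = Counter(clicks)
    rates = {}
    for base in ("M", "n"):
        joined, visited = counts[(base, "j")], counts[(base, "n")]
        rates[base] = joined / (joined + visited) if joined + visited else 0.0
    return rates

log = [("M", "j"), ("M", "n"), ("n", "n"), ("n", "j"), ("M", "j")]
print(conversion_rates(log))   # {'M': 0.666..., 'n': 0.5}
```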
4.1
Relative performance of different measures
We compared each measure pairwise against every other
measure by analyzing clicks of their merged recommendations
. If the click was on a recommendation ranked higher
by measure L2 than measure L1, for example, we considered
it a "win" for L2 and a loss for L1. If both measures
ranked it equally, the result was considered to be a tie. Table
6 shows the outcomes of all clicks, with conversions by
members ("Mj") and non-members of the base community
("nj") broken out separately.
We say that a measure dominates another if, in their pairwise
comparison, the former has more "wins". For example,
L2 dominates L1. This definition, combined with the data
in Table 6, yielded a total order (to our surprise) among
the measures: L2, MI1, MI2, IDF, L1, LogOdds. The same
total order occurred if only "nj" clicks were considered.
The order was different if all clicks were considered: L2, L1,
MI1, MI2, IDF, LogOdds.
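The pairwise comparison underlying the dominance order is a win/loss/tie tally plus a binomial sign test on the wins and losses; a sketch using SciPy's binomtest (the counts in the example are the L2-vs-MI1 "nj" cells from Table 6):

```python
from scipy.stats import binomtest

def compare_measures(outcomes):
    """outcomes: list of 'win' / 'loss' / 'equal' for measure A relative to measure B."""
    wins, losses = outcomes.count("win"), outcomes.count("loss")
    p = binomtest(wins, wins + losses, 0.5).pvalue if wins + losses else 1.0
    return wins, losses, p

outcomes = ["win"] * 2600 + ["loss"] * 1853 + ["equal"] * 1073
print(compare_measures(outcomes))   # p is far below .01, so A dominates B significantly
```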
4.2
Conversion rates
There was great variance in conversion rate by recommended
community.
We examined the 93 recommended
communities that were clicked through to more than 1000
times. Unsurprisingly, the ten with the lowest conversion
rate all were about sex (e.g., Amateur Porn). Note that
members of the base community were far more willing than
non-members to join, perhaps because they had already
shown their willingness to join a sex-related community. At
the other extreme, none of the ten with the highest conversion
rate were sexual (e.g., Flamenco). Table 7 provides
selected data by each membership combination. Unsurprisingly
, for all 93 base communities, members were more likely
than non-members to join the recommended community.
4.3
User comments
Users were also able to submit feedback on related communities
. Most of the feedback was from users who wanted
recommendations added or removed. Some complained about
inappropriate recommendations of sexual or political
communities, especially if they found the displayed image
offensive. A few objected to our generating related community
recommendations at all, instead of allowing community
creators to specify them. In one case, poor recommendations
destroyed a community: The creator of a feminist sexuality
community disbanded it both because of the prurient
recommendations that appeared on her page and the disruptive
new members who joined as a result of recommendations
from such communities. We agreed with her that the recommendations
were problematic and offered to remove them.
While anecdotal, this example illustrates how a recommendation
can have unanticipated consequences that cannot be
captured in simple statistical measures. (An informal discussion
of users' behavior when we allowed them to choose
related communities can be found elsewhere [14].)
POSITIONAL EFFECTS
During the above experiment, we became curious how the
relative placement of recommendations affected users' selections
and performed a second experiment.
5.1
Design
After determining that L2 was the best measure of similarity
, we recomputed the recommendations and studied the
effect of position on click-through. While in our original
experiment we displayed up to 12 recommendations in decreasing
rank, for this experiment we displayed up to 9 recommendations
in random order, again ensuring that each
Table 6: The relative performance of each measure in pairwise combination on clicks leading to joins, divided by base community membership status, and on all clicks. Except where numbers appear in italics, the superiority of one measure over another was statistically significant (p < .01) using a binomial sign test [10].

                    Mj                       nj                       all clicks
measures            win    equal   loss      win    equal   loss      win     equal   loss
L2 vs MI1           6899   2977    4993      2600   1073    1853      30664   12277   20332
L2 vs MI2           6940   2743    5008      2636   1078    1872      31134   11260   19832
L2 vs IDF           6929   2697    5064      2610   1064    1865      30710   11271   20107
L2 vs L1            7039   2539    4834      2547   941     1983      28506   13081   23998
L2 vs LogOdds       8186   1638    4442      2852   564     1655      34954   6664    18631
MI1 vs MI2          3339   9372    1855      1223   3401    683       14812   37632   7529
MI1 vs IDF          3431   8854    1891      1139   3288    629       14671   37049   7758
MI1 vs LogOdds      7099   3546    3341      2514   1213    1193      29837   13869   13921
MI1 vs L1           6915   1005    6059      2547   407     2338      27786   4308    29418
MI2 vs IDF          1564   11575   1031      533    4266    359       6003    47885   4490
MI2 vs LogOdds      6920   3959    3177      2484   1418    598       2881    15308   13188
MI2 vs L1           6830   950     6419      2383   362     2333      26865   3872    29864
IDF vs L1           6799   1006    6304      2467   392     2352      27042   4069    29755
IDF vs LogOdds      6691   3804    3096      2452   1378    1085      28224   15013   13330
L1 vs LogOdds       6730   518     5975      2521   108     2059      31903   2097    24431
Table 7: Conversion rates by status of membership in base community, for communities to which more than 1000 clicks on recommendations occurred.

                                     member of base community            non-member of base community
Related community                    MM     Mn     Mj     conv. rate     nM     nn      nj     conv. rate
10 communities with highest
  conversion rates                   583    2273   6984   75%            198    3454    2017   37%
10 communities with lowest
  conversion rates                   326    1984   826    29%            68     26287   472    1.8%
all 93 communities                   13524  54415  52614  46%            3488   127819  19007  17%
user always saw the same ordering of recommendations for
a given community. By randomizing the position of recommendations
, we sought to measure ordering primacy effects
in the recommendations as opposed to their ranked quality.
5.2
Results
We measured all 1,279,226 clicks on related community
recommendations from September 22, 2004, through October
21, 2004.
Table 8 shows the relative likelihood of
clicks on each position. When there was only a single row,
the middle recommendation was clicked most, followed by
the leftmost, then rightmost recommendations, although the
differences were not statistically significant.
When there
were two or three rows, the differences were very significant
(p < .001), with preferences for higher rows. P-values were
computed using a Chi-Squared test comparing the observed
click-through rates with a uniform distribution over all positions
[10].
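The positional analysis is a goodness-of-fit test of per-position click counts against a uniform distribution; a sketch with SciPy (the counts below are made up for illustration and are not the study's raw data):

```python
from scipy.stats import chisquare

# Observed clicks per position for a 3 x 3 grid of recommendations (row-major order).
observed = [1110, 1060, 1040,
            1010,  970,  990,
            1010,  940,  870]

stat, p = chisquare(observed)   # expected counts default to the uniform distribution
print(stat, p)
```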
CONCLUSION AND FUTURE PLANS
Orkut's large number of community memberships and users
allowed us to evaluate the relative performance of six
different measures of similarity in a large-scale real-world
study. We are not aware of any comparable published large-scale
experiments.
We were surprised that a total order
emerged among the similarity measures and that L2 vector
normalization showed the best empirical results despite
other measures, such as log-odds and pointwise mutual information
, which we found more intuitive. For future work,
we would like to see how recommendations handpicked by
community owners compare.
Just as we can estimate communities' similarity through
common users, we can estimate users' similarity through
common community memberships: i.e., user A might be
similar to user B because they belong to n of the same communities
. It will be interesting to see whether L2 also proves
superior in such a domain. We could also take advantage
of Orkut's being a social network [8], i.e., containing information
on social connections between pairs of users. In
addition to considering common community memberships,
we could consider distance between users in the "friendship
graph". Users close to each other (e.g., friends or friends-of
-friends) might be judged more likely to be similar than
distant strangers, although some users might prefer the latter
type of link, since it would introduce them to someone
they would be unlikely to meet otherwise, perhaps from a
different country or culture.
Similarly, friendship graph information can be taken into
account when making community recommendations, which
would require that recommendations be computed on a per-user
(or per-clique), rather than per-community, basis. In
such a setting, we could make community recommendations
based on weighted community overlap vectors where weights
are determined based on the graph distances of other community
members to a given user. This is a fertile area for
future work and yet another example of how the interaction
Figure 2: Displays of recommendations for three different communities
Table 8: The relative likelihood of clicks on a link by position when there are (a) one, (b) two, or (c) three rows of three recommendations.

(a) n=28108, p=.12        (b) n=24459, p<.001       (c) n=1226659, p<.001
1.00  1.01  .98           1.04  1.05  1.08          1.11  1.06  1.04
                          .97   .94   .92           1.01  .97   .99
                                                    1.01  .94   .87
of data mining and social networks is becoming an exciting
new research area [4] [11].
ACKNOWLEDGMENTS
This work was performed while Ellen Spertus was on sabbatical
from Mills College and a visiting scientist at Google.
We are grateful to Patrick Barry, Alex Drobychev, Jen Fitz-patrick
, Julie Jalalpour, Dave Jeske, Katherine Lew, Marissa
Mayer, Tom Nielsen, Seva Petrov, Greg Reshko, Catherine
Rondeau, Adam Sawyer, Eric Sachs, and Lauren Simpson
for their help on the Orkut project and to Corey Anderson,
Alex Drobychev, Oren Etzioni, Alan Eustace, Keith Golden,
Carrie Grimes, Pedram Keyani, John Lamping, Tom Nielsen,
Peter Norvig, Kerry Rodden, Gavin Tachibana, and Yonatan
Zunger for their help on this research or its exposition.
REFERENCES
[1] Breese, J.; Heckerman, D.; Kadie, C. Empirical Analysis of
Predictive Algorithms for Collaborative Filtering. In
Proceedings of the Fourteenth Conference on Uncertainty
in Artificial Intelligence (Madison, Wisconsin, 1998).
Morgan Kaufmann.
[2] Cover, T.M., and Thomas, J.A. Elements of Information
Theory. Wiley, New York, 1991.
[3] Deshpande, M., and Karypis, G. Item-Based Top-N
Recommendation Algorithms. ACM Transactions on
Information Systems 22(1) (January 2004), 143-177.
[4] Domingos, P. Prospects and Challenges for
Multi-Relational Data Mining. ACM SIGKDD Exploration
Newsletter 5(1) (July 2003).
[5] Dumais, S.; Joachims, T.; Bharat, K.; Weigend, A. SIGIR
2003 Workshop Report: Implicit Measures of User Interests
and Preferences. SIGIR Forum 37(2) (Fall 2003).
[6] Harman, D. Ranking Algorithms. In W. B. Frakes and R.
Baeza-Yates (ed.), Information Retrieval: Data Structures
& Algorithms (chapter 14). Upper Saddle River, NJ, USA:
Prentice Hall, 1992.
[7] Joachims, T. Evaluating Retrieval Performance Using
Clickthrough Data. In Proceedings of the SIGIR Workshop
on Mathematical/Formal Methods in Information
Retrieval (2002). ACM Press, New York, NY.
[8] Kautz, H.; Selman, Bart; Shah, M. Referral Web:
Combining Social Networks and Collaborative Filtering.
Communications of the ACM 45(8) (March 1997).
[9] Kitts, B.; Freed, D.; Vrieze, M. Cross-Sell: A Fast
Promotion-Tunable Customer-Item Recommendation
Method based on Conditionally Independent Probabilities.
In Proceedings of the Sixth ACM SIGKDD International
Conference on Knowledge Discovery and Data Mining
(Boston, 2000). ACM Press, New York, NY, 437-446.
[10] Lehmann, E.L. Testing Statistical Hypotheses (second
edition). Springer-Verlag, 1986.
[11] Raghavan, P. Social Networks and the Web (Invited Talk).
In Advances in Web Intelligence: Proceedings of the
Second International Atlantic Web Intelligence
Conference, May 2004. Springer-Verlag, Heidelberg.
[12] Salton, G. Automatic Text Processing: The
Transformation, Analysis, and Retrieval of Information by
Computer. Addison Wesley, Reading, MA, 1989.
[13] Sarwar, B.; Karypis, G.; Konstan, J.; Reidl, J. Item-Based
Collaborative Filtering Recommendation Algorithms. In
Proceedings of the Tenth International Conference on the
World Wide Web (WWW10) (Hong Kong, 2001). ACM
Press, New York, NY, 285-295.
[14] Spertus, Ellen. Too Much Information. Orkut Media
Selections, January 19, 2005. Available online at
"http://media.orkut.com/articles/0078.html".
| collaborative filtering;online communities;community;recommender system;social network;social networks;similarity measure;Data mining |
87 | Evaluation and Evolution of a Browse and Search Interface: Relation Browser++ | We present in this paper the design and an evaluation of a novel interface called the Relation Browser++ (RB++) for searching and browsing large information collections. RB++ provides visualized category overviews of an information space and allows dynamic filtering and exploration of the result set by tightly coupling the browsing and searching functions. A user study was conducted to compare the effectiveness, efficiency and user satisfaction of completing various types of searching and browsing using the RB++ interface and a traditional form-fillin interface for a video library. An exploration set of tasks was also included to examine the effectiveness of and user satisfaction with the RB++ when applied to a large federal statistics website. The comparison study strongly supported that RB++ was more effective, efficient, and satisfying for completing data exploration tasks. Based on the results, efforts to automatically populate the underlying database using machine learning techniques are underway. Preliminary implementations for two large-scale federal statistical websites have been installed on government servers for internal evaluation. | INTRODUCTION
The size and breadth of large government websites and digital
libraries makes it difficult for people to quickly grasp what
content is and is not available. Dynamic overviews and previews
of collections can help people decide if it is worthwhile to look
further [8]. As they do look further, it is helpful for their
searching and browsing to quickly see partitions of the
collection and how many items are available in different
partitions. We believe that government website users will be
well-served by highly interactive user interfaces that support
alternative views of the collections, partitions, and results sets.
This paper describes a user interface that aims to provide agile
control for browse and search, reports results from a user study
comparing this interface to a typical WWW search interface, and
describes the ongoing evolution of the interface, including
efforts to automate discovery of topical categories and
assignment of webpages to those categories.
Faceted category structure is one way to help people understand
the composition of an information collection. A faceted
approach provides different ways to slice and dice the
information space, which allows people to look at the
information space from different perspectives. Allowing people
to explore the relationships among different facets may further
deepen their understanding and support new insights. The
relation browser (RB) is an interface which provides an
overview of the collection by displaying different categories and
enables people to explore the relationships among these
categories [13]. The different facet values also serve as
selectable objects that may be employed as query widgets for a
search so that the entire space can quickly be partitioned with
simple mouse moves and with consequent immediate display of
the resulting partition in the results panel. Figure 1 shows the
mock-up interface of an early version of the relation browser in
the domain of U.S. federal statistics websites. The web pages in
the site were sliced into four different facets: by topic, data type,
region, and date. The numbers beside the bars indicate the
number of websites associated with the attributes. By mousing
over any of the topics, the distribution of the specific topics in
other facets are visualized as graphic bars. The underlying data
for this instance of the interface was manually extracted from a
small set of 200 webpages contained in more than 70 federal
statistical agency websites.
Figure 1. Relation Browser (RB)
This early version of the RB has been redesigned based on user
studies and experience applying the interface to more than a
dozen different database instances [13]. The new version is
called RB++, which improves the RB significantly in several
ways (see Figure 2) [23,24]. First, RB++ displays multiple
facets (categories) visually and on the same screen rather than
only two facets with tab options to others. The multiple facets
provide an overview of the information space. The facet values
are visually represented by graphic bars with different lengths,
which indicate the number of items associated with them.
Second, RB++ allows more flexibility to explore relationships.
One of the features of RB++ is that the user can restrict the
information items (partition the information space) by mousing
over any bar; the other bars are then proportionally highlighted to
show the conditional distribution across all the facets. Note that
the previous RB was limited to visualizing pairwise
relationships with one main facet. Third, the RB++ added a
dynamic filtering function for the result set (see Figure 3). Once
the search results are displayed in the table, further filtering can
be done by typing in keywords (string patterns) in the boxes
located immediately above the result fields. The filtering is
dynamic, which means that with each character typed in or
removed from the boxes, RB++ matches the string patterns in
the boxes with the corresponding field of the results. Only the
matched results are then displayed immediately in the results
panel and the matched string in the results is highlighted. This
dynamic feature gives users instant and constant feedback about
the filtered results and how many items they will get with
different keywords, which allows users to try out different
filtering keywords very easily and efficiently. Fourth, the RB++
provides an overview of the results set and tightly couples the
overview and results set panels. The overview panel is
dynamically updated to give users a contextualized overview of
the updated result set. These new features give users more
power to understand and explore the information collection and
give them a flexible and rapid way to find the information they
want. A linguistic model of BNF grammar to model the user
interaction with the interface is provided in section 2.3 to help
reveal the dynamic nature of the RB++.
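To make the coupling of facet bars, partitioning, and keystroke-level filtering concrete, the following sketch shows the core operations on a toy in-memory collection. It is illustrative only: the record fields and sample films are invented, and the actual RB++ implementation is considerably more elaborate.

    from collections import Counter

    # Toy film records; the real RB++ instance is backed by a database.
    films = [
        {"title": "The Matrix", "genre": "Sci-Fi", "decade": "1990s"},
        {"title": "Jaws", "genre": "Thriller", "decade": "1970s"},
        {"title": "Jurassic Park", "genre": "Sci-Fi", "decade": "1990s"},
    ]

    def facet_counts(records, facet):
        """Counts that size the visual bars for one facet."""
        return Counter(r[facet] for r in records)

    def restrict(records, facet, value):
        """Partition the collection by mousing over or clicking one facet value."""
        return [r for r in records if r[facet] == value]

    def dynamic_filter(records, field, pattern):
        """Substring filtering re-applied on every keystroke in a result-field box."""
        pattern = pattern.lower()
        return [r for r in records if pattern in r[field].lower()]

    print(facet_counts(films, "genre"))            # bars for the whole collection
    subset = restrict(films, "decade", "1990s")    # partition the space
    print(facet_counts(subset, "genre"))           # bars updated for the partition
    print(dynamic_filter(subset, "title", "jur"))  # matches anywhere in the field

Because each keystroke only re-runs a substring scan over the already-loaded result set, feedback can be immediate, which is the behavior described above.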
In the paper, we argue that the RB++ interface will bring users
added values beyond simple searching and browsing by in fact
combining these search strategies seamlessly. In the next
section, the methodology of a user study is described. The
results of the user study are then presented and discussed.
Limitations of the interface and current efforts to deal with data
classification are then described.
Figure 2. Initial display of RB++ with visualized category
overview on the top
Figure 3. RB++ with dynamic filtering of the results (note
the changes in the overview and updated results)
METHODOLOGY
The purpose of the user study was two-fold: first, we wanted to
compare the effectiveness, efficiency and user satisfaction
associated with completing certain tasks using RB++ against
that obtained by the traditional form-fillin search interface
(baseline interface). Second, we wanted to explore if the RB++
interface would lead to new interaction patterns with the
interface and if so, to determine what these new interaction
patterns might be.
Seventeen undergraduate and graduate students were recruited
from the UNC-Chapel Hill campus for this study. They came
from various schools and departments. There were 10 females
and 7 males with an age range from 19 to 44 (15 were in their
20s), and all were familiar with web browsers. The participants
were given $15 for their participation. The data from the first
two participants was used as a pilot test, based on which the
experimental protocol and instruments were revised. The data
from the other 15 participants was used for the data analysis.
The study proceeded in two phases, a within subjects
comparison across two different interfaces for the same
database, and an exploratory investigation with a single
government statistical website instance of RB++.
2.1 Phase One: RB++ to Baseline
Comparison for a Film Database
The first phase was a comparison study in which participants
used both the RB++ interface and the baseline interface. The
order of using these interfaces was counter balanced. The
domain of the information items in both interfaces was the video
collection in the UNC-CH library
(http://www.lib.unc.edu/house/mrc/index.html?page=filmography)
that contains about 10,000 films. The library online video
search interface (Filmfinder) was used as our baseline interface.
FilmFinder is a fairly typical www form-fillin search interface
(see Figure 4 and Figure 5), where users can specify queries
within fields such as title, release year, director, description,
genre, origin, and format.
Figure 4. Filmfinder with Form-fillin Interface
Figure 5. Results Page of Filmfinder
All participants were run individually in sessions ranging from
60-90 minutes and all sessions were video taped. The protocol
for the first phase was as follows: First, a demographic pre-test
questionnaire was completed. Second, the participant was
trained for the first interface assigned in their condition. The
training consisted of: an introduction to the features of the
interface, a demo of each type of task with the interface, and
participant practice using the interface until s/he was
comfortable with it. Third, the participant used the interface to
complete 10 search tasks. Tasks were assigned to participants
one by one by handing them pieces of paper for each task. A
timer was used to count time used to complete each task except
for task 10 (see description of task 10 below). After each task, a
short satisfaction questionnaire was completed by the
participant. Fourth, a usability questionnaire was filled out after
the participant finished using the first interface. Next, the
participant was trained for the second interface and the same
procedures were used to complete 10 more search tasks.
Finally, an open-ended questionnaire about perceived
differences and preferences for the two interfaces was
completed.
The tasks were classified into three different types: 1. Simple
look up task. Tasks 1 to 3 in each task set were of this type. For
example, "Check if the movie titled "The Matrix" is in the
library movie collection." 2. Data exploration and analysis tasks.
Tasks 4 to 9 in each task set were of this type. This kind of task
requires users to understand and make sense of the information
collection, which could be a starting point for them to further
their searching or browsing. Two examples of this type are: "In
which decade did "Steven Spielberg" direct the most movies?";
and "How many movie titles does the library hold that were
released in the year 2000?" 3. Task 10 was a free exploration
task, which asked participants to find five favorite videos
without any time constraints. The tasks assigned for the two
interfaces were different but comparable. For example, the
comparable tasks for two interfaces simply substituted different
video titles or directors.
2.2 Phase Two: Explore RB++ for EIA
Website
The second phase was an exploratory study of the RB++ applied
to roughly 10,000 pages in the Energy Information
Administration (EIA) website. Based on intensive manual
inspection of the EIA website, four facets were identified with
associated facet values: fuel type (with the facet values:
alternatives, coal, electricity, natural gas, nuclear, petroleum,
and renewable); geography (state level, regional level, national
level, and international level); sector (commercial, electric
utility, industrial, and residential); and process (delivery,
import/export, price/cost, production, resources/reserves, and
usage). All the facets were displayed on the overview panel (see
Figure 6). The results panel displayed the title, page size, and
description of the web pages.
Figure 6. RB++ interface applied to EIA website
The protocol of the second phase was as follows: First, the
RB++ EIA application was introduced to the participant.
Second, the participant practiced using the interface until s/he
was comfortable with it. Third, the participant used the interface
to complete four tasks (listed at the end of this subsection). The process was recorded and a short
satisfaction questionnaire was filled out after finishing each task.
Fourth, an open-ended questionnaire was completed after
finishing all the tasks. Lastly, the participant was briefly
interviewed.
Data collected included both quantitative and qualitative data
from the two phases of the study. Data collected for the first
phase included performance data (time spent finishing tasks),
error rates of tasks, ratings on the satisfaction questionnaire after
finishing each task, ratings on the usability questionnaire after
finishing each interface, and comments on the open
questionnaire about perceived differences and preferences for
the two interfaces. Data collected for the second phase included
ratings on the satisfaction questionnaire after finishing each task,
comments on the post-session questionnaire and the verbal
comments made in the interview.
Tasks for the second phase study:
1. I want to learn the current status of Chinese nuclear energy.
2. Find the most recent weekly data on petroleum prices in the
USA.
3. Find the statistical data on coal production across different
states in the year 2001.
4. What kinds of information can I and can I not find from the
website?
2.3 Modeling User Interaction
To help us form hypotheses and analyze and make sense of the
experimental data, we employed a linguistic model, called BNF
grammar, to model the user's interaction with the interface. BNF
grammar was originally used by Reisner to describe the dialog
grammar of an interactive graphics system [15], where the user's
interaction with a system was seen as an action language and
BNF grammar was used to formally describe the language. The
BNF grammar consists of a set of rules which define higher
level user behaviors in terms of lower level ones. Each rule can
be composed of terminals, non-terminals, and a set of symbols.
Terminals usually represent the lowest level of user behavior,
such as pressing a key or clicking a mouse button and can not be
further defined. Non-terminals represent a high level abstraction
and can be defined in terms of other non-terminals and
terminals. Terminals are written with upper case letters and non-terminals
are written with lower case letters. The "::=" symbol is
read as " is defined as". The "+", "|" and "-" symbols are used at
the right hand side of rules to connect, respectively, sequence of
user behavior, set of options, and concurrent user behaviors.
With the BNF grammar, we can describe the user's interaction
with the RB++ as follows:
A1 information seeking ::= explore collection(A3) | (formulate
query(A2) + CLICK SEARCH BUTTON + navigate
results(A5))
A2 formulate query ::= (explore collection(A3) + form
query(A4)) | form query(A4)
A3 explore collection ::= (CLICK VISUAL BAR-OBSERVE
VISUAL BAR + explore collection(A3)) | (MOUSE OVER
VISUAL BAR-OBSERVE VISUAL BAR + explore
collection(A3))
A4 form query ::= (CLICK VISUAL BAR + form query(A4)) |
(TYPE IN KEYWORD + form query(A4))
A5 navigate results ::= (browse results(A6) + navigate
results(A5)) | (CLICK RESTART BUTTON + information
seeking(A1))
A6 browse results ::= (show results(A7)-OBSERVE RESULTS
+ browse results(A6)) | (CLICK RESULT ITEM + browse
results(A6)) | (CLICK SORTING BUTTON + browse
results(A6))| (explore results(A8) + browse results(A6))
A7 show results ::= CLICK SIDEBAR
A8 explore results ::= (observe system state(A9) + explore
results(A8)) | (filter results(A10) + explore results(A8))
A9 observe system state ::= (OBSERVE VISUAL BAR +
observe system state(A9)) | (OBSERVE NUMBER + observe
system state (A9))
A10 filter results ::= CLICK VISUAL BAR | MOUSE OVER
VISUAL BAR | TYPE IN KEYWORD
The interaction with baseline interface can be described as:
B1 information seeking ::= formulate query(B2) + CLICK
SEARCH BUTTON + navigate results(B4)
B2 formulate query ::= (TYPE IN KEYWORD + formulate
query(B2)) | (select item(B3) + formulate query(B2))
B3 select item ::= CLICK PULL DOWN MENU + CLICK
ITEM
B4 navigate results ::= (browse results(B5) + navigate
results(B4)) | (CLICK NEW SEARCH LINK + information
seeking(B1))
B5 browse results ::= (show results(B6)-OBSERVE RESULTS
+ browse results(B5)) | (show results(B6)-COUNT RESULTS +
browse results(B5)) | (CLICK ITEM + browse results(B5)) |
(CLICK SORTING LINK + browse results(B5))
B6 show results ::= CLICK SIDEBAR | (CLICK SIDEBAR +
CLICK NEXT PAGE LINK)
The number of rules and options within rules reflects the
interactive nature and number of alternative choices provided by
these two interfaces. Note that we used the terminals such as
CLICK SEARCH BUTTON and CLICK VISUAL BAR which
strictly speaking are not the lowest level of user behaviors,
however, using higher level abstraction as terminals is suitable
for interactive display-based systems [4] and ensures later data
analysis. Many rules are defined recursively and consist of
several options, which essentially reflect the interactivity of the
graphical user interface (GUI). For example, a fairly interactive
user behavior in RB++, "browse results (A6)", consists of either
`OBSERVE RESULTS', `CLICK RESULT ITEM', `CLICK
SORTING BUTTON', explore results, or any combination of
the above.
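As a small illustration of how such a grammar can be used to analyze interaction logs, the sketch below encodes two of the lowest-level RB++ rules (A4 and A10) and checks logged action sequences against them. The recognizer is a simplification written for this discussion (it treats the recursive self-reference in A4 as simple repetition) and is not part of the RB++ software.

    # Terminals are user actions captured from an interaction log.
    FORM_QUERY_ACTIONS = {"CLICK VISUAL BAR", "TYPE IN KEYWORD"}              # rule A4
    FILTER_ACTIONS = {"CLICK VISUAL BAR", "MOUSE OVER VISUAL BAR",
                      "TYPE IN KEYWORD"}                                       # rule A10

    def is_form_query(actions):
        """A4: a non-empty run of query-forming actions."""
        return len(actions) > 0 and all(a in FORM_QUERY_ACTIONS for a in actions)

    def is_filter_results(action):
        """A10: a single filtering action on the results panel."""
        return action in FILTER_ACTIONS

    log = ["CLICK VISUAL BAR", "TYPE IN KEYWORD", "TYPE IN KEYWORD"]
    print(is_form_query(log))                           # True
    print(is_filter_results("MOUSE OVER VISUAL BAR"))   # True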
From the BNF definition, we can see that RB++ is a more
interactive interface than the baseline because it involves more
rules and recursive definitions. However, it is not necessarily a
complicated interface, since the rules for the RB++ interface are
largely composed of a set of options instead of a sequence of
user behaviors, which means that many rules are not executed
for some types of tasks. Based on the BNF grammars, we
hypothesize that for the simple search tasks, the RB++ interface
will not necessarily be significantly different from the baseline
interface, but for complicated searching and browsing tasks, that
require more interaction or collection exploration, the RB++ will
be significantly more effective, efficient, and satisfying than the
baseline. For simple look up type tasks, both interfaces involve
the sequence of user actions: formulate query, CLICK SEARCH
BUTTON, and navigate results (see rule A1 and B1).
Navigation of results is simple for this type of task in that it only
involves the judgment of zero or non-zero results, which is
trivial in both interfaces. Formulation of the query in this case
involves typing in keywords and/or selecting the items from the
interfaces (see rule A2, A4 and B2, B3). Even though item
selection in the baseline interface involves two clicks (see rule
B3) which means a slightly longer time to execute than in
RB++, which only needs one click on the visual bar for item
selection (see rule A4), we expected no significant difference.
For type 2 tasks that involve data exploration and analysis,
interaction with the visual bars of the RB++ interface provides
an effective and efficient interaction style. Two typical
sequences of user behaviors to complete type 2 tasks are:
explore the collection by clicking (or mousing over) and
observing the visual bars (see rule A1 and A3), or formulate a
query and then explore the results by observing the visual bars
(see rule A1, A5, A6, A8 and A9). With the traditional interface
to finish type 2 tasks, users have to formulate a query and then
literally scan and count all the results (see rule B1, B4, B5, and
B6), which is time consuming.
We also hypothesized that users would exhibit rich interaction
during their navigation of the results with RB++ (see rule A5 to
A10). Actions of typing in keywords and clicking visual bars to
filter results (rule A10) would be used frequently and
interchangeably by the users to finish complex search tasks,
especially when large numbers of results are returned.
RESULTS
Table 1 lists the average time (in seconds) across all the
participants to finish tasks 1 to 9 using the two different
interfaces. Notice that we allowed the participants to stop the
task if they felt that the task was too hard or too time-consuming
to finish. It turned out that there were five participants who
stopped task 5 and eight participants stopped task 6 before
completion when they used the FilmFinder. Performance data of
these participants were discarded for the unfinished tasks.
Table 1. Performance data (in seconds)
Task        1 (.879)   2 (.522)   3 (.026)   4 (.000)   5 (.000)
RB++        14.4       16.1       17.0       18.9       15.7
FilmFinder  14.7       14.4       29.7       40.7       204.0

Task        6 (.000)   7 (.000)   8 (.000)   9 (.000)   10
RB++        12.7       13.5       27.1       20.6       N/A
FilmFinder  328.0      87.2       101.3      112.8      N/A
Paired sample t tests on the performance data were computed
and the p values are shown in parentheses beside each task number. We
can see that except for the first two tasks (which were type 1
tasks), the performance differences between the two interfaces
were all statistically significant at the .05 level. Clearly, RB++
supported superior performance for type 2 tasks.
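For reference, paired t tests of this kind can be reproduced with standard statistical software. The sketch below uses SciPy on invented per-participant timings for a single type 2 task; the actual per-participant data from the study are not reproduced here.

    from scipy import stats

    # Invented completion times (seconds) for the 15 participants on one type 2 task.
    rb_times = [15, 18, 20, 17, 22, 19, 16, 21, 18, 20, 17, 19, 23, 18, 20]
    ff_times = [190, 210, 180, 220, 205, 195, 230, 185, 200, 215, 225, 198, 208, 212, 190]

    # Paired (related-samples) t test, since each participant used both interfaces.
    t_stat, p_value = stats.ttest_rel(rb_times, ff_times)
    print(f"t = {t_stat:.2f}, p = {p_value:.4f}")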
We also counted error rates for tasks 1 to 9, which are listed in
Table 2. The error rate was calculated as the number of
participants who gave the wrong answer to the task divided by
the total number of participants. We can see that except for the
8th task, no participants got wrong answers for any of the tasks
using the RB++ interface. The error rates of the baseline
interface were much higher than that of the RB++ interface,
especially for tasks 5, 6, and 7. Notice that we did not consider
those participants who gave up the task 5 or 6 using the
Filmfinder, so the actual denominators used for calculating the
error rates for these tasks were smaller than the total number of
participants.
Table 2. Error rates
Task        1      2      3      4      5
RB++        0/15   0/15   0/15   0/15   0/15
FilmFinder  0/15   0/15   0/15   2/15   5/10

Task        6      7      8      9      10
RB++        0/15   0/15   1/15   0/15   N/A
FilmFinder  4/7    13/15  2/15   5/15   N/A
We also did paired sample t tests on the three satisfaction
questions ("It is easy to use the interface", "I feel satisfied with
the results I got", and "I feel confident with the results I got"),
which were completed after each task. Each response
was given on a 5 point scale from strongly agree (5) to strongly
disagree (1). For the first three tasks (simple lookups), there
were no statistically significant differences between the two
interfaces on any of the 3 questions. On the exploratory tasks
(4-9), statistically significant differences favoring the RB++
were found on all three of the satisfaction questions.
We also compared the results for the seven overall usability
questions on each interface asked after participants had done the
tasks with each interface. Their responses were also given on a
five point scale from strongly agree to strongly disagree. In each
of the seven ratings, statistically significant differences were
found favoring the RB++ interface. Clearly, satisfaction with the
RB++ was greater than that with the Filmfinder.
There were also open-ended questions that the participants
answered after finishing both interfaces. All of the participants
considered the RB++ interface to be easier to use, especially for
the complex searches. They commented on the easy use of the
visual display with the multiple categories, which made it easy
to combine the search criteria and narrow down the data, and
they also thought it was good to be able to manipulate the search
results in multiple ways. Thirteen out of 15 participants
indicated that the RB++ interface gave them more confidence to
complete the tasks. It was easy to go back and forth and to verify
the results and the informative overview panel gave the
participants more confidence to finish tasks. There was one
participant who thought that both interfaces gave equal
confidence and there was one participant who thought that the
Filmfinder interface gave more confidence since he was more
familiar with the Filmfinder and he felt somewhat confused by
the dynamic feature of the RB++, but he acknowledged the
usefulness of the dynamic feature in narrowing the results in the
results panel.
When asked which interface better helped them gain an
understanding of the library movie collection, the RB++
interface was chosen by all the participants. Again the visual
display of the multiple categories and the cross reference of
these categories was considered to be useful features for them to
understand the whole collection. In addition, 10 out of the 15
participants indicated that they were more likely to use the
RB++ interface if both were available. Three participants chose
both interfaces, depending on the type of tasks, and two
participants chose the FilmFinder because of its familiarity and
aesthetic appeal.
For the question on the best thing about the RB++, participants
pointed out the visual display of the multiple categories, its cross
reference ability, the dynamic matching ability of the searching
boxes and the one screen display of the results as opposed to the
multiple page display of results in Filmfinder. As the worst thing
about the RB++, participants indicated that it was not as
aesthetically appealing as the Filmfinder and not quite as
intuitive to use as the Filmfinder. Two participants specifically
mentioned that the constant changing and updating of the
interface made it a bit confusing.
3.2 Phase Two Results
During the second phase we also asked participants to fill out
the satisfaction questionnaires after finishing each task and these
ratings were predictably high (all means above 3.5). More
importantly, participants were also required to answer a set of
open-ended questions after finishing the second phase. For the
first question: "What is your overall impression of this interface
for finding the statistical data?" the overall impression was
positive. Participants used phrases such as "fairly easy to use",
"very helpful in finding the information", "good for quick
searching". There were also a couple of negative comments such
as: "interface still came up with many results after filtering",
"title of the results are not descriptive enough". Only one
participant said that he did not like the interface, because of the
poor categorization of information items under some categories
which made him frustrated.
When answering the second question: "Was it helpful to
understanding what is available at EIA?" all the participants
thought the interface was helpful in that regard, which was
largely attributed to the visual display of the categories, which
gave them a sense of what the website covered. One participant
wished that there were more categories displayed. The
questionnaire also asked if the search boxes were helpful in
completing the tasks. Participants gave high praise to this
feature with comments such as "it's great to be taken directly to
the page but not to have your results lost", "I like the way it
narrows the focus and sort of guides a person to the info
sought", "I didn't have to be concerned with performing a
complex search that may return a null set-the results reflected
my search string instantly". Two participants also commented
that the feature was somewhat limited in use since relevant
information may not appear in the title, or description.
DISCUSSION
The results strongly support that the RB++ interface was more
effective and efficient in completing type 2 tasks than the
baseline interface and that users felt more confident and satisfied
with the RB++ in completing type 2 tasks. The higher
effectiveness, efficiency and satisfaction gained in the RB++
resulted mainly from two aspects: the visual display of the
statistical summary of the information items and the dynamic
keyword searching capability in the results panel. The
visualization bars helped the users understand relative
proportions of items at a glance and use the posting numbers
directly, which is much faster than literally counting. If we look
at the BNF grammar, completion of type 2 tasks in RB++ only
required participants to explore the collection (see first option of
rule A1) without submitting queries to the database and then
observing and counting returned results, which are necessary
steps for the baseline interface to complete the same tasks (see
rule B1).
The dynamic search boxes allow users to do further filtering based
on certain criteria and give users feedback on the filtered results
instantly and continuously, which not only encourages the users
to use this function, but also improves their efficiency. Another
interface feature: displaying all the results on one screen might
also help improve the efficiency and satisfaction, as several
users mentioned.
Several components were tightly coupled in the interface with
displayed search results. The search boxes are tightly coupled
with the results, which means that any input in the search boxes
will invoke instant filtering on the results. The visual bars are
tightly coupled with the results and as such they support two
functions. One is that any operations on the visual bars such as
mouse over and selection, invoke the instant filtering of the
results. The other is that any update of the results also updates
the summary statistics in the visualization on the bars. Coupling
provides users more ways to interact with the system and makes
the interaction more natural and smooth (see rule A8, A9, A10),
which suggests a different interaction style for finding
information than traditional search interfaces which tend to
require discrete, well-defined turn-taking between the user and
system. Traditionally, when users get to the results page, all they
can do is browse the results. If they want to refine the results,
they have to go back to the search interface, type in the refined
keywords, click the search, and browse the new results, which
not only interrupts the normal results browsing interaction, but
also loses the current result set. RB++ encourages users to get an
initial manageable result set and then refine it using one
interface window without the need to go back and forth. Instead
of displaying a set of static results, RB++ offers an effective and
efficient means for users to understand the results by displaying
summary statistics bars which give both visual and numeric data
(see rule A9), and to explore results by providing ways to
dynamically and continuously filter (see rule A10). The result
set can be as large as displaying the whole collection, or as small
as only one item, which depends on the initial query on the
collection. In the second phase study, most of the participants
completed their search tasks without doing a second query on
the initial interface. The study showed that participants could
utilize the initial interface to get an initial result set by selecting
relevant categories and then narrow down results and find
relevant web pages by exploring the results set. Typing in
keywords (or string patterns) in search boxes was found to be
the most frequently used means to explore and filter the result set.
These features were highly appreciated by the participants as
seen from their comments.
RELATED WORK
Many information access interfaces try to provide a starting
point for users by presenting overviews of the collection [9].
Overviews can help users understand the whole collection and
select or eliminate sources from consideration. Overviews can
direct users to subcollections quickly, where they can explore
the details. Usually two types of overviews are employed:
category overview and graphic overview. The category approach
of Yahoo is a good example for the category overview. The
HiBrowse interface for viewing category labels hierarchically
based on the facets is another example [14]. A more recent
information access interface using the category overview by
presenting faceted metadata is the Flamenco interface [21]. The
last two interfaces not only present the category labels to the
users but also inform the users of the number of documents
under each category. However, these interfaces do not allow
users to employ simple mouse moves to quickly explore the
relationship between different categories (or facets). The
Flamenco interface could do this as part of its browsing and
searching efforts, but it requires many commitments from users
such as clicking the category and waiting. The previous version
of the relation browser [13] presented various categories and
allowed users to explore the relations by mouse over operation,
but the interface only allowed the users to mouse over the main
category.
The graphic overview is another type of overview, which
usually employs various information visualization techniques.
Lin [12] used the Kohenen map to visually present a topical
overview of the collection. Each block on the map represents a
subcollection with similar topics which are labeled by one or
two salient words extracted from the subcollection. The
adjacency of blocks indicates the topic similarity between
subcollections. Wise, et al. [19] developed a three dimensional
interface to visually present various topics. Zhang, et al. [25]
exacted the key concepts from a collection and visually
presented the concepts in a spring-embedded graph. Similar
concepts were clustered together and usually represented as
subtopics. The graphical overview is visually appealing, but the
usability of this kind of interface has yet to be explored. 3-D
interfaces are more problematic than 2-D interface in terms of
ease of use and learnability. It seems that textual labels of
category structure are more understandable than graphical
representation.
Some research has been conducted on how to present the
retrieved results in context. Hearst [10] used clustering
techniques to cluster retrieval results on the fly and presented
different clusters with labeled words to the users to help them
understand the results. Chen and Dumais [3] employed
classification techniques to categorize retrieved results based on
the existing category structure and displayed them in
hierarchical categories. Zamir and Etzioni [22] developed an
interface that used on-the-fly clustering of metasearch results.
These interfaces cluster or categorize the retrieval results on the
fly, so scaling is problematic. The RB++ categorizes the
collection offline and uses a uniform category structure to
present overviews of the collection and the retrieval results.
Consequently, RB++ can be scaled up easily. However, because
RB++ depends on the metadata to reside on the client side to
achieve its dynamics, it also suffers a different kind of
scalability limitation. To date, we have had good success and
response with data sets with tens of thousands of records and a
dozen or so facets, however data sets with millions of records
and scores of facets are problematic (various RB++ examples are
available at http://idl.ils.unc.edu/rave/examples.html).
There also has been some work on fast location of specific
information items. Sorting is a prevalent means to help users
locate a specific item. However, users still need to visually go
through a list of items. The Alphaslider [1] is a visual
component to help users quickly locate a known string of items,
but it's not very easy to use, especially for novice users. Besides,
the Alphaslider can only locate the information items based on
the first letter alphabetically. RB++ provides an easy and
flexible way to locate the information items by typing in string
patterns and the patterns can be matched anywhere in the
information items. A similar technique is actually used in some
applications such as the address box of the Internet Explorer
browser, but the patterns are limited to matching from the
beginning of the query string.
Dynamic query was a new type of interface [16] that inspired
the original relation browser work. The interface visually
displays the information items and provides the visual
controlling components to explore the information items by
tightly coupling search and visual display of results. RB++ uses
this design concept, but instead of providing a visual interface,
RB++ employs a more understandable (especially for topical
overview) category structure for the information items.
Moreover, the search box is a very effective and efficient
component for the non-categorized attributes of the items, while
the visual controlling components such as sliders or check boxes
can only be used for controlling categorical attributes of the
items.
Query preview [18], the attribute explorer [17], and other interfaces
[11, 20] provide similar ways to explore the relationships
between different facets of the classification. These interfaces
worked for structured information such as that found in
databases. Of course, all these types of interfaces depend on
good underlying categorization of data. Our long-term goal is to
make the interface work for unstructured textual information.
The search boxes provided are a first step in this direction,
although they are currently limited to search within the fields
specified in the results display.
LIMITATIONS OF RB++ AND ONGOING WORK
One constraint in RB++ is the limited number of categories that
can be displayed, which is affected by two factors. One factor is
screen real estate. We can partially alleviate the issue of screen
real estate by utilizing a Zoomable User Interface (ZUI) to
display the categories. We have experimented with integrating
the Jazz toolkit [2] into the interface and this provides
approximately a ten-fold increase in the number of facet values
that can be supported within each facet, although at the expense
of some of the mouseover dynamics since the mouse must now
be used for zooming as well as normal hovering. Another factor
is size of the memory to hold the client-side distribution counts
data, the number of which increases exponentially with
increased number of displayed categories. One way to solve the
issue is to only calculate part of the distribution counts data,
which hopefully are most frequently used during the user's
interaction with the interface. Other approaches, such as
employing novel data structures, were also suggested by Doan et
al. [5]. However, all these solutions have to sacrifice the
interactivity of the interface that depends on client-side metadata
to support rapid mouse activities. For example, preloading
partial distribution counts data for large numbers of categories
makes some distributional data and visualization unavailable
when users try to re-partition the information space by mouse
moves. At present, the best development path seems to be
hierarchical partitioning of very large information spaces with
multiple RB++ instances that require a new download for each
of the cascading subsets.
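The memory factor can be made concrete with a quick calculation. If every combination of facet values must be preloaded so that any sequence of mouse moves can be answered on the client, the number of count cells is the product of the facet sizes, which grows with each added facet; restricting the preload to pairwise views is far cheaper but, as noted above, sacrifices some of the interactivity. The facet sizes below are those of the EIA instance described earlier.

    from math import prod

    # Facet sizes from the EIA instance: fuel type, geography, sector, process.
    facet_sizes = [7, 4, 4, 6]

    full_joint = prod(facet_sizes)   # cells needed to answer any combination of selections
    pairwise = sum(a * b
                   for i, a in enumerate(facet_sizes)
                   for b in facet_sizes[i + 1:])   # cells for pairwise views only

    print(full_joint)   # 672
    print(pairwise)     # 162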
Another constraint of RB++ is the limited matching function of
the search boxes. The interface currently matches input string
patterns to the corresponding result fields on the lexical level.
Matching in this level is sufficient in many cases such as
matching with fields with numbers or short textual strings such
as titles, but for the fields with more semantic bearing strings
such as descriptions of web pages, a more sophisticated match
function based on semantics might be needed--perhaps a kind
of full-text engine in each text field, although the close coupling
with the facet panel will then be in doubt.
Currently, the interface provides a uniform category structure
for both the entire collection and the retrieved results set. This is
good for its consistency. However, for the retrieved result set, a
more fine-grained category structure might be better for users to
understand it and conduct string searches.
Overall, the RB++ represents an example of a highly interactive
user interface that offers improved performance and satisfaction
for users of large websites and digital libraries. It can find
application as the entry point for a large website or as a way to
work with large results sets returned from search engines as long
as the data is structured in advance.
WORK ON AUTOMATIC CLUSTERING AND CLASSIFICATION
Over the past few years we have created more than two dozen
instances of data sets using RB and RB++, demonstrating its
applicability as an interface to many different types of data. If
the data resides in a database, it is possible to map the scheme to
the underlying RB++ scheme (see [24] for details on the system
architecture) and simply import the data automatically. For
many WWW applications, this is not possible so we have been
developing ways to automate facet discovery and webpage
assignment to those facets. The basic approach is to crawl the
website(s), create term-document representations and then use
machine learning techniques to cluster the webpages and extract
candidate labels, use human judgment to select the best labels,
and then classify the webpages into those categories using the
statistical model produced in the process. See Efron et al [6,7]
for details of the techniques we have applied to date.
To illustrate the current state of development, consider the EIA
example used in the study reported here. The categories
displayed on the EIA RB++ instance were originally created
manually, which certainly did not scale well to many other large
information collections such as various other government
statistical web sites. Figure 7 is a screen shot of the RB++
instance for the Bureau of Labor Statistics web site that uses
topic facets and webpage assignment that were automatically
determined. About 13,000 HTML web pages were crawled from
the web site and a soft clustering method was then applied to those
pages. For each webpage, the statistical model yielded a
probability of belonging to every cluster. Thus, every page
`belongs' to every cluster at some level of probability. The first
topic column contains all the pages with highest probability of
belonging to those facet values. The second column contains all
the pages with the second highest probability values for those
facet values. The third and fourth columns are the months and
years of page update extracted from web pages themselves-facets
that are know to be problematic but used for illustration
since our primary emphasis was on topic discovery. The display
of two topical columns reflects the underlying characteristic of
soft clustering where items can appear in multiple clusters.
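The pipeline sketched above (crawl, term-document representation, soft clustering, per-page cluster probabilities) can be approximated with off-the-shelf tools. The snippet below uses scikit-learn's LDA purely as a stand-in for the clustering techniques actually used (see Efron et al. [6, 7]), and the page texts are invented placeholders for the crawled BLS pages.

    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.decomposition import LatentDirichletAllocation

    # Placeholder page texts standing in for the ~13,000 crawled BLS pages.
    pages = [
        "consumer price index inflation monthly data",
        "unemployment rate labor force statistics by state",
        "occupational employment and wages by industry",
        "producer price index commodities monthly release",
    ]

    X = CountVectorizer(stop_words="english").fit_transform(pages)
    lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)

    # Soft assignment: every page receives a probability for every cluster, so each
    # page has a first-choice and a second-choice topic facet value.
    for page, probs in zip(pages, lda.transform(X)):
        ranked = sorted(range(len(probs)), key=lambda k: -probs[k])
        print(ranked[0], ranked[1], page)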
However, our discussions with BLS staff and demonstrations to
other potential users show that this two-column display is
confusing. Therefore, only the first topical column was kept in a
later version of the RB++ BLS instance (see figure 8). An
additional facet: geographical coverage, which is an important
facet of government statistical web sites, was added in this
version. The assignments were made using a rule-based
classification method to classify the web pages into categories of
various geographical coverages.
Figure 7. First version of RB instance for BLS web site
Figure 8. Latest version of RB instance for BLS web site.
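The geography assignments can be illustrated with a few keyword rules of the kind sketched below; the patterns and labels are assumptions made for illustration, not the rules actually used for the BLS instance.

    import re

    # Ordered rules: the first matching pattern assigns the geographic coverage.
    GEO_RULES = [
        (re.compile(r"\binternational\b|\bworld\b", re.I), "international"),
        (re.compile(r"\bunited states\b|\bnational\b", re.I), "national"),
        (re.compile(r"\bregion(al)?\b", re.I), "regional"),
        (re.compile(r"\b(alabama|alaska|arizona|california|texas)\b", re.I), "state"),
    ]

    def classify_geography(text, default="national"):
        for pattern, label in GEO_RULES:
            if pattern.search(text):
                return label
        return default

    print(classify_geography("Average energy prices in California, July 2004"))  # state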
In addition to the BLS instance, we have used these techniques
to create RB++ instances for the FedStats website and are
working to create instances for other federal statistical agencies.
At present, the FedStats and BLS instances are installed on
FedStats servers and are being tested by federal statistical
agency personnel. We also hope to address the important facet
of time coverage of data itself in future work.
Overall, the RB++ user interface has evolved to a state where it
can be easily applied to many kinds of well-structured data. The
user testing reported here demonstrates the efficacy of the
interface for search and browse tasks that would be very difficult
to execute with SQL syntax or form fillin interfaces. Our
current efforts are to develop techniques for automatically
populating the RB++ database with unstructured data from the
WWW. To date, these efforts have led to promising prototypes
for several federal statistical websites.
ACKNOWLEDGMENTS
This work was supported by NSF Grant EIA 0131824. The
authors wish to thank Paul Solomon and other anonymous
reviewers for their valuable comments on the paper and Tim
Shearer for designing the underlying database structure of the
interface. We also thank Jonathan Elsas and Miles Efron for
developing the clustering software and techniques.
REFERENCES
[1] Ahlberg, C., and Shneiderman, B. The alphaslider: a
compact and rapid selector. In Proceedings of the SIGCHI
conference on human factors in computing systems. Boston,
Massachusetts. 1994.
[2] Bederson, B., Meyer, J., and Good, L. Jazz:An Extensible
zoomable user interface graphics toolkit in java. In ACM
UIST2000, 171-180.
[3] Chen, H, and Dumais, S. Bringing order to the web:
Automatically categorizing searching results. In Proceedings of
the SIGCHI conference on human factors in computing systems.
The Hague, Amsterdam. 2000
[4] Dix, A., Finlay, J., Abowd, G., and Beale, R.
Human-Computer Interaction (2nd Ed.). Prentice Hall, Hillsdale,
NJ, 1998
[5] Doan, K., Plaisant, C., Shneiderman, B., and Bruns, T.
Interface and Data Architecture for Query Preview in
Networked Information Systems. ACM Transactions on
Information Systems, July 1999, Vol. 17, No. 3, 320-341.
[6] Efron, M., Marchionini, G. and Zhang, J. Implications of the
recursive representation problem for automatic concept
identification in on-line governmental information. ASIST SIG-CR
Workshop, Long Beach, CA, 2003.
[7] Efron, M., Elsas, J., Marchionini, G., and Zhang J.. Machine
learning for information architecture in a large governmental
website. Joint Conference on Digital Libraries 2004 (Tuscon,
AZ, June 7-11, 2004)
[8] Greene, S., Marchionini, G., Plaisant, C., & Shneiderman, B.
(2000). Previews and overviews in digital libraries: Designing
surrogates to support visual information seeking. Journal of the
American Society for Information Science, 51(4), 380-393.
[9] Hearst, M. User interfaces and visualization. In Modern
information retrieval. Ed. by Baeza-Yates, R., and Ribeiro-Neto,
B. Chapter 10, ACM Press, New York, NY, 1999 257-324.
[10] Hearst, M. and Pedersen, P. Reexamining the cluster
hypothesis: Scatter/Gather on retrieval results, Proceedings of
19th Annual International ACM/SIGIR Conference, Zurich,
1996
[11] Lanning, T., Wittenburg, K., Heinrichs, M., Fyock, C., and
Li, G. Multidimensional information visualization through
sliding rods. AVI'02 Palermo, Italy.
[12] Lin, X. Map displays for information retrieval. Journal of
the American society for information science. 1997 48(1), 40-54.
[13] Marchionini, G., and Brunk, B. Toward a general relation
browser: A GUI for information architects. Journal of Digital
Information. Article No. 179, 2003-04-09 2003 4(1).
http://jodi.ecs.soton.ac.uk/Articles/v04/i01/Marchionini/
[14] Pollitt A. S., Ellis G. P., and Smith M. P. HIBROWSE for
Bibliographic Databases Journal of Information Science, 1994
20 (6), 413-426.
[15] Reisner, P., Formal Grammar and human factor design of
an interactive graphics system. IEEE Trans. on Software
Engineering, 7(2), 229-240, 1981
[16] Shneiderman, B., Dynamic queries for visual information
seeking, IEEE Software 11, 6 (1994), 70-77.
[17] Spence, R, and Tweedie, L. The attribute explore:
information synthesis via exploration. Interacting with
Computers. 1998 11, 137-146.
[18] Tanin, E., Lotem, A., Haddadin, I., Shneiderman, B.,
Plaisant, C., and Slaughter, L. Facilitating data exploration with
query previews: a study of user performance and preference.
Behaviour & information technology. 2000 19(6). 393-403.
[19] Wise, J., Thomas, J., Pennock, K., Lantrip, D., Pottier, M.
and Schur, A. Visualizing the non-visual: spatial analysis and
interaction with information from text documents. In Proc. of
the Information visualization Symposium 95, pages 51-58. IEEE
Computer Society Press, 1995 | searching;efficiency;user satisfaction;user study;information search;Information Storage and Retrieval;interaction patterns with interface;Interface design;visualization;Relation Browser++;browsing;Browse and Search Interface;RB++;interactive system;Faceted category structure;information browsing;effectiveness;search;dynamic query;facets;browse;Human Factor;visual display;Modeling User Interaction;satisfaction;User interface;category overview;user interface |
88 | Event Threading within News Topics | With the overwhelming volume of online news available today, there is an increasing need for automatic techniques to analyze and present news to the user in a meaningful and efficient manner. Previous research focused only on organizing news stories by their topics into a flat hierarchy. We believe viewing a news topic as a flat collection of stories is too restrictive and inefficient for a user to understand the topic quickly. In this work, we attempt to capture the rich structure of events and their dependencies in a news topic through our event models. We call the process of recognizing events and their dependencies event threading. We believe our perspective of modeling the structure of a topic is more effective in capturing its semantics than a flat list of on-topic stories. We formally define the novel problem, suggest evaluation metrics and present a few techniques for solving the problem. Besides the standard word based features, our approaches take into account novel features such as temporal locality of stories for event recognition and time-ordering for capturing dependencies. Our experiments on a manually labeled data sets show that our models effec-tively identify the events and capture dependencies among them. | INTRODUCTION
News forms a major portion of information disseminated in the
world everyday. Common people and news analysts alike are very
interested in keeping abreast of new things that happen in the news,
but it is becoming very difficult to cope with the huge volumes
of information that arrives each day. Hence there is an increasing
need for automatic techniques to organize news stories in a way that
helps users interpret and analyze them quickly. This problem is addressed
by a research program called Topic Detection and Tracking
(TDT) [3] that runs an open annual competition on standardized
tasks of news organization.
One of the shortcomings of current TDT evaluation is its view of
news topics as flat collection of stories. For example, the detection
task of TDT is to arrange a collection of news stories into clusters
of topics. However, a topic in news is more than a mere collection
of stories: it is characterized by a definite structure of inter-related
events. This is indeed recognized by TDT which defines a topic as
`a set of news stories that are strongly related by some seminal real-world
event' where an event is defined as `something that happens
at a specific time and location' [3]. For example, when a bomb
explodes in a building, that is the seminal event that triggers the
topic. Other events in the topic may include the rescue attempts,
the search for perpetrators, arrests and trials and so on. We see
that there is a pattern of dependencies between pairs of events in
the topic. In the above example, the event of rescue attempts is
`influenced' by the event of bombing and so is the event of search
for perpetrators.
In this work we investigate methods for modeling the structure
of a topic in terms of its events. By structure, we mean not only
identifying the events that make up a topic, but also establishing
dependencies--generally causal--among them. We call the process
of recognizing events and identifying dependencies among
them event threading, an analogy to email threading that shows
connections between related email messages. We refer to the resulting
interconnected structure of events as the event model of the
topic. Although this paper focuses on threading events within an
existing news topic, we expect that such event based dependency
structure more accurately reflects the structure of news than strictly
bounded topics do. From a user's perspective, we believe that our
view of a news topic as a set of interconnected events helps him/her
get a quick overview of the topic and also allows him/her navigate
through the topic faster.
The rest of the paper is organized as follows. In section 2, we
discuss related work. In section 3, we define the problem and use
an example to illustrate threading of events within a news topic. In
section 4, we describe how we built the corpus for our problem.
Section 5 presents our evaluation techniques while section 6 describes
the techniques we use for modeling event structure. In section
7 we present our experiments and results. Section 8 concludes
the paper with a few observations on our results and comments on
future work.
RELATED WORK
The process of threading events together is related to threading
of electronic mail only by name for the most part. Email usually
incorporates a strong structure of referenced messages and consistently
formatted subject headings--though information retrieval
techniques are useful when the structure breaks down [7]. Email
threading captures reference dependencies between messages and
does not attempt to reflect any underlying real-world structure of
the matter under discussion.
Another area of research that looks at the structure within a topic
is hierarchical text classification of topics [9, 6]. The hierarchy
within a topic does impose a structure on the topic, but we do not
know of an effort to explore the extent to which that structure reflects
the underlying event relationships.
Barzilay and Lee [5] proposed a content structure modeling
technique where topics within text are learnt using unsupervised
methods, and a linear order of these topics is modeled using hidden
Markov models. Our work differs from theirs in that we do not constrain
the dependency to be linear. Also their algorithms are tuned
to work on specific genres of topics such as earthquakes, accidents,
etc., while we expect our algorithms to generalize over any topic.
In TDT, researchers have traditionally considered topics as flat clusters
[1]. However, in TDT-2003, a hierarchical structure of
topic detection has been proposed and [2] made useful attempts
to adopt the new structure. However, this structure still did not explicitly
model any dependencies between events.
In a work closest to ours, Makkonen [8] suggested modeling
news topics in terms of its evolving events. However, the paper
stopped short of proposing any models to the problem. Other related
work that dealt with analysis within a news topic includes
temporal summarization of news topics [4].
PROBLEM DEFINITION AND NOTATION
In this work, we have adhered to the definition of event and topic
as defined in TDT. We present some definitions (in italics) and our
interpretations (regular-faced) below for clarity.
1. Story: A story is a news article delivering some information
to users. In TDT, a story is assumed to refer to only a single
topic. In this work, we also assume that each story discusses
a single event. In other words, a story is the smallest atomic
unit in the hierarchy (topic → event → story). Clearly, both
the assumptions are not necessarily true in reality, but we
accept them for simplicity in modeling.
2. Event: An event is something that happens at some specific
time and place [10]. In our work, we represent an event by
a set of stories that discuss it. Following the assumption of
atomicity of a story, this means that any set of distinct events
can be represented by a set of non-overlapping clusters of
news stories.
3. Topic: A set of news stories strongly connected by a seminal
event. We expand on this definition and interpret a topic as
a series of related events. Thus a topic can be represented
by clusters of stories each representing an event and a set of
(directed or undirected) edges between pairs of these clusters
representing the dependencies between these events. We will
describe this representation of a topic in more detail in the
next section.
4. Topic detection and tracking (TDT) :Topic detection detects
clusters of stories that discuss the same topic; Topic
tracking detects stories that discuss a previously known topic [3].
Thus TDT concerns itself mainly with clustering stories into
topics that discuss them.
5. Event threading: Event threading detects events within in a
topic, and also captures the dependencies among the events.
Thus the main difference between event threading and TDT
is that we focus our modeling effort on microscopic events
rather than larger topics. Additionally event threading models
the relatedness or dependencies between pairs of events
in a topic while TDT models topics as unrelated clusters of
stories.
We first define our problem and representation of our model
formally and then illustrate with the help of an example. We are
given a set of n news stories S = {s_1, s_2, ..., s_n} on a given topic and their time of publication. We define a set of events E = {e_1, e_2, ..., e_m} with the following constraints:

e_i ∈ 2^S   (1)
e_i ∩ e_j = ∅  for all i ≠ j   (2)
e_1 ∪ e_2 ∪ ... ∪ e_m = S   (3)

While the first constraint says that each event is an element of the power set of S, the second constraint ensures that each story can belong to at most one event. The last constraint tells us that every story belongs to one of the events in E. In fact this allows us to define a mapping function f from stories to events as follows:

f(s) = e  iff  s ∈ e   (4)
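As a concrete illustration of constraints (1)-(3) and the mapping of equation 4, the following short Python sketch (our own hypothetical encoding, in which stories are identifiers and events are sets of identifiers, not part of any system described here) checks that a proposed set of events partitions the story set and builds the story-to-event map:

```python
def build_story_to_event_map(stories, events):
    """Validate constraints (1)-(3) and return the mapping f: story -> event index.

    `stories` is a set of story identifiers (the set S); `events` is a list of
    sets of story identifiers (the set E). A ValueError is raised if the events
    do not form a partition of the stories.
    """
    f = {}
    for idx, event in enumerate(events):
        if not event <= stories:            # constraint (1): each event is a subset of S
            raise ValueError("event %d contains unknown stories" % idx)
        for story in event:
            if story in f:                  # constraint (2): events must be disjoint
                raise ValueError("story %r appears in two events" % story)
            f[story] = idx
    if set(f) != stories:                   # constraint (3): every story is covered
        raise ValueError("some stories are not assigned to any event")
    return f

# Toy usage: five stories grouped into two events.
print(build_story_to_event_map({0, 1, 2, 3, 4}, [{0, 1}, {2, 3, 4}]))
# {0: 0, 1: 0, 2: 1, 3: 1, 4: 1}
```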
Further, we also define a set of directed edges D ⊆ E × E which denote dependencies between pairs of events. It is important to explain
what we mean by this directional dependency: While the existence
of an edge itself represents relatedness of two events, the
direction could imply causality or temporal-ordering. By causal
dependency we mean that the occurrence of event B is related to
and is a consequence of the occurrence of event A. By temporal ordering
, we mean that event B happened after event A and is related
to A but is not necessarily a consequence of A. For example, consider
the following two events: `plane crash' (event A) and `subsequent investigations' (event B) in a topic on a plane crash incident.
Clearly, the investigations are a result of the crash. Hence an arrow
from A to B falls under the category of causal dependency.
Now consider the pair of events `Pope arrives in Cuba'(event A)
and `Pope meets Castro'(event B) in a topic that discusses Pope's
visit to Cuba. Now events A and B are closely related through their
association with the Pope and Cuba but event B is not necessarily
a consequence of the occurrence of event A. An arrow in such scenario
captures what we call time ordering. In this work, we do not
make an attempt to distinguish between these two kinds of dependencies
and our model treats them as identical. A simpler (and
hence less controversial) choice would be to ignore direction in the
dependencies altogether and consider only undirected edges. This
choice definitely makes sense as a first step but we chose the former
since we believe directional edges make more sense to the user as
they provide a more illustrative flow-chart perspective to the topic.
To make the idea of event threading more concrete, consider the
example of TDT3 topic 30005, titled `Osama bin Laden's Indictment'
(in the 1998 news). This topic has 23 stories which form 5
events. An event model of this topic can be represented as in figure
1. Each box in the figure indicates an event in the topic of Osama's
indictment. The occurrence of event 2, namely `Trial and Indictment
of Osama' is dependent on the event of `evidence gathered
by CIA', i.e., event 1. Similarly, event 2 influences the occurrences
of events 3, 4 and 5, namely `Threats from Militants', `Reactions
from Muslim World' and `announcement of reward'. Thus all the
dependencies in the example are causal.
Extending our notation further, we call an event A a parent of B
and B the child of A, if (A, B) ∈ D. We define an event model M = (E, D) to be a tuple of the set of events and the set of dependencies.
Figure 1: An event model of TDT topic `Osama bin Laden's indictment'. [The figure shows five boxes: (1) Evidence gathered by CIA; (2) Trial and Indictment of Osama; (3) Threats from Islamic militants; (4) Reactions from Muslim world; (5) CIA announces reward. Arrows run from (1) to (2) and from (2) to each of (3), (4) and (5).]
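The same event model can be written down directly as data. The sketch below is our own illustrative Python encoding of Figure 1 (the variable and function names are hypothetical); it records the events E and the directed dependency edges D of the tuple defined above.

```python
# Events of TDT3 topic 30005 (Figure 1), keyed by the numbers used in the text.
events = {
    1: "Evidence gathered by CIA",
    2: "Trial and Indictment of Osama",
    3: "Threats from Islamic militants",
    4: "Reactions from Muslim world",
    5: "CIA announces reward",
}

# Directed dependency edges (parent, child); all of them are causal in this topic.
dependencies = {(1, 2), (2, 3), (2, 4), (2, 5)}

def parents(event_id):
    """Return the parent events of `event_id` under the edge set D."""
    return [u for (u, v) in dependencies if v == event_id]

print(parents(2))   # [1]: the indictment depends on the evidence gathered by the CIA
```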
Event threading is strongly related to topic detection and tracking
, but also different from it significantly. It goes beyond topics,
and models the relationships between events. Thus, event threading
can be considered as a further extension of topic detection and
tracking and is more challenging due to at least the following difficulties
.
1. The number of events is unknown.
2. The granularity of events is hard to define.
3. The dependencies among events are hard to model.
4. Since it is a brand new research area, no standard evaluation metrics or benchmark data are available.
In the next few sections, we will describe our attempts to tackle
these problems.
4. LABELED DATA
We picked 28 topics from the TDT2 corpus and 25 topics from
the TDT3 corpus. The criterion we used for selecting a topic is that
it should contain at least 15 on-topic stories from CNN headline
news. If the topic contained more than 30 CNN stories, we picked
only the first 30 stories to keep the topic short enough for annotators. The reason for choosing only CNN as the source is that the
stories from this source tend to be short and precise and do not tend
to digress or drift too far away from the central theme. We believe
modeling such stories would be a useful first step before dealing
with more complex data sets.
We hired an annotator to create truth data. Annotation includes
defining the event membership for each story and also the dependencies
. We supervised the annotator on a set of three topics that
we did our own annotations on and then asked her to annotate the
28 topics from TDT2 and 25 topics from TDT3.
In identifying events in a topic, the annotator was asked to broadly
follow the TDT definition of an event, i.e., `something that happens
at a specific time and location'. The annotator was encouraged to
merge two events A and B into a single event C if any of the stories
discusses both A and B. This is to satisfy our assumption that
each story corresponds to a unique event. The annotator was also
encouraged to avoid singleton events, events that contain a single
news story, if possible. We realized from our own experience that
people differ in their perception of an event especially when the
number of stories in that event is small. As part of the guidelines,
we instructed the annotator to assign titles to all the events in each
topic. We believe that this would help make her understanding of
the events more concrete. We however, do not use or model these
titles in our algorithms.
In defining dependencies between events, we imposed no restrictions
on the graph structure. Each event could have single, multiple
or no parents. Further, the graph could have cycles or orphan-nodes
. The annotator was however instructed to assign a dependency
from event A to event B if and only if the occurrence of B
is `either causally influenced by A or is closely related to A and
follows A in time'.
From the annotated topics, we created a training set of 26 topics
and a test set of 27 topics by merging the 28 topics from TDT2 and
25 from TDT3 and splitting them randomly. Table 1 shows that the
training and test sets have fairly similar statistics.
Feature                          Training set   Test set
Num. topics                      26             27
Avg. Num. Stories/Topic          28.69          26.74
Avg. Doc. Len.                   64.60          64.04
Avg. Num. Stories/Event          5.65           6.22
Avg. Num. Events/Topic           5.07           4.29
Avg. Num. Dependencies/Topic     3.07           2.92
Avg. Num. Dependencies/Event     0.61           0.68
Avg. Num. Days/Topic             30.65          34.48

Table 1: Statistics of annotated data
5. EVALUATION
A system can generate some event model M' = (E', D') using certain algorithms, which is usually different from the true model M = (E, D) (we assume the annotator did not make any mistakes). Comparing a system event model M' with the true model M requires comparing the entire event models including their dependency structure, and different event granularities may bring a huge discrepancy between M' and M. This is certainly non-trivial
as even testing whether two graphs are isomorphic has no known
polynomial time solution. Hence instead of comparing the actual
structure we examine a pair of stories at a time and verify if the
system and true labels agree on their event-memberships and dependencies
. Specifically, we compare two kinds of story pairs:
Cluster pairs P_C(M): These are the complete set of unordered pairs (s_i, s_j) of stories s_i and s_j that fall within the same event given a model M. Formally,

P_C(M) = {(s_i, s_j) | s_i, s_j ∈ S, i ≠ j, f(s_i) = f(s_j)}   (5)

where f is the story-to-event mapping function in M as defined in equation 4.
Dependency pairs P_D(M): These are the set of all ordered pairs of stories (s_i, s_j) such that there is a dependency from the event of s_i to the event of s_j in the model M.

P_D(M) = {(s_i, s_j) | (f(s_i), f(s_j)) ∈ D}   (6)
Note the story pair is ordered here, so (s_i, s_j) is not equivalent to (s_j, s_i).

Figure 2: Evaluation measures. [The figure compares a true event model and a system event model over five stories A-E, listing the cluster pairs and dependency pairs of each; in the illustrated example the cluster precision and cluster recall are both 1/2, the dependency precision is 2/4 and the dependency recall is 2/6.]

In our evaluation, a correct pair with the wrong
direction will be considered a mistake. As we mentioned earlier
in section 3, ignoring the direction may make the problem
simpler, but we will lose the expressiveness of our representation
.
Given these two sets of story pairs corresponding to the true event model M and the system event model M', we define recall and precision for each category as follows.

Cluster Precision (CP): It is the probability that two randomly selected stories s_i and s_j are in the same true event given that they are in the same system event.

CP = |P_C(M) ∩ P_C(M')| / |P_C(M')|   (7)

where P_C(M') is computed using f', the story-event mapping function corresponding to the model M'.
Cluster Recall (CR): It is the probability that two randomly selected stories s_i and s_j are in the same system event given that they are in the same true event.

CR = |P_C(M) ∩ P_C(M')| / |P_C(M)|   (8)
Dependency Precision (DP): It is the probability that there is a dependency between the events of two randomly selected stories s_i and s_j in the true model M given that they have a dependency in the system model M'. Note that the direction of the dependency is important in the comparison.

DP = |P_D(M) ∩ P_D(M')| / |P_D(M')|   (9)
Dependency Recall (DR): It is the probability that there is a dependency between the events of two randomly selected stories s_i and s_j in the system model M' given that they have a dependency in the true model M. Again, the direction of the dependency is taken into consideration.

DR = |P_D(M) ∩ P_D(M')| / |P_D(M)|   (10)
The measures are illustrated by an example in figure 2. We also combine these measures using the well-known F1-measure commonly used in text classification and other research areas, as shown below.

CF = 2·CP·CR / (CP + CR),   DF = 2·DP·DR / (DP + DR),   JF = 2·CF·DF / (CF + DF)   (11)

where CF and DF are the cluster and dependency F1-measures respectively and JF is the joint F1-measure that we use to measure the overall performance.
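To make the computation of these measures concrete, the sketch below gives our own illustrative Python implementation (all function and variable names are ours, not the paper's). Each model is represented by a story-to-event mapping and a set of directed event edges, from which the cluster pairs, dependency pairs, and the measures of equations 7-11 follow.

```python
from itertools import combinations

def cluster_pairs(story_to_event):
    """Unordered story pairs that fall within the same event of a model."""
    return {frozenset(pair) for pair in combinations(story_to_event, 2)
            if story_to_event[pair[0]] == story_to_event[pair[1]]}

def dependency_pairs(story_to_event, event_edges):
    """Ordered story pairs whose events are linked by a directed dependency."""
    return {(si, sj)
            for si in story_to_event for sj in story_to_event
            if (story_to_event[si], story_to_event[sj]) in event_edges}

def f1(precision, recall):
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

def evaluate(true_map, true_edges, sys_map, sys_edges):
    """Return (CP, CR, DP, DR, CF, DF, JF) for a system model against the truth."""
    tc, sc = cluster_pairs(true_map), cluster_pairs(sys_map)
    td, sd = dependency_pairs(true_map, true_edges), dependency_pairs(sys_map, sys_edges)
    cp = len(tc & sc) / len(sc) if sc else 0.0   # cluster precision
    cr = len(tc & sc) / len(tc) if tc else 0.0   # cluster recall
    dp = len(td & sd) / len(sd) if sd else 0.0   # dependency precision (direction matters)
    dr = len(td & sd) / len(td) if td else 0.0   # dependency recall
    cf, df = f1(cp, cr), f1(dp, dr)
    return cp, cr, dp, dr, cf, df, f1(cf, df)

# Toy example: stories A-E; the system merges event {C} into {A,B} and links it to {D,E}.
true_map = {"A": 1, "B": 1, "C": 2, "D": 3, "E": 3}
true_edges = {(1, 2), (2, 3)}
sys_map = {"A": 1, "B": 1, "C": 1, "D": 3, "E": 3}
sys_edges = {(1, 3)}
print(evaluate(true_map, true_edges, sys_map, sys_edges))
# CP=0.50, CR=1.00, DP=0.33, DR=0.50, CF=0.67, DF=0.40, JF=0.50
```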
6. TECHNIQUES
The task of event modeling can be split into two parts: clustering
the stories into unique events in the topic and constructing dependencies
among them. In the following subsections, we describe
techniques we developed for each of these sub-tasks.
6.1 Clustering
Each topic is composed of multiple events, so stories must be
clustered into events before we can model the dependencies among
them. For simplicity, all stories in the same topic are assumed to
be available at one time, rather than coming in a text stream. This
task is similar to traditional clustering but features other than word
distributions may also be critical in our application.
In many text clustering systems, the similarity between two stories
is the inner product of their tf-idf vectors, hence we use it as
one of our features. Stories in the same event tend to follow temporal
locality, so the time stamp of each story can be a useful feature.
Additionally, named-entities such as person and location names are
another obvious feature when forming events. Stories in the same
event tend to be related to the same person(s) and locations(s).
In this subsection, we present an agglomerative clustering algorithm
that combines all these features. In our experiments, however
, we study the effect of each feature on the performance separately using modified versions of this algorithm.
6.1.1 Agglomerative clustering with time decay (ACDT)
We initialize our events to singleton events (clusters), i.e., each
cluster contains exactly one story. So the similarity between two
events, to start with, is exactly the similarity between the corresponding
stories. The similarity sim(s_1, s_2) between two stories s_1 and s_2 is given by the following formula:

sim(s_1, s_2) = α·cos(s_1, s_2) + β·Loc(s_1, s_2) + γ·Per(s_1, s_2)   (12)

Here α, β and γ are the weights on the different features. In this work, we determined them empirically, but in the future, one can consider more sophisticated learning techniques to determine them. cos(s_1, s_2) is the cosine similarity of the term vectors. Loc(s_1, s_2) is 1 if there is some location that appears in both stories, otherwise it is 0. Per(s_1, s_2) is similarly defined for person names.
We use time decay when calculating the similarity of story pairs, i.e., the larger the time difference between two stories, the smaller their similarity. The time period of each topic differs a lot, from a few days to a few months, so we normalize the time difference using the whole duration of that topic. The time decay adjusted similarity sim_td(s_1, s_2) is given by

sim_td(s_1, s_2) = sim(s_1, s_2) · e^(−λ·|t_1 − t_2| / T)   (13)

where t_1 and t_2 are the time stamps of stories 1 and 2 respectively, T is the time difference between the earliest and the latest story in the given topic, and λ is the time decay factor.
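A minimal sketch of this story-pair similarity is given below. It assumes precomputed tf-idf vectors and named-entity sets for each story, and an exponential form of the time decay matching equation 13; the function names, default weights, and data layout are our own assumptions for illustration, not the paper's implementation.

```python
import math

def cosine(u, v):
    """Cosine similarity of two sparse vectors given as {term: weight} dicts."""
    dot = sum(w * v.get(t, 0.0) for t, w in u.items())
    nu = math.sqrt(sum(w * w for w in u.values()))
    nv = math.sqrt(sum(w * w for w in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def story_similarity(s1, s2, alpha=1.0, beta=0.0, gamma=0.0):
    """Feature-combined similarity of equation 12; the weights are tuned empirically."""
    cos = cosine(s1["tfidf"], s2["tfidf"])
    loc = 1.0 if s1["locations"] & s2["locations"] else 0.0
    per = 1.0 if s1["persons"] & s2["persons"] else 0.0
    return alpha * cos + beta * loc + gamma * per

def time_decayed_similarity(s1, s2, topic_duration, decay=1.0, **weights):
    """Time-decay adjusted similarity in the spirit of equation 13.

    The similarity is damped with the time gap between the two stories,
    normalized by the topic duration T; `decay` is the time decay factor.
    """
    gap = abs(s1["time"] - s2["time"]) / float(topic_duration or 1)
    return story_similarity(s1, s2, **weights) * math.exp(-decay * gap)

# Toy usage with two hypothetical stories published 10 days apart in a 30-day topic.
a = {"tfidf": {"bomb": 0.9, "trial": 0.2}, "locations": {"Nairobi"}, "persons": set(), "time": 0}
b = {"tfidf": {"trial": 0.8, "verdict": 0.5}, "locations": set(), "persons": set(), "time": 10}
print(time_decayed_similarity(a, b, topic_duration=30))
```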
In each iteration, we find the most similar event pair and merge them. We have three different ways to compute the similarity sim(e_u, e_v) between two events e_u and e_v:

Average link: the similarity is the average of the similarities of all pairs of stories between e_u and e_v:

sim(e_u, e_v) = (1 / (|e_u|·|e_v|)) · Σ_{s_i ∈ e_u} Σ_{s_j ∈ e_v} sim_td(s_i, s_j)   (14)

Complete link: the similarity between two events is given by the smallest of the pair-wise similarities:

sim(e_u, e_v) = min_{s_i ∈ e_u, s_j ∈ e_v} sim_td(s_i, s_j)   (15)

Single link: the similarity is given by the best similarity between all pairs of stories:

sim(e_u, e_v) = max_{s_i ∈ e_u, s_j ∈ e_v} sim_td(s_i, s_j)   (16)

This process continues until the maximum similarity falls below a threshold or the number of clusters is smaller than a given number.
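The merging loop itself can be sketched as follows (our own simplified rendering, not the authors' code). It takes a precomputed matrix of time-decayed story similarities, so that any of the three event-similarity definitions of equations 14-16 can be plugged in via the `linkage` argument.

```python
def agglomerative_cluster(n_stories, pair_sim, threshold, linkage="average", min_clusters=1):
    """Greedy agglomerative clustering of stories 0..n_stories-1 into events.

    `pair_sim[i][j]` holds the (time-decayed) similarity of stories i and j.
    Merging stops when the best event-pair similarity drops below `threshold`
    or when only `min_clusters` clusters remain.
    """
    def event_sim(eu, ev):
        sims = [pair_sim[i][j] for i in eu for j in ev]
        if linkage == "average":
            return sum(sims) / len(sims)        # equation 14
        if linkage == "complete":
            return min(sims)                    # equation 15
        return max(sims)                        # single link, equation 16

    clusters = [[i] for i in range(n_stories)]  # start from singleton events
    while len(clusters) > min_clusters:
        best, pair = None, None
        for a in range(len(clusters)):
            for b in range(a + 1, len(clusters)):
                s = event_sim(clusters[a], clusters[b])
                if best is None or s > best:
                    best, pair = s, (a, b)
        if best is None or best < threshold:
            break
        a, b = pair
        clusters[a].extend(clusters[b])         # merge the most similar event pair
        del clusters[b]
    return clusters

# Toy run: four stories where 0/1 and 2/3 are highly similar.
sim = [[1.0, 0.9, 0.1, 0.2],
       [0.9, 1.0, 0.2, 0.1],
       [0.1, 0.2, 1.0, 0.8],
       [0.2, 0.1, 0.8, 1.0]]
print(agglomerative_cluster(4, sim, threshold=0.5))   # [[0, 1], [2, 3]]
```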
6.2 Dependency modeling
Capturing dependencies is an extremely hard problem because
it may require a `deeper understanding' of the events in question.
A human annotator decides on dependencies not just based on the
information in the events but also based on his/her vast repertoire
of domain-knowledge and general understanding of how things operate
in the world. For example, in Figure 1 a human knows `Trial
and indictment of Osama' is influenced by `Evidence gathered by
CIA' because he/she understands the process of law in general.
We believe a robust model should incorporate such domain knowledge
in capturing dependencies, but in this work, as a first step, we
will rely on surface-features such as time-ordering of news stories
and word distributions to model them. Our experiments in later sections
demonstrate that such features are indeed useful in capturing
dependencies to a large extent.
In this subsection, we describe the models we considered for capturing
dependencies. In the rest of the discussion in this subsection,
we assume that we are already given the mapping f and we focus only on modeling the edges D. First we define a couple of features that the following models will employ.

We define a 1-1 time-ordering function t that sorts stories in ascending order by their time of publication. Now, the event-time-ordering function t_e is defined as follows:

t_e(e_u) < t_e(e_v)  iff  min_{s ∈ e_u} t(s) < min_{s ∈ e_v} t(s)   (17)

In other words, t_e time-orders events based on the time-ordering of their respective first stories.

We will also use the average cosine similarity between two events as a feature, defined as follows:

sim_cos(e_u, e_v) = (1 / (|e_u|·|e_v|)) · Σ_{s_i ∈ e_u} Σ_{s_j ∈ e_v} cos(s_i, s_j)   (18)
6.2.1 Complete-Link model
In this model, we assume that there are dependencies between all
pairs of events. The direction of dependency is determined by the
time-ordering of the first stories in the respective events. Formally,
the system edges are defined as follows:

D' = {(e_u, e_v) | t_e(e_u) < t_e(e_v)}   (19)

where t_e is the event-time-ordering function. In other words, the dependency edge is directed from event e_u to event e_v if the first story in event e_u is earlier than the first story in event e_v. We point
out that this is not to be confused with the complete-link algorithm
in clustering. Although we use the same names, it will be clear
from the context which one we refer to.
6.2.2 Simple Thresholding
This model is an extension of the complete-link model with an additional constraint that there is a dependency between two events e_u and e_v only if the average cosine similarity between event e_u and event e_v is greater than a threshold T_d. Formally,

D' = {(e_u, e_v) | t_e(e_u) < t_e(e_v),  sim_cos(e_u, e_v) > T_d}   (20)
6.2.3 Nearest Parent Model
In this model, we assume that each event can have at most one
parent. We define the set of dependencies as follows:

D' = {(e_u, e_v) | e_u immediately precedes e_v under t_e,  sim_cos(e_u, e_v) > T_d}   (21)

Thus, for each event e_v, the nearest parent model considers only the event preceding it as defined by t_e as a potential candidate. The candidate is assigned as the parent only if the average similarity exceeds a pre-defined threshold T_d.
6.2.4 Best Similarity Model
This model also assumes that each event can have at most one
parent. An event e_v is assigned a parent e_u if and only if e_u is the most similar earlier event to e_v and the similarity exceeds a threshold T_d. Mathematically, this can be expressed as:

D' = {(e_u, e_v) | e_u = argmax_{e: t_e(e) < t_e(e_v)} sim_cos(e, e_v),  sim_cos(e_u, e_v) > T_d}   (22)
6.2.5 Maximum Spanning Tree model
In this model, we first build a maximum spanning tree (MST) using
a greedy algorithm on the following fully connected weighted,
undirected graph whose vertices are the events and whose edges
are defined as follows:

G = (E, W),  where the weight of the undirected edge {e_u, e_v} is  w(e_u, e_v) = sim_cos(e_u, e_v)   (23)

Let M_G be the set of edges in the maximum spanning tree of G. Now our directed dependency edges D' are defined as follows:

D' = {(e_u, e_v) | {e_u, e_v} ∈ M_G,  t_e(e_u) < t_e(e_v),  sim_cos(e_u, e_v) > T_d}   (24)
Thus in this model, we assign dependencies between the most similar
events in the topic.
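To make the differences between these models concrete, the sketch below gives one possible Python rendering of them (our own illustration with hypothetical helper names, not the paper's code). Events are lists of story indices, `story_time` holds publication times, and `pair_cos` holds the story-pair cosine similarities used for the average event similarity of equation 18.

```python
def avg_cos(eu, ev, pair_cos):
    """Average cosine similarity between two events (equation 18)."""
    return sum(pair_cos[i][j] for i in eu for j in ev) / float(len(eu) * len(ev))

def time_order(events, story_time):
    """Event indices sorted by the publication time of each event's earliest story."""
    return sorted(range(len(events)), key=lambda e: min(story_time[s] for s in events[e]))

def complete_link(events, story_time, pair_cos):
    """Complete-link model (equation 19): edge from every earlier event to every later one."""
    return simple_thresholding(events, story_time, pair_cos, thresh=float("-inf"))

def simple_thresholding(events, story_time, pair_cos, thresh):
    """Keep a complete-link edge only if the average similarity exceeds thresh (equation 20)."""
    order = time_order(events, story_time)
    return {(order[i], order[j])
            for i in range(len(order)) for j in range(i + 1, len(order))
            if avg_cos(events[order[i]], events[order[j]], pair_cos) > thresh}

def nearest_parent(events, story_time, pair_cos, thresh):
    """Each event may take only the event immediately preceding it as parent (equation 21)."""
    order = time_order(events, story_time)
    return {(order[i - 1], order[i]) for i in range(1, len(order))
            if avg_cos(events[order[i - 1]], events[order[i]], pair_cos) > thresh}

def best_similarity(events, story_time, pair_cos, thresh):
    """Each event takes its most similar earlier event as parent, if similar enough (eq. 22)."""
    order, edges = time_order(events, story_time), set()
    for i in range(1, len(order)):
        score, parent = max((avg_cos(events[order[k]], events[order[i]], pair_cos), order[k])
                            for k in range(i))
        if score > thresh:
            edges.add((parent, order[i]))
    return edges

def maximum_spanning_tree(events, story_time, pair_cos, thresh):
    """Greedy (Kruskal-style) maximum spanning tree over events; edges are directed by time."""
    order = time_order(events, story_time)
    rank = {e: r for r, e in enumerate(order)}
    comp = list(range(len(events)))                     # union-find by component label
    def find(x):
        return x if comp[x] == x else find(comp[x])
    weighted = sorted(((avg_cos(events[a], events[b], pair_cos), a, b)
                       for a in range(len(events)) for b in range(a + 1, len(events))),
                      reverse=True)
    edges = set()
    for w, a, b in weighted:
        ra, rb = find(a), find(b)
        if ra != rb:                                    # the edge joins two components
            comp[ra] = rb
            if w > thresh:
                u, v = (a, b) if rank[a] < rank[b] else (b, a)
                edges.add((u, v))
    return edges

# Toy usage: three events over five stories with publication times and cosine similarities.
events = [[0, 1], [2, 3], [4]]
times = [0, 1, 5, 6, 9]
cos_sim = [[1.0, 0.9, 0.4, 0.3, 0.1],
           [0.9, 1.0, 0.5, 0.4, 0.2],
           [0.4, 0.5, 1.0, 0.8, 0.6],
           [0.3, 0.4, 0.8, 1.0, 0.5],
           [0.1, 0.2, 0.6, 0.5, 1.0]]
print(nearest_parent(events, times, cos_sim, thresh=0.3))   # {(0, 1), (1, 2)}
```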
7. EXPERIMENTS
Our experiments consist of three parts. First we modeled only the event clustering part (defining the mapping function f) using
clustering algorithms described in section 6.1. Then we modeled
only the dependencies by providing to the system the true clusters
and running only the dependency algorithms of section 6.2. Finally,
we experimented with combinations of clustering and dependency
algorithms to produce the complete event model. This way of experimentation
allows us to compare the performance of our algorithms
in isolation and in association with other components. The
following subsections present the three parts of our experimentation
.
7.1 Clustering
We have tried several variations of the ACDT algorithm to study
the effects of various features on the clustering performance. All
the parameters are learned by tuning on the training set. We also
tested the algorithms on the test set with parameters fixed at their
optimal values learned from training. We used agglomerative clustering based on only cosine similarity as our clustering baseline. The results on the training and test sets are in Tables 2 and 3 respectively. We use the Cluster F1-measure (CF) averaged over all topics as our evaluation criterion.

Model                    best T   CP     CR     CF     P-value
cos+1-lnk                0.15     0.41   0.56   0.43   -
cos+all-lnk              0.00     0.40   0.62   0.45   -
cos+Loc+avg-lnk          0.07     0.37   0.74   0.45   -
cos+Per+avg-lnk          0.07     0.39   0.70   0.46   -
cos+TD+avg-lnk           0.04     0.45   0.70   0.53   2.9e-4*
cos+N(T)+avg-lnk         0.00     0.41   0.62   0.48   7.5e-2
cos+N(T)+T+avg-lnk       0.03     0.42   0.62   0.49   2.4e-2*
cos+TD+N(T)+avg-lnk      0.00     0.44   0.66   0.52   7.0e-3*
cos+TD+N(T)+T+avg-lnk    0.03     0.47   0.64   0.53   1.1e-3*
Baseline (cos+avg-lnk)   0.05     0.39   0.67   0.46   -

Table 2: Comparison of agglomerative clustering algorithms (training set)
Model                    CP     CR     CF     P-value
cos+1-lnk                0.43   0.49   0.39   -
cos+all-lnk              0.43   0.62   0.47   -
cos+Loc+avg-lnk          0.37   0.73   0.45   -
cos+Per+avg-lnk          0.44   0.62   0.45   -
cos+TD+avg-lnk           0.48   0.70   0.54   0.014*
cos+N(T)+avg-lnk         0.41   0.71   0.51   0.31
cos+N(T)+T+avg-lnk       0.43   0.69   0.52   0.14
cos+TD+N(T)+avg-lnk      0.43   0.76   0.54   0.025*
cos+TD+N(T)+T+avg-lnk    0.47   0.69   0.54   0.0095*
Baseline (cos+avg-lnk)   0.44   0.67   0.50   -

Table 3: Comparison of agglomerative clustering algorithms (test set)
A P-value marked with a * means that the method is a statistically significant improvement over the baseline (95% confidence level, one-tailed T-test). The methods shown in Tables 2 and 3 are:
Baseline: tf-idf vector weights, cosine similarity, and average link in clustering; only the cosine term of equation 12 is used and no time decay is applied in equation 13. This F-value is the maximum obtained by tuning the threshold.

cos+1-lnk: Single link comparison (see equation 16) is used, where the similarity of two clusters is the maximum over all story pairs; other configurations are the same as the baseline run.

cos+all-lnk: The complete link algorithm of equation 15 is used. Similar to single link, but it takes the minimum similarity over all story pairs.

cos+Loc+avg-lnk: Location names are used when calculating similarity, i.e., a non-zero weight is placed on the location feature in equation 12. All algorithms starting from this one use average link (equation 14), since single link and complete link do not show any improvement in performance.

cos+Per+avg-lnk: A non-zero weight is placed on the person-name feature in equation 12, i.e., we put some weight on person names in the similarity.

cos+TD+avg-lnk: A non-zero time decay coefficient is used in equation 13, which means the similarity between two stories is decayed when they lie at different ends of the topic.

cos+N(T)+avg-lnk: Use the number of true events to control the agglomerative clustering algorithm. When the number of clusters reaches the number of true events, stop merging clusters.

cos+N(T)+T+avg-lnk: Similar to N(T), but agglomeration also stops if the maximal similarity falls below the threshold.

cos+TD+N(T)+avg-lnk: Similar to N(T), but the similarities are decayed as in equation 13.

cos+TD+N(T)+T+avg-lnk: Similar to TD+N(T), but merging halts when the maximal similarity is smaller than the threshold.
Our experiments demonstrate that single link and complete link
similarities perform worse than average link, which is reasonable
since average link is less sensitive to one or two story pairs. We
had expected locations and person names to improve the result, but
it is not the case. Analysis of topics shows that many on-topic
stories share the same locations or persons irrespective of the event
they belong to, so these features may be more useful in identifying
topics rather than events. Time decay is successful because events
are temporally localized, i.e., stories discussing the same event tend
to be adjacent to each other in terms of time. Also we noticed
that providing the number of true events improves the performance
since it guides the clustering algorithm to get correct granularity.
However, for most applications, it is not available. We used it only
as a "cheat" experiment for comparison with other algorithms. On
the whole, time decay proved to be the most powerful feature besides
cosine similarity on both training and test sets.
7.2 Dependencies
In this subsection, our goal is to model only dependencies. We
use the true mapping function f and, by implication, the true events E. We build our dependency structure D' using all five models described in section 6.2. We first train our models on the 26 training topics. Training involves learning the best threshold T_d for each of the models. We then test the performances of all the
trained models on the 27 test topics. We evaluate our performance
using the average values of Dependency Precision (DP), Dependency
Recall (DR) and Dependency F-measure (DF). We consider
the complete-link model to be our baseline since for each event, it
trivially considers all earlier events to be parents.
Table 4 lists the results on the training set. We see that while all the algorithms except MST outperform the baseline complete-link algorithm, only the Nearest Parent algorithm is a statistically significant improvement over the baseline in terms of its DF-value, using a one-tailed paired T-test at 95% confidence level.
Model             best T_d   DP     DR     DF     P-value
Nearest Parent    0.025      0.55   0.62   0.56   0.04*
Best Similarity   0.02       0.51   0.62   0.53   0.24
MST               0.0        0.46   0.58   0.48   -
Simple Thresh.    0.045      0.45   0.76   0.52   0.14
Complete-link     0.0        0.36   0.93   0.48   -

Table 4: Results on the training set. Best T_d is the optimal value of the threshold. * indicates that the corresponding model is statistically significant compared to the baseline using a one-tailed, paired T-test at 95% confidence level.
In table 5 we present the comparison of the models on the test
set. Here, we do not use any tuning but set the threshold to the
corresponding optimal values learned from the training set. The results
throw some surprises: The nearest parent model, which was
significantly better than the baseline on training set, turns out to be
worse than the baseline on the test set. However, all the other models are better than the baseline, including Best Similarity, which is statistically significantly better. Notice that all the models that perform
better than the baseline in terms of DF, actually sacrifice their recall
performance compared to the baseline, but improve on their
precision substantially thereby improving their performance on the
DF-measure.
We notice that both simple-thresholding and best similarity are
better than the baseline on both training and test sets although the
improvement is not significant. On the whole, we observe that the
surface-level features we used capture the dependencies to a reasonable
level achieving a best value of 0.72 DF on the test set.
Although there is a lot of room for improvement, we believe this is
a good first step.
Model                      DP     DR     DF     P-value
Nearest Parent             0.61   0.60   0.60   -
Best Similarity            0.71   0.74   0.72   0.04*
MST                        0.70   0.68   0.69   0.22
Simple Thresh.             0.57   0.75   0.64   0.24
Baseline (Complete-link)   0.50   0.94   0.63   -

Table 5: Results on the test set
7.3 Combining Clustering and Dependencies
Now that we have studied the clustering and dependency algorithms
in isolation, we combine the best performing algorithms and
build the entire event model. Since none of the dependency algorithms
has been shown to be consistently and significantly better
than the others, we use all of them in our experimentation. From
the clustering techniques, we choose the best performing Cos+TD.
As a baseline, we use a combination of the baselines in each components
, i.e., cos for clustering and complete-link for dependencies.
Note that we need to retrain all the algorithms on the training
set because our objective function to optimize is now JF, the joint
F-measure. For each algorithm, we need to optimize both the clustering
threshold and the dependency threshold. We did this empirically
on the training set and the optimal values are listed in table
6.
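Since only two scalar parameters are involved, this retraining amounts to a small joint search; the sketch below is a hypothetical illustration that assumes clustering, dependency, and evaluation routines with the interfaces shown (none of these names come from the paper).

```python
def tune_thresholds(training_topics, cluster_fn, depend_fn, evaluate_jf,
                    cluster_grid, depend_grid):
    """Pick the (cluster threshold, dependency threshold) pair maximizing average JF.

    `cluster_fn(topic, t_c)` returns the events, `depend_fn(topic, events, t_d)`
    returns the dependency edges, and `evaluate_jf(topic, events, edges)` returns
    the joint F-measure against the topic's annotated truth; all three are
    assumed to be supplied by the caller.
    """
    best = (None, None, -1.0)
    for t_c in cluster_grid:
        for t_d in depend_grid:
            scores = []
            for topic in training_topics:
                events = cluster_fn(topic, t_c)
                edges = depend_fn(topic, events, t_d)
                scores.append(evaluate_jf(topic, events, edges))
            avg_jf = sum(scores) / len(scores)
            if avg_jf > best[2]:
                best = (t_c, t_d, avg_jf)
    return best

# Toy call with stub components (replace with real clustering/dependency/evaluation code).
print(tune_thresholds(
    training_topics=[{"id": 1}, {"id": 2}],
    cluster_fn=lambda topic, t_c: [[0], [1]],
    depend_fn=lambda topic, ev, t_d: {(0, 1)} if t_d < 0.05 else set(),
    evaluate_jf=lambda topic, ev, edges: 0.4 if edges else 0.2,
    cluster_grid=[0.04, 0.06, 0.08, 0.10],
    depend_grid=[0.0, 0.02, 0.04, 0.06],
))   # (0.04, 0.0, 0.4)
```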
The results on the training set, also presented in table 6, indicate
that cos+TD+Simple-Thresholding is significantly better than the
baseline in terms of the joint F-value JF, using a one-tailed paired T-test
at 95% confidence level. On the whole, we notice that while the
clustering performance is comparable to the experiments in section
7.1, the overall performance is undermined by the low dependency
performance. Unlike our experiments in section 7.2 where we had
provided the true clusters to the system, in this case, the system
has to deal with deterioration in the cluster quality. Hence the performance
of the dependency algorithms has suffered substantially
thereby lowering the overall performance.
The results on the test set present a very similar story as shown
in table 7. We also notice a fair amount of consistency in the performance
of the combination algorithms. cos+TD+Simple-Thresholding
outperforms the baseline significantly. The test set results also point
to the fact that the clustering component remains a bottleneck in
achieving an overall good performance.
8. DISCUSSION AND CONCLUSIONS
In this paper, we have presented a new perspective of modeling
news topics. Contrary to the TDT view of topics as a flat collection of news stories, we view a news topic as a relational structure
of events interconnected by dependencies. In this paper, we also
proposed a few approaches for both clustering stories into events
and constructing dependencies among them. We developed a time-decay
based clustering approach that takes advantage of temporal-localization
of news stories on the same event and showed that it
performs significantly better than the baseline approach based on
cosine similarity. Our experiments also show that we can do fairly
well on dependencies using only surface-features such as cosine-similarity
and time-stamps of news stories as long as true events
are provided to the system. However, the performance deteriorates
rapidly if the system has to discover the events by itself. Despite
that discouraging result, we have shown that our combined algorithms
perform significantly better than the baselines.
Our results indicate that modeling dependencies can be a very hard problem, especially when the clustering performance is below the ideal level. Errors in clustering have a magnifying effect on errors in dependencies
as we have seen in our experiments. Hence, we should
focus not only on improving dependencies but also on clustering at
the same time.
As part of our future work, we plan to investigate further into
the data and discover new features that influence clustering as well
as dependencies. For modeling dependencies, a probabilistic framework should be a better choice, since there is no definite yes/no answer for the causal relations among some events. We also
hope to devise an iterative algorithm which can improve clustering
and dependency performance alternately as suggested by one of
the reviewers. We also hope to expand our labeled corpus further
to include more diverse news sources and larger and more complex
event structures.
Model                          Cluster T   Dep. T   CP     CR     CF     DP     DR     DF     JF     P-value
cos+TD+Nearest-Parent          0.055       0.02     0.51   0.53   0.49   0.21   0.19   0.19   0.27   -
cos+TD+Best-Similarity         0.04        0.02     0.45   0.70   0.53   0.21   0.33   0.23   0.32   -
cos+TD+MST                     0.04        0.00     0.45   0.70   0.53   0.22   0.35   0.25   0.33   -
cos+TD+Simple-Thresholding     0.065       0.02     0.56   0.47   0.48   0.23   0.61   0.32   0.38   0.0004*
Baseline (cos+Complete-link)   0.10        0.00     0.58   0.31   0.38   0.20   0.67   0.30   0.33   -

Table 6: Combined results on the training set

Model                          CP     CR     CF     DP     DR     DF     JF     P-value
cos+TD+Nearest Parent          0.57   0.50   0.50   0.27   0.19   0.21   0.30   -
cos+TD+Best Similarity         0.48   0.70   0.54   0.31   0.27   0.26   0.35   -
cos+TD+MST                     0.48   0.70   0.54   0.31   0.30   0.28   0.37   -
cos+TD+Simple Thresholding     0.60   0.39   0.44   0.32   0.66   0.42   0.43   0.0081*
Baseline (cos+Complete-link)   0.66   0.27   0.36   0.30   0.72   0.43   0.39   -

Table 7: Combined results on the test set

Acknowledgments
We would like to thank the three anonymous reviewers for their valuable comments. This work was supported in part by the Center for Intelligent Information Retrieval and in part by SPAWARSYSCEN-SD grant number N66001-02-1-8903. Any opinions, findings and conclusions or recommendations expressed in this material are the authors' and do not necessarily reflect those of the sponsor.
REFERENCES
[1] J. Allan, J. Carbonell, G. Doddington, J. Yamron, and Y. Yang. Topic detection and tracking pilot study: Final report. In Proceedings of the DARPA Broadcast News Transcription and Understanding Workshop, pages 194-218, 1998.
[2] J. Allan, A. Feng, and A. Bolivar. Flexible intrinsic evaluation of hierarchical clustering for TDT. In Proceedings of the ACM Twelfth International Conference on Information and Knowledge Management, pages 263-270, Nov 2003.
[3] James Allan, editor. Topic Detection and Tracking: Event-based Information Organization. Kluwer Academic Publishers, 2000.
[4] James Allan, Rahul Gupta, and Vikas Khandelwal. Temporal summaries of new topics. In Proceedings of the 24th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 10-18. ACM Press, 2001.
[5] Regina Barzilay and Lillian Lee. Catching the drift: Probabilistic content models, with applications to generation and summarization. In Proceedings of the Human Language Technology Conference and North American Chapter of the Association for Computational Linguistics (HLT-NAACL), pages 113-120, 2004.
[6] D. Lawrie and W. B. Croft. Discovering and comparing topic hierarchies. In Proceedings of the RIAO 2000 Conference, pages 314-330, 1999.
[7] David D. Lewis and Kimberly A. Knowles. Threading electronic mail: a preliminary study. Inf. Process. Manage., 33(2):209-217, 1997.
[8] Juha Makkonen. Investigations on event evolution in TDT. In Proceedings of the HLT-NAACL 2003 Student Workshop, pages 43-48, 2004.
[9] Aixin Sun and Ee-Peng Lim. Hierarchical text classification and evaluation. In Proceedings of the 2001 IEEE International Conference on Data Mining, pages 521-528. IEEE Computer Society, 2001.
[10] Yiming Yang, Jaime Carbonell, Ralf Brown, Thomas Pierce, Brian T. Archibald, and Xin Liu. Learning approaches for detecting and tracking news events. In IEEE Intelligent Systems Special Issue on Applications of Intelligent Information Retrieval, volume 14(4), pages 32-43, 1999.
| Complete-Link model;Event;Intelligent Information Retrieval;Event Threading;threading;meaningful and efficient analysis and presentation of news;Information browsing and organization;Nearest Parent Model;information searching;Dependency modeling;Agglomerative clustering with time decay;dependency;News topic modeling;Topic detection and tracking;clustering;temporal localization of news stories |