Introduction
Overview
&zebra; is a free, fast, friendly information management system. It can
index records in &acro.xml;/&acro.sgml;, &acro.marc;, e-mail archives and many other
formats, and quickly find them using a combination of boolean
searching and relevance ranking. Search-and-retrieve applications can
be written using &acro.api;s in a wide variety of languages, communicating
with the &zebra; server using industry-standard information-retrieval
protocols or web services.
&zebra; is licensed Open Source, and can be
deployed by anyone for any purpose without license fees. The C source
code is open to anybody to read and change under the GPL license.
&zebra; is a networked component which acts as a
reliable &acro.z3950; server
for record/document search, presentation, insert, update and
delete operations. In addition, it understands the &acro.sru; family of
web services, which exist in &acro.rest;-style &acro.get;/&acro.post; and true
&acro.soap; flavors.
&zebra; is available as an MS Windows 2003 Server (32-bit) self-extracting
package as well as precompiled GNU/Debian Linux (32-bit and 64-bit)
packages. It has been deployed successfully on other Unix systems,
including Sun Sparc, HP Unix, and many variants of Linux and BSD
based systems.
http://www.indexdata.com/zebra/
http://ftp.indexdata.dk/pub/zebra/win32/
http://ftp.indexdata.dk/pub/zebra/debian/
&zebra;
is a high-performance, general-purpose structured text
indexing and retrieval engine. It reads records in a
variety of input formats (e.g. email, &acro.xml;, &acro.marc;) and provides access
to them through a powerful combination of boolean search
expressions and relevance-ranked free-text queries.
&zebra; supports large databases (tens of millions of records,
tens of gigabytes of data). It allows safe, incremental
database updates on live systems. Because &zebra; supports
the industry-standard information retrieval protocol, &acro.z3950;,
you can search &zebra; databases using an enormous variety of
programs and toolkits, both commercial and free, which understand
this protocol. Application libraries are available to allow
bespoke clients to be written in Perl, C, C++, Java, Tcl, Visual
Basic, Python, &acro.php; and more - see the
&acro.zoom; web site
for more information on some of these client toolkits.
This document is an introduction to the &zebra; system. It explains
how to compile the software, how to prepare your first database,
and how to configure the server to give you the
functionality that you need.
&zebra; Features Overview
&zebra; Document Model
&zebra; document model
Feature
Availability
Notes
Reference
Complex semi-structured Documents
&acro.xml; and &acro.grs1; Documents
Both &acro.xml; and &acro.grs1; documents have a &acro.dom;-like internal
representation, allowing for complex indexing and display rules
and
Input document formats
&acro.xml;, &acro.sgml;, Text, ISO2709 (&acro.marc;)
A system of input filters driven by
regular expressions allows most ASCII-based
data formats to be easily processed.
&acro.sgml;, &acro.xml;, ISO2709 (&acro.marc;), and raw text are also
supported.
Document storage
Index-only, Key storage, Document storage
Data can be, and usually is, imported
into &zebra;'s own storage, but &zebra; can also refer to
external files, building and maintaining indexes of "live"
collections.
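The regular-expression-driven input filtering described above can be
illustrated with a small sketch. This is not &zebra;'s actual filter
configuration language; the record layout and function name below are
invented purely to show the general idea of turning a tagged text
record into indexable field/value pairs:

```python
import re

# Hypothetical "Name: value" record, invented for illustration only.
RECORD = """\
Title: Information Retrieval
Author: Salton, Gerald
Year: 1983
"""

def filter_record(text):
    """Extract (field, value) pairs from 'Name: value' lines
    using a single regular expression."""
    return dict(re.findall(r"^(\w+):\s*(.+)$", text, re.MULTILINE))

print(filter_record(RECORD))
# {'Title': 'Information Retrieval', 'Author': 'Salton, Gerald', 'Year': '1983'}
```

A real filter would additionally map each extracted field to the index
definitions configured for the database.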
&zebra; Search Features
&zebra; search functionality
Feature
Availability
Notes
Reference
Query languages
&acro.cql; and &acro.rpn;/&acro.pqf;
The type-1 Reverse Polish Notation (&acro.rpn;)
and its textual representation Prefix Query Format (&acro.pqf;) are
supported. The Common Query Language (&acro.cql;) can be configured as
a mapping from &acro.cql; to &acro.rpn;/&acro.pqf;
and
Complex boolean query tree
&acro.cql; and &acro.rpn;/&acro.pqf;
Both &acro.cql; and &acro.rpn;/&acro.pqf; allow atomic query parts (&acro.apt;) to
be combined into complex boolean query trees
Field search
user defined
Atomic query parts (&acro.apt;) are either general or
directed at user-specified document fields
,
,
, and
Data normalization
user defined
Data normalization, text tokenization and character
mappings can be applied during indexing and searching
Predefined field types
user defined
Data fields can be indexed as phrases, as word-tokenized
text, as numeric values, URLs, dates, and raw binary
data.
and
Regular expression matching
available
Full regular expression matching and "approximate
matching" (e.g. spelling mistake corrections) are handled.
Term truncation
left, right, left-and-right
The truncation attribute specifies whether variations of
one or more characters are allowed between search term and hit
terms, or not. Using non-default truncation attributes will
broaden the document hit set of a search query.
Fuzzy searches
Spelling correction
In addition, fuzzy searches are implemented, matching terms
that differ from the search term by a single spelling mistake
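To make the query-language rows above concrete, here is a small sketch
of how atomic query parts combine into a boolean query tree expressed
in &acro.pqf; prefix notation. The helper functions are invented for
illustration; the Bib-1 use attributes (1=4 for title, 1=62 for
abstract) are standard:

```python
def apt(term, use_attr=None):
    """An atomic query part (APT), optionally directed at a field
    via a Bib-1 use attribute."""
    return f"@attr 1={use_attr} {term}" if use_attr is not None else term

def pqf_and(left, right):
    """Combine two subqueries with boolean AND in prefix notation."""
    return f"@and {left} {right}"

def pqf_or(left, right):
    """Combine two subqueries with boolean OR in prefix notation."""
    return f"@or {left} {right}"

# (title contains "utah") AND (abstract contains "epicenter" OR "fault"):
query = pqf_and(apt("utah", 4),
                pqf_or(apt("epicenter", 62), apt("fault", 62)))
print(query)
# @and @attr 1=4 utah @or @attr 1=62 epicenter @attr 1=62 fault
```

&acro.cql; queries are handled by mapping them onto the same tree shape.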
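The truncation and fuzzy-search rows can likewise be sketched. The
regex-based truncation and the single-edit matcher below are
illustrative stand-ins for the concepts, not &zebra;'s implementation:

```python
import re

def truncation_regex(term, left=False, right=False):
    """Build a pattern allowing extra characters before and/or
    after the search term (left/right/left-and-right truncation)."""
    return ("\\w*" if left else "") + re.escape(term) + ("\\w*" if right else "")

def matches(term, candidates, **trunc):
    """Return the candidate index terms hit by the truncated term."""
    rx = re.compile("^" + truncation_regex(term, **trunc) + "$")
    return [c for c in candidates if rx.match(c)]

def within_one_edit(a, b):
    """True if b differs from a by at most one insertion, deletion
    or substitution -- the 'one spelling mistake' of a fuzzy search."""
    if a == b:
        return True
    if abs(len(a) - len(b)) > 1:
        return False
    i = 0
    while i < min(len(a), len(b)) and a[i] == b[i]:
        i += 1
    return a[i + 1:] == b[i + 1:] or a[i + 1:] == b[i:] or a[i:] == b[i + 1:]

terms = ["index", "indexing", "reindex", "reindexing"]
print(matches("index", terms, right=True))              # ['index', 'indexing']
print(matches("index", terms, left=True, right=True))   # all four terms
print(within_one_edit("zebra", "zebta"))                # True
```

As the example shows, widening the truncation broadens the hit set.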
&zebra; Index Scanning
&zebra; index scanning
Feature
Availability
Notes
Reference
Scan
term suggestions
Scan on a given named index returns all the
indexed terms in lexicographical order near the given start
term. This can be used to create drop-down menus and search
suggestions.
and
Facetted browsing
available
Zebra 2.1 and later allows retrieval of facets for
a result set.
Drill-down or refine-search
partially
Scanning in result sets can be used to implement
drill-down in search clients
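Conceptually, a scan is a binary search into the sorted term
dictionary followed by a sequential read. A minimal sketch, in which
the index is just a sorted Python list rather than &zebra;'s register
files:

```python
import bisect

def scan(index_terms, start, count=5):
    """Return up to `count` indexed terms, in lexicographical order,
    beginning at the first term >= `start` -- like a Scan request."""
    terms = sorted(index_terms)
    pos = bisect.bisect_left(terms, start)
    return terms[pos:pos + count]

terms = {"first", "fish", "fisher", "fishing", "fission", "fist"}
print(scan(terms, "fish", 4))  # ['fish', 'fisher', 'fishing', 'fission']
```

A drop-down suggestion widget can be driven by repeatedly scanning
from the user's partially typed term.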
&zebra; Document Presentation
&zebra; document presentation
Feature
Availability
Notes
Reference
Hit count
yes
Search results always include the total hit count of a given
query, either computed exactly or approximated when the
hit count exceeds a pre-defined hit set truncation
level.
and
Paged result sets
yes
Present/display requests can return any number of successive
records from any start position in the hit set, i.e. it is
trivial to provide search results in successive pages of any
size.
&acro.xml; document transformations
&acro.xslt; based
Record presentation can be performed in many
pre-defined &acro.xml; data
formats, where the original &acro.xml; records are transformed on the fly
through any preconfigured &acro.xslt; transformation. It is therefore
trivial to present records in short/full &acro.xml; views, to transform
them to RSS, Dublin Core, or other &acro.xml; based data formats, or to
transform records to XHTML snippets ready for inserting in XHTML pages.
Binary record transformations
&acro.marc;, &acro.usmarc;, &acro.marc21; and &acro.marcxml;
post-filter record transformations
Record Syntaxes
Multiple record syntaxes
for data retrieval: &acro.grs1;, &acro.sutrs;,
&acro.xml;, ISO2709 (&acro.marc;), etc. Records can be mapped between
record syntaxes and schemas on the fly.
&zebra; internal metadata
yes
&zebra; internal document metadata can be fetched in
&acro.sutrs; and &acro.xml; record syntaxes. Those are useful in client
applications.
&zebra; internal raw record data
yes
&zebra; internal raw, binary record data can be fetched in
&acro.sutrs; and &acro.xml; record syntaxes, turning &zebra; into a
binary storage system
&zebra; internal record field data
yes
&zebra; internal record field data can be fetched in
&acro.sutrs; and &acro.xml; record syntaxes. This makes very fast minimal
record data displays possible.
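Paged presentation is just a start-position/count window over the hit
set. A sketch of the arithmetic, using 1-based positions as in
&acro.z3950; present requests (the hit set here is a plain list,
invented for illustration):

```python
def present(result_set, start, count):
    """Return `count` records from 1-based position `start`,
    as a Z39.50-style present request does over a hit set."""
    return result_set[start - 1:start - 1 + count]

hits = [f"rec{i}" for i in range(1, 101)]   # pretend hit set of 100 records
print(present(hits, 1, 10))    # first page: rec1 .. rec10
print(present(hits, 91, 10))   # last page:  rec91 .. rec100
```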
&zebra; Sorting and Ranking
&zebra; sorting and ranking
Feature
Availability
Notes
Reference
Sort
numeric, lexicographic
Sorting on the basis of alpha-numeric and numeric data
is supported. Alphanumeric sorts can be configured for
different data encodings and locales for European languages.
and
Combined sorting
yes
Combined sorts, e.g. combinations of ascending/descending
sorts on lexicographical/numeric/date field data,
are supported
Relevance ranking
TF-IDF like
Relevance-ranking of free-text queries is supported
using a TF-IDF like algorithm.
Static pre-ranking
yes
Enables pre-index time ranking of documents where hit
lists are ordered first by ascending static rank, then by
ascending document ID.
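The ranking features above can be sketched in a few lines. The scoring
function below is a generic TF-IDF formula, not &zebra;'s exact
ranking algorithm, and the static pre-ranking shows only the ordering
rule (ascending static rank, then ascending document ID):

```python
import math
from collections import Counter

def tf_idf_rank(query_terms, docs):
    """Return document indexes ordered by a simple tf * log(N/df) score."""
    n = len(docs)
    tokenized = [doc.lower().split() for doc in docs]
    df = Counter(t for toks in tokenized for t in set(toks))
    scores = []
    for toks in tokenized:
        tf = Counter(toks)
        scores.append(sum(tf[t] * math.log(n / df[t])
                          for t in query_terms if df[t]))
    return sorted(range(n), key=lambda i: -scores[i])

def static_prerank(hits):
    """Order hits by ascending static rank, then ascending document ID."""
    return sorted(hits, key=lambda h: (h["rank"], h["id"]))

docs = ["zebra indexes records fast",
        "fast search with zebra zebra",
        "relational databases"]
print(tf_idf_rank(["zebra"], docs))  # [1, 0, 2] -- doc 1 mentions zebra twice
print([h["id"] for h in static_prerank(
    [{"id": 3, "rank": 2}, {"id": 1, "rank": 1}, {"id": 2, "rank": 1}])])
# [1, 2, 3]
```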
&zebra; Live Updates
&zebra; live updates
Feature
Availability
Notes
Reference
Incremental and batch updates
It is possible to schedule record inserts/updates/deletes in any
quantity, from single records to batches of any size, as well as
total re-indexing of all records from the file system.
Remote updates
&acro.z3950; extended services
Updates can be performed from remote locations using the
&acro.z3950; extended services. Access to extended services can be
login-password protected.
and
Live updates
transaction based
Data updates are transaction based and can be performed
on running &zebra; systems. Full searchability is preserved
during live data updates thanks to the use of shadow disk areas
for update operations. Multiple simultaneous update transactions
are queued and performed one after another. Data
integrity is preserved.
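The shadow-area idea can be illustrated conceptually: updates are
written to a shadow copy while searches keep reading the live version,
and an atomic rename commits the finished update. This is a generic
sketch of the pattern at the level of a single file, not &zebra;'s
register implementation:

```python
import os
import tempfile

def shadow_update(path, new_content):
    """Write new_content to a shadow file, then atomically replace
    the live file -- readers never see a half-written update."""
    directory = os.path.dirname(os.path.abspath(path))
    fd, shadow = tempfile.mkstemp(dir=directory)
    with os.fdopen(fd, "w") as f:
        f.write(new_content)   # searches still read the old file here
    os.replace(shadow, path)   # atomic commit of the update
```

On POSIX systems `os.replace` is an atomic rename, which is what makes
the commit step safe for concurrent readers.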
&zebra; Networked Protocols
&zebra; networked protocols
Feature
Availability
Notes
Reference
Fundamental operations
&acro.z3950;/&acro.sru; explain,
search, scan, and
update
&acro.z3950; protocol support
yes
Protocol facilities supported are:
init, search,
present (retrieval),
segmentation (support for very large records),
delete, scan
(index browsing), sort,
close and support for the update
Extended Service to add or replace an existing &acro.xml;
record. Piggy-backed presents are honored in the search
request. Named result sets are supported.
Web Service support
&acro.sru;
The protocol operations explain,
searchRetrieve and scan
are supported. &acro.cql; to internal
query model &acro.rpn;
conversion is supported. Extended RPN queries
for search/retrieve and scan are supported.
&zebra; Data Size and Scalability
&zebra; data size and scalability
Feature
Availability
Notes
Reference
No of records
40-60 million
Data size
100 GB of record data
&zebra; based applications have successfully indexed up
to 100 GB of record data
Scale out
multiple discs
Performance
O(n * log N)
&zebra; query speed and performance is affected roughly by
O(log N),
where N is the total database size, and by
O(n), where n is the
specific query hit set size.
Average search times
Even on very large databases, rates of 20 queries per
second with an average query answering time of 1 second are possible,
provided that the boolean queries are constructed precisely enough
to yield hit sets on the order of 1,000 to 5,000
documents.
Large databases
64 bit file pointers
64-bit file pointers ensure that register files can exceed
the 2 GB limit. Logical files can be
automatically partitioned over multiple disks, thus allowing for
large databases.
References and &zebra; based Applications
&zebra; has been deployed in numerous applications, in both the
academic and commercial worlds, in application domains as diverse
as bibliographic catalogues, Geo-spatial information, structured
vocabulary browsing, government information locators, civic
information systems, environmental observations, museum information
and web indexes.
Notable applications include the following:
Koha free open-source ILS
Koha is a full-featured
open-source ILS, initially developed in
New Zealand by Katipo Communications Ltd, and first deployed in
January of 2000 for Horowhenua Library Trust. It is currently
maintained by a team of software providers and library technology
staff from around the globe.
LibLime,
a company that markets and supports Koha, has added the &zebra;
database server to the new Koha 3.0 release to drive its
bibliographic database.
In early 2005, the Koha project development team began looking at
ways to improve &acro.marc; support and overcome scalability limitations
in the Koha 2.x series. After extensive evaluations of the best
of the Open Source textual database engines - including MySQL
full-text searching, PostgreSQL, Lucene and Plucene - the team
selected &zebra;.
"&zebra; completely eliminates scalability limitations, because it
can support tens of millions of records," explained Joshua
Ferraro, LibLime's Technology President and Koha's Project
Release Manager. "Our performance tests showed search results in
under a second for databases with over 5 million records on a
modest i386 900 MHz test server."
"&zebra; also includes support for true boolean search expressions
and relevance-ranked free-text queries, both of which the Koha
2.x series lack. &zebra; also supports incremental and safe
database updates, which allow on-the-fly record
management. Finally, since &zebra; has at its heart the &acro.z3950;
protocol, it greatly improves Koha's support for that critical
library standard."
Although the bibliographic database will be moved to &zebra;, Koha
3.0 will continue to use a relational SQL-based database design
for the 'factual' database. "Relational database managers have
their strengths, in spite of their inability to handle large
numbers of bibliographic records efficiently," summed up Ferraro,
"We're taking the best from both worlds in our redesigned Koha
3.0."
See also LibLime's newsletter article
Koha Earns its Stripes.
Kete Open Source Digital Library and Archiving software
Kete is a digital object
management repository, initially developed in
New Zealand. Initial development has
been a partnership between the Horowhenua Library Trust and
Katipo Communications Ltd. funded as part of the Community
Partnership Fund in 2006.
Kete is purpose-built software to enable communities to build
their own digital libraries, archives and repositories.
It is based on Ruby-on-Rails and MySQL, and integrates the &zebra; server
and the &yaz; toolkit for indexing and retrieval of its content.
&zebra; runs as a separate process from the Kete
application.
See
how Kete manages
Zebra.
Why does Kete want to use &zebra;? Speed, scalability and easy
integration with Koha. Read their
detailed
reasoning here.
ReIndex.Net web-based ILS
Reindex.net
is a web-based library service offering all
traditional functions on a very high level plus many new
services. Reindex.net is a comprehensive and powerful web system
based on standards such as &acro.xml; and &acro.z3950;.
Reindex supports &acro.marc21;, dan&acro.marc; or Dublin Core with
UTF-8 encoding.
Reindex.net runs on GNU/Debian Linux with &zebra; and Simpleserver
from Index
Data for bibliographic data. The relational database system
Sybase 9 &acro.xml; is used for
administrative data.
Internally &acro.marcxml; is used for bibliographical records. Update
utilizes &acro.z3950; extended services.
DADS - the DTV Article Database
Service
DADS is a huge database of more than ten million records, totalling
over ten gigabytes of data. The records are metadata about academic
journal articles, primarily scientific; about 10% of these
metadata records link to the full text of the articles they
describe, a body of about a terabyte of information (although the
full text is not indexed).
It allows students and researchers at DTU (Danmarks Tekniske
Universitet, the Technical University of Denmark) to find and order
articles from multiple databases in a single query. The database
contains literature on all engineering subjects. It's available
on-line through a web gateway, though currently only to registered
users.
More information can be found at
and
ULS (Union List of Serials)
The M25 Systems Team
has created a union catalogue for the periodicals of the
twenty-one constituent libraries of the University of London and
the University of Westminster
().
They have achieved this using an
unusual architecture, which they describe as a
"non-distributed virtual union catalogue".
The member libraries send in data files representing their
periodicals, including both brief bibliographic data and summary
holdings. Then 21 individual &acro.z3950; targets are created, each
using &zebra;, and all mounted on the single hardware server.
The live service provides a web gateway allowing &acro.z3950; searching
of all of the targets or a selection of them. &zebra;'s small
footprint allows a relatively modest system to comfortably host
the 21 servers.
More information can be found at
Various web indexes
&zebra; has been used by a variety of institutions to construct
indexes of large web sites, typically in the region of tens of
millions of pages. In this role, it functions somewhat similarly
to the engine of Google or AltaVista, but for a selected intranet
or a subset of the whole Web.
For example, Liverpool University's web-search facility (see
the home page at
and many sub-pages) works by relevance-searching a &zebra; database
which is populated by the Harvest-NG web-crawling software.
For more information on Liverpool University's intranet search
architecture, contact John Gilbertson
jgilbert@liverpool.ac.uk
Kang-Jin Lee
has recently modified the Harvest web indexer to use &zebra; as
its native repository engine. His comments on the switch over
from the old engine are revealing:
The first results after some testing with &zebra; are very
promising. The tests were done with around 220,000 SOIF files,
which occupies 1.6GB of disk space.
Building the index from scratch takes around one hour with &zebra;
where [old-engine] needs around five hours. While [old-engine]
blocks search requests when updating its index, &zebra; can still
answer search requests.
[...]
&zebra; supports incremental indexing which will speed up indexing
even further.
While the search time of [old-engine] varies from some seconds
to some minutes depending how expensive the query is, &zebra;
usually takes around one to three seconds, even for expensive
queries.
[...]
&zebra; can search more than 100 times faster than [old-engine]
and can process multiple search requests simultaneously
I am very happy to see such nice software available under GPL.
Support
You can get support for &zebra; from at least three sources.
First, there's the &zebra; web site at
,
which always has the most recent version available for download.
If you have a problem with &zebra;, the first thing to do is see
whether it's fixed in the current release.
Second, there's the &zebra; mailing list. Its home page at
includes a complete archive of all messages that have ever been
posted on the list. The &zebra; mailing list is used both for
announcements from the authors (new
releases, bug fixes, etc.) and general discussion. You are welcome
to seek support there. Join by filling in the form on the list home page.