X-Git-Url: http://git.indexdata.com/?a=blobdiff_plain;f=doc%2Fadministration.xml;h=8905d11f9223fc9286aa157f361571e88d852c50;hb=eb44a82db835f33bbd984072ee18dd2ec34ae9cb;hp=ccb156f56567693a1d335483455679bff2fd23cf;hpb=a31f9b2d25006c89ae7e9fb5870c0d222ee88a3a;p=idzebra-moved-to-github.git
diff --git a/doc/administration.xml b/doc/administration.xml
index ccb156f..8905d11 100644
--- a/doc/administration.xml
+++ b/doc/administration.xml
@@ -1,3430 +1,1814 @@
-
-Quick Start
-
-
-In this section, we will test the system by indexing a small set of sample
-GILS records that are included with the software distribution. Go to the
-test/gils subdirectory of the distribution archive. There you will
-find a configuration
-file named zebra.cfg with the following contents:
-
-
-# Where are the YAZ tables located.
-profilePath: ../../../yaz/tab ../../tab
-
-# Files that describe the attribute sets supported.
-attset: bib1.att
-attset: gils.att
-
-
-
-
-
-Now, edit the file and set profilePath to the path of the
-YAZ profile tables (sub directory tab of the YAZ distribution
-archive).
-
-
-
-The 48 test records are located in the sub directory records.
-To index these, type:
-
-
-$ ../../index/zebraidx -t grs.sgml update records
-
-
-
-
-
-In the command above the option -t specifies the record
-type — in this case grs.sgml. The word update followed
-by a directory root updates all files below that directory node.
-
-
-
-If your indexing command was successful, you are now ready to
-fire up a server. To start a server on port 2100, type:
-
-
-$ ../../index/zebrasrv tcp:@:2100
-
-
-
-
-
-The Zebra index that you have just created has a single database
-named Default. The database contains records structured according to
-the GILS profile, and the server will
-return records in either USMARC, GRS-1, or SUTRS depending
-on what your client asks
-for.
-
-
-
-To test the server, you can use any Z39.50 client (1992 or later). For
-instance, you can use the demo client that comes with YAZ: Just cd to
-the client subdirectory of the YAZ distribution and type:
-
-
-
-
-
-$ client tcp:localhost:2100
-
-
-
-
-
-When the client has connected, you can type:
-
-
-
-
-
-Z> find surficial
-Z> show 1
-
-
-
-
-
-The default retrieval syntax for the client is USMARC. To try other
-formats for the same record, try:
-
-
-
-
-
-Z> format sutrs
-Z> show 1
-Z> format grs-1
-Z> show 1
-Z> elements B
-Z> show 1
-
-
-
-
-
-NOTE: You may notice that more fields are returned when your
-client requests SUTRS or GRS-1 records. When retrieving GILS records,
-this is normal - not all of the GILS data elements have mappings in
-the USMARC record format.
-
-
-
-If you've made it this far, there's a good chance that
-you've got through the compilation OK.
-
-
-
-
-Administrating Zebra
-
-
-Unlike many simpler retrieval systems, Zebra supports safe, incremental
-updates to an existing index.
-
-
-
-Normally, when Zebra modifies the index it reads a number of records
-that you specify.
-Depending on your specifications and on the contents of each record
-one of the following events takes place for each record:
-
-
-
-Insert
-
-
-The record is indexed as if it never occurred
-before. Either the Zebra system doesn't know how to identify the record or
-Zebra can identify the record but didn't find it to be already indexed.
-
-
-
-
-Modify
-
-
-The record has already been indexed. In this case
-either the contents of the record or the location (file) of the record
-indicates that it has been indexed before.
-
-
-
-
-Delete
-
-
-The record is deleted from the index. As in the
-modify case, the indexer must be able to identify the record.
-
-
-
-
-
-
-
-Please note that in both the modify and delete cases the Zebra
-indexer must be able to generate a unique key that identifies the record in
-question (more on this below).
-
-
-
-To administrate the Zebra retrieval system, you run the
-zebraidx program. This program supports a number of options
-which are preceded by a minus, and a few commands (not preceded by
-minus).
-
-
-
-Both the Zebra administrative tool and the Z39.50 server share a
-set of index files and a global configuration file. The
-name of the configuration file defaults to zebra.cfg.
-The configuration file includes specifications on how to index
-various kinds of records and where the other configuration files
-are located. zebrasrv and zebraidx must
-be run in the directory where the configuration file lives unless you
-indicate the location of the configuration file by option
--c.
-
-
-
-Record Types
-
-
-Indexing is a per-record process, in which either insert/modify/delete
-will occur. Before a record is indexed, search keys are extracted from
-whatever the layout of the original record might be (SGML, HTML, text, etc.).
-The Zebra system currently supports two fundamental types of records:
-structured and simple text.
-To specify a particular extraction process, use either the
-command line option -t or specify a
-recordType setting in the configuration file.
-
-
-
-
-
-The Zebra Configuration File
-
-
-The Zebra configuration file, read by zebraidx and
-zebrasrv defaults to zebra.cfg unless specified
-by -c option.
-
-
-
-You can edit the configuration file with a normal text editor.
-Parameter names and values are separated by colons in the file. Lines
-starting with a hash sign (#) are treated as comments.
-
-
-
-If you manage different sets of records that share common
-characteristics, you can organize the configuration settings for each
-type into "groups".
-When zebraidx is run and you wish to address a given group
-you specify the group name with the -g option. In this case
-settings that have the group name as their prefix will be used
-by zebraidx. If no -g option is specified, the settings
-with no prefix are used.
-
-
-
-In the configuration file, the group name is placed before the option
-name itself, separated by a dot (.). For instance, to set the record type
-for group public to grs.sgml (the SGML-like format for structured
-records) you would write:
-
-
-
-
-
-public.recordType: grs.sgml
-
-
-
-
-
-To set the default value of the record type to text write:
-
-
-
-
-
-recordType: text
-
-
-
-
-
-The available configuration settings are summarized below. They will be
-explained further in the following sections.
-
-
-
-
-
-
-group.recordType[.name]
-
-
-Specifies how records with the file extension name should
-be handled by the indexer. This option may also be specified
-as a command line option (-t). Note that if you do not
-specify a name, the setting applies to all files. In general,
-the record type specifier consists of the elements (each
-element separated by dot), fundamental-type,
-file-read-type and arguments. Currently, two
-fundamental types exist, text and grs.
-
-
-
-
-group.recordId
-
-
-Specifies how the records are to be identified when updated. See
-section .
-
-
-
-
-group.database
-
-
-Specifies the Z39.50 database name.
-
-
-
-
-group.storeKeys
-
-
-Specifies whether key information should be saved for a given
-group of records. If you plan to update/delete this type of
-records later this should be specified as 1; otherwise it
-should be 0 (default), to save register space. See section
-.
-
-
-
-
-group.storeData
-
-
-Specifies whether the records should be stored internally
-in the Zebra system files. If you want to maintain the raw records yourself,
-this option should be false (0). If you want Zebra to take care of the records
-for you, it should be true (1).
-
-
-
-
-register
-
-
-Specifies the location of the various register files that Zebra uses
-to represent your databases. See section
-.
-
-
-
-
-shadow
-
-
-Enables the safe update facility of Zebra, and tells the system
-where to place the required, temporary files. See section
-.
-
-
-
-
-lockDir
-
-
-Directory in which various lock files are stored.
-
-
-
-
-keyTmpDir
-
-
-Directory in which temporary files used during zebraidx' update
-phase are stored.
-
-
-
-
-setTmpDir
-
-
-Specifies the directory that the server uses for temporary result sets.
-If not specified /tmp will be used.
-
-
-
-
-profilePath
-
-
-Specifies the location of profile specification files.
-
-
-
-
-attset
-
-
-Specifies the filename(s) of attribute set files for use in
-searching. At least the Bib-1 set should be loaded (bib1.att).
-The profilePath setting is used to look for the specified files.
-See section
-
-
-
-
-memMax
-
-
-Specifies size of internal memory to use for the zebraidx program. The
-amount is given in megabytes - default is 4 (4 MB).
-
-
-
-
-
-
-
-
-
-Locating Records
-
-
-The default behaviour of the Zebra system is to reference the
-records from their original location, i.e. where they were found when you
-ran zebraidx. That is, when a client wishes to retrieve a record
-following a search operation, the files are accessed from the place
-where you originally put them - if you remove the files (without
-running zebraidx again), the client will receive a diagnostic
-message.
-
-
-
-If your input files are not permanent - for example if you retrieve
-your records from an outside source, or if they were temporarily
-mounted on a CD-ROM drive,
-you may want Zebra to make an internal copy of them. To do this,
-you specify 1 (true) in the storeData setting. When
-the Z39.50 server retrieves the records they will be read from the
-internal file structures of the system.
-
-
-
-
-
-Indexing with no Record IDs (Simple Indexing)
-
-
-If you have a set of records that are not expected to change over time
-you can build your database without record IDs.
-This indexing method uses less space than the other methods and
-is simple to use.
-
-
-
-To use this method, you simply omit the recordId entry
-for the group of files that you index. To add a set of records you use
-zebraidx with the update command. The
-update command will always add all of the records that it
-encounters to the index - whether they have already been indexed or
-not. If the set of indexed files change, you should delete all of the
-index files, and build a new index from scratch.
-
-
-
-Consider a system in which you have a group of text files called
-simple. That group of records should belong to a Z39.50 database
-called textbase. The following zebra.cfg file will suffice:
-
-
-
-
-
-profilePath: /usr/local/yaz
-attset: bib1.att
-simple.recordType: text
-simple.database: textbase
-
-
-
-
-
-Since the existing records in an index cannot be addressed by their
-IDs, it is impossible to delete or modify records when using this method.
-
-
-
-
-
-Indexing with File Record IDs
-
-
-If you have a set of files that regularly change over time: Old files
-are deleted, new ones are added, or existing files are modified, you
-can benefit from using the file ID indexing methodology. Examples
-of this type of database might include an index of WWW resources, or a
-USENET news spool area. Briefly speaking, the file key methodology
-uses the directory paths of the individual records as a unique
-identifier for each record. To perform indexing of a directory with
-file keys, again, you specify the top-level directory after the
-update command. The command will recursively traverse the
-directories and compare each one with whatever has been indexed before in
-that same directory. If a file is new (not in the previous version of
-the directory) it is inserted into the registers; if a file was
-already indexed and it has been modified since the last update,
-the index is also modified; if a file has been removed since the last
-visit, it is deleted from the index.
-
-
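The insert/modify/delete decision described above can be sketched as a comparison of two directory scans. This is only an illustrative Python model of the logic; the function name and data layout are invented here and are not Zebra's internals:

```python
# Decide insert/modify/delete for file record IDs by comparing the
# current directory scan with what was indexed last time, keyed on
# path with a modification timestamp. A sketch of the logic only;
# not Zebra's actual implementation.
def plan_update(previous, current):
    # previous/current: dict mapping path -> modification time
    actions = []
    for path, mtime in sorted(current.items()):
        if path not in previous:
            actions.append(("insert", path))       # new file
        elif previous[path] != mtime:
            actions.append(("modify", path))       # changed file
    for path in sorted(previous):
        if path not in current:
            actions.append(("delete", path))       # removed file
    return actions

old = {"records/a.sgml": 100, "records/b.sgml": 200}
new = {"records/b.sgml": 250, "records/c.sgml": 300}
print(plan_update(old, new))
# [('modify', 'records/b.sgml'), ('insert', 'records/c.sgml'),
#  ('delete', 'records/a.sgml')]
```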
-
-The resulting system is easy to administrate. To delete a record you
-simply have to delete the corresponding file (say, with the rm
-command). And to add records you create new files (or directories with
-files). For your changes to take effect in the register you must run
-zebraidx update with the same directory root again. This mode
-of operation requires more disk space than simpler indexing methods,
-but it makes it easier for you to keep the index in sync with a
-frequently changing set of data. If you combine this system with the
-safe update facility (see below), you never have to take your
-server offline for maintenance or register updating purposes.
-
-
-
-To enable indexing with pathname IDs, you must specify file as
-the value of recordId in the configuration file. In addition,
-you should set storeKeys to 1, since the Zebra
-indexer must save additional information about the contents of each record
-in order to modify the indices correctly at a later time.
-
-
-
-For example, to update records of group esdd located below
-/data1/records/ you should type:
-
-
-$ zebraidx -g esdd update /data1/records
-
-
-
-
-
-The corresponding configuration file includes:
-
-
-esdd.recordId: file
-esdd.recordType: grs.sgml
-esdd.storeKeys: 1
-
-
-
-
-
-Important note: You cannot start out with a group of records with simple
-indexing (no record IDs as in the previous section) and then later
-enable file record IDs. Zebra must know from the first time that you
-index the group that
-the files should be indexed with file record IDs.
-
-
-
-You cannot explicitly delete records when using this method (using the
-delete command to zebraidx). Instead
-you have to delete the files from the file system (or move them to a
-different location)
-and then run zebraidx with the update command.
-
-
-
-
-
-Indexing with General Record IDs
-
-
-When using this method you construct an (almost) arbitrary, internal
-record key based on the contents of the record itself and other system
-information. If you have a group of records that explicitly associates
-an ID with each record, this method is convenient. For example, the
-record format may contain a title or a ID-number - unique within the group.
-In either case you specify the Z39.50 attribute set and use-attribute
-location in which this information is stored, and the system looks at
-that field to determine the identity of the record.
-
-
-
-As before, the record ID is defined by the recordId setting
-in the configuration file. The value of the record ID specification
-consists of one or more tokens separated by whitespace. The resulting
-ID is
-represented in the index by concatenating the tokens and separating them by
-ASCII value (1).
-
-
-
-There are three kinds of tokens:
-
-
-
-Internal record info
-
-
-The token refers to a key that is
-extracted from the record. The syntax of this token is
-( set , use ), where set is the
-attribute set name and use is the name or value of the attribute.
-
-
-
-
-System variable
-
-
-The system variables are preceded by
-
-
-$
-
- and immediately followed by the system variable name, which
-may be one of
-
-
-
-group
-
-
-Group name.
-
-
-
-
-database
-
-
-Current database specified.
-
-
-
-
-type
-
-
-Record type.
-
-
-
-
-
-
-
-
-Constant string
-
-
-A string used as part of the ID — surrounded
-by single- or double quotes.
-
-
-
-
-
-
-
-For instance, the sample GILS records that come with the Zebra
-distribution contain a unique ID in the data tagged Control-Identifier.
-The data is mapped to the Bib-1 use attribute Identifier-standard
-(code 1007). To use this field as a record id, specify
-(bib1,Identifier-standard) as the value of the
-recordId in the configuration file.
-If you have other record types that use the same field for a
-different purpose, you might add the record type
-(or group or database name) to the record id of the gils
-records as well, to prevent matches with other types of records.
-In this case the recordId might be set like this:
-
-
-gils.recordId: $type (bib1,Identifier-standard)
-
-
-
-
-
-(see section
-for details of how the mapping between elements of your records and
-searchable attributes is established).
-
-
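The token resolution and ASCII-1 concatenation described above can be illustrated with a small Python sketch. The token handling is deliberately simplified and the helper function is hypothetical, not Zebra's parser:

```python
# Build an internal record ID from a recordId specification, joining
# the resolved tokens with ASCII value 1 as the text describes.
# Simplified sketch: "(set,use)" looks up a field of the record,
# "$var" reads a system variable, quoted strings are literals.
def build_record_id(spec, record_fields, system_vars):
    parts = []
    for token in spec.split():
        if token.startswith("(") and token.endswith(")"):
            setname, use = token[1:-1].split(",")
            parts.append(record_fields[(setname, use)])
        elif token.startswith("$"):
            parts.append(system_vars[token[1:]])
        else:
            parts.append(token.strip("'\""))
    return "\x01".join(parts)

rid = build_record_id(
    "$type (bib1,Identifier-standard)",
    {("bib1", "Identifier-standard"): "gils-001"},  # field from the record
    {"type": "grs.sgml"},                           # system variable
)
print(rid.split("\x01"))  # ['grs.sgml', 'gils-001']
```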
-
-As for the file record ID case described in the previous section,
-updating your system is simply a matter of running zebraidx
-with the update command. However, the update with general
-keys is considerably slower than with file record IDs, since all files
-visited must be (re)read to discover their IDs.
-
-
-
-As you might expect, when using the general record IDs
-method, you can only add or modify existing records with the update
-command. If you wish to delete records, you must use the
-delete command, with a directory as a parameter.
-This will remove all records that match the files below that root
-directory.
-
-
-
-
-
-Register Location
-
-
-Normally, the index files that form dictionaries, inverted
-files, record info, etc., are stored in the directory where you run
-zebraidx. If you wish to store these, possibly large, files
-somewhere else, you must add the register entry to the
-zebra.cfg file. Furthermore, the Zebra system allows its file
-structures to
-span multiple file systems, which is useful for managing very large
-databases.
-
-
-
-The value of the register setting is a sequence of tokens.
-Each token takes the form:
-
-
-dir:size.
-
-
-The dir specifies a directory in which index files will be
-stored and the size specifies the maximum size of all
-files in that directory. The Zebra indexer system fills each directory
-in the order specified and uses the next specified directories as needed.
-The size is an integer followed by a qualifier
-code, M for megabytes, k for kilobytes.
-
-
-
-For instance, if you have allocated two disks for your register, and
-the first disk is mounted
-on /d1 and has 200 Mb of free space and the
-second, mounted on /d2 has 300 Mb, you could
-put this entry in your configuration file:
-
-
-register: /d1:200M /d2:300M
-
-
-
-
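The dir:size token format can be illustrated with a short parsing sketch in Python. The helper is hypothetical, and the binary interpretation of the M and k qualifiers is an assumption here, since the text only says megabytes and kilobytes:

```python
# Parse register tokens of the form dir:size with M/k qualifiers,
# e.g. "/d1:200M /d2:300M". Sizes are treated as binary units
# (an assumption for illustration; not taken from Zebra's code).
def parse_register(spec):
    units = {"M": 1024 * 1024, "k": 1024}
    areas = []
    for token in spec.split():
        directory, _, size = token.rpartition(":")
        number, qualifier = size[:-1], size[-1]
        areas.append((directory, int(number) * units[qualifier]))
    return areas

print(parse_register("/d1:200M /d2:300M"))
# [('/d1', 209715200), ('/d2', 314572800)]
```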
-
-Note that Zebra does not verify that the amount of space specified is
-actually available on the directory (file system) specified - it is
-your responsibility to ensure that enough space is available, and that
-other applications do not attempt to use the free space. In a large production system,
-it is recommended that you allocate one or more file systems exclusively
-to the Zebra register files.
-
-
-
-
-
-Safe Updating - Using Shadow Registers
-
-
-Description
-
-
-The Zebra server supports updating of the index structures. That is,
-you can add, modify, or remove records from databases managed by Zebra
-without rebuilding the entire index. Since this process involves
-modifying structured files with various references between blocks of
-data in the files, the update process is inherently sensitive to
-system crashes, or to process interruptions: Anything but a
-successfully completed update process will leave the register files in
-an unknown state, and you will essentially have no recourse but to
-re-index everything, or to restore the register files from a backup
-medium. Further, while the update process is active, users cannot be
-allowed to access the system, as the contents of the register files
-may change unpredictably.
-
-
-
-You can solve these problems by enabling the shadow register system in
-Zebra. During the updating procedure, zebraidx will temporarily
-write changes to the involved files in a set of "shadow
-files", without modifying the files that are accessed by the
-active server processes. If the update procedure is interrupted by a
-system crash or a signal, you simply repeat the procedure - the
-register files have not been changed or damaged, and the partially
-written shadow files are automatically deleted before the new updating
-procedure commences.
-
-
-
-At the end of the updating procedure (or in a separate operation, if
-you so desire), the system enters a "commit mode". First,
-any active server processes are forced to access those blocks that
-have been changed from the shadow files rather than from the main
-register files; the unmodified blocks are still accessed at their
-normal location (the shadow files are not a complete copy of the
-register files - they only contain those parts that have actually been
-modified). If the commit process is interrupted at any point during the
-commit process, the server processes will continue to access the
-shadow files until you can repeat the commit procedure and complete
-the writing of data to the main register files. You can perform
-multiple update operations to the registers before you commit the
-changes to the system files, or you can execute the commit operation
-at the end of each update operation. When the commit phase has
-completed successfully, any running server processes are instructed to
-switch their operations to the new, operational register, and the
-temporary shadow files are deleted.
-
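The shadow/commit mechanism described above can be modeled as an overlay of modified blocks over the main register. This is a heavily simplified Python sketch; the block maps and function names are invented for illustration and do not reflect Zebra's file formats:

```python
# Model of shadow-register reads: changed blocks live in a shadow map
# and take precedence; unchanged blocks come from the main register.
# Purely illustrative; not Zebra's internals.
def read_block(block_no, main_blocks, shadow_blocks):
    if block_no in shadow_blocks:
        return shadow_blocks[block_no]
    return main_blocks[block_no]

def commit(main_blocks, shadow_blocks):
    # Write all shadowed blocks into the main register, then clear
    # the shadow area (mirroring the commit phase described above).
    main_blocks.update(shadow_blocks)
    shadow_blocks.clear()

main = {0: b"old-A", 1: b"old-B"}
shadow = {1: b"new-B"}              # only modified blocks are shadowed
print(read_block(1, main, shadow))  # b'new-B'
commit(main, shadow)
print(main[1], shadow)              # b'new-B' {}
```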
-
-
-
-
-How to Use Shadow Register Files
-
-
-The first step is to allocate space on your system for the shadow
-files. You do this by adding a shadow entry to the zebra.cfg
-file. The syntax of the shadow entry is exactly the same as for
-the register entry (see section ). The location of the shadow area should be
-different from the location of the main register area (if you
-have specified one - remember that if you provide no register
-setting, the default register area is the
-working directory of the server and indexing processes).
-
-
-
-The following excerpt from a zebra.cfg file shows one example of
-a setup that configures both the main register location and the shadow
-file area. Note that two directories or partitions have been set aside
-for the shadow file area. You can specify any number of directories
-for each of the file areas, but remember that there should be no
-overlaps between the directories used for the main registers and the
-shadow files, respectively.
-
-
-
-
-
-register: /d1:500M
-
-shadow: /scratch1:100M /scratch2:200M
-
-
-
-
-
-When shadow files are enabled, an extra command is available at the
-zebraidx command line. In order to make changes to the system
-take effect for the users, you'll have to submit a
-"commit" command after a (sequence of) update
-operation(s). You can ask the indexer to commit the changes
-immediately after the update operation:
-
-
-
-
-
-$ zebraidx update /d1/records update /d2/more-records commit
-
-
-
-
-
-Or you can execute multiple updates before committing the changes:
-
-
-
-
-
-$ zebraidx -g books update /d1/records update /d2/more-records
-$ zebraidx -g fun update /d3/fun-records
-$ zebraidx commit
-
-
-
-
-
-If one of the update operations above had been interrupted, the commit
-operation on the last line would fail: zebraidx will not let you
-commit changes that would destroy the running register. You'll have to
-rerun all of the update operations since your last commit operation,
-before you can commit the new changes.
-
-
-
-Similarly, if the commit operation fails, zebraidx will not let
-you start a new update operation before you have successfully repeated
-the commit operation. The server processes will keep accessing the
-shadow files rather than the (possibly damaged) blocks of the main
-register files until the commit operation has successfully completed.
-
-
-
-You should be aware that update operations may take slightly longer
-when the shadow register system is enabled, since more file access
-operations are involved. Further, while the disk space required for
-the shadow register data is modest for a small update operation, you
-may prefer to disable the system if you are adding a very large number
-of records to an already very large database (we use the terms
-large and modest very loosely here, since every
-application will have a different perception of size). To update the system
-without the use of the shadow files, simply run zebraidx with
-the -n option (note that you do not have to execute the
-commit command of zebraidx when you temporarily disable the
-use of the shadow registers in this fashion). Note also that, just as
-when the shadow registers are not enabled, server processes will be
-barred from accessing the main register while the update procedure
-takes place.
-
-
-
-
+
+ Administrating Zebra
+
+
+
+ Unlike many simpler retrieval systems, Zebra supports safe, incremental
+ updates to an existing index.
+
+
+
+ Normally, when Zebra modifies the index it reads a number of records
+ that you specify.
+ Depending on your specifications and on the contents of each record
+ one of the following events takes place for each record:
+
+
+
+ Insert
+
+
+ The record is indexed as if it never occurred before.
+ Either the Zebra system doesn't know how to identify the record or
+ Zebra can identify the record but didn't find it to be already indexed.
+
+
+
+
+ Modify
+
+
+ The record has already been indexed.
+ In this case either the contents of the record or the location
+ (file) of the record indicates that it has been indexed before.
+
+
+
+
+ Delete
+
+
+ The record is deleted from the index. As in the
+ modify case, the indexer must be able to identify the record.
+
+
+
+
+
+
+
+ Please note that in both the modify and delete cases the Zebra
+ indexer must be able to generate a unique key that identifies the record
+ in question (more on this below).
+
+
+
+ To administrate the Zebra retrieval system, you run the
+ zebraidx program.
+ This program supports a number of options which are preceded by a dash,
+ and a few commands (not preceded by dash).
+
+
+
+ Both the Zebra administrative tool and the Z39.50 server share a
+ set of index files and a global configuration file.
+ The name of the configuration file defaults to
+ zebra.cfg.
+ The configuration file includes specifications on how to index
+ various kinds of records and where the other configuration files
+ are located. zebrasrv and zebraidx
+ must be run in the directory where the
+ configuration file lives unless you indicate the location of the
+ configuration file by option -c.
+
+
+
+ Record Types
+
+
+ Indexing is a per-record process, in which either insert/modify/delete
+ will occur. Before a record is indexed, search keys are extracted from
+ whatever the layout of the original record might be (SGML, HTML, text, etc.).
+ The Zebra system currently supports two fundamental types of records:
+ structured and simple text.
+ To specify a particular extraction process, use either the
+ command line option -t or specify a
+ recordType setting in the configuration file.
+
+
+
+
+
+ The Zebra Configuration File
+
+
+ The Zebra configuration file, read by zebraidx and
+ zebrasrv defaults to zebra.cfg
+ unless specified by -c option.
+
+
+
+ You can edit the configuration file with a normal text editor.
+ Parameter names and values are separated by colons in the file. Lines
+ starting with a hash sign (#) are
+ treated as comments.
+
+
+
+ If you manage different sets of records that share common
+ characteristics, you can organize the configuration settings for each
+ type into "groups".
+ When zebraidx is run and you wish to address a
+ given group you specify the group name with the -g
+ option.
+ In this case settings that have the group name as their prefix
+ will be used by zebraidx.
+ If no -g option is specified, the settings
+ without prefix are used.
+
+
+
+ In the configuration file, the group name is placed before the option
+ name itself, separated by a dot (.). For instance, to set the record type
+ for group public to grs.sgml
+ (the SGML-like format for structured records) you would write:
+
+
+
+
+ public.recordType: grs.sgml
+
+
+
+
+ To set the default value of the record type to text
+ write:
+
+
+
+
+ recordType: text
+
+
+
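How the group prefix and the unprefixed fallback interact can be sketched in Python. This is a simplified model with a hypothetical helper, not Zebra's configuration parser:

```python
# Resolve a setting the way group prefixes work in zebra.cfg:
# prefer "group.name" when a group is given (as with -g), else fall
# back to the unprefixed "name". Simplified model for illustration.
def resolve_setting(settings, name, group=None):
    if group is not None:
        qualified = "%s.%s" % (group, name)
        if qualified in settings:
            return settings[qualified]
    return settings.get(name)

cfg = {
    "public.recordType": "grs.sgml",  # used when -g public is given
    "recordType": "text",             # default (no -g option)
}
print(resolve_setting(cfg, "recordType", group="public"))  # grs.sgml
print(resolve_setting(cfg, "recordType"))                  # text
```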
+
+ The available configuration settings are summarized below. They will be
+ explained further in the following sections.
+
+
+
+
+
+
+
+
+
+ group
+ .recordType[.name]:
+ type
+
+
+
+ Specifies how records with the file extension
+ name should be handled by the indexer.
+ This option may also be specified as a command line option
+ (-t). Note that if you do not specify a
+ name, the setting applies to all files.
+ In general, the record type specifier consists of the elements (each
+ element separated by dot), fundamental-type,
+ file-read-type and arguments. Currently, two
+ fundamental types exist, text and
+ grs.
+
+
+
+
+ group.recordId:
+ record-id-spec
+
+
+ Specifies how the records are to be identified when updated. See
+ .
+
+
+
+
+ group.database:
+ database
+
+
+ Specifies the Z39.50 database name.
+
+
+
+
+
+ group.storeKeys:
+ boolean
+
+
+ Specifies whether key information should be saved for a given
+ group of records. If you plan to update/delete this type of
+ records later this should be specified as 1; otherwise it
+ should be 0 (default), to save register space.
+
+ See .
+
+
+
+
+ group.storeData:
+ boolean
+
+
+ Specifies whether the records should be stored internally
+ in the Zebra system files.
+ If you want to maintain the raw records yourself,
+ this option should be false (0).
+ If you want Zebra to take care of the records for you, it
+ should be true (1).
+
+
+
+
+
+ register: register-location
+
+
+ Specifies the location of the various register files that Zebra uses
+ to represent your databases.
+ See .
+
+
+
+
+ shadow: register-location
+
+
+ Enables the safe update facility of Zebra, and
+ tells the system where to place the required, temporary files.
+ See .
+
+
+
+
+ lockDir: directory
+
+
+ Directory in which various lock files are stored.
+
+
+
+
+ keyTmpDir: directory
+
+
+ Directory in which temporary files used during zebraidx's update
+ phase are stored.
+
+
+
+
+ setTmpDir: directory
+
+
+ Specifies the directory that the server uses for temporary result sets.
+ If not specified /tmp will be used.
+
+
+
+
+ profilePath: path
+
+
+ Specifies a path of profile specification files.
+ The path is composed of one or more directories separated by
+ colon. Similar to PATH for UNIX systems.
+
+
+
+
+
+ modulePath: path
+
+
+ Specifies a path of record filter modules.
+ The path is composed of one or more directories separated by
+ colon. Similar to PATH for UNIX systems.
+ The 'make install' procedure typically puts modules in
+ /usr/local/lib/idzebra-2.0/modules.
+
+
+
+
+
+ staticrank: integer
+
+
+ Specifies whether static ranking is enabled (1) or
+ disabled (0). If omitted, it is disabled - corresponding
+ to a value of 0.
+ Refer to .
+
+
+
+
+
+
+ estimatehits: integer
+
+
+ Controls whether Zebra should calculate approximate hit counts and
+ at which hit count it is to be enabled.
+ A value of 0 disables approximate hit counts.
+ For a positive value, approximate hit counts are enabled
+ if the hit count is known to be larger than integer.
+
+
+ Approximate hit counts can also be triggered by a particular
+ attribute in a query.
+ Refer to .
+
+
+
+
+
+ attset: filename
+
+
+ Specifies the filename(s) of attribute set files for use in
+ searching. In many configurations bib1.att
+ is used, but that is not required. If Classic Explain
+ attributes is to be used for searching,
+ explain.att must be given.
+ The path to att-files in general can be given using
+ profilePath setting.
+ See also .
+
+
+
+
+ memMax: size
+
+
+ Specifies size of internal memory
+ to use for the zebraidx program.
+ The amount is given in megabytes - default is 4 (4 MB).
+ The more memory, the faster large updates happen, up to about
+ half the free memory available on the computer.
+
+
+
+
+ tempfiles: Yes/Auto/No
+
+
+ Tells Zebra whether it should use temporary files when indexing. The
+ default is Auto, in which case Zebra uses temporary files only
+ if it would need more than memMax
+ megabytes of memory. This should be good for most uses.
+
+
+
+
+
+ root: dir
+
+
+ Specifies a directory base for Zebra. All relative paths
+ given (in profilePath, register, shadow) are based on this
+ directory. This setting is useful if your Zebra server
+ is running in a different directory from where
+ zebra.cfg is located.
+
+
+
+
+
+ passwd: file
+
+
+ Specifies a file with description of user accounts for Zebra.
+ The format is similar to that of Apache's htpasswd files
+ and UNIX passwd files. Non-empty lines not beginning with
+ # are considered account lines. There is one account per line.
+ A line consists of fields separated by a single colon character.
+ The first field is the username, the second is the password.
+
+
+
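+ A sketch of such a file (both accounts are invented for the
+ example):
+
+ ```text
+ # username:password - plain text, as used by the passwd directive
+ admin:secret
+ reader:letmein
+ ```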
+
+
+ passwd.c: file
+
+
+ Specifies a file with description of user accounts for Zebra.
+ File format is similar to that used by the passwd directive except
+ that the passwords are encrypted. Use Apache's htpasswd or similar
+ for maintenance.
+
+
+
+
+
+ perm.user:
+ permstring
+
+
+ Specifies permissions (privileges) for a user that is allowed
+ to access Zebra via the passwd system. There are currently two kinds
+ of permissions: read (r) and write (w). By default
+ users not listed in a permission directive are given the read
+ privilege. To specify permissions for a user with no
+ username (Z39.50 anonymous style), use
+ anonymous. The permstring consists of
+ a sequence of characters. Include character w
+ for write/update access, r for read access and
+ a to allow anonymous access through this account.
+
+
+
+
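+ For example, the following (hypothetical) setup gives anonymous
+ users read-only access and a user named admin full access:
+
+ ```text
+ perm.anonymous: r
+ perm.admin: rw
+ ```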
+
+ dbaccess accessfile
+
+
+ Names a file which lists database subscriptions for individual users.
+ The access file should consist of lines of the form username:
+ dbnames, where dbnames is a list of database names, separated by
+ '+'. No whitespace is allowed in the database list.
+
+
+
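+ A sketch of such an access file (the usernames and database names
+ are invented for the example):
+
+ ```text
+ # user alice may search db1 and db2; bob only db1
+ alice: db1+db2
+ bob: db1
+ ```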
+
+
+
+
+
+
+
+ Locating Records
+
+
+ The default behavior of the Zebra system is to reference the
+ records from their original location, i.e. where they were found when you
+ ran zebraidx.
+ That is, when a client wishes to retrieve a record
+ following a search operation, the files are accessed from the place
+ where you originally put them - if you remove the files without
+ running zebraidx again, the server will return
+ diagnostic number 14 (``System error in presenting records'') to
+ the client.
+
+
+
+ If your input files are not permanent - for example if you retrieve
+ your records from an outside source, or if they were temporarily
+ mounted on a CD-ROM drive,
+ you may want Zebra to make an internal copy of them. To do this,
+ you specify 1 (true) in the storeData setting. When
+ the Z39.50 server retrieves the records they will be read from the
+ internal file structures of the system.
+
+
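+ The corresponding zebra.cfg entry is simply:
+
+ ```text
+ # keep a copy of each record inside Zebra's own register files
+ storeData: 1
+ ```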
+
+
+
+ Indexing with no Record IDs (Simple Indexing)
+
+
+ If you have a set of records that are not expected to change over time
+ you can build your database without record IDs.
+ This indexing method uses less space than the other methods and
+ is simple to use.
+
+
+
+ To use this method, you simply omit the recordId entry
+ for the group of files that you index. To add a set of records you use
+ zebraidx with the update command. The
+ update command will always add all of the records that it
+ encounters to the index - whether they have already been indexed or
+ not. If the set of indexed files change, you should delete all of the
+ index files, and build a new index from scratch.
+
+
+
+ Consider a system in which you have a group of text files called
+ simple.
+ That group of records should belong to a Z39.50 database called
+ textbase.
+ The following zebra.cfg file will suffice:
+
+
+
+
+ profilePath: /usr/local/idzebra/tab
+ attset: bib1.att
+ simple.recordType: text
+ simple.database: textbase
+
+
+
+
+
+ Since the existing records in an index cannot be addressed by their
+ IDs, it is impossible to delete or modify records when using this method.
+
+
+
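+ A rebuild from scratch can be sketched like this (assuming
+ zebraidx is on your PATH, the register files live in the current
+ directory, and the records/ directory name is an assumption for
+ the example; check that your zebraidx version supports the init
+ command, which deletes the register):
+
+ ```shell
+ # wipe the existing register, then re-index everything from scratch
+ $ zebraidx init
+ $ zebraidx -g simple update records
+ ```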
+
+
+ Indexing with File Record IDs
+
+
+ If you have a set of files that regularly change over time: Old files
+ are deleted, new ones are added, or existing files are modified, you
+ can benefit from using the file ID
+ indexing methodology.
+ Examples of this type of database might include an index of WWW
+ resources, or a USENET news spool area.
+ Briefly speaking, the file key methodology uses the directory paths
+ of the individual records as a unique identifier for each record.
+ To perform indexing of a directory with file keys, again, you specify
+ the top-level directory after the update command.
+ The command will recursively traverse the directories and compare
+ each one with whatever has been indexed before in that same directory.
+ If a file is new (not in the previous version of the directory) it
+ is inserted into the registers; if a file was already indexed and
+ it has been modified since the last update, the index is also
+ modified; if a file has been removed since the last
+ visit, it is deleted from the index.
+
+
+
+ The resulting system is easy to administrate. To delete a record you
+ simply have to delete the corresponding file (say, with the
+ rm command). And to add records you create new
+ files (or directories with files). For your changes to take effect
+ in the register you must run zebraidx update with
+ the same directory root again. This mode of operation requires more
+ disk space than simpler indexing methods, but it makes it easier for
+ you to keep the index in sync with a frequently changing set of data.
+ If you combine this system with the safe update
+ facility (see below), you never have to take your server off-line for
+ maintenance or register updating purposes.
+
+
+
+ To enable indexing with pathname IDs, you must specify
+ file as the value of recordId
+ in the configuration file. In addition, you should set
+ storeKeys to 1, since the Zebra
+ indexer must save additional information about the contents of each record
+ in order to modify the indexes correctly at a later time.
+
+
+
+
+
+ For example, to update records of group esdd
+ located below
+ /data1/records/ you should type:
+
+ $ zebraidx -g esdd update /data1/records
+
+
+
+
+ The corresponding configuration file includes:
+
+ esdd.recordId: file
+ esdd.recordType: grs.sgml
+ esdd.storeKeys: 1
+
+
+
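+ With this setup, day-to-day maintenance is just file operations
+ followed by a re-run of the indexer, for example (the file names
+ are invented for the example):
+
+ ```shell
+ # delete one record, add another, then bring the index in sync
+ $ rm /data1/records/obsolete.sgml
+ $ cp new-record.sgml /data1/records/
+ $ zebraidx -g esdd update /data1/records
+ ```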
+
+ You cannot start out with a group of records with simple
+ indexing (no record IDs as in the previous section) and then later
+ enable file record IDs. Zebra must know from the first time that you
+ index the group that
+ the files should be indexed with file record IDs.
+
+
+
+
+ You cannot explicitly delete records when using this method (using the
+ delete command of zebraidx). Instead
+ you have to delete the files from the file system (or move them to a
+ different location)
+ and then run zebraidx with the
+ update command.
+
+
+
+
+ Indexing with General Record IDs
+
+
+ When using this method you construct an (almost) arbitrary, internal
+ record key based on the contents of the record itself and other system
+ information. If you have a group of records that explicitly associates
+ an ID with each record, this method is convenient. For example, the
+ record format may contain a title or an ID number - unique within the group.
+ In either case you specify the Z39.50 attribute set and use-attribute
+ location in which this information is stored, and the system looks at
+ that field to determine the identity of the record.
+
+
+
+ As before, the record ID is defined by the recordId
+ setting in the configuration file. The value of the record ID specification
+ consists of one or more tokens separated by whitespace. The resulting
+ ID is represented in the index by concatenating the tokens,
+ separated by the ASCII character with value 1.
+
+
+
+ There are three kinds of tokens:
+
+
+
+ Internal record info
+
+
+ The token refers to a key that is
+ extracted from the record. The syntax of this token is
+ ( set ,
+ use ),
+ where set is the
+ attribute set name and use is the
+ name or value of the attribute.
+
+
+
+
+ System variable
+
+
+ The system variables are preceded by
+
+
+ $
+
+ and immediately followed by the system variable name, which
+ may be one of
+
+
+
+ group
+
+
+ Group name.
+
+
+
+
+ database
+
+
+ Current database specified.
+
+
+
+
+ type
+
+
+ Record type.
+
+
+
+
+
+
+
+
+ Constant string
+
+
+ A string used as part of the ID — surrounded
+ by single or double quotes.
+
+
+
+
+
+
+
+ For instance, the sample GILS records that come with the Zebra
+ distribution contain a unique ID in the data tagged Control-Identifier.
+ The data is mapped to the Bib-1 use attribute Identifier-standard
+ (code 1007). To use this field as a record id, specify
+ (bib1,Identifier-standard) as the value of the
+ recordId in the configuration file.
+ If you have other record types that use the same field for a
+ different purpose, you might add the record type
+ (or group or database name) to the record id of the gils
+ records as well, to prevent matches with other types of records.
+ In this case the recordId might be set like this:
+
+
+ gils.recordId: $type (bib1,Identifier-standard)
+
+
+
+
+
+ (see
+ for details of how the mapping between elements of your records and
+ searchable attributes is established).
+
+
+
+ As for the file record ID case described in the previous section,
+ updating your system is simply a matter of running
+ zebraidx
+ with the update command. However, the update with general
+ keys is considerably slower than with file record IDs, since all files
+ visited must be (re)read to discover their IDs.
+
+
+
+ As you might expect, when using the general record IDs
+ method, you can only add or modify existing records with the
+ update command.
+ If you wish to delete records, you must use the
+ delete command, with a directory as a parameter.
+ This will remove all records that match the files below that root
+ directory.
+
+
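+ For instance, with the gils group from the example above, a
+ deletion might look like this (the directory path is an assumption
+ for the example):
+
+ ```shell
+ # remove all records whose IDs match the files below this directory
+ $ zebraidx -g gils delete /data1/records
+ ```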
+
+
+
+ Register Location
+
+
+ Normally, the index files that form dictionaries, inverted
+ files, record info, etc., are stored in the directory where you run
+ zebraidx. If you wish to store these, possibly large,
+ files somewhere else, you must add the register
+ entry to the zebra.cfg file.
+ Furthermore, the Zebra system allows its file
+ structures to span multiple file systems, which is useful for
+ managing very large databases.
+
+
+
+ The value of the register setting is a sequence
+ of tokens. Each token takes the form:
+
+
+ dir:size.
+
+
+ The dir specifies a directory in which index files
+ will be stored and the size specifies the maximum
+ size of all files in that directory. The Zebra indexer system fills
+ each directory in the order specified and uses the next specified
+ directories as needed.
+ The size is an integer followed by a qualifier
+ code:
+ b for bytes,
+ k for kilobytes,
+ M for megabytes,
+ G for gigabytes.
+
+
+
+ For instance, if you have allocated two disks for your register, and
+ the first disk is mounted
+ on /d1 and has 2GB of free space and the
+ second, mounted on /d2 has 3.6 GB, you could
+ put this entry in your configuration file:
+
+
+ register: /d1:2G /d2:3600M
+
+
+
+
+
+ Note that Zebra does not verify that the amount of space specified is
+ actually available on the directory (file system) specified - it is
+ your responsibility to ensure that enough space is available, and that
+ other applications do not attempt to use the free space. In a large
+ production system, it is recommended that you allocate one or more
+ file systems exclusively to the Zebra register files.
+
+
+
+
+
+ Safe Updating - Using Shadow Registers
+
+
+ Description
+
+
+ The Zebra server supports updating of the index
+ structures. That is, you can add, modify, or remove records from
+ databases managed by Zebra without rebuilding the entire index.
+ Since this process involves modifying structured files with various
+ references between blocks of data in the files, the update process
+ is inherently sensitive to system crashes, or to process interruptions:
+ Anything but a successfully completed update process will leave the
+ register files in an unknown state, and you will essentially have no
+ recourse but to re-index everything, or to restore the register files
+ from a backup medium.
+ Further, while the update process is active, users cannot be
+ allowed to access the system, as the contents of the register files
+ may change unpredictably.
+
+
+
+ You can solve these problems by enabling the shadow register system in
+ Zebra.
+ During the updating procedure, zebraidx will temporarily
+ write changes to the involved files in a set of "shadow
+ files", without modifying the files that are accessed by the
+ active server processes. If the update procedure is interrupted by a
+ system crash or a signal, you simply repeat the procedure - the
+ register files have not been changed or damaged, and the partially
+ written shadow files are automatically deleted before the new updating
+ procedure commences.
+
+
+
+ At the end of the updating procedure (or in a separate operation, if
+ you so desire), the system enters a "commit mode". First,
+ any active server processes are forced to access those blocks that
+ have been changed from the shadow files rather than from the main
+ register files; the unmodified blocks are still accessed at their
+ normal location (the shadow files are not a complete copy of the
+ register files - they only contain those parts that have actually been
+ modified). If the commit process is interrupted at any
+ point, the server processes will continue to access the
+ shadow files until you can repeat the commit procedure and complete
+ the writing of data to the main register files. You can perform
+ multiple update operations to the registers before you commit the
+ changes to the system files, or you can execute the commit operation
+ at the end of each update operation. When the commit phase has
+ completed successfully, any running server processes are instructed to
+ switch their operations to the new, operational register, and the
+ temporary shadow files are deleted.
+
+
+
+
+
+ How to Use Shadow Register Files
+
+
+ The first step is to allocate space on your system for the shadow
+ files.
+ You do this by adding a shadow entry to the
+ zebra.cfg file.
+ The syntax of the shadow entry is exactly the
+ same as for the register entry
+ (see ).
+ The location of the shadow area should be
+ different from the location of the main register
+ area (if you have specified one - remember that if you provide no
+ register setting, the default register area is the
+ working directory of the server and indexing processes).
+
+
+
+ The following excerpt from a zebra.cfg file shows
+ one example of a setup that configures both the main register
+ location and the shadow file area.
+ Note that two directories or partitions have been set aside
+ for the shadow file area. You can specify any number of directories
+ for each of the file areas, but remember that there should be no
+ overlaps between the directories used for the main registers and the
+ shadow files, respectively.
+
+
+
+
+ register: /d1:500M
+ shadow: /scratch1:100M /scratch2:200M
+
+
+
+
+
+ When shadow files are enabled, an extra command is available at the
+ zebraidx command line.
+ In order to make changes to the system take effect for the
+ users, you'll have to submit a "commit" command after a
+ (sequence of) update operation(s).
+
+
+
+
+
+ $ zebraidx update /d1/records
+ $ zebraidx commit
+
+
+
+
+
+ Or you can execute multiple updates before committing the changes:
+
+
+
+
+
+ $ zebraidx -g books update /d1/records /d2/more-records
+ $ zebraidx -g fun update /d3/fun-records
+ $ zebraidx commit
+
+
+
+
+
+ If one of the update operations above had been interrupted, the commit
+ operation on the last line would fail: zebraidx
+ will not let you commit changes that would destroy the running register.
+ You'll have to rerun all of the update operations since your last
+ commit operation, before you can commit the new changes.
+
+
+
+ Similarly, if the commit operation fails, zebraidx
+ will not let you start a new update operation before you have
+ successfully repeated the commit operation.
+ The server processes will keep accessing the shadow files rather
+ than the (possibly damaged) blocks of the main register files
+ until the commit operation has successfully completed.
+
+
+
+ You should be aware that update operations may take slightly longer
+ when the shadow register system is enabled, since more file access
+ operations are involved. Further, while the disk space required for
+ the shadow register data is modest for a small update operation, you
+ may prefer to disable the system if you are adding a very large number
+ of records to an already very large database (we use the terms
+ large and modest
+ very loosely here, since every application will have a
+ different perception of size).
+ To update the system without the use of the shadow files,
+ simply run zebraidx with the -n
+ option (note that you do not have to execute the
+ commit command of zebraidx
+ when you temporarily disable the use of the shadow registers in
+ this fashion).
+ Note also that, just as when the shadow registers are not enabled,
+ server processes will be barred from accessing the main register
+ while the update procedure takes place.
+
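+ A one-off bulk load with the shadow system temporarily bypassed
+ could look like this (the path is an assumption for the example):
+
+ ```shell
+ # -n disables the shadow files for this run; no commit is needed,
+ # but the server must not access the register during the update
+ $ zebraidx -n update /d1/massive-load
+ ```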
+
+
+
+
+
+
+
+ Relevance Ranking and Sorting of Result Sets
+
+
+ Overview
+
+ The default ordering of a result set is left up to the server,
+ which inside Zebra means sorting in ascending document ID order.
+ This is not always the order humans want to browse the sometimes
+ quite large hit sets. Ranking and sorting comes to the rescue.
+
+
+
+ In cases where a good presentation ordering can be computed at
+ indexing time, we can use a fixed static ranking
+ scheme, which is provided for the alvis
+ indexing filter. This defines a fixed ordering of hit lists,
+ independently of the query issued.
+
+
+
+ There are cases, however, where relevance of hit set documents is
+ highly dependent on the query processed.
+ Simply put, dynamic relevance ranking
+ sorts a set of retrieved records such that those most likely to be
+ relevant to your request are retrieved first.
+ Internally, Zebra retrieves all documents that satisfy your
+ query, and re-orders the hit list to arrange them based on
+ a measurement of similarity between your query and the content of
+ each record.
+
+
+
+ Finally, there are situations where hit sets of documents should be
+ sorted during query time according to the
+ lexicographical ordering of certain sort indexes created at
+ indexing time.
+
+
+
+
+
+ Static Ranking
+
+
+ Zebra internally uses inverted indexes to look up term occurrences
+ in documents. Multiple queries from different indexes can be
+ combined by the binary boolean operations AND,
+ OR and/or NOT (which
+ is in fact a binary AND NOT operation).
+ To ensure fast query execution
+ speed, all indexes have to be sorted in the same order.
+
+
+ The indexes are normally sorted according to document
+ ID in
+ ascending order, and any query which does not invoke a special
+ re-ranking function will therefore retrieve the result set in
+ document
+ ID
+ order.
+
+
+ If one defines the
+
+ staticrank: 1
+
+ directive in the main core Zebra configuration file, the internal document
+ keys used for ordering are augmented by a preceding integer, which
+ contains the static rank of a given document, and the index lists
+ are ordered
+ first by ascending static rank,
+ then by ascending document ID.
+ Zero
+ is the ``best'' rank, as it occurs at the
+ beginning of the list; higher numbers represent worse scores.
+
+
+ The experimental alvis filter provides a
+ directive to fetch static rank information out of the indexed XML
+ records, thus making all hit sets ordered
+ by ascending static
+ rank, and for those docs which have the same static rank, ordered
+ by ascending doc ID.
+ See for the gory details.
+
+
+
+
+
+ Dynamic Ranking
+
+ In order to fiddle with the static rank order, it is necessary to
+ invoke additional re-ranking/re-ordering using dynamic
+ ranking or score functions. These functions return positive
+ integer scores, where highest score is
+ ``best'';
+ hit sets are sorted according to descending
+ scores (in contrast
+ to the index lists, which are sorted according to
+ ascending rank number and document ID).
+
+
+ Dynamic ranking is enabled by a directive like one of the
+ following in the zebra configuration file (use only one of these at a time!):
+
+ rank: rank-1 # default TF-IDF like
+ rank: rank-static # dummy do-nothing
+
+
+
+
+ Dynamic ranking is done at query time rather than
+ indexing time (this is why we
+ call it ``dynamic ranking'' in the first place ...)
+ It is invoked by adding
+ the Bib-1 relation attribute with
+ value ``relevance'' to the PQF query (that is,
+ @attr 2=102, see also
+
+ The BIB-1 Attribute Set Semantics, also in
+ HTML).
+ To find all articles with the word Eoraptor in
+ the title, and present them relevance ranked, issue the PQF query:
+
+ @attr 2=102 @attr 1=4 Eoraptor
+
+
+
+
+ Dynamically ranking using PQF queries with the 'rank-1'
+ algorithm
+
+
+ The default rank-1 ranking module implements a
+ TF/IDF (Term Frequency over Inverse Document Frequency) like
+ algorithm. In contrast to the usual definition of TF/IDF
+ algorithms, which only consider searching in one full-text
+ index, this one works on multiple indexes at the same time.
+ More precisely,
+ Zebra does boolean queries and searches in specific addressed
+ indexes (there are inverted indexes pointing from terms in the
+ dictionary to documents and term positions inside documents).
+ It works like this:
+
+
+ Query Components
+
+
+ First, the boolean query is dismantled into its principal components,
+ i.e. atomic queries where one term is looked up in one index.
+ For example, the query
+
+ @attr 2=102 @and @attr 1=1010 Utah @attr 1=1018 Springer
+
+ is a boolean AND between the atomic parts
+
+ @attr 2=102 @attr 1=1010 Utah
+
+ and
+
+ @attr 2=102 @attr 1=1018 Springer
+
+ each of which is processed separately.
+
+
+
+
+
+ Atomic hit lists
+
+
+ Second, for each atomic query, the hit list of documents is
+ computed.
+
+
+ In this example, two hit lists for each index
+ @attr 1=1010 and
+ @attr 1=1018 are computed.
+
+
+
+
+
+ Atomic scores
+
+
+ Third, each document in the hit list is assigned a score (if ranking
+ is enabled and requested in the query) using a TF/IDF scheme.
+
+
+ In this example, both atomic parts of the query assign the magic
+ @attr 2=102 relevance attribute, and are
+ to be used in the relevance ranking functions.
+
+
+ It is possible to apply dynamic ranking on only parts of the
+ PQF query:
+
+ @and @attr 2=102 @attr 1=1010 Utah @attr 1=1018 Springer
+
+ searches for all documents which have the term 'Utah' in the
+ body of text, and which have the term 'Springer' in the publisher
+ field, and sort them in the order of the relevance ranking made on
+ the body-of-text index only.
+
+
+
+
+
+ Hit list merging
+
+
+ Fourth, the atomic hit lists are merged according to the boolean
+ conditions to a final hit list of documents to be returned.
+
+
+ This step is always performed, regardless of whether
+ dynamic ranking is enabled.
+
+
+
+
+
+ Document score computation
+
+
+ Fifth, the total score of a document is computed as a linear
+ combination of the atomic scores of the atomic hit lists.
+
+
+ Ranking weights may be used to pass a value to a ranking
+ algorithm, using the non-standard BIB-1 attribute type 9.
+ This allows one branch of a query to use one value while
+ another branch uses a different one. For example, we can search
+ for utah in the
+ @attr 1=4 index with weight 30, as
+ well as in the @attr 1=1010 index with weight 20:
+
+ @attr 2=102 @or @attr 9=30 @attr 1=4 utah @attr 9=20 @attr 1=1010 city
+
+
+
+ The default weight is
+ sqrt(1000) ~ 34, as the Z39.50 standard prescribes that the top score
+ is 1000 and the bottom score is 0, encoded in integers.
+
+
+
+ The ranking-weight feature is experimental. It may change in future
+ releases of zebra.
+
+
+
+
+
+
+ Re-sorting of hit list
+
+
+ Finally, the final hit list is re-ordered according to scores.
+
+
+
+
+
+
+
+
+
+
+
+
+
+ The rank-1 algorithm
+ does not use the static rank
+ information in the list keys, and will produce the same ordering
+ with or without static ranking enabled.
+
+
+
+
+
+
+
+ Dynamic ranking is not compatible
+ with estimated hit sizes, as all documents in
+ a hit set must be accessed to compute the correct placing in a
+ ranking sorted list. Therefore the use attribute setting
+ @attr 2=102 clashes with
+ @attr 9=integer.
+
+
+
+
+
+
+
+
+ Dynamically ranking CQL queries
+
+ Dynamic ranking can be enabled during server-side CQL
+ query expansion by adding @attr 2=102
+ chunks to the CQL config file. For example
+
+ relationModifier.relevant = 2=102
+
+ invokes dynamic ranking each time a CQL query of the form
+
+ Z> querytype cql
+ Z> f alvis.text =/relevant house
+
+ is issued. Dynamic ranking can also be automatically used on
+ specific CQL indexes by (for example) setting
+
+ index.alvis.text = 1=text 2=102
+
+ which then invokes dynamic ranking each time a CQL query of the form
+
+ Z> querytype cql
+ Z> f alvis.text = house
+
+ is issued.
+
+
+
+
+
+
+
+
+ Sorting
+
+ Zebra sorts efficiently using special sorting indexes
+ (type=s), so each sortable index must be known
+ at indexing time, specified in the configuration of record
+ indexing. For example, to enable sorting according to the BIB-1
+ Date/time-added-to-db field, one could add the line
+
+ xelm /*/@created Date/time-added-to-db:s
+
+ to any .abs record-indexing configuration file.
+ Similarly, one could add an indexing element of the form
+
+
+
+ ]]>
+ to any alvis-filter indexing stylesheet.
+
+
+ Sorting can be specified at search time using a query term
+ carrying the non-standard
+ BIB-1 attribute-type 7. This removes the
+ need to send a Z39.50 Sort Request
+ separately, and can dramatically improve latency when the client
+ and server are on separate networks.
+ The sorting part of the query is separate from the rest of the
+ query - the actual search specification - and must be combined
+ with it using OR.
+
+
+ A sorting subquery needs two attributes: an index (such as a
+ BIB-1 type-1 attribute) specifying which index to sort on, and a
+ type-7 attribute whose value is 1 for
+ ascending sorting, or 2 for descending. The
+ term associated with the sorting attribute is the priority of
+ the sort key, where 0 specifies the primary
+ sort key, 1 the secondary sort key, and so
+ on.
+
+ For example, a search for water, sort by title (ascending),
+ is expressed by the PQF query
+
+ @or @attr 1=1016 water @attr 7=1 @attr 1=4 0
+
+ whereas a search for water, sort by title ascending,
+ then date descending would be
+
+ @or @or @attr 1=1016 water @attr 7=1 @attr 1=4 0 @attr 7=2 @attr 1=30 1
+
+
+
+ Notice the fundamental differences between dynamic
+ ranking and sorting: there can be
+ only one ranking function defined and configured; but multiple
+ sorting indexes can be specified dynamically at search
+ time. Ranking does not need to use specific indexes, so
+ dynamic ranking can be enabled and disabled without
+ re-indexing; whereas, sorting indexes need to be
+ defined before indexing.
+
+
+
+
+
+
+
+
+ Extended Services: Remote Insert, Update and Delete
+
+
+
+ Extended services are only supported when accessing the Zebra
+ server using the Z39.50
+ protocol. The SRU protocol does
+ not support extended services.
+
+
+
+
+ The extended services are not enabled by default in Zebra, since
+ they modify the system. Zebra can be configured
+ to allow anybody to
+ search, and to allow updates only for a particular admin user,
+ in the main zebra configuration file zebra.cfg.
+ For user admin, you could use:
+
+ perm.anonymous: r
+ perm.admin: rw
+ passwd: passwordfile
+
+ And in the password file
+ passwordfile, you have to specify users and
+ encrypted passwords as colon separated strings.
+ Use a tool like htpasswd
+ to maintain the encrypted passwords.
+
+ admin:secret
+
+ It is essential to configure Zebra to store records internally,
+ and to support
+ modifications and deletion of records:
+
+ storeData: 1
+ storeKeys: 1
+
+ The general record type should be set to any record filter which
+ is able to parse XML records; you may use either of the two
+ declarations (but not both simultaneously!)
+
+ recordType: grs.xml
+ # recordType: alvis.filter_alvis_config.xml
+
+ To enable transaction safe shadow indexing,
+ which is extra important for this kind of operation, set
+
+ shadow: directoryname: size (e.g. 1000M)
+
+
+
+
+ It is not possible to carry information about record types or
+ similar to Zebra when using extended services, due to
+ limitations of the Z39.50
+ protocol. Therefore, indexing filters cannot be chosen on a
+ per-record basis. One and only one general XML indexing filter
+ must be defined.
+
+
+
+
+
+
+
+ Extended services in the Z39.50 protocol
+
+
+ The Z39.50 standard allows
+ servers to accept special binary extended services
+ protocol packages, which may be used to insert, update and delete
+ records into servers. These carry control and update
+ information to the servers, which are encoded in seven package fields:
+
+
+
+ Extended services Z39.50 Package Fields
+
+
+
+ Parameter
+ Value
+ Notes
+
+
+
+
+ type
+ 'update'
+ Must be set to trigger extended services
+
+
+ action
+ string
+
+ Extended service action type with
+ one of four possible values: recordInsert,
+ recordReplace,
+ recordDelete,
+ and specialUpdate
+
+
+
+ record
+ XML string
+ An XML formatted string containing the record
+
+
+ syntax
+ 'xml'
+ Only XML record syntax is supported
+
+
+ recordIdOpaque
+ string
+
+ Optional client-supplied, opaque record
+ identifier used under insert operations.
+
+
+
+ recordIdNumber
+ positive number
+ Zebra's internal system number, only for update
+ actions.
+
+
+
+ databaseName
+ database identifier
+
+ The name of the database to which the extended services should be
+ applied.
+
+
+
+
+
+
+
+
+ The action parameter can be any of
+ recordInsert (will fail if the record already exists),
+ recordReplace (will fail if the record does not exist),
+ recordDelete (will fail if the record does not
+ exist), and
+ specialUpdate (will insert or update the record
+ as needed).
+
+
+
+ During a recordInsert action, the
+ usual rules for internal record ID generation apply, unless an
+ optional recordIdNumber Zebra internal ID or a
+ recordIdOpaque string identifier is assigned.
+ The default ID generation is
+ configured using the recordId: directive in
+ zebra.cfg.
+
+
+
+ The actions recordReplace or
+ recordDelete need specification of the additional
+ recordIdNumber parameter, which must be an
+ existing Zebra internal system ID number, or the optional
+ recordIdOpaque string parameter.
+
+
+
+ When retrieving existing
+ records indexed with GRS indexing filters, the Zebra internal
+ ID number is returned in the field
+ /*/id:idzebra/localnumber in the namespace
+ xmlns:id="http://www.indexdata.dk/zebra/",
+ where it can be picked up for later record updates or deletes.
+
+
+ Records indexed with the alvis filter
+ have similar means to discover the internal Zebra ID.
+
+
+
+ The recordIdOpaque string parameter
+ is a client-supplied, opaque record
+ identifier, which may be used under
+ insert, update and delete operations. The
+ client software is responsible for assigning these to
+ records. This identifier will
+ replace zebra's own automagic identifier generation with a unique
+ mapping from recordIdOpaque to the
+ Zebra internal recordIdNumber.
+ The opaque recordIdOpaque string
+ identifiers
+ are not visible in retrieval records, nor are they
+ searchable, so the value of this parameter is
+ questionable. It serves mostly as a convenient mapping from
+ application domain string identifiers to Zebra internal IDs.
+
+
+
+
+
+
+ Extended services from yaz-client
+
+
+ We can now start a yaz-client admin session and create a database:
+
+ adm-create
+ ]]>
+
+ Now that the Default database has been created,
+ we can insert an XML file (esdd0006.grs
+ from example/gils/records) and index it:
+
+ update insert id1234 esdd0006.grs
+ ]]>
+
+ The 3rd parameter - id1234 here -
+ is the recordIdOpaque package field.
+
+
+ Actually, we should have a way to specify "no opaque record id" for
+ yaz-client's update command. We'll fix that.
+
+
+ The newly inserted record can be searched as usual:
+
+ f utah
+ Sent searchRequest.
+ Received SearchResponse.
+ Search was a success.
+ Number of hits: 1, setno 1
+ SearchResult-1: term=utah cnt=1
+ records returned: 0
+ Elapsed: 0.014179
+ ]]>
+
+
+
+ Let's delete the beast, using the same
+ recordIdOpaque string parameter:
+
+ update delete id1234
+ No last record (update ignored)
+ Z> update delete 1 esdd0006.grs
+ Got extended services response
+ Status: done
+ Elapsed: 0.072441
+ Z> f utah
+ Sent searchRequest.
+ Received SearchResponse.
+ Search was a success.
+ Number of hits: 0, setno 2
+ SearchResult-1: term=utah cnt=0
+ records returned: 0
+ Elapsed: 0.013610
+ ]]>
+
+
+
+ If shadow register is enabled in your
+ zebra.cfg,
+ you must run the adm-commit command
+
+ adm-commit
+ ]]>
+
+ after each update session in order to write your changes from the
+ shadow to the live register space.
+
+
+
+
+
+ Extended services from yaz-php
+
+
+ Extended services are also available from the YAZ PHP client layer. An
+ example of a YAZ-PHP extended service transaction is given here:
+
+ $yaz = yaz_connect("localhost:9999");
+ $record = '<record>A fine specimen of a record</record>';
+
+ $options = array('action' => 'recordInsert',
+ 'syntax' => 'xml',
+ 'record' => $record,
+ 'databaseName' => 'mydatabase'
+ );
+
+ yaz_es($yaz, 'update', $options);
+ yaz_es($yaz, 'commit', array());
+ yaz_wait();
+
+ if ($error = yaz_error($yaz))
+ echo "$error";
+ ]]>
+
+
+
+
-
-Running the Maintenance Interface (zebraidx)
-
-
-The following is a complete reference to the command line interface to
-the zebraidx application.
-
-
-
-Syntax
-
-
-$ zebraidx [options] command [directory] ...
-
-
-Options
-
-
-
--t type
-
-
-Update all files as type. Currently, the
-types supported are text and grs.subtype. If no
-subtype is provided for the GRS (General Record Structure) type,
-the canonical input format is assumed (see section ). Generally, it
-is probably advisable to specify the record types in the
-zebra.cfg file
-(see section ), to avoid
-confusion at subsequent updates.
-
-
-
-
--c config-file
-
-
-Read the configuration file
-config-file instead of zebra.cfg.
-
-
-
-
--g group
-
-
-Update the files according to the group
-settings for group (see section
-).
-
-
-
-
--d database
-
-
-The records located should be associated
-with the database name database for access through the Z39.50
-server.
-
-
-
-
--m mbytes
-
-
-Use mbytes megabytes of memory before flushing
-keys to background storage. This setting affects performance when
-updating large databases.
-
-
-
-
--n
-
-
-Disable the use of shadow registers for this operation
-(see section ).
-
-
-
-
--s
-
-
-Show analysis of the indexing process. The maintenance
-program works in a read-only mode and doesn't change the state
-of the index. This option is very useful when you wish to test a
-new profile.
-
-
-
-
--V
-
-
-Show Zebra version.
-
-
-
-
--v level
-
-
-Set the log level to level. level
-should be one of none, debug, and all.
-
-
-
-
-
-
-
-Commands
-
-
-
-Update directory
-
-
-Update the register with the files
-contained in directory. If no directory is provided, a list of
-files is read from stdin.
-See section .
-
-
-
-
-Delete directory
-
-
-Remove the records corresponding to
-the files found under directory from the register.
-
-
-
-
-Commit
-
-
-Write the changes resulting from the last update
-commands to the register. This command is only available if the use of
-shadow register files is enabled (see section
-).
-
-
-
-
-
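Putting the options and commands together, a typical update cycle with
shadow registers enabled might look like the following sketch (the
record type and directory follow the quick-start example earlier in
this manual):

```
$ zebraidx -t grs.sgml update records
$ zebraidx commit
```

Without shadow registers the commit step is unavailable and
unnecessary, since zebraidx then writes directly to the register files.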
-
-
-
-
-The Z39.50 Server
-
-
-Running the Z39.50 Server (zebrasrv)
-
-
-Syntax
-
-
-zebrasrv [options] [listener-address ...]
-
-
-
-
-
-Options
-
-
-
--a APDU file
-
-
-Specify a file for dumping PDUs (for diagnostic purposes).
-The special name "-" sends output to stderr.
-
-
-
-
--c config-file
-
-
-Read configuration information from config-file. The default configuration is ./zebra.cfg.
-
-
-
-
--S
-
-
-Don't fork on connection requests. This can be useful for
-symbolic-level debugging. The server can only accept a single
-connection in this mode.
-
-
-
-
--s
-
-
-Use the SR protocol.
-
-
-
-
--z
-
-
-Use the Z39.50 protocol (default). These two options complement
-each other. You can use both multiple times on the same command
-line, between listener-specifications (see below). This way, you
-can set up the server to listen for connections in both protocols
-concurrently, on different local ports.
-
-
-
-
--l logfile
-
-
-Specify an output file for the diagnostic
-messages. The default is to write this information to stderr.
-
-
-
-
--v log-level
-
-
-The log level. Use a comma-separated list of members of the set
-{fatal,debug,warn,log,all,none}.
-
-
-
-
--u username
-
-
-Set user ID. Sets the real UID of the server process to that of the
-given username. It's useful if you aren't comfortable with having the
-server run as root, but you need to start it as such to bind a
-privileged port.
-
-
-
-
--w working-directory
-
-
-Change working directory.
-
-
-
-
--i
-
-
-Run under the Internet superserver, inetd. Make
-sure you use the logfile option -l in conjunction with this
-mode and specify the -l option before any other options.
-
-
-
-
--t timeout
-
-
-Set the idle session timeout (default 60 minutes).
-
-
-
-
--k kilobytes
-
-
-Set the (approximate) maximum size of
-present response messages. Default is 1024 Kb (1 Mb).
-
-
-
-
-
-
-
-A listener-address consists of a transport mode followed by a
-colon (:) followed by a listener address. The transport mode is
-either osi or tcp.
-
-
-
-For TCP, an address has the form
-
-
-
-
-
-hostname | IP-number [: portnumber]
-
-
-
-
-
-The port number defaults to 210 (standard Z39.50 port).
-
-
-
-For OSI (only available if the server is compiled with XTI/mOSI
-support enabled), the address form is
-
-
-
-
-
-[t-selector /] hostname | IP-number [: portnumber]
-
-
-
-
-
-The transport selector is given as a string of hex digits (with an even
-number of digits). The default port number is 102 (RFC1006 port).
-
-
-
-Examples
-
-
-
-
-
-tcp:dranet.dra.com
-
-osi:0402/dbserver.osiworld.com:3000
-
-
-
-
-
-In both cases, the special hostname "@" is mapped to
-the address INADDR_ANY, which causes the server to listen on any local
-interface. To start the server listening on the registered ports for
-Z39.50 and SR over OSI/RFC1006, and to drop root privileges once the
-ports are bound, execute the server like this (from a root shell):
-
-
-
-
-
-zebrasrv -u daemon tcp:@ -s osi:@
-
-
-
-
-
-You can replace daemon with another user, eg. your own account, or
-a dedicated IR server account.
-
-
-
-The default behavior for zebrasrv is to establish a single TCP/IP
-listener, for the Z39.50 protocol, on port 9999.
-
-
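Since the default is a single Z39.50 listener on port 9999, starting
the server with no listener address is, in effect, equivalent to this
sketch:

```
$ zebrasrv tcp:@:9999
```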
-
-
-
-Z39.50 Protocol Support and Behavior
-
-
-Initialization
-
-
-During initialization, the server will negotiate to version 3 of the
-Z39.50 protocol, and the option bits for Search, Present, Scan,
-NamedResultSets, and concurrentOperations will be set, if requested by
-the client. The maximum PDU size is negotiated down to a maximum of
-1Mb by default.
-
-
-
-
-
-Search
-
-
-The supported query types are 1 and 101. All operators are currently
-supported with the restriction that only proximity units of type "word" are
-supported for the proximity operator.
-Queries can be arbitrarily complex.
-Named result sets are supported, and result sets can be used as operands
-without limitations.
-Searches may span multiple databases.
-
-
-
-The server has full support for piggy-backed present requests (see
-also the following section).
-
-
-
-Use attributes are interpreted according to the attribute sets which
-have been loaded in the zebra.cfg file, and are matched against
-specific fields as specified in the .abs file which describes the
-profile of the records which have been loaded. If no Use
-attribute is provided, a default of Bib-1 Any is assumed.
-
-
-
-If a Structure attribute of Phrase is used in conjunction with a
-Completeness attribute of Complete (Sub)field, the term is
-matched against the contents of the phrase (long word) register, if one
-exists for the given Use attribute.
-A phrase register is created for those fields in the .abs
-file that contains a p-specifier.
-
-
-
-If Structure=Phrase is used in conjunction with
-Incomplete Field - the default value for Completeness, the
-search is directed against the normal word registers, but if the term
-contains multiple words, the term will only match if all of the words
-are found immediately adjacent, and in the given order.
-The word search is performed on those fields that are indexed as
-type w in the .abs file.
-
-
-
-If the Structure attribute is Word List,
-Free-form Text, or Document Text, the term is treated as a
-natural-language, relevance-ranked query.
-This search type uses the word register, i.e. those fields
-that are indexed as type w in the .abs file.
-
-
-
-If the Structure attribute is Numeric String the
-term is treated as an integer. The search is performed on those
-fields that are indexed as type n in the .abs file.
-
-
-
-If the Structure attribute is URx the
-term is treated as a URX (URL) entity. The search is performed on those
-fields that are indexed as type u in the .abs file.
-
-
-
-If the Structure attribute is Local Number the
-term is treated as native Zebra Record Identifier.
-
-
-
-If the Relation attribute is Equals (default), the term is
-matched in a normal fashion (modulo truncation and processing of
-individual words, if required). If Relation is Less Than,
-Less Than or Equal, Greater than, or Greater than or
-Equal, the term is assumed to be numerical, and a standard regular
-expression is constructed to match the given expression. If
-Relation is Relevance, the standard natural-language query
-processor is invoked.
-
-
-
-For the Truncation attribute, No Truncation is the default.
-Left Truncation is not supported. Process # is supported, as
-is Regxp-1. Regxp-2 enables the fault-tolerant (fuzzy)
-search. As a default, a single error (deletion, insertion,
-replacement) is accepted when terms are matched against the register
-contents.
-
-
-
-Regular expressions
-
-
-Each term in a query is interpreted as a regular expression if
-the truncation value is either Regxp-1 (102) or Regxp-2 (103).
-Both query types follow the same syntax with the operands:
-
-
-
-x
-
-
-Matches the character x.
-
-
-
-
-.
-
-
-Matches any character.
-
-
-
-
-[..]
-
-
-Matches the set of characters specified;
-such as [abc] or [a-c].
-
-
-
-
-and the operators:
-
-
-
-x*
-
-
-Matches x zero or more times. Priority: high.
-
-
-
-
-x+
-
-
-Matches x one or more times. Priority: high.
-
-
-
-
-x?
-
-
-Matches x zero or one time. Priority: high.
-
-
-
-
-xy
-
-
-Matches x, then y. Priority: medium.
-
-
-
-
-x|y
-
-
-Matches either x or y. Priority: low.
-
-
-
-
-The order of evaluation may be changed by using parentheses.
-
-
-
-If the first character of the Regxp-2 query is a plus character
-(+) it marks the beginning of a section with non-standard
-specifiers. The next plus character marks the end of the section.
-Currently Zebra only supports one specifier, the error tolerance,
-which consists of one digit.
-
-
-
-Since the plus operator is normally a suffix operator, the addition to
-the query syntax doesn't violate the syntax for standard regular
-expressions.
-
-
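As an illustrative sketch of the specifier section described above, the
following Regxp-2 term requests an error tolerance of two when matching
against the title register (attribute values as in the query examples
in this chapter; verify the exact tolerance syntax against your Zebra
version):

```
@attr 1=4 @attr 5=103 "+2+informat.*"
```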
-
-
-
-Query examples
-
-
-Phrase search for information retrieval in the title-register:
-
-
- @attr 1=4 "information retrieval"
-
-
-
-
-
-Ranked search for the same thing:
-
-
- @attr 1=4 @attr 2=102 "Information retrieval"
-
-
-
-
-
-Phrase search with a regular expression:
-
-
- @attr 1=4 @attr 5=102 "informat.* retrieval"
-
-
-
-
-
-Ranked search with a regular expression:
-
-
- @attr 1=4 @attr 5=102 @attr 2=102 "informat.* retrieval"
-
-
-
-
-
-In the GILS schema (gils.abs), the west-bounding-coordinate is
-indexed as type n, and is therefore searched by specifying
-structure=Numeric String.
-To match all those records with west-bounding-coordinate greater
-than -114 we use the following query:
-
-
- @attr 4=109 @attr 2=5 @attr gils 1=2038 -114
-
-
-
-
-
-
-
-
-
-Present
-
-
-The present facility is supported in a standard fashion. The requested
-record syntax is matched against the ones supported by the profile of
-each record retrieved. If no record syntax is given, SUTRS is the
-default. The requested element set name, again, is matched against any
-provided by the relevant record profiles.
-
-
-
-
-
-Scan
-
-
-The attribute combinations provided with the termListAndStartPoint are
-processed in the same way as operands in a query (see above).
-Currently, only the term and the globalOccurrences are returned with
-the termInfo structure.
-
-
-
-
-
-Sort
-
-
-Z39.50 specifies three different types of sort criteria.
-Of these Zebra supports the attribute specification type in which
-case the use attribute specifies the "Sort register".
-Sort registers are created for those fields that are of type "sort" in
-the default.idx file.
-The corresponding character mapping file in default.idx specifies the
-ordinal of each character used in the actual sort.
-
-
-
-Z39.50 allows the client to specify sorting on one or more input
-result sets and one output result set.
-Zebra supports sorting on one result set only which may or may not
-be the same as the output result set.
-
-
-
-
-
-Close
-
-
-If a Close PDU is received, the server will respond with a Close PDU
-with reason=FINISHED, no matter which protocol version was negotiated
-during initialization. If the protocol version is 3 or more, the
-server will generate a Close PDU under certain circumstances,
-including a session timeout (60 minutes by default), and certain kinds of
-protocol errors. Once a Close PDU has been sent, the protocol
-association is considered broken, and the transport connection will be
-closed immediately upon receipt of further data, or following a short
-timeout.
-
-
-
-
-
-
-
-
-
-The Record Model
-
-
-The Zebra system is designed to support a wide range of data management
-applications. The system can be configured to handle virtually any
-kind of structured data. Each record in the system is associated with
-a record schema which lends context to the data elements of the
-record. Any number of record schema can coexist in the system.
-Although it may be wise to use only a single schema within
-one database, the system poses no such restrictions.
-
-
-
-The record model described in this chapter applies to the fundamental,
-structured
-record type grs as introduced in
-section .
-
-
-
-Records pass through three different states during processing in the
-system.
-
-
-
-
-
-
-
-
-When records are accessed by the system, they are represented
-in their local, or native format. This might be SGML or HTML files,
-News or Mail archives, or MARC records. If the system doesn't already
-know how to read the type of data you need to store, you can set up an
-input filter by preparing conversion rules based on regular
-expressions and possibly augmented by a flexible scripting language (Tcl). The input filter
-produces as output an internal representation:
-
-
-
-
-
-
-When records are processed by the system, they are represented
-in a tree-structure, constructed by tagged data elements hanging off a
-root node. The tagged elements may contain data or yet more tagged
-elements in a recursive structure. The system performs various
-actions on this tree structure (indexing, element selection, schema
-mapping, etc.),
-
-
-
-
-
-
-Before transmitting records to the client, they are first
-converted from the internal structure to a form suitable for exchange
-over the network - according to the Z39.50 standard.
-
-
-
-
-
-
-
-
-Local Representation
-
-
-As mentioned earlier, Zebra places few restrictions on the type of
-data that you can index and manage. Generally, whatever the form of
-the data, it is parsed by an input filter specific to that format, and
-turned into an internal structure that Zebra knows how to handle. This
-process takes place whenever the record is accessed - for indexing and
-retrieval.
-
-
-
-The RecordType parameter in the zebra.cfg file, or the -t
-option to the indexer tells Zebra how to process input records. Two
-basic types of processing are available - raw text and structured
-data. Raw text is just that, and it is selected by providing the
-argument text to Zebra. Structured records are all handled
-internally using the basic mechanisms described in the subsequent
-sections. Zebra can read structured records in many different formats.
-How this is done is governed by additional parameters after the
-"grs" keyword, separated by "." characters.
-
-
-
-Three basic subtypes to the grs type are currently available:
-
-
-
-
-
-
-grs.sgml
-
-
-This is the canonical input format —
-described below. It is a simple SGML-like syntax.
-
-
-
-
-grs.regx.filter
-
-
-This enables a user-supplied input
-filter. The mechanisms of these filters are described below.
-
-
-
-
-grs.marc.abstract syntax
-
-
-This allows Zebra to read
-records in the ISO2709 (MARC) encoding standard. In this case, the
-last parameter abstract syntax names the .abs file (see below)
-which describes the specific MARC structure of the input record as
-well as the indexing rules.
-
-
-
-
-
-
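In zebra.cfg, these subtypes appear as the value of the RecordType
setting, for example (the profile name usmarc is illustrative and must
correspond to an existing .abs file):

```
# canonical SGML-like input
recordType: grs.sgml

# MARC input, structure and indexing rules taken from usmarc.abs
# recordType: grs.marc.usmarc
```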
-
-Canonical Input Format
-
-
-Although input data can take any form, it is sometimes useful to
-describe the record processing capabilities of the system in terms of
-a single, canonical input format that gives access to the full
-spectrum of structure and flexibility in the system. In Zebra, this
-canonical format is an "SGML-like" syntax.
-
-
-
-To use the canonical format, specify grs.sgml as the record
-type.
-
-
-
-Consider a record describing an information resource (such a record is
-sometimes known as a locator record). It might contain a field
-describing the distributor of the information resource, which might in
-turn be partitioned into various fields providing details about the
-distributor, like this:
-
-
-
-
-
-<Distributor>
- <Name> USGS/WRD </Name>
- <Organization> USGS/WRD </Organization>
- <Street-Address>
- U.S. GEOLOGICAL SURVEY, 505 MARQUETTE, NW
- </Street-Address>
- <City> ALBUQUERQUE </City>
- <State> NM </State>
- <Zip-Code> 87102 </Zip-Code>
- <Country> USA </Country>
- <Telephone> (505) 766-5560 </Telephone>
-</Distributor>
-
-
-
-
-
-NOTE: The indentation above is used to illustrate how Zebra
-interprets the markup. The indentation, in itself, has no
-significance to the parser for the canonical input format, which
-discards superfluous whitespace.
-
-
-
-The keywords surrounded by <...> are tags, while the
-sections of text in between are the data elements. A data element
-is characterized by its location in the tree that is made up by the
-nested elements. Each element is terminated by a closing tag -
-beginning with </, and containing the same symbolic tag-name as
-the corresponding opening tag. The general closing tag - </> -
-terminates the element started by the last opening tag. The
-structuring of elements is significant. The element Telephone,
-for instance, may be indexed and presented to the client differently,
-depending on whether it appears inside the Distributor element,
-or some other, structured data element such as a Supplier element.
-
-
-
-Record Root
-
-
-The first tag in a record describes the root node of the tree that
-makes up the total record. In the canonical input format, the root tag
-should contain the name of the schema that lends context to the
-elements of the record (see section
-).
-The following is a GILS record that
-contains only a single element (strictly speaking, that makes it an
-illegal GILS record, since the GILS profile includes several mandatory
-elements - Zebra does not validate the contents of a record against
-the Z39.50 profile, however - it merely attempts to match up elements
-of a local representation with the given schema):
-
-
-
-
-
-<gils>
- <title>Zen and the Art of Motorcycle Maintenance</title>
-</gils>
-
-
-
-
-
-
-
-Variants
-
-
-Zebra allows you to provide individual data elements in a number of
-variant forms. Examples of variant forms are textual data
-elements which might appear in different languages, and images which
-may appear in different formats or layouts. The variant system in
-Zebra is
-essentially a representation of the variant mechanism of
-Z39.50-1995.
-
-
-
-The following is an example of a title element which occurs in two
-different languages.
-
-
-
-
-
-<title>
- <var lang lang "eng">
- Zen and the Art of Motorcycle Maintenance</>
- <var lang lang "dan">
- Zen og Kunsten at Vedligeholde en Motorcykel</>
-</title>
-
-
-
-
-
-The syntax of the variant element is <var class
-type value>. The available values for the class and
-type fields are given by the variant set that is associated with the
-current schema (see section ).
-
-
-
-Variant elements are terminated by the general end-tag </>, by
-the variant end-tag </var>, by the appearance of another variant
-tag with the same class and value settings, or by the
-appearance of another, normal tag. In other words, the end-tags for
-the variants used in the example above could have been omitted.
-
-
-
-Variant elements can be nested. The element
-
-
-
-
-
-<title>
- <var lang lang "eng"><var body iana "text/plain">
- Zen and the Art of Motorcycle Maintenance
-</title>
-
-
-
-
-
-associates two variant components with the variant list for the title
-element.
-
-
-
-Given the nesting rules described above, we could write
-
-
-
-
-
-<title>
- <var body iana "text/plain">
- <var lang lang "eng">
- Zen and the Art of Motorcycle Maintenance
- <var lang lang "dan">
- Zen og Kunsten at Vedligeholde en Motorcykel
-</title>
-
-
-
-
-
-The title element above comes in two variants. Both have the IANA body
-type "text/plain", but one is in English, and the other in
-Danish. The client, using the element selection mechanism of Z39.50,
-can retrieve information about the available variant forms of data
-elements, or it can select specific variants based on the requirements
-of the end-user.
-
-
-
-
-
-
-
-Input Filters
-
-
-In order to handle general input formats, Zebra allows the
-operator to define filters which read individual records in their native format
-and produce an internal representation that the system can
-work with.
-
-
-
-Input filters are ASCII files, generally with the suffix .flt.
-The system looks for the files in the directories given in the
-profilePath setting in the zebra.cfg files. The record type
-for the filter is grs.regx.filter-filename
-(fundamental type grs, file read type regx, argument
-filter-filename).
-
-
-
-Generally, an input filter consists of a sequence of rules, where each
-rule consists of a sequence of expressions, followed by an action. The
-expressions are evaluated against the contents of the input record,
-and the actions normally contribute to the generation of an internal
-representation of the record.
-
-
-
-An expression can be either of the following:
-
-
-
-
-
-
-INIT
-
-
-The action associated with this expression is evaluated
-exactly once in the lifetime of the application, before any records
-are read. It can be used in conjunction with an action that
-initializes tables or other resources that are used in the processing
-of input records.
-
-
-
-
-BEGIN
-
-
-Matches the beginning of the record. It can be used to
-initialize variables, etc. Typically, the BEGIN rule is also used
-to establish the root node of the record.
-
-
-
-
-END
-
-
-Matches the end of the record - when all of the contents
-of the record has been processed.
-
-
-
-
-/pattern/
-
-
-Matches a string of characters from the input
-record.
-
-
-
-
-BODY
-
-
-This keyword may only be used between two patterns. It
-matches everything between (not including) those patterns.
-
-
-
-
-FINISH
-
-
-The action associated with this expression is evaluated
-once, before the application terminates. It can be used to release
-system resources - typically ones allocated in the INIT step.
-
-
-
-
-
-
-
-An action is surrounded by curly braces ({...}), and consists of a
-sequence of statements. Statements may be separated by newlines or
-semicolons (;). Within actions, the strings that matched the
-expressions immediately preceding the action can be referred to as
-$0, $1, $2, etc.
-
-
-
-The available statements are:
-
-
-
-
-
-
-begin type [parameter ... ]
-
-
-Begin a new
-data element. The type is one of the following:
-
-
-
-record
-
-
-Begin a new record. The following parameter should be the
-name of the schema that describes the structure of the record, eg.
-gils or wais (see below). The begin record call should
-precede
-any other use of the begin statement.
-
-
-
-
-element
-
-
-Begin a new tagged element. The parameter is the
-name of the tag. If the tag is not matched anywhere in the tagsets
-referenced by the current schema, it is treated as a local string
-tag.
-
-
-
-
-variant
-
-
-Begin a new node in a variant tree. The parameters are
-class type value.
-
-
-
-
-
-
-
-
-data
-
-
-Create a data element. The concatenated arguments make
-up the value of the data element. The option -text signals that
-the layout (whitespace) of the data should be retained for
-transmission. The option -element tag wraps the data up in
-the tag. The use of the -element option is equivalent to
-preceding the command with a begin element command, and following
-it with the end command.
-
-
-
-
-end [type]
-
-
-Close a tagged element. If no parameter is given,
-the last element on the stack is terminated. The first parameter, if
-any, is a type name, similar to the begin statement. For the
-element type, a tag name can be provided to terminate a specific tag.
-
-
-
-
-
-
-
-The following input filter reads a Usenet news file, producing a
-record in the WAIS schema. Note that the body of a news posting is
-separated from the list of headers by a blank line (or rather a
-sequence of two newline characters).
-
-
-
-
-
-BEGIN { begin record wais }
-
-/^From:/ BODY /$/ { data -element name $1 }
-/^Subject:/ BODY /$/ { data -element title $1 }
-/^Date:/ BODY /$/ { data -element lastModified $1 }
-/\n\n/ BODY END {
- begin element bodyOfDisplay
- begin variant body iana "text/plain"
- data -text $1
- end record
- }
-
-
-
-
-
-If Zebra is compiled with support for Tcl (Tool Command Language)
-enabled, the statements described above are supplemented with a complete
-scripting environment, including control structures (conditional
-expressions and loop constructs), and powerful string manipulation
-mechanisms for modifying the elements of a record. Tcl is a popular
-scripting environment, with several tutorials available both online
-and in hardcopy.
-
-
-
-NOTE: Tcl support is not currently available, but will be
-included with one of the next alpha or beta releases.
-
-
-
-NOTE: Variant support is not currently available in the input
-filter, but will be included with one of the next alpha or beta
-releases.
-
-
-
-
-
-
-
-Internal Representation
-
-
-When records are manipulated by the system, they're represented in a
-tree-structure, with data elements at the leaf nodes, and tags or
-variant components at the non-leaf nodes. The root-node identifies the
-schema that lends context to the tagging and structuring of the
-record. Imagine a simple record, consisting of a 'title' element and
-an 'author' element:
-
-
-
-
-
- TITLE "Zen and the Art of Motorcycle Maintenance"
-ROOT
- AUTHOR "Robert Pirsig"
-
-
-
-
-
-A slightly more complex record would have the author element consist
-of two elements, a surname and a first name:
-
-
-
-
-
- TITLE "Zen and the Art of Motorcycle Maintenance"
-ROOT
- FIRST-NAME "Robert"
- AUTHOR
- SURNAME "Pirsig"
-
-
-
-
-
-The root of the record will refer to the record schema that describes
-the structuring of this particular record. The schema defines the
-element tags (TITLE, FIRST-NAME, etc.) that may occur in the record, as
-well as the structuring (SURNAME should appear below AUTHOR, etc.). In
-addition, the schema establishes element set names that are used by
-the client to request a subset of the elements of a given record. The
-schema may also establish rules for converting the record to a
-different schema, by stating, for each element, a mapping to a
-different tag path.
-
-
-
-Tagged Elements
-
-
-A data element is characterized by its tag, and its position in the
-structure of the record. For instance, while the tag "telephone
-number" may be used different places in a record, we may need to
-distinguish between these occurrences, both for searching and
-presentation purposes. For instance, while the phone numbers for the
-"customer" and the "service provider" are both
-representatives for the same type of resource (a telephone number), it
-is essential that they be kept separate. The record schema provides
-the structure of the record, and names each data element (defined by
-the sequence of tags - the tag path - by which the element can be
-reached from the root of the record).
-
-
-
-
-
-Variants
-
-
-The children of a tag node may be either more tag nodes, a data node
-(possibly accompanied by tag nodes),
-or a tree of variant nodes. The children of variant nodes are either
-more variant nodes or a data node (possibly accompanied by more
-variant nodes). Each leaf node, which is normally a
-data node, corresponds to a variant form of the tagged element
-identified by the tag which parents the variant tree. The following
-title element occurs in two different languages:
-
-
-
-
-
- VARIANT LANG=ENG "War and Peace"
-TITLE
- VARIANT LANG=DAN "Krig og Fred"
-
-
-
-
-
-Which of the two elements is transmitted to the client by the server
-depends on the specifications provided by the client, if any.
-
-
-
-In practice, each variant node is associated with a triple of class,
-type, value, corresponding to the variant mechanism of Z39.50.
-
-
-
-
-
-Data Elements
-
-
-Data nodes have no children (they are always leaf nodes in the record
-tree).
-
-
-
-NOTE: Documentation needs extension here about types of nodes - numerical,
-textual, etc., plus the various types of inclusion notes.
-
-
-
-
-
-
-
-Configuring Your Data Model
-
-
-The following sections describe the configuration files that govern
-the internal management of data records. The system searches for the files
-in the directories specified by the profilePath setting in the
-zebra.cfg file.
-
-
-
-The Abstract Syntax
-
-
-The abstract syntax definition (also known as an Abstract Record
-Structure, or ARS) is the focal point of the
-record schema description. For a given schema, the ABS file may state any
-or all of the following:
-
-
-
-
-
-
-
-
-The object identifier of the Z39.50 schema associated
-with the ARS, so that it can be referred to by the client.
-
-
-
-
-
-
-The attribute set (which can possibly be a compound of multiple
-sets) which applies in the profile. This is used when indexing and
-searching the records belonging to the given profile.
-
-
-
-
-
-
-The Tag set (again, this can consist of several different sets).
-This is used when reading the records from a file, to recognize the
-different tags, and when transmitting the record to the client -
-mapping the tags to their numerical representation, if they are
-known.
-
-
-
-
-
-
-The variant set which is used in the profile. This provides a
-vocabulary for specifying the forms of data that appear inside
-the records.
-
-
-
-
-
-
-Element set names, which are a shorthand way for the client to
-ask for a subset of the data elements contained in a record. Element
-set names, in the retrieval module, are mapped to element
-specifications, which contain information equivalent to the
-Espec-1 syntax of Z39.50.
-
-
-
-
-
-
-Map tables, which may specify mappings to other database
-profiles, if desired.
-
-
-
-
-
-
-Possibly, a set of rules describing the mapping of elements to a
-MARC representation.
-
-
-
-
-
-
-A list of element descriptions (this is the actual ARS of the
-schema, in Z39.50 terms), which lists the ways in which the various
-tags can be used and organized hierarchically.
-
-
-
-
-
-
-
-
-Several of the entries above simply refer to other files, which
-describe the given objects.
-
-
-
-
-
-The Configuration Files
-
-
-This section describes the syntax and use of the various tables which
-are used by the retrieval module.
-
-
-
-The number of different file types may appear daunting at first, but
-each type corresponds fairly clearly to a single aspect of the Z39.50
-retrieval facilities. Further, the average database administrator,
-who is simply reusing an existing profile for which tables already
-exist, shouldn't have to worry too much about the contents of these tables.
-
-
-
-Generally, the files are simple ASCII files, which can be maintained
-using any text editor. Blank lines, and lines beginning with a (#), are
-ignored. Any characters on a line following a (#) are also ignored.
-All other lines contain directives, which provide some setting or value
-to the system. Generally, settings are characterized by a single
-keyword, identifying the setting, followed by a number of parameters.
-Some settings are repeatable (r), while others may occur only once in a
-file. Some settings are optional (o), while others are
-mandatory (m).
-
-
-
-
-
-The Abstract Syntax (.abs) Files
-
-
-The name of this file type is slightly misleading in Z39.50 terms,
-since, apart from the actual abstract syntax of the profile, it also
-includes most of the other definitions that go into a database
-profile.
-
-
-
-When a record in the canonical, SGML-like format is read from a file
-or from the database, the first tag of the file should reference the
-profile that governs the layout of the record. If the first tag of the
-record is, say, <gils>, the system will look for the profile
-definition in the file gils.abs. Profile definitions are cached,
-so they only have to be read once during the lifespan of the current
-process.
-
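-A minimal record in this canonical format might look like the
-following sketch (the title and abstract tags match the
-tagset-G definitions shown later in this chapter, but the values are
-invented for illustration):
-
-<gils>
-<title>Sample Record Title</title>
-<abstract>A short, invented abstract.</abstract>
-</gils>
-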
-
-
-When writing your own input filters, the record-begin command
-introduces the profile, and should always be called first thing when
-introducing a new record.
-
-
-
-The file may contain the following directives:
-
-
-
-
-
-
-name symbolic-name
-
-
-(m) This provides a shorthand name or
-description for the profile. Mostly useful for diagnostic purposes.
-
-
-
-
-reference OID-name
-
-
-(m) The reference name of the OID for
-the profile. The reference names can be found in the util
-module of YAZ.
-
-
-
-
-attset filename
-
-
-(m) The attribute set that is used for
-indexing and searching records belonging to this profile.
-
-
-
-
-tagset filename
-
-
-(o) The tag set (if any) that describes
-the fields of the records.
-
-
-
-
-varset filename
-
-
-(o) The variant set used in the profile.
-
-
-
-
-maptab filename
-
-
-(o,r) This points to a
-conversion table that might be used if the client asks for the record
-in a different schema from the native one.
-
-
-
-marc filename
-
-
-(o) Points to a file containing parameters
-for representing the record contents in the ISO2709 syntax. Read the
-description of the MARC representation facility below.
-
-
-
-esetname name filename
-
-
-(o,r) Associates the
-given element set name with an element selection file. If an (@) is
-given in place of the filename, this corresponds to a null mapping for
-the given element set name.
-
-
-
-any tags
-
-
-(o) This directive specifies a list of
-attributes which should be appended to the attribute list given for each
-element. The effect is to make every single element in the abstract
-syntax searchable by way of the given attributes. This directive
-provides an efficient way of supporting free-text searching across all
-elements. However, it does increase the size of the index
-significantly. The attributes can be qualified with a structure, as in
-the elm directive below.
-
-
-
-elm path name attributes
-
-
-(o,r) Adds an element
-to the abstract record syntax of the schema. The path follows the
-syntax which is suggested by the Z39.50 document - that is, a sequence
-of tags separated by slashes (/). Each tag is given as a
-comma-separated pair of tag type and value, surrounded by parentheses.
-The name is the name of the element, and the attributes
-specifies which attributes to use when indexing the element in a
-comma-separated list. A ! in
-place of the attribute name is equivalent to specifying an attribute
-name identical to the element name. A - in place of the attribute name
-specifies that no indexing is to take place for the given element. The
-attributes can be qualified with field types to specify which
-character set should govern the indexing procedure for that field. The
-same data element may be indexed into several different fields, using
-different character set definitions. See the section
-.
-The default field type is "w" for
-word.
-
-
-
-
-
-
-NOTE: The mechanism for controlling indexing is not adequate for
-complex databases, and will probably be moved into a separate
-configuration table eventually.
-
-
-
-The following is an excerpt from the abstract syntax file for the GILS
-profile.
-
-
-
-
-
-name gils
-reference GILS-schema
-attset gils.att
-tagset gils.tag
-varset var1.var
-
-maptab gils-usmarc.map
-
-# Element set names
-
-esetname VARIANT gils-variant.est # for WAIS-compliance
-esetname B gils-b.est
-esetname G gils-g.est
-esetname F @
-
-elm (1,10) rank -
-elm (1,12) url -
-elm (1,14) localControlNumber Local-number
-elm (1,16) dateOfLastModification Date/time-last-modified
-elm (2,1) title w:!,p:!
-elm (4,1) controlIdentifier Identifier-standard
-elm (2,6) abstract Abstract
-elm (4,51) purpose !
-elm (4,52) originator -
-elm (4,53) accessConstraints !
-elm (4,54) useConstraints !
-elm (4,70) availability -
-elm (4,70)/(4,90) distributor -
-elm (4,70)/(4,90)/(2,7) distributorName !
-elm (4,70)/(4,90)/(2,10) distributorOrganization !
-elm (4,70)/(4,90)/(4,2) distributorStreetAddress !
-elm (4,70)/(4,90)/(4,3) distributorCity !
-
-
-
-
-
-
-
-The Attribute Set (.att) Files
-
-
-This file type describes the Use elements of an attribute set.
-It contains the following directives.
-
-
-
-
-
-
-name symbolic-name
-
-
-(m) This provides a shorthand name or
-description for the attribute set. Mostly useful for diagnostic purposes.
-
-
-
-reference OID-name
-
-
-(m) The reference name of the OID for
-the attribute set. The reference names can be found in the util
-module of YAZ.
-
-
-
-ordinal integer
-
-
-(m) This value will be used to represent the
-attribute set in the index. Care should be taken that each attribute
-set has a unique ordinal value.
-
-
-
-include filename
-
-
-(o,r) This directive is used to
-include another attribute set as a part of the current one. This is
-used when a new attribute set is defined as an extension to another
-set. For instance, many new attribute sets are defined as extensions
-to the bib-1 set. This is an important feature of the retrieval
-system of Z39.50, as it ensures the highest possible level of
-interoperability, as those access points of your database which are
-derived from the external set (say, bib-1) can be used even by clients
-who are unaware of the new set.
-
-
-
-att att-value att-name [local-value]
-
-
-(o,r) This
-repeatable directive introduces a new attribute to the set. The
-attribute value is stored in the index (unless a local-value is
-given, in which case this is stored). The name is used to refer to the
-attribute from the abstract syntax.
-
-
-
-
-
-
-This is an excerpt from the GILS attribute set definition. Notice how
-the file describing the bib-1 attribute set is referenced.
-
-
-
-
-
-name gils
-reference GILS-attset
-include bib1.att
-ordinal 2
-
-att 2001 distributorName
-att 2002 indextermsControlled
-att 2003 purpose
-att 2004 accessConstraints
-att 2005 useConstraints
-
-
-
-
-
-
-
-The Tag Set (.tag) Files
-
-
-This file type defines the tagset of the profile, possibly by
-referencing other tag sets (most tag sets, for instance, will include
-tagsetG and tagsetM from the Z39.50 specification). The file may
-contain the following directives.
-
-
-
-
-
-
-name symbolic-name
-
-
-(m) This provides a shorthand name or
-description for the tag set. Mostly useful for diagnostic purposes.
-
-
-
-reference OID-name
-
-
-(o) The reference name of the OID for
-the tag set. The reference names can be found in the util
-module of YAZ. The directive is optional, since not all tag sets
-are registered outside of their schema.
-
-
-
-type integer
-
-
-(m) The type number of the tagset within the schema
-profile (note: this specification really should belong to the .abs
-file. This will be fixed in a future release).
-
-
-
-include filename
-
-
-(o,r) This directive is used
-to include the definitions of other tag sets into the current one.
-
-
-
-tag number names type
-
-
-(o,r) Introduces a new
-tag to the set. The number is the tag number as used in the protocol
-(there is currently no mechanism for specifying string tags at this
-point, but this would be quick work to add). The names parameter
-is a list of names by which the tag should be recognized in the input
-file format. The names should be separated by slashes (/). The
-type is the recommended datatype of the tag. It should be one of
-the following:
-
-
-
-
-
-structured
-
-
-
-
-
-string
-
-
-
-
-
-numeric
-
-
-
-
-
-bool
-
-
-
-
-
-oid
-
-
-
-
-
-generalizedtime
-
-
-
-
-
-intunit
-
-
-
-
-
-int
-
-
-
-
-
-octetstring
-
-
-
-
-
-null
-
-
-
-
-
-
-
-
-
-
-
-The following is an excerpt from the TagsetG definition file.
-
-
-
-
-
-name tagsetg
-reference TagsetG
-type 2
-
-tag 1 title string
-tag 2 author string
-tag 3 publicationPlace string
-tag 4 publicationDate string
-tag 5 documentId string
-tag 6 abstract string
-tag 7 name string
-tag 8 date generalizedtime
-tag 9 bodyOfDisplay string
-tag 10 organization string
-
-
-
-
-
-
-
-The Variant Set (.var) Files
-
-
-The variant set file is a straightforward representation of the
-variant set definitions associated with the protocol. At present, only
-the Variant-1 set is known.
-
-
-
-These are the directives allowed in the file.
-
-
-
-
-
-
-name symbolic-name
-
-
-(m) This provides a shorthand name or
-description for the variant set. Mostly useful for diagnostic purposes.
-
-
-
-reference OID-name
-
-
-(o) The reference name of the OID for
-the variant set, if one is required. The reference names can be found
-in the util module of YAZ.
-
-
-
-class integer class-name
-
-
-(m,r) Introduces a new
-class to the variant set.
-
-
-
-type integer type-name datatype
-
-
-(m,r) Adds a
-new type to the current class (the one introduced by the most recent
-class directive). The type names belong to the same name space as
-the one used in the tag set definition file.
-
-
-
-
-
-
-The following is an excerpt from the file describing the variant set
-Variant-1.
-
-
-
-
-
-name variant-1
-reference Variant-1
-
-class 1 variantId
-
- type 1 variantId octetstring
-
-class 2 body
-
- type 1 iana string
- type 2 z39.50 string
- type 3 other string
-
-
-
-
-
-
-
-The Element Set (.est) Files
-
-
-The element set specification files describe a selection of a subset
-of the elements of a database record. The element selection mechanism
-is equivalent to the one supplied by the Espec-1 syntax of the
-Z39.50 specification. In fact, the internal representation of an
-element set specification is identical to the Espec-1 structure,
-and we'll refer you to the description of that structure for most of
-the detailed semantics of the directives below.
-
-
-
-NOTE: Not all of the Espec-1 functionality has been implemented yet.
-The fields that are mentioned below all work as expected, unless
-otherwise is noted.
-
-
-
-The directives available in the element set file are as follows:
-
-
-
-
-
-
-defaultVariantSetId OID-name
-
-
-(o) If variants are used in
-the following, this should provide the name of the variantset used
-(it's not currently possible to specify a different set in the
-individual variant request). In almost all cases (certainly all
-profiles known to us), the name Variant-1 should be given here.
-
-
-
-defaultVariantRequest variant-request
-
-
-(o) This directive
-provides a default variant request for
-use when the individual element requests (see below) do not contain a
-variant request. Variant requests consist of a blank-separated list of
-variant components. A variant component is a comma-separated,
-parenthesized triple of variant class, type, and value (the two former
-values being represented as integers). The value can currently only be
-entered as a string (this will change to depend on the definition of
-the variant in question). The special value (@) is interpreted as a
-null value, however.
-
-
-
-simpleElement path ['variant' variant-request]
-
-
-(o,r) This corresponds to a simple element request in Espec-1. The
-path consists of a sequence of tag-selectors, where each of these can
-consist of either:
-
-
-
-
-
-
-
-
-A simple tag, consisting of a comma-separated type-value pair in
-parenthesis, possibly followed by a colon (:) followed by an
-occurrences-specification (see below). The tag-value can be a number
-or a string. If the first character is an apostrophe ('), this forces
-the value to be interpreted as a string, even if it appears to be numerical.
-
-
-
-
-
-
-A WildThing, represented as a question mark (?), possibly
-followed by a colon (:) followed by an occurrences specification (see
-below).
-
-
-
-
-
-
-A WildPath, represented as an asterisk (*). Note that the last
-element of the path should not be a wildPath (wildpaths don't work in
-this version).
-
-
-
-
-
-
-
-
-The occurrences-specification can be either the string all, the
-string last, or an explicit value-range. The value-range is
-represented as an integer (the starting point), possibly followed by a
-plus (+) and a second integer (the number of elements, default being
-one).
-
-
-
-The variant-request has the same syntax as the defaultVariantRequest
-above. Note that it may sometimes be useful to give an empty variant
-request, simply to disable the default for a specific set of fields
-(we aren't certain if this is proper Espec-1, but it works in
-this implementation).
-
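-For illustration, a hypothetical element request combining a path, an
-occurrences-specification, and a variant request could be written as
-follows (the tags are taken from the GILS example below; the variant
-component uses class 2 (body), type 1 (iana) from the Variant-1
-definition, with the null value @):
-
-simpleelement (4,70)/(4,90):1+2 variant (2,1,@)
-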
-
-
-
-
-
-The following is an example of an element specification belonging to
-the GILS profile.
-
-
-
-
-
-simpleelement (1,10)
-simpleelement (1,12)
-simpleelement (2,1)
-simpleelement (1,14)
-simpleelement (4,1)
-simpleelement (4,52)
-
-
-
-
-
-
-
-The Schema Mapping (.map) Files
-
-
-Sometimes, the client might want to receive a database record in
-a schema that differs from the native schema of the record. For
-instance, a client might only know how to process WAIS records, while
-the database record is represented in a more specific schema, such as
-GILS. In this module, a mapping of data to one of the MARC formats is
-also thought of as a schema mapping (mapping the elements of the
-record into fields consistent with the given MARC specification, prior
-to actually converting the data to the ISO2709). This use of the
-object identifier for USMARC as a schema identifier represents an
-overloading of the OID which might not be entirely proper. However,
-it represents the dual role of schema and record syntax which
-is assumed by the MARC family in Z39.50.
-
-
-
-NOTE: The schema-mapping functions are so far limited to a
-straightforward mapping of elements. This should be extended with
-mechanisms for conversions of the element contents, and conditional
-mappings of elements based on the record contents.
-
-
-
-These are the directives of the schema mapping file format:
-
-
-
-
-
-
-targetName name
-
-
-(m) A symbolic name for the target schema
-of the table. Useful mostly for diagnostic purposes.
-
-
-
-targetRef OID-name
-
-
-(m) An OID name for the target schema.
-This is used, for instance, by a server receiving a request to present
-a record in a different schema from the native one. The name, again,
-is found in the oid module of YAZ.
-
-
-
-map element-name target-path
-
-
-(o,r) Adds
-an element mapping rule to the table.
-
-
-
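-Drawing on the directives above, a small mapping table could look like
-the following sketch (the element names match the GILS abstract syntax
-shown earlier, but the target name, OID name, and target paths are
-invented for illustration and are not taken from the real
-gils-usmarc.map table):
-
-targetName usmarc
-targetRef USmarc
-map title (1,245)
-map abstract (1,520)
-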
-
-
-
-
-
-The MARC (ISO2709) Representation (.mar) Files
-
-
-This file provides rules for representing a record in the ISO2709
-format. The rules pertain mostly to the values of the constant-length
-header of the record.
-
-
-
-NOTE: This will be described better. We're in the process of
-re-evaluating and most likely changing the way that MARC records are
-handled by the system.
-
-
-
-
-
-Field Structure and Character Sets
-
-
-
-In order to provide a flexible approach to national character set
-handling, Zebra allows the administrator to configure the
-system to handle any 8-bit character set — including sets that
-require multi-octet diacritics or other multi-octet characters. The
-definition of a character set includes a specification of the
-permissible values, their sort order (this affects the display in the
-SCAN function), and relationships between upper- and lowercase
-characters. Finally, the definition includes the specification of
-space characters for the set.
-
-
-
-The operator can define different character sets for different fields,
-typical examples being standard text fields, numerical fields, and
-special-purpose fields such as WWW-style linkages (URx).
-
-
-
-The field types, and hence character sets, are associated with data
-elements by the .abs files (see above). The file default.idx
-provides the association between field type codes (as used in the .abs
-files) and the character map files (with the .chr suffix). The format
-of the .idx file is as follows
-
-
-
-
-
-
-index field type code
-
-
-This directive introduces a new
-search index code. The argument is a one-character code to be used in the
-.abs files to select this particular index type. An index, roughly,
-corresponds to a particular structure attribute during search. Refer
-to section .
-
-
-
-sort field code type
-
-
-This directive introduces a
-sort index. The argument is a one-character code to be used in the
-.abs file to select this particular index type. The corresponding
-use attribute must be used in the sort request to refer to this
-particular sort index. The corresponding character map (see below)
-is used in the sort process.
-
-
-
-completeness boolean
-
-
-This directive enables or disables
-complete field indexing. The value of the boolean should be 0
-(disable) or 1. If completeness is enabled, the index entry will
-contain the complete contents of the field (up to a limit), with words
-(non-space characters) separated by single space characters
-(normalized to " " on display). When completeness is
-disabled, each word is indexed as a separate entry. Complete subfield
-indexing is most useful for fields which are typically browsed (eg.
-titles, authors, or subjects), or instances where a match on a
-complete subfield is essential (eg. exact title searching). For fields
-where completeness is disabled, the search engine will interpret a
-search containing space characters as a word proximity search.
-
-
-
-charmap filename
-
-
-This is the filename of the character
-map to be used for this index for field type.
-
-
-
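-Putting these directives together, a default.idx file might contain
-entries along these lines (a sketch only; the field type codes and
-character map file names are illustrative, though "w" for word matches
-the default mentioned in the .abs section above):
-
-index w
-completeness 0
-charmap string.chr
-
-index u
-completeness 1
-charmap urx.chr
-
-sort s
-charmap string.chr
-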
-
-
-
-The contents of the character map files are structured as follows:
-
-
-
-
-
-
-lowercase value-set
-
-
-This directive introduces the basic
-value set of the field type. The format is an ordered list (without
-spaces) of the characters which may occur in "words" of
-the given type. The order of the entries in the list determines the
-sort order of the index. In addition to single characters, the
-following combinations are legal:
-
-
-
-
-
-
-
-
-Backslashes may be used to introduce three-digit octal, or
-two-digit hex representations of single characters (preceded by x).
-In addition, the combinations
-\\, \\r, \\n, \\t, \\s (space — remember that real space-characters
-may not occur in the value definition), and \\ are recognised,
-with their usual interpretation.
-
-
-
-
-
-
-Curly braces {} may be used to enclose ranges of single
-characters (possibly using the escape convention described in the
-preceding point), eg. {a-z} to introduce the standard range of ASCII
-characters. Note that the interpretation of such a range depends on
-the concrete representation in your local, physical character set.
-
-
-
-
-
-
-Parentheses () may be used to enclose multi-byte characters -
-eg. diacritics or special national combinations (eg. Spanish
-"ll"). When found in the input stream (or a search term),
-these characters are viewed and sorted as a single character, with a
-sorting value depending on the position of the group in the value
-statement.
-
-
-
-
-
-
-
-
-uppercase value-set
-
-
-This directive introduces the
-upper-case equivalents to the value set (if any). The number and
-order of the entries in the list should be the same as in the
-lowercase directive.
-
-
-
-space value-set
-
-
-This directive introduces the characters
-which separate words in the input stream. Depending on the
-completeness mode of the field in question, these characters either
-terminate an index entry, or delimit individual "words" in
-the input stream. The order of the elements is not significant —
-otherwise the representation is the same as for the uppercase and
-lowercase directives.
-
-
-
-map value-set target
-
-
-This directive introduces a
-mapping between each of the members of the value-set on the left to
-the character on the right. The character on the right must occur in
-the value set (the lowercase directive) of the character set, but
-it may be a parenthesis-enclosed multi-octet character. This directive
-may be used to map diacritics to their base characters, or to map
-HTML-style character-representations to their natural form, etc.
-
-
-
-
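-A character map file combining these directives might look like this
-sketch (the value sets assume a Latin-1 physical character set, and
-the map line is just an illustration of the syntax):
-
-# basic value set, in sort order
-lowercase {a-z}
-uppercase {A-Z}
-# word delimiters: control characters and space
-space {\001-\040}
-# fold Latin-1 e-acute (hex E9) to plain e
-map \xE9 e
-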
-
-
-
-
-
-
-Exchange Formats
-
-
-Converting records from the internal structure to an exchange format
-is largely an automatic process. Currently, the following exchange
-formats are supported:
-
-
-
-
-
-
-
-
-GRS-1. The internal representation is based on GRS-1, so the
-conversion here is straightforward. The system will create
-applied variant and supported variant lists as required, if a record
-contains variant information.
-
-
-
-
-
-
-SUTRS. Again, the mapping is fairly straightforward. Indentation
-is used to show the hierarchical structure of the record. All
-"GRS" type records support both the GRS-1 and SUTRS
-representations.
-
-
-
-
-
-
-ISO2709-based formats (USMARC, etc.). Only records with a
-two-level structure (corresponding to fields and subfields) can be
-directly mapped to ISO2709. For records with a different structuring
-(eg., GILS), the representation in a structure like USMARC involves a
-schema-mapping (see section ), to an
-"implied" USMARC schema (implied,
-because there is no formal schema which specifies the use of the
-USMARC fields outside of ISO2709). The resultant, two-level record is
-then mapped directly from the internal representation to ISO2709. See
-the GILS schema definition files for a detailed example of this
-approach.
-
-
-
-
-
-
-Explain. This representation is only available for records
-belonging to the Explain schema.
-
-
-
-
-
-
-Summary. This ASN.1-based structure is only available for records
-belonging to the Summary schema - or schemas which provide a mapping
-to this schema (see the description of the schema mapping facility
-above).
-
-
-
-
-
-
-SOIF. Support for this syntax is experimental, and is currently
-keyed to a private Index Data OID (1.2.840.10003.5.1000.81.2). All
-abstract syntaxes can be mapped to the SOIF format, although nested
-elements are represented by concatenation of the tag names at each
-level.
-
-
-
-
-
-
-
-
-
-
-
+