Technical Seminar
Real-World Aleph Scenarios: Eighteen interesting/difficult/common problems, culled from Salesforce Articles, with solutions.
Ex Libris Technical Seminar, April 2014
Jerry Specht Senior Support Analyst [email protected]
Note: A Word document with the Article/KB records from the 2013 "Real-World" session is included on the 2014 Technical Seminar CD, along with, of course, this document.
The Articles in this 2014 document are all new.
If time permits …
Copyright
All of the information and material inclusive of text, images, logos, product names is either the property of, or used with permission by Ex Libris Ltd. The information may not be distributed, modified, displayed, reproduced – in whole or in part – without the prior written permission of Ex Libris Ltd. TRADEMARKS Ex Libris, the Ex Libris logo, Aleph, Alma, SFX, SFXIT, MetaLib, DigiTool, Verde, Primo, Voyager, MetaSearch, MetaIndex and other Ex Libris products and services referenced herein are trademarks of Ex Libris, and may be registered in certain jurisdictions. All other product names, company names, marks and logos referenced may be trademarks of their respective owners. DISCLAIMER The information contained in this document is compiled from various sources and provided on an "AS IS" basis for general information purposes only without any representations, conditions or warranties whether express or implied, including any implied warranties of satisfactory quality, completeness, accuracy or fitness for a particular purpose. Ex Libris, its subsidiaries and related corporations ("Ex Libris Group") disclaim any and all liability for all use of this information, including losses, damages, claims or expenses any person may incur as a result of the use of this information, even if advised of the possibility of such loss or damage. © Ex Libris Ltd., 2014
Article 000022292
Problem Symptoms
Changes made directly to the item sublibrary, collection, or call number via the GUI or via manage-62 (Update Item Records) do not work. If made via SQL, the changes can be seen under Items in Cataloging, but when the item (or the HOL record) is subsequently resent to the server, the changes disappear.
Cause
X852-ITEM-OVERRIDE is set to "Y" in the xxx50 and xxx60 tab100 (either explicitly or by default) and z30_temp_location = "N".
Resolution
1. Change X852-ITEM-OVERRIDE to "N" or z30_temp_location to "Y".
Note: Propagating HOL 852 changes to all of the associated item records is a great convenience. Setting X852-ITEM-OVERRIDE to "N" (or z30_temp_location to "Y") will prevent this from happening: either each associated item would need to be updated in the GUI or manage-62 would need to be run.
Or:
2. Leave X852-ITEM-OVERRIDE=Y and z30_temp_location=N, and...
a. Use the file of z30 record keys as input to manage-70 (Retrieve Record Keys), specifying ITEM-TO-HOL as the "Convert Type"
b. Stop the xxx60 ue_01 (there have been problems with leaving this running while jobs updating HOL records are performed)
c. Run manage-21 in xxx60, with the output file from step 2a as the input, specifying the change which you want made to the 852 call number subfields
d. Restart the xxx60 ue_01. (This will propagate the changed call numbers to the associated z30 records.)
Article 000022293
Problem Symptoms
Changes to the item sublibrary, collection, or call number made via SQL do not update the Bib, ADM, or HOL library indexes.
Cause
Updating items via SQL does *not* create z07s (indexing requests).
Resolution
To make z07s be created, either:
1. make the change via the GUI or via manage-62 (Update Item Records), or
2a. use the item number file as input to manage-70 (Retrieve Record Keys), specifying ITEM-TO-BIB as the "Convert Type", and then
2b. use the file produced by 2a as input to xxx01 manage-40 (Update Indexes for Selected Records), which will generate a z07 for each bib record, which will then be processed by the xxx01 ue_01.
Additional Information
As noted in Article 000022292 ("Item sublibrary, collection, and call number updates do not work"), direct updates to items do not work at all when X852-ITEM-OVERRIDE=Y (or z30_temp_location = "N").
Note: Your Oracle license may not permit the updating of records in Oracle via SQL.

Article 000007068 (KB 3937)
We are moving our server to a new building, where it will have a different IP address. What changes do we need to make to ALEPH tables to accommodate this?
Answer:
On each PC, the \ALEPHCOM\TAB\library.ini file has the server address.
If copying the installation, put in the new address. If moving the installation and the file had the numeric address (rather than the name), you will need to change it.
On the server side, the following files have the server address. If they have it in numeric form (rather than the name), you will need to change it:
1. $alephe_root/aleph_start
2. $alephe_root/www_server.conf
3. $alephe_root/apache/conf/httpd.conf
4. Initial Apache page (file .index.html)
5. /etc/hosts
6. You will need new $alephe_tab/license and ./license.www_x files with the new external IP address and, if relevant, the new internal address. Contact Ex Libris Support to have these created.
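A quick way to locate server-side files that still carry a hard-coded numeric address is a recursive grep for dotted quads. The directory tree and file contents below are mock stand-ins for the real server files; on a live system you would point the grep at $alephe_root instead:

```shell
# Build a mock config tree standing in for $alephe_root.
mkdir -p /tmp/ip_demo/apache/conf
echo 'setenv ALEPH_HOST 10.1.2.3'   > /tmp/ip_demo/aleph_start        # numeric address
echo 'ServerName aleph.example.edu' > /tmp/ip_demo/apache/conf/httpd.conf  # name, OK

# -r recurse, -l list matching files, -E extended regex for a dotted quad.
# Only files containing a numeric IPv4 address are listed.
grep -rlE '([0-9]{1,3}\.){3}[0-9]{1,3}' /tmp/ip_demo
```

Files listed by the grep are the ones to edit before (or after) the move; files that use the host name need no change.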
Article 000020238 (KB 16384-48242)
My Article 000020238 has existed for a while, but I discovered that it had wrong information. I have corrected it, inserting: (1) a new first paragraph in the Resolution and (2) an Additional Information section with additional newly found information.
Description:
util a/17/11/2 in our abc50 z00p shows that there are 2,103,451 records, occupying 455 meg. But when Upgrade Express "Export Customer Data" (create_customer_data) processes the z00p using the Data Pump, we see this:
. exported "ABC50"."Z00P" 24.48 GB 2103451 rows
(And it takes *11 hours* to process this z00p. The Data Pump export of all of the Oracle data, *without the z00p*, takes only 6 hours.)
Why does the exported table occupy 24 gig when the size in Oracle is only 455 meg? And why does it take so long?
Resolution:
In addition to the data and index components which one has for normal tables, because the Z00P_PTR column is in LOB format, a very large LOBSEGMENT component exists. This can be seen in util a/17/11/1; see the example in Additional Information.
Two solutions to this problem are given below. We originally suggested Solution B (Data Pump export of the z00p prior to the regular Upgrade Express export), but testing showed that Solution A (rebuild of the z00p using p_publish_04 in the new version) is much faster: we were able to run p_publish_04 on the 2.3 million record bib library (generating 2.0 million z00p records) in one hour. The Data Pump export, though not requiring downtime for the z00p export, still takes 8-11 hours (plus three hours for import).
Solution A. p_publish_04
1. Stop the ue_21 daemons on v20 Prod.
2. Do util a/17/1 to drop the v20 z00p (so the z00p will *not* be exported).
3. Run Upgrade Express.
4. Do util a/17/1 to drop the v21 z00p.
5. Run p_publish_04 on each source library in v21.
These p_publish_04's will generate z00p's with timestamps with the current date/time, which will result in their being republished by p_publish_06. If there have been problems which you want to correct, this may be desirable, but if the v20 z00p is OK, then you may want to update the timestamp for each publishing set in the v21 $aleph_proc/tab_publish_timestamp with a date/time which is greater than that of the z00p_timestamp's of the new z00p's. This will prevent p_publish_06 from republishing all the z00p's.
Solution B. Separate Data Pump export of the z00p prior to the regular, complete Upgrade Express export.
The night *before* the regular planned Upgrade Express export:
1. Stop the ue_21 daemons on v20 Prod.
2. Export the z00p using $aleph_proc/oracle_expdp_table on v20 Prod.
3. Copy the exported v20 z00p to v21.
4. Drop the z00p (util a/17/1) on v20 Prod.
The night *of* the regular planned Upgrade Express export:
5. Take Prod v20 down.
6. Upgrade Express "Create Customer Data" to export the v20 Oracle data from v20 Prod.
7. Copy the v20 Oracle data to v21.
8. Import the Oracle data ("Install Customer Data") from step 7 on v21.
9. Import the z00p on v21 using $aleph_proc/oracle_impdp_table. (Note: this needs to wait until *after* step 8, which includes create_ora_user for the $usr_library.)
10. Start the ue_21 daemons on v21 (which will process the z07p's accumulated while the ue_21 daemons were stopped on v20).
Additional Information
util a/17/11/1 shows this:
Z00P          TABLE       TS4D    5554176    694272    85
Z00P_ID       INDEX       TS3X     327680     40960     5
Z00P_ID1      INDEX       TS3X     524288     65536     8
Z00P_ID2      INDEX       TS3X     262144     32768     4
Z00P_ID3      INDEX       TS3X     983040    122880    15
Z00P_ID4      INDEX       TS3X     262144     32768     4
Z00P_ID5      INDEX       TS3X    1310720    163840    20
Z00P_SEGNAME  LOBSEGMENT  TS4D  101293056  12661632  1734
The last is what is taking most of the space (101 Gig).
Article 000021368
While setting up a new library for consortial borrowing (xxx40), I inadvertently omitted a space in the following scripted command:
csh -f$aleph_proc/open_new_library xxx40 SMALL usm40 "" N Y
which should have been:
csh -f $aleph_proc/open_new_library xxx40 SMALL usm40 "" N Y
I.e., with a space after the -f. Now, when I enter 'csh' at the prompt, I get these two lines in response:
Execute master cshrc
Variable syntax
And I'm unable to invoke dlib (and, thus, util, etc.). The /exlibris/aleph/a20_1/alephm/.cshrc file hasn't been touched since 2007.
Resolution
Since it seemed that /exlibris/aleph/a20_1/alephm/.cshrc was not executing properly, I entered its commands individually. The first was accepted OK, but the second,
source /exlibris/aleph/u20_1/alephe/aleph_start
got the "Variable syntax" error.
Finding that aleph_start.private, a file which is included in aleph_start, was updated (by the open_new_library script) at the time this problem started, I copied aleph_start.private as aleph_start.private.bad.js20131003; restored the aleph_start.private.save file which had been saved by the script; and did "source aleph_start". I am now able to log in, do "dlib", etc.
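The recovery steps can be sketched as a small script, using a throwaway directory and dummy file contents in place of the real $alephe_root files (the .bad suffix mirrors the one used in the article):

```shell
# Mock stand-in for $alephe_root; contents are dummies.
mkdir -p /tmp/restore_demo && cd /tmp/restore_demo
echo 'setenv BAD_VAR ${unclosed' > aleph_start.private        # broken by the script
echo 'setenv GOOD_VAR ok'        > aleph_start.private.save   # backup made by the script

# 1. Keep the broken file for later analysis.
cp aleph_start.private "aleph_start.private.bad.$(date +%Y%m%d)"
# 2. Restore the backup that open_new_library had saved.
cp aleph_start.private.save aleph_start.private
# 3. On a real system you would now "source aleph_start" and retry dlib/util.
cat aleph_start.private
```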
Article 000021282
What Oracle table is the item barcode index in?
Answer
Unlike most other Aleph indexes, the "BAR" (item barcode) index does not exist as a separate Aleph table. The z30_id1 Oracle index is built on the z30_barcode field:
Z30_ID1 VALID NORMAL NONUNIQUE Z30_BARCODE
When the user specifies that the BAR index should be searched, the system looks at the z30_id1/z30_barcode field. In a multi-ADM situation, the system will retrieve all bib records (in the specified base) whose items contain the specified barcode. Most of the time, the user will be searching the local campus' base and will not retrieve bib records not held by their campus. But, if users are searching in the shared catalog, they *could* retrieve bib records from other campuses whose items could share the same system-assigned, "B-nnnn" barcode as the item they are searching for. The solution is to have users search for barcodes in the local base only.

Article 000001685 (KB 16384-65073)
Desired outcome / Goal
An error message will appear when a patron with a particular patron-status tries to check out an item of a particular item-status, indicating that the patron is not allowed to check out the item.
Procedure
Add a line to ADM tab16 for the item-status/patron-status. (This could be either just the specific item-status/patron-status or, using the "##" wildcard, all statuses not specified in preceding tab16 entries.) The entry needs to have "00" in column 12 ("Max number of loans").
Additional information
1. If no patrons of *any* patron-status should be allowed to check out items of this status, change the value in column 6 ("Loan") in the ADM tab15.eng entry for this item status to "N".
2. If patrons with this patron status should not be able to check out *any* items, regardless of their item-status, change the value in col. 3 ("Loan Permission") in the ADM tab31 for this patron status to "N".
3. It might seem that simply omitting the specific item-status/patron-status combination from tab16 would produce an error message when a loan is attempted, but that is not the case. The loan occurs, but the due date/time is the current date/time. See KB 16384-20481 ("Due date on loan is same day/time as checkout").
Article 000010722 (KB 16384-22993)
p_manage_18 was run with an input file which contained incorrect doc numbers (such as "01493nam" or "2008005672"). This resulted in incorrect bib record keys and in a too-high last-doc-number in the abc01 util g/2. KB 8192-667 notes that "having large 'holes' (say, hundreds of thousands) in the bib record numbers is highly undesirable. (It causes the keyword bitmaps to be unnecessarily large.)"
Resolution
To delete these records and reduce the last-doc-number to a reasonable value, do the following.
1. (Optional) Save/export the records using p_print_03 if the records were created in the GUI and you want to reload them later (with legitimate doc numbers).
2a. Run p_ret_01 (or use "vi") to create a file with the doc numbers in this form:
005001701ABC01
01493nam ABC01
059012606ABC01
<etc.>
(This is the format which p_manage_33 expects the input file to be in.)
2b. Run p_manage_33 with the file created in step 2a as input to delete the records. You need to make sure that any BATCH-DELETE lines in the xxx01 check_doc table are commented out. If they are not, they will prevent the proper deletion of the ADM record, the links, etc. (See KB 5726.)
3. Or, if there are relatively few records, instead of 2a and 2b you can delete the records in GUI Cataloging, doing the "Total Delete" in the nav tree in the left-hand frame.
4. Stop the pc_server.
5. Run this SQL:
>s+ abc01
SQL> select max(z95_doc_number) from z95;
The purpose of this SQL is to confirm that the deletion of the bib records has resulted in corresponding correct deletions of the z95 Word index records. If you find that there are still z95 records with the high keys, there's nothing in particular to do about it; you don't necessarily want to delete them. Jerry Specht ([email protected]) is studying this issue.
6. Use SQL to physically delete the "DEL Y" stub records which remain after step 2b or 3. If the bib doc numbers are in sequence, the following SQL could be used:
SQL-abc01> delete from z00 where z00_doc_number >= 'nnnnnnnnn' and z00_doc_number <= 'yyyyyyyyy';
Otherwise, you will need to run the SQL individually for each doc record you have deleted.
7. Use SQL "select max (z00_doc_number) from z00" to locate the highest key in the bib file.
8. Use util g/2 to reset the last-doc-number to the value found in step 7.
You should then be able to run p_manage_05, p_manage_01, etc., OK.
9. Restart the pc_server.
Note: Sites have found that even after the deletion of these records with bad keys, the keyword indexes are corrupted (failing to give correct results on certain searches), and that p_manage_01 needs to be run to recreate the keyword index. (Contact [email protected] if you find this with your indexes.)
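The fixed-format input file that p_manage_33 expects (step 2a above) can be generated from a plain list of doc numbers: nine zero-padded digits followed immediately by the library code. The library code ABC01 and the doc numbers here are illustrative:

```shell
# A plain list of doc numbers, one per line (mock values).
printf '5001701\n12606\n' > /tmp/docnums.lst

# Zero-pad each number to 9 digits and append the library code.
awk '{ printf "%09dABC01\n", $1 }' /tmp/docnums.lst > /tmp/manage_33.inp

cat /tmp/manage_33.inp
# 005001701ABC01
# 000012606ABC01
```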
Article 000021633
Problem Symptoms
$LOGDIR/www_server log has a single occurrence
of the following errors in the middle of the night:
Oracle error: update_cursor z63 ORA-00942: table or view does not exist
Oracle error: io_z63_write ORA-00942: table or view does not exist
WRITE ERROR 377VH68FFTFHN115J9FJC95NSCD7SUSGQLIUUINUQC1GJJ7NPU
Cause
The www_server was up while clear_vir01 was
running: user attempted to search OPAC after
clear_vir01 had dropped the z63 (OPAC session
table).
Resolution
1. dlib vir01
2. util a/17/14 for z63
If the util shows that the z63 (Web session) exists
and its Oracle indexes exist and are valid, then this
was a temporary situation and can be ignored.
Additional Information
If the user accesses the z63 immediately
*preceding* the drop/create of the z63, then the
clear_vir01 log will include a "Resource busy" error
and users will have the problems described in
Articles 000007337 {Can't log into Web, "unique
constraint (VIR01.Z63_ID) violated"} or 000003447
("Searches redisplays Initial OPAC screen; no
results").
To prevent such problems, entries can be added to
the job_list to:
* stop the www_server (and pc_server) prior to the
clear_vir01 job_list entry and to
* restart them afterwards.
See Article 000001845 ("Restarting Servers from
job_list").
Article 000007013 (KB 3923)
A batch job fails with a message like this:
I/O error : file 'TP1' error code: 9/065 (ANS74), pc=0, call=1, seg=0 65 File locked
Resolution
This means that a file this job wants to create/use
already exists and is in use by another job.
It could be a case where
1. this run of the job started before a previous run
of the same job finished;
2. a previous run of this job "hung" and you did
not kill all of the job's processes before
resubmitting it; or
3. multiple lib_batch processes running for the
library are allowing multiple versions of the same
job to run simultaneously. (See Article 000013987.)
You can check the procedure in the $aleph_proc to
see what the file is.
For example, in the procedure p_cir_51, we see
this:
cd $data_scratch
setenv dd_TP1 cir_51_1.scr
Thus, you should check for a cir_51_1.scr file in the
$data_scratch directory.
Also, you should check for a lock file in
$alephe_scratch, such as,
usm50_p_cir_51.5421.lock .
Any lock files for a job which is not currently
running should be deleted. Then resubmit the job.
Note: "library locked" is an entirely different
message.
With the exception of Case #3 (multiple lib_batch
processes), this should not happen in version 21
because the jobs have been changed to include a
"datex" timestamp (yyyymmdd.hhmmss) in the
workfile name.
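The stale-lock check described above can be sketched as a small script. It assumes the numeric component of names like usm50_p_cir_51.5421.lock is the submitting process id, and it uses a mock directory in place of $alephe_scratch:

```shell
# Mock stand-in for $alephe_scratch.
mkdir -p /tmp/lock_demo && rm -f /tmp/lock_demo/*.lock

# Create one lock whose process has already exited (stale) and one
# belonging to a still-running process (live).
sh -c ':' & deadpid=$!
wait "$deadpid"                                   # process is reaped: its PID is dead
sleep 60 & livepid=$!                             # this one stays alive
touch "/tmp/lock_demo/usm50_p_cir_51.$deadpid.lock"
touch "/tmp/lock_demo/usm50_p_cir_10.$livepid.lock"

# Report locks whose embedded PID no longer exists (kill -0 probes only).
for f in /tmp/lock_demo/*.lock; do
    pid=$(echo "$f" | sed 's/.*\.\([0-9][0-9]*\)\.lock$/\1/')
    if ! kill -0 "$pid" 2>/dev/null; then
        echo "stale lock file: $f"
    fi
done
```

Only the stale file should be reported; a lock whose job is still running must be left alone.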
Article 000022952
Desired outcome/Goal:
Diagnosis/correction of slow Aleph system
Procedure:
1. Check Articles 000018628 ("Aleph is slow") and
000011786 ("System/search slowness") for specific
conditions which might apply. If none....
2. Do the unix "top" command. Is
Cpu(s): .. %us greater than 50%?
If so, check which processes shown in "top" are
using large amounts of CPU and MEMory.
.. %wa (io wait) greater than 40%?
If so, check with the system administrator to see if
a backup is being run.
3. Perform the SQL commands described in Article
000009459 ("How can I tell if Oracle indexes for a
particular library are present & VALID?")
4. Do util a/8 ("List Analyzed Tables/Indexes"). The
result should not show any local XXXnn tables.
(Aleph Demo library tables are less of an issue.)
5. If the preceding don't yield anything concrete,
then do:
a. $alephe_root/aleph_shutdown ‐‐>
b. oracle restart {util o/1/2 (Close Oracle Server),
specifying "yes" for "restart Oracle Server after
closing"} ‐‐>
c. $alephe_root/aleph_startup
6. If that doesn't help, reboot the server.
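Step 2 above can be scripted: pull %us and %wa out of a captured "top" CPU line and apply the 50%/40% thresholds. The sample line and its values are mocked:

```shell
# A captured top CPU line (mock values chosen to trip both thresholds).
line='Cpu(s): 55.0%us,  2.0%sy,  0.0%ni,  1.5%id, 41.5%wa,  0.0%hi'

us=$(echo "$line" | sed 's/.*: *\([0-9.]*\)%us.*/\1/')   # user CPU %
wa=$(echo "$line" | sed 's/.*, *\([0-9.]*\)%wa.*/\1/')   # I/O wait %

advice=$(awk -v us="$us" -v wa="$wa" 'BEGIN {
    if (us > 50) print "high %us: check which processes use the most CPU/memory"
    if (wa > 40) print "high %wa: ask the sysadmin whether a backup is running"
}')
echo "$advice"
```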
Article 000013987 (KB 8192‐970)
Description
Certain p_cir_51 and p_cir_10 runs which should
produce output do not. Some of the
$alephe_scratch logs have this error at the end:
I/O error : file 'TP1' error code: 9/065 (ANS74), pc=0, call=1, seg=0 65 File locked
And, looking at the timestamps of the files in
$alephe_scratch, we see that, despite the fact that
the job_list entries have "Y" in column 4, indicating
that they should be queued, ‐‐ in certain cases a job
is being started before the preceding job for the
same library (ABC50) has completed.
For instance, we see this:
abc50_p_cir_10.09209.dllaw_circ Sat Oct 7 00:01:10 2006
42890 END READING AT 00:01:23
abc50_p_cir_10.09210.dlsxt_circ Sat Oct 7 00:01:14 2006
I/O error : file 'TP1' error code: 9/065 (ANS74), pc=0, call=1, seg=0 65 File locked
abc50_p_cir_10.09211.dlwkk_circ Sat Oct 7 00:01:17 2006
I/O error : file 'TP1' error code: 9/065 (ANS74), pc=0, call=1, seg=0 65 File locked
We see that abc50_p_cir_10.09209.dllaw_circ
doesn't complete until 00:01:23, but
abc50_p_cir_10.09210.dlsxt_circ starts at 00:01:14
and abc50_p_cir_10.09211.dlwkk_circ at 00:01:17;
that is, they start before
abc50_p_cir_10.09209.dllaw_circ has completed.
Cause
Multiple lib_batch processes running for the same
library due to:
1: aleph_startup being run as root; or
2: deletion of the library’s ./files/que_batch_lock
file
Resolution
1. I saw this in util c/1:
root 19228 1 0 05:10:29 ? 1:26 /exlibris/aleph/a20_1/aleph/exe/rts32 ue_11_a ABC50.a20_1
root 17433 1 0 05:04:51 ? 7:45 /exlibris/aleph/a20_1/aleph/exe/rts32 ue_06_a ABC50.a20_1
root 15807 1 0 05:03:27 ? 0:00 /exlibris/aleph/a20_1/aleph/exe/lib_batch ABC50
The following command shows the processes
owned by root :
ps -ef | grep root | grep aleph/exe
This occurred because Aleph was started as root,
which it should not have been.
I suggest that you:
(1) run $alephe_root/aleph_shutdown;
(2) do the above command to verify that these
root/aleph processes have been stopped;
(3) change the ownership of the que_batch_lock file
in each library's $data_files (from root to aleph);
(4) run $alephe_root/aleph_startup .
2. If there are multiple lib_batch processes even
though Aleph has been (correctly) started as the
aleph user, it
could be that the library's ./files/que_batch_lock
file has been deleted. If que_batch_lock is deleted,
the system lets you start a second que_batch
(lib_batch). Note: when que_batch_lock is
*present*, the system does *not* permit a second
lib_batch to be started: you get the message:
"lib_batch is already running".
Article 000018190 describes how the que_batch_lock file can be deleted by util x/3 (Clean data_files). We recommend *not* doing util x/3.
If the error is seen when there are *not* multiple lib_batch processes running, consult Article 000007013 ("File locked" error when running a batch job).
Article 000016320 (KB 16384-10772) [See last page]
We've run across a number of records where a
word was originally in the record, but when the text
is removed, that record still appears in a keyword
search for that word.
In this specific case, it's a fixed field position that is indexed. We have an index WJN on position 21 of the 008 field. Some Music records we're bringing in from Worldcat have this position set to n when it should be blank. When the record is later corrected, a search for WJN=n still retrieves this record. Aleph is not removing this record from the index that contains this word.
We have tried UTIL-F-1-17 to delete all words and headings for a record. That has not helped. We have tried putting the n back, saving the record, and removing it again. That has not helped. We tried changing n to some other invalid character. That also has not helped.
Is there any way to clear this word from the indexes for this record, short of running p_manage_01 on the entire database?
Two example records: 001491885 and 001344572. They should be on both production and test servers, though all our attempts to correct them have been in production.
We have seen something like this happen before, for example, with changed holdings making the PST fields different, and the record remaining in a search with the old location. We're not sure if this might be related, or if the current problem is only with fixed fields.
Answer:
Currently, the maximum number of words which can be included in the Word indexes for a single record is 4950. And we have seen cases where a record is at or near this limit where the deletion of certain words does not cause a proper delete of the text from the Word index.
The words being indexed for a particular document can be seen in UTIL-F-1-28 ("Display Word Indexing for a Single Record"). Checking there, you will see that the "double-words", generated for adjacency purposes, are also included in this word count. So, in fact, the number of words being indexed is roughly the words you see in the record times 2.
We have received the following update from Development regarding this limitation: in v.22 the limit for word indexing will be increased to 20,000 words (from 4,950).
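Per the "roughly times 2" observation above, a record's index-entry count can be estimated from its visible word count: n single words plus about n-1 adjacency double-words. The field text in this sketch is illustrative:

```shell
# Estimate Word-index entries contributed by a field: n single words
# plus roughly n-1 adjacent "double-words" (per the article, ~2n total).
field='advanced macroeconomics olivier blanchard'
n=$(echo "$field" | wc -w)
est=$(( n + (n - 1) ))          # 4 words -> 4 singles + 3 pairs = 7 entries
echo "words=$n, estimated index entries=$est"
```

Summing this estimate over a record's fields and comparing it against the 4,950 limit indicates whether the record is in the danger zone.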
Article 000010065: [*Not* on the CD or handout]
Description
How do we block patrons from borrowing when they owe more than x dollars?
Resolution
ADM library tab_check_circ does this. There are two different groups of checks which relate to blocking due to money owed. The *first* group is 5 and 5_b:
! 5 :  check patron cash limit
!      checks whether the patron has exceeded the amount allowed
!      in z305_cash_limit - uses O(pen) and T(ransferred) transactions
! 5_b: check patron cash limit
!      checks whether the patron has exceeded the amount allowed
!      in z305_cash_limit - uses only O(pen) transactions and not T(ransferred)
The message which appears when one of these blocks is triggered is ./error_eng/check_circ: 0191 L Owes money - $1.
The *second* group is 9_a, 9_b, 10_a, and 10_b:
! 9_a:  check overdues and fines - ADM library
!       checks maximum overdues, recalled overdues and fines, for the
!       ADM library based on tab_block_circ
! 9_b:  check overdues and fines - sublibrary
!       checks maximum overdues and recalled overdues in a sublibrary
!       based on tab_block_circ
! 10_a: same as 9_a except that all loans are taken into account -
!       active, lost and claimed return
! 10_b: same as 9_b except that all loans are taken into account -
!       active, lost and claimed return
The message which appears when one of these blocks is triggered is ./error_eng/check_circ: 0243 L Patron exceeded his max debt ($1).
Article 000003168 (KB 16384‐66769)
Certain patrons' items show in Primo as "Not
renewable", while in the Aleph GUI (though
requiring override) they *are* renewable.
Cause
Aleph ./xxx50/tab/tab_block_circ
Resolution
1. Increase the value in column 4 of
./xxx50/tab/tab_block_circ from "nn" to "99". (This
means that patrons will be blocked only when they
have 99 or more regular overdues.)
2. Use util w/3/1 to restart the www_server and util
w/3/7/9 to restart jboss.
Additional Information
The ADM tab_block_circ col. 5 controls blocking due
to recalled overdues and col. 6, blocking due to
fines.
tab_block_circ is checked in GUI Circ when any of
the following is specified:
RENEW check_circ_9_a
RENEW check_circ_9_b
RENEW check_circ_10_a
RENEW check_circ_10_b
Article 000004681 (KB 16384‐64817)
How can we tell what host/port Aleph is pointing to for PDS?
Answer
The following on the command line of the Aleph server shows the PDS_HOST and PDS_PORT:
> env | grep -i pds
Additional information
* These values are set in $alephe_root/aleph_start:
setenv PDS_HOST aleph.xxx.edu
setenv PDS_PORT 899n
* After changing them, apache and the www_server need to be restarted.

Article 000022125
Question
Is it possible to force defined Item Process Statuses into "Check holdings" instead of "Unavailable", so they will display in Primo as Check Holdings?
Answer
The only option associated with the item processing status (via the ./xxx01/tab/expand_doc_bib_avail expand "AVA=" parameter) is to either create an AVA field or not create an AVA field. (If the item processing status is in the AVA= list, the AVA field will be created; otherwise, not.) It has no effect on the *value* in the AVA field.
See also Article 00022124 ("How to change text of z00p AVA field status values").
Additional Information
The value of the AVA tag is set to "check holdings" in the expand_doc_bib_avail program if SW-CHECK-HOLDINGS has a value of "Y":
IF SW-CHECK-HOLDINGS = "Y"
THEN MOVE "check_holdings" TO L-AVAIL-STATUS
There are several conditions under which SW-CHECK-HOLDINGS is set to "Y" (see below). As can be seen, none of them relate to the item processing status....
IF SW-AVAILABLE-EXIST = "Y"
THEN
   IF SW-SERIAL-EXIST NOT = "Y" OR SW-UNAVAILABLE-EXIST = "N"
   THEN MOVE "available" TO L-AVAIL-STATUS
   END-IF
ELSE
   IF SW-UNAVAILABLE-EXIST = "Y"
   THEN MOVE "unavailable" TO L-AVAIL-STATUS
   ELSE MOVE "Y" TO SW-IGNORE
   END-IF
END-IF.
...
IF BUF-Z30-NO-LINES = ZERO
THEN MOVE "Y" TO SW-CHECK-HOLDINGS
...
IF SW-THRESHOLD-EXIST = "Y" OR SW-ADM-LIBRARY-EXIST = "N"
THEN MOVE "Y" TO SW-CHECK-HOLDINGS
...
IF SW-UNAVAILABLE-EXIST = "Y" AND SW-AVAILABLE-EXIST = "Y" AND SW-SERIAL-EXIST = "Y"
THEN MOVE "Y" TO SW-CHECK-HOLDINGS
...
IF NO-ITEMS > THRESHOLD
THEN MOVE "Y" TO SW-CHECK-HOLDINGS
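The COBOL excerpts above reduce to a small decision function. The following shell paraphrase is an illustration only, not Aleph code: the flag names are shortened, the argument order is invented here, and only the published conditions are modeled.

```shell
# Paraphrase of the published expand_doc_bib_avail logic (illustration only).
# Args: $1=available-exist  $2=unavailable-exist  $3=serial-exist  $4=item-line-count
avail_status() {
    # No item lines at all -> check holdings.
    if [ "$4" -eq 0 ]; then echo "check_holdings"; return; fi
    if [ "$1" = Y ] && { [ "$3" != Y ] || [ "$2" = N ]; }; then
        echo "available"                 # available items, not the serial+unavailable case
    elif [ "$2" = Y ] && [ "$1" = Y ] && [ "$3" = Y ]; then
        echo "check_holdings"            # unavailable + available + serial all set
    elif [ "$2" = Y ]; then
        echo "unavailable"               # only unavailable items
    else
        echo "check_holdings"            # no availability signal
    fi
}

avail_status Y N N 3   # available
avail_status N Y N 3   # unavailable
avail_status Y Y Y 3   # check_holdings
avail_status N N N 0   # check_holdings
```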
Article 000003268 (KB 16384-62737)
Problem symptoms
When attempting to view larger reports in Aleph (such as "PatronList"), a dialog box with the following error message appears: "XML/XSL parse errors". Is there a size limitation for viewing reports in Aleph?
Cause
The system broke PatronList into two smaller files (PatronList.1 and PatronList.2). These two pieces should be printed rather than the larger PatronList file.
Resolution
Print the PatronList.1 and PatronList.2 files which the system has generated.
Additional information
The cut-off size is 78 meg: if a print file is more than 78 meg, the system will break it into multiple pieces, each being 78 meg or less.
If the "XML/XSL parse errors" occur in all cases, regardless of size, see KB 16384-48673 ("Empty XSL files under .\xxx60\Templates\eng on pc"). If that doesn't apply, search Salesforce Articles for others.
The breaking up of larger outputs into smaller pieces to obviate printing problems occurred with v18 rep_change 1739 (v20 rep_ver 15392).
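Given the 78 meg cut-off described above, the number of pieces a report will be split into follows from ceiling division. This sketch assumes the limit is 78 x 1024 x 1024 bytes and uses mocked file sizes:

```shell
# How many pieces a print file of a given size would be split into,
# assuming a 78 MB (78 * 1024 * 1024 byte) cut-off.
limit=$((78 * 1024 * 1024))
for size in 50000000 100000000 200000000; do
    pieces=$(( (size + limit - 1) / limit ))   # ceiling division
    echo "size=$size bytes -> $pieces piece(s)"
done
```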
Article 00022896
Desired outcome/Goal
Re‐publish all Aleph records to Primo.
Procedure
1. Do util e/22 to stop the ue_21 publishing daemon
in the libraries whose publishing sets are to be
rebuilt
2a. If *all* publishing sets are being recreated:
> dlib xxx50 (or whatever the $usr_library is)
> util a/17/1 to drop/create the xxx50 z00p (make
sure to specify "z00p"!)
2b. If only one publishing set is being recreated, run
publish‐05 (Delete ALEPH Published Records)
3. Run publish‐04 (Initial Publishing Process) in
xxx01 for the desired publishing sets, specifying doc
000000000 to 999999999.
4. > cd /exlibris/aleph/u20_1/xxx01/primo_ongoing_publishing
and do "rm -fr *" to remove old tar files
5. Run publish‐06 (Create Tar File for ALEPH
Published Records) in xxx01 for "Range of
documents" 000000000 to 999999999, specifying
"Y" for Update Date Flag.
6. Notify the Primo staff that a completely new
publishing set has been published and that the
existing one needs to be deleted and the new one
reloaded into Primo.
7. Do util e/21 to restart the ue_21 publishing
daemon in the libraries whose ue_21 was stopped
in step 1 above.
Article 000016320
Description
We've run across a number of records where a
word was originally in the record, but when the text
is removed, that record still appears in a keyword
search for that word.
...
...
Is there any way to clear this word from the indexes
for this record, short of running p_manage_01 on
the entire database?
Resolution
Prior to version 22, the maximum number of words
which can be included in the Word indexes for a
single record was 4950. And we have seen cases
where a record is at or near this limit where the
deletion of certain words does not cause a proper
delete of the text from the Word index.
The words being indexed for a particular document
can be seen in UTIL‐F‐1‐28 ("Display Word Indexing
for a Single Record"). Checking there, you will see
that the "double‐words", being generated for
adjacency purposes, are also being included in this
word count. So, in fact, the number of words being
indexed is roughly the words you see in the record
times 2.
In version 22 (rep_ver 19160), the limit for word
indexing has been increased to 20,000 words (from
4,950).
Note: The rep_ver says that manage‐01 needs to be
run to correct existing problems, but this needs to
wait until v22 rep_change 000056 (included in the
22.0.2 Service Pack, June 2014) is in place.
Article 000022187
Description
Though for one record, util f/1/28 (Display Word
Indexing for a Single Record) shows
"Macroéconomie /" indexed both with and without
the slash:
24510 $$aMacroéconomie /$$cOlivier Blanchard, Daniel Cohen ; avec la collaboration de Cyril Nouveau.
001110018 0364 0001 macroeconomie
001110018 0364 0002 macroeconomie
001110018 0365 0001 /
001110018 0365 0002 /
001110018 0366 0001 macroeconomie/
001110018 0366 0002 macroeconomie/
For another, it shows "Macroéconomie /" indexed
*with* the slash only:
24510 $$aMacroéconomie /$$cOlivier Blanchard, Daniel Cohen, David Johnson; avec la collab. de Cyril Nouveau.
001220403 0270 0001 macroeconomie/
001220403 0270 0002 macroeconomie/
For this second record, the word "Macroéconomie"
is *not* retrievable in OPAC or GUI Search.
Cause
The presence of a non‐breaking space character,
which is not changed to a blank, between the "e"
and the "/".
Resolution
Add "U+00A0" to the xxx01 tab_word_breaking,
such as this:
03 # to_blank !@#$%^()_={}[]:";<>,.?|\U+00A0
so that the XA0 will be changed to blank, and the
preceding word properly indexed.
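The effect of the fix can be demonstrated outside Aleph: once the non-breaking space (U+00A0, UTF-8 bytes C2 A0) is converted to a plain blank, the word and the slash separate into distinct tokens. This sketch uses bash pattern substitution and an unaccented stand-in matching the indexed form:

```shell
# Build the problem string: word + non-breaking space (octal 302 240) + slash.
field=$(printf 'Macroeconomie\302\240/')

# Convert the NBSP to a plain blank, as the tab_word_breaking entry does
# at indexing time (bash ${var//pattern/replacement} substitution).
nbsp=$(printf '\302\240')
fixed=${field//"$nbsp"/ }
echo "$fixed"
# Macroeconomie /
```

With the blank in place, "macroeconomie" and "/" are indexed as separate words, so the word becomes retrievable again.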
Additional info:
Though the as-delivered tab_filing header includes information about the inclusion of "U+<hexa value>" entries, the tab_word_breaking header lacks this information.
Corrected by v22 rep_change 55
("tab_word_breaking: The information about the
characters to be considered has been updated")