MIMIX Reference



Page 1: MIMIX Reference

MIMIX ha1™ and MIMIX ha Lite™ for IBM® i5/OS™

Version 5.0

MIMIX® Reference

Published: July 22, 2008

Software level: 5.0.15.00

Copyrights, Trademarks, and Notices


Product conventions .......... 14
    Menus and commands .......... 14
    Accessing online help .......... 14
Publication conventions .......... 14
    Formatting for displays and commands .......... 15
Sources for additional information .......... 17
How to contact us .......... 19

Chapter 1  MIMIX overview .......... 21
MIMIX concepts .......... 23
    System roles and relationships .......... 23
    Data groups: the unit of replication .......... 24
    Changing directions: switchable data groups .......... 24
        Additional switching capability .......... 25
    Journaling and object auditing introduction .......... 25
    Log spaces .......... 26
    Multi-part naming convention .......... 27
The MIMIX environment .......... 29
    The product library .......... 29
        IFS directories .......... 29
    Job descriptions and job classes .......... 30
        User profiles .......... 31
    The system manager .......... 31
    The journal manager .......... 33
    The MIMIXQGPL library .......... 34
        MIMIXSBS subsystem .......... 34
    Data libraries .......... 34
    Named definitions .......... 34
    Data group entries .......... 35
Journal receiver management .......... 37
    Interaction with other products that manage receivers .......... 38
    Processing from an earlier journal receiver .......... 38
    Considerations when journaling on target .......... 39
Operational overview .......... 40
    Support for starting and ending replication .......... 40
    Support for checking installation status .......... 41
    Support for automatically detecting and resolving problems .......... 41
    Support for working with data groups .......... 41
    Support for resolving problems .......... 42
    Support for switching a data group .......... 44
    Support for working with messages .......... 44

Chapter 2  Replication process overview .......... 46
Replication job and supporting job names .......... 47
Cooperative processing introduction .......... 50
    MIMIX Dynamic Apply .......... 50
    Legacy cooperative processing .......... 51
    Advanced journaling .......... 51
System journal replication .......... 53
    Processing self-contained activity entries .......... 54


    Processing data-retrieval activity entries .......... 55
    Processes with multiple jobs .......... 57
    Tracking object replication .......... 57
    Managing object auditing .......... 57
User journal replication .......... 61
    What is remote journaling? .......... 61
    Benefits of using remote journaling with MIMIX .......... 61
    Restrictions of MIMIX Remote Journal support .......... 62
    Overview of IBM processing of remote journals .......... 63
        Synchronous delivery .......... 63
        Asynchronous delivery .......... 65
    User journal replication processes .......... 66
    The RJ link .......... 66
        Sharing RJ links among data groups .......... 66
        RJ links within and independently of data groups .......... 67
        Differences between ENDDG and ENDRJLNK commands .......... 67
    RJ link monitors .......... 68
        RJ link monitors - operation .......... 68
        RJ link monitors in complex configurations .......... 68
    Support for unconfirmed entries during a switch .......... 70
    RJ link considerations when switching .......... 70
User journal replication of IFS objects, data areas, data queues .......... 72
    Benefits of advanced journaling .......... 72
    Replication processes used by advanced journaling .......... 73
    Tracking entries .......... 74
    IFS object file identifiers (FIDs) .......... 75
Lesser-used processes for user journal replication .......... 76
    User journal replication with source-send processing .......... 76
    The data area polling process .......... 77

Chapter 3  Preparing for MIMIX .......... 80
Checklist: pre-configuration .......... 81
Data that should not be replicated .......... 83
Planning for journaled IFS objects, data areas, and data queues .......... 85
    Is user journal replication appropriate for your environment? .......... 85
    Serialized transactions with database files .......... 85
    Converting existing data groups .......... 85
        Conversion examples .......... 86
    Database apply session balancing .......... 87
    User exit program considerations .......... 87
Starting the MIMIXSBS subsystem .......... 90
Accessing the MIMIX Main Menu .......... 91

Chapter 4  Planning choices and details by object class .......... 93
Replication choices by object type .......... 96
Configured object auditing value for data group entries .......... 98
Identifying library-based objects for replication .......... 100
    How MIMIX uses object entries to evaluate journal entries for replication .......... 101
    Identifying spooled files for replication .......... 102
        Additional choices for spooled file replication .......... 103


    Replicating user profiles and associated message queues .......... 104
Identifying logical and physical files for replication .......... 105
    Considerations for LF and PF files .......... 105
        Files with LOBs .......... 107
    Configuration requirements for LF and PF files .......... 108
    Requirements and limitations of MIMIX Dynamic Apply .......... 110
    Requirements and limitations of legacy cooperative processing .......... 111
Identifying data areas and data queues for replication .......... 112
    Configuration requirements - data areas and data queues .......... 112
    Restrictions - user journal replication of data areas and data queues .......... 113
        Supported journal code E and Q entry types .......... 114
Identifying IFS objects for replication .......... 118
    Supported IFS file systems and object types .......... 118
    Considerations when identifying IFS objects .......... 119
        MIMIX processing order for data group IFS entries .......... 119
        Long IFS path names .......... 119
        Upper and lower case IFS object names .......... 119
        Configured object auditing value for IFS objects .......... 120
    Configuration requirements - IFS objects .......... 120
    Restrictions - user journal replication of IFS objects .......... 121
        Supported journal code B entry types .......... 122
Identifying DLOs for replication .......... 124
    How MIMIX uses DLO entries to evaluate journal entries for replication .......... 124
        Sequence and priority order for documents .......... 124
        Sequence and priority order for folders .......... 125
Processing of newly created files and objects .......... 127
    Newly created files .......... 127
        New file processing - MIMIX Dynamic Apply .......... 127
        New file processing - legacy cooperative processing .......... 128
    Newly created IFS objects, data areas, and data queues .......... 128
        Determining how an activity entry for a create operation was replicated .......... 129
Processing variations for common operations .......... 130
    Move/rename operations - system journal replication .......... 130
    Move/rename operations - user journaled data areas, data queues, IFS objects .......... 131
    Delete operations - files configured for legacy cooperative processing .......... 134
    Delete operations - user journaled data areas, data queues, IFS objects .......... 134
    Restore operations - user journaled data areas, data queues, IFS objects .......... 134

Chapter 5  Configuration checklists .......... 137
Checklist: New remote journal (preferred) configuration .......... 139
Checklist: New MIMIX source-send configuration .......... 143
Checklist: Converting to remote journaling .......... 147
Converting to MIMIX Dynamic Apply .......... 150
    Converting using the Convert Data Group command .......... 150
    Checklist: manually converting to MIMIX Dynamic Apply .......... 151
Checklist: Change *DTAARA, *DTAQ, IFS objects to user journaling .......... 154
Checklist: Converting to legacy cooperative processing .......... 157


Chapter 6  System-level communications .......... 159
Configuring for native TCP/IP .......... 159
    Port aliases - simple example .......... 160
    Port aliases - complex example .......... 161
    Creating port aliases .......... 162
Configuring APPC/SNA .......... 163
Configuring OptiConnect .......... 163

Chapter 7  Configuring system definitions .......... 166
Tips for system definition parameters .......... 167
Creating system definitions .......... 170
Changing a system definition .......... 171
Multiple network system considerations .......... 172

Chapter 8  Configuring transfer definitions .......... 174
Tips for transfer definition parameters .......... 176
Using contextual (*ANY) transfer definitions .......... 181
    Search and selection process .......... 181
    Considerations for remote journaling .......... 182
    Considerations for MIMIX source-send configurations .......... 182
    Naming conventions for contextual transfer definitions .......... 183
    Additional usage considerations for contextual transfer definitions .......... 183
Creating a transfer definition .......... 184
Changing a transfer definition .......... 186
    Changing a transfer definition to support remote journaling .......... 186
Finding the system database name for RDB directory entries .......... 188
    Using i5/OS commands to work with RDB directory entries .......... 188
Starting the Lakeview TCP/IP server .......... 189
Using autostart job entries to start the TCP server .......... 190
    Adding an autostart job entry .......... 190
    Identifying the autostart job entry in the MIMIXSBS subsystem .......... 191
    Changing the job description for an autostart job entry .......... 191
Verifying a communications link for system definitions .......... 194
Verifying the communications link for a data group .......... 195
    Verifying all communications links .......... 195

Chapter 9  Configuring journal definitions .......... 197
Journal definitions created by other processes .......... 200
Tips for journal definition parameters .......... 201
Journal definition considerations .......... 205
    Naming convention for remote journaling environments with 2 systems .......... 206
        Example journal definitions for a switchable data group .......... 207
    Naming convention for multimanagement environments .......... 208
        Example journal definitions for three management nodes .......... 209
Journal receiver size for replicating large object data .......... 213
    Verifying journal receiver size options .......... 213
    Changing journal receiver size options .......... 213
Creating a journal definition .......... 215
Changing a journal definition .......... 217
Building the journaling environment .......... 219
Changing the remote journal environment .......... 222


Adding a remote journal link .......... 225
Changing a remote journal link .......... 227
Temporarily changing from RJ to MIMIX processing .......... 228
Changing from remote journaling to MIMIX processing .......... 229
Removing a remote journaling environment .......... 231

Chapter 10  Configuring data group definitions .......... 233
Tips for data group parameters .......... 234
    Additional considerations for data groups .......... 244
Creating a data group definition .......... 247
Changing a data group definition .......... 251
Fine-tuning backlog warning thresholds for a data group .......... 251

Chapter 11  Additional options: working with definitions .......... 255
Copying a definition .......... 255
Deleting a definition .......... 256
Displaying a definition .......... 257
Printing a definition .......... 257
Renaming definitions .......... 258
    Renaming a system definition .......... 258
    Renaming a transfer definition .......... 261
    Renaming a journal definition with considerations for RJ link .......... 262
    Renaming a data group definition .......... 263

Chapter 12  Configuring data group entries .......... 265
Creating data group object entries .......... 267
    Loading data group object entries .......... 267
    Adding or changing a data group object entry .......... 268
Creating data group file entries .......... 272
    Loading file entries .......... 272
        Loading file entries from a data group’s object entries .......... 273
        Loading file entries from a library .......... 275
        Loading file entries from a journal definition .......... 276
        Loading file entries from another data group’s file entries .......... 277
    Adding a data group file entry .......... 278
    Changing a data group file entry .......... 279
Creating data group IFS entries .......... 282
    Adding or changing a data group IFS entry .......... 282
Loading tracking entries .......... 284
    Loading IFS tracking entries .......... 284
    Loading object tracking entries .......... 285
Creating data group DLO entries .......... 287
    Loading DLO entries from a folder .......... 287
    Adding or changing a data group DLO entry .......... 288
Creating data group data area entries .......... 289
    Loading data area entries for a library .......... 289
    Adding or changing a data group data area entry .......... 290
Additional options: working with DG entries .......... 291
    Copying a data group entry .......... 291
    Removing a data group entry .......... 292
    Displaying a data group entry .......... 293


    Printing a data group entry .......... 293

Chapter 13  Additional supporting tasks for configuration .......... 294
Accessing the Configuration Menu .......... 295
Starting the system and journal managers .......... 296
Setting data group auditing values manually .......... 297
    Examples of changing an IFS object’s auditing value .......... 298
Checking file entry configuration manually .......... 303
Changes to startup programs .......... 305
Checking DDM password validation level in use .......... 306
    Option 1. Enable MIMIXOWN user profile for DDM environment .......... 306
    Option 2. Allow user profiles without passwords .......... 307
Starting the DDM TCP/IP server .......... 308
Identifying data groups that use an RJ link .......... 310
Using file identifiers (FIDs) for IFS objects .......... 312
Configuring restart times for MIMIX jobs .......... 313
    Configurable job restart time operation .......... 313
        Considerations for using *NONE .......... 315
    Examples: job restart time .......... 315
        Restart time examples: system definitions .......... 316
        Restart time examples: system and data group definition combinations .......... 316
    Configuring the restart time in a system definition .......... 319
    Configuring the restart time in a data group definition .......... 319

Chapter 14  Starting, ending, and verifying journaling .......... 322
What objects need to be journaled .......... 323
    Authority requirements for starting journaling .......... 324
MIMIX commands for starting journaling .......... 325
Journaling for physical files .......... 326
    Displaying journaling status for physical files .......... 326
    Starting journaling for physical files .......... 326
    Ending journaling for physical files .......... 327
    Verifying journaling for physical files .......... 328
Journaling for IFS objects .......... 330
    Displaying journaling status for IFS objects .......... 330
    Starting journaling for IFS objects .......... 330
    Ending journaling for IFS objects .......... 331
    Verifying journaling for IFS objects .......... 332
Journaling for data areas and data queues .......... 334
    Displaying journaling status for data areas and data queues .......... 334
    Starting journaling for data areas and data queues .......... 334
    Ending journaling for data areas and data queues .......... 335
    Verifying journaling for data areas and data queues .......... 336

Chapter 15  Configuring for improved performance .......... 337
Minimized journal entry data .......... 339
    Restrictions of minimized journal entry data .......... 339
    Configuring for minimized journal entry data .......... 340
Configuring for high availability journal performance enhancements .......... 341
    Journal standby state .......... 341
        Minimizing potential performance impacts of standby state .......... 342


Journal caching ................................................................................................. 342MIMIX processing of high availability journal performance enhancements....... 342Requirements of high availability journal performance enhancements ............. 343Restrictions of high availability journal performance enhancements................. 343

Caching extended attributes of *FILE objects ......................................................... 345Increasing data returned in journal entry blocks by delaying RCVJRNE calls ........ 346

Understanding the data area format.................................................................. 346Determining if the data area should be changed............................................... 347Configuring the RCVJRNE call delay and block values .................................... 347

Configuring high volume objects for better performance......................................... 350Improving performance of the #MBRRCDCNT audit .............................................. 351

Chapter 16 Configuring advanced replication techniques 353Keyed replication..................................................................................................... 355

Keyed vs positional replication .......................................................................... 355Requirements for keyed replication................................................................... 355Restrictions of keyed replication........................................................................ 356Implementing keyed replication......................................................................... 356

Changing a data group configuration to use keyed replication.................... 356Changing a data group file entry to use keyed replication........................... 357

Verifying key attributes ...................................................................................... 359Data distribution and data management scenarios ................................................. 361

Configuring for bi-directional flow ...................................................................... 361
Bi-directional requirements: system journal replication ............................... 361
Bi-directional requirements: user journal replication.................................... 362

Configuring for file routing and file combining ................................................... 363
Configuring for cascading distributions ............................................................. 365

Trigger support ........................................................................................................ 368
How MIMIX handles triggers ............................................................................. 368
Considerations when using triggers .................................................................. 368
Enabling trigger support .................................................................................... 369
Synchronizing files with triggers ........................................................................ 369

Constraint support ................................................................................................... 370
Referential constraints with delete rules............................................................ 370

Replication of constraint-induced modifications .......................................... 371
Handling SQL identity columns ............................................................................... 373

The identity column problem explained............................................................. 373
When the SETIDCOLA command is useful....................................................... 374
SETIDCOLA command limitations .................................................................... 374
Alternative solutions .......................................................................................... 375
SETIDCOLA command details .......................................................................... 376

Usage notes ................................................................................................ 377
Examples of choosing a value for INCREMENTS....................................... 377

Checking for replication of tables with identity columns .................................... 378
Setting the identity column attribute for replicated files ..................................... 378

Collision resolution .................................................................................................. 381
Additional methods available with CR classes .................................................. 381
Requirements for using collision resolution....................................................... 382
Working with collision resolution classes .......................................................... 383

Creating a collision resolution class ............................................................ 383


Changing a collision resolution class........................................................... 384
Deleting a collision resolution class............................................................. 384
Displaying a collision resolution class ......................................................... 384
Printing a collision resolution class.............................................................. 385

Omitting T-ZC content from system journal replication ........................................... 387
Configuration requirements and considerations for omitting T-ZC content ....... 388

Omit content (OMTDTA) and cooperative processing................................. 389
Omit content (OMTDTA) and comparison commands ................................ 389

Selecting an object retrieval delay........................................................................... 391
Object retrieval delay considerations and examples ......................................... 391

Configuring to replicate SQL stored procedures and user-defined functions.......... 393
Requirements for replicating SQL stored procedure operations ....................... 393
To replicate SQL stored procedure operations ................................................. 393

Using Save-While-Active in MIMIX.......................................................................... 396
Considerations for save-while-active................................................................. 396
Types of save-while-active options ................................................................... 397
Example configurations ..................................................................................... 397

Chapter 17 Object selection for Compare and Synchronize commands 399
Object selection process ......................................................................................... 399

Order precedence ............................................................................................. 401
Parameters for specifying object selectors.............................................................. 402
Object selection examples ...................................................................................... 407

Processing example with a data group and an object selection parameter ...... 407
Example subtree ............................................................................................... 410
Example Name pattern...................................................................................... 414
Example subtree for IFS objects ....................................................................... 415

Report types and output formats ............................................................................. 418
Spooled files...................................................................................................... 418
Outfiles .............................................................................................................. 419

Chapter 18 Comparing attributes 420
About the Compare Attributes commands .............................................................. 420

Choices for selecting objects to compare.......................................................... 421
Unique parameters ...................................................................................... 421

Choices for selecting attributes to compare ...................................................... 422
CMPFILA supported object attributes for *FILE objects .............................. 423
CMPOBJA supported object attributes for *FILE objects ............................ 423

Comparing file and member attributes .................................................................... 425
Comparing object attributes .................................................................................... 428
Comparing IFS object attributes.............................................................................. 431
Comparing DLO attributes....................................................................................... 434

Chapter 19 Comparing file record counts and file member data 437
Comparing file record counts .................................................................................. 437

To compare file record counts........................................................................... 438
Significant features for comparing file member data ............................................... 440

Repairing data ................................................................................................... 440
Active and non-active processing...................................................................... 440
Processing members held due to error ............................................................. 441
Additional features............................................................................................. 441


Considerations for using the CMPFILDTA command ............................................. 441
Recommendations and restrictions ................................................................... 441
Using the CMPFILDTA command with firewalls................................................ 442
Security considerations ..................................................................................... 442
Comparing allocated records to records not yet allocated ................................ 442
Comparing files with unique keys, triggers, and constraints ............................. 443

Avoiding issues with triggers ....................................................................... 444
Referential integrity considerations ............................................................. 444

Job priority .................................................................................................... 444
Specifying CMPFILDTA parameter values.............................................................. 445

Specifying file members to compare ................................................................. 445
Tips for specifying values for unique parameters.............................................. 446
Specifying the report type, output, and type of processing ............................... 449

System to receive output ............................................................................. 449
Interactive and batch processing................................................................. 449

Using the additional parameters........................................................................ 449
Advanced subset options for CMPFILDTA.............................................................. 451
Ending CMPFILDTA requests ................................................................................. 454
Comparing file member data - basic procedure (non-active) .................................. 455
Comparing and repairing file member data - basic procedure ................................ 458
Comparing and repairing file member data - members on hold (*HLDERR) .......... 461
Comparing file member data using active processing technology .......................... 464
Comparing file member data using subsetting options ........................................... 467

Chapter 20 Synchronizing data between systems 472
Considerations for synchronizing using MIMIX commands..................................... 474

Limiting the maximum sending size .................................................................. 474
Synchronizing user profiles ............................................................................... 474

Synchronizing user profiles with SYNCnnn commands .............................. 475
Synchronizing user profiles with the SNDNETOBJ command .................... 475
Missing system distribution directory entries automatically added .............. 476

Synchronizing large files and objects ................................................................ 476
Status changes caused by synchronizing ......................................................... 476
Synchronizing objects in an independent ASP.................................................. 477

About MIMIX commands for synchronizing objects, IFS objects, and DLOs .......... 478
About synchronizing data group activity entries (SYNCDGACTE).......................... 479
About synchronizing file entries (SYNCDGFE command) ...................................... 480
About synchronizing tracking entries....................................................................... 482
Performing the initial synchronization...................................................................... 483

Establish a synchronization point ...................................................................... 483
Resources for synchronizing ............................................................................. 483

Using SYNCDG to perform the initial synchronization ............................................ 484
To perform the initial synchronization using the SYNCDG command defaults . 485

Verifying the initial synchronization ......................................................................... 487
Synchronizing database files................................................................................... 489
Synchronizing objects ............................................................................................. 491

To synchronize library-based objects associated with a data group ................. 491
To synchronize library-based objects without a data group .............................. 492

Synchronizing IFS objects....................................................................................... 495
To synchronize IFS objects associated with a data group ................................ 495


To synchronize IFS objects without a data group ............................................. 496
Synchronizing DLOs................................................................................................ 499

To synchronize DLOs associated with a data group ......................................... 499
To synchronize DLOs without a data group ...................................................... 500

Synchronizing data group activity entries................................................................ 503
Synchronizing tracking entries ................................................................................ 505

To synchronize an IFS tracking entry ................................................................ 505
To synchronize an object tracking entry ............................................................ 505

Sending library-based objects ................................................................................. 506
Sending IFS objects ................................................................................................ 508
Sending DLO objects .............................................................................................. 509

Chapter 21 Introduction to programming 510
Support for customizing........................................................................................... 511

User exit points.................................................................................................. 511
Collision resolution ............................................................................................ 511

Completion and escape messages for comparison commands ............................. 514
CMPFILA messages ......................................................................................... 514
CMPOBJA messages........................................................................................ 515
CMPIFSA messages ......................................................................................... 515
CMPDLOA messages ....................................................................................... 516
CMPRCDCNT messages.................................................................................. 516
CMPFILDTA messages..................................................................................... 517

Adding messages to the MIMIX message log ......................................................... 521
Output and batch guidelines.................................................................................... 523

General output considerations .......................................................................... 523
Output parameter ........................................................................................ 523
Display output.............................................................................................. 524
Print output .................................................................................................. 524
File output.................................................................................................... 526

General batch considerations............................................................................ 527
Batch (BATCH) parameter .......................................................................... 527
Job description (JOBD) parameter .............................................................. 527
Job name (JOB) parameter ......................................................................... 527

Displaying a list of commands in a library ............................................................... 528
Running commands on a remote system................................................................ 529

Benefits - RUNCMD and RUNCMDS commands ............................................. 529
Procedures for running commands RUNCMD, RUNCMDS.................................... 530

Running commands using a specific protocol ................................................... 530
Running commands using a MIMIX configuration element ............................... 532

Using lists of retrieve commands ............................................................................ 536
Changing command defaults................................................................................... 537

Chapter 22 Customizing with exit point programs 538
Summary of exit points............................................................................................ 538

MIMIX user exit points....................................................................................... 538
MIMIX Monitor user exit points.......................................................................... 538
MIMIX Promoter user exit points ....................................................................... 539
Requesting customized user exit programs ...................................................... 540

Working with journal receiver management user exit points ................................... 541


Journal receiver management exit points.......................................................... 541
Change management exit points................................................................. 541
Delete management exit points ................................................................... 542
Requirements for journal receiver management exit programs................... 542
Journal receiver management exit program example ................................. 545

Appendix A Supported object types for system journal replication 549

Appendix B Copying configurations 552
Supported scenarios ............................................................................................... 552
Checklist: copy configuration................................................................................... 553
Copying configuration procedure ............................................................................ 558

Appendix C Configuring Intra communications 559
Manually configuring Intra using SNA ..................................................................... 559
Manually configuring Intra using TCP ..................................................................... 561

Appendix D MIMIX support for independent ASPs 563
Benefits of independent ASPs................................................................................. 564
Auxiliary storage pool concepts at a glance ............................................................ 564
Requirements for replicating from independent ASPs ............................................ 567
Limitations and restrictions for independent ASP support....................................... 567
Configuration planning tips for independent ASPs.................................................. 568

Journal and journal receiver considerations for independent ASPs.................. 569
Configuring IFS objects when using independent ASPs ................................... 569
Configuring library-based objects when using independent ASPs.................... 569
Avoiding unexpected changes to the library list ................................................ 570

Detecting independent ASP overflow conditions..................................................... 572

Appendix E Interpreting audit results 573
Interpreting audit results - MIMIX Availability Manager ........................................... 575
Interpreting audit results - 5250 emulator................................................................ 576
Checking the job log of an audit .............................................................................. 578
Interpreting results for configuration data - #DGFE audit........................................ 580
Interpreting results of audits for record counts and file data ................................... 582

What differences were detected by #FILDTA.................................................... 582
What differences were detected by #MBRRCDCNT......................................... 583

Interpreting results of audits that compare attributes .............................................. 586
What attribute differences were detected.......................................................... 587
Where was the difference detected................................................................... 589
What attributes were compared ........................................................................ 590
Attributes compared and expected results - #FILATR, #FILATRMBR audits.... 591
Attributes compared and expected results - #OBJATR audit ............................ 596
Attributes compared and expected results - #IFSATR audit ............................. 604
Attributes compared and expected results - #DLOATR audit ........................... 606
Comparison results for journal status and other journal attributes .................... 608

How configured journaling settings are determined .................................... 611
Comparison results for auxiliary storage pool ID (*ASP)................................... 612
Comparison results for user profile status (*USRPRFSTS) .............................. 615

How configured user profile status is determined........................................ 616
Comparison results for user profile password (*PRFPWDIND)......................... 619


Appendix F Outfile formats 621
Outfile support in MIMIX Availability Manager......................................................... 621
Work panels with outfile support ............................................................................. 622
MCAG outfile (WRKAG command) ......................................................................... 623
MCDTACRGE outfile (WRKDTARGE command) ................................................... 626
MCNODE outfile (WRKNODE command)............................................................... 628
MXCDGFE outfile (CHKDGFE command) .............................................................. 630
MXCMPDLOA outfile (CMPDLOA command)......................................................... 632
MXCMPFILA outfile (CMPFILA command) ............................................................. 634
MXCMPFILD outfile (CMPFILDTA command) ........................................................ 636
MXCMPFILR outfile (CMPFILDTA command, RRN report).................................... 639
MXCMPRCDC outfile (CMPRCDCNT command)................................................... 640
MXCMPIFSA outfile (CMPIFSA command) ............................................................ 644
MXCMPOBJA outfile (CMPOBJA command) ......................................................... 647
MXDGACT outfile (WRKDGACT command)........................................................... 649
MXDGACTE outfile (WRKDGACTE command)...................................................... 651
MXDGDAE outfile (WRKDGDAE command) .......................................................... 659
MXDGDFN outfile (WRKDGDFN command) .......................................................... 660
MXDGDLOE outfile (WRKDGDLOE command) ..................................................... 668
MXDGFE outfile (WRKDGFE command)................................................................ 670
MXDGIFSE outfile (WRKDGIFSE command) ......................................................... 674
MXDGSTS outfile (WRKDG command) .................................................................. 676

WRKDG outfile SELECT statement examples.................................................. 696
WRKDG outfile example 1........................................................................... 696
WRKDG outfile example 2........................................................................... 696
WRKDG outfile example 3........................................................................... 697
WRKDG outfile example 4........................................................................... 697

MXDGOBJE outfile (WRKDGOBJE command) ...................................................... 703
MXDGTSP outfile (WRKDGTSP command) ........................................................... 706
MXJRNDFN outfile (WRKJRNDFN command) ....................................................... 709
MXRJLNK outfile (WRKRJLNK command) ............................................................. 713
MXSYSDFN outfile (WRKSYSDFN command)....................................................... 716
MXTFRDFN outfile (WRKTFRDFN command) ....................................................... 720
MZPRCDFN outfile (WRKPRCDFN command) ...................................................... 722
MZPRCE outfile (WRKPRCE command) ................................................................ 723
MXDGIFSTE outfile (WRKDGIFSTE command)..................................................... 726
MXDGOBJTE outfile (WRKDGOBJTE command).................................................. 728

Index 732


Product conventions

The conventions described here apply to all Lakeview products unless otherwise noted.

Menus and commands

Functionality for all Lakeview products is accessible from the product’s main menu. For example, all MIMIX products are accessible from a common MIMIX Main Menu. The options you see on a given menu may vary according to which products are installed.

When there is a corresponding command for a menu option, the command is shown at the far right of the display. You can use either the menu option or the command to access the function.

To issue a command from a command line outside of the menu interface, you can add the product library name to your library list or you can qualify the command with the name of the product library.

If you enter a command without parameters, the system will prompt you for any required parameters. If you enter the command with all of the required parameters, the function is invoked immediately. Some commands can be submitted in batch jobs.
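For example, assuming the product library in your installation is named MIMIX (installation library names can vary), either approach works from a 5250 command line:

```cl
/* Add the product library to the library list, then run commands directly */
ADDLIBLE LIB(MIMIX)
WRKDG

/* Or qualify the command with the product library name */
MIMIX/WRKDG
```

WRKDG is one of the MIMIX commands discussed later in this book; substitute whichever command you need.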

Accessing online help

MIMIX Availability Manager includes online help that is accessible from within the product. From any window within MIMIX Availability Manager, selecting the Help icon will open the help system and access help for the current window.

From a 5250 emulator, context sensitive online help is available for all MIMIX commands and displays. Simply press F1 to view help. The position of your cursor determines what you will see.

• To view general help for a command, a display, or a menu, press F1 when the cursor is at the top of the display.

• To view help for a specific option, prompt, or column, press F1 when the cursor is located in the area for which you want help.

Publication conventions

This book uses typography and specialized formatting to help you quickly identify the type of information you are reading. For example, specialized styles and techniques distinguish information you see on a display from information you enter on a display or command line. In text, bold type identifies a new term, whereas an underlined word highlights its importance. Notes and Attentions are specialized formatting techniques that are used, respectively, to highlight a fact or to warn you of the potential for damage. The following topics illustrate formatting techniques that may be used in this book.


Formatting for displays and commands

Table 1 shows the formatting used for the information you see on displays and command interfaces:

Table 1. Formatting examples for displays and commands

Initial Capitalization
    Names of menus or displays, commands, keyboard keys, and columns. (Column names are also shown in italic.)
    Examples: MIMIX Basic Main Menu; Update Access Code command; Page Up key

Italic
    Names of columns, prompts on displays, variables, and user-defined values.
    Examples: The Status column; The Start processes prompt; The library-name value

UPPERCASE
    System-defined mnemonic names for commands, parameters, and values.
    Examples: CHGUPSCFG command; WARNMSG parameter; The value *YES

Monospace font
    Text that you enter into a 5250 emulator command line. In instructions, the conventions of italic and UPPERCASE also apply. Also used for examples showing programming code.
    Examples: Type the command MIMIX and press Enter; DGDFN(name system1 system2); CHGVAR &RETURN &CONTINUE


Sources for additional information

This book refers to other published information. The following information, plus additional technical information, can be located in the IBM System i and i5/OS Information Center.

From the Information Center you can access these IBM Power™ Systems topics, books, and redbooks:

• Backup and Recovery

• Journal management

• DB2 Universal Database for IBM Power™ Systems Database Programming

• Integrated File System Introduction

• Independent disk pools

• OptiConnect for OS/400

• TCP/IP Setup

• IBM redbook Striving for Optimal Journal Performance on DB2 Universal Database for iSeries, SG24-6286

• IBM redbook AS/400 Remote Journal Function for High Availability and Data Replication, SG24-5189

• IBM redbook Power™ Systems iASPs: A Guide to Moving Applications to Independent ASPs, SG24-6802

The following information may also be helpful if you use advanced journaling:

• DB2 UDB for iSeries SQL Programming Concepts

• DB2 Universal Database for iSeries SQL Reference

• IBM redbook AS/400 Remote Journal Function for High Availability and Data Replication, SG24-5189


How to contact us

For contact information, visit our Contact CustomerCare web page.

If you are current on maintenance, support for MIMIX products is also available when you log in to Support Central.

It is important to include product and version information whenever you report problems. If you use MIMIX Availability Manager, you should also include the version information provided at the bottom of each MIMIX Availability Manager window.


Chapter 1 MIMIX overview

This book provides concepts, configuration procedures, and reference information for MIMIX ha1 and MIMIX ha Lite. For simplicity, this book uses the term MIMIX to refer to the functionality provided by either product unless a more specific name is necessary.

MIMIX version 5 provides high availability for your critical data in a production environment on IBM Power™ Systems through real-time replication of changes. MIMIX continuously captures changes to critical database files and objects on a production system, sends the changes to a backup system, and applies the changes to the appropriate database file or object on the backup system. The backup system stores exact duplicates of the critical database files and objects from the production system.

MIMIX uses two replication paths to address different pieces of your replication needs. These paths operate with configurable levels of cooperation or can operate independently.

• The user journal replication path captures changes to critical files and objects configured for replication through a user journal. When configuring this path, shipped defaults use the IBM i remote journaling function to simplify sending data to the remote system. In previous versions, MIMIX DB2 Replicator provided this function.

• The system journal replication path handles replication of critical system objects (such as user profiles or spooled files), integrated file system (IFS) objects, and document library objects (DLOs) using the IBM i system journal. In previous versions, MIMIX Object Replicator provided this function.

Configuration choices determine the degree of cooperative processing used between the system journal and user journal replication paths when replicating database files, IFS objects, data areas, and data queues.

One common use of MIMIX is to support a hot backup system to which operations can be switched in the event of a planned or unplanned outage. If a production system becomes unavailable, its backup is already prepared for users. In the event of an outage, you can quickly switch users to the backup system where they can continue using their applications. MIMIX captures changes on the backup system for later synchronization with the original production system. When the original production system is brought back online, MIMIX assists you with analysis and synchronization of the database files and other objects.

You can view the replicated data on the backup system at any time without affecting productivity. This allows you to generate reports, submit (read-only) batch jobs, or perform backups to tape from the backup system. In addition to real-time backup capability, replicated databases and objects can be used for distributed processing, allowing you to off-load applications to a backup system.

Typically MIMIX is used among systems in a network. Simple environments have one production system and one backup system. More complex environments have

Page 22: MIMIX Reference

multiple production systems or backup systems. MIMIX can also be used on a single system.

MIMIX automatically monitors your replication environment to detect and correct potential problems that could be detrimental to maintaining high availability.

MIMIX also provides a means of verifying that the files and objects being replicated are what is defined to your configuration. This can help ensure the integrity of your MIMIX configuration.

The topics in this chapter include:

• “MIMIX concepts” on page 23 describes concepts and terminology that you need to know about MIMIX.

• “The MIMIX environment” on page 29 describes components of the MIMIX operating environment.

• “Journal receiver management” on page 37 describes how MIMIX performs change management and delete management for replication processes.

• “Operational overview” on page 40 provides information about day to day MIMIX operations.

Page 23: MIMIX Reference

MIMIX concepts

This topic identifies concepts and terminology that are fundamental to how MIMIX performs replication. You should be familiar with the relationships between systems, the concepts of data groups and switching, and the role of the i5/OS journaling function in replication.

System roles and relationships

Usually, replication occurs between two or more System i5 systems. The most common scenario for replication is a two-system environment in which one system is used for production activities and the other system is used as a backup system.

The terms production system and backup system are used to describe the role of a system relative to the way applications are used on that system. In an availability management context, a production system is the system currently running the production workload for the applications. In normal operations, the production system is the system on which the principal copy of the data and objects associated with the application exist. A backup system is the system that is not currently running the production workload for the applications. In normal operations, the backup system is the system on which you maintain a copy of the data and objects associated with the application. These roles are not always associated with a specific system. For example, if you switch application processing to the backup system, the backup system temporarily becomes the production system.

Typically, for normal operations in a basic two-system environment, replicated data flows from the system running the production workload to the backup system. In a more complex environment, the terms production system and backup system may not be sufficient to clearly identify a specific system or its current role in the replication process. For example, if a payroll application on system CHICAGO is backed up on system LONDON and another application on system LONDON is backed up to the CHICAGO system, both systems are acting as production systems and as backup systems at the same time.

The terms source system and target system identify the direction in which an activity occurs between two participating systems. A source system is the system from which MIMIX replication activity between two systems originates. In replication, the source system contains the journal entries used for replication. Information from the journal entries is either replicated to the target system or used to identify objects to be replicated to the target system. A target system is the system on which MIMIX replication activity between two systems completes.

Because multiple instances of MIMIX can be installed on any system, it is important to correctly identify the instance to which you are referring. It is helpful to consider each installation of MIMIX on a system as being part of a separate network that is referred to as a MIMIX installation. A MIMIX installation is a network of System i5 systems that transfer data and objects among each other using functions of a common MIMIX product. A MIMIX installation is defined by the way in which you configure the MIMIX product for each of the participating systems. A system can participate in multiple independent MIMIX installations.

Page 24: MIMIX Reference

The terms management system and network system define the role of a system relative to how the products interact within a MIMIX installation. These roles remain associated with the system within the MIMIX installation to which they are defined. Typically one system in the MIMIX installation is designated as the management system and the remaining one or more systems are designated as network systems. A management system is the system in a MIMIX installation that is designated as the control point for all installations of the product within the MIMIX installation. The management system is the location from which work to be performed by the product is defined and maintained. Often the system defined as the management system also serves as the backup system during normal operations. A network system is any system in a MIMIX installation that is not designated as the management system (control point) of that MIMIX installation. Work definitions are automatically distributed from the management system to a network system. Often a system defined as a network system also serves as the production system during normal operations.

Data groups: the unit of replication

The concept of a data group is used to control replication activities. A data group is a logical grouping of database files, data areas, objects, IFS objects, DLOs, or a combination thereof that defines a unit of work by which MIMIX replication activity is controlled. A data group may represent an application, a set of one or more libraries, or all of the critical data on a given system. Application environments may define a data group as a specific set of files and objects. For example, the R/3 environment defines a data group as a set of SQL tables that all use the same journal and which are all replicated to the same system. Users can start and stop replication activity by data group, switch the direction of replication for a data group, and display replication status by data group.

By default, data groups support replication from both the system journal and the user journal. Optionally, you can limit a data group to replicate using only one replication path. The parameters in the data group definition identify the direction in which data is allowed to flow between systems and whether to allow the flow to switch directions. You also define the data to be replicated and many other characteristics the replication process uses on the defined data. The replication process is started and ended by operations on a data group.

A data group entry identifies a source of information that can be replicated. Once a data group definition is created, you can define data group entries. MIMIX uses the data group entries that you create during configuration to determine whether a journal entry should be replicated. If you are using both user journal and system journal replication, a data group can have any combination of entries for files, IFS objects, library-based objects, and DLOs.

Changing directions: switchable data groups

When you configure a data group definition, you specify which of the two systems in the data group is the source for replicated data. In normal operation, data flows between two systems in the direction defined within the data group. When you need to switch the direction of replication, for example when a production system is removed from the network for planned downtime, default values in the data group definition allow the same data group to be used for replication from either direction.

Page 25: MIMIX Reference

MIMIX provides support for switching due to planned and unplanned events. At the data group level, the Switch Data Group (SWTDG) command will switch the direction in which replication occurs between systems.

Note: A switchable data group is different from bi-directional data flow. Bi-directional data flow is a data sharing technique described in “Configuring advanced replication techniques” on page 353.
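For illustration, a data-group-level switch request from a command line might look like the following sketch. The data group name is the hypothetical INVENTORY example used later in this chapter, and the DGDFN parameter name is an assumption; prompt the SWTDG command on your system to confirm its parameters.

```
SWTDG DGDFN(INVENTORY CHICAGO HONGKONG)
```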

Additional switching capability

Typically, switching is performed by using the MIMIX Switch Assistant. MIMIX Switch Assistant provides a user interface that prompts you through the switch process. MIMIX Switch Assistant calls your default MIMIX Model Switch Framework to control the switching process.

MIMIX ha1 and MIMIX ha Lite include MIMIX Monitor, which provides support for the MIMIX Model Switch Framework. Through this support, you can customize monitoring and switching programs. Switching support in MIMIX Monitor includes logical and physical switching. When you perform switching in this manner, the exit programs called by your implementation of MIMIX Model Switch Framework must include the SWTDG command. For more information, see the Using MIMIX Monitor book. Your authorized Lakeview representative can assist you in implementing advanced switching scenarios.

Journaling and object auditing introduction

MIMIX relies on data recorded by the i5/OS operating system functions of journaling, remote journaling, and object auditing. Each of these functions records information in a journal. Variations in the replication process are optimized according to characteristics of the information provided by each of these functions.

Journaling is the process of recording information about changes to user-identified objects, including those made by a system or user function, for a limited number of object types. Events are logged in a user journal. Optionally, events logged in a user journal can also be recorded on a remote system using remote journaling, whereby the journal and journal receiver exist on a remote system or on a different logical partition.

Object auditing is the process by which the system creates audit records for specified types of access to objects. Object auditing logs events in a specialized system journal (the security audit journal, QAUDJRN).

When an event occurs to an object or database file for which journaling is enabled, or when a security-relevant event occurs, the system logs identifying information about the event as a journal entry, a record in a journal receiver. The journal receiver is associated with a journal and contains the log of all activity for objects defined to the journal or all objects for which an audit trail is kept.

Journaling must be active before MIMIX can perform replication. MIMIX uses the recorded journal entries to replicate activity to a designated system. Data group entries and other data group configuration settings determine whether MIMIX replicates activity for objects and whether replication is performed based on entries logged to the system journal or to a user journal. For some configurations, MIMIX uses entries from both journals.

Page 26: MIMIX Reference

Journal entries deposited into the system journal (on behalf of an audited object) contain only an indication of a change to an object. Some of these entry types contain enough information for MIMIX to apply the change directly to the replicated object on the target system; however, many entry types require MIMIX to gather additional information about the object from the source system in order to apply the change to the replicated object on the target system.

Journal entries deposited into a user journal (on behalf of a journaled file, data area, data queue, or IFS object) contain images of the data that was changed. MIMIX needs this information in order to apply the change directly to the replicated object on the target system.

When replication is started, the start request (STRDG command) identifies a sequence number within a journal receiver at which MIMIX processing begins. In data groups configured with remote journaling, the specified sequence number and receiver name are the starting point for MIMIX processing from the remote journal. The i5/OS remote journal function controls where it starts sending entries from the source journal receiver to the remote journal receiver.
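As a sketch of the start request described above, a data group could be started from a command line as follows. The data group name is the hypothetical INVENTORY example used later in this chapter, and the DGDFN parameter name is an assumption; the parameters that identify the starting journal receiver and sequence number vary by configuration, so prompt the STRDG command to see the options your version supports.

```
STRDG DGDFN(INVENTORY CHICAGO HONGKONG)
```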

The i5/OS operating system requires that journaled objects reside in the same auxiliary storage pool (ASP) as the user journal. The journal receivers can be in a different ASP. If the journal is in a primary independent ASP, the journal receivers must reside in the same primary independent ASP or a secondary independent ASP within the same ASP group.

The i5/OS operating system (V5R4 and higher releases) allows journaling a maximum of 10,000,000 objects to one user journal. MIMIX can use existing journals with this value. Journals created by MIMIX have a maximum of 250,000 objects. User journaling will not start if the number of objects associated with the journal exceeds the journal maximum. The maximum includes:

• Objects for which changes are currently being journaled

• Objects for which journaling was ended while the current receiver is attached

• Journal receivers that are, or were, associated with the journal while the current journal receiver is attached

Remote journaling requires unique considerations for journaling and journal receiver management. For additional information, see “Journal receiver management” on page 37.

Log spaces

Based on System i5 user space objects, a log space is a MIMIX object that provides an efficient storage and manipulation mechanism for replicated data that is temporarily stored on the target system during the receive and apply processes. All internal structures and objects that make up a log space are created and manipulated by MIMIX.

Page 27: MIMIX Reference

Multi-part naming convention

MIMIX uses named definitions to identify related user-defined configuration information. A multi-part, qualified naming convention uniquely describes certain types of definitions. This includes a two-part name for journal definitions and a three-part name for transfer definitions and data group definitions. Newly created data groups use remote journaling as the default configuration, which has unique requirements for naming data group definitions. For more information, see “Naming convention for remote journaling environments with 2 systems” on page 206.

The multi-part name consists of a name followed by one or two participating system names (actually, names of system definitions). Together the elements of the multi-part name define the entire environment for that definition. As a whole unit, a fully-qualified two-part or three-part name must be unique. The first element, the name, does not need to be unique. In a three-part name, the order of the system names is also important, since two valid definitions may share the same three elements but with the system names in different orders.

For example, MIMIX automatically creates a journal definition for the security audit journal when you create a system definition. Each of these journal definitions is named QAUDJRN, so the name alone is not unique. The name must be qualified with the name of the system to which the journal definition applies, such as QAUDJRN CHICAGO or QAUDJRN NEWYORK. Similarly, the data group definitions INVENTORY CHICAGO HONGKONG and INVENTORY HONGKONG CHICAGO are unique because of the order of the system names.

When using command interfaces which require a data group definition, MIMIX can derive the fully-qualified name of a data group definition if a partial name provided is sufficient to determine the unique name. If the first part of the name is unique, it can be used by itself to designate the data group definition. For example, if the data group definition INVENTORY CHICAGO HONGKONG is the only data group with the name INVENTORY, then specifying INVENTORY on any command requiring a data group name is sufficient. However, if a second data group named INVENTORY NEWYORK LONDON is created, the name INVENTORY by itself no longer describes a unique data group. INVENTORY CHICAGO would be the minimum portion of the first data group definition's name necessary to determine its uniqueness. If a third data group named INVENTORY CHICAGO LONDON were added, then the fully qualified name would be required to uniquely identify the data group. The order in which the systems are identified is also important. The system HONGKONG appears in only one of the data group definitions. However, specifying INVENTORY HONGKONG will generate a “not found” error because HONGKONG is not the first system in any of the data group definitions. This applies to all external interfaces that reference multi-part definition names.
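Continuing the example above, the following hypothetical requests illustrate how much of a multi-part name must be specified. WRKDG (Work with Data Groups) and the DGDFN parameter are shown as assumptions for illustration; any interface that accepts a data group name resolves partial names the same way.

```
WRKDG DGDFN(INVENTORY)           /* Sufficient while INVENTORY CHICAGO HONGKONG */
                                 /* is the only data group named INVENTORY      */
WRKDG DGDFN(INVENTORY CHICAGO)   /* Needed once INVENTORY NEWYORK LONDON exists */
WRKDG DGDFN(INVENTORY HONGKONG)  /* Fails: HONGKONG is not the first system in  */
                                 /* any data group definition                   */
```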

MIMIX can also derive a fully qualified name for a transfer definition. Data group definitions and system definitions include parameters that identify associated transfer definitions. When a subsequent operation requires the transfer definition, MIMIX uses the context of the operation to determine the fully qualified name. For example, when starting a data group, MIMIX uses information in the data group definition, the systems specified in the data group name, and the specified transfer definition name to derive the fully qualified transfer definition name. If MIMIX cannot find the transfer

Page 28: MIMIX Reference

definition, it reverses the order of the system names and checks again, avoiding the need for redundant transfer definitions.

You can also use contextual system support (*ANY) to configure transfer definitions. When you specify *ANY in a transfer definition, MIMIX uses information from the context in which the transfer definition is called to resolve to the correct system. Unlike the conventional configuration case, a specific search order is used if MIMIX is still unable to find an appropriate transfer definition. For more information, see “Using contextual (*ANY) transfer definitions” on page 181.

Page 29: MIMIX Reference

The MIMIX environment

A variety of product-defined operating elements and user-defined configuration elements collectively form an operational environment on each system. A MIMIX environment can consist of one or more MIMIX installations. Each system that participates in the same MIMIX environment must have the same operational environment. This topic describes each of the components of the MIMIX operating environment.

The product library

The name of the product library into which MIMIX is installed defines the connection among systems in the same MIMIX installation. The default name of the product installation library is MIMIX.

Several items are shipped as part of the product library. The IFS directory structure is associated with the product library for the MIMIX installation and is created during the installation process for License Manager and MIMIX. Each MIMIX installation also contains several default job descriptions and job classes within its library.

IFS directories

A default IFS directory structure is used in conjunction with the library-based objects of the MIMIX family of products. The IFS directory structure is associated with the product library for the MIMIX installation and is created during the installation process for License Manager and MIMIX. Over time, the installation processes for products and fixes will restore objects to the IFS directory structure as well as to the QSYS library.

The directories created when License Manager is installed or upgraded follow these guidelines:

/LakeviewTech This is the root directory for all IFS-based objects.

/LakeviewTech/system-based-area This directory structure contains system-based objects that need to exist only once on a system. The system-based-area represents a unique directory for each set of objects. Two structures that you should be aware of are:

/LakeviewTech/Service/MIMIX/VvRrMm/ is the recommended location for users to place fixes downloaded from the Lakeview website. The VvRrMm value is the same as the release of License Manager on the system. Multiple VvRrMm directories will exist as the release of License Manager changes.

/LakeviewTech/Upgrades/ is where the MIMIX Installation Wizard places software packages that it uploads to the System i5.

/LakeviewTech/UserData/ is available to users to store product-related data.

The directories created when MIMIX is installed or upgraded follow these guidelines. The requirements of your MIMIX environment determine the structure of these directories:

Page 30: MIMIX Reference

/LakeviewTech/MIMIX/product-installation-library There is a unique directory structure for each installation of MIMIX.

/LakeviewTech/MIMIX/product-installation-library/product-area There is a unique directory structure for each installation of MIMIX. The structure is determined by the set of objects needed by an area of the product and the product installation library.

Job descriptions and job classes

MIMIX uses a customized set of job descriptions and job classes. Customized job descriptions optimize characteristics for a category of jobs, including the user profile, job queue, message logging level, and routing data for the job. Customized job classes optimize runtime characteristics such as the job priority and CPU time slice for a category of jobs. All of the shipped job descriptions and job classes are configured with recommended default values.

Job descriptions control batch processing. MIMIX features use a set of default job descriptions, MXAUDIT, MXSYNC, and MXDFT. When MIMIX is installed, these job descriptions are automatically restored in the product library. These job descriptions exist in the product library of each MIMIX installation. Jobs and related output are associated with the user profile submitting the request. Commands such as Compare File Attributes (CMPFILA), Compare File Data (CMPFILDTA), Synchronize Object (SYNCOBJ), as well as numerous others support this standard.

Older commands that provide job description support for batch processing use different job descriptions that are located in the MIMIXQGPL library. The MIMIXQGPL library, along with these job descriptions, is automatically restored on the system when a MIMIX product is installed. Installing additional MIMIX installations on the same system does not create additional copies of these job descriptions.
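As an illustration of this convention, a compare request could accept or override the shipped default job description through the JOBD parameter. This is a sketch only; the data group name is hypothetical and the parameter names are assumptions, so verify them against the command prompt in your installation.

```
CMPFILA DGDFN(INVENTORY CHICAGO HONGKONG) JOBD(MIMIX/MXAUDIT)
```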

Table 2 shows a combined list of MIMIX job descriptions.

Table 2. Job descriptions used by MIMIX

Shipped in the installation library:

MXAUDIT     MIMIX Auditing. Used for MIMIX compare commands, such as those called by MIMIX audits, as the default value on the Job description (JOBD) parameter.

MXDFT       MIMIX Default. Used for MIMIX load commands and by other commands that do not have a specific job description as the default value on the JOBD parameter.

MXSYNC      MIMIX Synchronization. Used for MIMIX synchronization commands, such as those called by MIMIX audits, as the default value on the JOBD parameter.

Shipped in the MIMIXQGPL library:

MIMIXAPY    MIMIX Apply. Used for MIMIX apply process jobs.

MIMIXCMN    MIMIX Communications. Used for all target communication jobs.

MIMIXDFT    MIMIX Default. Used for all MIMIX jobs that do not have a specific job description.

MIMIXMGR    MIMIX Manager. Used for MIMIX system manager and journal manager jobs.

MIMIXMON    MIMIX Monitor. Used for most jobs submitted by the MIMIX Monitor product.

MIMIXPRM    MIMIX Promoter. Used for jobs submitted by the MIMIX Promoter product.

MIMIXRGZ    MIMIX Reorganize File. Used for file reorganization jobs submitted by the database apply job.

MIMIXSND    MIMIX Send. Used for database send, object send, object retrieve, container send, and status send jobs in MIMIX.

MIMIXSYNC   MIMIX Synchronization. Used for MIMIX file synchronization. This is valid for synchronize commands that do not have a JOBD parameter on the display.

MIMIXUPS    MIMIX UPS Monitor. Used for the uninterruptible power source (UPS) monitor managed by the MIMIX Monitor product.

MIMIXVFY    MIMIX Verify. Used for MIMIX verify and compare command processes. This is valid for verify and compare commands that do not have a JOBD parameter on the display.

Page 31: MIMIX Reference

User profiles

All of the MIMIX job descriptions are configured to run jobs using the MIMIXOWN user profile. This profile owns all MIMIX objects, including the objects in the MIMIX product libraries and in the MIMIXQGPL library. The profile is created with sufficient authority to run all MIMIX products and perform all the functions provided by the MIMIX products. The authority of this user profile can be reduced, if business practices require, but this is not recommended. Reducing the authority of the MIMIXOWN profile requires significant effort by the user to ensure that the products continue to function properly and to avoid adversely affecting the performance of MIMIX products. See the License and Availability Manager book for additional security information for the MIMIXOWN user profile.

The system manager

The system manager consists of a pair of system management communication jobs between a management system and a network system. Each pair has a send side system manager job and a receiver side system manager job. These jobs must be active to enable replication.

Page 32: MIMIX Reference

Once started, the system manager monitors for configuration changes and automatically moves any configuration changes to the network system. Dynamic status changes are also collected and returned to the management system. The system manager also gathers messages and timestamp information from the network system and places them in a message log and timestamp file on the management system. In addition, the system manager performs periodic maintenance tasks, including cleanup of the system and data group history files.

Figure 1 shows a MIMIX installation with a management system and two network systems. In this installation, there are four pairs of system manager jobs; two between the first network system and the management system and two between the second network system and the management system. Each arrow represents a pair of system manager jobs. Since each pair has a send side system manager job and a receiver side system manager job, there are eight total system manager jobs in this installation.

Figure 1. System manager jobs in a MIMIX installation with one management system and two network systems.

Page 33: MIMIX Reference

The System manager delay parameter in the system definition determines how frequently the system manager looks for work. Other parameters in the system definition control other aspects of system manager operation.

System manager jobs are included in a group of jobs that MIMIX automatically restarts daily to maintain the MIMIX environment. The default operation of MIMIX is to restart these MIMIX jobs at midnight (12:00 a.m.). MIMIX determines when to restart the system managers based on the value of the Job restart time parameter in the system definitions for the network and management systems. For more information, see the section “Configuring restart times for MIMIX jobs” on page 313.

The journal manager

The journal manager is the process by which MIMIX maintains journal receivers on a system. A journal manager job runs on each system in a MIMIX installation. If you have a MIMIX installation with a management system and two network systems, you

Page 34: MIMIX Reference

have three journal manager jobs, one on each system. For more information, see “Journal definition considerations” on page 205.

By default, MIMIX performs both change management and delete management for journal receivers used by the replication process. Parameters in a journal definition allow you to customize details of how the change and delete operations are performed. The Journal manager delay parameter in the system definition determines how frequently the journal manager looks for work.

Journal manager jobs are included in a group of jobs that MIMIX automatically restarts daily to maintain the MIMIX environment. The default operation of MIMIX is to restart these MIMIX jobs at midnight (12:00 a.m.). The Job restart time parameter in the system definition determines when the journal manager for that system restarts. For more information, see the section “Configuring restart times for MIMIX jobs” on page 313.

The MIMIXQGPL library

When a MIMIX product is installed, a library named MIMIXQGPL is restored on the system. The MIMIXQGPL library includes work management objects used by all MIMIX products. Many of these objects are customized and shipped with default settings designed to streamline operations for the products which use them. These objects include the MIMIXSBS subsystem and a variety of job descriptions and job classes.

Note: If you have previous releases of MIMIX products on a system, you may find additional objects in the MIMIXQGPL library.

MIMIXSBS subsystem

The MIMIXSBS subsystem is the default subsystem used by nearly all MIMIX-related processing. This subsystem is shipped with the proper job queue entries and routing entries for correct operation of the MIMIX jobs.

Data libraries

MIMIX uses the concept of data libraries. Currently there are two series of data libraries:

• MIMIX uses data libraries for storing the contents of the object cache. MIMIX creates the first data library when needed and may create additional data libraries. The names of data libraries are of the form product-library_n (where n is a number starting at 1).

• For system journal replication, MIMIX creates libraries named product-library_x, where x is derived from the ASP. For example, A for ASP 1, B for ASP 2. These ASP-specific data libraries are created when needed and are not deleted until the product is uninstalled.
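Assuming a product installation library named MIMIX, the data library names described above would follow this pattern:

```
MIMIX      Product installation library
MIMIX_1    First data library created for the object cache
MIMIX_2    Additional object cache data library, created if needed
MIMIX_A    Data library for system journal replication of objects in ASP 1
MIMIX_B    Data library for objects in ASP 2
```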

Named definitions

MIMIX uses named definitions to identify related user-defined configuration information. You can create named definitions for system information, communication

Page 35: MIMIX Reference

(transfer) information, journal information, and replication (data group) information. Any definitions you create can be used by both user journal and system journal replication processes.

One or more of each of the following definitions are required to perform replication:

A system definition identifies to MIMIX the characteristics of a system that participates in a MIMIX installation.

A transfer definition identifies to MIMIX the communications path and protocol to be used between two systems. MIMIX supports Systems Network Architecture (SNA), OptiConnect, and Transmission Control Protocol/Internet Protocol (TCP/IP) protocols.

A journal definition identifies to MIMIX a journal environment on a particular system. MIMIX uses the journal definition to manage the journal receiver environment used by the replication process.

A data group definition identifies to MIMIX the characteristics of how replication occurs between two systems. A data group definition determines the direction in which replication occurs between the systems, whether that direction can be switched, and the default processing characteristics to use when processing the database and object information associated with the data group.

A remote journal link (RJ link) is a MIMIX configuration element that identifies an i5/OS remote journaling environment. Newly created data groups use remote journaling as the default configuration. An RJ link identifies journal definitions that define the source and target journals, primary and secondary transfer definitions for the communications path used by MIMIX, and whether the i5/OS remote journal function sends journal entries asynchronously or synchronously. When a data group is added, the ADDRJLNK command is run automatically, using the transfer definition defined in the data group.

The naming conventions used within definitions are described in “Multi-part naming convention” on page 27.

Data group entries

Data group entries are part of the MIMIX environment that must exist on each system in a MIMIX installation. MIMIX uses the data group entries that you create during configuration to determine whether or not a journal entry should be replicated.

• Data group file entry This type of data group entry identifies the location of a database file to be replicated and what its name and location will be on the target system. Within a file entry, you can override the default file entry options defined for the data group. MIMIX only replicates transactions for physical files because a physical file contains the actual data stored in members. MIMIX supports both positional and keyed access paths for accessing records stored in a physical file.

• Data group object entries This type of entry allows you to identify library-based objects for replication. Examples of library-based objects include programs, user profiles, message queues, and non-journaled database files. To select these types of objects for replication, you select individual objects or groups of objects by specific or generic object and library name, and by object type. Optionally, for files, you can specify an extended object attribute such as PF-DTA or DSPF.


• Data group IFS entries This type of entry allows you to identify integrated file system (IFS) objects for replication. IFS objects, such as directories and stream files, reside in directories in a manner similar to DOS or UNIX files. You can select IFS objects for replication by specific or generic path name.

• Data group DLO entries This type of entry allows you to identify document library objects (DLOs) for replication. DLOs are documents and folders. They are contained in folders (except for first-level folders). To select DLOs for replication you select individual DLOs by specific or generic folder and DLO name, and owner.

• Data group data area entries This type of entry allows you to define a data area for replication by the data area polling process. However, the preferred way to replicate data areas is to use advanced journaling.

A single data group can contain any combination of these types of data group entries. If your license is for only one of the MIMIX products rather than for MIMIX ha1 or MIMIX ha Lite, only the entries associated with the product to which you are licensed will be processed for replication.


Journal receiver management

Parameters in journal definition commands determine how change management and delete management are performed on the journal receivers used by the replication process. Shipped default values result in the recommended behavior of allowing MIMIX to perform change management and delete management.

Change management - The Receiver change management (CHGMGT) parameter controls how the journal receivers are changed. The recommended value *TIMESIZE results in MIMIX changing the journal receiver by both threshold size and time of day.

Additional parameters in the journal definition control the size at which to change (THRESHOLD), the time of day to change (TIME), and when to reset the receiver sequence number (RESETTHLD). The conditions specified in these parameters must be met before change management can occur. For additional information, see “Tips for journal definition parameters” on page 201.
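As a sketch of how these change management parameters fit together, the following shows a journal definition being updated with the recommended value and its related thresholds. This is illustrative only: the Change Journal Definition (CHGJRNDFN) command name, the JRNDFN keyword, and all values shown are assumptions, while CHGMGT, THRESHOLD, TIME, and RESETTHLD are the parameters described above.

```
/* Sketch only: placeholder definition name and values; consult   */
/* the command prompt (F4) for actual value formats and units.    */
CHGJRNDFN JRNDFN(MYJRNDFN) CHGMGT(*TIMESIZE) +
          THRESHOLD(1500000) TIME(0200) RESETTHLD(9999)
```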

If you do not use the recommended value *TIMESIZE for CHGMGT, consider the following:

• When you specify *TIMESYS, the system manages the receiver by size and during IPLs, and MIMIX changes the receiver at the specified time of day.

Note: The value *TIME can be specified with *SIZE or *SYSTEM to achieve the same results as *TIMESIZE or *TIMESYS, respectively.

• When you specify *NONE, MIMIX does not handle changing the journal receivers. You must ensure that the system or another application performs change management to prevent the journal receivers from overflowing.

• When you allow the system to perform change management (*SYSTEM) and the attached journal receiver reaches its threshold, the system detaches the journal receiver and creates and attaches a new journal receiver. During an initial program load (IPL), the system creates and attaches a new journal receiver. During normal IPLs and most abnormal IPLs, the journal sequence number may be reset.

In a remote journaling configuration, MIMIX recognizes remote journals and ignores change management for the remote journals. The remote journal receiver is changed automatically by the i5/OS remote journal function when the receiver on the source system is changed. You can specify in the source journal definition whether to have receiver change management performed by the system or by MIMIX. Any change management values you specify for the target journal definition are ignored.

You can also customize how MIMIX performs journal receiver change management through the use of exit programs. For more information, see “Working with journal receiver management user exit points” on page 541.

Delete management - The Receiver delete management (DLTMGT) parameter controls how the journal receivers used for replication are deleted. It is strongly recommended that you use the value *YES to allow MIMIX to perform delete management.

When MIMIX performs delete management, the journal receivers are deleted only after MIMIX is finished with them and all other criteria specified in the journal definition are met. The criteria include how long to retain unsaved journal receivers (KEEPUNSAV), how many detached journal receivers to keep (KEEPRCVCNT), and how long to keep detached journal receivers (KEEPJRNRCV).
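Taken together, the delete management criteria might appear on a journal definition as in this hedged sketch. The CHGJRNDFN command name, the JRNDFN keyword, and all values are assumptions; DLTMGT, KEEPUNSAV, KEEPRCVCNT, and KEEPJRNRCV are the parameters just described.

```
/* Sketch only: MIMIX performs delete management, constrained by    */
/* placeholder retention values (see the parameter help for units). */
CHGJRNDFN JRNDFN(MYJRNDFN) DLTMGT(*YES) +
          KEEPUNSAV(1) KEEPRCVCNT(2) KEEPJRNRCV(7)
```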

Note: If more than one MIMIX installation uses the same journal, the journal manager for each installation can delete the journal receiver regardless of whether the other installations are finished with it. In this scenario, you need to use the journal receiver delete management exit points to control deleting the journal receiver. For more information, see “Working with journal receiver management user exit points” on page 541.

Delete management of the source and target receivers occurs independently for each. It is highly recommended that you configure the journal definitions to have MIMIX perform journal receiver delete management. The i5/OS remote journal function does not allow a receiver to be deleted until it is replicated from the local journal (source) to the remote journal (target). When MIMIX manages deletion, a target journal receiver cannot be deleted until it is processed by the database reader (DBRDR) process and it meets the other criteria defined in the journal definition.

If you choose to manage journal receivers yourself, you need to ensure that journal receivers are not removed before MIMIX has finished processing them. MIMIX operations can be affected if you allow the system to handle delete management. For example, the system may delete a journal receiver before MIMIX has completed its use.

Interaction with other products that manage receivers

If you run MIMIX replicate1 on the same System i5 as MIMIX ha1 (or MIMIX ha Lite), there may be considerations for journal receiver management.

Although both MIMIX replicate1 and MIMIX ha1 support receiver change management, you need to choose only one product to perform change management activities for a specific journal. If you choose MIMIX replicate1, your MIMIX ha1 journal definition should specify CHGMGT(*NONE). If you choose MIMIX ha1, see the change management discussion earlier in this topic for the available options that can be specified in the journal definition, including system-managed receivers.

If both products scrape from the same journal, perform delete management only from MIMIX replicate1. This prevents MIMIX ha1 from deleting receivers before MIMIX replicate1 is finished with them. The journal definition within MIMIX ha1 should specify DLTMGT(*NO).
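When MIMIX replicate1 performs both activities for a shared journal, the two settings described above would appear together in the MIMIX ha1 journal definition, as in this sketch (the CHGJRNDFN command name and the definition name are assumptions):

```
/* Sketch only: disable both change and delete management in the     */
/* MIMIX ha1 journal definition for a journal managed by replicate1. */
CHGJRNDFN JRNDFN(MYJRNDFN) CHGMGT(*NONE) DLTMGT(*NO)
```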

Processing from an earlier journal receiver

It is possible to have a situation where the operating system attempts to retransmit journal receivers that already exist on the target system. When this situation occurs, the remote journal function ends with an error and transmission of entries to the target system stops. This can occur in the following scenarios:

• When performing a clear pending start of the data group while also specifying a sequence number that is earlier in the journal stream than the last processed sequence number

• When starting a data group while specifying a database journal receiver that is earlier in the receiver chain than the last processed receiver.


For example, refer to Figure 2. Replication ended while processing journal entries in target receiver 2. Target journal receiver 1 is deleted through the configured delete management options. If the data group is started (STRDG) with a starting journal sequence number for an entry that is in journal receiver 1, the remote journal function attempts to retransmit source journal receivers 1 through 4, beginning with receiver 1. However, receiver 2 already exists on the target system. When the operating system encounters receiver 2, an error occurs and the transmission to the target system ends.

You can prevent this situation before starting that data group if you delete any target journal receivers following the receiver that will be used as the starting point. If you encounter the problem, recovery is simply to remove the target journal receivers and let remote journaling resend them. In this example, deleting target receiver 2 would prevent or resolve the problem.

Figure 2. Example of processing from an earlier journal receiver.

Considerations when journaling on target

The default behavior for MIMIX is to have journaling enabled on the target systems for the target files. After a transaction is applied to the target system, MIMIX writes the journal entry to a separate journal on the target system. This journaling on the target system makes it easier and faster to start replication from the backup system following a switch. As part of the switch processing, the journal receiver is changed before the data group is started.

In a remote journaling environment, these additional journal receivers can become stranded on the backup system following a switch. When starting a data group after a switch, the i5/OS remote journal function begins transmitting journal entries from the just changed journal receiver. Because the backup system is now temporarily acting as the source system, the remote journal function interprets any earlier receivers as unprocessed source journal receivers and prevents them from being deleted.

To remove these stranded journal receivers, you need to use the IBM command DLTJRNRCV with *IGNTGTRCV specified as the value of the DLTOPT parameter.
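For example, a stranded target receiver could be removed as follows. DLTJRNRCV and DLTOPT(*IGNTGTRCV) are the IBM command and value named above; the library and receiver names are placeholders:

```
/* Remove a stranded receiver left on the backup system after a switch. */
DLTJRNRCV JRNRCV(MYLIB/MYRCV0001) DLTOPT(*IGNTGTRCV)
```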


Operational overview

Before replication can begin, the following requirements must be met through the installation and configuration processes:

• MIMIX software must be installed on each system in the MIMIX installation.

• At least one communication link must be in place for each pair of systems between which replication will occur.

• The MIMIX operating environment must be configured and be available on each system.

• Journaling must be active for the database files and objects configured for user journal replication.

• For objects to be replicated from the system journal, the object auditing environment must be set up.

• The files and objects must be initially synchronized between the systems participating in replication.

Once MIMIX is configured and files and objects are synchronized, day-to-day operations for MIMIX can be performed from either the web-based MIMIX Availability Manager or from a 5250 emulator for a System i5.

MIMIX Availability Manager is easy to use and preferable for daily operations. Newer MIMIX functions may only be available through this user interface. Through preferences, individuals can customize which systems, installations, and data groups to monitor.

Support for starting and ending replication

MIMIX Availability Manager and the 5250 emulator can be used to start and end replication. In the following paragraphs, only 5250 command names are used for simplicity. The corresponding windows have the same names as the commands to which they pass information.

The Start MIMIX (STRMMX) and End MIMIX (ENDMMX) commands provide the ability to start and end all elements of a MIMIX environment. These commands include MIMIX services and manager jobs, all replication jobs for all data groups, as well as the master monitor and jobs that are associated with it. While other commands are available to perform these functions individually, the STRMMX and ENDMMX commands are preferred because they ensure that processes are started or ended in the appropriate order.
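In their simplest form, the environment-wide commands can be issued as shown in this sketch; any additional parameters the commands may accept are not described here and are not shown:

```
/* Start all elements of the MIMIX environment in the proper order. */
STRMMX
/* ... later, end all elements in the proper order.                 */
ENDMMX
```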

The Start Data Group (STRDG) and End Data Group (ENDDG) commands operate at the data group level to control replication processes. These commands provide the flexibility to start or end selected processes and apply sessions associated with a data group, which can be helpful for balancing workload or resolving problems.
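A data group level sketch follows. The DGDFN keyword and the three-part placeholder name are assumptions, modeled on the multi-part naming convention referenced earlier; the selective process and apply session parameters mentioned above are not shown:

```
/* Sketch only: start, then end, replication for one data group.    */
STRDG DGDFN(MYDGDFN SYSTEM1 SYSTEM2)
ENDDG DGDFN(MYDGDFN SYSTEM1 SYSTEM2)
```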

For more information about both sets of commands, see the Using MIMIX book.


Support for checking installation status

Only MIMIX Availability Manager provides the ability to monitor multiple installations of MIMIX at once from a single interface. Status from each installation ‘bubbles up’ to the Enterprise View, where you can quickly see whether a problem exists on the systems you are monitoring. Status icons and flyover text start the problem resolution process by guiding you to the appropriate action for the most severe problem present.

In the 5250 emulator, the MIMIX Availability Status display reports the prioritized status of a single installation. Status from the installation is reported in three areas: Replication, Audits and Notification, and Services. Color and informational messages identify the most severe problem present in an area and identify the action to take to start problem isolation.

Support for automatically detecting and resolving problems

The functions provided by MIMIX AutoGuard are fully integrated into MIMIX user interfaces.

Audits: MIMIX ships with a set of audits and associated audit monitors that are automatically scheduled to run daily. These audits check for common problems and automatically correct any detected problems within a data group. Audits can also be invoked manually, and automatic recovery can optionally be disabled. The Work with Audits (WRKAUD) display provides a summary view for audit status and a compliance view for adherence to auditing best practices. Similar windows exist in MIMIX Availability Manager.

Error recovery during replication: MIMIX AutoGuard also provides the ability to have MIMIX check for and correct common problems during user journal and system journal replication that would otherwise cause a replication error. Automatic recovery can be optionally disabled. Problems that cannot be resolved are reported like any other replication error.

For detailed information about MIMIX AutoGuard, refer to the Using MIMIX book.

Support for working with data groups

Data groups are central to performing day-to-day operations. The Data Group Status window in MIMIX Availability Manager and the Work with Data Groups (WRKDG) display provide status of replication jobs and indication of any replication errors for the data groups within an installation. Status icons or highlighted text indicates whether problems exist. Many options are available for taking action at the data group level and for drilling into detailed status information.

Detailed status: When checking detailed status for a data group, MIMIX Availability Manager provides significant benefits over 5250 emulator commands.

From a 5250 emulator, the DSPDGSTS command (option 8 from the Work with Data Groups display) accesses the Data Group Status display. The initial view summarizes replication errors and the status of user journal (database) and system journal (object) processes for both source and target systems. By using function keys, you can display additional detailed views of only database or only object status.


When you choose to display detailed status for a data group from MIMIX Availability Manager, the highest priority problem that exists for the data group determines which of several possible views of the Data Group Details window will be displayed. You can often take action to resolve problems directly from these detailed status windows.

Data Group Details - Status This window identifies all of the replication jobs and services jobs needed by the data group and provides their status. Similar information is available from the merged view of the Data Group Status display.

Data Group Details - User Journal This window represents replication performed by user journal replication processes, including journaled files, IFS objects, data areas, and data queues. It includes information about the replication of user journal transactions, including journal progress, performance, and recent activity. Similar information is available from database views of the Data Group Status display.

Data Group Details - System Journal This window represents replication performed by system journal replication processes, including journal progress, performance, and recent activity. Similar information is available from object views of the Data Group Status display.

Data Group Details - Activity This window summarizes activity for the selected data group that is experiencing replication problems. Problems are grouped by type of activity: File, Object, IFS Tracking, or Object Tracking. This window displays only one type of problem at a time, based on the activity type selected from the navigation bar. Similar information is available in the 5250 emulator when you use the following options from the Work with Data Groups display: 12=Files not active, 13=Objects in error, 51=IFS trk entries not active, and 53=Obj trk entries not active.

Support for resolving problems

MIMIX includes functions that can assist you in resolving a variety of problems. Depending on the type of problem, some problem resolution tasks may need to be performed from the system where the problem occurs, such as on the source system where the journal resides or on the target system if the problem is related to the apply process. MIMIX will direct you to the correct system when this is required.

MIMIX Availability Manager provides superior assistance for problem resolution. Action lists include only the appropriate choices for the problem and only those available from the system you are viewing.

Object activity: The Work with Data Group Activity (WRKDGACT) command allows you to track system journal replication activity associated with a data group. You can see the object, DLO, IFS, and spooled file activity, which can help you determine the cause of an error. You can also see an error view that identifies the reason why the object is in error. Options on the Work with Data Group Activity display allow you to see messages associated with an entry, synchronize the entry between systems, and remove a failed entry with or without related entries.

MIMIX Availability Manager provides similar capabilities to those of WRKDGACT from the following windows: Data Group Details - System Journal, Data Group Details - Activity, and Object Activity Details. Default filtering options in MIMIX Availability Manager only display problems with replicating objects from the system journal.

Failed requests: During normal processing, system journal replication processes may encounter object requests that cannot be processed due to an error. Often the error is due to a transient condition, such as when an object is in use by another process at the time the object retrieve process attempts to gather the object data. Although MIMIX will attempt some automatic retries, requests may still result in a Failed status. In many cases, failed entries can be resubmitted and they will succeed. Some errors may require user intervention, such as a never-ending process that holds a lock on the object.

MIMIX is shipped with the MIMIX Retry Monitor (#RTYDGACTE), which runs periodically and automatically resubmits all failed activity entries for all data groups. To use this monitor, you must manually enable it and then start it, using options on the Work with Monitors (WRKMON) display. If your environment results in numerous transient failed entries, it is recommended that you use the #RTYDGACTE monitor.

You can manually request that MIMIX retry processing for a data group activity entry that has a status of *FAILED. These entries can be viewed using the Work with Data Group Activity (WRKDGACT) command. From the Work with Data Group Activity or Work with Data Group Activity Entries displays, you can use the retry option to resubmit individual failed entries or all of the entries for an object. This option calls the Retry Data Group Activity Entries (RTYDGACTE) command. From the Work with Data Group Activity display, you can also specify a time at which to start the request, thereby delaying the retry attempt until a time when it is more likely to succeed.
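A manual retry of failed entries might look like this sketch. RTYDGACTE is the command named above; the DGDFN keyword and the placeholder three-part data group name are assumptions:

```
/* Sketch only: resubmit failed activity entries for a data group. */
RTYDGACTE DGDFN(MYDGDFN SYSTEM1 SYSTEM2)
```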

MIMIX Availability Manager supports manually retrying activities from appropriate windows by providing Retry as an available action in the Action List.

Files on hold: When the database apply process detects a data synchronization problem, it places the file (individual member) on “error hold” and logs an error. File entries are in held status when an error is preventing them from being applied to the target system. You need to analyze the cause of the problem in order to determine how to correct and release the file and ensure that the problem does not occur again.

An option on the Work with Data Groups display provides quick access to the subset of file entries that are in error for a data group. From the Work with DG File Entries display, you can see the status of an entry and use a number of options to assist in resolving the error. An alternative view shows the database error code and journal code. Available options include access to the Work with DG Files on Hold (WRKDGFEHLD) command. The WRKDGFEHLD command allows you to work with file entries that are in a held status. You can view and work with the entry for which the error was detected and work with all other entries following the entry in error.

MIMIX Availability Manager provides similar capabilities to those of WRKDGFEHLD from the following windows: Data Group Details - User Journal, Data Group Details - Activity, and File Activity Details. Default filtering options in MIMIX Availability Manager only display problems with replicating objects from the user journal.

Journal analysis: With user journal replication, when the system that is the source of replicated data fails, it is possible that some of the generated journal entries may not have been transmitted to or received by the target system. However, it is not always possible to determine this until the failed system has been recovered. Even if the failed system is recovered, damage to a disk unit or to the journal itself may prevent an accurate analysis of any missed data. Once the source system is available again, if there is no damage to the disk unit or journal and its associated journal receivers, you can use the journal analysis function to help determine what journal entries may have been missed and to which files the data belongs. You can only perform journal analysis on the system where a journal resides.

Support for switching a data group

Typically, you perform a switch using the MIMIX Switch Assistant or by using commands to call a customized implementation of MIMIX Model Switch Framework. In either case, the Switch Data Group (SWTDG) command is called programmatically to change the direction in which replication occurs between systems defined to a data group. The SWTDG command supports both planned and unplanned switches.

In a planned switch, you are purposely changing the direction of replication for any of a variety of reasons. You may need to take the system offline to perform maintenance on its hardware or software, or you may be testing your disaster recovery plan. In a planned switch, the production system (the source of replication) is available. When you perform a planned switch, data group processing is ended on both the source and target systems. The next time you start the data group, it will be set to replicate in the opposite direction.

In an unplanned switch, you are changing the direction of replication as a response to a problem. Most likely the production system is no longer available. When you perform an unplanned switch, you must run the SWTDG command from the target system. Data group processing is ended on the target system. The next time you start the data group, it will be set to replicate in the opposite direction.

To enable a switchable data group to function properly for default user journal replication processes, four journal definitions (two RJ links) are required. “Journal definition considerations” on page 205 contains examples of how to set up these journal definitions.

You can specify whether to end the RJ link during a switch. Default behavior for a planned switch is to leave the RJ link running. Default behavior during an unplanned switch is to end the RJ link. Once you have a properly configured data group that supports switching, you should be aware of how MIMIX supports unconfirmed entries and the state of the RJ link following a switch. For more information, see “Support for unconfirmed entries during a switch” on page 70 and “RJ link considerations when switching” on page 70.

For additional information about switching, see the Using MIMIX book. For additional information about MIMIX Model Switch Framework, see the Using MIMIX Monitor book.

Support for working with messages

MIMIX sends a variety of system messages based on the status of MIMIX jobs and processes. You can view messages generated by MIMIX from either the Message Log window or from the Work with Message Log (WRKMSGLOG) display.


These messages are sent to both the primary and secondary message queues that are specified for the system definition.

In addition to these message queues, message entries are recorded in a MIMIX message log file. The MIMIX message log provides a powerful tool for problem determination. Maintaining a message log file allows you to keep a record of messages issued by MIMIX as an audit trail. In addition, the message log provides robust subset and filter capabilities, the ability to locate and display related job logs, and a powerful debug tool. When messages are issued, they are initially sent to the specified primary and secondary message queues. If those message queues are cleared, the message log file preserves a second level of information concerning MIMIX operations.

The message log on the management system contains messages from the management system and each network system defined within the installation. The system manager is responsible for collecting messages from all network systems. On a network system, the message log contains only those messages generated by MIMIX activity on that system.

MIMIX automatically performs cleanup of the message log on a regular basis. The system manager deletes entries from the message log file based on the value of the Keep system history parameter in the system definition. However, if you process an unusually high volume of replicated data, you may want to also periodically delete unnecessary message log entries since the file grows in size depending on the number of messages issued in a day.


Chapter 2 Replication process overview

In general terms, a replication path is a series of processes that, together, represent the critical path on which data to be replicated moves from its origin to its destination.

MIMIX uses two replication paths to accommodate differences in how replication occurs for databases and objects. These paths operate with configurable levels of cooperation or can operate independently.

• The user journal replication path captures changes to critical files and objects configured for replication through the user journal using the i5/OS remote journaling function. In previous versions, MIMIX DB2 Replicator provided this function.

• The system journal replication path handles replication of critical system objects (such as user profiles or spooled files), integrated file system (IFS) objects, and document library objects (DLOs) using the i5/OS system journal. In previous versions, MIMIX Object Replicator provided this function.

Configuration choices determine the degree of cooperative processing used between the system journal and user journal replication paths when replicating files, IFS objects, data areas, and data queues.

Within each replication path, MIMIX uses a series of processes. This chapter describes the replication paths and the processes used in each.

The topics in this chapter include:

• “Replication job and supporting job names” on page 47 describes the replication paths for database and object information. Included is a table which identifies the replication job names for each of the processes that make up the replication path.

• “Cooperative processing introduction” on page 50 describes three variations available for performing replication activities using a coordinated effort between user journal processing and system journal processing.

• “System journal replication” on page 53 describes the system journal replication path which is designed to handle the object-related availability needs of your system through system journal processing.

• “User journal replication” on page 61 describes remote journaling and the benefits of using remote journaling with MIMIX.

• “User journal replication of IFS objects, data areas, data queues” on page 72 describes a technique which allows replication of changed data for certain object types through the user journal.

• “Lesser-used processes for user journal replication” on page 76 describes two lesser-used replication processes: MIMIX source-send processing for database replication and the data area poller process.


Replication job and supporting job names

The replication path for database information includes the i5/OS remote journal function, the MIMIX database reader process, and one or more database apply processes. If MIMIX source-send processes are used instead of remote journaling, then the processes include the database send process, the database receive process, and one or more database apply processes.

The replication path for object information includes the object send process, the object receive process, and the object apply process. When a data retrieval request is replicated, the replication path also includes the object retrieve, container send, and container receive processes. A data retrieval request is an operation that creates or changes the content of an object. A self-contained request is an operation that deletes, moves, or renames an object, or that changes the authority or ownership of an object.

Table 3 identifies the job names for each of the processes that make up the replication path. Except as noted, MIMIX automatically restarts the jobs in Table 3 to maintain the MIMIX environment. The default is to restart these MIMIX jobs daily at midnight (12:00 a.m.). If this time conflicts with scheduled workloads, you can configure a different time to restart the jobs. For more information, see “Configuring restart times for MIMIX jobs” on page 313.

Table 3. MIMIX processes and their corresponding job names (updated for 5.0.10.00, 5.0.11.00, and 5.0.12.00)

Abbreviation  Description                      Runs on           Job name      Notes
CNRRCV        Container receive process        Target            sdn_CNRRCV    1, 3
CNRSND        Container send process           Source            sdn_CNRSND    1, 3
DAPOLL        Data area polling                Source            sdn_DAPOLL    3
DBAPY         Database apply process           Target            sdn_DBAPYs    3, 4
DBRCV         Database receive process         Target            sdn_DBRCV     1, 3
DBRDR         Database reader                  Target            sdn_DBRDR     3
DBSND         Database send process            Source            sdn_DBSND     1, 3
JRNMGR        Journal manager                  System            JRNMGR        --
MXCOMMD       MIMIX Communications Daemon      System            MXCOMMD       --
MXOBJSELPR    Object selection process         System            MXOBJSELPR    --
OBJAPY        Object apply process             Target            sdn_OBJAPY    3
OBJRTV        Object retrieve process          Source            sdn_OBJRTV    1, 3
OBJSND        Object send process              Source            sdn_OBJSND    1, 3
OBJRCV        Object receive process           Target            sdn_OBJRCV    1, 3
STSSND        Status send                      Target            sdn_STSSND    1, 3
STSRCV        Status receive                   Source            sdn_STSRCV    1, 3
SYSMGR        System manager                   System            SM********    1, 2
SYSMGRRCV     System manager receive process   Network           SR********    1, 2
TEUPD         Tracking entry update process    Source or Target  sdn_TEUPD     3, 5

Notes:
1. Send and receive processes depend on communication. The job name varies depending on the transfer protocol. OptiConnect job names start with APIA* in the QSOC subsystem. The SNA job name is derived from the remote location name. TCP/IP uses the port number or alias as the job name; the alias is defined on the service table entry.
2. The system manager runs on both source and target systems. The ******** in the job name format indicates the name of the system definition.
3. The characters sdn in a job name indicate the short data group name.
4. The character s is the apply session letter.
5. The job is used only for replication with advanced journaling and is started only when needed.
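The naming patterns in Table 3 can be sketched as a small helper. This is illustrative Python, not part of MIMIX; it only encodes the prefix and suffix rules from the table notes:

```python
def replication_job_name(abbrev: str, sdn: str = "", apply_session: str = "") -> str:
    """Build a replication job name following the patterns in Table 3.

    abbrev        -- process abbreviation, e.g. "OBJSND" or "DBAPY"
    sdn           -- short data group name (note 3): prefixes replication jobs
    apply_session -- apply session letter (note 4): appended for DBAPY jobs
    """
    # System-level jobs run under their own abbreviation, with no prefix.
    if abbrev in ("JRNMGR", "MXCOMMD", "MXOBJSELPR"):
        return abbrev
    # System manager jobs (SM/SR + system definition name, note 2) are not
    # modeled by this sketch.
    if abbrev in ("SYSMGR", "SYSMGRRCV"):
        raise ValueError("system manager job names derive from the system definition")
    name = f"{sdn}_{abbrev}"
    if abbrev == "DBAPY":
        name += apply_session  # e.g. DG1_DBAPYA for apply session A
    return name

print(replication_job_name("OBJSND", "DG1"))      # DG1_OBJSND
print(replication_job_name("DBAPY", "DG1", "A"))  # DG1_DBAPYA
```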


Cooperative processing introduction

Cooperative processing occurs when the MIMIX user journal processes and system journal processes work in a coordinated effort to perform replication activities for certain object types.

When configured, cooperative processing enables MIMIX to perform replication in the most efficient way by evaluating the object type and the MIMIX configuration to determine whether to use the system journal replication processes, user journal replication processes, or a combination of both. Cooperative processing also provides a greater level of data protection, data management efficiency, and high availability by ensuring the complete replication of newly created or redefined files and objects.

Object types that can be journaled to a user journal are eligible to be processed cooperatively when properly configured to MIMIX. MIMIX supports the following variations of cooperative processing for these object types:

• MIMIX Dynamic Apply (files)

• Legacy cooperative processing (files)

• Advanced journaling (IFS objects, data areas, and data queues).

When a data group definition meets the requirements for MIMIX Dynamic Apply, any logical files and physical (source and data) files properly identified for cooperative processing will be processed via MIMIX Dynamic Apply unless a known restriction prevents it.

When a data group definition does not meet the requirements for MIMIX Dynamic Apply but still meets legacy cooperative processing requirements, any PF-DTA or PF38-DTA files properly configured for cooperative processing will be replicated using legacy cooperative processing. All other types of files are processed using system journal replication.

IFS objects, data areas, and data queues that can be journaled are not automatically configured for advanced journaling; these object types must be manually configured to use it.
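The selection among these variations can be summarized in a sketch. The function and flag names below are hypothetical, chosen for illustration; they are not actual MIMIX parameters:

```python
def replication_variation(obj_type, attr="", dg_supports_dynamic_apply=False,
                          dg_supports_legacy=False, adv_journaling_configured=False):
    """Pick the replication variation for a properly configured object.

    Mirrors the rules above: MIMIX Dynamic Apply for files when the data
    group qualifies, legacy cooperative processing for PF-DTA/PF38-DTA
    files otherwise, and opt-in advanced journaling for IFS objects,
    data areas, and data queues.
    """
    if obj_type == "*FILE":
        if dg_supports_dynamic_apply:
            return "MIMIX Dynamic Apply"
        if dg_supports_legacy and attr in ("PF-DTA", "PF38-DTA"):
            return "legacy cooperative processing"
        # All other file types fall back to system journal replication.
        return "system journal replication"
    if obj_type in ("*DIR", "*STMF", "*DTAARA", "*DTAQ"):
        # These object types use advanced journaling only when explicitly
        # configured for it.
        if adv_journaling_configured:
            return "advanced journaling"
    return "system journal replication"
```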

In all variations of cooperative processing, the system journal is used to replicate the following operations:

• The creation of new objects that do not deposit an entry in a user journal when they are created.

• Restores of objects on the source system

• Move and rename operations from a non-replicated library or path into a library or path that is configured for replication.

MIMIX Dynamic Apply

Most environments can take advantage of cooperatively processed operations for *FILE objects that are journaled primarily through a user (database) journal. MIMIX Dynamic Apply is the most efficient way to perform cooperative processing of logical and physical files. MIMIX Dynamic Apply intelligently handles files with relationships by assigning them to the same or appropriate apply sessions. It also maintains the data integrity of replicated objects better than legacy cooperative processing, which was previously needed to replicate operations such as creates, deletes, moves, and renames. Another benefit of MIMIX Dynamic Apply is more efficient hold log processing, which enables multiple files to be processed through a hold log instead of just one file at a time.

New data groups created with the shipped default configuration values are configured to use MIMIX Dynamic Apply. This configuration requires data group object entries and data group file entries.

For more information, see “Identifying logical and physical files for replication” on page 105 and “Requirements and limitations of MIMIX Dynamic Apply” on page 110.

Legacy cooperative processing

In legacy cooperative processing, record and member operations of *FILE objects are replicated through user journal processes, while all other transactions are replicated through system journal processes. Legacy cooperative processing supports only data files (PF-DTA and PF38-DTA).

Data groups that existed prior to upgrading to MIMIX version 5 are typically configured with legacy cooperative processing which requires data group object entries and data group file entries.

It is recommended to use MIMIX Dynamic Apply for cooperative processing. Existing data groups configured to use legacy cooperative processing can be converted to use MIMIX Dynamic Apply. For more information, see “Requirements and limitations of legacy cooperative processing” on page 111.

Advanced journaling

The term advanced journaling refers to journaled IFS objects, data areas, or data queues that are configured for cooperative processing. When these objects are configured for cooperative processing, replication of changed bytes of the journaled objects’ data occurs through the user journal. This is more efficient than replicating an entire object through the system journal each time changes occur.

Such a configuration also allows for the serialization of updates to IFS objects, data areas, and data queues with database journal entries. In addition, processing time for these object types may be reduced, even for equal amounts of data, as user journal replication eliminates the separate save, send, and restore processes necessary for system replication.

Frequently you will see the phrase “user journal replication of IFS objects, data areas, and data queues” used interchangeably with the term advanced journaling. These terms are the same.

For more information, see “User journal replication of IFS objects, data areas, data queues” on page 72 and “Planning for journaled IFS objects, data areas, and data queues” on page 85.


System journal replication

The system journal replication path is designed to handle the object-related availability needs of your system. You identify the critical system objects that you want to replicate, such as user profiles, programs, and DLOs. MIMIX uses the journal entries generated by the operating system’s object auditing function to identify the changes to objects on production systems and replicates the changes to backup systems.

If you are not already using the system’s security audit journal (QAUDJRN, or system journal), when you use MIMIX commands to build the journaling environment, MIMIX creates the journal and correctly sets system values related to auditing. MIMIX checks the settings of the following system values, making changes as necessary:

• QAUDLVL (Security auditing level) system value. MIMIX sets the values *CREATE, *DELETE, *OBJMGT, and *SAVRST. MIMIX checks for values *SECURITY, *SECCFG, *SECRUN, and *SECVLDL and will set them only if the value *SECURITY is not already set. If any data group is configured to replicate spooled files, MIMIX also sets *SPLFDTA and *PRTDTA.

• QAUDCTL (Auditing control) system value. MIMIX sets the values *OBJAUD and *AUDLVL.

These system value settings, along with the object audit value of each object, control what journal entries are created in the system journal (QAUDJRN) for an object.
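The QAUDLVL rules above can be expressed as a short sketch. This hypothetical helper is for illustration only; it computes which values would need to be added, not how MIMIX actually applies them:

```python
def qaudlvl_values_to_set(current, replicates_spooled_files=False):
    """Return the QAUDLVL values MIMIX would add, per the rules above.

    MIMIX always ensures *CREATE, *DELETE, *OBJMGT, and *SAVRST. The four
    security-related values are added only when *SECURITY is not already
    set. Spooled file replication additionally requires *SPLFDTA and
    *PRTDTA.
    """
    current = set(current)
    needed = {"*CREATE", "*DELETE", "*OBJMGT", "*SAVRST"}
    if "*SECURITY" not in current:
        needed |= {"*SECURITY", "*SECCFG", "*SECRUN", "*SECVLDL"}
    if replicates_spooled_files:
        needed |= {"*SPLFDTA", "*PRTDTA"}
    # Only report the values not already present.
    return sorted(needed - current)

print(qaudlvl_values_to_set(["*SECURITY", "*CREATE"]))
```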

If an operation on an object is not represented by an entry in the system journal, MIMIX is not aware of the operation and cannot replicate it.

The system objects you want to replicate are defined to a data group through data group object entries, data group DLO entries, and data group IFS entries. The term name space refers to this collection of objects that are identified for replication by MIMIX using the system journal replication processes.

An object is replicated when it is created, restored, moved, or renamed into the MIMIX name space. While in the MIMIX name space, changes to the object or to the authority settings of the object are also replicated.

Replication through the system journal is event-driven. When a data group is started, each process used in the replication path waits for its predetermined event to occur then begins its activity. The processes are interdependent and run concurrently. The system journal replication path in MIMIX uses the following processes:

• Object send process: alternates between identifying objects to be replicated and transmitting control information about objects ready for replication to the target system.

• Object receive process: receives control information and waits for notification that additional source system processing, if any, is complete before passing the control information to the object apply process.

• Object retrieve process: if any additional information is needed for replication, obtains it and places it in a holding area. This process is also used when additional processing is required on the source system prior to transmission to the target system.


• Container send process: transmits any additional information from a holding area to the target system and notifies the control process of that action.

• Container receive process: receives any additional information and places it into a holding area on the target system.

• Object apply process: replicates objects according to the control information and any required additional information that is retrieved from the holding area.

• Status send process: notifies the source system of the status of the replication.

• Status receive process: updates the status on the source system and, if necessary, passes control information back to the object send process.

MIMIX uses a collection of structures and customized functions for controlling these structures during replication. Collectively the customized functions and structures are referred to as the work log. The structures in the work log consist of log spaces, work lists (implemented as user queues), and a distribution status file.

When a data group is started, MIMIX uses the security audit journal to monitor for activity on objects within the name space. When activity occurs on an object, such as when it is accessed or changed, a corresponding journal entry is created in the security audit journal. As journal entries are added to the journal receiver on the source system, the object send process reads them and determines whether they represent operations on objects within the name space. For each journal entry for an object within the name space, the object send process creates an activity entry in the work log. Creation of an activity entry includes adding the entry to the log space and adding a record to the distribution status file. An activity entry includes a copy of the journal entry and any related information associated with a replication operation for an object, including the status of the entry. User interaction with activity entries is through the Work with Data Group Activity display and the Work with DG Activity Entries display.

There are two categories of activity entries: those that are self-contained and those that require the retrieval of additional information. “Processing self-contained activity entries” on page 54 describes the simplest object replication scenario. “Processing data-retrieval activity entries” on page 55 describes the object replication scenario in which additional data must be retrieved from the source system and sent to the target system.

Processing self-contained activity entries

For a self-contained activity entry, the copied journal entry contains all of the information required to replicate the object. Examples of such journal entries include Change Authority (T-CA), Object Move or Rename (T-OM), and Object Delete (T-DO).

After the object send process determines that an entry is to be replicated, it performs the following actions:

• Sets the status of the entry to PA (pending apply)

• Adds the “sent” date and time to the activity entry

• Writes the activity entry to the log space and adds a record to the distribution status file


• Transmits the activity entry to a corresponding object receive process job on the target system.

The object receive process adds the “received” date and time to the activity entry, writes the activity entry to the log space, adds a record to the distribution status file, and places the activity entry on the object apply work list. Now each system has a copy of the activity entry.

The next available object apply process job for the data group retrieves the activity entry from the object apply work list and replicates the operation represented by the entry. The object apply process adds the “applied” date and time to the activity entry, changes the status of the entry to CP (completed processing), and adds the entry to the status send work list.

The status send process retrieves the activity entry from the status send work list and transmits the updated entry to a corresponding status receive process on the source system. The status receive process updates the activity entry in the work log and the distribution status file.
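The self-contained flow above can be sketched as a sequence of updates to one activity entry. The dictionary field names are illustrative, not MIMIX's actual work-log layout; only the status codes PA and CP appear in the text:

```python
from datetime import datetime, timezone

def process_self_contained(entry):
    """Walk a self-contained activity entry through the stages above."""
    now = lambda: datetime.now(timezone.utc)
    # Object send: set status to pending apply, stamp "sent", transmit.
    entry["status"], entry["sent"] = "PA", now()
    # Object receive: stamp "received"; each system now holds a copy.
    entry["received"] = now()
    # Object apply: replicate the operation, stamp "applied", complete.
    entry["applied"] = now()
    entry["status"] = "CP"  # completed processing
    # Status send/receive: the updated entry flows back to the source.
    return entry

done = process_self_contained({"journal_code": "T-OM"})
print(done["status"])
```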

Processing data-retrieval activity entries

For a data-retrieval activity entry, additional data must be gathered from the object on the source system in order to replicate the operation. The copied journal entry indicates that changes to an object affect the attributes or data of the object. The actual content of the change is not recorded in the journal entry. To properly replicate the object, its content, attributes, or both must be retrieved and transmitted to the target system. MIMIX may retrieve this data by using APIs or by using the appropriate save command for the object type. APIs store the data in one or more user spaces (*USRSPC) in a data library associated with the MIMIX installation. Save commands store the object data in a save file (*SAVF) in the data library. Collectively, these objects in the data library are known as containers.

After the object send process determines that an entry is to be replicated and that additional processing or information on the source system is required, it performs the following actions:

• Sets the status of the entry to PR (pending retrieve)

• Adds the “sent” date and time to the activity entry

• Writes the activity entry to the log space and adds a record to the distribution status file

• Transmits the activity entry to a corresponding object receive process on the target system.

• Adds the entry to the object retrieve work list on the source system.

The object receive process adds the “received” date and time to the activity entry, writes the activity entry to the log space, and adds a record to the distribution status file. Now each system has a copy of the activity entry. The object receive process waits until the source system processing is complete before it adds the activity entry to the object apply work list.


Concurrently, the object send process reads the object send work list. When the object send process finds an activity entry in the object send work list, the object send process performs one or more of the following additional steps on the entry:

• If an object retrieve job packaged the object, the activity entry is routed to the container send work list.

• The activity entry is transmitted to the target system, its status is updated, and a “retrieved” date and time is added to the activity entry.

On the source system the next available object retrieve process for the data group retrieves the activity entry from the object retrieve work list and processes the referenced object. In addition to retrieving additional information for the activity entry, additional processing may be required on the source system. The object retrieve process may perform some or all of the following steps:

• Retrieve the extended attribute of the object. This may be one step in retrieving the object or it may be the primary function required of the retrieve process.

• If necessary, cooperative processing activities, such as adding or removing a data group file entry, are performed.

• The object identified by the activity entry is packaged into a container in the data library. The object retrieve process adds the “retrieved” date and time to the activity entry and changes the status of the entry to “pending send.”

• The activity entry is added to the object send work list. From there the object send job takes the appropriate action for the activity, which may be to send the entry to the target system, add the entry to the container send work list, or both.

The container send and receive processes are only used when an activity entry requires information in addition to what is contained within the journal entry. The next available job for the container send process for the data group retrieves the activity entry from the container send work list and retrieves the container for the packaged object from the data library. The container send job transmits the container to a corresponding job of the container receive process on the target system. The container receive process places the container in a data library on the target system. The container send process waits for confirmation from the container receive job, then adds the “container sent” date and time to the activity entry, changes the status of the activity entry to PA (pending apply), and adds the entry to the object send work list.

The next available object apply process job for the data group retrieves the activity entry from the object apply work list, locates the container for the object in the data library, and replicates the operation represented by the entry. The object apply process adds the “applied” date and time to the activity entry, changes the status of the entry to CP (completed processing), and adds the entry to the status send work list.

The status send process retrieves the activity entry from the status send work list and transmits the updated entry to a corresponding job for status receive process on the source system. The status receive process updates the activity entry in the log space and the distribution status file. If the activity entry requires further processing, such as if an updated container is needed on the target system, the status receive job adds the entry to the object send work list.
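The data-retrieval flow above passes through more states than the self-contained case. The sequence can be sketched as follows; the codes PR and PA appear in the text, while "PS" for the intermediate "pending send" status is an assumed abbreviation, not a documented MIMIX code:

```python
# Status progression for a data-retrieval activity entry, per the steps above.
RETRIEVAL_FLOW = [
    ("PR", "object send: pending retrieve; entry queued to object retrieve work list"),
    ("PS", "object retrieve: object packaged into a container; pending send"),
    ("PA", "container send/receive: container on target system; pending apply"),
    ("CP", "object apply: operation replicated; completed processing"),
]

def next_status(current):
    """Return the status that follows `current`, or None at the end."""
    codes = [code for code, _ in RETRIEVAL_FLOW]
    i = codes.index(current)
    return codes[i + 1] if i + 1 < len(codes) else None

print(next_status("PR"))
```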


Processes with multiple jobs

The object retrieve, container send and receive, and object apply processes all consist of one or more asynchronous jobs. You can specify the minimum and maximum number of asynchronous jobs you want to allow MIMIX to run for each process and a threshold for activating additional jobs. The minimum number indicates how many permanent jobs should be started for the process. These jobs stay active as long as the data group is active.

During periods of peak activity, if more requests are backlogged than are specified in the threshold, additional temporary jobs, up to the maximum number, may also be started. This load leveling feature allows system journal replication processes to react automatically to periodic heavy workloads. By doing this, the replication process stays current with production system activity. When system activity returns to a reduced level, the temporary jobs end after a period of inactivity elapses.
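The load-leveling rule can be sketched as a function of the backlog. The minimum, maximum, and threshold are the configurable values described above; the exact scaling MIMIX uses is not documented here, so one extra job per threshold's worth of backlog is an assumption:

```python
def jobs_to_run(backlog, min_jobs, max_jobs, threshold):
    """Sketch of the load-leveling rule for processes with multiple jobs.

    min_jobs permanent jobs always run; when the backlog exceeds the
    threshold, temporary jobs are added, never exceeding max_jobs.
    """
    if backlog <= threshold:
        return min_jobs
    # Ceiling division: one extra temporary job per threshold of excess.
    extra = (backlog - threshold + threshold - 1) // threshold
    return min(min_jobs + extra, max_jobs)

print(jobs_to_run(250, 2, 6, 100))  # backlog of 250 with threshold 100 -> 4 jobs
```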

Tracking object replication

After you start a data group, you need to monitor the status of the replication processes and respond to any error conditions. Regular monitoring and timely responses to error conditions significantly reduce the amount of time and effort required in the event that you need to switch a data group.

MIMIX provides an indication of high level status of the processes used in object replication and error conditions. You can access detailed status information through the Data Group Status window in MIMIX Availability Manager or the MIMIX Availability Status display in a 5250 emulator.

When an operation cannot complete on either the source or target system (such as when the object is in use by another process and cannot be accessed), the activity entry may go to a failed state. MIMIX attempts to rectify many failures automatically, but some failures require manual intervention. Objects with at least one failed entry outstanding are considered to be “in error.” You should periodically review the objects in error, and the associated failed entries, and determine the appropriate action. You may retry or delete one or all of the failed entries for an object. You can check the progress of activity entries and take corrective action through the Work with Data Group Activity display and the Work with DG Activity Entries display. You can also subset directly to the activity entries in error from the Work with Data Groups display.

If you have new objects to replicate that are not within the MIMIX name space, you need to add data group entries for them. Before any new data group entries can be replicated, you must end and restart the system journal replication processes in order for the changes to take effect.

The system manager removes old activity entries from the work log on each system after the time specified in the system definition passes. The Keep data group history (days) parameter (KEEPDGHST) indicates how long the activity entries remain on the system. You can also manually delete activity entries. Containers in the data libraries are deleted after the time specified in the Keep MIMIX data (days) parameter (KEEPMMXDTA).

Managing object auditing


The system journal replication path within MIMIX relies on entries placed in the system journal by i5/OS object auditing functions. To ensure that objects configured for this replication path retain an object auditing value that supports replication, MIMIX evaluates and changes the objects’ auditing value when necessary.

To do this, MIMIX employs a configuration value that is specified on the Object auditing value (OBJAUD) parameter of data group entries (object, IFS, DLO) configured for the system journal replication path. When MIMIX determines that an object’s auditing value is lower than the configured value, it changes the object to have the higher configured value specified in the data group entry that is the closest match to the object. The OBJAUD parameter supports object audit values of *ALL, *CHANGE, or *NONE.

MIMIX evaluates and may change an object’s auditing value when specific conditions exist during object replication or during processing of a Start Data Group (STRDG) request. This evaluation process can also be invoked manually for all objects identified for replication by a data group.

During replication - MIMIX may change the auditing value during replication when an object is replicated because it was created, restored, moved, or renamed into the MIMIX name space (the group of objects defined to MIMIX).

While starting a data group - MIMIX may change the auditing value while processing a STRDG request if the request specified processes that cause object send (OBJSND) jobs to start and the request occurred after a data group switch or after a configuration change to one or more data group entries (object, IFS, or DLO).

Shipped command defaults for the STRDG command allow MIMIX to set object auditing if necessary. If you would rather set the auditing level for replicated objects yourself, you can specify *NO for the Set object auditing level (SETAUD) parameter when you start data groups.

Invoking manually - The Set Data Group Auditing (SETDGAUD) command provides the ability to manually set the object auditing level of existing objects identified for replication by a data group. When the command is invoked, MIMIX checks the audit value of existing objects identified for system journal replication. Shipped default values on the command cause MIMIX to change the object auditing value of objects to match the configured value when an object’s actual value is lower than the configured value.

The SETDGAUD command is used during initial configuration of a data group. Otherwise, it is not necessary for normal operations and should only be used under the direction of a trained MIMIX support representative.

The SETDGAUD command also supports optionally forcing a change to a configured value that is lower than the existing value through its Force audit value (FORCE) parameter.

Evaluation processing - Regardless of how the object auditing evaluation is invoked, MIMIX may find that an object is identified by more than one data group entry within the same class of object (IFS, DLO, or library-based). It is important to understand the order of precedence for processing data group entries.

Data group entries are processed in order from most generic to most specific. IFS entries are processed using the Unicode character set; object entries and DLO entries are processed using the EBCDIC character set. The first (more generic) entry found that matches the object is used until a more specific match is found.

The entry that most specifically matches the object is used to process the object. If the object has a lower audit value, it is set to the configured auditing value specified in the data group entry that most specifically matches the object.

When MIMIX processes a data group IFS entry and changes the auditing level of objects which match the entry, all of the directories in the object’s directory path are checked and, if necessary, changed to the new auditing value. In the case of an IFS entry with a generic name, all descendents of the IFS object may also have their auditing value changed.

When you change a data group entry, MIMIX updates all objects identified by the same type of data group entry in order to ensure that auditing is set properly for objects identified by multiple entries with different configured auditing values. For example, if a new DLO entry is added to a data group, MIMIX sets object auditing for all objects identified by the data group’s DLO entries, but not for its object entries or IFS entries.
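The precedence and comparison rules above can be sketched for one class of entries. This is illustrative Python: "most specific" is approximated as the longest matching pattern via fnmatch, whereas MIMIX's actual generic-name matching rules are richer:

```python
from fnmatch import fnmatchcase

# Audit values in increasing order of strength, per the OBJAUD parameter.
AUDIT_RANK = {"*NONE": 0, "*CHANGE": 1, "*ALL": 2}

def audit_value_to_apply(obj_path, entries, current_value):
    """Pick the entry that most specifically matches the object and
    decide whether its configured audit value should replace the
    object's current value.

    `entries` is a list of (pattern, configured_value) pairs belonging
    to one class of data group entry (IFS, DLO, or library-based).
    """
    matches = [(p, v) for p, v in entries if fnmatchcase(obj_path, p)]
    if not matches:
        return current_value
    # Approximate "most specific" as the longest matching pattern.
    _, configured = max(matches, key=lambda pv: len(pv[0]))
    # Raise the audit value only when it is lower than the configured one.
    if AUDIT_RANK[current_value] < AUDIT_RANK[configured]:
        return configured
    return current_value

entries = [("/home/*", "*CHANGE"), ("/home/app/*", "*ALL")]
print(audit_value_to_apply("/home/app/data.txt", entries, "*NONE"))  # *ALL
```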

For more information and examples of setting auditing values with the SETDGAUD command, see “Setting data group auditing values manually” on page 297.


User journal replication

MIMIX Remote Journal support enables MIMIX to take advantage of the cross-journal communications capabilities provided by the i5/OS remote journal function instead of using internal communications. Newly created data groups use remote journaling as the default configuration.

What is remote journaling?

Remote journaling is a function in the i5/OS operating system that allows you to establish journals and journal receivers on a target eServer System i5™ system and associate them with specific journals and journal receivers on a source system. After the journals and journal receivers are established on both systems, the remote journal function can replicate journal entries from the source system to the journals and journal receivers located on the target system.

The remote journal function supports both synchronous and asynchronous modes of operation. More information about the benefits and implications of each mode can be found in topic “Overview of IBM processing of remote journals” on page 63.

You should become familiar with the terminology used by the i5/OS remote journal function. The Backup and Recovery and Journal management books are good sources for terminology and for information about considerations you should be aware of when you use remote journaling. The IBM redbooks AS/400 Remote Journal Function for High Availability and Data Replication (SG24-5189) and Striving for Optimal Journal Performance on DB2 Universal Database for iSeries (SG24-6286) provide an excellent overview of remote journaling in a high availability environment. You can find these books online at the IBM eServer iSeries Information Center.

Benefits of using remote journaling with MIMIX

MIMIX has internal send and receive processing as part of its architecture. IBM added the remote journal function to the System i5 within the licensed internal code layer of OS/400 in its V4R3 release. Moving cross-journal communications into the licensed internal code provides greater System i5 integration and efficiency. The MIMIX Remote Journal support allows MIMIX to take advantage of the cross-journal communications functions provided by the i5/OS remote journal function instead of using the internal communications provided by MIMIX. As stated in the AS/400 Remote Journal Function for High Availability and Data Replication redbook:

“The benefits of remote journal function include:

• It lowers the CPU consumption on the source machine by shifting the processing required to receive the journal entries from the source system to the target system. This is true when asynchronous delivery is selected.

• It eliminates the need to buffer journal entries to a temporary area before transmitting them from the source machine to the target machine. This translates into less disk writes and greater DASD efficiency on the source system.

• Since it is implemented in microcode, it significantly improves the replication performance of journal entries and allows database images to be sent to the target system in realtime. This realtime operation is called the synchronous delivery mode. If the synchronous delivery mode is used, the journal entries are guaranteed to be in main storage on the target system prior to control being returned to the application on the source machine.

• It allows the journal receiver save and restore operations to be moved to the target system. This way, the resource utilization on the source machine can be reduced.”

Restrictions of MIMIX Remote Journal support

The i5/OS remote journal function does not allow writing journal entries directly to the target journal receiver. This restriction severely limits the usefulness of cascading remote journals in a managed availability environment.

MIMIX user journal replication does not support a cascading environment in which remote journal receivers on the target system are also source journal receivers for a third system.

Users who require this type of environment may use multiple installations of MIMIX, implementing apply side journaling in one installation and using remote journaling to replicate the applied transactions to a third system.


Overview of IBM processing of remote journals

Several key concepts within the i5/OS remote journal function are important to understanding its impact on MIMIX replication.

A local-remote journal pair refers to the relationship between a configured source journal and target journal. The key point about a local-remote journal pair is that data flows only in one direction within the pair, from source to target.

When the remote journal function is activated and all journal entries from the source are requested, existing journal entries for the specified journal receiver on the source system which have not already been replicated are replicated as quickly as possible. This is known as catchup mode. Once the existing journal entries are delivered to the target system, the system begins sending new entries in continuous mode according to the delivery mode specified when the remote journal function was started. New journal entries can be delivered either synchronously or asynchronously.

Synchronous delivery

In synchronous delivery mode the target system is updated in real time with journal entries as they are generated by the source applications. The source applications do not continue processing until the journal entries are sent to the target journal.

Each journal entry is first replicated to the target journal receiver in main memory on the target system (1 in Figure 3). When the source system receives notification of the delivery to the target journal receiver, the journal entry is placed in the source journal receiver (2) and the source database is updated (3).

With synchronous delivery, journal entries that have been written to memory on the target system are considered unconfirmed entries until they have been written to auxiliary storage on the source system and confirmation of this is received on the target system (4).

Figure 3. Synchronous mode sequence of activity in the IBM remote journal feature.

Unconfirmed journal entries are entries replicated to a target system but the state of the I/O to auxiliary storage for the same journal entries on the source system is not known. Unconfirmed entries only pertain to remote journals that are maintained synchronously. They are held in the data portion of the target journal receiver. These entries are not processed with other journal entries unless specifically requested or until confirmation of the I/O for the same entries is received from the source system. Confirmation typically is not immediately sent to the target system for performance reasons.

Once the confirmation is received, the entries are considered confirmed journal entries. Confirmed journal entries are entries that have been replicated to the target system and the I/O to auxiliary storage for the same journal entries on the source system is known to have completed.
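The unconfirmed-to-confirmed transition described above can be sketched as follows. This is an illustrative Python model, not MIMIX or i5/OS code; all class and field names are invented for the example.

```python
# Illustrative model of confirmed vs. unconfirmed journal entries in
# synchronous remote journal delivery. Not MIMIX or i5/OS code.

class TargetJournalReceiver:
    def __init__(self):
        self.confirmed = []    # entries whose source-side I/O is known complete
        self.unconfirmed = []  # replicated, but source disk I/O not yet confirmed

    def receive(self, entry):
        # Entry arrives in target main storage before the source application
        # regains control; it is unconfirmed at this point.
        self.unconfirmed.append(entry)

    def confirm_through(self, seq):
        # Source reports its auxiliary-storage I/O completed up to sequence
        # number `seq`; promote those entries to confirmed status.
        still_waiting = []
        for entry in self.unconfirmed:
            if entry["seq"] <= seq:
                self.confirmed.append(entry)
            else:
                still_waiting.append(entry)
        self.unconfirmed = still_waiting

receiver = TargetJournalReceiver()
for n in (1, 2, 3):
    receiver.receive({"seq": n, "data": f"change-{n}"})
receiver.confirm_through(2)   # confirmation lags for performance reasons
```

Entry 3 stays unconfirmed until the next confirmation arrives, mirroring why confirmation is typically batched rather than sent per entry.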

With synchronous delivery, the most recent copy of the data is on the target system. If the source system becomes unavailable, you can recover using data from the target system.

Since delivery is synchronous to the application layer, there are application performance and communications bandwidth considerations. There is some performance impact to the application when it is moved from asynchronous mode to synchronous mode for high availability purposes. This impact can be minimized by ensuring efficient data movement. In general, a minimum of a dedicated 100 megabit Ethernet connection is recommended for synchronous remote journaling.


MIMIX includes special switch processing for unconfirmed entries to ensure that the most recent transactions are preserved in the event of a source system failure. For more information, see “Support for unconfirmed entries during a switch” on page 70.

Asynchronous delivery

In asynchronous delivery mode, the journal entries are placed in the source journal first (A in Figure 4) and then applied to the source database (B). An independent job sends the journal entries from a buffer (C) to the target system journal receiver (D) at some time after control is returned to the source applications that generated the journal entries.

Because the journal entries on the target system may lag behind the source system’s database, in the event of a source system failure, entries may become trapped on the source system.

Figure 4. Asynchronous mode sequence of activity in the IBM remote journal feature.

With asynchronous delivery, the most recent copy of the data is on the source system. Performance critical applications frequently use asynchronous delivery.

Default values used in configuring MIMIX for remote journaling use asynchronous delivery. This delivery mode is most similar to the MIMIX database send and receive processes.
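The A-D sequence can be sketched as a small Python model. This is illustrative only; the class and method names are invented, and real delivery is handled by the i5/OS remote journal function.

```python
# Sketch of asynchronous delivery order (A-D): journal locally, update the
# database, buffer the entry, and ship it later. Purely illustrative.

class AsyncSource:
    def __init__(self):
        self.source_journal = []   # A: entry deposited in the local journal first
        self.database = {}         # B: source database updated
        self.buffer = []           # C: queued for an independent send job

    def write(self, key, value):
        entry = {"key": key, "value": value}
        self.source_journal.append(entry)   # A
        self.database[key] = value          # B
        self.buffer.append(entry)           # C
        # control returns to the application here, before any send occurs

    def send_job(self, target_journal):
        # D: an independent job drains the buffer to the target receiver
        while self.buffer:
            target_journal.append(self.buffer.pop(0))

src = AsyncSource()
target = []
src.write("acct-1", 100)
src.write("acct-2", 200)
lag = len(src.buffer)   # entries not yet on the target; these would be
                        # "trapped" if the source failed right now
src.send_job(target)
```

The `lag` variable captures the window in which entries can be trapped on the source system during an outage.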


User journal replication processes

Data groups created using default values are configured to use remote journaling support for user journal replication.

The replication path for database information includes the i5/OS remote journal function, the MIMIX database reader process, and one or more database apply processes.

The i5/OS remote journal function transfers journal entries to the target system.

The database reader (DBRDR) process reads journal entries from the target journal receiver of a remote journal configuration and places those journal entries that match replication criteria for the data group into a log space.

Remote journaling does not allow entries to be filtered from being sent to the remote system. All entries deposited into the source journal will be transmitted to the target system. The database reader process performs the filtering that is identified in the data group definition parameters and file and tracking entry options.
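The target-side filtering role of the database reader can be sketched as follows. This is an illustrative Python sketch; MIMIX's actual selection comes from data group definitions, file entries, and tracking entry options, and the wildcard pattern syntax here is a stand-in.

```python
# Sketch: remote journaling ships every deposited entry, so selection
# happens on the target side, in the reader process. Illustrative only.
import fnmatch

def dbrdr_filter(entries, data_group_entries):
    """Keep entries whose library/file matches a data group entry pattern."""
    log_space = []
    for e in entries:
        name = f"{e['library']}/{e['file']}"
        if any(fnmatch.fnmatch(name, pat) for pat in data_group_entries):
            log_space.append(e)   # eligible for the apply process
    return log_space

# Every entry arrives from the source, replicated or not:
received = [
    {"library": "PRODLIB", "file": "ORDERS"},
    {"library": "PRODLIB", "file": "TEMPWORK"},
    {"library": "OTHERLIB", "file": "ORDERS"},
]
selected = dbrdr_filter(received, ["PRODLIB/ORDERS", "PRODLIB/CUST*"])
```

Only `PRODLIB/ORDERS` survives the filter; the other entries consumed communications bandwidth but are discarded before the log space.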

The database apply process applies the changes stored in the target log space to the target system’s database. MIMIX uses multiple apply processes in parallel for maximum efficiency. Transactions are applied in real-time to generate a duplicate image of the journaled objects being replicated from the source system.

The RJ link

To simplify tasks associated with remote journaling, MIMIX implements the concept of a remote journal link. A remote journal link (RJ link) is a configuration element that identifies an i5/OS remote journaling environment. An RJ link identifies:

• A “source” journal definition that identifies the system and journal which are the source of journal entries being replicated from the source system.

• A “target” journal definition that defines a remote journal.

• Primary and secondary transfer definitions for the communications path for use by MIMIX.

• Whether the i5/OS remote journal function sends journal entries asynchronously or synchronously.
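The elements above could be pictured as a simple record. The field names and values below are hypothetical; MIMIX defines RJ links through its own configuration commands, not through a structure like this.

```python
# Hypothetical record of what an RJ link identifies, per the bullets above.
from dataclasses import dataclass

@dataclass
class RJLink:
    source_journal_def: str      # journal definition for the source of entries
    target_journal_def: str      # journal definition for the remote journal
    primary_transfer_def: str    # primary communications path for MIMIX
    secondary_transfer_def: str  # secondary (backup) communications path
    delivery: str                # "*ASYNC" or "*SYNC" remote journal delivery

# Example values are invented for illustration:
link = RJLink(
    source_journal_def="PRODSYS/DGJRN",
    target_journal_def="BACKSYS/DGJRN",
    primary_transfer_def="PRIMARY",
    secondary_transfer_def="SECONDARY",
    delivery="*ASYNC",
)
```

Grouping these four pieces of information under one named element is what lets commands such as STRDG select the whole remote journaling environment at once.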

Once an RJ link is defined and other configuration elements are properly set, user journal replication processes will use the i5/OS remote journaling environment within its replication path.

The concept of an RJ link is integrated into existing commands. The Work with RJ Links display makes it easy to identify the state of the i5/OS remote journaling environment defined by the RJ link.

Sharing RJ links among data groups

It is possible to configure multiple data groups to use the same RJ link. However, data groups should only share an RJ link if they are intended to be switched together or if they are non-switchable data groups. Otherwise, there is additional communications overhead from data groups replicating in opposite directions and the potential for journal entries for database operations to be routed back to their originating system. See “Support for unconfirmed entries during a switch” on page 70 and “RJ link considerations when switching” on page 70 for more details.

RJ links within and independently of data groups

The RJ link is integrated into commands for starting and ending data group replication (STRDG and ENDDG). The STRDG and ENDDG commands automatically determine whether the data group uses remote journaling and select the appropriate replication path processes, including the RJ link, as needed.

Two MIMIX commands provide the ability to use an RJ link without performing data replication. The Start Remote Journal Link (STRRJLNK) and the End Remote Journal Link (ENDRJLNK) commands provide this capability.

Differences between ENDDG and ENDRJLNK commands

You should be aware of differences between ending data group replication (ENDDG command) and ending only the remote journal link (ENDRJLNK command). You will primarily use the End Data Group (ENDDG) command to end replication processes and to optionally end the RJ link when necessary. The End Remote Journal Link (ENDRJLNK) command ends only the RJ link.

Both commands include an end option (ENDOPT parameter) to specify whether to end immediately or in a controlled manner. These options on the ENDRJLNK command do not have the same meaning as on the ENDDG command. For ENDRJLNK, the ENDOPT parameter has the following values:

The ENDRJLNK command’s ENDOPT parameter is ignored and an immediate end is performed when either of the following conditions is true:

• When the remote journal function is running in synchronous mode (DELIVERY(*SYNC)).

• When the remote journal function is performing catch-up processing.

Table 4. End option values on the End Remote Journal Link (ENDRJLNK) command.

*IMMED The target journal is deactivated immediately. Journal entries that are already queued for transmission are not sent before the target journal is deactivated. The next time the remote journal function is started, the journal entries that were queued but not sent are prepared again for transmission to the target journal.

*CNTRLD Any journal entries that are queued for transmission to the target journal will be transmitted before the i5/OS remote journal function is ended. At any time, the remote journal function may have one or more journal entries prepared for transmission to the target journal. If an asynchronous delivery mode is used over a slow communications line, it may take a significant amount of time to transmit the queued entries before actually ending the target journal.
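The behavior of the two end options, including the forced immediate end, can be sketched as follows. This is an illustrative Python model, not the actual command implementation; the function and parameter names are invented.

```python
# Sketch of ENDRJLNK end options: *CNTRLD drains queued entries before
# ending; *IMMED deactivates at once, leaving queued entries to be
# prepared again at the next start. Illustrative only.

def end_rj_link(queued, target_journal, endopt, delivery="*ASYNC", catchup=False):
    # The system forces an immediate end in synchronous mode or during
    # catch-up processing, regardless of the requested ENDOPT.
    if delivery == "*SYNC" or catchup:
        endopt = "*IMMED"
    if endopt == "*CNTRLD":
        target_journal.extend(queued)   # transmit everything queued first
        queued = []
    return queued   # entries left to retransmit the next time the link starts

target = []
leftover = end_rj_link(["e1", "e2"], target, "*CNTRLD")
sync_leftover = end_rj_link(["e1"], [], "*CNTRLD", delivery="*SYNC")
```

With a controlled end the queue drains before deactivation; with the forced immediate end the queued entry remains for retransmission on the next start.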


RJ link monitors

User journal replication processes monitor the journal message queues of the journals identified by the RJ link. Two RJ link monitors are created automatically, one on the source system and one on the target system. These monitors provide added value by allowing MIMIX to automatically monitor the state of the remote journal link, to notify the user of problems, and to automatically recover the link when possible.

RJ link monitors - operation

The RJ link monitors are automatically started when the master monitor is started. If for some reason the monitors are not already started, they will be started when you start a remote journal link. The monitors are created if they do not already exist. The source RJ link monitor is named after the source journal definition and the target RJ link monitor is named after the target journal definition.

The RJ link monitors are MIMIX message queue monitors. They monitor messages put on the message queues associated with the source and target journals. The operating system issues messages to these journal message queues when a failure is detected in i5/OS remote journal processing. Each RJ link monitor uses information provided in the messages to determine which remote journal link is affected and to try to automatically recover that remote journal link. (The state of a remote journal link can be seen by using the Work with RJ Links (WRKRJLNK) command.) There is a limit on the number of times that a link will be recovered in a particular time period; a continually failing link will eventually be marked failed and recovery will end. Typically this occurs when there are communications problems. Once the problem is resolved, you can start the RJ link monitors again using the Work with Monitors (WRKMON) command and selecting the Start option.
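The recovery policy just described, automatic restarts capped within a time window, can be sketched like this. The threshold and window values are invented for the example; MIMIX's actual limits are internal.

```python
# Sketch of an RJ link monitor's bounded recovery: restart the link on
# failure messages, but mark it failed if failures keep recurring.
# MAX_RECOVERIES and WINDOW_SECONDS are invented for illustration.

class RJLinkMonitor:
    MAX_RECOVERIES = 3
    WINDOW_SECONDS = 600

    def __init__(self):
        self.recoveries = []     # timestamps of recent recovery attempts
        self.state = "ACTIVE"

    def on_failure_message(self, now):
        recent = [t for t in self.recoveries if now - t < self.WINDOW_SECONDS]
        if len(recent) >= self.MAX_RECOVERIES:
            self.state = "FAILED"    # give up; an operator must resolve
            return "marked-failed"   # the communications problem first
        self.recoveries = recent + [now]
        return "recovered"           # restart the remote journal function

mon = RJLinkMonitor()
results = [mon.on_failure_message(t) for t in (0, 10, 20, 30)]
```

Three failures in quick succession are recovered automatically; the fourth marks the link failed, matching the "continually failing link" behavior above.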

The RJ link monitor for the source does not end once it is started, since more than one remote journal link can use a source monitor. Users can end the monitors by using the Work with Monitors (WRKMON) command and selecting the End option.

MIMIX Monitor commands can be used to see the status of your RJ link monitors. The WRKMON command lists all monitors for a MIMIX installation and displays whether the monitor is active or inactive. You can also view the status of your RJ link monitors on the DSPDGSTS status display (option 8 from the Work with Data Groups display). Both the source and target RJ link monitor processes appear on this display. The display shows whether or not the monitor processes are active. If MIMIX Monitor is not installed as recommended, the RJ link monitor status appears as unknown on the Display Data Group Status display.

RJ link monitors in complex configurations

In a broadcast scenario, a single source journal definition can link to multiple target journal definitions, each over its own remote journal link. One source RJ link monitor handles this broadcast, since there is one source RJ monitor per source journal definition communicating via a remote journal link.

Alternatively, in a cascade scenario an intermediate system can have both a source RJ link monitor and a target RJ link monitor running on it for the same journal definition. This intermediate system has the target journal definition for the system that originated the replication and holds the source journal definition for the next system in the cascade.

For more information about configuring for these environments, see “Data distribution and data management scenarios” on page 361.


Support for unconfirmed entries during a switch

The MIMIX Remote Journal support implements synchronous mode processing in a way that reduces data latency in the movement of journal entries from the source to the target system. This reduces the potential for, and the degree of, manual intervention when an unplanned outage occurs.

Whenever an RJ link failure is detected MIMIX saves any unconfirmed entries on the target system so they can be applied to the backup database if an unplanned switch is required. The unconfirmed entries are the most recent changes to the data. Maintaining this data on the target system is critical to your managed availability solution.

In the event of an unplanned switch, the unconfirmed entries are routed to the MIMIX database apply process to be applied to the backup database. As a result, you will see the database apply process jobs run longer than they would under standard switch processing. If the apply process is ended by a user before the switch, MIMIX will restart the apply jobs to preserve these entries.

As part of the unplanned switch processing, MIMIX checks whether the apply jobs are caught up. Then, unconfirmed entries are applied to the target database and added to a journal that will be transferred to the source system when that system is brought back up. When the backup system is brought online as the temporary source system, the unconfirmed entries are processed before any new journal entries generated by the application are processed. Furthermore, to ensure full data integrity, once the original source system is operational these unconfirmed entries are the first entries replicated back to that system.

RJ link considerations when switching

By default, when a data group is ended or a planned switch occurs, the RJ link remains active. You need to consider whether to keep the original RJ link active after a planned switch of a data group. If the RJ link is used by another application or data group, the RJ link must remain active. Sharing an RJ link among multiple data groups is only recommended for the conditions identified in “Sharing RJ links among data groups” on page 66.

If the RJ link is not used by any other application or data group, the link should be ended to avoid unnecessary communications and processing overhead. When you are temporarily running production applications on the backup system after a planned switch, journal entries generated on the backup system are transmitted to the remote journal receiver (which is on the production system). MIMIX applies the entries to the original production database. If journaling is still active on the original production database, new journal entries are created for the entries that were just applied. These new journal entries are essentially a repeat of the same operation just performed against the database. Remote journaling causes the entries to be transmitted back to the backup system. MIMIX prevents these repeat entries from being reapplied; however, these repeated entries cause additional resources to be used within MIMIX and in communications.

MIMIX Model Switch Framework considerations - When remote journaling is used in an environment in which MIMIX Model Switch Framework is implemented, you need to consider the implications of sharing an RJ link. In addition, default values used during a planned switch cause the RJ link to remain active. You may need to end the RJ link after a planned switch.

User journal replication of IFS objects, data areas, data queues

IBM provides journaling support for IFS objects as well as for data areas and data queues. This capability allows transactions to be journaled in the user journal (database journal), much like transactions are recorded for database record changes. Each time an IFS object, data area, or data queue changes, only changed bytes are recorded in the journal entry.

MIMIX enables you to take advantage of this capability of the i5/OS operating system when replicating these journaled objects. This support within MIMIX is often referred to as advanced journaling and is enabled by explicitly configuring data group object entries for data areas and data queues and data group IFS entries for IFS objects. In addition to data group object entries and IFS entries, MIMIX uses tracking entries to uniquely identify each object that is configured for advanced journaling.

A data group that replicates some or all configured IFS objects, data areas, or data queues through a user journal may also replicate files from the same journal as well as replicate objects from the system journal. For example, a data group could be configured to support MIMIX Dynamic Apply for *FILE objects, advanced journaling for IFS objects and data areas, and system journal processes for data queues and other library-based objects. For more information, see “Replication choices by object type” on page 96.

You may need to consider how much data is replicated through the same apply session for user journal replication processes and whether any transactions need to be serialized with database files. For more information, see “Planning for journaled IFS objects, data areas, and data queues” on page 85.

Benefits of advanced journaling

One of the most significant benefits of using advanced journaling is that IFS objects, data areas, and data queues are processed by replicating only changed bytes.

For example, when IFS objects, data areas, or data queues are replicated through the system journal, the entire object is shipped across the communications link. While this may be sufficient for many applications, those using large files or making frequent small byte-level changes can be negatively impacted by the additional data transmission. When these objects are configured to allow user journal replication, MIMIX replicates only changed bytes of the data for IFS objects, data areas, and data queues.
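The bandwidth difference can be made concrete with a sketch that extracts changed-byte runs from an object. This is illustrative Python, not how the journal actually records changes; the example data is invented.

```python
# Sketch: system journal replication ships the whole object; advanced
# journaling records only the changed bytes. Illustrative only.

def changed_byte_runs(old, new):
    """Return (offset, bytes) runs where `new` differs from `old` (equal length)."""
    runs, start = [], None
    for i in range(len(new)):
        if old[i] != new[i]:
            if start is None:
                start = i          # a run of changed bytes begins here
        elif start is not None:
            runs.append((start, new[start:i]))
            start = None
    if start is not None:
        runs.append((start, new[start:]))
    return runs

old = bytearray(b"ROOMS=042;DATE=20080722")
new = bytearray(b"ROOMS=043;DATE=20080722")
runs = changed_byte_runs(old, new)
payload = sum(len(data) for _, data in runs)  # bytes actually replicated
whole_object = len(new)                       # bytes shipped by save/restore
```

A one-byte change to a 23-byte object yields a one-byte payload instead of the whole object, which is where the savings come from for large, frequently updated objects.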

Another significant benefit of using advanced journaling for IFS objects, data areas, and data queues is that transactions can be applied in lock-step with a database file. This requires that the objects and database are configured to the same data group and the same database apply session.

For example, assume that a hotel uses a database application to reserve rooms. Within the application, a data area contains a counter to indicate the number of rooms reserved for a particular day and a database file contains detailed information about reservations. Each time a room is reserved, both the counter and the database file are updated. If these updates do not occur in the same order on the target system, the hotel risks reserving too many or too few rooms. Without advanced journaling, serialization of these transactions cannot be guaranteed on the target system due to inherent differences in MIMIX processing from the user journal (database file) and the system journal (default for objects). With advanced journaling, MIMIX serializes these transactions on the target system by updating both the file and the data area through user journal processing. Thus, as long as the database file and data area are configured to be processed by the same apply session, updates occur on the target system in the same order they were originally made on the source system.
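The hotel example can be sketched as a single apply-session queue through which both the data area and database updates flow in source order. This is an illustrative Python model; the object names are taken from the example, not from any real configuration.

```python
# Sketch: one apply session serializes updates, so the room counter (data
# area) and the reservations file arrive on the target in source order.
from collections import deque

def apply_session(queue, target):
    """Drain one session's queue, applying operations in arrival order."""
    while queue:
        op = queue.popleft()
        if op["obj"] == "ROOMCOUNT":           # data area update
            target["rooms_reserved"] = op["value"]
        else:                                   # database record insert
            target["reservations"].append(op["value"])

session_a = deque()
# Source order: insert the reservation record, then bump the counter.
session_a.append({"obj": "RESERVATIONS", "value": "guest-101"})
session_a.append({"obj": "ROOMCOUNT", "value": 1})

target = {"rooms_reserved": 0, "reservations": []}
apply_session(session_a, target)
```

Because both updates travel through the same queue, the counter never reflects a reservation the file does not yet contain; routing them through different processes would lose that guarantee.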

Additional benefits of replicating IFS objects, data areas, and data queues from the user journal include:

• Replication is less intrusive. In traditional object replication, the save/restore process places locks on the replicated object on the source system. Database replication touches the user journal only, leaving the source object alone.

• Changes to objects replicated from the user journal may be replicated to the target system in a more timely manner. In traditional object replication, system journal replication processes must contend with potential locks placed on the objects by user applications.

• Processing time may be reduced, even for equal amounts of data. Database replication eliminates the separate save, send, and restore processes necessary for object replication.

• Replicating objects from the user journal can reduce the burden on object replication processes when a large amount of activity is being replicated through the system journal.

• Commitment control is supported for B journal entry types for IFS objects journaled to a user journal.

• Advanced journaling can be used in configurations that use either remote journaling or MIMIX source-send processes for user journal replication.

Restrictions and configuration requirements vary for IFS objects and data area or data queue objects. If one or more of the configuration requirements are not met, the system journal replication path is used. For detailed information, including supported journal entry types, see “Identifying data areas and data queues for replication” on page 112 and “Identifying IFS objects for replication” on page 118.

Replication processes used by advanced journaling

When IFS objects, data areas, and data queues are properly configured, replication occurs through the user journal replication path. Processing occurs through the i5/OS remote journal function, the MIMIX database reader process1, and one database apply process (session A).

1. Data groups can also be configured for MIMIX source-send processing instead of MIMIX RJ support.


Tracking entries

A unique tracking entry is associated with each IFS object, data area, and data queue that is replicated using advanced journaling.

The collection of data group IFS entries for a data group determines the subset of existing IFS objects on the source system that are eligible for replication using advanced journaling techniques. Similarly, the collection of data group object entries determines the subset of existing data areas and data queues on the source system that are eligible for replication using advanced journaling techniques. MIMIX requires a tracking entry for each of the eligible objects to identify how it is defined for replication and to assist with tracking status when it is replicated. IFS tracking entries identify IFS stream files, including the source and target file ID (FID), while object tracking entries identify data areas or data queues.

When you initially configure a data group you must load tracking entries, start journaling for the objects which they identify, and synchronize the objects with the target system. The same is true when you add new or change existing data group IFS entries or object entries.

It is also possible for tracking entries to be automatically created. After creating or changing data group IFS entries or object entries that are configured for advanced journaling, tracking entries are created the next time the data group is started. However, this method has disadvantages. It can significantly increase the amount of time needed to start a data group. If the objects you intend to replicate with advanced journaling are not journaled before the start request is made, MIMIX places the tracking entries in *HLDERR state. Error messages indicate that journaling must be started and the objects must be synchronized between systems.

Once a tracking entry exists, it remains until one of the following occurs:

• The object identified by the tracking entry is deleted from the source system and replication of the delete action completes on the target system.

• The data group configuration changes so that an object is no longer identified for replication using advanced journaling.


Figure 5 shows an IFS user directory structure, the include and exclude processing selected for objects within that structure, and the resultant list of tracking entries created by MIMIX.

Figure 5. IFS tracking entries produced by MIMIX

Viewing tracking entries is supported in both 5250 emulator and MIMIX Availability Manager interfaces. Their status is included with other data group status. You also can see what objects they identify, whether the objects are journaled, and their replication status. You can also perform operations on tracking entries, such as holding and releasing, to address replication problems.

IFS object file identifiers (FIDs)

Normally, when dealing with objects and database files, you can see the name of the object (file name, library name, and member name) in the journal entries. For IFS objects, it is impractical to put the name of the IFS object in the header of the journal entry due to potentially long path names.

Each IFS object on a system has a unique 16-byte file ID (FID). The FID is used to identify IFS objects in journal entries. The FID is machine-specific, meaning that IFS objects with the same path name may have different FIDs on different systems.

MIMIX tracks the FIDs for all IFS objects configured for replication with advanced journaling via IFS tracking entries. When the data group is switched, the source and target FID associations are reversed, allowing MIMIX to successfully replicate transactions to IFS objects.
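The switch-time reversal of FID associations can be sketched as a simple swap over the tracking entries. This is an illustrative Python model; the paths and FID values are invented.

```python
# Sketch: each IFS tracking entry pairs a source FID with a target FID,
# and a data group switch reverses the association. Illustrative only;
# real FIDs are 16-byte machine-specific identifiers.

tracking_entries = {
    "/corporate/sales/report.txt": {"source_fid": "FID-A1", "target_fid": "FID-B7"},
    "/corporate/sales/totals.dat": {"source_fid": "FID-A2", "target_fid": "FID-B9"},
}

def switch_data_group(entries):
    # After a switch the old target system is the new source, so each
    # entry's FID roles are exchanged.
    for entry in entries.values():
        entry["source_fid"], entry["target_fid"] = (
            entry["target_fid"], entry["source_fid"])

switch_data_group(tracking_entries)
```

Because journal entries identify IFS objects only by FID, keeping this mapping current across a switch is what lets replication continue in the opposite direction.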


Lesser-used processes for user journal replication

This topic describes two lesser-used replication processes: MIMIX source-send processing for database replication and the data area poller process.

User journal replication with source-send processing

This topic describes user journal replication when data groups are configured to use MIMIX source-send processes.

Note: New data groups are created to use remote journaling support for user journal replication when shipped default values on commands are used. Using remote journaling support offers many benefits over using MIMIX source-send processes.

MIMIX uses journaling to identify changes to database files and other journaled objects to be replicated. As journal entries are added to the journal receiver, the database send process collects data from journal entries on the source system and compares them to the data group file entries defined for the data group.

Journal entries for which a match is found for the file and library are then transported to the target system for replication according to the DB journal entry processing parameter (DBJRNPRC) filtering specified in the data group definition. The Data group file entries (FEOPT) parameter, specified either at the data group level or on individual data group file entries, also indicates whether to send only the after-image of the change or both before-image and after-images.

Alternatively, if all journal entries are sent to the target system, the journal entries are filtered there by the apply process. The matching for the apply process is at the file, library, and member level.

Note: If an application program adds or removes members and all members within the file are to be processed by MIMIX, it is better to use *ALL as the member name in that data group file entry. If individual members are specified, only those members you identify are processed.

On the target system, the database receive process transfers the data received over the communications line from the source system into a log space on the target system.

The database apply process applies replicated database transactions from the log space to the appropriate database physical file member or data area on the target system. For database files, transactions are applied at record level (puts, updates, deletes) or file level (clears, reorganizations, member deletes). MIMIX uses multiple apply processes in parallel for maximum efficiency. Transactions are applied in real-time to generate a duplicate image of the files and data areas replicated from the source system.

Throughout this process, MIMIX manages the journal receiver unless you have specified otherwise. By default, the journal definition specifies that MIMIX automatically create the next journal receiver when the current receiver reaches the threshold size you specified in the journal definition. After MIMIX finishes reading the entries from the current journal receiver, it deletes that receiver (if configured to do so) and begins reading entries from the next journal receiver. This prevents excessive use of disk storage and keeps valuable system resources available for other processing.

Besides indicating the mapping between source and target file names, data group file entries identify additional information used by database processes. The data group file entry can also specify a particular apply session to use for processing on the target system.

A status code in the data group file entry also stores the status of the file or member in the MIMIX process. If a replication problem is detected, MIMIX puts the member in hold error (*HLDERR) status so that no further transactions are applied. Files can also be put on hold (*HLD) manually.

Putting a file on hold causes MIMIX to retain all journal entries for the file in log spaces on the target system. If you expect to synchronize the file at a later time, it is better to place it in an ignored state. When a file is ignored, journal entries for the file are deleted from the log spaces and any additional entries received for the file are discarded. This keeps the log spaces to a minimal size and improves the efficiency of the apply process.

The file entry option Lock member during apply indicates whether to allow only restricted (read-only) access to the file on the backup system. This file entry option can be specified on the data group definition or on individual data group entries.

The data area polling process

Note: The preferred way to replicate data areas is through the user journal. Data areas can alternatively be replicated through system journal replication processes or with the data area poller.

When a data group is configured to use the data area polling process, polling programs capture changes to data areas defined to the data group at specified intervals. MIMIX creates a journal entry when there is a change to a data area.

MIMIX supports the data area types shown in Table 5.

Table 5. Data area types supported by the data area polling process.

*CHAR   character, up to 2000 bytes in length
*DEC    decimal, up to 24 digits in length with up to 9 decimal positions
*LGL    logical, 1 byte in length

You define a data group data area entry for each data area that you want MIMIX to manage. The data group definition determines how frequently the polling programs check for changes to data areas.

The data area polling process runs on the source system. This process retrieves each data area defined to a data group at the interval you specify and determines whether the data area has changed. MIMIX checks for changes to the data area type and length as well as to the contents of the data area. If a data area has changed, the data area polling process retrieves the data area and converts it into a journal entry. This journal entry is sent through the normal user journal replication processing and is used to update the data area on the target system.

For example, if a data area that is defined to MIMIX is deleted and recreated with new attributes, the data area polling process will capture the new attributes and recreate the data area on the target system.
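The polling cycle described above can be sketched in CL. This is a conceptual illustration only — MIMIX's actual poller is internal, and the data area name, journal name, and 60-second interval here are hypothetical:

```cl
             PGM
             DCL        VAR(&CURVAL) TYPE(*CHAR) LEN(2000)
             DCL        VAR(&PRVVAL) TYPE(*CHAR) LEN(2000)
 LOOP:       /* Retrieve the current contents of the data area.      */
             RTVDTAARA  DTAARA(PRODLIB/APPSTATE) RTNVAR(&CURVAL)
             IF         COND(&CURVAL *NE &PRVVAL) THEN(DO)
               /* A change was detected; deposit a user journal      */
               /* entry that replication processing can pick up.     */
               SNDJRNE    JRN(PRODLIB/APPJRN) TYPE(DA) ENTDTA(&CURVAL)
               CHGVAR     VAR(&PRVVAL) VALUE(&CURVAL)
             ENDDO
             DLYJOB     DLY(60)    /* configured polling interval    */
             GOTO       CMDLBL(LOOP)
             ENDPGM
```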


Chapter 3 Preparing for MIMIX

This chapter outlines what you need to do to prepare for using MIMIX.

Preparing for the installation and use of MIMIX is a very important step towards meeting your availability management requirements. Because of their shared functions and their interaction with other MIMIX products, it is best to determine System i5 requirements for user journal and system journal processing in the context of your total MIMIX environment.

Give special attention to planning and implementing security for MIMIX. General security considerations for all MIMIX products can be found in the License and Availability Manager book. In addition, you can make your systems more secure with MIMIX product-level and command-level security. Each product has its own product-level security, but you must also consider the security implications of common functions used by each product. Information about setting security for common functions is also found in the License and Availability Manager book.

The topics in this chapter include:

• “Checklist: pre-configuration” on page 81 provides a procedure to follow to prepare to configure MIMIX on each system that participates in a MIMIX installation.

• “Data that should not be replicated” on page 83 describes how to consider what data should not be replicated.

• “Planning for journaled IFS objects, data areas, and data queues” on page 85 describes considerations when planning to use advanced journaling for IFS objects, data areas, or data queues.

• “Starting the MIMIXSBS subsystem” on page 90 describes how to start the MIMIXSBS subsystem which all MIMIX products run in.

• “Accessing the MIMIX Main Menu” on page 91 describes the MIMIX Main Menu and its two assistance levels, basic and intermediate which provide options to help simplify daily interactions with MIMIX.


Checklist: pre-configuration

You need to configure MIMIX on each system that participates in a MIMIX installation. Do the following:

1. By now, you should have done the following:

• Completed the checklist for installing MIMIX software in the License and Availability Manager book

• Turned on product-level security and granted authority to user profiles to control access to the MIMIX products

2. Review the information in “Data that should not be replicated” on page 83.

3. Decide what replication choices are appropriate for your environment. For detailed information see the chapter “Planning choices and details by object class” on page 93.

4. If it is not already active, start the MIMIXSBS subsystem using topic “Starting the MIMIXSBS subsystem” on page 90.

5. Configure each system in the MIMIX installation, beginning with the management system. The chapter “Configuration checklists” on page 137 identifies the primary options you have for configuring MIMIX.

6. Once you complete the configuration process you choose, you may also need to do one or more of the following:

• If you plan to use MIMIX Monitor in conjunction with MIMIX, you may need to write exit programs for monitoring activity and you may want to ensure that your monitor definitions are replicated. See the Using MIMIX book for more information.

• Verify the configuration.

• Verify any exit programs that are called by MIMIX.

• Update any automation programs you use with MIMIX and verify their operation.

• If you plan to use switching support, you or your Certified MIMIX Consultant may need to take additional action to set up and test switching. In order to use MIMIX Switch Assistant, a default model switch framework must be configured and identified in MIMIX policies. For more information about MIMIX Model Switch Framework, see the Using MIMIX Monitor book. For more information about switching and policies, see the Using MIMIX book.


Data that should not be replicated

There are some considerations to keep in mind when defining data for replication. Not only do you need to determine what is critical to replicate, but you also need to consider data that should not be replicated.

As you identify your critical data, consider the following:

• You may not need to replicate temporary files, work files, and temporary objects, including DLOs and stream files. Evaluate how your applications use such files to determine if they need to be replicated.

You should not replicate the following:

• LAKEVIEW, MIMIXQGPL, or any MIMIX installation libraries.

• The LAKEVIEW or MIMIXOWN user profiles.

• System user profiles from one system to another. For example, QSYSOPR and QSECOFR should not be replicated.

• IBM i5/OS objects from one system to another. IBM-supplied libraries, files, and other objects for i5/OS typically begin with the prefix letter Q.


Planning for journaled IFS objects, data areas, and data queues

You can choose to use the cooperative processing support within MIMIX to replicate any combination of journaled IFS objects, data area objects, or data queue objects using user journal replication processes.

In addition to configuration and journaling requirements and the restrictions that apply, you need to address several other considerations when planning to replicate journaled IFS objects, data areas, or data queues. These considerations affect whether journals should be shared, whether objects should be replicated in a data group shared with database files, whether configuration changes are needed to change apply sessions for database files, and whether exit programs need to be updated.

Is user journal replication appropriate for your environment?

While user journal replication has significant advantages, it may not be appropriate for your environment. Or, it may be appropriate for only some of the supported object types. Consider the following:

• Do the objects remain relatively static? Static objects typically persist after they are created, while their data may change. Examples of more dynamic objects include temporary objects, which are created, renamed, and deleted frequently. Objects for some applications, like those which heavily use *DTAQs, may be better suited to replication from the system journal.

• What release of IBM i is in use? On some operating system releases, the types of operations that can be replicated from a user journal are limited. The IBM i release in use may influence whether objects are considered static or dynamic for replication purposes.

The benefits of user journal replication are described in “Benefits of advanced journaling” on page 72. For restrictions and limitations, see “Identifying data areas and data queues for replication” on page 112 and “Identifying IFS objects for replication” on page 118.

Serialized transactions with database files

Transactions completed for database files and objects (IFS objects, data areas, or data queues) can be serialized with one another when they are applied to objects on the target system. If you require serialization, these objects and database files must share the same data group as well as the same database apply session, session A.

Since MIMIX uses apply session A for all objects configured for advanced journaling, serialization may require that you change the configuration for database files to ensure that they use the same apply session. Load balancing may also become a concern. See “Database apply session balancing” on page 87.

Converting existing data groups

When converting an existing data group, consider the following:


• You may have previously used data groups with a Data group type (TYPE) value of *OBJ to separate replication of IFS, data area, or data queue objects from other activity. Converting these data groups to use advanced journaling will not cause problems with the data group. The data group definition and existing data group entries must be changed to the values required for advanced journaling.

• When converting an existing data group to use advanced journaling, all objects in the IFS path or the library specified that match the selection criteria are selected. You may need to create additional data group IFS or object entries in order to achieve the desired results. This may include creating entries that exclude objects from replication.

• Adding IFS, data area, or data queue objects configured for advanced journaling to an existing database replication environment may increase replication activity and affect performance. If a large amount of data is to be replicated, consider the overall replication performance and throughput requirements when choosing a configuration.

• Changing the replication mechanism of IFS objects, data areas, or data queues from system journal replication to user journal replication generally reduces bandwidth consumption, improves replication latency, and eliminates the locking contention associated with the save and restore process. However, if these objects have never been replicated, the addition of IFS byte stream files, data areas, or data queues to the replication environment will increase bandwidth consumption and processing workload.

Conversion examples

To illustrate a simple conversion, assume that the systems defined to data group KEYAPP are running on IBM i V5R4. You use this data group for system journal replication of the objects in library PRODLIB. The data group has one data group object entry, which has the following values:

LIB1(PRODLIB) OBJ1(*ALL) OBJTYPE(*ALL) PRCTYPE(*INCLD) COOPDB(*YES) COOPTYPE(*FILE)

Example 1 - You decide to use advanced journaling for all *DTAARA and *DTAQ objects replicated with data group KEYAPP. You have confirmed that the data group definition specifies TYPE(*ALL) and does not need to change. After performing a controlled end of the data group, you change the data group object entry to have the following values:

LIB1(PRODLIB) OBJ1(*ALL) OBJTYPE(*ALL) PRCTYPE(*INCLD) COOPDB(*YES) COOPTYPE(*FILE *DTAARA *DTAQ)

When the data group is started, object tracking entries are loaded and journaling is started for the data area and data queue objects in PRODLIB. Those objects will now be replicated from a user journal. Any other object types in PRODLIB continue to be replicated from the system journal.
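The controlled change in Example 1 could look like the following sketch. The ENDDG, CHGDGOBJE, and STRDG command names and the three-part DGDFN value (data group name plus its two system names) follow MIMIX conventions but are assumptions to verify against your release:

```cl
/* Perform a controlled end of the data group.                       */
ENDDG      DGDFN(KEYAPP SYSTEM1 SYSTEM2)
/* Change the object entry so *DTAARA and *DTAQ objects are          */
/* cooperatively processed through the user journal.                 */
CHGDGOBJE  DGDFN(KEYAPP SYSTEM1 SYSTEM2) LIB1(PRODLIB) OBJ1(*ALL) +
             COOPDB(*YES) COOPTYPE(*FILE *DTAARA *DTAQ)
/* Restart the data group; tracking entries are loaded and           */
/* journaling is started for the data areas and data queues.         */
STRDG      DGDFN(KEYAPP SYSTEM1 SYSTEM2)
```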

Example 2 - You want to use advanced journaling for data group KEYAPP, but one data area, XYZ, must remain replicated from the system journal. You will need the data group object entry described in Example 1:


LIB1(PRODLIB) OBJ1(*ALL) OBJTYPE(*ALL) PRCTYPE(*INCLD) COOPDB(*YES) COOPTYPE(*FILE *DTAARA *DTAQ)

You will also need a new data group object entry that specifies the following so that data area XYZ can be replicated from the system journal:

LIB1(PRODLIB) OBJ1(XYZ) OBJTYPE(*DTAARA) PRCTYPE(*INCLD) COOPDB(*NO)

Database apply session balancing

In each data group, one database apply session, session A, is used for all IFS objects, data areas, and data queues replicated from a user journal. If you also replicate database files in the same data group, the way in which files are configured for replication can also affect how much data is processed by apply session A. In some cases, you may need to adjust the configured apply session in data group object and file entries, either to ensure that files that should be serialized remain in the same apply session or to move files to another apply session to manually balance loads. Consider the following:

• In MIMIX Dynamic Apply configurations, newly created database files are distributed evenly across database apply sessions by default. This ensures that the files are distributed in a way that will not overload any one apply session.

• In configurations using legacy cooperative processing, newly created database files are distributed to apply session A by default. In data groups that also replicate IFS objects, data areas or data queues through the user journal, it may be necessary to change the apply session to which cooperatively processed files are directed when the database files are created to prevent apply session A from becoming overloaded. The apply session can be changed in the file entry options (FEOPT) on the data group object and file entries.

• Logical files and physical files with referential constraints also have apply session requirements to consider. For more information see “Considerations for LF and PF files” on page 105.

User exit program considerations

When new or different journaled object types are added to an existing data group, user exit programs may be affected. Be aware of the following exit program considerations when changing an existing configuration to include IFS objects, data areas, or data queues configured for replication processing from a user journal.

• When IFS objects, data areas, or data queues are journaled to a user journal, new journal entry codes are provided to the user exit program. If the user exit program interprets the journal code, changes may be required.

• The path name for IFS objects cannot be interpreted in the same way as it can for database files. MIMIX uses the file ID (FID) to identify the IFS object being replicated. User exit programs that rely on the library and file names in the journal entry may need to be changed to either ignore IFS journal entries or process them by resolving the FID to a path name using the IBM-supplied APIs.

• Journaled IFS objects and data queues can have incomplete journal entries. For incomplete journal entries, MIMIX provides two or more journal entries with duplicate journal entry sequence numbers, journal codes, and journal entry types to the user exit program when the data for the incomplete entry is retrieved. Programs need to correctly handle these duplicate entries representing the single, original journal entry.

• Journal entries for journaled IFS objects, data areas, and data queues will be routed to the user exit program. This may be a performance consideration relative to user exit program design.

Contact your Certified MIMIX Consultant for assistance with user exit programs.


Starting the MIMIXSBS subsystem

By default, all MIMIX products run in the MIMIXSBS subsystem that is created when you install the product. This subsystem must be active before you can use the MIMIX products.

If the MIMIXSBS is not already active, start the subsystem by typing the command STRSBS SBS(MIMIXQGPL/MIMIXSBS) and pressing Enter.

Any autostart job entries added to the MIMIXSBS subsystem will start when the subsystem is started.

Note: You can ensure that the MIMIX subsystem is started after each IPL by adding this command to the end of the startup program for your system. Due to the unique requirements and complexities of each MIMIX implementation, it is strongly recommended that you contact your Certified MIMIX Consultant to determine the best way in which to design and implement this change.
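As a sketch of that recommendation (assuming your startup program is a CL program you control; confirm the approach with your Certified MIMIX Consultant), the startup program could end with:

```cl
/* Fragment for the end of the system startup program (for example,  */
/* the program named in the QSTRUPPGM system value).                 */
STRSBS     SBS(MIMIXQGPL/MIMIXSBS)
MONMSG     MSGID(CPF1010)  /* Ignore "subsystem already active"      */
```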


Accessing the MIMIX Main Menu

The MIMIX command accesses the main menu for a MIMIX installation. The MIMIX Main Menu has two assistance levels, basic and intermediate. The command defaults to the basic assistance level, shown in Figure 6, with its options designed to simplify day-to-day interaction with MIMIX. Figure 7 shows the intermediate assistance level.

The options on the menu vary with the assistance level. In either assistance level, the available options also depend on the MIMIX products installed in the installation library and their licensing. The products installed and the licensing also affect subsequent menus and displays.

Accessing the menu - If you know the name of the MIMIX installation you want, you can use the name to library-qualify the command, as follows:

Type the command library-name/MIMIX and press Enter. The default name of the installation library is MIMIX.

If you do not know the name of the library, do the following:

1. Type the command LAKEVIEW/WRKPRD and press Enter.

2. Type a 9 (Display product menu) next to the product in the library you want on the Lakeview Technology Installed Products display and press Enter.

Changing the assistance level - The F21 key (Assistance level) on the main menu toggles between the basic and intermediate levels of the menu. You can also specify the Assistance level (ASTLVL) parameter on the MIMIX command.
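For example, to open the menu at a specific assistance level directly (the *BASIC value shown is an assumption about the ASTLVL value names):

```cl
MIMIX ASTLVL(*BASIC)   /* open the MIMIX Basic Main Menu             */
```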

Note: Procedures are written assuming you are using the MIMIX Availability Status (WRKMMXSTS) display, which can only be selected from the MIMIX Basic Main Menu. We recommend you use the MIMIX Basic Main Menu unless you must access the MIMIX Intermediate Main Menu.

Figure 6. MIMIX Basic Main Menu

                         MIMIX Basic Main Menu
                                                    System:   SYSTEM1
 MIMIX
 Select one of the following:
      1. Availability status                WRKMMXSTS
      2. Start MIMIX
      3. End MIMIX
      5. Start or complete switch
     11. Configuration menu
     12. Work with monitors                 WRKMON
     13. Work with messages                 WRKMSGLOG
     31. Product management menu            LAKEVIEW/PRDMGT
 Selection or command
 ===>
 F3=Exit   F4=Prompt   F9=Retrieve   F21=Assistance level   F12=Cancel
 (C) Copyright Lakeview Technology Inc., 1990, 2007.

Figure 7. MIMIX Intermediate Main Menu

                       MIMIX Intermediate Main Menu
                                                    System:   SYSTEM1
 MIMIX
 Select one of the following:
      1. Work with data groups              WRKDG
      2. Work with systems                  WRKSYS
      3. Work with messages                 WRKMSGLOG
      4. Work with monitors                 WRKMON
     11. Configuration menu
     12. Compare, verify, and synchronize menu
     13. Utilities menu
     31. Product management menu            LAKEVIEW/PRDMGT
 Selection or command
 ===>
 F3=Exit   F4=Prompt   F9=Retrieve   F21=Assistance level   F12=Cancel
 (C) Copyright Lakeview Technology Inc., 1990, 2007.


Chapter 4 Planning choices and details by object class

This chapter describes the replication choices available for objects and identifies critical requirements, limitations, and configuration considerations for those choices.

Many MIMIX processes are customized to provide optimal handling for certain classes of related object types and differentiate between database files, library-based objects, integrated file system (IFS) objects, and document library objects (DLOs). Each class of information is identified for replication by a corresponding class of data group entries. A data group can have any combination of data group entry classes. Some classes even support multiple choices for replication.

In each class, a data group entry identifies a source of information that can be replicated by a specific data group. When you configure MIMIX, each data group entry you create identifies one or more objects to be considered for replication or to be explicitly excluded from replication. When determining whether to replicate a journaled transaction, MIMIX evaluates all of the data group entries for the class to which the object belongs. If the object is within the name space determined by the existing data group entries, the transaction is replicated.

The topics in this chapter include:

• “Replication choices by object type” on page 96 identifies the available replication choices for each object class.

• “Configured object auditing value for data group entries” on page 98 describes how MIMIX uses a configured object auditing value that is identified in data group entries and when MIMIX will change an object’s auditing value to match this configuration value.

• “Identifying library-based objects for replication” on page 100 includes information that is common to all library-based objects, such as how MIMIX interprets the data group object entries defined for a data group. This topic also provides examples and additional detail about configuring entries to replicate spooled files and user profiles.

• “Identifying logical and physical files for replication” on page 105 identifies the replication choices and considerations for *FILE objects with logical or physical file extended attributes. This topic identifies the requirements, limitations, and configuration requirements of MIMIX Dynamic Apply and legacy cooperative processing.

• “Identifying data areas and data queues for replication” on page 112 identifies the replication choices and configuration requirements for library-based objects of type *DTAARA and *DTAQ. This topic also identifies restrictions for replication of these object types when user journal processes (advanced journaling) are used.

• “Identifying IFS objects for replication” on page 118 identifies supported and unsupported file systems, replication choices, and considerations such as long path names and case sensitivity for IFS objects. This topic also identifies restrictions and configuration requirements for replication of these object types when user journal processes (advanced journaling) are used.


• “Identifying DLOs for replication” on page 124 describes how MIMIX interprets the data group DLO entries defined for a data group and includes examples for documents and folders.

• “Processing of newly created files and objects” on page 127 describes how new IFS objects, data areas, data queues, and files that have journaling implicitly started are replicated from the user journal.

• “Processing variations for common operations” on page 130 describes configuration-related variations in how MIMIX replicates move/rename, delete, and restore operations.


Replication choices by object type

With version 5, a new configuration of MIMIX that uses shipped defaults for all configuration choices will use remote journaling support for replication from user journals. The default configuration choices result in physical files (data and source) as well as logical files being processed through user journal replication, and all other supported object types and classes being replicated using system journal replication. You can optionally use the other replication processes described in Table 6.

Table 6. Replication choices by object class

Objects of type *FILE with extended attributes PF (data, source) or LF
  Default: user journal with MIMIX Dynamic Apply (1)
    Required DG entries: object entries and file entries
  Other: for PF data files, legacy cooperative processing (2); for PF source and LF files, system journal
    Required DG entries: object entries and file entries
  More information: “Identifying logical and physical files for replication” on page 105

Objects of type *FILE with other extended attributes
  Default: system journal
    Required DG entries: object entries
  More information: “Identifying library-based objects for replication” on page 100

Objects of type *DTAARA
  Default: system journal
    Required DG entries: object entries
  Other: advanced journaling (2)
    Required DG entries: object entries and object tracking entries
  Other: data area polling process associated with user journal (2)
    Required DG entries: data area entries
  More information: “Identifying data areas and data queues for replication” on page 112

Objects of type *DTAQ
  Default: system journal
    Required DG entries: object entries
  Other: advanced journaling (2)
    Required DG entries: object entries and object tracking entries
  More information: “Identifying data areas and data queues for replication” on page 112

Other library-based objects
  Default: system journal
    Required DG entries: object entries
  More information: “Identifying library-based objects for replication” on page 100

IFS objects
  Default: system journal
    Required DG entries: IFS entries
  Other: advanced journaling (2)
    Required DG entries: IFS entries and IFS tracking entries
  More information: “Identifying IFS objects for replication” on page 118

DLOs
  Default: system journal
    Required DG entries: DLO entries
  More information: “Identifying DLOs for replication” on page 124

1. New data groups are created to use remote journaling and to cooperatively process files using MIMIX Dynamic Apply. Existing data groups can be converted to this method of cooperative processing.

2. User journal replication can be configured for either remote journaling or MIMIX source-send processes.


Configured object auditing value for data group entries

When you create data group entries for library-based objects, IFS objects, or DLOs, you can specify an object auditing value within the configuration. This configured object auditing value affects how MIMIX handles changes to the attributes of objects. It is particularly important for, but not limited to, objects configured for system journal replication.

The Object auditing value (OBJAUD) parameter defines a configured object auditing level for use by MIMIX. This configured value is associated with all objects identified for processing by the data group entry. An object’s actual auditing level determines the extent to which changes to the object are recorded in the system journal and replicated by MIMIX. The configured value is used during initial configuration and during processing of requests to compare objects that are identified by configuration data.

In specific scenarios, MIMIX evaluates whether an object’s auditing value matches the configured value of the data group entry that most closely matches the object being processed. If the actual value is lower than the configured value, MIMIX sets the object to the configured value so that future changes to the object will be recorded as expected in the system journal and therefore can be replicated.

Note: MIMIX only considers changing an object’s auditing value when the data group object entry is configured for system journal replication. MIMIX does not change the object’s value for files that are configured for MIMIX Dynamic Apply or legacy cooperative processing or for data areas and data queues that are configured for user journal replication.

The configured value specified in data group entries can affect replication of some journal entries generated when an object attribute changes. Specifically, the configured value can affect replication of T-ZC journal entries for files and IFS objects and T-YC entries for DLOs. Changes that generate other types of journal entries are not affected by this parameter.

When MIMIX changes the audit level, the possible values have the following results:

• The default value, *CHANGE, ensures that all changes to the object by all users are recorded in the system journal.

• The value *ALL ensures that all changes or read accesses to the object by all users are recorded in the system journal. The journal entries generated by read accesses to objects are not used for replication and their presence can adversely affect replication performance.

• The value *NONE results in no entries recorded in the system journal when the object is accessed or changed.

The values *CHANGE and *ALL result in replication of T-ZC and T-YC journal entries. The value *NONE prevents replication of attribute and data changes for the identified object or DLO because T-ZC and T-YC entries are not recorded in the system journal. For files configured for MIMIX Dynamic Apply and any IFS objects, data areas, or data queues configured for user journal replication, the value *NONE can improve MIMIX performance by preventing unneeded entries from being written to the system journal.
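MIMIX makes these auditing changes itself where configured. To inspect or set an object's auditing value manually, the standard i5/OS commands can be used (PRODLIB/MYFILE is a hypothetical object):

```cl
/* Record all changes to the object in the system journal (QAUDJRN). */
CHGOBJAUD  OBJ(PRODLIB/MYFILE) OBJTYPE(*FILE) OBJAUD(*CHANGE)

/* The current auditing value appears in the full object             */
/* description.                                                      */
DSPOBJD    OBJ(PRODLIB/MYFILE) OBJTYPE(*FILE) DETAIL(*FULL)
```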


When a compare request includes an object with a configured object auditing value of *NONE, any differences found for attributes that could generate T-ZC or T-YC journal entries are reported as *EC (equal configuration).

You may also want to read the following:

• For more information about when MIMIX sets an object’s auditing value, see “Managing object auditing” on page 57.

• For more information about manually setting values and examples, see “Setting data group auditing values manually” on page 297.

• To see what attributes can be compared and replicated, see the following topics:

– “Attributes compared and expected results - #FILATR, #FILATRMBR audits” on page 591

– “Attributes compared and expected results - #OBJATR audit” on page 596

– “Attributes compared and expected results - #IFSATR audit” on page 604

– “Attributes compared and expected results - #DLOATR audit” on page 606


Identifying library-based objects for replication

MIMIX uses data group object entries to identify whether to process transactions for library-based objects. Collectively, the object entries identify which library-based objects can be replicated by a particular data group.

Each data group object entry identifies one or more library-based objects. An object entry can specify either a specific or a generic name for the library and object. In addition, each object entry also identifies the object types and extended object attributes (for *FILE and *DEVD objects) to be selected, defines a configured object auditing level for the identified objects, and indicates whether the identified objects are to be included in or excluded from replication.

For most supported object types which can be identified by data group object entries, only the system journal replication path is available. For a list of object types, see “Supported object types for system journal replication” on page 549. This list includes information about what can be specified for the extended attributes of *FILE objects.

A limited number of object types which use the system journal replication path have unique configuration requirements. These are described in “Identifying spooled files for replication” on page 102 and “Replicating user profiles and associated message queues” on page 104.

For detailed procedures, see “Configuring data group entries” on page 265.

Replication options for object types journaled to a user journal - For objects of type *FILE, *DTAARA, and *DTAQ, MIMIX supports multiple replication methods. For these object types, additional configuration data is evaluated when determining what replication path to use for the identified objects.

For *FILE objects, the extended attribute and other configuration data are considered when MIMIX determines what replication path to use for identified objects.

• For logical and physical files, MIMIX supports several methods of replication. Each method varies in its efficiency, in its supported extended attributes, and in additional configuration requirements. See “Identifying logical and physical files for replication” on page 105 for additional details.

• For other extended attribute types, MIMIX supports only system journal replication. Only data group object entries are required to identify these files for replication.

For *FILE objects configured for replication through the system journal, MIMIX caches extended file attribute information for a fixed set of *FILE objects. Also, the Omit content (OMTDTA) parameter provides the ability to omit a subset of data-changing operations from replication. For more information, see “Caching extended attributes of *FILE objects” on page 345 and “Omitting T-ZC content from system journal replication” on page 387.

For *DTAARA and *DTAQ object types, MIMIX supports replication using either system journal or user journal replication processes. A configuration that uses the user journal is also called an advanced journaling configuration. Additional information, including configuration requirements, is described in “Identifying data areas and data queues for replication” on page 112.


How MIMIX uses object entries to evaluate journal entries for replication

The following information and example can help you determine whether the objects you specify in data group object entries will be selected for replication. MIMIX determines which replication process will be used only after it determines whether the library-based object will be replicated.

When determining whether to process a journal entry for a library-based object, MIMIX looks for a match between the object information in the journal entry and one of the data group object entries. The data group object entries are checked from the most specific to the least specific. The library name is the first search element, followed by the object type, the attribute (for files and device descriptions), and the object name. The most significant match found (if any) is checked to determine whether to include or exclude the journal entry in replication.

Table 7 shows how MIMIX checks a journal entry for a match with a data group object entry. The columns are arranged to show the priority of the elements within the object entry, with the most significant (library name) at left and the least significant (object name) at right.

Table 7. Matching order for library-based object names

Search Order  Library Name  Object Type  Attribute (1)  Object Name
1             Exact         Exact        Exact          Exact
2             Exact         Exact        Exact          Generic*
3             Exact         Exact        Exact          *ALL
4             Exact         Exact        *ALL           Exact
5             Exact         Exact        *ALL           Generic*
6             Exact         Exact        *ALL           *ALL
7             Exact         *ALL         Exact          Exact
8             Exact         *ALL         Exact          Generic*
9             Exact         *ALL         Exact          *ALL
10            Exact         *ALL         *ALL           Exact
11            Exact         *ALL         *ALL           Generic*
12            Exact         *ALL         *ALL           *ALL
13            Generic*      Exact        Exact          Exact
14            Generic*      Exact        Exact          Generic*
15            Generic*      Exact        Exact          *ALL
16            Generic*      Exact        *ALL           Exact
17            Generic*      Exact        *ALL           Generic*
18            Generic*      Exact        *ALL           *ALL
19            Generic*      *ALL         Exact          Exact
20            Generic*      *ALL         Exact          Generic*
21            Generic*      *ALL         Exact          *ALL
22            Generic*      *ALL         *ALL           Exact
23            Generic*      *ALL         *ALL           Generic*
24            Generic*      *ALL         *ALL           *ALL

(1) The extended object attribute is only checked for objects of type *FILE and *DEVD.


When configuring data group object entries, the flexibility of the generic support allows a variety of include and exclude combinations for a given library or set of libraries. But, generic name support can also cause unexpected results if it is not well planned. Consider the search order shown in Table 7 when configuring data group object entries to ensure that objects are not unexpectedly included or excluded in replication.

Example - For example, say that you have a data group configured with data group object entries like those shown in Table 9. The journal entries MIMIX is evaluating for replication are shown in Table 8.

A transaction is received from the system journal for program BOOKKEEP in library FINANCE. MIMIX will replicate this object since it fits the criteria of the first data group object entry shown in Table 9.

A transaction for file ACCOUNTG in library FINANCE would also be replicated since it fits the third entry.

A transaction for data area BALANCE in library FINANCE would not be replicated since it fits the second entry, an Exclude entry.

Likewise, a transaction for data area ACCOUNT1 in library FINANCE would not be replicated. Although the transaction fits both the second and third entries shown in Table 9, the second entry determines whether to replicate because it provides a more significant match in the second criterion checked (object type). The second entry provides an exact match for the library name, an exact match for the object type, and an object name match to *ALL.

In order for MIMIX to process the data area ACCOUNT1, an additional data group object entry with process type *INCLD could be added for object type of *DTAARA with an exact name of ACCOUNT1 or a generic name ACC*.
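The matching order of Table 7 and the example above can be modeled in a short sketch. This is an illustrative reimplementation, not MIMIX code: entry and field names are invented, and real MIMIX evaluation involves details not shown here.

```python
# Illustrative sketch (not MIMIX code): selecting the most significant
# matching data group object entry, following the search order in Table 7.
# Priority: library name, then object type, then attribute, then object
# name; at each position an exact match beats a generic* match, which
# beats *ALL.

def _rank(pattern, actual):
    """0 = exact match, 1 = generic* match, 2 = *ALL, -1 = no match."""
    if pattern == actual:
        return 0
    if pattern != "*ALL" and pattern.endswith("*") and actual.startswith(pattern[:-1]):
        return 1
    if pattern == "*ALL":
        return 2
    return -1

def best_entry(entries, library, objtype, name, attr="*ALL"):
    """Return the most significant matching entry, or None if no match."""
    best, best_key = None, None
    for e in entries:
        key, matched = [], True
        for pat, act in ((e["lib"], library), (e["type"], objtype),
                         (e["attr"], attr), (e["obj"], name)):
            r = _rank(pat.upper(), act.upper())   # i5/OS names: case-insensitive
            if r < 0:
                matched = False
                break
            key.append(r)
        if matched and (best_key is None or key < best_key):
            best, best_key = e, key
    return best
```

With the three entries of Table 9, `best_entry` selects the *EXCLD data area entry for ACCOUNT1, matching the reasoning in the example.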

Table 8. Sample journal transactions for objects in the system journal

Object Type  Library  Object
*PGM         FINANCE  BOOKKEEP
*FILE        FINANCE  ACCOUNTG
*DTAARA      FINANCE  BALANCE
*DTAARA      FINANCE  ACCOUNT1

Table 9. Sample of data group object entries, arranged in order from most to least specific

Entry  Source Library  Object Type  Object Name  Attribute  Process Type
1      Finance         *PGM         *ALL         *ALL       *INCLD
2      Finance         *DTAARA      *ALL         *ALL       *EXCLD
3      Finance         *ALL         acc*         *ALL       *INCLD

Identifying spooled files for replication

MIMIX supports spooled file replication on an output queue basis. When an output queue (*OUTQ) is identified for replication by a data group object entry, its spooled files are not automatically replicated when default values are used. Table 10 identifies the values required for spooled file replication. When MIMIX processes an output queue that is identified by an object entry with the appropriate settings, all spooled files for the output queue (*OUTQ) are replicated by system journal replication processes.

It is important to consider which spooled files must be replicated and which should not. Some output queues contain a large number of non-critical spooled files and probably should not be replicated. Most likely, you want to limit the spooled files that you replicate to mission-critical information. It may be useful to direct important spooled files that should be replicated to specific output queues instead of defining a large number of output queues for replication.

When an output queue is selected for replication and the data group object entry specifies *YES for Replicate spooled files, MIMIX ensures that the values *SPLFDTA and *PRTDTA are included in the system value for the security auditing level (QAUDLVL). This causes the system to generate spooled file (T-SF) entries in the system journal. When a spooled file is created, moved, deleted, or its attributes are changed, the resulting entries in the system journal are processed by a MIMIX object send job and are replicated.

Additional choices for spooled file replication MIMIX provides additional options to customize your choices for spooled file replication.

Keeping deleted spooled files: You can also specify to keep spooled files on the target system after they have been deleted from the source system by using the Keep deleted spooled files parameter on the data group definition. The parameter is also available on commands to add and change data group object entries.

Options for spooled file status: You can specify additional options for processing spooled files. The Spooled file options (SPLFOPT) parameter is only available on commands to add and change data group object entries. The following values support choosing how status of replicated spooled files is handled on the target system:

*NONE This is the shipped default value. Spooled files on the target system will have the same status as on the source system.

*HLD All replicated spooled files are put on hold on the target system regardless of their status on the source system.

*HLDONSAV All replicated spooled files that have a saved status on the source system will be put on hold on the target system. Spooled files on the source system which have other status values will have the same status on the target system.

This parameter can be helpful if your environment includes programs which automatically process spooled files on the target system. For example, if you have a program that automatically prints spooled files, you may want to use one of these values to control what is printed after replication when printer writers are active.

Table 10. Data group object entry parameter values for spooled file replication

Parameter                            Value
Object type (OBJTYPE)                *ALL or *OUTQ
Replicate spooled files (REPSPLF)    *YES

If you move a spooled file between output queues which have different configured values for the SPLFOPT parameter, consider the following:

• Spooled files moved from an output queue configured with SPLFOPT(*NONE) to an output queue configured with SPLFOPT(*HLD) are placed in a held state on the target system.

• Spooled files moved from an output queue configured with SPLFOPT(*HLD) to an output queue configured with SPLFOPT(*NONE) or SPLFOPT(*HLDONSAV) remain in a held state on the target system until you take action to release them.
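The SPLFOPT behavior described above can be summarized as a small mapping. This is an illustrative sketch, not MIMIX code; the status codes shown (*RDY, *SAV, *HLD) are standard i5/OS spooled file statuses:

```python
# Illustrative sketch (not MIMIX code): the target-side status of a
# replicated spooled file under each Spooled file options (SPLFOPT) value.

def target_status(source_status, splfopt):
    if splfopt == "*HLD":
        return "*HLD"                      # always held on the target
    if splfopt == "*HLDONSAV" and source_status == "*SAV":
        return "*HLD"                      # held only if saved on the source
    return source_status                   # *NONE (default): mirror the source
```

For example, a ready (*RDY) spooled file replicated through an entry with SPLFOPT(*HLDONSAV) keeps its *RDY status on the target, while a saved (*SAV) one is held.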

Replicating user profiles and associated message queues

When user profile objects (*USRPRF) are identified by a data group object entry which specifies *ALL or *USRPRF for the Object type parameter, MIMIX replicates the objects using system journal replication processes.

When MIMIX replicates user profiles, the message queue (*MSGQ) objects associated with the *USRPRF objects may also be created automatically on the target system as a result of replication. If the *MSGQ objects are not also configured for replication, the private authorities for the *MSGQ objects may not be the same between the source and target systems. If it is necessary for the private authorities for the *MSGQ objects to be identical between the source and target systems, it is recommended that the *MSGQ objects associated with *USRPRF objects be configured for replication.

For example, Table 11 shows the data group object entries required to replicate user profiles beginning with the letter A and maintain identical private authorities on associated message queues. In this example, the user profile ABC and its associated message queue are excluded from replication.

Table 11. Sample data group object entries for maintaining private authorities of message queues associated with user profiles

Entry  Source Library  Object Type  Object Name  Process Type
1      QSYS            *USRPRF      A*           *INCLD
2      QUSRSYS         *MSGQ        A*           *INCLD
3      QSYS            *USRPRF      ABC          *EXCLD
4      QUSRSYS         *MSGQ        ABC          *EXCLD


Identifying logical and physical files for replication

MIMIX supports multiple ways of replicating *FILE objects with extended attributes of LF, PF-DTA, PF38-DTA, PF-SRC, and PF38-SRC. MIMIX configuration data determines the replication method used for these logical and physical files. The following configurations are possible:

• MIMIX Dynamic Apply - MIMIX Dynamic Apply is strongly recommended. In this configuration, logical files and physical files (source and data) are replicated primarily through the user (database) journal. This configuration is the most efficient way to replicate LF, PF-DTA, PF38-DTA, PF-SRC, and PF38-SRC files. In this configuration, files are identified by data group object entries and file entries.

• Legacy cooperative processing - Legacy cooperative processing supports only data files (PF-DTA and PF38-DTA). It does not support source physical files or logical files. In legacy cooperative processing, record data and member data operations are replicated through user journal processes, while all other file transactions such as creates, moves, renames, and deletes are replicated through system journal processes. The database processes can use either remote journaling or MIMIX source-send processes, making legacy cooperative processing the recommended choice for physical data files when the remote journaling environment required by MIMIX Dynamic Apply is not possible. In this configuration, files are identified by data group object entries and file entries.

• User journal (database) only configurations - Environments that do not meet MIMIX Dynamic Apply requirements but which have data group definitions that specify TYPE(*DB) can only replicate data changes to physical files. These configurations may not be able to replicate other operations such as creates, restores, moves, renames, and some copy operations. In this configuration, files are identified by data group file entries.

• System journal (object) only configurations - Data group definitions which specify TYPE(*OBJ) are less efficient at processing logical and physical files. The entire member is updated with each replicated transaction. Members must be closed in order for replication to occur. In this configuration, files are identified by data group object entries.

You should be aware of common characteristics of replicating library-based objects, such as when the configured object auditing value is used and how MIMIX interprets data group entries to identify objects eligible for replication. For this information, see “Configured object auditing value for data group entries” on page 98 and “How MIMIX uses object entries to evaluate journal entries for replication” on page 101.

Some advanced techniques may require specific configurations. See “Configuring advanced replication techniques” on page 353 for additional information.

For detailed procedures, see “Creating data group object entries” on page 267.

Considerations for LF and PF files

As of version 5, newly created data groups are automatically configured to use MIMIX Dynamic Apply when its requirements and restrictions are met and shipped command defaults are used. With this configuration, logical and physical files are processed primarily from the user journal.

Cooperative journal - The value specified for the Cooperative journal (COOPJRN) parameter in the data group definition is critical to determining how files are cooperatively processed. When creating a new data group, you can explicitly specify a value or you can allow MIMIX to automatically change the default value (*DFT) to either *USRJRN or *SYSJRN based on whether operating system and configuration requirements for MIMIX Dynamic Apply are met. When requirements are met, MIMIX changes the value *DFT to *USRJRN. When the MIMIX Dynamic Apply requirements are not met, MIMIX changes *DFT to *SYSJRN.

Note: Data groups created prior to upgrading to version 5 continue to use their existing configuration. The installation process sets the value of COOPJRN to *SYSJRN and this value remains in effect until you take action as described in “Converting to MIMIX Dynamic Apply” on page 150.

When a data group definition meets the requirements for MIMIX Dynamic Apply, any logical files and physical (source and data) files properly identified for cooperative processing will be processed via MIMIX Dynamic Apply unless a known restriction prevents it.

When a data group definition does not meet the requirements for MIMIX Dynamic Apply but still meets legacy cooperative processing requirements, any PF-DTA or PF38-DTA files properly configured for cooperative processing will be replicated using legacy cooperative processing. All other types of files are processed using system journal replication.
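The fallback order described above can be sketched as a decision function. This is an illustration of the documented rules only, not MIMIX logic; whether a configuration "meets requirements" is simplified here to boolean flags:

```python
# Illustrative sketch (not MIMIX logic): which replication path a *FILE
# object takes, per the precedence described above: MIMIX Dynamic Apply
# first, then legacy cooperative processing, then the system journal.

DYNAMIC_APPLY_ATTRS = {"LF", "LF38", "PF-DTA", "PF38-DTA", "PF-SRC", "PF38-SRC"}
LEGACY_COOP_ATTRS = {"PF-DTA", "PF38-DTA"}   # data files only

def file_replication_path(attr, dynamic_apply_ok, legacy_coop_ok):
    """attr: extended attribute; *_ok: whether that configuration's
    requirements are met for the data group."""
    if dynamic_apply_ok and attr in DYNAMIC_APPLY_ATTRS:
        return "MIMIX Dynamic Apply (user journal)"
    if legacy_coop_ok and attr in LEGACY_COOP_ATTRS:
        return "legacy cooperative processing (user + system journal)"
    return "system journal replication"
```

For instance, a PF-SRC file in a data group that meets only legacy cooperative processing requirements falls through to system journal replication, since legacy cooperative processing does not support source files.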

Logical file considerations - Consider the following for logical files.

• Logical files are replicated through the user journal when MIMIX Dynamic Apply requirements are met. Otherwise, they are replicated through the system journal.

• It is strongly recommended that logical files reside in the same data group as all of their associated physical files.

Physical file considerations - Consider the following for physical files.

• Physical files (source and data) are replicated through the user journal when MIMIX Dynamic Apply requirements are met. Otherwise, data files are replicated using legacy cooperative processing if those requirements are met, and source files are replicated through the system journal.

• If a data group definition specifies TYPE(*DB) and the configuration meets other MIMIX Dynamic Apply requirements, source files need to be identified by both data group object entries and data group file entries.

• If a data group is configured for only user journal replication (TYPE is *DB) and does not meet other configuration requirements for MIMIX Dynamic Apply, source files should be identified by only data group file entries.

• If a data group is configured for only system journal replication (TYPE is *OBJ), any source files should be identified by only data group object entries. Any data group object entries configured for cooperative processing will be replicated through the system journal and should not have any corresponding data group file entries.

• Physical files with referential constraints require a field in another physical file to be valid. All physical files in a referential constraint structure must be in the same database apply session. See “Requirements and limitations of MIMIX Dynamic Apply” on page 110 and “Requirements and limitations of legacy cooperative processing” on page 111 for additional information. For more information about load balancing apply sessions, see “Database apply session balancing” on page 87.

Commitment control - This database technique allows multiple updates to one or more files to be considered a single transaction. When used, commitment control maintains database integrity by not exposing a part of a database transaction until the whole transaction completes. This ensures that there are no partial updates when the process is interrupted prior to the completion of the transaction. This technique is also useful in the event that a partially updated transaction must be removed, or rolled back, from the files or when updates identified as erroneous need to be removed.

MIMIX fully simulates commitment control on the target system. When commitment control is used on a source system in a MIMIX environment, MIMIX maintains the integrity of the database on the target system by preventing partial transactions from being applied until the whole transaction completes. If the source system becomes unavailable, MIMIX will not have applied incomplete transactions on the target system. In the event of an incomplete (or uncommitted) commitment cycle, the integrity of the database is maintained.
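The behavior described above - withholding entries until their commit cycle completes - can be sketched conceptually. This is not MIMIX internals; the class and field names are invented to illustrate the idea:

```python
# Conceptual sketch (invented names, not MIMIX internals): buffering
# journal entries on the target until their commit cycle completes, so
# partial transactions are never exposed in the target database.
from collections import defaultdict

class CommitAwareApplier:
    def __init__(self):
        self.pending = defaultdict(list)   # commit cycle id -> buffered ops
        self.applied = []                  # operations applied to the target

    def entry(self, cycle_id, operation):
        if cycle_id is None:               # not under commitment control
            self.applied.append(operation)
        else:                              # hold until the cycle completes
            self.pending[cycle_id].append(operation)

    def commit(self, cycle_id):            # whole transaction is now safe
        self.applied.extend(self.pending.pop(cycle_id, []))

    def rollback(self, cycle_id):          # discard the partial transaction
        self.pending.pop(cycle_id, None)
```

If the source becomes unavailable mid-cycle, the buffered operations are simply never applied, which mirrors the integrity guarantee described above.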

If your application dynamically creates database files that are subsequently used in a commitment control environment, use MIMIX Dynamic Apply for replication.

Without MIMIX Dynamic Apply, replication of the create operation may fail if a commit cycle is open when MIMIX tries to save the file. The save operation will be delayed and may fail if the file being saved has uncommitted transactions.

Files with LOBs

Large objects (LOBs) in files that are configured for either MIMIX Dynamic Apply or legacy cooperative processing are automatically replicated.

LOBs can greatly increase the amount of data being replicated. As a result, you may see some degradation in your replication activity. The amount of degradation you see is proportionate to the amount of journal entries with LOBs that are applied per hour. This is also true during switch processing if you are using remote journaling and have unconfirmed entries with LOB data.

Since the volume of data to be replicated can be very large, you should consider using the minimized journal entry data function along with LOB replication. IBM support for minimized journal entry data can be extremely helpful when database records contain static, very large objects. If minimized journal entry data is enabled, journal entries for database files containing unchanged LOB data may be complete and therefore processed like any other complete journal entry. This can significantly improve performance, throughput, and storage requirements. If minimized journal entry data is used with files containing LOBs, keyed replication is not supported. For more information, see “Minimized journal entry data” on page 339.


User exit programs may be affected when journaled LOB data is added to an existing data group. Non-minimized LOB data produces incomplete entries. For incomplete journal entries, two or more entries with duplicate journal sequence numbers and journal codes and types will be provided to the user exit program when the data for the incomplete entry is retrieved and segmented. Programs need to correctly handle these duplicate entries representing the single, original journal entry.
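A user exit program can recombine such segmented entries by grouping on the duplicated identifying fields. The sketch below is hypothetical - field names are illustrative, and a real exit program receives entries through the MIMIX user exit interface rather than as Python dictionaries:

```python
# Hypothetical sketch: grouping segmented journal entries that share the
# same journal sequence number, journal code, and entry type, so they can
# be handled as one logical (original) journal entry. Field names are
# illustrative only.

def group_segments(entries):
    """Yield lists of entries, one list per original journal entry.
    Entries are assumed to arrive in journal order."""
    group = []
    for e in entries:
        if group and (e["seq"], e["code"], e["type"]) != (
                group[0]["seq"], group[0]["code"], group[0]["type"]):
            yield group
            group = []
        group.append(e)
    if group:
        yield group
```

Each yielded list represents one original journal entry; a program that treats each segment as an independent entry would otherwise process the same update more than once.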

You should also be aware of the following restrictions:

• Copy Active File (CPYACTF) and Reorganize Active File (RGZACTF) do not work against database files with LOB fields.

• There is no collision detection for LOB data. Most collision detection classes compare the journal entries with the content of the record on the target system. Although you can compare the actual content of the record, you cannot compare the content of the LOBs.

Configuration requirements for LF and PF files

MIMIX Dynamic Apply and legacy cooperative processing have unique requirements for data group definitions as well as many common requirements for data group object entries and file entries, as indicated in Table 12. In both configurations, you must have:

• A data group definition which specifies the required values.

• One or more data group object entries that specify the required values. These entries identify the items within the name space for replication. You may need to create additional entries to achieve the desired results, including entries which specify a Process type of *EXCLD.

• The identified existing objects must be journaled to the journal defined for the data group.

• Data group file entries for the items identified by data group object entries. Processing cannot occur without these corresponding data group file entries.

Table 12. Key configuration values required for MIMIX Dynamic Apply and legacy cooperative processing

Critical Parameters                    MIMIX Dynamic Apply   Legacy Cooperative
                                       Required Values       Processing Required Values
Data Group Definition
  Data group type (TYPE)               *ALL or *DB           *ALL
  Use remote journal link (RJLNK)      *YES                  any value
  Cooperative journal (COOPJRN)        *DFT or *USRJRN       *DFT or *SYSJRN
  File and tracking ent. opts
  (FEOPT): Replication type            *POSITION             any value
Data Group Object Entries
  Object type (OBJTYPE)                *ALL or *FILE         *ALL or *FILE
  Attribute (OBJATR)                   *ALL or one of: LF,   *ALL, PF-DTA, or
                                       LF38, PF-DTA, PF-SRC, PF38-DTA
                                       PF38-DTA, PF38-SRC
  Cooperate with database (COOPDB)     *YES                  *YES
  Cooperating object types (COOPTYPE)  *FILE                 *FILE
  File and tracking ent. opts
  (FEOPT): Replication type            *POSITION             any value

Notes: For TYPE and for the FEOPT Replication type, see “Requirements and limitations of MIMIX Dynamic Apply” on page 110. For COOPJRN, see the “Cooperative journal” discussion under “Considerations for LF and PF files”. For COOPDB, see “Corresponding data group file entries” below.

Corresponding data group file entries - Both MIMIX Dynamic Apply and legacy cooperative processing require that existing files identified by a data group object entry which specifies *YES for the Cooperate with DB (COOPDB) parameter must also be identified by data group file entries.

When a file is identified by both a data group object entry and a data group file entry, the following are also required:

• The object entry must enable the cooperative processing of files by specifying COOPDB(*YES) and COOPTYPE(*FILE).

• If name mapping is used between systems, the data group object entry and file entry must have the same name mapping defined.

• If the data group object entry and file entry specify different values for the File and tracking ent. opts (FEOPT) parameter, the values specified in the data group file entry take precedence.

• Files defined by data group file entries must have journaling started and must be synchronized. If journaling is not started, MIMIX cannot replicate activity for the file.

Typically, data group object entries are created during initial configuration and are then used as the source for loading the data group file entries. The #DGFE audit can be used to determine whether corresponding data group file entries exist for the files identified by data group object entries.

Requirements and limitations of MIMIX Dynamic Apply

MIMIX Dynamic Apply requires that user journal replication be configured to use remote journaling. Specific data group definition and data group entry requirements are listed in Table 12.

MIMIX Dynamic Apply configurations have the following limitations.

Operating system release - The following object changes are only replicated when running i5/OS release V5R4 or later: source file date/time, compiler, object control level, licensed program, program temporary fixes (PTF), authorized program analysis reports (APAR), allow change by program, user-defined attributes, days used count and reset date, product option ID, product option load ID, component ID, last used date, change date and time stamp, and member’s days used count and reset date.

Files in library - It is recommended that files within a single library be replicated using the same user journal.

Data group file entries for members - Data group file entries (DGFE) for specific member names are not supported unless they are created by MIMIX. MIMIX may create these for error hold processing.

Name mapping - MIMIX Dynamic Apply configurations support name mapping at the library level only. Entries with object name mapping are not supported. For example, MYLIB/MYOBJ mapped to MYLIB/OTHEROBJ is not supported. If you require object name mapping, it is supported in legacy cooperative processing configurations.

TYPE(*DB) data groups - MIMIX Dynamic Apply configurations that specify TYPE(*DB) in the data group definition will not be able to replicate the following actions:

• Files created using CPYF CRTFILE(*YES) on OS V5R3 into a library configured for replication

• Files restored into a source library configured for replication

• Files moved or renamed from a non-replicated library into a replicated library

• Files created which are not otherwise journaled upon creation into a library configured for replication

Files created by these actions can be added to the MIMIX configuration by running the #DGFE audit. The audit recovery will synchronize the file as part of adding the file entry to the configuration. In data groups that specify TYPE(*ALL), the above actions are fully supported.

Referential constraints - The following restrictions apply:

• If using referential constraints with *CASCADE or *SETNULL actions you must specify *YES for the Journal on target (JRNTGT) parameter in the data group definition.

• Physical files with referential constraints require a field in another physical file to be valid. All physical files in a referential constraint structure must be in the same database apply session. If a particular preferred apply session has been specified in file entry options (FEOPT), MIMIX may ignore the specification in order to satisfy this restriction.

Positional replication only - Keyed replication is not supported by MIMIX Dynamic Apply. Data group definitions, data group object entries, and data group file entries must specify *POSITION for the Replication type element of the file and tracking entry options (FEOPT) parameter. The value *KEYED cannot be used.
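The key requirements in Table 12 and the limitations above can be collected into a simple configuration check. This is an illustrative sketch only - the parameter names follow Table 12, but the dictionary layout and function are invented, not a MIMIX API:

```python
# Illustrative sketch (invented function, parameter names per Table 12):
# checking whether a data group configuration meets the key requirements
# for MIMIX Dynamic Apply.

def dynamic_apply_issues(dg):
    """Return a list of requirement violations; empty means requirements met."""
    issues = []
    if dg.get("TYPE") not in ("*ALL", "*DB"):
        issues.append("Data group type (TYPE) must be *ALL or *DB")
    if dg.get("RJLNK") != "*YES":
        issues.append("Use remote journal link (RJLNK) must be *YES")
    if dg.get("COOPJRN") not in ("*DFT", "*USRJRN"):
        issues.append("Cooperative journal (COOPJRN) must be *DFT or *USRJRN")
    if dg.get("FEOPT_REPTYPE") != "*POSITION":
        issues.append("FEOPT Replication type must be *POSITION "
                      "(keyed replication is not supported)")
    return issues
```

A configuration that fails any check would fall back to legacy cooperative processing or system journal replication, as described earlier in this section.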

Requirements and limitations of legacy cooperative processing

Legacy cooperative processing requires that data groups be configured for both database (user journal) and object (system journal) replication. While remote journaling is recommended, MIMIX source-send processing for database replication is also supported. Specific data group definition and data group entry requirements are listed in Table 12.

Legacy cooperative processing configurations have the following limitations.

Supported extended attributes - Legacy cooperative processing supports only data files (PF-DTA and PF38-DTA).

When a *FILE object is configured for legacy cooperative processing, only file and member attribute changes identified by T-ZC journal entries with a subclass of 7=Change are logged and replicated through system journal replication processes. All member and data changes are logged and replicated through user journal replication processes.

File entry options - If a file is moved or renamed and both names are defined by a data group file entry, the file entry options must be the same in both data group file entries.

Referential constraints - Physical files with referential constraints require a field in another physical file to be valid. All physical files in a referential constraint structure must be in the same apply session. If this is not possible, contact Lakeview Customer Support.


Identifying data areas and data queues for replication

MIMIX uses data group object entries to determine whether to process transactions for data area (*DTAARA) and data queue (*DTAQ) object types. Object entries can be configured so that these object types can be replicated from journal entries recorded in the system journal (default) or in a user journal (optional).

While user journal replication, also called advanced journaling, has significant advantages, you must decide whether it is appropriate for your environment. For more information, see “Planning for journaled IFS objects, data areas, and data queues” on page 85.

For detailed procedures, see “Configuring data group entries” on page 265.

Data areas can also be replicated by the data area poller process associated with the user journal. However, this type of replication is the least preferred and requires data group data area entries. See “Creating data group data area entries” on page 289.

Configuration requirements - data areas and data queues

For any data group object entries you create for data areas or data queues, consider the following:

• You must have at least one data group object entry which specifies a Process type of *INCLD. You may need to create additional entries to achieve the desired results. This may include entries which specify a Process type of *EXCLD.

• When specifying objects in data group object entries, specify only the objects that need to be replicated. Specifying *ALL or a generic name for the System 1 object (OBJ1) parameter will select multiple objects within the library specified for System 1 library (LIB1).

• When you create data group object entries, you can specify an object auditing value within the configuration. The configured object auditing value affects how MIMIX handles changes to attributes of library-based objects. It is particularly important for, but not limited to, objects configured for system journal replication. For objects configured for user journal replication, the configured value can affect MIMIX performance. For detailed information, see “Configured object auditing value for data group entries” on page 98.

Additional requirements for user journal replication - The following additional requirements must be met before data areas or data queues identified by data group object entries can be replicated with user journal processes.

• The data group definition and data group object entries must specify the values indicated in Table 13 for critical parameters.

• Object tracking entries must exist for the objects identified by properly configured object entries. Typically these are created automatically when the data group is started.

• Journaling must be started on both the source and target systems for the objects identified by object tracking entries.

Additionally, see “Planning for journaled IFS objects, data areas, and data queues” on page 85 for additional details if any of the following apply:

• Converting existing configurations - When converting an existing data group to use or add advanced journaling, you must consider whether journals should be shared and whether data area or data queue objects should be replicated in a data group that also replicates database files.

• Serialized transactions - If you need to serialize transactions for database files and data area or data queue objects replicated from a user journal, you may need to adjust the configuration for the replicated files.

• Apply session load balancing - One database apply session, session A, is used for all data area and data queue objects that are replicated from a user journal. Other replication activity can use this apply session, and may cause it to become overloaded. You may need to adjust the configuration accordingly.

• User exit programs - If you use user exit programs that process user journal entries, you may need to modify your programs.

Restrictions - user journal replication of data areas and data queues

For operating systems V5R4 and above, changes to data area and data queue content, as well as changes to structure (such as moves and renames) and number (such as creates and deletes), are recognized and supported through user journal replication.

When considering replicating data areas and data queues using MIMIX user journal replication processes, be aware of the following restrictions:

• For V5R3 operating systems, only a static environment of data areas and data queues is replicated. While changes to the actual data are recognized and replicated, attribute changes are not. MIMIX AutoGuard™ must be used to detect attribute changes that occur on the source objects and correct the differences on the target objects. These functions are supported in environments using V5R4 or higher operating systems.

Table 13. Critical configuration parameters for replicating *DTAARA and *DTAQ objects from a user journal

  Critical Parameters                    Required Values   Configuration Notes

  Data Group Definition
  Data group type (TYPE)                 *ALL

  Data Group Object Entry
  Cooperate with database (COOPDB)       *YES
  Cooperating object types (COOPTYPE)    *DTAARA, *DTAQ    The appropriate object types must be specified to enable advanced journaling. Otherwise, system journal replication results.

• MIMIX does not support before-images for data updates to data areas, and cannot perform data integrity checks on the target system to ensure that data being replaced on the target system is an exact match to the data replaced on the source system. Furthermore, MIMIX does not provide a mechanism to prevent users or applications from updating replicated data areas on the target system accidentally. To guarantee the data integrity of replicated data areas between the source and target systems, you should run MIMIX AutoGuard on a regular basis.

• The apply of data area and data queue objects is restricted to a single database apply job (DBAPYA). If a data group has too much replication activity, this job may fall behind in the processing of journal entries. If this occurs, you should load-level the apply sessions by moving some or all of the database files to another database apply job.

• Pre-existing data areas and data queues to be selected for replication must have journaling started on both the source and target systems before the data group is started.

• The ability to replicate Distributed Data Management (DDM) data areas and data queues is not supported. If you need to replicate DDM data areas and data queues, use standard system journal replication methods.

Supported journal code E and Q entry types

The operating system uses journal codes E and Q to indicate that journal entries are related to operations on data areas and data queues, respectively. When configured for user journal replication, MIMIX recognizes specific E and Q journal entry types as eligible for replication from a user journal.

Table 14 shows the currently supported journal entry types for data areas.

Table 14. Journal entry types supported by MIMIX for data areas

  Journal Code   Type   Description                         Notes
  E              EA     Update data area, after image
  E              EB     Update data area, before image      1
  E              ED     Data area deleted                   1
  E              EE     Create data area                    1
  E              EG     Start journal for data area
  E              EH     End journal for data area
  E              EK     Change journaled object attribute   1
  E              EL     Data area restored                  1
  E              EM     Data area moved                     1
  E              EN     Data area renamed                   1
  E              ES     Data area saved
  E              EW     Start of save for data area
  E              ZA     Change authority                    1
  E              ZB     Change object attribute             1
  E              ZO     Ownership change                    1
  E              ZP     Change primary group                1
  E              ZT     Auditing change                     1

  Notes: 1. The indicated journal entry type is only supported for i5/OS V5R4 and higher.

Table 15 shows the currently supported journal entry types for data queues.

Table 15. Data queue journal entry types supported by MIMIX

  Journal Code   Type   Description                         Notes
  Q              QA     Create data queue                   1
  Q              QB     Start data queue journaling
  Q              QC     Data queue cleared, no key
  Q              QD     Data queue deleted                  1
  Q              QE     End data queue journaling
  Q              QG     Data queue attribute changed        1
  Q              QJ     Data queue cleared, has key
  Q              QK     Send data queue entry, has key
  Q              QL     Receive data queue entry, has key
  Q              QM     Data queue moved                    1
  Q              QN     Data queue renamed                  1
  Q              QR     Receive data queue entry, no key
  Q              QS     Send data queue entry, no key
  Q              QX     Start of save for data queue
  Q              QY     Data queue saved
  Q              QZ     Data queue restored                 1
  Q              ZA     Change authority                    1
  Q              ZB     Change object attribute             1
  Q              ZO     Ownership change                    1
  Q              ZP     Change primary group                1
  Q              ZT     Auditing change                     1

  Notes: 1. The indicated journal entry type is only supported for i5/OS V5R4 and higher.

For more information about journal entries, see Journal Entry Information (Appendix D) in the iSeries Backup and Recovery guide in the IBM eServer iSeries Information Center.
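As a rough illustration of how the note-1 release dependency in Table 14 could be checked, the sketch below encodes the journal code E entry types with a flag marking those that require V5R4 or higher. The entry types and the V5R4 requirement come from the table; the release representation and the helper function are assumptions made for this example, not MIMIX code.

```python
# Illustrative lookup built from Table 14 (not MIMIX code). A True flag
# means the entry type carries note 1 and requires i5/OS V5R4 or higher;
# releases are modeled as (version, release) tuples for easy comparison.

V5R3, V5R4 = (5, 3), (5, 4)

# Journal code E (data area) entry types from Table 14.
DATA_AREA_ENTRIES = {
    "EA": False, "EB": True, "ED": True, "EE": True, "EG": False,
    "EH": False, "EK": True, "EL": True, "EM": True, "EN": True,
    "ES": False, "EW": False, "ZA": True, "ZB": True, "ZO": True,
    "ZP": True, "ZT": True,
}

def replicable(entry_type: str, release: tuple) -> bool:
    """True if this E entry type is eligible at the given i5/OS release."""
    needs_v5r4 = DATA_AREA_ENTRIES.get(entry_type)
    if needs_v5r4 is None:
        return False  # entry type not supported at all
    return release >= V5R4 if needs_v5r4 else True
```

The same shape applies to the journal code Q entry types in Table 15.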


Identifying IFS objects for replication

MIMIX uses data group IFS entries to determine whether to process transactions for objects in the integrated file system (IFS), and what replication path is used. IFS entries can be configured so that the identified objects can be replicated from journal entries recorded in the system journal (default) or in a user journal (optional).

One of the most important decisions in planning for MIMIX is determining which IFS objects you need to replicate. Most likely, you want to limit the IFS objects you replicate to mission-critical objects.

User journal replication, also called advanced journaling, is well suited to the dynamic environments of IFS objects. While user journal replication has significant advantages, you must decide whether it is appropriate for your environment. For more information, see “Planning for journaled IFS objects, data areas, and data queues” on page 85.

For detailed procedures, see “Creating data group IFS entries” on page 282.

Objects configured for user journal replication may have create, restore, delete, move, and rename operations. Differences in implementation details are described in “Processing variations for common operations” on page 130.

Supported IFS file systems and object types

The IFS objects to be replicated must be in the Root (‘/’) or QOpenSys file systems. The following object types are supported:

• Directories (*DIR)

• Stream Files (*STMF)

• Symbolic Links (*SYMLNK)

Table 16 identifies the IFS file systems that are not supported by MIMIX and cannot be specified for either the System 1 object prompt or the System 2 object prompt in the Add Data Group IFS Entry (ADDDGIFSE) command.

Journaling is not supported for files in Network Work Storage Spaces (NWSS), which are used as virtual disks by IXS and IXA technology. Therefore, IFS objects configured to be replicated from a user journal must be in the Root (‘/’) or QOpenSys file systems.

Refer to the IBM book OS/400 Integrated File System Introduction for more information about IFS.

Table 16. IFS file systems that are not supported by MIMIX

  /QDLS           /QLANSrv    /QOPT
  /QFileSvr.400   /QNetWare   /QSYS.LIB
  /QFPNWSSTG      /QNTC       /QSR


Considerations when identifying IFS objects

The following considerations for IFS objects apply regardless of whether replication occurs through the system journal or user journal.

MIMIX processing order for data group IFS entries

Data group IFS entries are processed in order from most generic to most specific. IFS entries are processed using the unicode character set. The first (more generic) entry found that matches the object is used until a more specific match is found.
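The generic-to-specific selection just described can be sketched as a small matching function. This is an illustrative model only, not MIMIX code; the trailing-asterisk generic convention, the example paths, and the helper names are assumptions made for the example.

```python
# Sketch of "most specific match wins" for data group IFS entries
# (assumed logic, not MIMIX code). An exact entry beats any generic
# entry, and a longer matching generic prefix beats a shorter one.

from typing import Optional

def entry_matches(entry_path: str, object_path: str) -> bool:
    """True if a data group IFS entry path covers an object path."""
    if entry_path.endswith("*"):                 # generic name
        return object_path.startswith(entry_path[:-1])
    return entry_path == object_path             # exact name

def most_specific_entry(object_path: str, entries: list) -> Optional[str]:
    """Return the most specific matching entry, or None if none match."""
    matching = [e for e in entries if entry_matches(e, object_path)]
    if not matching:
        return None
    # Exact match first, then the longest generic prefix.
    return max(matching, key=lambda e: (not e.endswith("*"), len(e)))
```

For example, with entries /home/* and /home/app/*, an object /home/app/log1 would be governed by the more specific /home/app/* entry.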

Long IFS path names

MIMIX currently replicates IFS path names of up to 512 characters. However, any MIMIX command that takes an IFS path name as input may be limited to 506 characters. This limit is reduced even further if the IFS path name contains embedded apostrophes ('): the supported IFS path name length is reduced by four characters for every apostrophe the path name contains.

For information about IFS path name naming conventions, refer to the IBM book, Integrated File System Introduction V5R4.
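The shrinking limit described above amounts to a simple calculation. The sketch below is illustrative only; the 506-character base and the four-characters-per-apostrophe reduction come from the text, while the function names are hypothetical.

```python
# Hypothetical helper illustrating the path-length limits described
# above: commands that take an IFS path name may be limited to 506
# characters, minus 4 characters per embedded apostrophe in the path.

COMMAND_LIMIT = 506  # base limit for commands taking an IFS path name

def max_command_path_length(path: str) -> int:
    """Effective path-name length a command could accept for this path."""
    return COMMAND_LIMIT - 4 * path.count("'")

def fits_on_command(path: str) -> bool:
    """True if the path name fits within the reduced command limit."""
    return len(path) <= max_command_path_length(path)
```

For example, a path name containing two embedded apostrophes would be limited to 506 - 8 = 498 characters.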

Upper and lower case IFS object names

When you create data group IFS entries, be aware of the following information about character case sensitivity for specifying IFS object names.

• The root file system on the System i5 is generally not case sensitive. Character case is preserved when creating objects, but otherwise character case is ignored. For example, you can create /AbCd or /ABCD, but not both. You can refer to the object by any mix of character case, such as /AbCd, /abcd, or /ABCD.

• The QOpenSys file system on the System i5 is generally case sensitive. Except for "QOpenSys" in a path name, all characters in a path name are case sensitive. For example, you can create both /QOpenSys/AbCd and /QOpenSys/ABCD. You must specify the correct character case when referring to an object.

During replication, MIMIX preserves the character case of IFS object names. For example, the creation of /AbCd on the source system will be replicated as /AbCd on the target system.

Replication will not alter the character case of objects that already exist on the target system (unless the object is deleted and recreated). In the root file system, /AbCd and /ABCD are equivalent names. If /ABCD exists as such on the target system, changes to /AbCd will be replicated to /ABCD, but the object name will not be changed to /AbCd on the target system.

When character case is not a concern (root file system), MIMIX may present path names as all upper case or all lower case. For example, the WRKDGACTE display shows all lower case, while the WRKDGIFSE display shows all upper case. Names can be entered in either case. For example, subsetting WRKDGACTE by /AbCd and /ABCD will produce the same result.


When character case does matter (QOpenSys file system), MIMIX presents path names in the appropriate case. For example, the WRKDGACTE display and the WRKDGIFSE display would show /QOpenSys/AbCd, if that is the actual object path. Names must be entered in the appropriate character case. For example, subsetting the WRKDGACTE display by /QOpenSys/ABCD will not find /QOpenSys/AbCd.
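The case rules for the two file systems can be modeled as a small comparison function. This is an assumed illustration of the behavior described above, not MIMIX code; the helper name is hypothetical.

```python
# Sketch of the case-sensitivity rules described above (assumed
# behavior, not MIMIX code): the root ('/') file system preserves but
# ignores case, while QOpenSys is case sensitive except for the
# "QOpenSys" component of the path name itself.

QOPENSYS = "/QOpenSys"

def same_ifs_object(path1: str, path2: str) -> bool:
    """True if two path names refer to the same object under these rules."""
    in_qopensys = (path1.lower().startswith("/qopensys")
                   and path2.lower().startswith("/qopensys"))
    if in_qopensys:
        # QOpenSys: case sensitive beyond the file-system prefix.
        return path1[len(QOPENSYS):] == path2[len(QOPENSYS):]
    # Root file system: case is preserved on create, ignored on lookup.
    return path1.lower() == path2.lower()
```

Under this model, /AbCd and /ABCD name the same root object, while /QOpenSys/AbCd and /QOpenSys/ABCD are distinct objects.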

Configured object auditing value for IFS objects

When you create data group IFS entries, you can specify an object auditing value within the configuration. The configured object auditing value affects how MIMIX handles changes to attributes of IFS objects. It is particularly important for, but not limited to, objects configured for system journal replication. For IFS objects configured for user journal replication, the configured value can affect MIMIX performance. For detailed information, see “Configured object auditing value for data group entries” on page 98.

Configuration requirements - IFS objects

For any data group IFS entry you create, consider the following:

• You must have at least one data group IFS entry which specifies a Process type of *INCLD. You may need to create additional entries to achieve the desired results. This may include entries which specify a Process type of *EXCLD.

• When specifying IFS objects in data group IFS entries, specify only the IFS objects that need to be replicated. The System 1 object (OBJ1) parameter selects all IFS objects within the path specified.

• You can specify an object auditing value within the configuration. For details, see “Configured object auditing value for data group entries” on page 98.

Additional requirements for user journal replication - The following additional requirements must be met before IFS objects identified by data group IFS entries can be replicated with user journal processes.

• The data group definition and data group IFS entries must specify the values indicated in Table 17 for critical parameters.

• IFS tracking entries must exist for the objects identified by properly configured IFS entries. Typically these are created automatically when the data group is started.

• Journaling must be started on both the source and target systems for the objects identified by IFS tracking entries.

Table 17. Critical configuration parameters for replicating IFS objects from a user journal

  Critical Parameters                  Required Values   Configuration Notes

  Data Group Definition
  Data group type (TYPE)               *ALL

  Data Group IFS Entry
  Cooperate with database (COOPDB)     *YES              The default, *NO, results in system journal replication.

Additionally, see “Planning for journaled IFS objects, data areas, and data queues” on page 85 for additional details if any of the following apply:

• Converting existing configurations - When converting an existing data group to use or add advanced journaling, you must consider whether journals should be shared and whether IFS objects should be replicated in a data group that also replicates database files.

• Serialized transactions - If you need to serialize transactions for database files and IFS objects replicated from a user journal, you may need to adjust the configuration for the replicated files.

• Apply session load balancing - One database apply session, session A, is used for all IFS objects that are replicated from a user journal. Other replication activity can use this apply session, and may cause it to become overloaded. You may need to adjust the configuration accordingly.

• User exit programs - If you use user exit programs that process user journal entries, you may need to modify your programs.

Restrictions - user journal replication of IFS objects

When considering replicating IFS objects using MIMIX user journal replication processes, be aware of the following restrictions:

• The operating system does not support before-images for data updates to IFS objects. As such, MIMIX cannot perform data integrity checks on the target system to ensure that data being replaced on the target system is an exact match to the data replaced on the source system. MIMIX will check the integrity of the IFS data through the use of regularly scheduled audits, specifically the #IFSATR audit.

• The apply of IFS objects is restricted to a single database apply job (DBAPYA). If a data group has too much replication activity, this job may fall behind in the processing of journal entries. If this occurs, you should load-level the apply sessions by moving some or all of the database files to another database apply job.

• Pre-existing IFS objects to be selected for replication must have journaling started on both the source and target systems before the data group is started.

• A physical object, such as an IFS object, is identified by a hard link. Typically, an unlimited number of hard links can be created as identifiers for one object. For journaled IFS objects, MIMIX does not support the replication of additional hard links because doing so causes the same FID to be used for multiple names for the same IFS object.


• The ability to “lock on apply” IFS objects in order to prevent unauthorized updates from occurring on the target system is not supported when advanced journaling is configured.

• The ability to use the Remove Journaled Changes (RMVJRNCHG) command for removing journaled changes for IFS tracking entries is not supported.

• It is recommended that option 14 (Remove related) on the Work with Data Group Activity (WRKDGACT) display not be used for failed activity entries representing actions against cooperatively processed IFS objects. Because this option does not remove the associated tracking entries, orphan tracking entries can accumulate on the system.

Supported journal code B entry types

The system uses journal code B to indicate that the journal entry deposited is related to an IFS operation. Table 18 shows the currently supported entry types that MIMIX can replicate for IFS objects configured for user journal replication.

Table 18. IFS entry types supported by MIMIX

  Journal Code   Type   Description                                    Notes
  B              AA     Change audit attributes
  B              B1     Create files, directories, or symbolic links
  B              B3     Move/rename object                             1
  B              B5     Remove link (unlink)                           1
  B              B6     Bytes cleared, after-image
  B              ET     End journaling for object
  B              FA     Change object attribute
  B              FR     Restore object                                 1
  B              FS     Saved IFS object
  B              FW     Start of save-while-active
  B              JT     Start journaling for object
  B              OA     Change object authority
  B              OG     Change primary group
  B              OO     Change object owner
  B              RN     Rename file identifier
  B              TR     Truncated IFS object
  B              WA     Write after-image

  Note: 1. The actions identified in these entries are replicated cooperatively through the security audit journal.


Identifying DLOs for replication

MIMIX uses data group DLO entries to determine whether to process system journal transactions for document library objects (DLOs). Each DLO entry for a data group includes a folder path, document name, owner, an object auditing level, and an include or exclude indicator. In addition to specific names, MIMIX supports generic names for DLOs. In a data group DLO entry, the folder path and document can be generic or *ALL.

When you create data group DLO entries, you can specify an object auditing value within the configuration. The configured object auditing value affects how MIMIX handles changes to attributes of DLOs. For detailed information, see “Configured object auditing value for data group entries” on page 98.

For detailed procedures, see “Creating data group DLO entries” on page 287.

How MIMIX uses DLO entries to evaluate journal entries for replication

How items are specified within a DLO entry determines whether MIMIX selects or omits them from processing. This information can help you understand what is included or omitted.

When determining whether to process a journal entry for a DLO, MIMIX looks for a match between the DLO information in the journal entry and one of the data group DLO entries. The data group DLO entries are checked from the most specific to the least specific. The folder path is the most significant search element, followed by the document name, then the owner. The most significant match found (if any) is checked to determine whether to process the entry.

An exact or generic folder path name in a data group DLO entry applies to folder paths that match the entry as well as to any unnamed child folders of that path which are not covered by a more explicit entry. For example, a data group DLO entry with a folder path of “ACCOUNT” would also apply to a transaction for a document in folder path ACCOUNT/JANUARY. If a second data group DLO entry with a folder path of “ACCOUNT/J*” were added, it would take precedence because it is more specific.

For a folder path with multiple elements (for example, A/B/C/D), the exact checks and generic checks against data group DLO entries are performed on the path. If no match is found, the lowest path element is removed and the process is repeated. For example, A/B/C/D is reduced to A/B/C and is rechecked. This process continues until a match is found or until all elements of the path have been removed. If there is still no match, then checks for folder path *ALL are performed.
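The reduction process just described can be sketched as a loop that strips the lowest path element until a match is found. This is an assumed model, not MIMIX code; the trailing-asterisk generic convention and the helper names are illustrative.

```python
# Sketch of the folder-path reduction described above (assumed logic,
# not MIMIX code): exact and generic checks run against the full path;
# if no entry matches, the lowest element is removed and the checks
# repeat, ending with a check against folder path *ALL.

from typing import Optional

def matches(pattern: str, value: str) -> bool:
    """Exact, generic ('ABC*'), or *ALL match for one folder path."""
    if pattern == "*ALL":
        return True
    if pattern.endswith("*"):
        return value.startswith(pattern[:-1])
    return pattern == value

def find_folder_entry(folder_path: str, entries: list) -> Optional[str]:
    """Return the first entry matching the path or any parent path."""
    elements = folder_path.split("/")
    while elements:
        candidate = "/".join(elements)
        for pattern in entries:       # assumed ordered most to least specific
            if pattern != "*ALL" and matches(pattern, candidate):
                return pattern
        elements.pop()                # drop the lowest path element
    return "*ALL" if "*ALL" in entries else None
```

With entries ACCOUNT/J* and ACCOUNT, a document in ACCOUNT/JANUARY matches the more specific ACCOUNT/J* entry, while one in ACCOUNT/FEB falls back to the ACCOUNT entry.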

Sequence and priority order for documents

Table 19 illustrates the sequence in which MIMIX checks DLO entries for a match.

Table 19. Matching order for document names

  Search Order   Folder Path   Document Name   Owner
  1              Exact         Exact           Exact
  2              Exact         Exact           *ALL
  3              Exact         Generic*        Exact
  4              Exact         Generic*        *ALL
  5              Exact         *ALL            Exact
  6              Exact         *ALL            *ALL
  7              Generic*      Exact           Exact
  8              Generic*      Exact           *ALL
  9              Generic*      Generic*        Exact
  10             Generic*      Generic*        *ALL
  11             Generic*      *ALL            Exact
  12             Generic*      *ALL            *ALL
  13             *ALL          Exact           Exact
  14             *ALL          Exact           *ALL
  15             *ALL          Generic*        Exact
  16             *ALL          Generic*        *ALL
  17             *ALL          *ALL            Exact
  18             *ALL          *ALL            *ALL

Document example - Table 20 illustrates some sample data group DLO entries. For example, a transaction for any document in a folder named FINANCE would be blocked from replication because it matches entry 6. A transaction for document ACCOUNTS in FINANCE1 owned by JONESB would be replicated because it matches entry 4. If SMITHA owned ACCOUNTS in FINANCE1, the transaction would be blocked by entry 3. Likewise, documents LEDGER.JUL and LEDGER.AUG in FINANCE1 would be blocked by entry 2 and document PAYROLL in FINANCE1 would be blocked by entry 1. A transaction for any document in FINANCE2 would be blocked by entry 6. However, transactions for documents in FINANCE2/Q1, or in a child folder of that path, such as FINANCE2/Q1/FEB, would be replicated because of entry 5.

Table 20. Sample data group DLO entries, arranged in order from most to least specific

  Entry   Folder Path   Document   Owner    Process Type
  1       FINANCE1      PAYROLL    *ALL     *EXCLD
  2       FINANCE1      LEDGER*    *ALL     *EXCLD
  3       FINANCE1      *ALL       SMITHA   *EXCLD
  4       FINANCE1      *ALL       *ALL     *INCLD
  5       FINANCE2/Q1   *ALL       *ALL     *INCLD
  6       FIN*          *ALL       *ALL     *EXCLD

Sequence and priority order for folders

Folders are treated somewhat differently than documents. Folders are replicated based on whether there are any data group DLO entries with a process type of *INCLD that would require the folder to exist on the target system. If a folder needs to exist to satisfy the folder path of an include entry, the folder will be replicated even if a different exclude entry prevents replication of the contents of the folder.
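The matching order in Table 19, applied to the sample entries in Table 20, can be modeled as follows. This is an illustrative sketch, not MIMIX code; it ranks each element's match as exact, generic, or *ALL with the folder path most significant, and it omits the folder-path reduction for child folders described earlier.

```python
# Sketch of the Table 19 matching order (assumed logic, not MIMIX code).
# Each DLO entry is (folder, document, owner, process_type); the most
# significant match wins, ranked exact < generic* < *ALL per element.

def specificity(pattern: str) -> int:
    """0 = exact (most specific), 1 = generic*, 2 = *ALL."""
    if pattern == "*ALL":
        return 2
    return 1 if pattern.endswith("*") else 0

def matches(pattern: str, value: str) -> bool:
    if pattern == "*ALL":
        return True
    if pattern.endswith("*"):
        return value.startswith(pattern[:-1])
    return pattern == value

def select_entry(folder, document, owner, entries):
    """Return the most significant matching DLO entry, or None."""
    candidates = [e for e in entries
                  if matches(e[0], folder) and matches(e[1], document)
                  and matches(e[2], owner)]
    if not candidates:
        return None
    # Folder path is most significant, then document name, then owner.
    return min(candidates,
               key=lambda e: (specificity(e[0]), specificity(e[1]),
                              specificity(e[2])))

# Sample entries from Table 20.
SAMPLE_ENTRIES = [
    ("FINANCE1",    "PAYROLL", "*ALL",   "*EXCLD"),
    ("FINANCE1",    "LEDGER*", "*ALL",   "*EXCLD"),
    ("FINANCE1",    "*ALL",    "SMITHA", "*EXCLD"),
    ("FINANCE1",    "*ALL",    "*ALL",   "*INCLD"),
    ("FINANCE2/Q1", "*ALL",    "*ALL",   "*INCLD"),
    ("FIN*",        "*ALL",    "*ALL",   "*EXCLD"),
]
```

Under this model, document ACCOUNTS in FINANCE1 owned by JONESB selects entry 4 (*INCLD), while the same document owned by SMITHA selects entry 3 (*EXCLD), matching the document example above.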


There is one exception to the requirement of replicating folders to satisfy the folder path for an include entry. A folder will not be replicated when the only include entry that would cause its replication specifies *ALL for its folder path and the folder matches an exclude entry with an exact or a generic folder path name, a document value of *ALL and an owner of *ALL.

Table 20 and Table 21 illustrate the differences in matching folders to be replicated.

In Table 20, above, a transaction for a folder named FINANCE would be blocked from replication because it matches entry 6. This would also affect all folders within FINANCE. A transaction for folder FINANCE1 would be replicated because of entry 4. Likewise, a transaction for folder FINANCE2 would be replicated because of entry 5. Note that any transactions for documents in FINANCE2 or any child folders other than those in the path that includes Q1 would be blocked by entry 6; only FINANCE2 itself must exist to satisfy entry 5.

In Table 21, although entry 5 is an include entry, a transaction for folder ACCOUNT would be blocked from replication because it matches entry 2. This is because of the exception described above. ACCOUNT matches an exclude entry with an exact folder path, document value of *ALL, and an owner of *ALL, and the only include entry that would cause it to be replicated specifies folder path *ALL. The exception also affects all child folders in the ACCOUNT folder path. Note that the exception holds true even if ACCOUNT is owned by user profile JONESB (entry 4) because the more specific folder name match takes precedence.

A transaction for folder ACCOUNT2 would be replicated even though it is an exact path name match for exclude entry 1. The exception does not apply because entry 1 does not specify document *ALL. Entry 5 requires that ACCOUNT2 exist on the target system to satisfy the folder path requirements for document names other than LEDGER* and for child folders of ACCOUNT2.

Table 21. Sample data group DLO entries, folder example

  Entry   Folder Path   Document   Owner    Process Type
  1       ACCOUNT2      LEDGER*    *ALL     *EXCLD
  2       ACCOUNT       *ALL       *ALL     *EXCLD
  3       *ALL          ABC*       *ALL     *INCLD
  4       *ALL          *ALL       JONESB   *INCLD
  5       *ALL          *ALL       *ALL     *INCLD


Processing of newly created files and objects

Your production environment is dynamic. New objects continue to be created after MIMIX is configured and running. When properly configured, MIMIX automatically recognizes entries in the user journal that identify new create operations and replicates any that are eligible for replication. Optionally, MIMIX can also notify you of newly created objects not eligible for replication so that you can choose whether to add them to the configuration.

Configurations that replicate files, data areas, data queues, or IFS objects from user journal entries require journaling to be started on the objects before replication can occur. When a configuration enables journaling to be implicitly started on new objects, a newly created object is already journaled. When the journaled object falls within the group of objects identified for replication by a data group, MIMIX replicates the create operation. Processing variations exist based on how the data group and the data group entry with the most specific match to the object are configured. These variations are described in the following subtopics.

The MMNFYNEWE monitor is a shipped journal monitor that watches the security audit journal (QAUDJRN) for newly created libraries, folders, or directories that are not already included or excluded for replication by a data group and sends warning notifications when its conditions are met. This monitor is shipped disabled. User action is required to enable this monitor on the source system within your MIMIX environment. Once enabled, the monitor will automatically start with the master monitor. For more information about the conditions that are checked, see topic ‘Notifications for newly created objects’ in the Using MIMIX book.

For more information about requirements and restrictions for implicit starting of journaling as well as examples of how MIMIX determines whether to replicate a new object, see “What objects need to be journaled” on page 323.

Newly created files

When newly created *FILE objects are implicitly journaled and are eligible for replication, the replication processes used depend on how the data group definition is configured and how the data group entry with the most specific match to the file is configured.

New file processing - MIMIX Dynamic Apply

When a data group definition meets configuration requirements for MIMIX Dynamic Apply and data group object and file entries are properly configured, new files created on the source system that are eligible for replication will be re-created on the target system by MIMIX. The following briefly describes the events that occur for newly created files on the source system which are configured for MIMIX Dynamic Apply:

• System journal replication processes ignore the creation entry, knowing that user journal replication processes will get a create entry as well.

• User journal replication processes dynamically add a file entry for a file when a file create is seen in the user journal. The file entry is added with a status of *ACTIVE.

• User journal replication processes create the file on the target system. Replication proceeds normally after the file has been created.

• All subsequent operations, including moves or renames, member operations (adds, changes, and removes), member data updates, file changes, authority changes, and file deletes, are replicated through the user journal.

New file processing - legacy cooperative processing

When a data group definition meets configuration requirements for legacy cooperative processing and data group object and file entries are properly configured, files created on the source system will be saved and restored to the target system by system journal replication processes. The following briefly describes the events that occur when files are created that have been defined for legacy cooperative processing:

• System journal replication processes communicate with user journal replication processes to add a data group file entry for the file (ADDDGFE command). The file entry is added with the status of *HLD.

• A user journal transaction is created on the source system and is transferred to the target system to dynamically add the file to active user journal processes.

• Journaling on the file is started if it is not already active.

• System journal replication processes save the created file, restore it on the target system, and then communicate with user journal replication processes to issue a release wait request against the file. The status of the file entry changes to *RLSWAIT.

• The database apply process waits for the save point in the journal, and then makes the file active. The status of the file entry changes to *ACTIVE.
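The file-entry status progression described above can be sketched as a small state machine. This is an illustrative model only, not MIMIX code: the status values (*HLD, *RLSWAIT, *ACTIVE) come from the text, but the event names are invented for the example.

```python
# Hypothetical sketch of the legacy cooperative processing status
# transitions for a data group file entry. Event names are illustrative.
TRANSITIONS = {
    ("*HLD", "release_wait_requested"): "*RLSWAIT",   # after the save/restore completes
    ("*RLSWAIT", "save_point_reached"): "*ACTIVE",    # database apply makes the file active
}

def next_status(status, event):
    """Return the new file-entry status; unknown events leave it unchanged."""
    return TRANSITIONS.get((status, event), status)

status = "*HLD"                                       # ADDDGFE adds the entry as *HLD
status = next_status(status, "release_wait_requested")
print(status)                                         # *RLSWAIT
status = next_status(status, "save_point_reached")
print(status)                                         # *ACTIVE
```

A real implementation would drive these transitions from journal entries rather than named events; the sketch only captures the ordering of the states.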

Newly created IFS objects, data areas, and data queues

When journaling is implicitly started for IFS objects, data areas, and data queues, newly created objects that are eligible for replication are automatically replicated. Configuration values specified in the data group IFS entry or object entry that most specifically matches the new object determine which replication processes are used.

Note: Non-journaled objects are replicated through the system journal.

For IFS objects, MIMIX user journal replication processes will replicate creates of IFS objects if the parent directory is journaled to the journal defined for a data group. Typically, if MIMIX commands were used to start journaling on the parent directory, new objects are permitted to inherit journal information from the parent directory.

For data areas and data queues, automatic journaling of new *DTAARA or *DTAQ objects is only supported in i5/OS V5R4 and higher. MIMIX configurations can be enabled to permit the automatic start of journaling for newly created data areas and data queues in libraries journaled to a user journal. New version 5 MIMIX installations that meet the i5/OS requirement and are configured for MIMIX Dynamic Apply of files automatically have this behavior. Installations that upgraded to version 5 may require conversion to MIMIX Dynamic Apply before automatic journaling of these object types can occur.
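The V5R4 requirement above amounts to a simple release comparison. The following sketch shows one way to gate that check; the release-string format (for example 'V5R4') and the parser are assumptions for illustration, not an official API.

```python
import re

def supports_auto_journaling(release):
    """Return True when the i5/OS release is V5R4 or higher.

    Per the text, automatic journaling of newly created *DTAARA and *DTAQ
    objects requires V5R4 or higher. This parser is an illustrative sketch
    that accepts strings like 'V5R4' or 'V6R1'.
    """
    m = re.fullmatch(r"V(\d+)R(\d+)", release.upper())
    if not m:
        raise ValueError(f"unrecognized release: {release}")
    return (int(m.group(1)), int(m.group(2))) >= (5, 4)

print(supports_auto_journaling("V5R3"))  # False
print(supports_auto_journaling("V5R4"))  # True
print(supports_auto_journaling("V6R1"))  # True
```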


For more information about requirements for implicit starting of journaling, see “What objects need to be journaled” on page 323.

If the object is journaled to the user journal, MIMIX user journal replication processes can fully replicate the create operation. The user journal entries contain all the information necessary for replication without needing to retrieve information from the object on the source system. MIMIX creates a tracking entry for the newly created object and an activity entry representing the T-CO (create) journal entry.

If the object is not journaled to the user journal, then the create of the object is processed with system journal processing.

If the values specified in the data group entry that identified the object as eligible for replication do not allow the object type to be cooperatively processed, the create of the object and subsequent operations are replicated through system journal processes.

When MIMIX replicates a create operation through the user journal, the create timestamp (*CRTTSP) attribute may differ between the source and target systems.

Determining how an activity entry for a create operation was replicated

To determine whether a create operation of a given object is being replicated through user journal processes or through system journal processes, do the following:

1. On the Data Group Activity Entries (WRKDGACTE) display, locate the entry for a create operation that you want to check. Create operations have a value of T-CO in the Code column.

2. Use option 5 (Display) next to an activity entry for a create operation.

3. On the resulting details display, check the value of the Requires container send field.

If *YES appears for an activity entry representing a create operation, the create operation is being replicated through the system journal.

If *NO appears in the field, the create operation is being replicated through the user journal.
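The check above reduces to a two-way mapping on the Requires container send field. A minimal illustrative sketch (not MIMIX code) of that rule:

```python
def replication_path(requires_container_send):
    """Map the 'Requires container send' value shown on the WRKDGACTE
    detail display to the replication path, per the rules above.
    Anything other than the displayed *YES/*NO values is treated as
    unknown in this sketch."""
    return {"*YES": "system journal",
            "*NO": "user journal"}.get(requires_container_send, "unknown")

print(replication_path("*YES"))  # system journal
print(replication_path("*NO"))   # user journal
```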


Processing variations for common operations

Some variation exists in how MIMIX performs common operations such as moves, renames, deletes, and restores. The variations are based on the configuration of the data group entry used for replication.

Configurations specify whether these operations are processed through the system journal, user journal, or a combination of both journals. Advanced journaling (user journal replication of data areas, data queues, and IFS objects), legacy cooperative processing, and MIMIX Dynamic Apply utilize both journals; however, MIMIX Dynamic Apply processes primarily through the user journal.

For IFS objects, user journal replication offers full support of create, restore, delete, and move and rename operations. In environments using V5R4 and higher operating systems, user journal replication also offers full support of these operations for data area and data queue objects.

Move/rename operations - system journal replication

Table 22 describes how MIMIX processes a move or rename journal entry from the system journal. MIMIX uses system journal replication processes for DLOs and for IFS objects and library-based objects that are not explicitly identified for user journal replication. The Original Source Object and New Name or Location columns indicate whether the object is identified within the name space for replication. The Action column indicates the operation that MIMIX will attempt on the target system.

Table 22. Current object move actions

• Original source object excluded from or not identified for replication; new name or location within the name space of objects to be replicated: Create Object (see note 1).

• Original source object identified for replication; new name or location excluded from or not identified for replication: Delete Object (see note 2).

• Original source object identified for replication; new name or location within the name space of objects to be replicated: Move Object.

• Original source object excluded from or not identified for replication; new name or location also excluded from or not identified for replication: None.

1. If the source system object is not defined to MIMIX or is defined by an Exclude entry, there is no guarantee that an object with the same name exists on the backup system or that it is really the same object as on the source system. To ensure the integrity of the target (backup) system, a copy of the source object must be brought over from the source system.

2. If the target object is not defined to MIMIX or is defined by an Exclude entry, there is no guarantee that the target library exists on the target system. Further, the customer is assumed not to care whether the target object is replicated, since it is not defined with an Include entry, so deleting the object is the most straightforward approach.
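Table 22 can be modeled as a lookup keyed by whether each name falls inside the replication name space. This is an illustrative model of the table, not MIMIX code:

```python
# Illustrative lookup of Table 22: MIMIX's action on the target system for
# a move/rename seen in the system journal, keyed by whether the original
# object and the new name/location are inside the replication name space.
ACTIONS = {
    (False, True):  "create object",   # copy brought over from the source (note 1)
    (True,  False): "delete object",   # new name is outside the name space (note 2)
    (True,  True):  "move object",
    (False, False): "none",
}

def system_journal_move_action(source_in_namespace, target_in_namespace):
    return ACTIONS[(source_in_namespace, target_in_namespace)]

print(system_journal_move_action(True, True))   # move object
print(system_journal_move_action(True, False))  # delete object
```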


Move/rename operations - user journaled data areas, data queues, IFS objects

IFS, data area, and data queue objects replicated by user journal replication processes can be moved or renamed while maintaining the integrity of the data. If the new location or new name on the source system remains within the set of objects identified as eligible for replication, MIMIX will perform the move or rename operation on the object on the target system.

When a move or rename operation starts with or results in an object that is not within the name space for user journal replication, MIMIX may need to perform additional operations in order to replicate the operation. MIMIX may use a create or delete operation and may need to add or remove tracking entries.

Each row in Table 23 summarizes a move/rename scenario and identifies the action taken by MIMIX.

Table 23. MIMIX actions when processing moves or renames of objects when user journal replication processes are involved

• Source object identified for replication with user journal processing; new name or location within the name space for user journal processing: MIMIX moves or renames the object on the target system and renames the associated tracking entry. See example 1.

• Source object not identified for replication; new name or location not identified for replication: None. See example 2.

• Source object identified for replication with user journal processing; new name or location not identified for replication: MIMIX deletes the target object and deletes the associated tracking entry. The object will no longer be replicated. See example 3.

• Source object identified for replication with user journal processing; new name or location within the name space for system journal processing: MIMIX moves or renames the object using system journal processes and removes the associated tracking entry. See example 4.

• Source object identified for replication with system journal processing; new name or location within the name space for user journal processing: MIMIX creates a tracking entry for the object using the new name or location and moves or renames the object using user journal processes. If the object is a library or directory, MIMIX creates tracking entries for those objects within the library or directory that are also within the name space for user journal replication and synchronizes those objects. See example 5.

• Source object not identified for replication; new name or location within the name space for user journal processing: MIMIX creates a tracking entry for the object using the new name or location. If the object is a library or directory, MIMIX creates tracking entries for those objects within the library or directory that are also within the name space for user journal replication. All of the objects identified by these new tracking entries are synchronized. See example 6.
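Table 23 can be sketched as a decision on the classification of the old and new names. In this illustrative model (not MIMIX logic), each name is classified as replicated via the user journal ("user"), via the system journal ("system"), or not replicated (None):

```python
# Illustrative model of Table 23. The pair of classifications for the old
# and new names selects the MIMIX action; strings summarize the actions.
def move_action(old_class, new_class):
    if old_class == "user" and new_class == "user":
        return "rename on target; rename tracking entry"           # example 1
    if old_class is None and new_class is None:
        return "none"                                              # example 2
    if old_class == "user" and new_class is None:
        return "delete target object and tracking entry"           # example 3
    if old_class == "user" and new_class == "system":
        return "rename via system journal; remove tracking entry"  # example 4
    if old_class in ("system", None) and new_class == "user":
        return "create tracking entries; rename or synchronize"    # examples 5 and 6
    return "rename via system journal"  # system-to-system falls under Table 22

print(move_action("user", "user"))
print(move_action("user", None))
```

Examples 5 and 6 differ in detail (a rename versus a create with synchronization); the sketch collapses them into one branch for brevity.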


The following examples use IFS objects and directories to illustrate the MIMIX operations in move/rename scenarios that involve user journal replication (advanced journaling). The MIMIX behavior described is the same as that for data areas and data queues that are within the configured name space for advanced journaling. Table 24 identifies the initial set of source system objects, data group IFS entries, and IFS tracking entries before the move/rename operation occurs.

Table 24. Initial data group IFS entries, IFS tracking entries, and source IFS objects for examples

• Configuration supports advanced journaling; data group IFS entry /TEST/STMF*; source system IFS object in name space /TEST/stmf1; associated IFS tracking entry /TEST/stmf1.

• Configuration supports advanced journaling; data group IFS entry /TEST/DIR*; source system IFS object in name space /TEST/dir1/doc1; associated IFS tracking entries /TEST/dir1 and /TEST/dir1/doc1.

• Configuration supports system journal replication; data group IFS entry /TEST/NOTAJ*; source system IFS objects in name space /TEST/notajstmf1 and /TEST/notajdir1/doc1; no associated IFS tracking entries.

Example 1, moves/renames within advanced journaling name space: The most common move and rename operations occur within advanced journaling name space. For example, MIMIX encounters user journal entries indicating that the source system IFS directory /TEST/dir1 was renamed to /TEST/dir2, and that the IFS stream file /TEST/stmf1 was renamed to /TEST/stmf2. In both cases, the old and new names fall within advanced journaling name space, as indicated in Table 23. The rename operations are replicated and names are changed on the target system objects. The tracking entries for these objects are also renamed. The resulting changes to the target system objects and MIMIX configuration are shown in Table 25.

Table 25. Results of move/rename operations within name space for advanced journaling

• Resulting target IFS object /TEST/stmf2; resulting data group IFS tracking entry /TEST/stmf2.

• Resulting target IFS object /TEST/dir2/doc1; resulting data group IFS tracking entries /TEST/dir2 and /TEST/dir2/doc1.

Example 2, moves/renames outside name space: When MIMIX encounters a journal entry for a source system object outside of the name space that has been renamed or moved to another location also outside of the name space, MIMIX ignores the transaction. The object is not eligible for replication.

Example 3, moves/renames from advanced journaling name space to outside name space: In this example, MIMIX encounters user journal entries indicating that the source system IFS directory /TEST/dir1 was renamed to /TEST/xdir1 and IFS stream file /TEST/stmf1 was renamed to /TEST/xstmf1. MIMIX is aware of only the original names, as indicated in Table 23. Thus, the old name is eligible for replication, but the new name is not. MIMIX treats this as a delete operation during replication processing. MIMIX deletes the IFS directory and IFS stream file from the target system. MIMIX also deletes the associated IFS tracking entries.
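The examples above hinge on matching object paths against generic data group IFS entries such as /TEST/STMF*. The following sketch illustrates that kind of generic matching with Python's fnmatch; note this is a simplified, hypothetical illustration (MIMIX's actual matching rules are more involved). Names are compared case-insensitively here, which is why /TEST/stmf1 falls under the entry /TEST/STMF*.

```python
import fnmatch

def matches_entry(path, generic):
    """Illustrative case-insensitive generic-name match, modeled on the
    data group IFS entries in Table 24. Not MIMIX's actual algorithm."""
    return fnmatch.fnmatch(path.upper(), generic.upper())

print(matches_entry("/TEST/stmf1", "/TEST/STMF*"))        # True
print(matches_entry("/TEST/xstmf1", "/TEST/STMF*"))       # False  (outside the name space)
print(matches_entry("/TEST/notajstmf1", "/TEST/NOTAJ*"))  # True
```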

Example 4, moves/renames from advanced journaling to system journal name space: In this example, MIMIX encounters user journal entries indicating that the source system IFS directory /TEST/dir1 was renamed to /TEST/notajdir1 and that IFS stream file /TEST/stmf1 was renamed to /TEST/notajstmf1. MIMIX is aware that both the old names and new names are eligible for replication, as indicated in Table 23. However, the new names fall within the name space for replication through the system journal. As a result, MIMIX removes the tracking entries associated with the original names and performs the rename operation on the objects on the target system. Table 26 shows these results.

Table 26. Results of move/rename operations from advanced journaling to system journal name space

• Resulting target IFS object /TEST/notajstmf1; associated data group IFS tracking entry removed.

• Resulting target IFS object /TEST/notajdir1/doc1; associated data group IFS tracking entry removed.

Example 5, moves/renames from system journal to advanced journaling name space: In this example, MIMIX encounters journal entries indicating that the source system IFS directory /TEST/notajdir1 was renamed to /TEST/dir1 and that IFS stream file /TEST/notajstmf1 was renamed to /TEST/stmf1. MIMIX is aware that the old names are within the system journal name space and that the new names are within the advanced journaling name space. MIMIX creates tracking entries for the new names and then performs the rename operation on the target system using advanced journaling.

MIMIX also creates tracking entries for any objects that reside within the moved or renamed IFS directory (or library, in the case of data areas or data queues). The objects identified by these tracking entries are individually synchronized from the source to the target system. Table 27 illustrates the results on the target system.

Table 27. Results of move/rename operations from system journal to advanced journaling name space

• Resulting target IFS object /TEST/stmf1; resulting data group IFS tracking entry /TEST/stmf1.

• Resulting target IFS object /TEST/dir1/doc1; resulting data group IFS tracking entries /TEST/dir1 and /TEST/dir1/doc1.

Example 6, moves/renames from outside to within advanced journaling name space: In this example, MIMIX encounters journal entries indicating that the source system IFS directory /TEST/xdir1 was renamed to /TEST/dir1 and that IFS stream file /TEST/xstmf1 was renamed to /TEST/stmf1. The original names are outside of the name space and are not eligible for replication. However, the new names are within the name space for advanced journaling, as indicated in Table 23. Because the objects were not previously replicated, MIMIX processes the operations as creates during replication. See “Newly created files” on page 127.

MIMIX also creates tracking entries for any objects that reside within the moved or renamed IFS directory (or library, in the case of data areas or data queues). The objects identified by these tracking entries are individually synchronized from the source to the target system. Table 28 illustrates the results.

Table 28. Results of move/rename operations from outside to within advanced journaling name space

• Resulting target IFS object /TEST/stmf1; resulting data group IFS tracking entry /TEST/stmf1.

• Resulting target IFS object /TEST/dir1/doc1; resulting data group IFS tracking entries /TEST/dir1 and /TEST/dir1/doc1.

Delete operations - files configured for legacy cooperative processing

The following briefly describes the events that occur in MIMIX when a file that is defined for legacy cooperative processing is deleted:

• System journal replication processes notify user journal replication processes that a file has been deleted on the source system and indicate that the file should be deleted from the target system.

• A journal transaction which identifies the deleted file is created on the source system. The transaction is transferred dynamically.

• If the data group file entry is set to use the option to dynamically update active replication processes, the file and associated file entry are dynamically removed from the replication processes. If the dynamic update option is not used, the data group changes are not recognized until all data group processes are ended and restarted.

• MIMIX system journal replication processes delete the file on the target system.

Delete operations - user journaled data areas, data queues, IFS objects

When a T-DO (delete) journal entry for an IFS, data area, or data queue object is encountered in the system journal, MIMIX system journal replication processes generate an activity entry representing the delete operation and handle the delete of the object from the target system. The user journal replication processes remove the corresponding tracking entry.

Restore operations - user journaled data areas, data queues, IFS objects

When an IFS, data area, or data queue object is restored, the pre-existing object is replaced by a backup copy on the source system. With user journal replication, restores of IFS, data area, and data queue objects on the source system are supported through cooperative processing between MIMIX system journal and user journal replication processes.

Provided the object was journaled when it was saved, a restored IFS, data area, or data queue object is also journaled.

During cooperative processing, system journal replication processes generate an activity entry representing the T-OR (restore) journal entry from the system journal and perform a save and restore operation on the IFS, data area, or data queue object. Meanwhile, user journal replication processes handle the management of the corresponding IFS or object tracking entry. MIMIX may also start journaling, or end and restart journaling on the object so that the journaling characteristics of the IFS, data area, or data queue object match the data group definition.


Chapter 5

Configuration checklists

MIMIX can be configured in a variety of ways to support your replication needs. Each configuration requires a combination of definitions and data group entries. Definitions identify systems, journals, communications, and data groups that make up the replication environment. Data group entries identify what to replicate and the replication option to be used. For available options, see “Replication choices by object type” on page 96. Also, advanced techniques, such as keyed replication, have additional configuration requirements. For additional information see “Configuring advanced replication techniques” on page 353.

New installations: Before you start configuring MIMIX, system-level configuration for communications (lines, controllers, IP interfaces) must already exist between the systems that you plan to include in the MIMIX installation. Choose one of the following checklists to configure a new installation of MIMIX.

• “Checklist: New remote journal (preferred) configuration” on page 139 uses shipped default values to create a new installation. Unless you explicitly configure them otherwise, new data groups will use the i5/OS remote journal function as part of user journal replication processes.

• “Checklist: New MIMIX source-send configuration” on page 143 configures a new installation and is appropriate when your environment cannot use remote journaling. New data groups will use MIMIX source-send processes in user journal replication.

• To configure a new installation that is to use the integrated MIMIX support for IBM WebSphere MQ (MIMIX for MQ), refer to the MIMIX for IBM WebSphere MQ book.

Upgrades and conversions: You can use any of the following topics, as appropriate, to change a configuration:

• “Checklist: Converting to remote journaling” on page 147 changes an existing data group to use remote journaling within user journal replication processes.

• “Converting to MIMIX Dynamic Apply” on page 150 provides checklists for two methods of changing the configuration of an existing data group to use MIMIX Dynamic Apply for logical and physical file replication. Data groups that existed prior to installing version 5 must use this information in order to use MIMIX Dynamic Apply.

• “Checklist: Change *DTAARA, *DTAQ, IFS objects to user journaling” on page 154 changes the configuration of an existing data group to use user journal replication processes for these objects.

• To add integrated MIMIX support for IBM WebSphere MQ (MIMIX for MQ) to an existing installation, use topic ‘Choosing the correct checklist for MIMIX for MQ’ in the MIMIX for IBM WebSphere MQ book.

• “Checklist: Converting to legacy cooperative processing” on page 157 changes the configuration of an existing data group so that logical and physical source files are processed from the system journal and physical data files use legacy cooperative processing.

Other checklists: The following configuration checklist employs less frequently used configuration tools and is not included in this chapter.

• Use “Checklist: copy configuration” on page 553 if you need to copy configuration data from an existing product library into another MIMIX installation.


Checklist: New remote journal (preferred) configuration

Use this checklist to configure a new installation of MIMIX. This checklist creates the preferred configuration, which uses i5/OS remote journaling and uses MIMIX Dynamic Apply to cooperatively process logical and physical files.

To configure your system manually, perform the following steps on the system that you want to designate as the management system of the MIMIX installation:

1. Communications between the systems must be configured and operational before you start configuring MIMIX.

a. If communications is not configured, refer to Chapter 6, “System-level communications” for more information.

b. If you have TCP configured and plan to use it for your transfer protocol, verify that it is operational using the PING command.

2. Create system definitions for the management system and each of the network systems for the MIMIX installation. Use topic “Creating system definitions” on page 170.

3. Create transfer definitions to define the communications protocol used between pairs of systems. A pair of systems consists of a management system and a network system. Use topic “Creating a transfer definition” on page 184.

4. If you have implemented DDM password validation, you need to verify that your environment will allow MIMIX RJ support to work properly. Use topic “Checking DDM password validation level in use” on page 306.

5. If you are using the TCP protocol, ensure that the Lakeview TCP server is running on each system defined in the transfer definition. You can use the Work with Active Jobs (WRKACTJOB) command to look for a job under the MIMIXSBS subsystem with a function of PGM-LVSERVER. If the Lakeview TCP server is not active on a system, use topic “Starting the Lakeview TCP/IP server” on page 189.

Note: You can optionally configure the Lakeview TCP server to start automatically. Use the procedure in topic “Using autostart job entries to start the TCP server” on page 190.

6. If you are using the TCP protocol, ensure that the DDM TCP server is running using topic “Starting the DDM TCP/IP server” on page 308.

7. Verify that the communications link defined in each transfer definition is operational using topic “Verifying a communications link for system definitions” on page 194.

8. Start the MIMIX managers using topic “Starting the system and journal managers” on page 296. When the system manager is running, configuration information for data groups will be automatically replicated to the other system as you create it.

9. Create the data group definitions that you need using topic “Creating a data group definition” on page 247. The referenced topic creates a data group definition with appropriate values to support MIMIX Dynamic Apply.

10. Verify all potential communications links that can be used by this configuration using topic “Verifying the communications link for a data group” on page 195.

139

Page 140: MIMIX Reference

Checklist: New remote journal (preferred) configuration

11. Use Table 29 to create data group entries for this configuration. This configuration requires object entries and file entries for LF and PF files. For other object types or classes, any replication options identified in planning topic “Replication choices by object type” on page 96 are supported.

Table 29. How to configure data group entries for the remote journal (preferred) configuration.

Library-based objects:

1. Create object entries using “Creating data group object entries” on page 267.

2. After creating object entries, load file entries for LF and PF (source and data) *FILE objects using “Loading file entries from a data group’s object entries” on page 273.

Note: If you cannot use MIMIX Dynamic Apply for logical files or PF data files, you should still create file entries for PF data files to ensure that legacy cooperative processing can be used.

3. After creating object entries, load object tracking entries for any *DTAARA and *DTAQ objects to be replicated from a user journal. Use “Loading object tracking entries” on page 285.

Planning and requirements information: “Identifying library-based objects for replication” on page 100, “Identifying logical and physical files for replication” on page 105, and “Identifying data areas and data queues for replication” on page 112.

IFS objects:

1. Create IFS entries using “Creating data group IFS entries” on page 282.

2. After creating IFS entries, load IFS tracking entries for IFS objects to be replicated from a user journal. Use “Loading IFS tracking entries” on page 284.

Planning and requirements information: “Identifying IFS objects for replication” on page 118.

DLOs:

Create DLO entries using “Creating data group DLO entries” on page 287.

Planning and requirements information: “Identifying DLOs for replication” on page 124.

12. Use the #DGFE audit to confirm and automatically correct any problems found in file entries associated with data group object entries. Do the following:

a. Type WRKAUD RULE(#DGFE) and press Enter.

b. Next to the data group you want to confirm, type 9 (Run rule) and press Enter.

c. The results are placed in an outfile. For additional information, see “Interpreting results for configuration data - #DGFE audit” on page 580.

13. If you anticipate a delay between configuring data group entries (object, DLO, or IFS) and starting the data group, you should use the SETDGAUD command before synchronizing data between systems. Doing so will ensure that replicated objects will be properly audited and that any transactions for the objects that occur between configuration and starting the data group will be replicated. Use the procedure “Setting data group auditing values manually” on page 297.

14. Ensure that there are no batch jobs or users on the system that will be the source for replication for the rest of this procedure. Do not allow users onto the source system or batch processing until you have successfully completed Step 18.

15. Start journaling using the following procedures as needed for your configuration.

• For user journal replication, use “Journaling for physical files” on page 326 to start journaling on both source and target systems.

• For IFS objects configured for advanced journaling, use “Journaling for IFS objects” on page 330.

• For data areas or data queues configured for advanced journaling, use “Journaling for data areas and data queues” on page 334.

16. Synchronize the database files and objects on the systems between which replication occurs. Topic “Performing the initial synchronization” on page 483 includes instructions for how to establish a synchronization point and identifies the options available for synchronizing.

17. Verify your configuration. Topic “Verifying the initial synchronization” on page 487 identifies the additional aspects of your configuration that are necessary for successful replication.

18. Start the data groups. You should use the procedure “Starting Selected Data Group Processes” in the Using MIMIX book.


Checklist: New MIMIX source-send configuration

Best practices for MIMIX are to use MIMIX Remote Journal support for database replication. However, in cases where you cannot use remote journaling, this checklist will configure a new installation that uses MIMIX source-send processes for database replication. System journal replication is also configured.

To configure a source-send environment, perform the following steps on the system that you want to designate as the management system of the MIMIX installation:

1. Communications between the systems must be configured and operational before you start configuring MIMIX.

a. If communications is not configured, refer to Chapter 6, “System-level communications” for more information.

b. If you have TCP configured and plan to use it for your transfer protocol, verify that it is operational using the PING command.

2. Create system definitions for the management system and each of the network systems for the MIMIX installation. Use topic “Creating system definitions” on page 170.

3. Create transfer definitions to define the communications protocol used between pairs of systems. A pair of systems consists of a management system and a network system. Use topic “Creating a transfer definition” on page 184.

4. If you are using the TCP protocol, ensure that the Lakeview TCP server is running on each system defined in the transfer definition. You can use the Work with Active Jobs (WRKACTJOB) command to look for a job under the MIMIXSBS subsystem with a function of PGM-LVSERVER. If the Lakeview TCP server is not active on a system, use topic “Starting the Lakeview TCP/IP server” on page 189.

Note: You can optionally configure the Lakeview TCP server to start automatically. Use the procedure in topic “Using autostart job entries to start the TCP server” on page 190.

5. Verify that the communications link defined in each transfer definition is operational using topic “Verifying a communications link for system definitions” on page 194.

6. Start the MIMIX managers using topic “Starting the system and journal managers” on page 296. When the system manager is running, configuration information for data groups will be automatically replicated to the other system as you create it.

7. Create the data group definitions that you need using topic “Creating a data group definition” on page 247.

8. If the journaling environment does not exist, use topic “Building the journaling environment” on page 219 to create the journaling environment.

9. Verify all potential communications links that can be used by this configuration using topic “Verifying the communications link for a data group” on page 195.

10. Use Table 30 to create data group entries for this configuration. This configuration requires object entries and file entries for legacy cooperative processing of PF data files. For other object types or classes, any replication options identified in planning topic “Replication choices by object type” on page 96 are supported.

11. Use the #DGFE audit to confirm and automatically correct any problems found in file entries associated with data group object entries. Do the following:

a. Type WRKAUD RULE(#DGFE) and press Enter.

b. Next to the data group you want to confirm, type 9 (Run rule) and press Enter.

c. The results are placed in an outfile. For additional information, see “Interpreting results for configuration data - #DGFE audit” on page 580.

12. If you anticipate a delay between configuring data group entries (object, DLO, or IFS) and starting the data group, you should use the SETDGAUD command before synchronizing data between systems. Doing so will ensure that replicated objects will be properly audited and that any transactions for the objects that occur between configuration and starting the data group will be replicated. Use the procedure “Setting data group auditing values manually” on page 297.

13. Ensure that there are no batch jobs or users on the system that will be the source for replication for the rest of this procedure. Do not allow users or batch processing on the source system until you have successfully completed Step 17.

14. Start journaling using the following procedures as needed for your configuration.

• For user journal replication, use “Journaling for physical files” on page 326 to start journaling on both source and target systems.

• For IFS objects configured for advanced journaling, use “Journaling for IFS objects” on page 330.

• For data areas or data queues configured for advanced journaling, use “Journaling for data areas and data queues” on page 334.

Table 30. How to configure data group entries for a new MIMIX source-send configuration.

Class: Library-based objects
Do the following:
1. Create object entries using “Creating data group object entries” on page 267.
2. After creating object entries, load file entries for PF (data) *FILE objects using “Loading file entries from a data group’s object entries” on page 273.
3. After creating object entries, load object tracking entries for *DTAARA and *DTAQ objects to be replicated from a user journal. Use “Loading object tracking entries” on page 285.
Planning and requirement information:
“Identifying library-based objects for replication” on page 100
“Identifying logical and physical files for replication” on page 105
“Identifying data areas and data queues for replication” on page 112

Class: IFS objects
Do the following:
1. Create IFS entries using “Creating data group IFS entries” on page 282.
2. After creating IFS entries, load IFS tracking entries for IFS objects to be replicated from a user journal. Use “Loading IFS tracking entries” on page 284.
Planning and requirement information:
“Identifying IFS objects for replication” on page 118

Class: DLOs
Do the following:
Create DLO entries using “Creating data group DLO entries” on page 287.
Planning and requirement information:
“Identifying DLOs for replication” on page 124

15. Synchronize the database files and objects on the systems between which replication occurs. Topic “Performing the initial synchronization” on page 483 includes instructions for how to establish a synchronization point and identifies the options available for synchronizing.

16. Verify your configuration. Topic “Verifying the initial synchronization” on page 487 identifies the additional aspects of your configuration that are necessary for successful replication.

17. Start the data groups using the procedure “Starting selected data group processes” in the Using MIMIX book.


Checklist: Converting to remote journaling

Use this checklist to convert an existing data group from using MIMIX source-send processes to using MIMIX Remote Journal support for user journal replication.

Note: This checklist does not change values specified in data group entries that affect how files are cooperatively processed or how data areas, data queues, and IFS objects are processed. For example, files configured for legacy processing prior to this conversion will continue to be replicated with legacy cooperative processing.

Perform these tasks from the MIMIX management system unless these instructions indicate otherwise.

1. If you use a startup program, make the modifications to the program described in “Changes to startup programs” on page 305.

2. If you have implemented DDM password validation, you need to verify that your environment will allow MIMIX RJ support to work properly. Use topic “Checking DDM password validation level in use” on page 306.

3. Do the following to ensure that you have a functional transfer definition:

a. Modify the transfer definition to identify the RDB directory entry. Use topic “Changing a transfer definition to support remote journaling” on page 186.

b. Verify the communication link using “Verifying the communications link for a data group” on page 195.

4. If you are using the TCP protocol, ensure that the DDM TCP server is running using topic “Starting the DDM TCP/IP server” on page 308.

5. Connect the journal definitions for the local and remote journals using “Adding a remote journal link” on page 225. This procedure also creates the target journal definition.

6. Build the journaling environment on each system defined by the RJ pair using “Building the journaling environment” on page 219.

7. Modify the data group definition as follows:

a. From the Work with DG Definitions display, type a 2 (Change) next to the data group you want and press Enter.

b. The Change Data Group Definition (CHGDGDFN) display appears. Press Enter to see additional prompts.

c. Specify *YES for the Use remote journal link prompt.

d. When you are ready to accept the changes, press Enter.
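
The same change can be sketched from a command line. This is an assumption-laden sketch: RJLNK is an assumed keyword for the Use remote journal link prompt, and the data group name is a placeholder:

CHGDGDFN DGDFN(name system1 system2) RJLNK(*YES)  /* RJLNK: assumed keyword */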

8. To make the configuration changes effective, you need to end the data group you are converting to remote journaling and start it again as follows:

a. Perform a controlled end of the data group (ENDDG command), specifying *ALL for Process and *CNTRLD for End process. Refer to topic “Ending all replication in a controlled manner” in the Using MIMIX book.


b. Start data group replication using the procedure “Starting selected data group processes” in the Using MIMIX book. Be sure to specify *ALL for the Start processes prompt (PRC parameter) and *LASTPROC as the value for the Database journal receiver and Database sequence number prompts.
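
As a command-line sketch of this step: the PRC and ENDOPT keywords appear elsewhere in this book, but DBRCV and DBSEQ are assumed names for the Database journal receiver and Database sequence number prompts, and the data group name is a placeholder.

ENDDG DGDFN(name system1 system2) PRC(*ALL) ENDOPT(*CNTRLD)
STRDG DGDFN(name system1 system2) PRC(*ALL) DBRCV(*LASTPROC) DBSEQ(*LASTPROC)  /* DBRCV, DBSEQ: assumed keywords */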


Converting to MIMIX Dynamic Apply

Use either procedure in this topic to change a data group configuration to use MIMIX Dynamic Apply. In a MIMIX Dynamic Apply configuration, objects of type *FILE (LF, PF source and data) are replicated using primarily user journal replication processes. This configuration is the most efficient way to process these files.

• “Converting using the Convert Data Group command” on page 150 automatically converts a data group configuration.

• “Checklist: manually converting to MIMIX Dynamic Apply” on page 151 enables you to perform the conversion yourself.

It is recommended that you contact your Certified MIMIX Consultant for assistance before performing this procedure.

Requirements: Before starting, consider the following:

• Any data group that existed prior to installing version 5 must use one of these procedures to use MIMIX Dynamic Apply. As of version 5, newly created data groups are automatically configured to use MIMIX Dynamic Apply when the requirements and restrictions of MIMIX Dynamic Apply are met and shipped command defaults are used.

• Any data group to be converted must already be configured to use remote journaling.

• Any data group to be converted must have *SYSJRN specified as the value of Cooperative journal (COOPJRN).

• Keyed replication cannot be present in the data group configuration.

• A minimum level of i5/OS PTFs are required on both systems. For a complete list of required and recommended IBM PTFs, log in to Support Central and refer to the Technical Documents page.

• The conversion must be performed from the management system. The data group must be active when starting the conversion.

For additional information about configuration requirements and limitations of MIMIX Dynamic Apply, see “Identifying logical and physical files for replication” on page 105.

Converting using the Convert Data Group command

The Convert Data Group (CVTDG) command automatically converts the configuration of specified data groups to enable MIMIX Dynamic Apply. The command attempts to perform the steps described in the manual procedure and issues diagnostic messages if a step cannot be performed.

Perform the following steps from the management system on an active data group:

1. From a command line enter the command:

CVTDG DGDFN(name system1 system2)

2. Watch for diagnostic messages in the job log and take any recovery action indicated.

The conversion is complete when you see message LVI321A.


Checklist: manually converting to MIMIX Dynamic Apply

Perform the following steps from the management system to enable an existing data group to use MIMIX Dynamic Apply:

1. Verify the environment meets the requirements and restrictions. See “Requirements and limitations of MIMIX Dynamic Apply” on page 110.

2. Apply any IBM PTFs (or their supersedes) associated with i5/OS releases as they pertain to your environment. Log in to Support Central and refer to the Technical Documents page for a list of required and recommended IBM PTFs.

3. Verify that the System Manager jobs are active. See “Starting the system and journal managers” on page 296.

4. Verify that the data group is synchronized by running the MIMIX audits. See “Verifying the initial synchronization” on page 487.

5. Use the Work with Data Groups display to ensure that there are no files on hold and no failed or delayed activity entries. Refer to topic “Preparing for a controlled end of a data group” in the Using MIMIX book.

Note: Topic “Ending a data group in a controlled manner” in the Using MIMIX book includes subtask “Preparing for a controlled end of a data group” and the other subtasks needed for Step 6 and Step 7.

6. Perform a controlled end of the data group you are converting. Follow the procedure for “Performing the controlled end” in the Using MIMIX book.

7. Ensure that there are no open commit cycles for the database apply process. Follow the steps for “Confirming the end request completed without problems” in the Using MIMIX book.

8. From the management system, change the data group definition so that the Cooperative journal (COOPJRN) parameter specifies *USRJRN. Use the command:

CHGDGDFN DGDFN(name system1 system2) COOPJRN(*USRJRN)

9. Ensure that you have one or more data group object entries that specify the required values. These entries identify the items within the name space for replication. You may need to create additional entries to achieve desired results. For more information, see “Identifying logical and physical files for replication” on page 105.

10. To ensure that new files created while the data group is inactive are automatically journaled, create the QDFTJRN data areas in the libraries configured for replication of cooperatively processed files by running the following command from the source system:

SETDGAUD DGDFN(name system1 system2) OBJTYPE(*AUTOJRN)

11. From the management system, use the following command to load the data group file entries from the target system. Ensure that the value you specify (*SYS1 or *SYS2) for the LODSYS parameter identifies the target system.

LODDGFE DGDFN(name system1 system2) CFGSRC(*DGOBJE) UPDOPT(*ADD) LODSYS(value) SELECT(*NO)


For additional information about loading file entries, see “Loading file entries from a data group’s object entries” on page 273.

12. Start journaling for all files not previously journaled. See “Starting journaling for physical files” on page 326.

13. Start the data group specifying the command as follows:

STRDG DGDFN(name system1 system2) CRLPND(*YES)

14. Verify that the data group is synchronized by running the MIMIX audits. See “Verifying the initial synchronization” on page 487.


Checklist: Change *DTAARA, *DTAQ, IFS objects to user journaling

Use this checklist to change the configuration of an existing data group so that IFS objects, *DTAARA and *DTAQ objects can be replicated from entries in a user journal. This environment is also called advanced journaling.

Topic “User journal replication of IFS objects, data areas, data queues” on page 72 describes the benefits and restrictions of replicating these objects from user journal entries. It also identifies the MIMIX processes used for replication and the purpose of tracking entries.

To convert existing data groups to use advanced journaling, do the following:

1. Determine if IFS objects, data areas, and data queues should be replicated in a data group shared with other objects undergoing database replication, or if these objects should be in a separate data group. Topic “Planning for journaled IFS objects, data areas, and data queues” on page 85 provides guidelines for the following planning considerations:

• Serializing transactions with database files

• Converting existing data groups, including examples

• Database apply session balancing

• User exit program considerations

2. Perform a controlled end of the data groups that will include objects to be replicated using advanced journaling. See the Using MIMIX book for how to end a data group in a controlled manner (ENDOPT(*CNTRLD)).

3. Ensure that all pending activity for objects and IFS objects has completed. Use the command WRKDGACTE STATUS(*ACTIVE) to display any pending activity entries. Any activities that are still in progress will be listed.

4. The data group definitions used for user journal replication of IFS objects, data areas, and data queues must specify *ALL as the value for Data group type (TYPE). Verify the value in the data group definition is correct. If necessary, change the value.
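
For example, using the DGDFN and TYPE keywords named above, a command-line sketch of this change (the data group name is a placeholder):

CHGDGDFN DGDFN(name system1 system2) TYPE(*ALL)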

5. Add or change data group IFS entries for the IFS objects you want to replicate. Be sure to specify *YES for the Cooperate with database prompt in procedure “Adding or changing a data group IFS entry” on page 282. For additional information, see “Restrictions - user journal replication of IFS objects” on page 121.

6. Add or change data group object entries for the data areas and data queues you want to replicate using the procedure “Adding or changing a data group object entry” on page 268. For additional information, see “Restrictions - user journal replication of data areas and data queues” on page 113.

7. Load the tracking entries associated with the data group IFS entries and data group object entries you configured. Use the procedures in “Loading tracking entries” on page 284.


8. Start journaling using the following procedures as needed for your configuration. If you ever plan to switch the data groups, you must also start journaling on the target system.

• For IFS objects, use “Starting journaling for IFS objects” on page 330

• For data areas or data queues, use “Starting journaling for data areas and data queues” on page 334

9. Verify that journaling is started correctly. This step is important to ensure the IFS objects, data areas and data queues are actually replicated. For IFS objects, see “Verifying journaling for IFS objects” on page 332. For data areas and data queues, see “Verifying journaling for data areas and data queues” on page 336.

10. If you anticipate a delay between configuring data group IFS, object, or file entries and starting the data group, use the SETDGAUD command before synchronizing data between systems. Doing so will ensure that replicated objects are properly audited and that any transactions for the objects that occur between configuration and starting the data group are replicated. Use the procedure “Setting data group auditing values manually” on page 297.

11. Synchronize the IFS objects, data areas and data queues between the source and target systems. For IFS objects, follow the Synchronize IFS Object (SYNCIFS) procedures. For data areas and data queues, follow the Synchronize Object (SYNCOBJ) procedures. Refer to chapter “Synchronizing data between systems” on page 472 for additional information.
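
As a hedged sketch of this step, assuming each command's shipped defaults select all entries configured for the data group (verify the defaults on your system before running):

SYNCIFS DGDFN(name system1 system2)  /* assumes defaults select all configured IFS entries */
SYNCOBJ DGDFN(name system1 system2)  /* assumes defaults select all configured object entries */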

12. If you are replicating large amounts of data, you should specify i5/OS journal receiver size options that provide large journal receivers and large journal entries. Journals created by MIMIX are configured to allow maximum amounts of data. Journals that already exist may need to be changed.

a. After IFS objects are configured, perform the steps in “Verifying journal receiver size options” on page 213 to ensure journaling is configured appropriately.

b. Change any journal receiver size options necessary using “Changing journal receiver size options” on page 213.

13. If you have database replication user exit programs, changes may need to be made. See “User exit program considerations” on page 87.

14. Once you have completed the preceding steps, start the data groups. For more information about starting data groups, see the Using MIMIX book.


Checklist: Converting to legacy cooperative processing

If you find that you cannot use MIMIX Dynamic Apply for logical and physical files, use this checklist to change the configuration of an existing data group so that user journal replication (MIMIX Dynamic Apply) is no longer used. This checklist changes the configuration so that physical data files can be processed using legacy cooperative processing. Logical files and physical source files will be processed using the system journal. For more information, see “Requirements and limitations of legacy cooperative processing” on page 111.

Important! Before you use this checklist, consider the following:

• As of version 5, newly created data groups are configured for MIMIX Dynamic Apply when default values are taken and configuration requirements are met.

• This checklist does not convert user journal replication processes from using remote journaling to MIMIX source-send processing.

• This checklist only affects the configuration of *FILE objects. The configuration of any other *DTAARA, *DTAQ, or IFS objects that are replicated through the user journal are not affected.

Perform the following steps to enable legacy cooperative processing and system journal replication:

1. Verify that the data group is synchronized by running the MIMIX audits. See “Verifying the initial synchronization” on page 487.

2. Use the Work with Data Groups display to ensure that there are no files on hold and no failed or delayed activity entries. Refer to topic “Preparing for a controlled end of a data group” in the Using MIMIX book.

Note: Topic “Ending a data group in a controlled manner” in the Using MIMIX book includes subtask “Preparing for a controlled end of a data group” and the subtask needed for Step 3.

3. End the data group you are converting by performing a controlled end. Follow the procedure for “Performing the controlled end” in the Using MIMIX book.

4. From the management system, change the data group definition so that the Cooperative journal (COOPJRN) parameter specifies *SYSJRN. Use the command:

CHGDGDFN DGDFN(name system1 system2) COOPJRN(*SYSJRN)

5. From the management system, use the following command to load the data group file entries from the target system. Ensure that the value you specify (*SYS1 or *SYS2) for the LODSYS parameter identifies the target system.

LODDGFE DGDFN(name system1 system2) CFGSRC(*DGOBJE) UPDOPT(*REPLACE) LODSYS(value) SELECT(*NO)

For additional information about loading file entries, see “Loading file entries from a data group’s object entries” on page 273.

6. Optional step: Delete the QDFTJRN data areas. These data areas automatically start journaling for newly created files. This may not be desired because the journal image (JRNIMG) value for these files may be different from the value specified in the MIMIX configuration. Such a difference will be detected by the file attributes (#FILATR) audit. To delete these data areas, run the following command from each system:

DLTDTAARA DTAARA(library/QDFTJRN)
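
If several libraries are configured for replication, a short CL program can delete the data area from each. LIBA and LIBB are placeholder library names, and monitoring for CPF2105 (object not found) is an assumption that lets the program continue when a library has no QDFTJRN data area:

PGM  /* placeholder libraries; adjust to your configuration */
  DLTDTAARA DTAARA(LIBA/QDFTJRN)
  MONMSG MSGID(CPF2105)  /* assumed: ignore object-not-found */
  DLTDTAARA DTAARA(LIBB/QDFTJRN)
  MONMSG MSGID(CPF2105)
ENDPGM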

7. Start the data group specifying the command as follows:

STRDG DGDFN(name system1 system2) CRLPND(*YES)


Chapter 6

System-level communications

This information is provided to assist you with configuring the System i5 communications that are necessary before you can configure MIMIX. MIMIX supports the following communications protocols:

• Transmission Control Protocol/Internet Protocol (TCP/IP)

• Systems Network Architecture (SNA)

• OptiConnect

MIMIX should have a dedicated communications line that is not shared with other applications, jobs, or users on the production system. A dedicated path will make it easier to fine-tune your MIMIX environment and to determine the cause of problems. For TCP/IP, it is recommended that the TCP/IP host name or interface used be in its own subnet. For SNA, it is recommended that MIMIX have its own communication line instead of sharing an existing SNA device.

Your Certified MIMIX Consultant can assist you in determining your communications requirements and ensuring that communications can efficiently handle peak volumes of journal transactions.

If you plan to use system journal replication processes, you need to consider additional aspects that may affect the communications speed. These aspects include the type of objects being transferred and the size of data queues, user spaces, and files defined to cooperate with user journal replication processes.

MIMIX IntelliStart can help you determine your communications requirements.

The topics in this chapter include:

• “Configuring for native TCP/IP” on page 159 describes using native TCP/IP communications and provides steps to prepare and configure your system for it.

• “Configuring APPC/SNA” on page 163 describes basic requirements for SNA communications.

• “Configuring OptiConnect” on page 163 describes basic requirements for OptiConnect communications and identifies MIMIX limitations when this communications protocol is used.

Configuring for native TCP/IP

MIMIX has the ability to use native TCP/IP communications over sockets. This allows users with TCP communications on their networks to use MIMIX without requiring the use of IBM ANYNET through SNA.

Using TCP/IP communications may or may not improve your CPU usage, but if your primary communications protocol is TCP/IP, this can simplify your network configuration.

Native TCP/IP communications give MIMIX users greater flexibility and provide another option among the communications protocols available for use on their System i5 systems.


MIMIX users can also continue to use IBM ANYNET support to run SNA protocols over TCP networks.

Preparing your system to use TCP/IP communications with MIMIX requires the following:

1. Configure both systems to use TCP/IP. The procedure for configuring a system to use TCP/IP is documented in the information included with the i5/OS software. Refer to the IBM TCP/IP Fastpath Setup book, SC41-5430, and follow the instructions to configure the system to use TCP/IP communications.

2. If you need to use port aliases, do the following:

a. Refer to the examples “Port aliases-simple example” on page 160 and “Port aliases-complex example” on page 161.

b. Create the port aliases for each system using the procedure in topic “Creating port aliases” on page 162.

3. Once the system-level communication is configured, you can begin the MIMIX configuration process.

Port aliases-simple example

Before using the MIMIX TCP/IP support, you must first configure the system to recognize the feature. This involves identifying the ports that will be used by MIMIX to communicate with other systems. The port identifiers used depend on the configuration of the MIMIX installations. MIMIX installations vary according to the needs of each enterprise. At a minimum, a MIMIX installation consists of one management system and one network system. A more complex MIMIX installation may consist of one management system and multiple network systems. A large enterprise may even have multiple MIMIX installations that are interconnected.

Figure 8 shows a simple MIMIX installation in which the management system (LONDON) and a network system (HONGKONG) use the TCP communications protocol through the port number 50410. Figure 9 shows a MIMIX installation with two network systems.

Figure 8. Creating Ports. In this example, the MIMIX installation consists of two systems.

Figure 9. Creating Ports. In this example, the MIMIX installation consists of three systems, two of which are network systems.

In both Figure 8 and Figure 9, if you need to use port aliases for port 50410, you need to have a service table entry on each system that equates the port number to the port alias. For example, you might have a service table entry on system LONDON that defines an alias of MXMGT for port number 50410. Similarly, you might have service table entries on systems HONGKONG and CHICAGO that define an alias of MXNET for port 50410. You would use these aliases in the PORT1 and PORT2 parameters in the transfer definition.
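
For example, the aliases above could be added from a command line with the standard i5/OS Add Service Table Entry command (run the first command on LONDON and the second on HONGKONG and CHICAGO; the descriptions are illustrative):

ADDSRVTBLE SERVICE('MXMGT') PORT(50410) PROTOCOL('tcp') TEXT('MIMIX native TCP port')
ADDSRVTBLE SERVICE('MXNET') PORT(50410) PROTOCOL('tcp') TEXT('MIMIX native TCP port')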

Port aliases-complex example

If a network system communicates with more than one management system (that is, it participates in multiple MIMIX installations), it must have a different port for each management system with which it communicates. Figure 10 shows an example of such an environment with two MIMIX installations. In the LIBA cluster, the port 50410 is used to communicate between LONDON (the management system) and HONGKONG and CHICAGO (network systems). In the LIBB cluster, the port 50411 is used to communicate between CHICAGO (the management system for this cluster) and MEXICITY and CAIRO. The CHICAGO system has two port numbers defined, one for each MIMIX installation in which it participates.

Figure 10. Creating Port Aliases. In this example, the system CHICAGO participates in two MIMIX installations and uses a separate port for each MIMIX installation.

If you need to use port aliases in an environment such as Figure 10, you need to have a service table entry on each system that equates the port number to the port alias. In this example, CHICAGO would require two port aliases and two service table entries. For example, you might use a port alias of LIBAMGT for port 50410 on LONDON and an alias of LIBANET for port 50410 on both HONGKONG and CHICAGO. You might use an alias of LIBBMGT for port 50411 on CHICAGO and an alias of LIBBNET for port 50411 on both CAIRO and MEXICITY. You would use these port aliases in the PORT1 and PORT2 parameters on the transfer definitions.

Creating port aliases

The following procedure describes the steps for creating port aliases, which allow MIMIX installations to communicate through TCP/IP.

Notes:

• Perform this procedure on each system in the MIMIX installation that will use the TCP protocol.

• To allow communications in both directions between a pair of systems, such as between a management system and a network system, you need to add port aliases for both systems in the pair on each system.

• If you are using more than one MIMIX installation, define a different set of aliases for each MIMIX installation.

Do the following to create a port alias on a system:

1. From a command line, type the command CFGTCP and press Enter.

2. The Configure TCP/IP menu appears. Select option 21 (Configure related tables) and press Enter.


3. The Configure Related Tables display appears. Select option 1 (Work with service table entries) and press Enter.

4. The Work with Service Table Entries display appears. Do the following:

a. Type a 1 in the Opt column next to the blank lines at the top of the list.

b. In the blank at the top of the Service column, use uppercase characters to specify the alias that the System i5 will use to identify this port as a MIMIX native TCP port.

Note: Port alias names are case sensitive and must be unique to the system on which they are defined. For environments that have only one MIMIX installation, Lakeview Technology recommends that you use the same port number or same port alias on each system in the MIMIX installation.

c. In the blank at the top of the Port column, specify the number of an unused port ID to be associated with the alias. The port ID can be any number greater than 1024 and less than 55534 that is not being used by another application. You can page down through the list to ensure that the number is not being used by the system.

d. In the blank at the top of the Protocol column, type TCP to identify this entry as using TCP/IP communications.

e. Press Enter.

5. The Add Service Table Entry (ADDSRVTBLE) display appears. Verify that the information shown for the alias and port is what you want. At the Text 'description' prompt, type a description of the port alias, enclosed in apostrophes, and then press Enter.

Attention: MIMIX requires that you restrict the length of port aliases to 14 or fewer characters and suggests that you specify the alias in uppercase characters.

Configuring APPC/SNA

Before you create a transfer definition that uses the SNA protocol, a functioning SNA (APPN or APPC) line, controller, and device must exist between the systems that will be identified by the transfer definition. If a line, controller, and device do not exist, consult your network administrator before continuing.

Configuring OptiConnect

If you plan to use the OptiConnect protocol, a functioning OptiConnect line must exist between the two systems that you identify in the transfer definition.

You can use the OptiConnect® product from IBM for all communication for most¹ MIMIX processes. Use the IBM book OptiConnect for OS/400 to install and verify OptiConnect communications. Then you can do the following:


• Ensure that the QSOC library is in the system portion of the library list. Use the command DSPSYSVAL SYSVAL(QSYSLIBL) to verify whether the QSOC library is in the system portion of the library list. If it is not, use the CHGSYSVAL command to add this library to the system library list.

• When you create the transfer definition, specify *OPTI for the transfer protocol.
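
As a sketch of the library-list change described above: the VALUE string replaces the entire system library list, so include your system's existing entries; the list shown assumes shipped defaults.

CHGSYSVAL SYSVAL(QSYSLIBL) VALUE('QSYS QSYS2 QHLPSYS QUSRSYS QSOC')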

1. The #FILDTA audit and the Compare File Data (CMPFILDTA) command require TCP/IP communications.


Chapter 7

Configuring system definitions

By creating a system definition, you identify to MIMIX the characteristics of a System i5 system that participates in a MIMIX installation.

When you create a system definition, MIMIX automatically creates a journal definition for the security audit journal (QAUDJRN) for the associated system. This journal definition is used by MIMIX system journal replication processes. It is recommended that you avoid naming system definitions based on their roles. System roles such as source, target, production, and backup change upon switching.

The topics in this chapter include:

• “Tips for system definition parameters” on page 167 provides tips for using the more common options for system definitions.

• “Creating system definitions” on page 170 provides the steps to follow for creating system definitions.

• “Changing a system definition” on page 171 provides the steps to follow for changing a system definition.

• “Multiple network system considerations” on page 172 describes recommendations when configuring an environment that has multiple network systems.


Tips for system definition parameters

This topic provides tips for using the more common options for system definitions. Context-sensitive help is available online for all options on the system definition commands.

System definition (SYSDFN) This parameter is a single-part name that represents a system within a MIMIX installation. This name is a logical representation and does not need to match the system name that it represents.

Note: In the first part of the name, the first character must be either A - Z, $, #, or @. The remaining characters can be alphanumeric and can contain a $, #, @, a period (.), or an underscore (_).

System type (TYPE) This parameter indicates the role of this system within the MIMIX installation. A system can be a management (*MGT) system or a network (*NET) system. Only one system in the MIMIX installation can be a management system.

Transfer definitions (PRITFRDFN, SECTFRDFN) These parameters identify the primary and secondary transfer definitions used for communicating with the system. The communications path and protocol are defined in the transfer definitions. For MIMIX to be operational, the transfer definition names you specify must exist. MIMIX does not automatically create transfer definitions. If you accept the default value PRIMARY for the Primary transfer definition, create a transfer definition by that name. If you specify a Secondary transfer definition, it will be used by MIMIX if the communications path specified by the primary transfer definition is not available.

Cluster member (CLUMBR) You can specify whether you want this system definition to be a member of a cluster. The system (node) will not be added to the cluster until the system manager is started for the first time.

Cluster transfer definition (CLUTFRDFN) You can specify the transfer definition that cluster resource services will use to communicate with the node and that the node will use to communicate with other nodes in the cluster. You must specify *TCP as the transfer protocol.

Message handling (PRIMSGQ, SECMSGQ) MIMIX uses the centralized message log facility which is common to all MIMIX products. These parameters provide additional flexibility by allowing you to identify the message queues associated with the system definition and define the message filtering criteria for each message queue. By default, the primary message queue, MIMIX, is located in the MIMIXQGPL library. You can specify a different message queue or optionally specify a secondary message queue. You can also control the severity and type of messages that are sent to each message queue.

Manager delay times (JRNMGRDLY, SYSMGRDLY) Two parameters define the delay times used for all journal management and system management jobs. The value of the journal manager delay parameter determines how often the journal manager process checks for work to perform. The value of the system manager delay parameter determines how often the system manager process checks for work to perform.


Output queue values (OUTQ, HOLD, SAVE) These parameters identify an output queue used by this system definition and define characteristics of how the queue is handled. Any MIMIX functions that generate reports use this output queue. You can hold spooled files on the queue and save spooled files after they are printed.

Keep history (KEEPSYSHST, KEEPDGHST) Two parameters specify the number of days to retain MIMIX system history and data group history. MIMIX system history includes the system message log. Data group history includes time stamps and distribution history. You can keep both types of history information on the system for up to a year.

Keep notifications (KEEPNEWNFY, KEEPACKNFY) Two parameters specify the number of days to retain new and acknowledged notifications. The Keep new notifications (days) parameter specifies the number of days to retain new notifications in the MIMIX data library. The Keep acknowledged notifications (days) parameter specifies the number of days to retain acknowledged notifications in the MIMIX data library.

MIMIX data library, storage limit (KEEPMMXDTA, DTALIBASP, DSKSTGLMT) Three parameters define information about MIMIX data libraries on the system. The Keep MIMIX data (days) parameter specifies the number of days to retain objects in the MIMIX data library, including the container cache used by system journal replication processes. The MIMIX data library ASP parameter identifies the auxiliary storage pool (ASP) from which the system allocates storage for the MIMIX data library. For libraries created in a user ASP, all objects in the library must be in the same ASP as the library. The Disk storage limit (GB) parameter specifies the maximum amount of disk storage that may be used for the MIMIX data libraries.

User profile and job descriptions (SBMUSR, MGRJOBD, DFTJOBD) MIMIX runs under the MIMIXOWN user profile and uses several job descriptions to optimize MIMIX processes. The default job descriptions are stored in the MIMIXQGPL library.

Job restart time (RSTARTTIME) System-level MIMIX jobs, including the system manager and journal manager, restart daily to maintain the MIMIX environment. You can change the time at which these jobs restart. The management or network role of the system affects the results of the time you specify on a system definition. Changing the job restart time is considered an advanced technique.

Printing (CPI, LPI, FORMLEN, OVRFLW, COPIES) These parameters control characteristics of printed output.

Product library (PRDLIB) This parameter is used for installing MIMIX into a switchable independent ASP, and allows you to specify a MIMIX installation library that does not match the library name of the other system definitions. The only time this parameter should be used is in the case of an INTRA system (which is handled by the default value) or in replication environments where it is necessary to have extra MIMIX system definitions that will “switch locations” along with the switchable independent ASP. Due to its complexity, changing the product library is considered an advanced technique and should not be attempted without the assistance of a Certified MIMIX Consultant.

ASP group (ASPGRP) This parameter is used for installing MIMIX into a switchable independent ASP, and defines the ASP group (independent ASP) in which the product library exists. Again, this parameter should only be used in replication environments involving a switchable independent ASP. Due to its complexity, changing the ASP group is considered an advanced technique and should not be attempted without the assistance of a Certified MIMIX Consultant.


Creating system definitions

To create a system definition, do the following:

1. From the MIMIX Configuration Menu, select option 1 (Work with system definitions) and press Enter.

2. The Work with System Definitions display appears. Type a 1 (Create) next to the blank line at the top of the list area and press Enter.

3. The Create System Definition (CRTSYSDFN) display appears. Specify a name at the System definition prompt. Once created, the name can only be changed by using the Rename System Definition command.

4. Specify the appropriate value for the system you are defining at the System type prompt.

5. Specify the names of the transfer definitions you want at the Primary transfer definition and, if desired, the Secondary transfer definition prompts.

6. If the system definition is for a cluster environment, do the following:

a. Specify *YES at the Cluster member prompt.

b. Verify that the value of the Cluster transfer definition is what you want. If necessary, change the value.

7. If you want to use a secondary message queue, at the Secondary message handling prompts, specify the name and library of the message queue and values indicating the severity and information type of messages to be sent to the queue.

8. At the Description prompt, type a brief description of the system definition.

9. If you want to verify or change values for additional parameters, press F10 (Additional parameters).

10. To create the system definition, press Enter.
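As an alternative to the prompted display, the same definition can be created with a single command. The system name and description below are illustrative; with no other parameters specified, defaults such as the PRIMARY transfer definition name are used:

CRTSYSDFN SYSDFN(CHICAGO) TYPE(*NET) TEXT('Chicago network system')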


Changing a system definition

To change a system definition, do the following:

1. From the MIMIX Configuration Menu, select option 1 (Work with system definitions) and press Enter.

2. The Work with System Definitions display appears. Type a 2 (Change) next to the system definition you want and press Enter.

3. The Change System Definition (CHGSYSDFN) display appears. Press F10 (Additional parameters).

4. Locate the prompt for the parameter you need to change and specify the value you want. Press F1 (Help) for more information about the values for each parameter.

5. To save the changes, press Enter.


Multiple network system considerations

When configuring an environment that has multiple network systems, it is recommended that each system definition in the environment specify the same name for the Primary transfer definition prompt. This configuration is necessary for the MIMIX system managers to communicate between the management system and all systems in the network. Data groups can use the same transfer definitions that the system managers use, or they can use differently named transfer definitions.

Similarly, if you use secondary transfer definitions, it is recommended that each system definition in the multiple network environment specifies the same name for the Secondary transfer definition prompt. (The value of the Secondary transfer definition should be different from the value of the Primary transfer definition.)

Figure 11 shows system definitions in a multiple network system environment. The management system (LONDON) specifies the value PRIMARY for the primary transfer definition in its system definition. The management system can communicate with the other systems using any transfer definition named PRIMARY that has a value for System 1 or System 2 that resolves to its system name (LONDON). Figure 12 shows the recommended transfer definition configuration which uses the value *ANY for both systems identified by the transfer definition.

The management system LONDON could also use any transfer definition that specified the name LONDON as the value for either System 1 or System 2.

The default value for the name of a transfer definition is PRIMARY. If you use a different name, you need to specify that name as the value for the Primary transfer definition prompt in all system definitions in the environment.

Figure 11. Example of system definition values in a multiple network system environment.

  Work with System Definitions
                                                      System:   LONDON
  Type options, press Enter.
    1=Create   2=Change   3=Copy   4=Delete   5=Display   6=Print   7=Rename
    11=Verify communications link   12=Journal definitions
    13=Data group definitions       14=Transfer definitions

                         -Transfer Definitions-   Cluster
  Opt  System    Type    Primary     Secondary    Member
  __   CHICAGO   *NET    PRIMARY     *NONE        *NO
  __   NEWYORK   *NET    PRIMARY     *NONE        *NO
  __   LONDON    *MGT    PRIMARY     *NONE        *NO

Figure 12. Example of a contextual (*ANY) transfer definition in use for a multiple network system environment.

  Work with Transfer Definitions
                                                      System:   LONDON
  Type options, press Enter.
    1=Create   2=Change   3=Copy   4=Delete   5=Display   6=Print   7=Rename
    11=Verify communications link

                 ---------Definition---------              Threshold
  Opt  Name        System 1   System 2    Protocol        (MB)
  __   __________  ________   ________
  __   PRIMARY     *ANY       *ANY        *TCP            *NOMAX


Chapter 8

Configuring transfer definitions

By creating a transfer definition, you identify to MIMIX the communications path and protocol to be used between two systems. You need at least one transfer definition for each pair of systems between which you want to perform replication. A pair of systems consists of a management system and a network system. If you want to be able to use different transfer protocols between a pair of systems, create a transfer definition for each protocol.

System-level communication must be configured and operational before you can use a transfer definition.

You can also define an additional communications path in a secondary transfer definition. If configured, MIMIX can automatically use a secondary transfer definition if the path defined in your primary transfer definition is not available.

In an Intra environment, a transfer definition defines a communications path and protocol to be used between the two product libraries used by Intra. For detailed information about configuring an Intra environment, refer to “Configuring Intra communications” on page 559.

Once transfer definitions exist for MIMIX, they can be used for other functions, such as the Run Command (RUNCMD), or by other MIMIX products for their operations.

The topics in this chapter include:

• “Tips for transfer definition parameters” on page 176 provides tips for using the more common options for transfer definitions.

• “Using contextual (*ANY) transfer definitions” on page 181 describes using the value (*ANY) when configuring transfer definitions.

• “Creating a transfer definition” on page 184 provides the steps to follow for creating a transfer definition.

• “Changing a transfer definition” on page 186 provides the steps to follow for changing a transfer definition. This topic also includes a sub-task for changing a transfer definition when converting to a remote journaling environment.

• “Finding the system database name for RDB directory entries” on page 188 provides the steps to follow for finding the system database name for RDB directory entries.

• “Starting the Lakeview TCP/IP server” on page 189 provides the steps to follow if you need to start the Lakeview TCP/IP server.

• “Using autostart job entries to start the TCP server” on page 190 provides the steps to configure the Lakeview TCP server to start automatically every time the MIMIX subsystem is started.

• “Verifying a communications link for system definitions” on page 194 provides the steps to verify that the communications link defined for each system definition is operational.


• “Verifying the communications link for a data group” on page 195 provides a procedure to verify the primary transfer definition used by the data group.


Tips for transfer definition parameters

This topic provides tips for using the more common options for transfer definitions. Context-sensitive help is available online for all options on the transfer definition commands.

Transfer definition (TFRDFN) This parameter is a three-part name that identifies a communications path between two systems. The first part of the name identifies the transfer definition. The second and third parts of the name identify two different system definitions which represent the systems between which communication is being defined. Lakeview recommends that you use PRIMARY as the name of one transfer definition. To support replication, a transfer definition must identify the two systems that will be used by the data group. You can explicitly specify the two systems, or you can allow MIMIX to resolve the names of the systems. For more information about allowing MIMIX to resolve the system names, see “Using contextual (*ANY) transfer definitions” on page 181.

Note: In the first part of the name, the first character must be either A - Z, $, #, or @. The remaining characters can be alphanumeric and can contain a $, #, @, a period (.), or an underscore (_).

For more information, see “Naming convention for remote journaling environments with 2 systems” on page 206.

Short transfer definition name (TFRSHORTN) This parameter specifies the short name of the transfer definition to be used in generating a relational database (RDB) directory name. The short transfer definition name must be a unique, four-character name if you specify to have MIMIX manage your RDB directory entries. Lakeview recommends that you use the default value *GEN to generate the name. The generated name is a concatenation of the first character of the transfer definition name, the last character of the system 1 name, the last character of the system 2 name, and a fourth character that is either a blank, a letter (A - Z), or a single-digit number (0 - 9).
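The *GEN rule described above can be sketched as follows. This is an illustrative sketch, not MIMIX code; the `uniq` argument stands in for the fourth character that MIMIX chooses for uniqueness, and the system names are hypothetical:

```python
def gen_short_name(tfrdfn: str, sys1: str, sys2: str, uniq: str = " ") -> str:
    # First character of the transfer definition name, last character of
    # each system name, plus a fourth character (blank, A-Z, or 0-9)
    # chosen by MIMIX for uniqueness (supplied here by the caller).
    return (tfrdfn[0] + sys1[-1] + sys2[-1] + uniq).rstrip()

print(gen_short_name("PRIMARY", "NEWYORK", "CHICAGO"))  # prints PKO
```

So a transfer definition named PRIMARY between NEWYORK and CHICAGO would yield a short name beginning PKO.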

Transfer protocol (PROTOCOL) This parameter specifies the communications protocol to be used. Each protocol has a set of related parameters. If you change the protocol specified after you have created the transfer definition, MIMIX saves information about both protocols.

For the *TCP protocol the following parameters apply:

• System x host name or address (HOST1, HOST2) These two parameters specify the host name or address of system 1 and system 2, respectively. The name is a mixed-case host alias name or a TCP address (nnn.nnn.nnn.nnn) and can be up to 256 characters in length. For the HOST1 parameter, the special value *SYS1 indicates that the host name is the same as the name specified for System 1 in the Transfer definition parameter. Similarly, for the HOST2 parameter, the special value *SYS2 indicates that the host name is the same as the name specified for System 2 in the Transfer definition parameter.

• System x port number or alias (PORT1, PORT2) These two parameters specify the port number or port alias of system 1 and system 2, respectively. The value of each parameter can be a 14-character mixed-case TCP port number or port alias with a range from 1000 through 55534. Lakeview Technology recommends using values between 40000 and 55500 to avoid potential conflicts with designations made by the operating system. By default, the PORT1 parameter uses port 50410. For the PORT2 parameter, the default special value *PORT1 indicates that the value specified on the System 1 port number or alias (PORT1) parameter is used. If you configured TCP using port aliases in the service table, specify the alias name instead of the port number.

For the *SNA protocol the following parameters apply:

• System x location name (LOCNAME1, LOCNAME2) These two parameters specify the location name or address of system 1 and system 2, respectively. The value of each parameter is the unique location name that identifies the system to remote devices. For the LOCNAME1 parameter, the special value *SYS1 indicates that the location name is the same as the name specified for System 1 on the Transfer definition (TFRDFN) parameter. Similarly, for the LOCNAME2 parameter, the special value *SYS2 indicates that the location name is the same as the name specified for System 2 on the Transfer definition (TFRDFN) parameter.

• System x network identifier (NETID1, NETID2) These two parameters specify the name of the network for system 1 and system 2, respectively. The default value *LOC indicates that the network identifier for the location name associated with the system is used. The special value *NETATR indicates that the value specified in the system network attributes is used. The special value *NONE indicates that the network has no name. For the NETID2 parameter, the special value *NETID1 indicates that the network identifier specified on the System 1 network identifier (NETID1) parameter is used.

• SNA mode (MODE) This parameter specifies the name of the mode description used for communication. The default name is MIMIX. The special value *NETATR indicates that the value specified in the system network attributes is used.

The following parameters apply for the *OPTI protocol:

• System x location name (LOCNAME1, LOCNAME2) These two parameters specify the location name or address of system 1 and system 2, respectively. The value of each parameter is the unique location name that identifies the system to remote devices. For the LOCNAME1 parameter, the special value *SYS1 indicates that the location name is the same as the name specified for System 1 on the Transfer definition (TFRDFN) parameter. Similarly, for the LOCNAME2 parameter, the special value *SYS2 indicates that the location name is the same as the name specified for System 2 on the Transfer definition (TFRDFN) parameter.

Threshold size (THLDSIZE) This parameter is accessible when you press F10 (Additional parameters). It specifies the maximum size of files and objects that are sent; if a file or object exceeds the threshold, it is not sent. Valid values range from 1 through 9999999. The special value *NOMAX indicates that no maximum is set. Transmitting large files and objects can consume excessive communications bandwidth and negatively impact communications performance, especially for slow communication lines.


Relational database (RDB) This parameter is accessible when you press F10 (Additional parameters) and is valid when the default remote journaling configuration is used. The parameter consists of four relational database values, which identify the communications path used by the i5/OS remote journal function to transport journal entries: a relational database directory entry name, two system database names, and a management indicator for directory entries. This parameter creates two RDB directory entries, one on each system identified in the transfer definition. Each entry identifies the other system’s relational database.

Note: If you use the value *ANY for both system 1 and system 2 on the transfer definition, *NONE is used for the directory entry name, and no directory entry is generated.

If MIMIX is managing your RDB directory entries, a directory entry is generated if you use the value *ANY for only one of the systems on the transfer definition. This directory entry is generated for the system that is specified as something other than *ANY. For more information about the use of the value *ANY on transfer definitions, see “Using contextual (*ANY) transfer definitions” on page 181.

The four elements of the relational database parameter are:

• Directory entry This element specifies the name of the relational database entry. The default value *GEN causes MIMIX to create an RDB entry and add it to the relational database. The generated name is in the format MX_nnnnnnnnnn_ssss, where nnnnnnnnnn is the 10-character installation name, and ssss is the transfer definition short name. If you specify a value for the RDB parameter, it is recommended that you limit its length to 18 characters. When you specify the special value *NONE, the directory entry is not added or changed by MIMIX.
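The MX_nnnnnnnnnn_ssss format can be illustrated with a small sketch. The installation and short names are hypothetical, and this is not MIMIX code:

```python
def gen_rdb_entry_name(installation: str, short_name: str) -> str:
    # MX_nnnnnnnnnn_ssss: nnnnnnnnnn is the installation name (up to 10
    # characters); ssss is the transfer definition short name (up to 4).
    return f"MX_{installation}_{short_name}"

print(gen_rdb_entry_name("MIMIX", "PKO"))  # prints MX_MIMIX_PKO
```

At the maximum field widths this yields an 18-character name, which matches the recommended length limit noted above.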

• System 1 relational database This element specifies the name of the relational database for System 1. The default value *SYSDB specifies that MIMIX will determine the relational database name. If you are managing the RDB directory entries and you need to determine the system database name, refer to “Finding the system database name for RDB directory entries” on page 188.

Note: For remote journaling that uses an independent ASP, specify the database name for the independent ASP.

• System 2 relational database This element specifies the name of the relational database for System 2. The default value *SYSDB specifies that MIMIX will determine the relational database name. If you are managing the RDB directory entries and you need to determine the system database name, refer to “Finding the system database name for RDB directory entries” on page 188.

Note: For remote journaling that uses an independent ASP, specify the database name for the independent ASP.

• Manage directory entries This element specifies that MIMIX will manage the relational database directory entries associated with the transfer definition, whether the directory entry name is specified or generated by MIMIX. Management of the relational database directory entries consists of adding, changing, and deleting the directory entries on both systems, as needed, when the transfer definition is created, changed, or deleted. The special value *DFT indicates that MIMIX manages the relational database directory entries only when the name is generated using the special value *GEN on the Directory entry element of this parameter. The special value *YES indicates that the directory entries on each system are managed by MIMIX. If the relational database directory entries do not exist, MIMIX adds them. If they do exist, MIMIX changes them to match the values specified by the Relational database (RDB) parameter. When any of the transfer definition relational database values change, the directory entry is also changed. When the transfer definition is deleted, the directory entries are also deleted.


Using contextual (*ANY) transfer definitions

When the three-part name of a transfer definition specifies the value *ANY for System 1 or System 2 instead of system names, MIMIX uses information from the context in which the transfer definition is called to resolve to the correct system. Such a transfer definition is called a contextual transfer definition.

For remote journaling environments, best practice is to use transfer definitions that identify specific system definitions in the three-part transfer definition name. Although you can use contextual transfer definitions with remote journaling, they are not recommended. For more information, see “Considerations for remote journaling” on page 182.

In MIMIX source-send configurations, a contextual transfer definition can be an aid in configuration. For example, a transfer definition named PRIMARY SYSA *ANY can provide the necessary parameters for establishing communications between SYSA and any other system.

The *ANY value represents several transfer definitions, one for each system definition. For example, a transfer definition PRIMARY SYSA *ANY in an installation that has three system definitions (SYSA, SYSB, INTRA) represents three transfer definitions:

• PRIMARY SYSA SYSA

• PRIMARY SYSA SYSB

• PRIMARY SYSA INTRA

Search and selection process Data group definitions and system definitions include parameters that identify associated transfer definitions. When an operation requires a transfer definition, MIMIX uses the context of the operation to determine the fully qualified name. For example, when starting a data group, MIMIX uses information in the data group definition (the systems specified in the data group name and the specified transfer definition name) to derive the fully qualified transfer definition name. If MIMIX is still unable to find an appropriate transfer definition, the following search order is used (shown here for a transfer definition named PRIMARY between systems SYSA and SYSB):

1. PRIMARY SYSA SYSB

2. PRIMARY *ANY SYSB

3. PRIMARY SYSA *ANY

4. PRIMARY SYSB SYSA

5. PRIMARY *ANY SYSA

6. PRIMARY SYSB *ANY

7. PRIMARY *ANY *ANY
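The search order above can be sketched as an ordered candidate list. This is illustrative only, not MIMIX code:

```python
def tfrdfn_search_order(name: str, sys1: str, sys2: str) -> list:
    # Candidate fully qualified transfer definition names, most specific
    # first, mirroring the documented search order.
    return [
        (name, sys1, sys2),
        (name, "*ANY", sys2),
        (name, sys1, "*ANY"),
        (name, sys2, sys1),
        (name, "*ANY", sys1),
        (name, sys2, "*ANY"),
        (name, "*ANY", "*ANY"),
    ]

for candidate in tfrdfn_search_order("PRIMARY", "SYSA", "SYSB"):
    print(" ".join(candidate))
```

The first candidate found to exist is the transfer definition that is used.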

When you specify *ANY in the three-part name of a transfer definition, and you have specified *TFRDFN for the Protocol parameter on such commands as RUNCMD or VFYCMNLNK, MIMIX searches your system and selects those systems with a transfer definition that matches the transfer definition that you specified, for example, (PRIMARY SYSA SYSB).

Considerations for remote journaling

Best practice for a remote journaling environment is to use a transfer definition that identifies specific system definitions in the three-part transfer definition name. By specifying both systems, the transfer definition can be used for replication from either direction.

If you do use a contextual transfer definition in a remote journaling environment, the value *ANY can be used for the system where the local journal (source) resides. This value can be either the second or third parts of the three-part name. For example, a transfer definition of PRIMARY name *ANY is valid in a remote journaling environment, where name identifies the system definition for the system where the remote journal (target) resides. A transfer definition of PRIMARY *ANY name is also valid. The command would look like this:

CRTTFRDFN TFRDFN(PRIMARY name *ANY) TEXT('description')

MIMIX Remote Journal support requires that each transfer definition that will be used has a relational database (RDB) directory entry to properly identify the remote system. An RDB directory entry cannot be added to a transfer definition using the value *ANY for the remote system.

To support a switchable data group when using contextual transfer definitions, each system in the remote journaling environment must be defined by a contextual transfer definition. For example, in an environment with systems NEWYORK and CHICAGO, you would need a transfer definition named PRIMARY NEWYORK *ANY as well as a transfer definition named PRIMARY CHICAGO *ANY.

Considerations for MIMIX source-send configurations

When creating a transfer definition for a MIMIX source-send configuration that uses contextual system capability (*ANY) and the TCP protocol, take the default values for the other parameters on the CRTTFRDFN command. For example, using the naming conventions for contextual systems, the command would look like this:

CRTTFRDFN TFRDFN(PRIMARY *ANY *ANY) TEXT('Recommended configuration')

Note: Ensure that you consult with your site TCP administrator before making these changes.

For an Intra environment, an additional transfer definition is needed. If there is an Intra system definition defined, the transfer definition must specify a unique port number to communicate with Intra. The following is an example of an additional transfer definition that uses port number 42345 to establish communications with the Intra system:

CRTTFRDFN TFRDFN(PRIMARY *ANY INTRA) PORT2(42345) TEXT('Recommended configuration')


Naming conventions for contextual transfer definitions

The following suggested naming conventions make contextual (*ANY) transfer definitions more useful in your environment.

*TCP protocol: The MIMIX system definition names should correspond to DNS or host table entries that tie the names to a specific TCP address.

*SNA protocol: The MIMIX system definition names must match SNA environment (controller names) for the respective systems. The MIMIX system definitions should match the net attribute system name (DSPNETA). For example, with two MIMIX systems called SYSA and SYSB, on the SYSA system there would have to be a controller called SYSB that is used for SYSA to SYSB communications. Conversely, on SYSB, a SYSA controller would be necessary.

*OPTI protocol: The MIMIX system definition names must match the OptiConnect names for the systems (DSPOPCLNK).

Additional usage considerations for contextual transfer definitions

The Run Command (RUNCMD) and Verify Communications Link (VFYCMNLNK) commands require specific system names to verify communications between systems. These commands do not handle transfer definitions that specify *ANY in the three-part name.

When the VFYCMNLNK command is called from option 11 on the Work with System Definitions display or option 11 on the Work with Data Groups display, MIMIX determines the specific system names. However, when the command is called from option 11 on the Work with Transfer Definitions display, entered from a command line, or included in automation programs, you will receive an error message if the transfer definition has the value *ANY for either system 1 or system 2.


Creating a transfer definition

System-level communication must be configured and operational before you can use a transfer definition.

To create a transfer definition, do the following:

1. Access the Work with Transfer Definitions display by doing one of the following:

• From the MIMIX Configuration Menu, select option 2 (Work with transfer definitions) and press Enter.

• From the MIMIX Cluster Menu, select option 21 (Work with transfer definitions) and press Enter.

2. The Work with Transfer Definitions display appears. Type 1 (Create) next to the blank line at the top of the list area and press Enter.

3. The Create Transfer Definition display appears. Do the following:

a. At the Transfer definition prompts, specify a name and the two system definitions between which communications will occur.

b. At the Short transfer definition name prompt, accept the default value *GEN to generate a short transfer definition name. This short transfer definition name is used in generating relational database directory entry names if you specify to have MIMIX manage your RDB directory entries.

c. At the Transfer protocol prompt, specify the communications protocol you want, then press Enter. If you are creating a transfer definition for a cluster environment, you must accept the default of *TCP for the Transfer protocol prompt.

4. Additional parameters for the protocol you selected appear on the display. Verify that the values shown are what you want. Make any necessary changes.

5. At the Description prompt, type a text description of the transfer definition, enclosed in apostrophes.

6. Optional step: If you need to set a maximum size for files and objects to be transferred, press F10 (Additional parameters). At the Threshold size (MB) prompt, specify a valid value.

7. Optional step: If you need to change the relational database information, press F10 (Additional parameters). See “Tips for transfer definition parameters” on page 176 for details about the Relational database (RDB) parameter. If MIMIX is not managing the RDB directory entries, it may be necessary to change the RDB values.

8. To create the transfer definition, press Enter.


Changing a transfer definition

To change a transfer definition, do the following:

1. Access the Work with Transfer Definitions display by doing the following:

• From the MIMIX Configuration Menu, select option 2 (Work with transfer definitions) and press Enter.

2. The Work with Transfer Definitions display appears. Type 2 (Change) next to the definition you want and press Enter.

3. The Change Transfer Definition (CHGTFRDFN) display appears. If you want to change which protocol is used between the specified systems, specify the value you want for the Transfer protocol prompt.

4. Press Enter to display the parameters for the specified transfer protocol. Locate the prompt for the parameter you need to change and specify the value you want.

Press F1 (Help) for more information about the values for each parameter.

5. If you need to set a maximum size for files and objects to be transferred, press F10 (Additional parameters). At the Threshold size (MB) prompt, specify a valid value.

6. If you need to change your relational database information, press F10 (Additional parameters). At the Relational database (RDB) prompt, specify the desired values for each of the four elements and press Enter. For special considerations when changing your transfer definitions that are configured to use RDB directory entries see “Tips for transfer definition parameters” on page 176.

7. To save changes to the transfer definition, press Enter.

Changing a transfer definition to support remote journaling

If the value *ANY is specified for either system in the transfer definition, before you complete this procedure refer to “Using contextual (*ANY) transfer definitions” on page 181. Contextual transfer definitions are not recommended in a remote journaling environment.

To support remote journaling, modify the transfer definition you plan to use as follows:

1. From the MIMIX Configuration menu, select option 2 (Work with transfer definitions) and press Enter.

2. The Work with Transfer Definitions display appears. Type a 2 (Change) next to the definition you want and press Enter.

3. The Change Transfer Definition (CHGTFRDFN) display appears. Press F10 (Additional parameters), then press Page Down.

4. At the Relational database (RDB) prompt, specify the desired values for each of the four elements and press Enter.

Note: See “Tips for transfer definition parameters” on page 176 for detailed information about the Relational database (RDB) parameter. Also see “Finding the system database name for RDB directory entries” on page 188 for special considerations when changing transfer definitions that are configured to use RDB directory entries.


Finding the system database name for RDB directory entries

To find the system database name, do the following:

1. Sign on to the system that was specified for System 1 in the transfer definition.

2. From the command line type DSPRDBDIRE and press Enter. Look for the relational database name that has a corresponding remote location name of *LOCAL.

3. Repeat steps 1 and 2 to find the system database name for System 2.

Using i5/OS commands to work with RDB directory entries

If MIMIX is not managing your RDB directory entries, you can use the i5/OS Add RDB Directory Entry (ADDRDBDIRE) command to add RDB directory entries and the i5/OS Change RDB Directory Entry (CHGRDBDIRE) command to change an existing RDB directory entry. MIMIX does not manage the entries if you did not accept the default value *GEN for the Directory entry element and *DFT for the Manage directory entries element when you created your transfer definition, or if you specified *NO for the Manage directory entries element of the Relational database (RDB) parameter.
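For example, assuming a hypothetical relational database name CHICAGO with a remote location of the same name, an entry could be added or later changed as follows (a sketch only; substitute the names from your own transfer definition):

ADDRDBDIRE RDB(CHICAGO) RMTLOCNAME(CHICAGO)

CHGRDBDIRE RDB(CHICAGO) RMTLOCNAME(NEWCHI)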


Starting the Lakeview TCP/IP server

Use this procedure if you need to start the Lakeview TCP/IP server. You can also start the TCP/IP server automatically.

Once the TCP communication connections have been defined in a transfer definition, the Lakeview TCP server must be started on each of the systems identified by the transfer definition.

Note: Use the host name and port number (or port alias) defined in the transfer definition for the system on which you are running this command.

From a 5250 emulator, do the following on the system on which you want to start the TCP server:

1. From the MIMIX Intermediate Main Menu, select option 13 (Utilities menu) and press Enter.

2. The Utilities Menu appears. Select option 51 (Start TCP server) and press Enter.

3. The Start Lakeview TCP Server display appears. At the Host name or address prompt, specify the host name for the local system as defined in the transfer definition.

4. At the Port number or alias prompt, verify that the value shown is correct. If necessary, change the value.

Note: If you specify an alias, you must have an entry in the service table on this system that equates the alias to the port number.

5. Press Enter.

6. Verify that the Lakeview server job is running under the MIMIX subsystem on that system. You can use the Work with Active Jobs (WRKACTJOB) command to look for a job under the MIMIXSBS subsystem with a function of PGM-LVSERVER.
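For example, to limit the active jobs display to the MIMIX subsystem:

WRKACTJOB SBS(MIMIXSBS)

In the resulting list, look for a job whose Function column shows PGM-LVSERVER.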


Using autostart job entries to start the TCP server

To use TCP/IP communications, the MIMIX TCP/IP server must be started each time the MIMIX subsystem is started. Doing this manually is time-consuming and easily forgotten. For these reasons, many users prefer to add an autostart job entry that starts the Lakeview TCP server automatically with the MIMIXSBS subsystem.

The autostart job entry uses a job description that contains the STRSVR command which will automatically start the Lakeview TCP server when the MIMIXSBS subsystem is started. The STRSVR command is defined in the RQSDTA (request data) parameter of the job description.

Adding an autostart job entry

To configure an autostart job entry to start the Lakeview TCP server automatically with the MIMIXSBS subsystem, do the following:

Note: Perform this procedure on both of the systems defined as system 1 and system 2 in the transfer definition.

1. Type the command CRTDUPOBJ and press Enter.

2. The Create Duplicate Object (CRTDUPOBJ) display appears. Specify these values at the following prompts:

a. At the From object prompt specify MIMIXCMN.

b. At the From library prompt specify MIMIXQGPL.

c. At the Object type prompt specify *JOBD.

d. At the To library prompt specify MIMIXQGPL.

e. At the New object prompt specify a name for the new object. Lakeview Technology recommends that you use the port number for the system with which the server is associated in the form PORTnnnnn where nnnnn is the port number. If you are using port aliases, specify the alias associated with the port number.

f. Press Enter. The new object is created.

3. Type the command CHGJOBD and press F4 (Prompt).

4. The Change Job Description (CHGJOBD) prompt display appears. Specify the port number in the form PORTnnnnn or the port alias in the Job description prompt and MIMIXQGPL for the Library.

5. Press F10 (Additional parameters).

6. Page Down to the second group of parameters and specify the following:

a. At the Request data or command prompt, specify the STRSVR command using the values you need in the following string:

'MIMIX/STRSVR HOST(local-cp-name) PORT(nnnnn) JOBD(MIMIXQGPL/yyyy)'

where yyyy is either the port number in the form PORTnnnnn or the port alias.


b. Press Enter. The job description is changed.

7. Type the command ADDAJE and press Enter.

8. The Add Autostart Job Entry (ADDAJE) display appears. Specify the following values to configure the job description to start each time the MIMIXSBS subsystem is started:

a. At the Subsystem description prompt specify MIMIXSBS.

b. At the Library prompt, specify MIMIXQGPL.

c. At the Job name prompt specify a name to describe the job being processed. Lakeview Technology suggests that you use the value you specified in Step 4.

d. At the Job description prompt specify the name of the job description you just changed in Step 4.

e. At the Library prompt specify MIMIXQGPL.

f. Press Enter. The job description is added to the automatic start procedures within the MIMIXSBS subsystem. Each time the MIMIXSBS subsystem is started, this TCP server is also started.
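Assuming a hypothetical port number of 50410, no port alias, and a host name of SYSA, the commands behind the steps above could look like this (a sketch only; substitute your own host name, port, and object name):

CRTDUPOBJ OBJ(MIMIXCMN) FROMLIB(MIMIXQGPL) OBJTYPE(*JOBD) TOLIB(MIMIXQGPL) NEWOBJ(PORT50410)

CHGJOBD JOBD(MIMIXQGPL/PORT50410) RQSDTA('MIMIX/STRSVR HOST(SYSA) PORT(50410) JOBD(MIMIXQGPL/PORT50410)')

ADDAJE SBSD(MIMIXQGPL/MIMIXSBS) JOB(PORT50410) JOBD(MIMIXQGPL/PORT50410)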

Identifying the autostart job entry in the MIMIXSBS subsystem

Autostart job entries need to be reviewed occasionally for possible changes, such as after a configuration change. The autostart job entry may need to be updated to use a new system name or port number. The first step is to identify the autostart job entry in the MIMIXSBS subsystem. This procedure enables you to display the autostart job entry’s information and determine whether its STRSVR command needs to be updated. The command contains the system name and the port number or port alias for the system, either of which may need to be changed. To display the autostart job entry information, do the following:

1. Type the command DSPSBSD MIMIXQGPL/MIMIXSBS and press Enter. The Display Subsystem Description display appears.

2. Type 3 (Autostart job entries) and press Enter. The Display Autostart Job Entries display appears.

3. Identify the Job Description and Library of the autostart job entry. Typically the job description is named PORTnnnnn where nnnnn is the port number. Press Enter.

4. Using the information identified in step 3, type the command DSPJOBD Library/Job Description and press Enter. The Display Job Description display appears.

5. Page down to view the Request data information and determine whether the STRSVR command needs to be updated. If updates are needed, perform the steps in “Changing the job description for an autostart job entry” on page 191.

Changing the job description for an autostart job entry

If a system name or port number has changed and an autostart job entry is used for the STRSVR command, the autostart job entry must be updated to use the new system name or port number. The RQSDTA (request data) parameter in the job description determines which program or command is run when the MIMIXSBS subsystem is started. Use the following command to change the job description so that the autostart job entry calls the STRSVR command with the new system definition name or port number when the MIMIXSBS subsystem is started:

CHGJOBD JOBD(MIMIXLIB/STRMXSVR) RQSDTA('MIMIXLIB/STRSVR HOST(System name) PORT(nnnnn) JOBD(MIMIXQGPL/MIMIXCMN)')

• where System name is the system host name for the system where the autostart job entry is defined in the MIMIX transfer definition.

• where nnnnn is either the port number or the port alias of the system where the autostart job entry is defined in the MIMIX transfer definition.


Verifying a communications link for system definitions

Do the following to verify that the communications link defined for each system definition is operational:

1. From the MIMIX Basic Main Menu, type an 11 (Configuration menu) and press Enter.

2. From the MIMIX Configuration Menu, type a 1 (Work with system definitions) and press Enter.

3. From the Work with System Definitions display, type an 11 (Verify communications link) next to the system definition you want and press Enter. You should see a message indicating the link has been verified.

Note: If the system manager is not active, this process only verifies that communications to the remote system are successful. You will also see a message in the job log indicating that "communications link failed after 1 request."

If you are performing this procedure as directed by the manual configuration checklist before the system manager is active, this result is expected and indicates that the remote system could not return communications to the local system. Once you start the system managers as directed by the checklist, the configuration information needed for successful two-way communications is automatically sent to the remote system. The checklist will direct you to verify all communications paths again at the appropriate point in the configuration process.

4. Repeat this procedure for all system definitions. If the communications link defined for a system definition uses SNA protocol, do not check the link from the local system.

Note: If your transfer definition uses the *TCP communications protocol, then MIMIX uses the Verify Communication Link command to validate the information that has been specified for the Relational database (RDB) parameter. MIMIX also uses VFYCMNLNK to verify that the System 1 and System 2 relational database names exist and are available on each system.


Verifying the communications link for a data group

Before you synchronize data between systems, ensure that the communications link for the data group is active. This procedure verifies the primary transfer definition used by the data group. If your configuration requires multiple data groups, be sure to check communications for each data group definition.

Do the following:

1. From the Work with Data Group Definitions display, type an 11 (Verify communications link) next to the data group you want and press F4.

2. The Verify Communications Link display appears. Ensure that the values shown for the prompts are what you want.

3. To start the check, press Enter.

4. You should see a message "VFYCMNLNK command completed successfully."

If your data group definition specifies a secondary transfer definition, use the following procedure to check all communications links.

Verifying all communications links

The Verify Communications Link (VFYCMNLNK) command requires specific system names to verify communications between systems. When the command is called from option 11 on the Work with System Definitions display or option 11 on the Work with Data Groups display, MIMIX identifies the specific system names.

For transfer definitions using TCP protocol: MIMIX uses the Verify Communication Link (VFYCMNLNK) command (see note 1) to validate the values specified for the Relational database (RDB) parameter. MIMIX also uses VFYCMNLNK to verify that the System 1 and System 2 relational database names exist and are available on each system.

When the command is called from option 11 on the Work with Transfer Definitions display or when entered from a command line, you will receive an error message if the transfer definition specifies the value *ANY for either system 1 or system 2.

1. From the Work with Transfer Definitions display, type an 11 (Verify communications link) next to all transfer definitions and press Enter.

2. The Verify Communications Link display appears. If you are checking a transfer definition that specifies the value *ANY, you must specify a value for the System 1 or System 2 prompt. Ensure that the values shown for the prompts are what you want and then press Enter.

You will see the Verify Communications Link display for each transfer definition you selected.

3. You should see a message "VFYCMNLNK command completed successfully."

Note 1: On installations running service pack SPC05 or higher.


Chapter 9

Configuring journal definitions

By creating a journal definition you identify to MIMIX a journal environment that can be used in the replication process. MIMIX uses the journal definition to manage the journaling environment, including journal receiver management.

A journal definition does not automatically build the underlying journal environment that it defines. If the journal environment does not exist, it must be built. This can be done after the journal definition is created. Configuration checklists indicate when to build the journal environment.

The topics in this chapter include:

• “Journal definitions created by other processes” on page 200 describes the security audit journal (QAUDJRN) and other journal definitions that are automatically created by MIMIX.

• “Tips for journal definition parameters” on page 201 provides tips for using the more common options for journal definitions.

• “Journal definition considerations” on page 205 provides things to consider when creating journal definitions for remote journaling.

• “Journal receiver size for replicating large object data” on page 213 provides procedures to verify that a journal receiver is large enough to accommodate large IFS stream files and files containing LOB data, and if necessary, to change the receiver size options.

• “Creating a journal definition” on page 215 provides the steps to follow for creating a journal definition.

• “Changing a journal definition” on page 217 provides the steps to follow for changing a journal definition.

• “Building the journaling environment” on page 219 describes the journaling environment and provides the steps to follow for building it.

• “Changing the remote journal environment” on page 222 provides steps to follow when changing an existing remote journal configuration. The procedure is appropriate for changing a journal receiver library for the target journal in a remote journaling environment or for any other changes that affect the target journal.

• “Adding a remote journal link” on page 225 describes how to create a MIMIX RJ link, which will in turn create a target journal definition with appropriate values to support remote journaling. In most configurations, the RJ link is automatically created for you when you follow the steps of the configuration checklists.

• “Changing a remote journal link” on page 227 describes how to change an existing RJ link.

• “Temporarily changing from RJ to MIMIX processing” on page 228 describes how to change a data group configured for remote journaling to temporarily use MIMIX send processing.

• “Changing from remote journaling to MIMIX processing” on page 229 describes how to change a data group that uses remote journaling so that it uses MIMIX send processing. Remote journaling is preferred.

• “Removing a remote journaling environment” on page 231 describes how to remove a remote journaling environment that you no longer need.


Journal definitions created by other processes

When you create system definitions, MIMIX automatically creates a journal definition for the security audit journal (QAUDJRN) on that system. The QAUDJRN journal definition is used only by MIMIX system journal replication processes. If you do not already have a journaling environment for the security audit journal, it will be created when the first data group that replicates from the system journal is started.

When you create a data group definition, MIMIX automatically creates a journal definition if one does not already exist. Any journal definitions that are created in this manner will be named with the value specified in the data group definition.

In an environment that uses MIMIX Remote Journal support, the process of creating a data group definition creates a remote journal link which in turn creates the journal definition for the target journal. The target journal definition is created using values appropriate for remote journaling.

Any journal definitions created by another process can be changed if necessary.


Tips for journal definition parameters

This topic provides tips for using the more common options for journal definitions. Context-sensitive help is available online for all options on the journal definition commands.

Journal definition (JRNDFN) This parameter is a two-part name that identifies a journaling environment on a system. The first part of the name identifies the journal definition. When a journal definition for the security audit journal (system journal) is automatically created as a result of creating a system definition, the first part of the name is QAUDJRN. The second part of the name identifies a system definition which represents the system on which you want the journal to reside.

Note: In the first part of the name, the first character must be either A - Z, $, #, or @. The remaining characters can be alphanumeric and can contain a $, #, @, a period (.), or an underscore (_). Journal definition names cannot be UPSMON or begin with the characters MM. If the journal definition is configured by MIMIX for use with MIMIX RJ support, the name is the first eight characters of the name of the source journal definition followed by the characters @R. If a journal definition name is already in use, the name may instead include @S, @T, @U, @V, or @W. There are additional specific naming conventions for journal definitions that are used with remote journaling.

MIMIX uses the first six characters of the journal definition name to generate the journal receiver prefix. MIMIX restricts the last character of the prefix from being numeric. If the last character of a prefix resulting from the journal definition name is numeric, it can become part of the receiver number and no longer match the journal name.
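For example, a journal definition named SALES1 would yield the receiver prefix SALES: the first six characters are SALES1, and the trailing numeric 1 is removed so that the prefix cannot be mistaken for part of the receiver sequence number.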

Journal (JRN) This parameter specifies the qualified name of a journal to which changes to files or objects to be replicated are journaled. For the journal name, the default value *JRNDFN uses the name of the journal definition for the name of the journal.

For the journal library, the default value *DFT allows MIMIX to determine the library name based on the ASP in which the journal library is allocated, as specified in the Journal library ASP parameter. If that parameter specifies *ASPDEV, MIMIX uses #MXJRNIASP for the default journal library name; otherwise, the default library name is #MXJRN.

Journal library ASP (JRNLIBASP) This parameter specifies the auxiliary storage pool (ASP) from which the system allocates storage for the journal library. You can use the default value *CRTDFT or you can specify the number of an ASP in the range 1 through 32.

The value *CRTDFT indicates that the command default value for the i5/OS Create Library (CRTLIB) command is used to determine the auxiliary storage pool (ASP) from which the system allocates storage for the library.

For libraries that are created in a user ASP, all objects in the library must be in the same ASP as the library.


Journal receiver prefix (JRNRCVPFX) This parameter specifies the prefix to be used in the name of journal receivers associated with the journal used in the replication process and the library in which the journal receivers are located.

The prefix must be unique to the journal definition and cannot end in a numeric character. The default value *GEN for the name prefix indicates that MIMIX will generate a unique prefix, which usually is the first six characters of the journal definition name with any trailing numeric characters removed. If that prefix is already used in another journal definition, a unique six character prefix name is derived from the definition name. If the journal definition will be used in a configuration which broadcasts data to multiple systems, there are additional considerations. See “Journal definition considerations” on page 205.

The value *DFT for the journal receiver library allows MIMIX to determine the library name based on the ASP in which the journal receiver is allocated, as specified in the Journal receiver library ASP parameter. If that parameter specifies *ASPDEV, MIMIX uses #MXJRNIASP for the default journal receiver library name. Otherwise, the default library name is #MXJRN. You can specify a different name or specify the value *JRNLIB to use the same library that is used for the associated journal.

Journal receiver library ASP (RCVLIBASP) This parameter specifies the auxiliary storage pool (ASP) from which the system allocates storage for the journal receiver library. You can use the default value *CRTDFT or you can specify the number of an ASP in the range 1 through 32.

The value *CRTDFT indicates that the command default value for the i5/OS Create Library (CRTLIB) command is used to determine the auxiliary storage pool (ASP) from which the system allocates storage for the library.

For libraries that are created in a user ASP, all objects in the library must be in the same ASP as the library.

Target journal state (TGTSTATE) This parameter specifies the requested status of the target journal, and can be used with active journaling support or journal standby state. Use the default value *ACTIVE to set the target journal state to active when the data group associated with the journal definition is journaling on the target system (JRNTGT(*YES)). Use the value *STANDBY to journal objects on the target system while preventing most journal entries from being deposited into the target journal. For more information about journal standby state, see “Configuring for high availability journal performance enhancements” on page 341.

Journal caching (JRNCACHE) This parameter specifies whether the system should cache journal entries in main storage before writing them to disk. Use the recommended default value *BOTH to perform journal caching on both the source and the target systems. You can also specify values *SRC, *TGT, or *NONE.

Receiver change management (CHGMGT, THRESHOLD, TIME, RESETTHLD) Four parameters control how journal receivers associated with the replication process are changed.

The Receiver change management (CHGMGT) parameter controls whether MIMIX performs change management operations for the journal receivers used in the replication process. The recommended value is the shipped default of *TIMESIZE, where MIMIX changes journal receivers by both threshold size and time of day.


The following parameters specify conditions that must be met before change management can occur.

• Receiver threshold size (MB) (THRESHOLD) You can specify the size, in megabytes, of the journal receiver at which it is changed. The default value is 6600 MB. This value is used when MIMIX or the system changes the receivers.

If you decide to decrease the Receiver threshold size, you will need to manually change your journal receiver to reflect this change.

If you change the journal receiver threshold size in the journal definition, the change is effective with the next receiver change.

• Time of day to change receiver (TIME) You can specify the time of day at which MIMIX changes the journal receiver. The time is based on a 24 hour clock and must be specified in HHMMSS format.

• Reset sequence threshold (RESETTHLD) You can specify the sequence number (in millions) at which to reset the receiver sequence number. When the threshold is reached, the next receiver change resets the sequence number to 1.

For information about how change management occurs in a remote journal environment and about using other change management choices, see “Journal receiver management” on page 37.

Receiver delete management (DLTMGT, KEEPUNSAV, KEEPRCVCNT, KEEPJRNRCV) Four parameters control how MIMIX handles deleting the journal receivers associated with the replication process.

The Receiver delete management (DLTMGT) parameter specifies whether or not MIMIX performs delete management for the journal receivers. By default, MIMIX performs the delete management operations. MIMIX operations can be adversely affected if you allow the system or another process to handle delete management. For example, if another process deletes a journal receiver before MIMIX is finished with it, replication can be adversely affected.

All of the requirements that you specify in the following parameters must be met before MIMIX deletes a journal receiver:

• Keep unsaved journal receivers (KEEPUNSAV) You can specify whether or not to have MIMIX retain any unsaved journal receivers. Retaining unsaved receivers allows you to back out (rollback) changes in the event that you need to recover from a disaster. The default value *YES causes MIMIX to keep unsaved journal receivers until they are saved.

• Keep journal receiver count (KEEPRCVCNT) You can specify the number of detached journal receivers to retain. For example, if you specify 2 and there are 10 journal receivers including the attached receiver (which is number 10), MIMIX retains two detached receivers (8 and 9) and deletes receivers 1 through 7.

• Keep journal receivers (days) (KEEPJRNRCV) You can specify the number of days to retain detached journal receivers. For example, if you specify to keep the journal receiver for 7 days and the journal receiver is eligible for deletion, it will be deleted after 7 days have passed from the time of its creation. The exact time of the deletion may vary. For example, the deletion may occur within a few hours after the 7 days have passed.
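The combined retention rules above can be sketched as follows. This is an illustrative model, not MIMIX code; the function and field names are hypothetical, and only the parameter semantics (KEEPUNSAV, KEEPRCVCNT, KEEPJRNRCV) come from the manual.

```python
def eligible_for_delete(receiver, keep_unsav, keep_rcv_cnt, keep_days,
                        detached_receivers):
    """receiver: dict with 'number', 'saved' (bool), 'age_days'.
    detached_receivers: all detached receivers, oldest first.
    All retention requirements must be met before deletion."""
    # KEEPUNSAV(*YES): never delete a receiver that has not been saved
    if keep_unsav and not receiver["saved"]:
        return False
    # KEEPRCVCNT: always retain the newest N detached receivers
    newest = ({r["number"] for r in detached_receivers[-keep_rcv_cnt:]}
              if keep_rcv_cnt else set())
    if receiver["number"] in newest:
        return False
    # KEEPJRNRCV: retain receivers younger than the configured number of days
    if receiver["age_days"] < keep_days:
        return False
    return True
```

Using the manual's KEEPRCVCNT example: with nine detached receivers (1 through 9) and a count of 2, receivers 8 and 9 are retained and 1 through 7 become candidates, subject to the other rules.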


Tips for journal definition parameters

For information, see “Journal receiver management” on page 37.

Journal receiver ASP (JRNRCVASP) This parameter specifies the auxiliary storage pool (ASP) from which the system allocates storage for the journal receivers. The default value *LIBASP indicates that the storage space for the journal receivers is allocated from the same ASP that is used for the journal receiver library.

Threshold message queue (MSGQ) This parameter specifies the qualified name of the threshold message queue to which the system sends journal-related messages such as threshold messages. The default value *JRNDFN for the queue name indicates that the message queue uses the same name as the journal definition. The value *JRNLIB for the library name indicates that the message queue uses the library for the associated journal.

Exit program (EXITPGM) This parameter allows you to specify the qualified name of an exit program to use when journal receiver management is performed by MIMIX. The exit program will be called when a journal receiver is changed or deleted by the MIMIX journal manager. For example, you might want to use an exit program to save journal receivers as soon as MIMIX finishes with them so that they can be removed from the system immediately.

Minimize entry specific data (MINENTDTA) This parameter specifies which object types allow journal entries to have minimized entry-specific data. For additional information about improving journaling performance with this capability, see “Minimized journal entry data” on page 339.

Updated for 5.0.02.00.


Journal definition considerations

Consider the following as you create journal definitions for remote journaling:

• The source journal definition identifies the local journal and the system on which the local journal exists. Similarly, the target journal definition identifies the remote journal and the system on which the remote journal exists. Therefore, the source journal definition identifies the source system of the remote journal process and the target journal definition identifies the target system of the remote journal process.

• You can use an existing journal definition as the source journal definition to identify the local journal. However, using an existing journal definition as the target journal definition is not recommended. The existing definition is likely to be used for journaling and is therefore not appropriate as the target journal definition for a remote journal link.

• MIMIX recognizes the receiver change management parameters (CHGMGT, THRESHOLD, TIME, RESETTHLD) specified in the source journal definition and ignores those specified in the target journal definition. When a new receiver is attached to the local journal, a new receiver with the same name is automatically attached to the remote journal. The receiver prefix specified in the target journal definition is ignored.

• Each remote journal link defines a local-remote journal pair that functions in only one direction. Journal entries flow from the local journal to the remote journal. The direction of a defined pair of journals cannot be switched. If you want to use the RJ process in both directions for a switchable data group, you need to create journal definitions for two remote journal links (four journal definitions). For more information, see “Example journal definitions for a switchable data group” on page 207.

• MIMIX will try to create *TYPE2 journals when possible and *TYPE1 journals when a *TYPE2 journal cannot be created. MIMIX creates the environment that is appropriate for the type of journal created. Refer to the IBM book, Backup and Recovery, for information about save and restore considerations for *TYPE2 and *TYPE1 journals in a remote journaling environment.

• After the journal environment is built for a target journal definition, MIMIX cannot change the value of the target journal definition’s Journal receiver prefix (JRNRCVPFX) or Threshold message queue (MSGQ), and several other values. To change these values see the procedure in the IBM topic “Library Redirection with Remote Journals” in the IBM eServer iSeries Information Center.

• If you are configuring MIMIX for a scenario in which you have one or more target systems, there are additional considerations for the names of journal receivers. Each source journal definition must specify a unique value for the Journal receiver prefix (JRNRCVPFX) parameter. MIMIX ensures that the same prefix is not used more than once on the same system but cannot determine if the prefix is used on a target journal while it is being configured. If the prefix defined by the source journal definition is reused by target journals that reside in the same library and ASP, attempts to start the remote journals will fail with message CPF699A (Unexpected journal receiver found).

When you create a target journal definition yourself, instead of having it generated by the Add Remote Journal Link (ADDRJLNK) command, use the default value *GEN for the Journal receiver prefix (JRNRCVPFX) on the target journal definition. The receiver names for the source and target journals will be the same on the systems but will not be the same in the journal definitions. In the target journal, the prefix will be the same as that specified in the source journal definition.
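The switchable-data-group point above (two remote journal links, four journal definitions) can be sketched as follows. This is an illustrative enumeration, not MIMIX code; the function name is hypothetical, and the @R naming follows the generated convention described later in this section.

```python
def switchable_definitions(data_group, sys1, sys2):
    """Enumerate the four journal definitions needed for a switchable data
    group: one source and one generated @R target definition per direction."""
    definitions = []
    for src, tgt in [(sys1, sys2), (sys2, sys1)]:  # one RJ link per direction
        definitions.append((data_group, src))              # source (local) journal
        definitions.append((data_group[:8] + "@R", tgt))   # generated target
    return definitions
```

For the PAYABLES example between CHICAGO and NEWYORK, this yields the same four definitions shown in Figure 13.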

Naming convention for remote journaling environments with 2 systems

If you allow MIMIX to generate the target journal definition when you create a remote journal link, MIMIX implements the following naming conventions for the target journal definition and for the objects in its associated journaling environment. If you specify your own target journal definition, follow these same naming conventions to reduce the potential for confusion and errors.

The two-part name of the target journal definition is generated as follows:

• The Name is the first eight characters from the name of the source journal definition followed by the characters @R when the journal definition is created for MIMIX RJ support. If a journal definition name is already in use, the name may instead include @S, @T, @U, @V, or @W.

Note: Journal definition names cannot be UPSMON or begin with the characters MM.

• The System is the value entered in the target journal definition system field.

For example, if the source journal definition name is MYJRN and you specified TGTJRNDFN(*GEN CHICAGO), the target journal definition will be named MYJRN@R CHICAGO.

The target journal definition will have the following characteristics and associated new objects:

• The Journal name will have the same name as the source journal.

• The Journal library will use the first eight characters of the name of the source journal library followed by the characters @R.

• The Journal library ASP will be copied from the source journal definition.

• The Journal receiver prefix will be copied from the source journal definition.

• The Journal receiver library will use the first eight characters of the name of the source journal receiver library followed by the characters @R.

• The Message queue library will use the first eight characters of the name of the source message queue library followed by the characters @R.

• The value for the Receiver change management (CHGMGT) parameter will be *NONE.
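The generated naming convention above can be sketched as follows. This is an illustrative model, not MIMIX code; the function and dictionary key names are hypothetical, while the truncation to eight characters, the @R suffix (with fallbacks @S through @W), and the *NONE change-management value come from the manual.

```python
SUFFIXES = ["@R", "@S", "@T", "@U", "@V", "@W"]  # @R first, then fallbacks

def generated_target(src_dfn, system, src_jrn_lib, src_rcv_lib, src_msgq_lib,
                     in_use=()):
    """Derive the generated target journal definition and its @R library
    names from the source journal definition."""
    # first eight characters of the source definition name plus @R, falling
    # back to @S..@W if that definition name is already in use
    name = next(src_dfn[:8] + s for s in SUFFIXES
                if src_dfn[:8] + s not in in_use)
    return {
        "definition": (name, system),
        "journal_library": src_jrn_lib[:8] + "@R",
        "receiver_library": src_rcv_lib[:8] + "@R",
        "msgq_library": src_msgq_lib[:8] + "@R",
        "change_management": "*NONE",  # CHGMGT is *NONE on the target
    }
```

Using the manual's example, a source definition MYJRN with TGTJRNDFN(*GEN CHICAGO) yields the target definition MYJRN@R CHICAGO.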


Example journal definitions for a switchable data group

To support a switchable data group in a remote journaling environment, you need four journal definitions: two for the RJ link used for normal production-to-backup operations, and two for the RJ link used for replication in the opposite direction.

In this example, a switchable data group named PAYABLES is created between systems CHICAGO and NEWYORK. System 1 (CHICAGO) is the data source. The data group definition specifies *YES to Use remote journal link. Command defaults create the data group using a generated short data group name and using the data group name for the system 1 and system 2 journal definitions.

To create the RJ link and associated journal definitions for normal operations, option 10 (Add RJ link) on the Work with Journal Definitions display is used on an existing journal definition named PAYABLES CHICAGO (the first entry listed in Figure 13). This is the source journal definition for normal operations. The process of adding the link creates the target journal definition PAYABLES@R NEWYORK (the last entry listed in Figure 13).

To create the RJ link and associated definitions for replication in the opposite direction, a new source journal definition, PAYABLES NEWYORK, is created (the second entry listed in Figure 13). Then that definition is used to create a second RJ link, which in turn generates the target journal definition PAYABLES@R CHICAGO (the third entry listed in Figure 13).

Figure 13. Example journal definitions for a switchable data group.

Work with Journal Definitions                                      CHICAGO
Type options, press Enter.
  1=Create   2=Change   3=Copy   4=Delete   5=Display   6=Print   7=Rename
  10=Add RJ link   12=Work with RJ links   14=Build
  17=Work with jrn attributes   24=Delete jrn environment

      ---- Definition ----    ------ Journal -------   - Management -   RJ
Opt   Name        System      Name       Library       Change   Delete  Link
      PAYABLES    CHICAGO     PAYABLES   MIMIXJRN      *SYSTEM  *YES    *SRC
      PAYABLES    NEWYORK     PAYABLES   MIMIXJRN      *SYSTEM  *YES    *SRC
      PAYABLES@R  CHICAGO     PAYABLES   MIMIXJRN@R    *NONE    *YES    *TGT
      PAYABLES@R  NEWYORK     PAYABLES   MIMIXJRN@R    *NONE    *YES    *TGT
                                                                      Bottom
F3=Exit   F4=Prompt   F5=Refresh   F6=Create   F12=Cancel   F18=Subset
F21=Print list   F22=Work with RJ links


Identifying the correct journal definition on the Work with Journal Definitions display can be confusing. Fortunately, the Work with RJ Links display (Figure 14) shows the association between journal definitions much more clearly.

Figure 14. Example of RJ links for a switchable data group.

Naming convention for multimanagement environments

The i5/OS remote journal function requires unique names for the local journal receiver and the remote receiver. In a MIMIX environment that uses multimanagement functions1, more than one system serves as the management system for MIMIX operations. In a multimanagement environment, it is possible that each node that is a management system is also both a source and target for replication activity. The following manually implemented naming convention ensures that journal receivers have unique names.

Library name-mapping - In target journal definitions, specify journal library and receiver library names that include a two-character identifier, nn, to represent the node of the associated source (local journal). Place this identifier before the remote journal indicator @R at the end of the name, like this: nn@R. Also include this identifier at the end of the target journal definition name. This convention allows for the use of the same local journal name for all data groups and places all journals and receivers from the same source in the same library.

To ensure that journal receivers in a multimanagement environment have unique names, the following is strongly recommended:

• Limit the data group name to six characters. This will simplify keeping an association between the data group name and the names of associated journal definitions by allowing space for the source node identifier within those names.

Work with RJ Links
                                                       System:   CHICAGO
Type options, press Enter.
  1=Add   2=Change   4=Remove   5=Display   6=Print   9=Start   10=End
  14=Build   15=Remove RJ connection   17=Work with jrn attributes
  24=Delete target jrn environment

      ---Source Jrn Def---   ---Target Jrn Def---
Opt   Name       System      Name        System     Priority  Dlvry   State
      PAYABLES   CHICAGO     PAYABLES@R  NEWYORK    *SYSDFT   *ASYNC  *INACTIVE
      PAYABLES   NEWYORK     PAYABLES@R  CHICAGO    *SYSDFT   *ASYNC  *INACTIVE
                                                                      Bottom
Parameters or command
===>
F3=Exit    F4=Prompt   F5=Refresh   F6=Add   F9=Retrieve   F11=View 2
F12=Cancel   F13=Repeat   F16=Jrn Definitions   F18=Subset   F21=Print list

1. A MIMIX cluster1 access code is required for multimanagement functions.


• Manually create journal definitions (CRTJRNDFN command) using the library name-mapping convention. Journal definitions created when a data group is created may not have unique names and will not create all the necessary target journal definitions.

• Once the appropriately named journal definitions are created for source and target systems, manually create the remote journal links between them (ADDRJLNK command).
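The library name-mapping convention above can be sketched as follows. This is an illustrative model, not MIMIX code; the function name and the six-character truncation of the base names (chosen here so the nn@R suffix keeps names within the ten-character IBM i object-name limit) are assumptions, while the nn@R placement itself comes from the manual.

```python
def multimanagement_names(data_group, src_node_id, target_system, base_lib):
    """data_group: data group name (six characters or fewer recommended).
    src_node_id: two-character identifier for the source node, e.g. '01'.
    Places the source node identifier before the @R indicator."""
    suffix = src_node_id + "@R"                       # e.g. '01@R'
    return {
        "target_definition": (data_group[:6] + suffix, target_system),
        "journal_library": base_lib[:6] + suffix,     # stays within 10 chars
        "receiver_library": base_lib[:6] + suffix,
    }
```

For data group ABC with source node SYS01 (identifier 01) replicating to SYS02, this yields the target definition ABC01@R SYS02, matching the example that follows.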

Example journal definitions for three management nodes

The following figures illustrate the library name-mapping convention for journal definitions in a multimanagement environment with three nodes. In this example, all three nodes are designated as management systems. The data group name is ABC.

When implementing the naming convention, it is helpful to consider one source node at a time and create all the journal definitions necessary for replication from that source. This technique is illustrated in the example.

Library-mapping example: In Figure 15, a three node environment is shown in three separate graphics. Each graphic identifies one node as a replication source, with arrows pointing to the possible target nodes and lists the journal definitions needed to replicate from that source.

In each graphic, library name-mapping is evident in the names shown for the target journal definitions and their journal and receiver libraries. For example, when SYS01 is the source, journal definition ABC SYS01 identifies the local journal on SYS01. The source identifier 01 appears in the target journal definitions ABC01@R SYS02 and ABC01@R SYS03 and in the library names defined within each.

Figure 15 also includes a list of all the journal definitions associated with all nodes from this example as they would appear on the Work with Journal definitions display.


Figure 15. Library-mapped journal definitions - three node environment. All nodes are management systems


Figure 16 shows the RJ links needed for this example.

Figure 16. Library-mapped names shown in RJ links for three node environment


Journal receiver size for replicating large object data

For potentially large IFS stream files and files containing LOB data, it is important that your journal receiver is large enough to accommodate the data. You may need to change your journal receiver size options accordingly.

For data groups that can be switched, the journal receivers on both the source and target systems must be large enough to accommodate the data.

Verifying journal receiver size options

To display the current journal receiver size options for journals used by MIMIX, do the following from the system where the journal definition is located:

1. Enter the command installation-library/WRKJRNDFN

2. Next to the journal definition for the system you are on, type a 17 (Work with journal attributes).

3. View the Receiver size options field to see how the journal is configured. The value should support large journal entries, such as *MAXOPT2.

Changing journal receiver size options

To change the journal receiver size, do the following:

1. From a command line, type CHGJRN (Change Journal) and press F4 to prompt.

2. At the Journal prompt, enter the journal and library names for the journal you wish to change.

3. At the Receiver size option prompt, specify a value that supports large journal entries, such as *MAXOPT2. Make sure the other systems in your environment are compatible in size.

Note: Do not specify *MAXOPT3.


Creating a journal definition

Do the following to create a journal definition:

1. Access the Work with Journal Definitions display. From the MIMIX Configuration Menu select option 3 (Work with journal definitions) and press Enter.

2. The Work with Journal Definitions display appears. Type 1 (Create) next to the blank line at the top of the list area and press Enter.

3. The Create Journal Definition display appears. At the Journal definition prompts, specify a two-part name.

Note: Journal definition names cannot be UPSMON or begin with the characters MM.

4. Verify that the following prompts contain the values that you want. If you have not journaled before, the default values are appropriate. If you need to identify an existing journaling environment to MIMIX, specify the information for that environment.

Journal Library

Journal library ASP

Journal receiver prefix

Journal receiver library

Journal receiver library ASP

5. At the Target journal state prompt, specify the requested status of the target journal. The default value is *ACTIVE. This value can be used with active journaling support or journal standby state.

6. At the Journal caching prompt, specify whether the system should cache journal entries in main storage before writing them to disk. The recommended default value is *BOTH.

7. Set the values you need to manage changing journal receivers, as follows:

a. At the Receiver change management prompt, specify the value you want. Lakeview recommends that you use the default values. For more information about valid combinations of values, press F1 (Help).

b. Press Enter.

c. One or more additional prompts related to receiver change management appear on the display. Verify that the values shown are what you want and, if necessary, change the values.

Receiver threshold size (MB)

Time of day to change receiver

Reset sequence threshold

d. Press Enter.

8. Set the values you need to manage deleting journal receivers, as follows:


a. Lakeview recommends that you accept the default value *YES for the Receiver delete management prompt to allow MIMIX to perform delete management.

b. Press Enter.

c. One or more additional prompts related to receiver delete management appear on the display. If necessary, change the values.

Keep unsaved journal receivers

Keep journal receiver count

Keep journal receivers (days)

9. At the Description prompt, type a brief text description of the journal definition.

10. This step is optional. If you want to access additional parameters that are considered advanced functions, press F10 (Additional parameters). Make any changes you need to the additional prompts that appear on the display.

11. To create the journal definition, press Enter.


Changing a journal definition

To change a journal definition, do the following:

1. Access the Work with Journal Definitions display according to your configuration needs:

• In a clustering environment, from the MIMIX Cluster Menu select option 20 (Work with system definitions) and press Enter. When the Work with System Definitions display appears, type 12 (Journal Definitions) next to the system name you want and press Enter.

• In a standard MIMIX environment, from the MIMIX Configuration Menu select option 3 (Work with journal definitions) and press Enter.

2. The Work with Journal Definitions display appears. Type 2 (Change) next to the definition you want and press Enter.

3. The Change Journal Definition (CHGJRNDFN) display appears. Press Enter twice to see all prompts for the display.

4. Make any changes you need to the prompts. Press F1 (Help) for more information about the values for each parameter.

5. If you need to access advanced functions, press F10 (Additional parameters). When the additional parameters appear on the display, make the changes you need.

6. To accept the changes, press Enter.

Note: Changes to the Receiver threshold size (MB) (THRESHOLD) are effective with the next receiver change. Before a change to any other parameter is effective, you must rebuild the journal environment. Rebuilding the journal environment ensures that it matches the journal definition and prevents problems starting the data group.


Building the journaling environment

Before replication for a data group can occur, the journal environment for all journal definitions used by that data group must be created on each system. A journaling environment includes the following objects: library, journal, journal receiver, and threshold message queue on the system specified in the journal definition.

The Build Journal Environment (BLDJRNENV) command is used to build the journal environment objects for a journal definition. When the BLDJRNENV command is run, any objects that do not exist are created based on what is specified in the journal definition. If the journal exists, the Source for values (JRNVAL) parameter of the BLDJRNENV command determines the source for the values of these objects. The journal receiver prefix and library, message queue and library, and threshold parameters are updated from the source specified in the JRNVAL parameter.

Specifying *JRNENV for the JRNVAL parameter changes the values of the objects in the journal definition to match the values in the existing journal environment objects. Specifying *JRNDFN for the JRNVAL parameter changes the values of the journal environment objects to match the values of the objects in the journal definition. In a remote journal environment, the values specified in the journal definition (*JRNDFN) are only applicable to the source journal.
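The direction of the JRNVAL update described above can be sketched as follows. This is an illustrative model, not MIMIX code; the function name and attribute dictionaries are hypothetical, while the *JRNENV/*JRNDFN semantics come from the manual.

```python
def build_journal_environment(jrn_dfn, jrn_env, jrnval="*JRNDFN"):
    """jrn_dfn / jrn_env: dicts of the managed attributes (receiver prefix and
    library, message queue and library, threshold). jrn_env is None when the
    journal environment objects do not exist yet.
    Returns the (definition, environment) attribute values after the build."""
    if jrn_env is None:
        # objects do not exist: create them from the journal definition
        return dict(jrn_dfn), dict(jrn_dfn)
    if jrnval == "*JRNENV":
        # the definition is updated to match the existing environment objects
        return dict(jrn_env), dict(jrn_env)
    if jrnval == "*JRNDFN":
        # the environment objects are changed to match the definition
        return dict(jrn_dfn), dict(jrn_dfn)
    raise ValueError("JRNVAL must be *JRNENV or *JRNDFN")
```

Either way, the definition and the environment agree after the build; JRNVAL only chooses which side supplies the values.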

If the data group definition specifies to journal on the target system, the journal environment must be built on each system that will be a target system for replication of that data group. If you do not build the source or target journal environments yourself, MIMIX automatically builds them the first time the data group is started.

Note: When building a journal environment, ensure that the journal receiver prefix in the specified library is not already in use. If it is, you must change it to an unused value.

For switchable data groups not specified to journal on the target system, it is recommended to build the source journaling environments for both directions of replication so the environments exist for data group replication after switching.

All previous steps in your configuration checklist must be complete before you use this procedure.

To build the journaling environment, do the following:

Note: If you are journaling on the target system, perform this procedure for both the source and target systems.

1. From the MIMIX Main Menu, select 11 (Configuration menu) and press Enter.

2. From the MIMIX Configuration Menu, select one of the following and press Enter:

a. Select 8 (Work with remote journal links) to build the journaling environments for remote journaling.

b. Select 3 (Work with journal definitions) to build all other journaling environments.

3. From the Work with display, type 14 (Build) next to the journal definition you want to build and press Enter.

Option 14 calls the Build Journal Environment (BLDJRNENV) command. For environments using remote journaling, the command is called twice (first for the source journal definition and then for the target journal definition). A status message is issued indicating that the journal environment was created for each system.

4. If you plan to journal access paths, you need to change the value of the receiver size options. To do this, do the following:

a. Type the command CHGJRN and press F4 (Prompt).

b. For the JRN parameter, specify the name of the journal from the journal definition.

c. Specify *GEN for the JRNRCV parameter.

d. Specify *NONE for the RCVSIZOPT parameter.

e. Press Enter.


Changing the remote journal environment

Use the following checklist to guide you through the process of changing an existing remote journal configuration. For example, this procedure is appropriate for changing a journal receiver library for the target journal in a remote journaling (RJ) environment or for any other change that affects the target journal. These steps can be used for synchronous or asynchronous remote journals.

Important! Changing the RJ environment must be done in the correct sequence. Failure to follow the proper sequence can introduce errors in replication and journal management.

Perform these tasks from the MIMIX management system unless these instructions indicate otherwise.

1. Use topic “Identifying data groups that use an RJ link” on page 310 to verify that no other data groups use the RJ link.

2. Use topic “Ending a data group in a controlled manner” in the Using MIMIX book to prepare for and perform a controlled end of the data group and end the RJ link. Specify the following on the ENDDG command:

• *ALL for the Process prompt

• *CNTRLD for the End process prompt

• *YES for the End remote journaling prompt.

3. Verify that the remote journal link is not in use on both systems. Use topic “Displaying status of a remote journal link” in the Using MIMIX book. The remote journal link should have a state value of *INACTIVE before you continue.

4. Remove the connection to the remote journal as follows:

a. Access the journal definitions for the data group whose environment you want to change. From the Work with Data Groups display, type a 45 (Journal definitions) next to the data group that you want and press Enter.

b. Type a 12 (Work with RJ links) next to either journal definition you want and press Enter. You can select either the source or target journal definition.

Note: The target journal definition will end with @R.

c. From the Work with RJ Links display, choose the link based on the name in the Target Jrn Def column. Type a 15 (Remove RJ connection) next to the link with the target journal definition you want and press Enter.

d. A confirmation display appears. To continue removing the connections for the selected links, press Enter.

5. From the Work with RJ Links display, do the following to delete the target system objects associated with the RJ link:

Note: The target journal definition will end with @R.

a. Type a 24 (Delete target jrn environment) next to the link that you want and press Enter.


b. A confirmation display appears. To continue deleting the journal, its associated message queue, and the journal receiver, press Enter.

6. Make the changes you need for the target journal.

For example, to change the target (remote) journal definition to a new receiver library, do the following:

a. Press F12 to return to the Work with Journal Definitions display.

b. Type option 2 (Change) next to the journal definition for the target system you want and press Enter.

7. From the Work with Journal Definitions display, type a 14 (Build) next to the target journal definition and press Enter.

Note: The target journal definition will end with @R.

8. Return to the Work with Data Groups display. Then do the following:

a. Type an 8 (Display status) next to the data group you want and press Enter.

b. Locate the name of the receiver in the Last Read field for the Database process.

9. Do the following to start the RJ link:

a. From the Work with Data Groups display, type a 44 (RJ links) next to the data group you want and press Enter.

b. Locate the link you want based on the name in the Target Jrn Def column. Type a 9 (Start) next to the link with the target journal definition and press F4 (Prompt).

c. The Start Remote Journal Link (STRRJLNK) display appears. Specify the receiver name from Step 8b as the value for the Starting journal receiver (STRRCV) and press Enter.

10. Start the data group using default values. Refer to topic “Starting selected data group processes” in the Using MIMIX book.


Adding a remote journal link

This procedure requires that a source journal definition exists. The process of creating an RJ link will create the target journal definition with appropriate values for remote journaling.

Before you create the RJ link you should be familiar with the “Journal definition considerations” on page 205.

To create a link between journal definitions, do the following:

1. From the MIMIX Configuration menu, select option 3 (Work with journal definitions) and press Enter.

2. The Work with Journal Definitions display appears. Type a 10 (Add RJ link) next to the journal definition you want and press Enter.

3. The Add Remote Journal Link (ADDRJLNK) display appears. The journal definition you selected in the previous step appears in the prompts for the Source journal definition. Verify that this is the definition you want as the source for RJ processing.

4. At the Target journal definition prompts, specify *GEN as the Name and specify the value you want for System.

Note: If you specify the name of a journal definition, the definition must exist and you are responsible for ensuring that its values comply with the recommended values. Refer to the related topic on considerations for creating journal definitions for remote journaling for more information.

5. Verify that the values for the following prompts are what you want. If necessary, change the values.

• Delivery

• Sending task priority

• Primary transfer definition

• Secondary transfer definition

• If you are using an independent ASP in this configuration you also need to identify the auxiliary storage pools (ASPs) from which the journal and journal receiver used by the remote journal are allocated. Verify and change the values for Journal library ASP, Journal library ASP device, Journal receiver library ASP, and Journal receiver lib ASP dev as needed.

6. At the Description prompt, type a text description of the link, enclosed in apostrophes.

7. To create the link between journal definitions, press Enter.


Changing a remote journal link

Changes to the delivery and sending task priority take effect only after the remote journal link has been ended and restarted.

To change characteristics of the link between source and target journal definitions, do the following:

1. Before you change a remote journal link, end activity for the link. The Using MIMIX book describes how to end only the RJ link.

Note: If you plan to change the primary transfer definition or secondary transfer definition to a definition that uses a different RDB directory entry, you also need to remove the existing connection between objects. Use topic “Removing a remote journaling environment” on page 231 before changing the remote journal link.

2. From the Work with RJ Links display, type a 2 (Change) next to the entry you want and press Enter.

3. The Change Remote Journal Link (CHGRJLNK) display appears. Specify the values you want for the following prompts:

• Delivery

• Sending task priority

• Primary transfer definition

• Secondary transfer definition

• Description

4. When you are ready to accept the changes, press Enter.

5. To make the changes effective, do the following:

a. If you removed the RJ connection in Step 1, you need to use topic “Building the journaling environment” on page 219.

b. Start the data group which uses the RJ link.
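When reached through option 2 on the Work with RJ Links display, the Change Remote Journal Link prompt from step 3 might be completed as follows; all values shown are illustrative:

```
CHGRJLNK                           /* reached via option 2 (Change)    */
  Delivery . . . . . . . . . . . .   *SYNC
  Sending task priority  . . . . .   *DFT
  Primary transfer definition  . .   PRIMARY
  Secondary transfer definition  .   *NONE
  Description  . . . . . . . . . .   'RJ link - synchronous delivery'
```

Remember that changes to the delivery and sending task priority take effect only after the RJ link has been ended and restarted.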


Temporarily changing from RJ to MIMIX processing

This procedure is appropriate when you plan to continue using remote journaling as your primary means of transporting data to the target system but temporarily need to revert to MIMIX send processing.

Important! If the data group is configured for MIMIX Dynamic Apply, you must complete the procedure in “Checklist: Converting to legacy cooperative processing” on page 157 before you remove remote journaling.

For the data group you want to change, do the following:

1. Use topic “Ending a data group in a controlled manner” in the Using MIMIX book to prepare for and perform a controlled end of the data group and end the RJ link. Specify the following on the ENDDG command:

• *ALL for the Process prompt

• *CNTRLD for the End process prompt

• *YES for the End remote journaling prompt.

2. Verify that the process is ended. On the Work with Data Groups display, the data group should change to show a red “L” in the Source DB column.

3. Modify the data group definition as follows:

a. From the Work with DG Definitions display, type a 2 (Change) next to the data group you want and press Enter.

b. The Change Data Group Definition (CHGDGDFN) display appears. Press Enter to see additional prompts.

c. Specify *NO for the Use remote journal link prompt.

d. To accept the change press Enter.

4. Use the procedure “Starting selected data group processes” in the Using MIMIX book, specifying *ALL for the Start Process prompt.
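Sketched as prompted commands, steps 1 and 3 of this procedure might look like the following. The data group identification prompts are omitted, and the fields are shown as display labels rather than parameter keywords:

```
ENDDG                              /* step 1: controlled end, RJ link too */
  Process  . . . . . . . . . . . .   *ALL
  End process  . . . . . . . . . .   *CNTRLD
  End remote journaling  . . . . .   *YES

CHGDGDFN                           /* step 3: stop using the RJ link      */
  Use remote journal link  . . . .   *NO
```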


Changing from remote journaling to MIMIX processing

Use this procedure when you no longer want to use remote journaling for a data group and want to permanently change the data group to use MIMIX send processing.

Important! If the data group is configured for MIMIX Dynamic Apply, you must complete the procedure in “Checklist: Converting to legacy cooperative processing” on page 157 before you remove remote journaling.

Perform these tasks from the MIMIX management system unless these instructions indicate otherwise.

1. Perform a controlled end for the data group that you want to change using topic “Ending a data group in a controlled manner” in the Using MIMIX book. On the ENDDG command, specify the following:

• *ALL for the Process prompt

• *CNTRLD for the End process prompt

Note: Do not end the RJ link at this time. Step 2 verifies that the RJ link is not in use by any other processes or data groups before ending and removing the RJ environment.

2. Perform the procedure in topic “Removing a remote journaling environment” on page 231.

3. Modify the data group definition as follows:

a. From the Work with DG Definitions display, type a 2 (Change) next to the data group you want and press Enter.

b. The Change Data Group Definition (CHGDGDFN) display appears. Press Enter to see additional prompts.

c. Specify *NO for the Use remote journal link prompt.

d. To accept the change, press Enter.

4. Start data group replication using the procedure “Starting selected data group processes” in the Using MIMIX book and specify *ALL for the Start processes prompt (PRC parameter).
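Assuming the Start Data Group (STRDG) command underlies the “Starting selected data group processes” procedure, step 4 might be entered as follows; the data group name is the illustrative example used elsewhere in this book:

```
STRDG DGDFN(SUPERAPP MEXICITY CHICAGO) PRC(*ALL)
```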


Removing a remote journaling environment

Use this procedure when you want to remove a remote journaling environment that you no longer need. This procedure removes configuration elements and system objects necessary for data group replication with remote journaling.

1. Verify that the remote journal link is not used by any data group. Use “Identifying data groups that use an RJ link” on page 310.

If you identify a data group that uses the remote journal link, check with your MIMIX administrator and determine how to proceed. Possible courses of action are:

• If the data group is being converted to use MIMIX send processing or if the data group will no longer be used, perform a controlled end of the data group. When the data group is ended, continue with Step 2 of this procedure.

• If the data group needs to remain operable using remote journaling, do not continue with this procedure.

2. End the remote journal link and verify that it has a state value of *INACTIVE before you continue. Refer to topics “Ending a remote journal link independently” and “Checking status of a remote journal link” in the Using MIMIX book.
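If your installation works from commands rather than displays, this step might look like the following sketch. The command names ENDRJLNK and WRKRJLNK are assumed here from the display and topic names, and the link identification prompts are omitted; verify both against the Using MIMIX book:

```
ENDRJLNK                           /* end only the RJ link             */
WRKRJLNK                           /* confirm State shows *INACTIVE    */
```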

3. From the management system, do the following to remove the connection to the remote journal:

a. Access the journal definitions for the data group whose environment you want to change. From the Work with Data Groups display, type a 45 (Journal definitions) next to the data group that you want and press Enter.

b. Type a 12 (Work with RJ links) next to either journal definition you want and press Enter. You can select either the source or target journal definition.

c. From the Work with RJ Links display, type a 15 (Remove RJ connection) next to the link that you want and press Enter.

Note: If more than one RJ link is available for the data group, ensure that you choose the link you want.

d. A confirmation display appears. To continue removing the connections for the selected links, press Enter.

4. From the Work with RJ Links display, do the following to delete the target system objects associated with the RJ link:

a. Type a 24 (Delete target jrn environment) next to the link that you want and press Enter.

Attention: Do not continue with this procedure if you identified a data group that uses the remote journal link and the data group must continue to be operational. This procedure removes configuration elements and system objects necessary for replication with remote journaling.


b. A confirmation display appears. To continue deleting the journal, its associated message queue, the journal receiver, and to remove the connection to the source journal receiver, press Enter.

5. Delete the target journal definition using topic “Deleting a Definition” in the Using MIMIX book. When you delete the target journal definition, its link to the source journal definition is removed.

6. Use option 4 (Delete) on the Work with Monitors display to delete the RJLNK monitors which have the same name as the RJ link.


Chapter 10

Configuring data group definitions

By creating a data group definition, you identify to MIMIX the characteristics of how replication occurs between two systems. You must have at least one data group definition in order to perform replication.

In an Intra environment, a data group definition defines how replication occurs between the two product libraries used by INTRA.

Once data group definitions exist for MIMIX, they can also be used by the MIMIX Promoter product.

The topics in this chapter include:

• “Tips for data group parameters” on page 234 provides tips for using the more common options for data group definitions.

• “Creating a data group definition” on page 247 provides the steps to follow for creating a data group definition.

• “Changing a data group definition” on page 251 provides the steps to follow for changing a data group definition.

• “Fine-tuning backlog warning thresholds for a data group” on page 251 describes what to consider when adjusting the values at which the backlog warning thresholds are triggered.


Tips for data group parameters

This topic provides tips for using the more common options for data group definitions. Context-sensitive help is available online for all options on the data group definition commands. Refer to “Additional considerations for data groups” on page 244 for more information.

Shipped default values for the Create Data Group Definition (CRTDGDFN) command result in data groups configured for MIMIX Dynamic Apply. For additional information see Table 12 in “Considerations for LF and PF files” on page 105.

Data group names (DGDFN, DGSHORTNAM) These parameters identify the data group.

The Data group definition (DGDFN) is a three-part name that uniquely identifies a data group. The three-part name must be unique to a MIMIX cluster. The first part of the name identifies the data group. The second and third parts of the name (System 1 and System 2) specify system definitions representing the systems between which the files and objects associated with the data group are replicated.

Notes:

• In the first part of the name, the first character must be either A - Z, $, #, or @. The remaining characters can be alphanumeric and can contain a $, #, @, a period (.), or an underscore (_). Data group names cannot be UPSMON or begin with the characters MM.

• For Clustering environments only, MIMIX recommends using the value *RCYDMN in System 1 and System 2 fields for Peer CRGs.

One of the system definitions specified must represent a management system. Although you can specify the system definitions in any order, you may find it helpful if you specify them in the order in which replication occurs during normal operations. For many users normal replication occurs from a production system to a backup system, where the backup system is defined as the management system for MIMIX. For example, if you normally replicate data for an application from a production system (MEXICITY) to a backup system (CHICAGO) and the backup system is the management system for the MIMIX cluster, you might name your data group SUPERAPP MEXICITY CHICAGO.

The Short data group name (DGSHORTNAM) parameter indicates an abbreviated name used as a prefix to identify jobs associated with a data group. MIMIX will generate this prefix for you when the default *GEN is used. The short name must be unique to the MIMIX cluster and cannot be changed after the data group is created.
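Using the naming example above, a minimal create might look like the following; all other parameters are left at their shipped defaults, which configure the data group for MIMIX Dynamic Apply:

```
CRTDGDFN DGDFN(SUPERAPP MEXICITY CHICAGO) DGSHORTNAM(*GEN)
```

The value *GEN lets MIMIX generate the short name used as the job-name prefix; because the short name cannot be changed after the data group is created, specify your own value here only if you are certain of it.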

Data source (DTASRC) This parameter indicates which of the systems in the data group definition is used as the source of data for replication.

Allow to be switched (ALWSWT) This parameter determines whether the direction in which data is replicated between systems can be switched. If you plan to use the data group for high availability purposes, use the default value *YES. This allows you to use one data group for replicating data in either direction between the two systems. If you do not allow switching directions, you need to have a second data group with similar attributes in which the roles of source and target are reversed in order to support high availability.

Data group type (TYPE) The default value *ALL indicates that the data group can be used by both user journal and system journal replication processes. This enables you to use the same data group for all of the replicated data for an application. The value *ALL is required for user journal replication of IFS objects, data areas, and data queues. MIMIX Dynamic Apply also supports the value *DB. For additional information, see “Requirements and limitations of MIMIX Dynamic Apply” on page 110.

Note: In Clustering environments only, the data group value of *PEER is available. This provides you with support for system values and other system attributes that MIMIX currently does not support.

Transfer definitions (PRITFRDFN, SECTFRDFN) These parameters identify the transfer definitions used to communicate between the systems defined by the data group. The name you specify in these parameters must match the first part of a transfer definition name. By default, MIMIX uses the name PRIMARY for a value of the primary transfer definition (PRITFRDFN) parameter and for the first part of the name of a transfer definition.

If you specify a secondary transfer definition (SECTFRDFN), it is used if the communications path specified in the primary transfer definition is not available. Once MIMIX starts using the secondary transfer definition, it continues to use it even after the primary communication path becomes available again.

Reader wait time (seconds) (RDRWAIT) You can specify the maximum number of seconds that the send process waits when there are no entries available to process. Jobs go into a delay state when there are no entries to process. Jobs wait for the time you specify even when new entries arrive in the journal. A value of 0 uses more system resources.

Common database parameters (JRNTGT, JRNDFN1, JRNDFN2, ASPGRP1, ASPGRP2, RJLNK, COOPJRN, NBRDBAPY, DBJRNPRC) These parameters apply to data groups that can include database files or tracking entries. Data group types of *ALL or *DB include database files. Data group types of *ALL may also include tracking entries.

Journal on target (JRNTGT) The default value *YES enables journaling on the target system, which allows you to switch the direction of a data group more quickly. Replication of files with some types of referential constraint actions may require a value of *YES. For more information, see “Considerations for LF and PF files” on page 105.

If you specify *NO, you must ensure that, in the event of a switch to the direction of replication, you manually start journaling on the target system before allowing users to access the files. Otherwise, activity against those files may not be properly recorded for replication.

System 1 journal definition (JRNDFN1) and System 2 journal definition (JRNDFN2) parameters identify the user journal definitions associated with the systems defined as System 1 and System 2, respectively, of the data group. The value *DGDFN indicates that the journal definition has the same name as the data group definition.

The DTASRC, ALWSWT, JRNTGT, JRNDFN1, and JRNDFN2 parameters interact to automatically create as much of the journaling environment as possible. The DTASRC parameter determines whether system 1 or system 2 is the source system for the data group. When you create the data group definition, if the journal definition for the source system does not exist, a journal definition is created. If you specify to journal on the target system and the journal definition for the target system does not exist, that journal definition is also created. The names of journal definitions created in this way are taken from the values of the JRNDFN1 and JRNDFN2 parameters according to which system is considered the source system at the time they are created. You may need to build the journaling environment for these journal definitions.

System 1 ASP group (ASPGRP1) and System 2 ASP group (ASPGRP2) parameters identify the name of the primary auxiliary storage pool (ASP) device within an ASP group on each system. The value *NONE allows replication from libraries in the system ASP and basic user ASPs 2-32. Specify a value when you want to replicate IFS objects from a user journal or when you want to replicate objects from ASPs 33 or higher. For more information see “Benefits of independent ASPs” on page 564.

Use remote journal link (RJLNK) This parameter identifies how journal entries are moved to the target system. The default value, *YES, uses remote journaling to transfer data to the target system. This value results in the automatic creation of the journal definitions (CRTJRNDFN command) and the RJ link (ADDRJLNK command), if needed. The RJ link defines the source and target journal definitions and the connection between them. When ADDRJLNK is run during the creation of a data group, the data group transfer definition names are used for the ADDRJLNK transfer definition parameters.

MIMIX Dynamic Apply requires the value *YES. The value *NO is appropriate when MIMIX source-send processes must be used.

Cooperative journal (COOPJRN) This parameter determines whether cooperatively processed operations for journaled objects are performed primarily by user (database) journal replication processes or system (audit) journal replication processes. Cooperative processing through the user journal is recommended and is called MIMIX Dynamic Apply. For data groups created on version 5, the shipped default value *DFT resolves to *USRJRN (user journal) when configuration requirements for MIMIX Dynamic Apply are met. If those requirements are not met, *DFT resolves to *SYSJRN and cooperative processing is performed through system journal replication processes.

Number of DB apply sessions (NBRDBAPY) You can specify the number of apply sessions allowed to process the data for the data group.

DB journal entry processing (DBJRNPRC) This parameter allows you to specify several criteria that MIMIX will use to filter user journal entries before they reach the database apply (DBAPY) process. Each element of the parameter identifies a criterion that can be set to either *SEND or *IGNORE.

The value *SEND causes the journal entries meeting the criteria to be processed and sent to the database apply process. For data groups configured to use MIMIX source-send processes, *SEND can minimize the amount of data that is sent over a communications path. The value *IGNORE prevents the entries from being sent to the database apply process. Certain database techniques, such as keyed replication, may require that an element be set to a specific value.

The following available elements describe how journal entries are handled by the database reader (DBRDR) or the database send (DBSND) processes.

• Before images This criteria determines whether before-image journal entries are filtered out before reaching the database apply process. If you use keyed replication, the before-images are often required and you should specify *SEND. *SEND is also required for the IBM RMVJRNCHG (Remove Journal Change) command. See “Additional considerations for data groups” on page 244 for more information.

• For files not in data group This criteria determines whether journal entries for files not defined to the data group are filtered out.

• Generated by MIMIX activity This criteria determines whether journal entries resulting from the MIMIX database apply process are filtered out.

• Not used by MIMIX This criteria determines whether journal entries not used by MIMIX are filtered out.
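As an illustration only, a prompted CHGDGDFN might set the four elements as shown below. The element labels follow the descriptions above, and the values shown (keep before-images, filter everything else) are one possible choice for a keyed-replication environment, not the shipped defaults:

```
CHGDGDFN                           /* F4 to prompt the definition      */
  DB journal entry processing:
    Before images  . . . . . . . .   *SEND    /* needed for keyed repl. */
    For files not in data group  .   *IGNORE
    Generated by MIMIX activity  .   *IGNORE
    Not used by MIMIX  . . . . . .   *IGNORE
```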

Additional parameters: Use F10 (Additional parameters) to access the following parameters. These parameters are considered advanced configuration topics.

Remote journaling threshold (RJLNKTHLD) This parameter specifies the backlog threshold criteria for the remote journal function. When the backlog meets any of the specified criteria, the threshold exceeded condition is indicated in the status of the RJ link. The threshold can be specified as a time difference, a number of journal entries, or both. When a time difference is specified, the value is the amount of time, in minutes, between the timestamp of the last source journal entry and the timestamp of the last remote journal entry. When a number of journal entries is specified, the value is the number of journal entries that have not been sent from the local journal to the remote journal. If *NONE is specified for a criterion, that criterion is not considered when determining whether the backlog has reached the threshold.

Synchronization check interval (SYNCCHKITV) This parameter, which is only valid for database processing, allows you to specify how many before-image entries to process between synchronization checks. For MIMIX to use this feature, the journal image file entry option (FEOPT parameter) must allow before-image journaling (*BOTH). When you specify a value for the interval, a synchronization check entry is sent to the apply process on the target system. The apply process compares the before-image to the image in the file (the entire record, byte for byte). If there is a synchronization problem, MIMIX puts the data group file entry on hold and stops applying journal entries. The synchronization check transactions still occur even if you specify to ignore before-images in the DB journal entry processing (DBJRNPRC) parameter.

Time stamp interval (TSPITV) This parameter, which is only valid for database processing, allows you to specify the number of entries to process before MIMIX creates a time stamp entry. Time stamps are used to evaluate performance.

Note: The TSPITV parameter does not apply for remote journaling (RJ) data groups.


Verify interval (VFYITV) This parameter allows you to specify the number of journal transactions (entries) to process before MIMIX performs additional processing. When the value specified is reached, MIMIX verifies that the communications path between the source system and the target system is still active and that the send and receive processes are successfully processing transactions. A higher value uses less system resources. A lower value provides more timely reaction to error conditions. Larger, high-volume systems should have higher values. This value also affects how often the status is updated with the "Last read" entries. A lower value results in more accurate status information.

Data area polling interval (DTAARAITV) This parameter specifies the number of seconds that the data area poller waits between checks for changes to data areas. The poller process is only used when configured data group data area entries exist. The preferred methods of replicating data areas require that data group object entries be used to identify data areas. When object entries identify data areas, the value specified in them for cooperative processing (COOPDB) determines whether the data areas are processed through the user journal with advanced journaling, or through the system journal.

Journal at creation (JRNATCRT) This parameter allows you to specify whether to start journaling when objects are created in the libraries replicated by the data group. This applies to new objects of type *FILE, *DTAARA, and *DTAQ that are cooperatively processed. All new objects of the same type are journaled, including those not replicated by the data group. If multiple data groups include the same library in their configurations, only allow one data group to use journal at object creation (*YES or *DFT). The default for this parameter is *DFT which allows MIMIX to determine the objects to journal at creation.

For example, a data group is configured to cooperatively process only file ABC from library APPDTA. The library also contains data areas and temporary files that are not configured for replication. Specifying a value that permits journaling of newly created objects (*YES or *DFT) will result in all newly created files in library APPDTA being journaled. Newly created data areas in this library would not be journaled.

Note: There are operating system restrictions and some IBM library restrictions. For more information, see the requirements for implicit starting of journaling in “What objects need to be journaled” on page 323. For additional information, see “Processing of newly created files and objects” on page 127.

Parameters for automatic retry processing: MIMIX may use delay retry cycles when performing system journal replication to automatically retry processing an object that failed due to a locking condition or an in-use condition. It is normal for some pending activity entries to undergo delay retry processing—for example, when a conflict occurs between replicated objects in MIMIX and another job on the system. The following parameters define the scope of two retry cycles:

Number of times to retry (RTYNBR) This parameter specifies the number of attempts to make during a delay retry cycle.

First retry delay interval (RTYDLYITV1) This parameter specifies the amount of time, in seconds, to wait before retrying a process in the first (short) delay retry cycle.

Second retry delay interval (RTYDLYITV2) This parameter specifies the amount of time, in seconds, to wait before retrying a process in the second (long) delay retry cycle. This is only used after all the retries for the RTYDLYITV1 parameter have been attempted.

After the initial failed save attempt, MIMIX delays for the number of seconds specified for the First retry delay interval (RTYDLYITV1) before retrying the save operation. This is repeated for the specified number of times (RTYNBR).

If the object cannot be saved after all attempts in the first cycle, MIMIX enters the second retry cycle. In the second retry cycle, MIMIX uses the number of seconds specified in the Second retry delay interval (RTYDLYITV2) parameter and repeats the save attempt for the specified number of times (RTYNBR).

If the object identified by the entry is in use (*INUSE) after the first and second retry cycle attempts have been exhausted, a third retry cycle is attempted if the Automatic object recovery policy is enabled. The values in effect for the Number of third delay/retries policy and the Third retry interval (min.) policy determine the scope of the third retry cycle. After all attempts have been performed, if the object still cannot be processed because of contention with other jobs, the status of the entry will be changed to *FAILED.
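A worked example of the first two cycles, using illustrative values rather than shipped defaults:

```
RTYNBR(5)  RTYDLYITV1(15)  RTYDLYITV2(300)
/* First (short) cycle:  5 attempts, 15 seconds apart  -> about 75 seconds */
/* Second (long) cycle:  5 attempts, 300 seconds apart -> about 25 minutes */
/* Only after both cycles, and only if the policy is enabled, does the     */
/* third retry cycle run.                                                  */
```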

Adaptive cache (ADPCHE) This parameter enables adaptive caching for a data group. Adaptive caching is a technique by which MIMIX caches data into memory before it is needed by user journal replication processes. Using adaptive caching provides greater elapsed time performance by using additional memory.

File and tracking entry options (FEOPT) This parameter specifies default options that determine how MIMIX handles file entries and tracking entries for the data group. All database file entries, object tracking entries, and IFS tracking entries defined to the data group use these options unless they are explicitly overridden by values specified in data group file or object entries. File entry options in data group object entries enable you to set values for files and tracking entries that are cooperatively processed.

The options are as follows:

• Journal image This option allows you to control the kinds of record images that are written to the journal when data updates are made to database file records, IFS stream files, data areas, or data queues. The default value *AFTER causes only after-images to be written to the journal. The value *BOTH causes both before-images and after-images to be written to the journal. Some database techniques, such as keyed replication, may require the use of both before-images and after-images. *BOTH is also required for the IBM RMVJRNCHG (Remove Journal Change) command. See “Additional considerations for data groups” on page 244 for more information.

• Omit open/close entries This option allows you to specify whether open and close entries are omitted from the journal. The default value *YES indicates that open and close operations on file members or IFS tracking entries defined to the data group do not create open and close journal entries and are therefore omitted from the journal. If you specify *NO, journal entries are created for open and close operations and are placed in the journal.

• Replication type This option allows you to specify the type of replication to use for database files defined to the data group. The default value *POSITION indicates that each file is replicated based on the position of the record within the file. Positional replication uses the values of the relative record number (RRN) found in the journal entry header to locate a database record that is being updated or deleted. MIMIX Dynamic Apply requires the value *POSITION.

The value *KEYED indicates that each file is replicated based on the value of the primary key defined to the database file. The value of the key is used to locate a database record that is being deleted or updated. MIMIX strongly recommends that any file configured for keyed replication also be enabled for both before-image and after-image journaling. Files defined using keyed replication must have at least one unique access path defined. For additional information, see “Keyed replication” on page 355.

• Lock member during apply This option allows you to choose whether you want the database apply process to lock file members when they are being updated during the apply process. This prevents inadvertent updates on the target system that can cause synchronization errors. Members are locked only when the apply process is active.

• Apply session With this option, you can assign a specific apply session for processing files defined to the data group. The default value *ANY indicates that MIMIX determines which apply session to use and performs load balancing.

Notes:

• Any changes made to the apply session option are not effective until the data group is started with *YES specified for the clear pending and clear error parameters.

• For IFS and object tracking entries, only apply session A is valid. For additional information see “Database apply session balancing” on page 87.

• Collision resolution This option determines how data collisions are resolved. The default value *HLDERR indicates that a file is put on hold if a collision is detected. The value *AUTOSYNC indicates that MIMIX will attempt to automatically synchronize the source and target file. You can also specify the name of the collision resolution class (CRCLS) to use. A collision resolution class allows you to specify how to handle a variety of collision types, including calling exit programs to handle them. See the online help for the Create Collision Resolution Class (CRTCRCLS) command for more information.

Note: The *AUTOSYNC value should not be used if the Automatic database recovery policy is enabled.

• Disable triggers during apply This option determines if MIMIX should disable any triggers on physical files during the database apply process. The default value *YES indicates that triggers should be disabled by the database apply process while the file is opened.

• Process trigger entries This option determines if MIMIX should process any journal entries that are generated by triggers. The default value *YES indicates that journal entries generated by triggers should be processed.


Database reader/send threshold (DBRDRTHLD) This parameter specifies the backlog threshold criteria for the database reader (DBRDR) process. When the backlog meets any of the specified criteria, the threshold exceeded condition is indicated in the status of the DBRDR process. If the data group is configured for MIMIX source-send processing instead of remote journaling, this threshold applies to the database send (DBSND) process. The threshold can be specified as time, journal entries, or both. When time is specified, the value is the amount of time, in minutes, between the timestamp of the last journal entry read by the process and the timestamp of the last journal entry in the journal. When a journal entry quantity is specified, the value is the number of journal entries that have not been read from the journal. If *NONE is specified for a criterion, that criterion is not considered when determining whether the backlog has reached the threshold.

Database apply processing (DBAPYPRC) This parameter allows you to specify defaults for operations associated with the database apply processes. Each configured apply session uses the values specified in this parameter. The areas for which you can specify defaults are as follows:

• Force data interval You can specify the number of records that are processed before MIMIX forces the apply process information to disk from cache memory. A lower value provides easier recovery for major system failures. A higher value provides for more efficient processing.

• Maximum open members You can specify the maximum number of members (with journal transactions to be applied) that the apply process can have open at one time. Once the limit specified is reached, the apply process selectively closes one file before opening a new file. A lower value reduces disk usage by the apply process. A higher value provides more efficient processing because MIMIX does not open and close files as often.

• Threshold warning You can specify the number of entries the apply process can have waiting to be applied before a warning message is sent. When the threshold is reached, the threshold exceeded condition is indicated in the status of the database apply process and a message is sent to the primary and secondary message queues.

• Apply history log spaces You can specify the maximum number of history log spaces that are kept after the journal entries are applied. Any value other than zero (0) affects performance of the apply processes.

• Keep journal log user spaces You can specify the maximum number of journal log spaces to retain after the journal entries are applied. Log user spaces are automatically deleted by MIMIX. Only the number of user spaces you specify are kept.

• Size of log user spaces (MB) You can specify the size of each log space (in megabytes) in the log space chain. Log spaces are used as a staging area for journal entries before they are applied. Larger log spaces provide better performance.
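The database apply defaults described above might be adjusted as in the following sketch. The element positions are assumptions inferred from the order of the descriptions, so use the prompted display (F4) rather than keying the values positionally.

```
/* Illustrative sketch only; element order is an assumption.         */
/* Force data interval 1000, maximum open members 400, threshold     */
/* warning 100,000 entries, 0 apply history log spaces, keep 2       */
/* journal log user spaces of 16 MB each.                            */
CHGDGDFN DGDFN(MYDG SYS1 SYS2) DBAPYPRC(1000 400 100000 0 2 16)
```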

Object processing (OBJPRC) This parameter allows you to specify defaults for object replication. The areas for which you can specify defaults are as follows:

• Object default owner You can specify the name of the default owner for objects whose owning user profile does not exist on the target system. The product default uses QDFTOWN for the owner user profile.

• DLO transmission method You can specify the method used to transmit the DLO content and attributes to the target system. The value *OPTIMIZED uses i5/OS APIs. The value *SAVRST uses i5/OS save and restore commands.

• IFS transmission method You can specify the method used to transmit IFS object content to the target system. The value *SAVRST uses i5/OS save and restore commands. The value *OPTIMIZED uses i5/OS APIs.

Note: It is recommended that you use the *OPTIMIZED method of IFS transmission only in environments in which the high volume of IFS activity results in persistent replication backlogs. The i5/OS save and restore method guarantees that all attributes of an IFS object are replicated. The IFS optimization method does not currently replicate digital signatures or other attributes that have been added in i5/OS V5R2 or later.

• User profile status You can specify the user profile Status value for user profiles when they are replicated. This allows you to replicate user profiles with the same status as the source system in either an enabled or disabled status for normal operations. If operations are switched to the backup system, user profiles can then be enabled or disabled as needed as part of the switching process.

• Keep deleted spooled files You can specify whether to retain replicated spooled files on the target system after they have been deleted from the source system. When you specify *YES, the replicated spooled files are retained on the target system after they are deleted from the source system. MIMIX does not perform any clean-up of these spooled files. You must delete them manually when they are no longer needed. If you specify *NO, the replicated spooled files are deleted from the target system when they are deleted from the source system.

• Keep DLO system object name You can specify whether the DLO on the target system is created with the same system object name as the DLO on the source system. The system object name is only preserved if the DLO is not being redirected during the replication process. If the DLO from the source system is being directed to a different name or folder on the target system, then the system object name will not be preserved.

• Object retrieval delay You can specify the amount of time, in seconds, to wait after an object is created or updated before MIMIX packages the object. This delay provides time for your applications to complete their access of the object before MIMIX begins packaging the object.

Object send threshold (OBJSNDTHLD) This parameter specifies the backlog threshold criteria for the object send (OBJSND) process. When the backlog reaches any of the specified criteria, the threshold exceeded condition is indicated in the status of the OBJSND process. The threshold can be specified as time, journal entries, or both. When time is specified, the value is the amount of time, in minutes, between the timestamp of the last journal entry read by the process and the timestamp of the last journal entry in the journal. When a journal entry quantity is specified, the value is the number of journal entries that have not been read from the journal. If *NONE is specified for a criterion, that criterion is not considered when determining whether the backlog has reached the threshold.


Page 243: MIMIX Reference

Object retrieve processing (OBJRTVPRC) This parameter allows you to specify the minimum and maximum number of jobs allowed to handle object retrieve requests and the threshold at which the number of pending requests queued for processing causes additional temporary jobs to be started. The specified minimum number of jobs will be started when the data group is started. During periods of peak activity, if the number of pending requests exceeds the backlog jobs threshold, additional jobs, up to the maximum, are started to handle the extra work. When the backlog is handled and activity returns to normal, the extra jobs will automatically end. If the backlog reaches the warning message threshold, the threshold exceeded condition is indicated in the status of the object retrieve (OBJRTV) process. If *NONE is specified for the warning message threshold, the process status will not indicate that a backlog exists.

Container send processing (CNRSNDPRC) This parameter allows you to specify the minimum and maximum number of jobs allowed to handle container send requests and the threshold at which the number of pending requests queued for processing causes additional temporary jobs to be started. The specified minimum number of jobs will be started when the data group is started. During periods of peak activity, if the number of pending requests exceeds the backlog jobs threshold, additional jobs, up to the maximum, are started to handle the extra work. When the backlog is handled and activity returns to normal, the extra jobs will automatically end. If the backlog reaches the warning message threshold, the threshold exceeded condition is indicated in the status of the container send (CNRSND) process. If *NONE is specified for the warning message threshold, the process status will not indicate that a backlog exists.

Object apply processing (OBJAPYPRC) This parameter allows you to specify the minimum and maximum number of jobs allowed to handle object apply requests and the threshold at which the number of pending requests queued for processing triggers additional temporary jobs to be started. The specified minimum number of jobs will be started when the data group is started. During periods of peak activity, if the number of pending requests exceeds the backlog threshold, additional jobs, up to the maximum, are started to handle the extra work. When the backlog is handled and activity returns to normal, the extra jobs will automatically terminate. You can also specify a threshold for warning message that indicates the number of pending requests waiting in the queue for processing before a warning message is sent. When the threshold is reached, the threshold exceeded condition is indicated in the status of the object apply process and a message is sent to the primary and secondary message queues.
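The job-pool values for a process such as object apply might be tuned as in the following sketch. The element positions are assumptions inferred from the description above, so prompt the command with F4 to confirm them.

```
/* Illustrative sketch only; element order is an assumption.         */
/* Minimum 2 jobs, maximum 6 jobs, start extra jobs at a backlog of  */
/* 50 requests per job, and send a warning message at 200 pending    */
/* requests.                                                         */
CHGDGDFN DGDFN(MYDG SYS1 SYS2) OBJAPYPRC(2 6 50 200)
```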

User profile for submit job (SBMUSR) This parameter allows you to specify the name of the user profile used to submit jobs. The default value *JOBD indicates that the user profile named in the specified job description is used for the job being submitted. The value *CURRENT indicates that the same user profile used by the job that is currently running is used for the submitted job.

Send job description (SNDJOBD) This parameter allows you to specify the name and library of the job description used to submit send jobs. The product default uses MIMIXSND in library MIMIXQGPL for the send job description.


Page 244: MIMIX Reference


Apply job description (APYJOBD) This parameter allows you to specify the name and library of the job description used to submit apply requests. The product default uses MIMIXAPY in library MIMIXQGPL for the apply job description.

Reorganize job description (RGZJOBD) This parameter, used by database processing, allows you to specify the name and library of the job description used to submit reorganize jobs. The product default uses MIMIXRGZ in library MIMIXQGPL for the reorganize job description.

Synchronize job description (SYNCJOBD) This parameter, used by database processing, allows you to specify the name and library of the job description used to submit synchronize jobs. The product default uses MIMIXSYNC in library MIMIXQGPL for the synchronize job description. This is valid for any synchronize command that does not have a JOBD parameter on the display.

Job restart time (RSTARTTIME) MIMIX data group jobs restart daily to maintain the MIMIX environment. You can change the time at which these jobs restart. The source or target role of the system affects the results of the time you specify on a data group definition. Results may also be affected if you specify a value that uses the job restart time in a system definition defined to the data group. Changing the job restart time is considered an advanced technique.

Recovery window (RCYWIN) Configuring a recovery window for a data group specifies the minimum amount of time, in minutes, that a recovery window is available and identifies the replication processes that permit a recovery window. A recovery window introduces a delay in the specified processes to create a minimum time during which you can set a recovery point. Once a recovery point is set, you can react to anticipated problems and take action to prevent a corrupted object from reaching the target system. When the processes reach the recovery point, they are suspended so that any corruption in the transactions after that point will not automatically be processed.

By its nature, a recovery window can affect the data group's recovery time objective (RTO). Consider the effect of the duration you specify on the data group's ability to meet your required RTO. You should also disable auditing for any data group that has a configured recovery window. For more information, see “Preventing audits from running” in the Using MIMIX book.

Additional considerations for data groups

If unwanted changes are recorded to a journal but not realized until a later time, you can backtrack to a time prior to when the changes were made by using the Remove Journal Changes (RMVJRNCHG) command provided by IBM. In order to use this command, your configuration must specify the following:

For the data group definition, the following values must be specified for the parameters indicated:

• DB journal entry processing (DBJRNPRC): Before images *SEND

Note: Recovery windows and recovery points are supported with the MIMIX CDP™ feature, which requires an additional access code.


• File and tracking entry options (FEOPT): Journal image *BOTH

For each data group file entry, the following must be specified:

• File entry options: Journal image *DGDFT or *BOTH

Finally, if you are changing an existing data group to have these values, you must end and restart the data group. Once you have these values specified, you will be able to use the RMVJRNCHG command if needed.
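The configuration changes described above might be made as in the following sketch. The data group name and element positions are illustrative assumptions, so prompt each command with F4 rather than keying the values positionally.

```
/* Illustrative sketch only. Enables the values required before the  */
/* IBM RMVJRNCHG command can be used: Before images = *SEND on the   */
/* DBJRNPRC parameter and Journal image = *BOTH on the FEOPT         */
/* parameter. Element positions are assumptions.                     */
CHGDGDFN DGDFN(MYDG SYS1 SYS2) DBJRNPRC(*SEND) FEOPT(*BOTH)

/* The changed values take effect when the data group is restarted. */
ENDDG DGDFN(MYDG SYS1 SYS2)
STRDG DGDFN(MYDG SYS1 SYS2)
```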

Updated for 5.0.08.00 and 5.0.13.00.


Page 247: MIMIX Reference

Creating a data group definition

Shipped default values for the Create Data Group Definition (CRTDGDFN) command result in data groups configured for MIMIX Dynamic Apply. These data groups use remote journaling as an integral part of the user journal replication processes. For additional information see Table 12 in “Considerations for LF and PF files” on page 105. For information about command parameters, see “Tips for data group parameters” on page 234.

To create a data group, do the following:

1. To access the appropriate command, do the following:

a. From the MIMIX Basic Main Menu, select option 11 (Configuration menu) and press Enter.

b. From the MIMIX Configuration Menu, select option 4 (Work with data group definitions) and press Enter.

c. From the Work with Data Group Definitions display, type a 1 (Create) next to the blank line at the top of the list area and press Enter.

2. The Create Data Group Definition (CRTDGDFN) display appears. Specify a valid three-part name at the Data group definition prompts.

Note: Data group names cannot be UPSMON or begin with the characters MM.

3. For the remaining prompts on the display, verify the values shown are what you want. If necessary, change the values.

a. If you want a specific prefix to be used for jobs associated with the data group, specify a value at the Short data group name prompt. Otherwise, MIMIX will generate a prefix.

b. Ensure that the value of the Data source prompt represents the system that you want to use as the source of data to be replicated.

c. Verify that the value of the Allow to be switched prompt is what you want.

d. Verify that the value of the Data group type prompt is what you need. MIMIX Dynamic Apply requires either *ALL or *DB. Legacy cooperative processing and user journal replication of IFS objects, data areas, and data queues require *ALL.

e. Verify that the value of the Primary transfer definition prompt is what you want.

f. If you want MIMIX to have access to an alternative communications path, specify a value for the Secondary transfer definition prompt.

g. Verify that the value of the Reader wait time (seconds) prompt is what you want.

h. Press Enter.

4. If you specified *OBJ for the Data group type, skip to Step 9.

5. The Journal on target prompt appears on the display. Verify that the value shown is what you want and press Enter.


Page 248: MIMIX Reference


Note: If you specify *YES and you require that the status of journaling on the target system is accurate, you should perform a save and restore operation on the target system prior to loading the data group file entries. If you are performing your initial configuration, however, it is not necessary to perform a save and restore operation. You will synchronize as part of the configuration checklist.

6. More prompts appear on the display that identify journaling information for the data group. You may need to use the Page Down key to see the prompts. Do the following:

a. Ensure that the values of System 1 journal definition and System 2 journal definition identify the journal definitions you need.

Notes:
• If you have not journaled before, the value *DGDFN is appropriate. If you have an existing journaling environment that you have identified to MIMIX in a journal definition, specify the name of the journal definition.

• If you only see one of the journal definition prompts, you have specified *NO for both the Allow to be switched prompt and the Journal on target prompt. The journal definition prompt that appears is for the source system as specified in the Data source prompt.

b. If any objects to replicate are located in an auxiliary storage pool (ASP) group on either system, specify values for System 1 ASP group and System 2 ASP group as needed. The ASP group name is the name of the primary ASP device within the ASP group.

c. The default for the Use remote journal link prompt is *YES, which is required for MIMIX Dynamic Apply and preferred for other configurations. MIMIX creates a transfer definition and an RJ link, if needed. To create a data group definition for a source-send configuration, change the value to *NO.

d. At the Cooperative journal (COOPJRN) prompt, specify the journal for cooperative operations. For new data groups, the value *DFT automatically resolves to *USRJRN when Data group type is *ALL or *DB and Remote journal link is *YES. The value *USRJRN processes through the user (database) journal while the value *SYSJRN processes through the system (audit) journal.

7. At the Number of DB apply sessions prompt, specify the number of apply sessions you want to use.

8. Verify that the values shown for the DB journal entry processing prompts are what you want.

Note: *SEND is required for the IBM RMVJRNCHG (Remove Journal Change) command. See “Additional considerations for data groups” on page 244 for more information.

9. At the Description prompt, type a text description of the data group definition, enclosed in apostrophes.

10. Do one of the following:


Page 249: MIMIX Reference

• To accept the basic data group configuration, press Enter. Most users can accept the default values for the remaining parameters. The data group is created when you press Enter.

• To access prompts for advanced configuration, press F10 (Additional Parameters) and continue with the next step.

Advanced Data Group Options: The remaining steps of this procedure are only necessary if you need to access options for advanced configuration topics. The prompts are listed in the order they appear on the display. Because i5/OS does not allow additional parameters to be prompt-controlled, you will see all parameters regardless of the value specified for the Data group type prompt.

11. Specify the values you need for the following prompts associated with user journal replication:

• Remote journaling threshold

• Synchronization check interval

• Time stamp interval

• Verify interval

• Data area polling interval

• Journal at creation

12. Specify the values you need for the following prompts associated with system journal replication:

• Number of times to retry

• First retry delay interval

• Second retry delay interval

13. Accept the value *YES for the Adaptive cache prompt unless the system is memory constrained.

14. Specify the values you need for each of the prompts on the File and tracking ent. opts (FEOPT) parameter.

Notes:
• Replication type must be *POSITION for MIMIX Dynamic Apply.

• Apply session A is used for IFS objects, data areas, and data queues that are configured for user journal replication. For more information see “Database apply session balancing” on page 87.

• The journal image value *BOTH is required for the IBM RMVJRNCHG (Remove Journal Change) command. See “Additional considerations for data groups” on page 244 for more information.

15. Specify the values you need for each element of the following parameters:

• Database reader/send threshold

• Database apply processing

• Object processing


• Object send threshold

• Object retrieve processing

• Container send processing

• Object apply processing

16. If necessary, change the values for the following prompts:

• User profile for submit job

• Send job description and its Library

• Apply job description and its Library

• Reorganize job description and its Library

• Synchronize job description and its Library

• Job restart time

17. When you are sure that you have defined all of the values that you need, press Enter to create the data group definition.
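As a sketch, the basic configuration from the steps above corresponds to a CRTDGDFN invocation along the following lines. Only the command name and the DGDFN, COOPJRN, and NBRDBAPY keywords appear in this chapter; the remaining keywords are assumptions inferred from the prompt text, so use the prompted display (F4) rather than keying the command directly.

```
/* Illustrative sketch only; most keywords below are inferred from   */
/* prompt names and may not match the actual command definition.     */
CRTDGDFN DGDFN(MYDG SYS1 SYS2)        /* three-part data group name  */
         TYPE(*ALL)                   /* assumed keyword: Data group type */
         RJLNK(*YES)                  /* assumed keyword: Use remote journal link */
         COOPJRN(*DFT)                /* resolves to *USRJRN here    */
         NBRDBAPY(2)                  /* two DB apply sessions       */
         TEXT('Production to backup') /* assumed keyword: Description */
```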

Updated for 5.0.13.00.


Page 251: MIMIX Reference

Changing a data group definition

For information about command parameters, see “Tips for data group parameters” on page 234.

To change a data group definition, do the following:

1. From the Work with DG Definitions display, type a 2 (Change) next to the data group you want and press Enter.

2. The Change Data Group Definition (CHGDGDFN) display appears. Press Enter to see additional prompts.

3. Make any changes you need for the values of the prompts. Page Down to see more of the prompts.

Note: If you change the Number of DB apply sessions prompt (NBRDBAPY), you need to start the data group specifying *YES for the Clear pending prompt (CLRPND).

4. If you need to access advanced functions, press F10 (Additional parameters). Make any changes you need for the values of the prompts.

5. When you are ready to accept the changes, press Enter.
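For example, changing the number of apply sessions and then restarting with Clear pending might look like the following sketch. The data group name is an assumption; the NBRDBAPY and CLRPND keywords are taken from the note above.

```
/* Illustrative sketch only.                                         */
CHGDGDFN DGDFN(MYDG SYS1 SYS2) NBRDBAPY(4)

/* Per the note above, after changing NBRDBAPY the data group must   */
/* be started with Clear pending = *YES.                             */
ENDDG DGDFN(MYDG SYS1 SYS2)
STRDG DGDFN(MYDG SYS1 SYS2) CLRPND(*YES)
```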

Fine-tuning backlog warning thresholds for a data group

MIMIX supports the ability to set a backlog threshold on each of the replication jobs used by a data group. When a job has a backlog that reaches or exceeds the specified threshold, the threshold condition is indicated in the job status and reflected in user interfaces.

Threshold settings are meant to inform you that, while normal replication processes are active, a condition exists that could become a problem. What is an acceptable risk for some data groups may not be acceptable for other data groups or in some environments. For example, a threshold condition which occurs after starting a process that was temporarily ended or while processing an unusually large object which rarely changes may be an acceptable risk. However, a process that is continuously in a threshold condition or having multiple processes frequently in threshold conditions may indicate a more serious exposure that requires attention. Ultimately, each threshold setting must be a balance between allowing normal fluctuations to occur while ensuring that a job status is highlighted when a backlog approaches an unacceptable level of risk to your recovery time objectives (RTO) or risk of data loss.

Important! When evaluating whether threshold settings are compatible with your RTO, you must consider all of the processes in the replication paths for which the data group is configured and their thresholds. Each threshold represents only one process in either the user journal replication path or the system journal replication path. If the threshold for one process is set higher than its shipped value, a backlog for that process may not result in a threshold condition while being sufficiently large to cause subsequent processes to have backlogs which exceed their thresholds. Consider the cumulative effect that having multiple processes in threshold conditions would have on RTO and your tolerance for data loss in the event of a failure.

Table 31 lists the shipped values for thresholds available in a data group definition, identifies the risk associated with a backlog for each replication process, and identifies available options to address a persistent threshold condition. For each data group, you may need to use multiple options or adjust one or more threshold values multiple times before finding an appropriate setting.

Table 31. Shipped threshold values for replication processes and the risk associated with a backlog

Remote journaling threshold (shipped default: 10 minutes)
Risk: All journal entries in the backlog for the remote journaling function exist only in the source system journal and are waiting to be transmitted to the remote journal. These entries cannot be processed by MIMIX user journal replication processes and are at risk of being lost if the source system fails. After the source system becomes available again, journal analysis may be required.
Options for resolving persistent threshold conditions: Option 3, Option 4

Database reader/send threshold (shipped default: 10 minutes)
Risk: For data groups that use remote journaling, all journal entries in the database reader backlog are physically located on the target system but MIMIX has not started to replicate them. If the source system fails, these entries need to be read and applied before switching. For data groups that use MIMIX source-send processing, all journal entries in the database send backlog are waiting to be read and to be transmitted to the target system. The backlogged journal entries exist only in the source system and are at risk of being lost if the source system fails. After the source system becomes available again, journal analysis may be required.
Options for resolving persistent threshold conditions: Option 2, Option 3, Option 4

Database apply warning message threshold (shipped default: 100,000 entries)
Risk: All of the entries in the database apply backlog are waiting to be applied to the target system. If the source system fails, these entries need to be applied before switching. A large backlog can also affect performance.
Options for resolving persistent threshold conditions: Option 2, Option 3, Option 4


Page 253: MIMIX Reference

Object send threshold (shipped default: 10 minutes)
Risk: All of the journal entries in the object send backlog exist only in the system journal on the source system and are at risk of being lost if the source system fails. MIMIX may not have determined all of the information necessary to replicate the objects associated with the journal entries. As this backlog clears, subsequent processes may have backlogs as replication progresses.
Options for resolving persistent threshold conditions: Option 2, Option 3, Option 4

Object retrieve warning message threshold (shipped default: 100 entries)
Risk: All of the objects associated with journal entries in the object retrieve backlog are waiting to be packaged so they can be sent to the target system. The latest changes to these objects exist only in the source system and are at risk of being lost if the source system fails. As this backlog clears, subsequent processes may have backlogs as replication progresses.
Options for resolving persistent threshold conditions: Option 1, Option 2, Option 3, Option 4

Container send warning message threshold (shipped default: 100 entries)
Risk: All of the packaged objects associated with journal entries in the container send backlog are waiting to be sent to the target system. The latest changes to these objects exist only in the source system and are at risk of being lost if the source system fails. As this backlog clears, subsequent processes may have backlogs as replication progresses.
Options for resolving persistent threshold conditions: Option 1, Option 2, Option 3, Option 4

Object apply warning message threshold (shipped default: 100 requests)
Risk: All of the entries in the object apply backlog are waiting to be applied to the target system. If the source system fails, these entries need to be applied before switching. Any related objects for which an automatic recovery action was collecting data may be lost.
Options for resolving persistent threshold conditions: Option 1, Option 2, Option 3, Option 4

The following options are available, listed in order of preference. Some options are not available for all thresholds.

Option 1 - Adjust the number of available jobs. This option is available only for the object retrieve, container send, and object apply processes. Each of these processes has a configurable minimum and maximum number of jobs, a threshold at which more jobs are started, and a warning message threshold. If the number of entries in a backlog divided by the number of active jobs exceeds the job threshold, extra jobs are automatically started in an attempt to address the backlog. If the backlog reaches the higher value specified in the warning message threshold, the process status reflects the threshold condition. If the process frequently shows a threshold status, the maximum number of jobs may be too low or the job threshold value may be too high. Adjusting either value in the data group configuration can result in more throughput.

Option 2 - Temporarily increase job performance. This option is available for all processes except the RJ link. Use work management functions to increase the resources available to a job by increasing its run priority or its timeslice (CHGJOB command). These changes are effective only for the current instance of the job. The changes do not persist if the job is ended manually or by nightly cleanup operations resulting from the configured job restart time (RSTARTTIME) on the data group definition.
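For instance, the i5/OS CHGJOB command can raise a backlogged job's run priority and timeslice. The qualified job name below is a placeholder; locate the actual replication job first (for example, with WRKACTJOB).

```
/* Temporarily give a backlogged replication job more resources.     */
/* The qualified job name is a placeholder, not a real MIMIX job     */
/* name. These changes last only for the current instance of the     */
/* job.                                                              */
CHGJOB JOB(123456/MIMIXOWN/APYA) RUNPTY(15) TIMESLICE(2000)
```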

Option 3 - Change threshold values or add criteria. All processes support changing the threshold value. In addition, if the quantity of entries is more of a concern than time, some processes support specifying additional threshold criteria not used by shipped default settings. For the remote journal, database reader (or database send), and object send processes, you can adjust the threshold so that a number of journal entries is used as criteria instead of, or in conjunction with, a time value. If both time and entries are specified, the first criterion reached will trigger the threshold condition. Changes to threshold values are effective the next time the process status is requested.

Option 4 - Get assistance. If you tried the other options and threshold conditions persist, contact your Certified MIMIX Consultant for assistance. It may be necessary to change configurations to adjust what is defined to each data group or to make permanent work management changes for specific jobs.

Updated for 5.0.13.00.


Page 255: MIMIX Reference

Chapter 11

Additional options: working with definitions

The procedures for performing common functions, such as copying, displaying, and renaming, are very similar for all types of definitions used by MIMIX. The generic procedures in this topic can be used for copying, deleting, displaying, and printing definitions. Specific procedures are included for renaming each type of definition and for swapping system definition names.

The topics in this chapter include:

• “Copying a definition” on page 255 provides a procedure for copying a system definition, transfer definition, journal definition, or a data group definition.

• “Deleting a definition” on page 256 provides a procedure for deleting a system definition, transfer definition, journal definition, or a data group definition.

• “Displaying a definition” on page 257 provides a procedure for displaying a system definition, transfer definition, journal definition, or a data group definition.

• “Printing a definition” on page 257 provides a procedure for creating a spooled file which you can print that identifies a system definition, transfer definition, journal definition, or a data group definition.

• “Renaming definitions” on page 258 provides procedures for renaming definitions, such as renaming a system definition, which is typically done as a result of a software change.

Copying a definition

Use this procedure on a management system to copy a system definition, transfer definition, journal definition, or a data group definition.

Notes for data group definitions:
• The data group entries associated with a data group definition are not copied.

• Before you copy a data group definition, ensure that activity is ended for the definition to which you are copying.

Notes for journal definitions:
• The journal definition identified in the From journal definition prompt must exist before it can be copied. The journal definition identified in the To journal definition prompt cannot exist when you specify *NO for the Replace definition prompt.

• If you specify *YES for the Replace definition prompt, the journal definition identified in the To journal definition prompt must exist. It is possible to introduce conflicts in your configuration when replacing an existing journal definition. These conflicts are automatically resolved or an error message is sent when the journal environment for the definition is built.

To copy a definition, do the following:

Note: The following procedure includes using MIMIX menus. See “Accessing the MIMIX Main Menu” on page 91 for information about using these.

1. From the MIMIX Main Menu, select option 11 (Configuration menu) and press Enter.

2. From the MIMIX Configuration Menu, select the option for the type of definition you want and press Enter.

3. The "Work with" display for the definition type appears. Type a 3 (Copy) next to the definition you want and press Enter.

4. The Copy display for the definition type you selected appears. At the To definition prompt, specify a name for the definition to which you are copying information.

5. If you are copying a journal definition or a data group definition, the display has additional prompts. Verify that the values of the prompts are what you want.

6. The value *NO for the Replace definition prompt prevents you from replacing an existing definition. If you want to replace an existing definition, specify *YES.

7. To copy the definition, press Enter.

Deleting a definition

Use this procedure on a management system to delete a system definition, transfer definition, journal definition, or data group definition.

To delete a definition, do the following:

Note: The following procedure includes using MIMIX menus. See “Accessing the MIMIX Main Menu” on page 91 for information about using these.

1. Ensure that the definition you want to delete is not being used for replication. Do the following:

Attention: When you delete a system or data group definition, information associated with the definition is also deleted. Ensure that the definition you delete is not being used for replication and be aware of the following:

• If you delete a system definition, all other configuration elements associated with that definition are deleted. This includes journal definitions, transfer definitions, and data group definitions with all associated data group entries.

• If you delete a data group definition, all of its associated data group entries are also deleted.

• The delete function does not clean up any records for files in the error/hold file.

• When you delete a journal definition, only the definition is deleted. The files being journaled, the journal, and the journal receivers are not deleted.


a. From the MIMIX Main Menu, select option 2 (Work with systems) and press Enter.

b. Type an 8 (Work with data groups) next to the system you want and press Enter.

c. The result is a list of data groups for the system you selected. Type a 17 (File entries) next to the data group you want and press Enter.

d. On the Work with DG File Entries display, verify that the status of the file entries is *INACTIVE. If necessary, use option 10 (End journaling).

e. On the Work with Data Groups display, use option 10 (End data group).

f. Before deleting a system definition, on the Work with Systems display, use option 10 (End managers).

2. From the MIMIX Main Menu, select option 11 (Configuration menu) and press Enter.

3. From the MIMIX Configuration Menu, select the option for the type of definition you want and press Enter.

4. The "Work with" display for the definition type appears. Type a 4 (Delete) next to the definition you want and press Enter.

5. A confirmation display appears with a list of definitions to be deleted. To delete the definitions, press Enter.

Displaying a definition

Use this procedure to display a system definition, transfer definition, journal definition, or data group definition.

To display a definition, do the following:

Note: The following procedure includes using MIMIX menus. See “Accessing the MIMIX Main Menu” on page 91 for information about using these.

1. From the MIMIX Main Menu, select option 11 (Configuration menu) and press Enter.

2. From the MIMIX Configuration Menu, select the option for the type of definition you want and press Enter.

3. The "Work with" display for the definition type appears. Type a 5 (Display) next to the definition you want and press Enter.

4. The definition display appears. Page Down to see all of the values.

Printing a definition

Use this procedure to create a spooled file, which you can print, that identifies a system definition, transfer definition, journal definition, or data group definition.

To print a definition, do the following:


Note: The following procedure includes using MIMIX menus. See “Accessing the MIMIX Main Menu” on page 91 for information about using these.

1. From the MIMIX Main Menu, select option 11 (Configuration menu) and press Enter.

2. From the MIMIX Configuration Menu, select the option for the type of definition you want and press Enter.

3. The "Work with" display for the definition type appears. Type a 6 (Print) next to the definition you want and press Enter.

4. A spooled file is created with a name of MX***DFN, where *** indicates the type of definition. You can print the spooled file according to your standard print procedures.

Renaming definitions

The procedures for renaming a system definition, transfer definition, journal definition, or data group definition must be run from a management system.

This section includes the following procedures:

• “Renaming a system definition” on page 258

• “Renaming a transfer definition” on page 261

• “Renaming a journal definition with considerations for RJ link” on page 262

• “Renaming a data group definition” on page 263

Renaming a system definition

When you rename a system definition, all other configuration information that references the system definition is automatically modified to include the updated system name. This includes journal definitions, transfer definitions, data group definitions, and associated data group entries.

A typical reason for renaming a system definition is a change in hardware. Other reasons include a change in the naming convention used in an environment, a change in the system’s location when the name correlates to that location, or simply a preference for a new name over the current one.

Another reason for renaming a system definition is to swap system definition names. For example, if the roles of two systems change so that the system which was the production system becomes the backup system and vice versa, the system definition names may also be swapped to reflect this change. When swapping system definition names, a temporary system definition name must be used because there cannot be two system definitions with the same name.

Attention: Before you rename any definition, ensure that all other configuration elements related to it are not active.

To rename system definitions, do the following from the management system (unless noted otherwise) for each system whose definition you are renaming:

Note: The following procedure includes using MIMIX menus. See “Accessing the MIMIX Main Menu” on page 91 for information about using these.

1. Perform a controlled end of the MIMIX installation. See the Using MIMIX book for procedures for ending MIMIX.

2. End the MIMIXSBS subsystem on all systems. See the Using MIMIX book for procedures for ending the MIMIXSBS subsystem.

3. From the MIMIX Intermediate Main Menu, select option 2 (Work with systems) and press Enter.

4. From the Work with Systems display, select option 8 (Work with data groups) on the system whose definition you are renaming, and press Enter.

5. For each data group listed, do the following:

a. From the Work with Data Groups display, select option 8 (Display status) and press Enter.

b. Record the Last Read Receiver name and Sequence # for both database and object.

6. If you are changing the host name or IP address, do the following steps. Otherwise, continue with Step 7.

a. From the MIMIX Intermediate Main Menu, select option 11 (Configuration menu) and press Enter.

b. From the MIMIX Configuration Menu, select option 2 (Work with transfer definitions) and press Enter.

c. The Work with Transfer Definitions display appears. Select option 2 (Change) for each transfer definition that includes the system whose definition you are renaming and press Enter.

d. The Change Transfer Definition (CHGTFRDFN) display appears. Press F10 to access additional parameters.

e. Specify the new host name or IP address for the System 1 host name or address and System 2 host name or address and press Enter.

Note: Many installations will have an autostart entry for the STRSVR command. Autostart entries must be reviewed for possible updates of a new system name or IP address. For more information, see “Identifying the autostart job entry in the MIMIXSBS subsystem” on page 191 and “Changing the job description for an autostart job entry” on page 191.

Attention: Before you rename a system definition, ensure that MIMIX activity is ended by using the End Data Group (ENDDG) and End MIMIX Manager (ENDMMXMGR) commands.
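The two commands named in the attention above can also be run directly from a command line before starting the rename. The sketch below is illustrative only: the parameter keywords shown are assumptions, not confirmed syntax, so prompt each command with F4 in your installation to verify the actual keywords.

```
/* Hypothetical keywords -- prompt each command with F4 to confirm. */
ENDDG DGDFN(DGDFN1)     /* End replication activity for the data group */
ENDMMXMGR MGR(*ALL)     /* End the MIMIX managers                      */
```

Running the commands directly is equivalent to using the corresponding menu options, and can be useful when scripting a controlled end.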


7. Start the MIMIXSBS subsystem and the port jobs on all systems. If you changed the host names or IP addresses, use the values specified in Step 6.

8. For all systems, ensure communications before continuing. Follow the steps in topic “Verifying all communications links” on page 195.

9. From the Work with System Definitions (WRKSYSDFN) display, type a 7 (Rename) next to the system whose definition is being renamed and press Enter.

10. The Rename System Definitions (RNMSYSDFN) display appears. At the To system definition prompt, specify the new name for the system whose definition is being renamed and press Enter.

11. The Confirm Rename System Definition display appears. Press Enter.

12. From the MIMIX Intermediate Main Menu, select option 2 (Work with systems) and press Enter.

13. The Work with Systems display appears. Type a 9 (Start) next to the management system you want and press Enter.

14. The Start MIMIX Managers (STRMMXMGR) display appears. Do the following:

a. At the Manager prompt, specify *ALL.

b. Press F10 to access additional parameters.

c. In the Reset configuration prompt, specify *YES.

d. Press Enter.
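Step 14 can be approximated from a command line. The keyword names below are assumptions based on the display prompts shown above (Manager and Reset configuration); treat them as a sketch and prompt STRMMXMGR with F4 to confirm the real keywords.

```
/* Hypothetical keyword names derived from the display prompts. */
STRMMXMGR MGR(*ALL) RESET(*YES)  /* Start all managers on the management
                                    system, resetting the configuration  */
```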

15. The Work with Systems display appears. For each network system, do the following:

a. Type a 9 (Start) next to each network system you want and press Enter.

b. The Start MIMIX Managers (STRMMXMGR) display appears. Press Enter. Wait for the MIMIX Managers to start before continuing.

16. From the Work with Systems display, select option 8 (Work with data groups) on the system whose definitions you have renamed and press Enter.

17. For each data group listed, do the following:

a. From the Work with Data Groups display, select option 9 (Start DG) and press Enter.

b. The Start Data Group (STRDG) display appears. Press F10 to display additional parameters.

c. Type the receiver names and sequence numbers that were recorded in Step 5b for both database and object, adding 1 to each sequence number. Press Enter.

18. From the Work with Systems display, select option 8 (Work with data groups) on the system whose definition you have renamed and ensure all data groups are active. You should see the letter ‘A’, highlighted in blue, in the database source column. Refer to the Using MIMIX book for more information.

19. Press F3 to return to the Work with Systems display.


20. From the Work with Systems display, select option 8 (Work with data groups) on the management system and press Enter.

21. From the Work with Data Groups display, select option 9 (Start DG) for data groups (highlighted red) that are not active and press Enter.

22. The Start Data Group (STRDG) display appears. Press Enter. Additional parameters are displayed. Press Enter again to start the data groups.

23. The Work with Data Groups display appears. Ensure all data groups are active. You should see the letter ‘A’, highlighted in blue, in the database source column. Refer to the Using MIMIX book for more information. Press F5 to refresh the data.

Renaming a transfer definition

When you rename a transfer definition, other configuration information which references it is not updated with the new name; you must update that information manually. The following procedure renames the transfer definition and includes steps to update the other configuration information that references it, including the system definition, data group definition, and remote journal link. All of the steps must be completed.

To rename a transfer definition, do the following from the management system:

Note: The following procedure includes using MIMIX menus. See “Accessing the MIMIX Main Menu” on page 91 for information about using these.

1. From the MIMIX Intermediate Main Menu, select option 11 (Configuration menu) and press Enter.

2. From the MIMIX Configuration Menu, select option 2 (Work with transfer definitions) and press Enter.

3. From the Work with Transfer Definitions menu, type a 7 (Rename) next to the definition you want to rename and press Enter.

4. The Rename Transfer Definition display for the definition type you selected appears. At the To transfer definition prompt, specify the values you want for the new name and press Enter.

5. Press F12 to return to the MIMIX Configuration Menu.

6. From the MIMIX Configuration Menu, select option 1 (Work with system definitions) and press Enter.

7. From the Work with System Definitions menu, type a 2 (Change) next to the system name whose transfer definition needs to be changed and press Enter.

8. From the Change System Definition display, specify the new name for the transfer definition and press Enter.

9. Press F12 to return to the MIMIX Configuration Menu.

10. From the MIMIX Configuration Menu, select option 4 (Work with data group definitions) and press Enter.

11. From the Work with DG Definitions menu, type a 2 (Change) next to the data group name whose transfer definition needs to be changed and press Enter.


12. From the Change Data Group Definition display, specify the new name for the transfer definition and press Enter until the Work with DG Definitions display appears.

13. Press F12 to return to the MIMIX Configuration Menu.

14. From the MIMIX Configuration Menu, select option 8 (Work with remote journal links) and press Enter.

15. From the Work with RJ Links menu, press F11 to display the transfer definitions.

16. Type a 2 (Change) next to the RJ link where you changed the transfer definition and press Enter.

17. From the Change Remote Journal Link display, specify the new name for the transfer definition and press Enter.

Renaming a journal definition with considerations for RJ link

When you rename a journal definition, other configuration information which references it is not updated with the new name. This procedure includes steps for updating the renamed journal definition in the data group definition, including considerations when an RJ link is used.

If you rename a journal definition, the journal name will also be renamed if you used the default value of *JRNDFN when configuring the journal definition. If you do not want the journal name to be renamed, you must specify the journal name rather than the default of *JRNDFN for the journal (JRN) parameter.

To rename a journal definition, do the following from the management system:

Note: The following procedure includes using MIMIX menus. See “Accessing the MIMIX Main Menu” on page 91 for information about using these.

1. Perform a controlled end for the data group in your remote journaling environment. Use topic “Ending all replication in a controlled manner” in the Using MIMIX book.

2. If using remote journaling, do the following. Otherwise, continue with Step 3:

a. End the remote journal link in a controlled manner. Use topic “Ending a remote journal link independently” in the Using MIMIX book.

b. Verify that the remote journal link is not in use on both systems. Use topic “Displaying status of a remote journal link” in the Using MIMIX book. The remote journal link should have a state value of *INACTIVE before you continue.

c. From the MIMIX Intermediate Main Menu, select option 11 (Configuration menu) and press Enter.

d. From the MIMIX Configuration Menu, select option 8 (Work with remote journal links) and press Enter.

e. Remove the remote journal connection (the RJ link). From the Work with RJ Links display, type a 15 (Remove RJ connection) next to the link that you want and press Enter. A confirmation display appears. To continue removing the connections for the selected links, press Enter.


f. Press F12 to return to the MIMIX Configuration Menu.

3. From the MIMIX Configuration Menu, select option 3 (Work with journal definitions) and press Enter.

4. From the Work with Journal Definitions menu, type a 7 (Rename) next to the journal definition names you want to rename and press Enter.

5. The Rename Journal Definition display for the definition you selected appears. At the To journal definition prompts, specify the values you want for the new name.

a. If the journal name is *JRNDFN, ensure that there are no journal receivers in the specified library whose names start with the journal receiver prefix. See “Building the journaling environment” on page 219 for more information.

6. Press Enter. The Work with Journal Definitions display appears.

7. If using remote journaling, do the following to change the corresponding definition for the remote journal. Otherwise, continue with Step 8:

a. Type a 2 (Change) next to the corresponding remote journal definition name you changed and press Enter.

b. Specify the values entered in Step 5 and press Enter.

8. From the Work with Journal Definitions menu, type a 14 (Build) next to the journal definition names you changed and press F4.

9. The Build Journaling Environment display appears. At the Source for values prompt, specify *JRNDFN.

10. Press Enter. You should see a message that indicates the journal environment was created.

11. Press F12 to return to the MIMIX Configuration Menu. From the MIMIX Configuration Menu, select option 4 (Work with data group definitions) and press Enter.

12. From the Work with DG Definitions menu, type a 2 (Change) next to the data group name that uses the journal definition you changed and press Enter.

13. Press F10 to access additional parameters.

14. From the Change Data Group Definition display, specify the new name for the System 1 journal definition and System 2 journal definition and press Enter twice.

Renaming a data group definition

Do the following to rename a data group definition:

Note: The following procedure includes using MIMIX menus. See “Accessing the MIMIX Main Menu” on page 91 for information about using these.

Attention: Before you rename a data group definition, ensure that the data group has a status of *INACTIVE.

1. Ensure that the data group is ended. If the data group is active, end it using the procedure “Ending a data group in a controlled manner” in the Using MIMIX book.

2. From the MIMIX Intermediate Main Menu, select option 11 (Configuration menu) and press Enter.

3. From the MIMIX Configuration Menu, select option 4 (Work with data group definitions) and press Enter.

4. From the Work with DG Definitions menu, type a 7 (Rename) next to the data group name you want to rename and press Enter.

5. From the Rename Data Group Definition display, specify the new name for the data group definition and press Enter.


Chapter 12

Configuring data group entries

Data group entries can identify one or many objects to be replicated or excluded from replication. You can add individual data group entries, load entries from an existing source, and change entries as needed.

The topics in this chapter include:

• “Creating data group object entries” on page 267 describes data group object entries which are used to identify library-based objects for replication. Procedures for creating these are included.

• “Creating data group file entries” on page 272 describes data group file entries which are required for user journal replication of *FILE objects. Procedures for creating these are included.

• “Creating data group IFS entries” on page 282 describes data group IFS entries which identify IFS objects for replication. Procedures for creating these are included.

• “Loading tracking entries” on page 284 describes how to manually load tracking entries for IFS objects, data areas, and data queues that are configured for user journal replication.

• “Creating data group DLO entries” on page 287 describes data group DLO entries which identify document library objects (DLOs) for replication by MIMIX system journal replication processes. Procedures for creating these are included.

• “Creating data group data area entries” on page 289 describes data group data area entries which identify data areas to be replicated by the data area poller process. Procedures for creating these are included.

• “Additional options: working with DG entries” on page 291 provides procedures for performing common data group entry functions, such as copying, removing, and displaying.

The appendix “Supported object types for system journal replication” on page 549 lists i5/OS object types and indicates whether each object type is replicated by MIMIX.


Creating data group object entries

Data group object entries are used to identify library-based objects for replication. How replication is performed for the objects identified depends on the object type and configuration settings. For object types that cannot be journaled to a user journal, system journal replication processes are used. For object types that can be journaled (*FILE, *DTAARA, and *DTAQ), values specified in the object entry and other configuration information determine whether the object is replicated through the system journal or is cooperatively processed with the user journal. For *FILE objects, several configuration options are available, some of which also require data group file entries to be configured.

For detailed concepts and requirements for supported configurations, see the following topics:

• “Identifying library-based objects for replication” on page 100

• “Identifying logical and physical files for replication” on page 105

• “Identifying data areas and data queues for replication” on page 112

When you configure MIMIX, you can create data group object entries by adding individual object entries or by using the custom load function for library-based objects.

The custom load function can simplify creating data group entries. This function generates a list of objects that match your specified criteria, from which you can selectively create data group object entries. For example, if you want to replicate all but a few of the data areas in a specific library, you could use the Add Data Group Object Entry (ADDDGOBJE) command to create a single data group object entry that includes all data areas in the library. Then, using the same object selection criteria with the custom load function, you can select from a list of data areas in the library to create exclude entries for the objects you do not want replicated.
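As a sketch of the example above, a broad include entry could be created with the ADDDGOBJE command before the custom load function is used to add exclude entries for the few unwanted objects. The parameter keywords below (DGDFN, LIB1, OBJ1, TYPE, PRCTYPE) are assumptions drawn from the prompts described in this chapter; prompt the command with F4 to verify them, and substitute your own data group and library names.

```
/* Hypothetical example: include every data area in library APPLIB
   in data group DGDFN1. Exclude entries for unwanted data areas
   would then be created with the custom load function.            */
ADDDGOBJE DGDFN(DGDFN1) LIB1(APPLIB) OBJ1(*ALL) TYPE(*DTAARA) PRCTYPE(*INCLD)
```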

Once you have created data group object entries, you can tailor them to meet your requirements. You can also use the #DGFE audit or the Check Data Group File Entries (CHKDGFE) command to ensure that the correct file entries exist for the object entries configured for the specified data group.

Loading data group object entries

In this procedure, you specify selection criteria that result in a list of objects with similar characteristics. From the list, you can select multiple objects for which MIMIX will create appropriate data group object entries. You can customize individual entries later, if necessary.

From the management system, do the following to create a custom load of object entries:

1. From the MIMIX Intermediate Main Menu, type a 1 (Work with data groups) and press Enter.

2. From the Work with Data Groups display, type a 20 (Object entries) next to the data group you want and press Enter.

3. The Work with DG Object Entries display appears. Press F19 (Load).


4. The Load DG Object Entries (LODDGOBJE) display appears. Do the following to specify the selection criteria:

a. Identify the library and objects to be considered. Specify values for the System 1 library and System 1 object prompts.

b. If necessary, specify values for the Object type, Attribute, System 2 library, and System 2 object prompts.

c. At the Process type prompt, specify whether resulting data group object entries should include or exclude the identified objects.

d. Specify appropriate values for the Cooperate with database and Cooperating object types prompts. To ensure that journaled files, data areas, and data queues will be replicated from the user journal, you must specify the object types.

e. Ensure that the remaining prompts contain the values you want for the data group object entries that will be created. Press Page Down to see all of the prompts.

5. To specify file entry options that will override those set in the data group definition, do the following:

a. Press F9 (All parameters).

b. Press Page Down until you locate the File entry options prompt.

c. Specify the values you need on the elements of the File entry options prompt.

6. To generate the list of objects, press Enter.

Note: If you skipped Step 5, you may need to press Enter multiple times.

7. The Load DG Object Entries display appears with the list of objects that matched your selection criteria. Either type a 1 (Select) next to the objects you want or press F21 (Select all). Then press Enter.

8. If necessary, you can use “Adding or changing a data group object entry” on page 268 to customize values for any of the data group object entries.

Synchronize the objects identified by data group entries before starting replication processes or running MIMIX audits. The entries will be available to replication processes after the data group is ended and restarted. This includes after the nightly restart of MIMIX jobs. The entries will be available to MIMIX audits the next time an audit runs.

Adding or changing a data group object entry

Note: If you are converting a data group to use user journal replication for data areas or data queues, use this procedure when directed by “Checklist: Change *DTAARA, *DTAQ, IFS objects to user journaling” on page 154.

From the management system, do the following to add a new data group object entry or change an existing entry:

1. From the MIMIX Intermediate Main Menu, type a 1 (Work with data groups) and press Enter.


2. From the Work with Data Groups display, type a 20 (Object entries) next to the data group you want and press Enter.

3. The Work with DG Object Entries display appears. Do one of the following:

• To add a new entry, type a 1 (Add) next to the blank line at the top of the list and press Enter.

• To change an existing entry, type a 2 (Change) next to the entry you want and press Enter.

4. The appropriate Data Group Object Entry display appears. When adding an entry, you must specify values for the System 1 library and System 1 object prompts.

Note: When changing an existing object entry to enable replication of data areas or data queues from a user journal (COOPDB(*YES)), make sure that you specify only the objects you want to enable for the System 1 object prompt. Otherwise, all objects in the library specified for System 1 library will be enabled.

5. If necessary, specify a value for the Object type prompt.

6. Press F9 (All parameters).

7. If necessary, specify values for the Attribute, System 2 library, System 2 object, and Object auditing value prompts.

8. At the Process type prompt, specify whether resulting data group object entries should include (*INCLD) or exclude (*EXCLD) the identified objects.

9. Specify appropriate values for the Cooperate with database and Cooperating object types prompts.

Note: To ensure that journaled files, data areas, or data queues will be replicated from the user journal, you must specify *YES for Cooperate with database and you must specify the appropriate object types for Cooperating object types.

10. Ensure that the remaining prompts contain the values you want for the data group object entries that will be created. Press Page Down to see more prompts.

11. To specify file entry options that will override those set in the data group definition, do the following:

a. If necessary, press Page Down to locate the File entry options prompt.

b. Specify the values you need on the elements of the File entry options prompt.

12. Press Enter.

13. For object entries configured for user journal replication of data areas or data queues, return to Step 7 in procedure “Checklist: Change *DTAARA, *DTAQ, IFS objects to user journaling” on page 154 to complete additional steps necessary to complete the conversion.

Synchronize the objects identified by data group entries before starting replication processes or running MIMIX audits. The entries will be available to replication processes after the data group is ended and restarted. This includes after the nightly restart of MIMIX jobs. The entries will be available to MIMIX audits the next time an audit runs.


Creating data group file entries

Data group file entries are required for user journal replication of *FILE objects.

When you configure MIMIX, you can create data group file entry information by creating data group file entries individually or by loading entries from another source. Once you have created the file entries, you can tailor them to meet your requirements.

Note: If you plan to use either MIMIX Dynamic Apply or legacy cooperative processing, files must be defined by both data group object entries and data group file entries. It is strongly recommended that you create data group object entries first. Then, load the data group file entries from the object entry information defined for the files. You can use the #DGFE audit or the Check Data Group File Entries (CHKDGFE) command to ensure that the correct file entries exist for the object entries configured for the specified data group.
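The audit described in the note above can also be invoked from a command line after the file entries are loaded. The invocation below is a sketch; DGDFN is assumed to be the keyword for the data group name, so prompt CHKDGFE with F4 to confirm the actual parameters.

```
/* Hypothetical invocation: verify that the correct file entries
   exist for the object entries configured for data group DGDFN1. */
CHKDGFE DGDFN(DGDFN1)
```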

For detailed concepts and requirements for supported configurations, see the following topics:

• “Identifying library-based objects for replication” on page 100

• “Identifying logical and physical files for replication” on page 105

Loading file entries

If you need to create data group file entries for many files, you can have MIMIX create the entries for you using the Load Data Group File Entries (LODDGFE) command. The Configuration source (CFGSRC) parameter supports loading from a variety of sources, listed below in the order most commonly used:

• *DGOBJE - File entry information is loaded from the information in data group object entries configured for the data group. If you are configuring to use MIMIX Dynamic Apply or legacy cooperative processing, this value is recommended.

• *NONE - File entry information is loaded from a library on either the source or target system, as determined by the values specified for the System 1 library (LIB1), System 2 library (LIB2), and Load from system (LODSYS) parameters.

• *JRNDFN - File entry information is loaded from a journal specified in the journal definition associated with the specified data group. File entries will be created for all files currently journaled to the journal specified in the journal definition.

• *DGFE - File entry information is loaded from data group file entries defined to another data group. This option supports loading from version 4 and version 5 data groups on the same system. This value is typically used when loading file entries from a data group in a different installation of MIMIX.

When loading from a data group, you can also specify the source from which file entry options are loaded, and override elements if needed. The Default FE options source (FEOPTSRC) parameter determines whether file entry options are loaded from the specified configuration source (*CFGSRC) or from the data group definition (*DGDFT). Any file entry option with a value of *DFT is loaded from the specified source. Any values specified on elements of the File entry options (FEOPT) parameter override the values loaded from the source specified on the FEOPTSRC parameter for all data group file entries created by a load request.

Regardless of where the configuration source and file entry option source are located, the Load Data Group File Entries (LODDGFE) command must be used from a system designated as a management system.

Note: The Load Data Group File Entries (LODDGFE) command performs a journal verification check on the file entries using the Verify Journal File Entries (VFYJRNFE) command. In order to accurately determine whether files are being journaled to the target system, you should first perform a save and restore operation to synchronize the files to the target system before loading the data group file entries.

Loading file entries from a data group’s object entries

This topic contains examples and a procedure. The examples illustrate the flexibility available for loading file entry options.

Example - Load from the same data group: This example illustrates how to create file entries when converting a data group to use MIMIX Dynamic Apply. In this example, data group DGDFN1 is being converted. The data group definition specifies *SYS1 as its data source (DTASRC). However, in this example, file entries will be loaded from the target system to take advantage of a known synchronization point at which replication will later be started.

LODDGFE DGDFN(DGDFN1) CFGSRC(*DGOBJE) UPDOPT(*ADD) LODSYS(*SYS2) SELECT(*NO)

Since no value was specified for FROMDGDFN, its default value *DGDFN causes the file entries to load from the existing object entries for DGDFN1. The value *SYS2 for LODSYS causes this example configuration to load from its target system. Entries are added (UPDOPT(*ADD)) to the existing configuration. Since all files identified by object entries are wanted, SELECT(*NO) bypasses the selection list. The data group file entries created for DGDFN1 have file entry options that match those found in the object entries because no values were specified for the FEOPTSRC or FEOPT parameters.

Example - Load from another data group with mixed sources for file entry options: The file entries for data group DGDFN1 are created by loading from the object entries for data group DGDFN2, with file entry options loaded from multiple sources.

LODDGFE DGDFN(DGDFN1) CFGSRC(*DGOBJE) FROMDGDFN(DGDFN2) FEOPT(*CFGSRC *DGDFT *CFGSRC *DGDFT)

The data group file entries created for DGDFN1 are loaded from the configuration information in the object entries for DGDFN2, with file entry options coming from multiple sources. Because the command specified the first element (Journal image) and third element (Replication type) of the file entry options (FEOPT) as *CFGSRC, the resulting file entries have the same values for those elements as the data group object entries for DGDFN2. Because the command specified the second element (Omit open/close entries) and the fourth element (Lock member during apply) as *DGDFT, these elements are loaded from the data group definition. The rest of the file entry options are loaded from the configuration source (object entries for DGDFN2).


Procedure: Use this procedure to create data group file entries from the object entries defined to a data group.

Note: The data group must be ended before using this procedure. Configuration changes resulting from loading file entries are not effective until the data group is restarted.

From the management system, do the following:

1. From the MIMIX Intermediate Main Menu, type a 1 (Work with data groups) and press Enter.

2. From the Work with Data Groups display, type a 17 (File entries) next to the data group you want and press Enter.

3. The Work with DG File Entries display appears. Press F19 (Load).

4. The Load Data Group File Entries (LODDGFE) display appears. The name of the data group for which you are creating file entries and the Configuration source value of *DGOBJE are pre-selected. Press Enter.

5. The following prompts appear on the display. Specify appropriate values.

a. From data group definition - To load from entries defined to a different data group, specify the three-part name of the data group.

b. Load from system - Ensure that the value specified is appropriate. For most environments, files should be loaded from the source system of the data group you are loading. (This value should be the same as the value specified for Data source in the data group definition.)

c. Update option - If necessary, specify the value you want.

d. Default FE options source - Specify the source for loading values for default file entry options. Each element in the file entry options is loaded from the specified location unless you explicitly specify a different value for an element in Step 6.

6. Optionally, you can specify a file entry option value to override those loaded from the configuration source. Do the following:

a. Press F10 (Additional parameters).

b. Specify values as needed for the elements of the File entry options prompts. Any values you specify will be used for all of the file entries created with this procedure.

7. Press Enter. The LODDGFE Entry Selection List display appears with a list of the files identified by the specified configuration source.

8. Either type a 1 (Load) next to the files that you want or press F21 (Select all).

9. To create the file entries, press Enter.

All selected files identified from the configuration source are represented in the resulting file entries. Each generated file entry includes all members of the file. If necessary, you can use “Changing a data group file entry” on page 279 to customize values for any of the data group file entries.


Loading file entries from a library

Example: The data group file entries are created by loading from a library named TESTLIB on the source system. This example assumes the configuration is set up so that system 1 in the data group definition is the source for replication.

LODDGFE DGDFN(DGDFN1) CFGSRC(*NONE) LIB1(TESTLIB)

Since the FEOPT parameter was not specified, the resulting data group file entries are created with a value of *DFT for all of the file entry options. Because there is no MIMIX configuration source specified, the value *DFT results in the file entry options specified in the data group definition being used.

Procedure: Use this procedure to create data group file entries from a library on either the source system or the target system.

Note: The data group must be ended before using this procedure. Configuration changes resulting from loading file entries are not effective until the data group is restarted.

From the management system, do the following:

1. From the MIMIX Intermediate Main Menu, type a 1 (Work with data groups) and press Enter.

2. From the Work with Data Groups display, type a 17 (File entries) next to the data group you want and press Enter.

3. The Work with DG File Entries display appears. Press F19 (Load).

4. The Load Data Group File Entries (LODDGFE) display appears with the name of the data group for which you are creating file entries. At the Configuration source prompt, specify *NONE and press Enter.

5. Identify the location of the files to be used for loading. For common configurations, you can accomplish this by specifying a library name at the System 1 library prompt and accepting the default values for the System 2 library, Load from system, and File prompts.

If you are using system 2 as the data source for replication or if you want the library name to be different on each system, then you need to modify these values to appropriately reflect your data group defaults.

6. If necessary, specify the values you want for the following:

Update option prompt

Add entry for each member prompt

7. The value of the Default FE options source prompt is ignored when loading from a library. To optionally specify file entry options, do the following:

a. Press F10 (Additional parameters).

b. Specify values as needed for the elements of the File entry options prompts. Any values you specify will be used for all of the file entries created with this procedure.

8. Press Enter. The LODDGFE Entry Selection List display appears with a list of the files identified by the specified configuration source.


9. Either type a 1 (Load) next to the files that you want or press F21 (Select all).

10. To create the file entries, press Enter.

All selected files identified from the configuration source are represented in the resulting file entries. If necessary, you can use “Changing a data group file entry” on page 279 to customize values for any of the data group file entries.

Loading file entries from a journal definition

Example: The data group file entries are created by loading from the journal associated with system 1 of the data group. This example assumes the configuration is set up so that system 1 in the data group definition is the source for replication. The journal definition 1 specified in the data group definition identifies the journal.

LODDGFE DGDFN(DGDFN1) CFGSRC(*JRNDFN) LODSYS(*SYS1)

Since the FEOPT parameter was not specified, the resulting data group file entries are created with a value of *DFT for all of the file entry options. Because there is no MIMIX configuration source specified, the value *DFT results in the file entry options specified in the data group definition being used.

Procedure: Use this procedure to create data group file entries from the journal associated with a journal definition specified for the data group.

Note: The data group must be ended before using this procedure. Configuration changes resulting from loading file entries are not effective until the data group is restarted.

From the management system, do the following:

1. From the MIMIX Intermediate Main Menu, type a 1 (Work with data groups) and press Enter.

2. From the Work with Data Groups display, type a 17 (File entries) next to the data group you want and press Enter.

3. The Work with DG File Entries display appears. Press F19 (Load).

4. The Load Data Group File Entries (LODDGFE) display appears with the name of the data group for which you are creating file entries. At the Configuration source prompt, specify *JRNDFN and press Enter.

File and library names on the source and target systems are set to the same names for the load operation.

5. At the Load from system prompt, ensure that the value specified represents the appropriate system. The journal definition associated with the specified system is used for loading. For common configurations, the value that corresponds to the source system of the data group you are loading should be used. (This value should match the value specified for Data source in the data group definition.)

6. If necessary, specify the value you want for the Update option prompt.

7. The value of the Default FE options source prompt is ignored when loading from a journal definition. To optionally specify file entry options, do the following:

a. Press F10 (Additional parameters).


b. Specify values as needed for the elements of the File entry options prompts. Any values you specify will be used for all of the file entries created with this procedure.

8. Press Enter. The LODDGFE Entry Selection List display appears with a list of the files identified by the specified configuration source.

9. Either type a 1 (Load) next to the files that you want or press F21 (Select all).

10. To create the file entries, press Enter.

All selected files identified from the configuration source are represented in the resulting file entries. Each generated file entry includes all members of the file. If necessary, you can use “Changing a data group file entry” on page 279 to customize values for any of the data group file entries.

Loading file entries from another data group’s file entries

Example 1: The data group file entries are created by loading from file entries for another data group, DGDFN2.

LODDGFE DGDFN(DGDFN1) CFGSRC(*DGFE) FROMDGDFN(DGDFN2)

Since the FEOPT parameter was not specified, the resulting data group file entries for DGDFN1 are created with a value of *DFT for all of the file entry options. Because the configuration source is another data group, the value *DFT results in file entry options which match those specified in DGDFN2.

Example 2: The data group file entries are created by loading from file entries for another data group, DGDFN2, in another installation, MXTEST.

LODDGFE DGDFN(DGDFN1) CFGSRC(*DGFE) PRDLIB(MXTEST) FROMDGDFN(DGDFN2)

Since the FEOPT parameter was not specified, the resulting data group file entries for DGDFN1 are created with a value of *DFT for all of the file entry options. Because the configuration source is another data group in another installation, the value *DFT results in file entry options which match those specified in DGDFN2 in installation MXTEST.

Procedure: Use this procedure to create data group file entries from the file entries defined to another data group.

Note: The data group must be ended before using this procedure. Configuration changes resulting from loading file entries are not effective until the data group is restarted.

From the management system, do the following:

1. From the MIMIX Intermediate Main Menu, type a 1 (Work with data groups) and press Enter.

2. From the Work with Data Groups display, type a 17 (File entries) next to the data group you want and press Enter.

3. The Work with DG File Entries display appears. Press F19 (Load).

4. The Load Data Group File Entries (LODDGFE) display appears with the name of the data group for which you are creating file entries. At the Configuration source prompt, specify *DGFE and press Enter.


5. At the Production library prompt, either accept *CURRENT or specify the name of the installation library where the data group you are copying from is located.

6. At the From data group definition prompts, specify the three-part name of the data group from which you are loading.

7. If necessary, specify the value you want for the Update option prompt.

8. Specify the source for loading values for default file entry options at the Default FE options source prompt. Each element in the file entry options is loaded from the specified location unless you explicitly specify a different value for an element in Step 9.

9. If necessary, do the following to specify file entry option values that override those loaded from the configuration source:

a. Press F10 (Additional parameters).

b. Specify values as needed for the elements of the File entry options prompts. Any values you specify will be used for all of the file entries created with this procedure.

10. Press Enter. The LODDGFE Entry Selection List display appears with a list of the files identified by the specified configuration source.

11. Either type a 1 (Load) next to the files that you want or press F21 (Select all).

12. To create the file entries, press Enter.

All selected files identified from the configuration source are represented in the resulting file entries. Each generated file entry includes all members of the file. If necessary, you can use “Changing a data group file entry” on page 279 to customize values for any of the data group file entries.

Updated for 5.0.08.00.

Adding a data group file entry

When you add a single data group file entry to a data group definition, the configuration is dynamically updated and MIMIX automatically starts journaling of the file on the source system if the file exists and is not already journaled. Special entries are inserted into the journal data stream to enable the dynamic update. The added data group file entry is recognized by MIMIX as soon as each active process receives the special entries. For each MIMIX process, there may be a delay before the addition is recognized, especially for very active data groups.
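Such an addition can also be sketched from a command line. ADDDGFE is named in this topic, but the FILE1 and MBR1 keywords are assumptions based on the System 1 File, Library, and Member prompts, and MYLIB/MYFILE is a placeholder; prompt the command with F4 to confirm the exact keywords:

```cl
/* Dynamically add a single file to replication. MIMIX starts    */
/* journaling the file on the source system if needed.           */
ADDDGFE DGDFN(DGDFN1) FILE1(MYLIB/MYFILE) MBR1(*ALL)
```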

Use this procedure to add a data group file entry to a data group.

From the management system, do the following:

1. From the MIMIX Intermediate Main Menu, type a 1 (Work with data groups) and press Enter.

2. From the Work with Data Groups display, type a 17 (File entries) next to the data group you want and press Enter.

3. From the Work with DG File Entries display, type a 1 (Add) next to the blank line at the top of the list and press Enter.

4. The Add Data Group File Entry (ADDDGFE) display appears. At the System 1 File and Library prompts, specify the file that you want to replicate.

5. By default, all members in the file are replicated. If you want to replicate only a specific member, specify its name at the Member prompt.

Note: All replicated members of a file must be in the same database apply session. For data groups configured for multiple apply sessions, specify the apply session on the File entry options prompt. See Step 7.

6. Verify that the values of the remaining prompts on the display are what you want. If necessary, change the values as needed.

Notes:

• If you change the value of the Dynamically update prompt to *NO, you need to end and restart the data group before the addition is recognized.

• If you change the value of the Start journaling of file prompt to *NO and the file is not already journaled, MIMIX will not be able to replicate changes until you start journaling the file.

7. Optionally, you can specify file entry options that will override those defined for the data group. Do the following:

a. Press F10 (Additional parameters), then press Page Down.

b. Specify values as needed for the elements of the File entry options prompts. Any values you specify will be used for the file entry created with this procedure.

8. Press Enter to create the data group file entry.

Changing a data group file entry

Use this procedure to change an existing data group file entry.

From the management system, do the following:

1. From the MIMIX Intermediate Main Menu, type a 1 (Work with data groups) and press Enter.

2. From the Work with Data Groups display, type a 17 (File entries) next to the data group you want and press Enter.

3. Locate the file entry you want on the Work with DG File Entries display. Type a 2 (Change) next to the entry you want and press Enter.

4. The Change Data Group File Entry (CHGDGFE) display appears. Press F10 (Additional parameters) to see all available prompts. You can change any of the values shown on the display.

Notes:

• If the file is currently being journaled and transactions are being applied, do not change the values specified for To system 1 file (TOFILE1) and To member (TOMBR1).


• All replicated members of a file must be in the same database apply session. For data groups configured for multiple apply sessions, specify the apply session on the File entry options prompt.

5. To accept your changes, press Enter.

The replication processes do not recognize the change until the data group has been ended and restarted.
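A hedged command-line sketch of such a change: TOFILE1 is documented in the note above, while DGDFN1, the FILE1 keyword, and the library and file names are placeholders or assumptions; prompt CHGDGFE with F4 to confirm:

```cl
/* Redirect replication of MYLIB/MYFILE to a differently named   */
/* target file. Do not change TOFILE1 or TOMBR1 while the file   */
/* is journaled and transactions are being applied.              */
CHGDGFE DGDFN(DGDFN1) FILE1(MYLIB/MYFILE) TOFILE1(MYLIB/NEWFILE)
```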


Creating data group IFS entries

Data group IFS entries identify IFS objects for replication. The identified objects are replicated through the system journal unless the data group IFS entries are explicitly configured to allow the objects to be replicated through the user journal.

Topic “Identifying IFS objects for replication” on page 118 provides detailed concepts and identifies requirements for configuration variations for IFS objects. Supported file systems are included, as well as examples of the effect that multiple data group IFS entries have on object auditing values.

Adding or changing a data group IFS entry

Note: If you are converting a data group to use user journal replication for IFS objects, use this procedure when directed by “Checklist: Change *DTAARA, *DTAQ, IFS objects to user journaling” on page 154.

Changes become effective after one of the following occurs:

• The data group is ended and restarted

• Nightly maintenance routines end and restart MIMIX jobs

• A MIMIX audit that uses IFS entries to select objects to audit is started.

From the management system, do the following to add a new data group IFS entry or change an existing IFS entry:

1. From the MIMIX Intermediate Main Menu, type a 1 (Work with data groups) and press Enter.

2. From the Work with Data Groups display, type a 22 (IFS entries) next to the data group you want and press Enter.

3. The Work with Data Group IFS Entries display appears. Do one of the following:

• To add a new entry, type a 1 (Add) next to the blank line at the top of the display and press Enter.

• To change an existing entry, type a 2 (Change) next to the entry you want and press Enter.

4. The appropriate Data Group IFS Entry display appears. When adding an entry, you must specify a value for the System 1 object prompt.

Notes:

• The object name must begin with the '/' character and can be up to 512 characters in total length. The object name can be a simple name, a name qualified with the name of the directory in which the object is located, or a generic name that contains one or more characters followed by an asterisk (*), such as /ABC*. Any component of the object name contained between two '/' characters cannot exceed 255 characters in length.

• All objects in the specified path are selected. When changing an existing IFS entry to enable replication from a user journal (COOPDB(*YES)), make sure that you specify only the IFS objects you want to enable.


5. If necessary, specify values for the System 2 object and Object auditing value prompts.

6. At the Process type prompt, specify whether resulting data group IFS entries should include (*INCLD) or exclude (*EXCLD) the identified objects.

7. Specify the appropriate value for the Cooperate with database prompt. To ensure that journaled IFS objects can be replicated from the user journal, specify *YES. To replicate from the system journal, specify *NO.

8. If necessary, specify a value for the Object retrieval delay prompt.

9. Ensure that the remaining prompts contain the values you want for the data group object entries that will be created. Press Page Down to see more prompts.

10. Press Enter to create the IFS entry.

11. For IFS entries configured for user journal replication, return to Step 7 in procedure “Checklist: Change *DTAARA, *DTAQ, IFS objects to user journaling” on page 154 to complete additional steps necessary to complete the conversion.

Synchronize the objects identified by data group entries before starting replication processes or running MIMIX audits. The entries will be available to replication processes after the data group is ended and restarted. This includes after the nightly restart of MIMIX jobs. The entries will be available to MIMIX audits the next time an audit runs.
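The steps above can be collapsed into a single command. This is a sketch only: the command name ADDDGIFSE and the OBJ1 keyword are assumptions based on the display and its prompts, /appdata is a placeholder path, and the PRCTYPE and COOPDB values come from this procedure; prompt with F4 to confirm:

```cl
/* Include all IFS objects in the /appdata path and enable       */
/* cooperative (user journal) replication for them.              */
ADDDGIFSE DGDFN(DGDFN1) OBJ1('/appdata') PRCTYPE(*INCLD) COOPDB(*YES)
```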


Loading tracking entries

Tracking entries are associated with the replication of IFS objects, data areas, and data queues with advanced journaling techniques. A tracking entry must exist for each existing IFS object, data area, or data queue identified for replication.

IFS tracking entries identify existing IFS stream files on the source system that have been identified as eligible for replication with advanced journaling by the collection of data group IFS entries defined to a data group. Similarly, object tracking entries identify existing data areas and data queues on the source system that have been identified as eligible for replication using advanced journaling by the collection of data group object entries defined to a data group.

When you initially configure a data group, you must load tracking entries and start journaling for the objects which they identify. Similarly, if you add new or change existing data group IFS entries or object entries, tracking entries for any additional IFS objects, data areas, or data queues must be loaded and journaling must be started on the objects which they identify.

Loading IFS tracking entries

After you have configured the data group IFS entries for advanced journaling, use this procedure to load IFS tracking entries which match existing IFS objects. This procedure uses the Load DG IFS Tracking Entries (LODDGIFSTE) command. Default values for the command will load IFS tracking entries from objects on the system identified as the source for replication without duplicating existing IFS tracking entries.

Note: The data group must be ended before using this procedure. Configuration changes resulting from loading tracking entries are not effective until the data group is restarted.

From the management system, do the following:

1. Ensure that the data group is ended. If the data group is active, end it using the procedure “Ending a data group in a controlled manner” in the Using MIMIX book.

2. On a command line, type LODDGIFSTE and press F4 (Prompt). The Load DG IFS Tracking Entries (LODDGIFSTE) command appears.

3. At the prompts for Data group definition, specify the three-part name of the data group for which you want to load IFS tracking entries.

4. Verify that the value specified for the Load from system prompt is appropriate for your environment. If necessary, specify a different value.

5. Verify that the value specified for the Update option prompt is appropriate for your environment. If necessary, specify a different value.

6. At the Submit to batch prompt, specify the value you want.

7. Press Enter.

8. If you specified *NO for batch processing, the request is processed. If you specified *YES, you will see additional prompts for Job description and Job name. If necessary, specify different values and press Enter.


9. You should receive message LVI3E2B indicating the number of tracking entries loaded for the data group.

Note: The command used in this procedure does not start journaling on the tracking entries. Start journaling for the tracking entries when indicated by your configuration checklist.
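Taking the command defaults, the procedure above reduces to running a single command (DGDFN1 is a placeholder three-part data group name):

```cl
/* Load IFS tracking entries from the source system without      */
/* duplicating existing entries. Run only while the data group   */
/* is ended; journaling is started in a separate step.           */
LODDGIFSTE DGDFN(DGDFN1)
```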

Loading object tracking entries

After you have configured the data group object entries for advanced journaling, use this procedure to load object tracking entries which match existing data areas and data queues. This procedure uses the Load DG Obj Tracking Entries (LODDGOBJTE) command. Default values for the command will load object tracking entries from objects on the system identified as the source for replication without duplicating existing object tracking entries.

Note: The data group must be ended before using this procedure. Configuration changes resulting from loading tracking entries are not effective until the data group is restarted.

From the management system, do the following:

1. Ensure that the data group is ended. If the data group is active, end it using the procedure “Ending a data group in a controlled manner” in the Using MIMIX book.

2. On a command line, type LODDGOBJTE and press F4 (Prompt). The Load DG Obj Tracking Entries (LODDGOBJTE) command appears.

3. At the prompts for Data group definition, specify the three-part name of the data group for which you want to load object tracking entries.

4. Verify that the value specified for the Load from system prompt is appropriate for your environment. If necessary, specify a different value.

5. Verify that the value specified for the Update option prompt is appropriate for your environment. If necessary, specify a different value.

6. At the Submit to batch prompt, specify the value you want.

7. Press Enter.

8. If you specified *NO for batch processing, the request is processed. If you specified *YES, you will see additional prompts for Job description and Job name. If necessary, specify different values and press Enter.

9. You should receive message LVI3E2B indicating the number of tracking entries loaded for the data group.

Note: The command used in this procedure does not start journaling on the tracking entries. Start journaling for the tracking entries when indicated by your configuration checklist.
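As with IFS tracking entries, accepting the defaults lets this whole procedure run as one command (DGDFN1 is a placeholder):

```cl
/* Load object tracking entries for data areas and data queues   */
/* identified by the data group object entries. Journaling on    */
/* the loaded tracking entries must be started separately.       */
LODDGOBJTE DGDFN(DGDFN1)
```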

Updated for 5.0.08.00.


Creating data group DLO entries

Data group DLO entries identify document library objects (DLOs) for replication by MIMIX system journal replication processes.

When you configure MIMIX, you can create data group DLO entries by loading from a generic entry and selecting from documents in the list, or by creating individual DLO entries. Once you have created the DLO entries, you can tailor them to meet your requirements.

For detailed concepts and requirements, see “Identifying DLOs for replication” on page 124.

Loading DLO entries from a folder

If you need to create data group DLO entries for a group of documents within a folder, you can specify information so that MIMIX will create the data group DLO entries for you. (You can customize individual entries later, if necessary.)

The user profile you use to perform this task must be enrolled in the system distribution directory on the management system.

Note: The MIMIXOWN user profile is automatically added to the system directory when MIMIX is installed. This entry is required for DLO replication and should not be removed.

From the management system, do the following to create DLO entries by loading from a list.

1. From the MIMIX Intermediate Main Menu, type a 1 (Work with data groups) and press Enter.

2. From the Work with Data Groups display, type a 21 (DLO entries) next to the data group you want and press Enter.

3. The Work with DG DLO Entries display appears. Press F19 (Load).

4. The Load DG DLO Entries (LODDGDLOE) display appears. Do the following to specify the selection criteria:

a. Identify the folder and documents to be considered. Specify values for the System 1 folder and System 1 document prompts.

b. If necessary, specify values for the Owner, System 2 folder, System 2 object, and Object auditing value prompts.

c. At the Process type prompt, specify whether resulting data group DLO entries should include or exclude the identified documents.

d. If necessary, specify a value for the Object retrieval delay prompt.

e. Press Enter.

5. Additional prompts appear that let you optionally use batch processing and load entries without selecting them from a list. Press Enter.

6. The Load DG DLO Entries display appears with the list of documents that matched your selection criteria. Either type a 1 (Select) next to the documents you want or press F21 (Select all). Then press Enter.

7. If necessary, you can use “Adding or changing a data group DLO entry” on page 288 to customize values for any of the data group DLO entries.

Synchronize the DLOs identified by data group entries before starting replication processes or running MIMIX audits. The entries will be available to replication processes after the data group is ended and restarted. This includes after the nightly restart of MIMIX jobs. The entries will be available to MIMIX audits the next time an audit runs.
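The prompts in this procedure correspond to parameters of the Load DG DLO Entries (LODDGDLOE) command, so the load can also be requested directly from a command line. The sketch below is illustrative only: the data group name is hypothetical and the parameter keywords shown are assumptions, not confirmed syntax. Prompt the command (F4) to verify the keywords on your system.

```cl
/* Sketch: load DLO entries for all documents in folder ACCTFLR      */
/* into data group APP1. Data group name and keywords (DGDFN, FLR1,  */
/* DOC1, PRCTYPE) are assumptions; prompt LODDGDLOE to verify.       */
LODDGDLOE DGDFN(APP1 SYSTEM1 SYSTEM2) +
          FLR1(ACCTFLR) +
          DOC1(*ALL) +
          PRCTYPE(*INCLD)
```

After the load completes, customize any individual entries as described in “Adding or changing a data group DLO entry.”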

Adding or changing a data group DLO entry

The data group must be ended and restarted before any changes can become effective.

From the management system, do the following to add or change a DLO entry:

1. From the MIMIX Intermediate Main Menu, type a 1 (Work with data groups) and press Enter.

2. From the Work with Data Groups display, type a 21 (DLO entries) next to the data group you want and press Enter.

3. The Work with DG DLO Entries display appears. Do one of the following:

• To add a new entry, type a 1 (Add) next to the blank line at the top of the list and press Enter.

• To change an existing entry, type a 2 (Change) next to the entry you want and press Enter. Then skip to Step 5.

4. If you are adding a new DLO entry, the Add Data Group DLO Entry display appears. Identify the library and objects to be considered. Specify values for the System 1 folder and System 1 document prompts.

5. Do the following:

a. If necessary, specify values for the Owner, System 2 folder, System 2 object, and Object auditing value prompts.

b. At the Process type prompt, specify whether resulting data group DLO entries should include or exclude the identified documents.

c. If necessary, specify a value for the Object retrieval delay prompt.

6. Press Enter.

Synchronize the DLOs identified by data group entries before starting replication processes or running MIMIX audits. The entries will be available to replication processes after the data group is ended and restarted. This includes after the nightly restart of MIMIX jobs. The entries will be available to MIMIX audits the next time an audit runs.
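The Add Data Group DLO Entry display reached in Step 3 can also be reached by prompting a command. The sketch below assumes the command name ADDDGDLOE and the parameter keywords shown; verify both by prompting (F4) on your system before use.

```cl
/* Sketch: exclude one document from replication for data group      */
/* APP1. Command name and keywords are assumptions; prompt to        */
/* verify. The folder, document, and data group names are examples.  */
ADDDGDLOE DGDFN(APP1 SYSTEM1 SYSTEM2) +
          FLR1(ACCTFLR) +
          DOC1(PAYROLL) +
          PRCTYPE(*EXCLD)
```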

288

Page 289: MIMIX Reference

Creating data group data area entries

This procedure creates data group data area entries that identify data areas to be replicated by the data area poller process.

Note: The data area poller method is not the preferred way to replicate data areas. The preferred method of replicating data areas is with user journal replication processes using advanced journaling. The next best method is identifying them with data group object entries for system journal replication processes.

For detailed concepts and requirements for supported configurations, see the following topics:

• “Identifying library-based objects for replication” on page 100

• “Identifying data areas and data queues for replication” on page 112

You can load all data group data area entries from a library or you can add individual data area entries. Once the data group data area entries are created, you can tailor them to meet your requirements by adding, changing, or deleting entries. You must define data group data area entries from the management system. The data area entries can be created from libraries on either system. If the system manager is configured and running, all created and changed data group data area entries are sent to the network systems automatically.

Loading data area entries for a library

Before any addition or change is recognized, you need to end and restart the data group.

From the management system, do the following to load data area entries for use with the data area poller:

1. From the MIMIX Intermediate Main Menu, type a 1 (Work with data groups) and press Enter.

2. From the Work with Data Groups display, type a 19 (Data area entries) next to the data group you want and press Enter.

3. The Work with DG Data Area Entries display appears. Press F19 (Load).

4. The Load DG Data Area Entries (LODDGDAE) display appears. The values of the System 1 library and System 2 library prompts indicate the name of the library on the respective systems. Specify a name for the System 1 library prompt and verify that the value shown for the System 2 library prompt is what you want.

5. Ensure that the value of the Load from system prompt indicates the system from which you want to load data areas.

6. Verify that the remaining prompts on the display contain the values you want. If necessary, change the values.

7. To create the data group data area entries, press Enter. If you submitted the job for batch processing, MIMIX sends a message indicating that a data areas load job has been submitted. A completion message is sent when the load has finished.
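Because the display in Step 4 is the prompt for the Load DG Data Area Entries (LODDGDAE) command, the same load can be typed directly. A sketch follows; the data group name is an example and the parameter keywords are assumptions — prompt the command (F4) to verify.

```cl
/* Sketch: load poller data area entries from library APPLIB on      */
/* system 1 for data group APP1. The LIB1 and LODSYS keywords are    */
/* assumptions; prompt LODDGDAE to verify.                           */
LODDGDAE DGDFN(APP1 SYSTEM1 SYSTEM2) +
         LIB1(APPLIB) +
         LODSYS(*SYS1)
```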

Adding or changing a data group data area entry

Before any addition or change is recognized, you need to end and restart the data group.

From the management system, do the following to add a new entry or change an existing data area entry for use with the data area poller:

1. From the MIMIX Intermediate Main Menu, type a 1 (Work with data groups) and press Enter.

2. From the Work with Data Groups display, type a 19 (Data area entries) next to the data group you want and press Enter.

3. From the Work with DG Data Area Entries display, do one of the following:

• To add a new data area entry, type a 1 (Add) at the blank line at the top of the list and press Enter. The Add Data Group Data Area Entry display appears.

• To change an existing data area entry, type a 2 (Change) next to the data group data area entry you want and press Enter. The Change Data Group Data Area Entry display appears.

4. Specify the values you want at the prompts for System 1 data area and Library and System 2 data area and Library.

5. Press Enter to create the data area entry or accept the change.
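The Add Data Group Data Area Entry display can likewise be driven from a prompted command. The sketch below assumes the command name ADDDGDAE and the keywords shown; verify both by prompting (F4) on your system.

```cl
/* Sketch: add one data area entry for the poller, replicating      */
/* APPLIB/CTLDTA to the same name on system 2. Command name and     */
/* keywords are assumptions; object names are examples.             */
ADDDGDAE DGDFN(APP1 SYSTEM1 SYSTEM2) +
         DTAARA1(CTLDTA) LIB1(APPLIB) +
         DTAARA2(CTLDTA) LIB2(APPLIB)
```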


Additional options: working with DG entries

The procedures for performing common functions, such as copying, removing, and displaying, are very similar for all types of data group entries used by MIMIX. Each generic procedure in this topic indicates the type of data group entry for which it can be used.

Copying a data group entry

Use this procedure from the management system to copy a data group entry from one data group definition to another data group definition. The data group definition to which you are copying must exist.

To copy a data group entry to another data group definition, do the following:

1. From the Work with DG Definitions display, type the option you want next to the data group from which you are copying and press Enter. Any of these options will allow an entry to be copied:

Option 17 (File entries)

Option 19 (Data area entries)

Option 20 (Object entries)

Option 21 (DLO entries)

Option 22 (IFS entries)

2. The "Work with" display for the entry you selected appears. Type a 3 (Copy) next to the entry you want and press Enter.

3. The Copy display for the entry appears. Specify a name for the To definition prompt.

4. Additional prompts that are specific to the type of entry appear on the display. The values of these prompts define the data to be replicated by the definition to which you are copying. Ensure that the prompts identify the necessary information.

Table 32. Values to specify for each type of data group entry.

For file entries, provide:       To file 1, To member, To file 2
For data area entries, provide:  To system 1 data area, To system 2 data area
For object entries, provide:     System 1 library, System 1 object, Object type, Attribute
For DLO entries, provide:        System 1 folder, System 1 document, Owner
For IFS entries, provide:        To system 1 object


5. The value *NO for the Replace definition prompt prevents you from replacing an existing entry in the definition to which you are copying. If you want to replace an existing entry, specify *YES.

6. To copy the entry, press Enter.

7. For file entries, end and restart the data group being copied.

Removing a data group entry

Use this procedure from the management system to remove a data group entry from a data group definition. You may want to remove an entry when you no longer need to replicate the information that the entry identifies.

Note: For all data group entries except file entries, the change is not recognized until after the send, receive, and apply processes for the associated data group are ended and restarted.

Data group file entries support dynamic removal if you prompt the RMVDGFE command and specify Dynamically update (*YES). With *YES, you do not need to end the processes for the data group; the change is recognized as soon as each active process receives the update. If a file is on hold and you want to delete the data group file entry, it is best to use *YES. This forces all currently held entries to be deleted, causes all current entries to be ignored, and prevents additional entries from accumulating.

If you accept the default of Dynamically update (*NO), the change is not recognized until after the send, receive, and apply processes for the associated data group are ended and restarted. When you specify Dynamically update (*NO), the remove function does not clean up any records in the error/hold log. If an entry is held when you delete it, its information remains in the error/hold log. Additional transactions for the file or member may continue to accumulate in the error/hold log or be applied to the file.
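A dynamic removal as described above can be requested directly with the RMVDGFE command. In the sketch below, the data group and file names are examples, and the keywords shown for the file and Dynamically update prompts are assumptions — prompt RMVDGFE (F4) to verify the actual parameter names.

```cl
/* Sketch: dynamically remove the file entry for APPLIB/ORDERS from  */
/* data group APP1 without ending the replication processes.         */
/* FILE1 and DYNUPD are assumed keywords; prompt to verify.          */
RMVDGFE DGDFN(APP1 SYSTEM1 SYSTEM2) +
        FILE1(APPLIB/ORDERS) +
        DYNUPD(*YES)
```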

To remove an entry, do the following:

1. From the Work with DG Definitions display, type the option for the entry you want next to the data group and press Enter. Any of these options will allow an entry to be removed:

Option 17 (File entries)

Option 19 (Data area entries)

Option 20 (Object entries)

Option 21 (DLO entries)

Option 22 (IFS entries)

2. The "Work with" display for the entry you selected appears. Type a 4 (Remove) next to the entry you want and press Enter.



3. For data group file entries, a display with additional prompts appears. Specify the values you want and press Enter.

4. A confirmation display appears with a list of entries to be deleted. To delete the entries, press Enter.

Displaying a data group entry

Use this procedure to display a data group entry for a data group definition.

To display a data group entry, do the following:

1. From the Work with DG Definitions display, type the option for the entry you want next to the data group and press Enter. Any of these options will allow an entry to be displayed:

Option 17 (File entries)

Option 19 (Data area entries)

Option 20 (Object entries)

Option 21 (DLO entries)

Option 22 (IFS entries)

2. The "Work with" display for the entry you selected appears. Type a 5 (Display) next to the entry you want and press Enter.

3. The appropriate data group entry display appears. Page Down to see all of the values.

Printing a data group entry

Use this procedure to create a spooled file, which you can print, that identifies a system definition, transfer definition, journal definition, or data group definition. Not all types of entries support the print function.

To print a data group entry, do the following:

1. From the Work with DG Definitions display, type the option for the entry you want next to the data group and press Enter. Any of these options will allow an entry to be printed:

Option 17 (File entries)

Option 19 (Data area entries)

Option 22 (IFS entries)

2. The "Work with" display for the entry you selected appears. Type a 6 (Print) next to the entry you want and press Enter.

3. A spooled file is created with a name of MXDG***E, where *** is the type of entry. You can print the spooled file according to your standard print procedures.


Chapter 13

Additional supporting tasks for configuration

The tasks in this chapter provide supplemental configuration procedures. Always use the configuration checklists to guide you through the steps of standard configuration scenarios.

• “Accessing the Configuration Menu” on page 295 describes how to access the menu of configuration options from a 5250 emulator.

• “Starting the system and journal managers” on page 296 provides procedures for starting these jobs. System and journal manager jobs must be running before replication can be started.

• “Setting data group auditing values manually” on page 297 describes when to manually set the object auditing level for objects defined to MIMIX and provides a procedure for doing so.

• “Checking file entry configuration manually” on page 303 provides a procedure using the CHKDGFE command to check the data group file entries defined to a data group.

Note: The preferred method of checking is to use MIMIX AutoGuard to automatically schedule the #DGFE audit, which calls the CHKDGFE command and can automatically correct detected problems. For additional information, see “Interpreting results for configuration data - #DGFE audit” on page 580.

• “Changes to startup programs” on page 305 describes changes that you may need to make to your configuration to support remote journaling.

• “Checking DDM password validation level in use” on page 306 describes how to check whether the DDM communications infrastructure used by MIMIX Remote Journal support requires a password. This topic also describes options for ensuring that systems in a MIMIX configuration have the same password and the implications of these options.

• “Starting the DDM TCP/IP server” on page 308 describes how to start this server that is required in configurations that use remote journaling.

• “Identifying data groups that use an RJ link” on page 310 describes how to determine which data groups use a particular RJ link.

• “Using file identifiers (FIDs) for IFS objects” on page 312 describes the use of FID parameters on commands for IFS tracking entries. When IFS objects are configured for replication through the user journal, commands that support IFS tracking entries can specify a unique FID for the object on each system. This topic describes the processing resulting from combinations of values specified for the object and FID prompts.

• “Configuring restart times for MIMIX jobs” on page 313 describes how to change the time at which MIMIX jobs automatically restart. MIMIX jobs restart daily to ensure that the MIMIX environment remains operational.


Accessing the Configuration Menu

The MIMIX Configuration Menu provides access to the options you need for configuring MIMIX.

To access the MIMIX Configuration Menu, do the following:

1. Access the MIMIX Basic Main Menu. See “Accessing the MIMIX Main Menu” on page 91.

2. From the MIMIX Basic Main Menu, select option 11 (Configuration menu) and press Enter.


Starting the system and journal managers

If the system managers are running, they will automatically send configuration information to the network system as you complete configuration tasks. This procedure starts all the system managers, journal managers, and, if the system is participating in a cluster, cluster services. The system managers, journal managers, and cluster services must be active to start replication.

To start all of the system managers, journal managers, and cluster services (for a cluster environment) during configuration, do the following:

1. Access the MIMIX Basic Main Menu. See “Accessing the MIMIX Main Menu” on page 91.

2. From the MIMIX Basic Main Menu press the F21 key (Assistance level) to access the MIMIX Intermediate Main Menu.

3. Select option 2 (Work with Systems) and press Enter.

4. The Work with Systems display appears with a list of the system definitions. Type a 9 (Start) next to each of the system definitions you want and press Enter. This will start all managers on all of these systems in the MIMIX environment.

5. The Start MIMIX Managers (STRMMXMGR) display appears. Do the following:

a. Verify that *ALL appears as the value for the Manager prompt.

b. Press Enter to complete this request.

6. If you selected more than one system definition in Step 4, the Start MIMIX Managers (STRMMXMGR) display will be shown for each system definition that you selected. Repeat Step 5 for each system definition that you selected.
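Step 5 corresponds to the Start MIMIX Managers (STRMMXMGR) command, which can also be run directly for one system definition at a time. A sketch follows; the system definition name is an example and the keywords shown are assumptions — prompt STRMMXMGR (F4) to verify.

```cl
/* Sketch: start all managers defined for system definition SYSTEM1. */
/* SYSDFN and MGR are assumed keywords; prompt to verify.            */
STRMMXMGR SYSDFN(SYSTEM1) MGR(*ALL)
```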


Setting data group auditing values manually

Default behavior for MIMIX is to change the auditing value of IFS, DLO, and library-based objects configured for system journal replication as needed when starting data groups with the Start Data Group (STRDG) command.

To manually set the system auditing level of replicated objects, or to force a change to a lower configured level, you can use the Set Data Group Auditing (SETDGAUD) command.

The SETDGAUD command allows you to set the object auditing level for all existing objects that are defined to MIMIX by data group object entries, data group DLO entries, and data group IFS entries. The SETDGAUD command can be used for data groups configured for replicating object information (type *OBJ or *ALL).

When to set object auditing values manually - If you anticipate a delay between configuring data group entries and starting the data group, you should use the SETDGAUD command before synchronizing data between systems. Doing so will ensure that replicated objects will be properly audited and that any transactions for the objects that occur between configuration and starting the data group will be replicated.

You can also use the SETDGAUD command to reset the object auditing level for all replicated objects if a user has changed the auditing level of one or more objects to a value other than what is specified in the data group entries.

Processing options - MIMIX checks for existing objects identified by data group entries for the specified data group. The object auditing level of an existing object is set to the auditing value specified in the data group entry that most specifically matches the object. Default behavior is that MIMIX only changes an object’s auditing value if the configured value is higher than the object’s existing value. However, you can optionally force a change to a configured value that is lower than the existing value through the command’s Force audit value (FORCE) parameter.

• The default value *NO for the FORCE parameter prevents MIMIX from reducing the auditing level of an object. For example, if the SETDGAUD command processes a data group entry with a configured object auditing value of *CHANGE and finds an object identified by that entry with an existing auditing value of *ALL, MIMIX does not change the value.

• If you specify *YES for the FORCE parameter, MIMIX will change the auditing value even if it is lower than the existing value.

For IFS objects, it is particularly important that you understand the ramifications of the value specified for the FORCE parameter. For more information see “Examples of changing an IFS object’s auditing value” on page 298.

Procedure - To set the object auditing value for a data group, do the following on each system defined to the data group:

1. Type the command SETDGAUD and press F4 (Prompt).

2. The Set Data Group Auditing (SETDGAUD) display appears. Specify the name of the data group you want.


3. At the Object type prompt, specify the type of objects for which you want to set auditing values.

4. If you want to allow MIMIX to force a change to a configured value that is lower than the object’s existing value, specify *YES for the Force audit value prompt.

Note: This may affect the operation of your replicated applications. Lakeview recommends that you force auditing value changes only when you have specified *ALLIFS for the Object type.

5. Press Enter.
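Run from a command line, the procedure above reduces to a single SETDGAUD request. The FORCE parameter and the *ALLIFS value come from this topic; the data group name is an example and the keyword shown for the Object type prompt is an assumption — prompt the command (F4) to verify.

```cl
/* Sketch: force configured auditing values onto all IFS objects     */
/* defined to data group APP1, even where the configured value is    */
/* lower than the object's existing value. OBJTYPE is an assumed     */
/* keyword; prompt SETDGAUD to verify.                               */
SETDGAUD DGDFN(APP1 SYSTEM1 SYSTEM2) +
         OBJTYPE(*ALLIFS) +
         FORCE(*YES)
```

Remember to run the command on each system defined to the data group, and note Lakeview’s recommendation to force changes only with *ALLIFS.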

Examples of changing an IFS object’s auditing value

The following examples show the effect of the value of the FORCE parameter when manually changing the object auditing values of IFS objects configured for system journal replication.

The auditing values resulting from the SETDGAUD command can be confusing when your environment has multiple data group IFS entries, each with different auditing levels, and more than one entry references objects sharing common parent directories. The following examples illustrate how these conditions affect the results of setting object auditing for IFS objects.

Data group entries are processed in order from most generic to most specific. IFS entries are ordered using the Unicode character set. The first (more generic) entry found that matches the object is used until a more specific match is found.

When MIMIX processes a data group IFS entry and changes the auditing level of objects which match the entry, all of the directories in the object’s directory path are checked and, if necessary, changed to the new auditing value. In the case of an IFS entry with a generic name, all descendents of the IFS object may also have their auditing value changed.

Example 1: This scenario shows a simple implementation where data group IFS entries have been modified to have a configured value of *CHANGE from a previously configured value of *ALL. Table 33 identifies a set of data group IFS entries and their configured auditing values. The entries are listed in the order in which they are processed by the SETDGAUD command.

Simply ending and restarting the data group will not cause these configuration changes to be effective. Because the change is to a lower auditing level, the change must be forced with the SETDGAUD command. Similarly, running the SETDGAUD command with FORCE(*NO) does not change the auditing values for this scenario.

Table 33. Example 1 configuration of data group IFS entries

Order processed   Specified object   Object auditing value   Process type
1                 /DIR1/*            OBJAUD(*CHANGE)         PRCTYPE(*EXCLD)
2                 /DIR1/DIR2/*       OBJAUD(*CHANGE)         PRCTYPE(*INCLD)
3                 /DIR1/STMF         OBJAUD(*CHANGE)         PRCTYPE(*INCLD)


Table 34 shows the intermediate and final results as each data group IFS entry is processed by the force request.

Example 2: Table 35 identifies a set of data group IFS entries and their configured auditing values. The entries are listed in the order in which they are processed by the SETDGAUD command. In this scenario there are multiple configured values.

For this scenario, running the SETDGAUD command with FORCE(*NO) does not change the auditing values on any existing IFS objects because the configured values from the data group IFS entries are the same or lower than the existing values.

Running the command with FORCE(*YES) does change the existing objects’ values. Table 36 shows the intermediate values as each entry is processed by the force request and the final results of the change.

Table 34. Intermediate audit values which occur during FORCE(*YES) processing for example 1.

                                  Auditing values while processing SETDGAUD FORCE(*YES)
Existing objects   Existing value   Changed by     Changed by     Changed by     Final results
                                    1st entry      2nd entry      3rd entry      of FORCE(*YES)
/DIR1              *ALL             Note 1         *CHANGE        Note 2         *CHANGE
/DIR1/STMF         *ALL             Note 1         -              *CHANGE        *CHANGE
/DIR1/STMF2        *ALL             Note 1         -              -              *ALL
/DIR1/DIR2         *ALL             Note 1         *CHANGE        -              *CHANGE
/DIR1/DIR2/STMF    *ALL             Note 1         *CHANGE        -              *CHANGE

Notes:
1. Because the first data group IFS entry excludes objects from replication, object auditing processing does not apply.
2. This object’s auditing value is evaluated when the third data group IFS entry is processed, but the entry does not cause the value to change. The existing value is the same as the configured value of the third entry at the time it is processed.

Table 35. Example 2 configuration of data group IFS entries

Order processed   Specified object   Object auditing value   Process type
1                 /DIR1/*            OBJAUD(*CHANGE)         PRCTYPE(*INCLD)
2                 /DIR1/DIR2/*       OBJAUD(*NONE)           PRCTYPE(*INCLD)
3                 /DIR1/STMF         OBJAUD(*ALL)            PRCTYPE(*INCLD)


Data group IFS entry #3 in Table 35 prevents directory /DIR1 from having an auditing value of *CHANGE or *NONE because it is the last entry processed and it is the most specific entry.

Example 3: This scenario illustrates why you may need to force the configured values to take effect after changing the existing data group IFS entries from *ALL to lower values. Table 37 identifies a set of data group IFS entries and their configured auditing values. The entries are listed in the order in which they are processed by the SETDGAUD command.

For this scenario, running the SETDGAUD command with FORCE(*NO) does not change the auditing values on any existing IFS objects because the configured values from the data group IFS entries are lower than the existing values.

In this scenario, SETDGAUD FORCE(*YES) must be run to have the configured auditing values take effect. Table 38 shows the intermediate values as each entry is processed by the force request and the final results of the change.

Table 36. Intermediate audit values which occur during FORCE(*YES) processing for example 2.

                                  Auditing values while processing SETDGAUD FORCE(*YES)
Existing objects   Existing value   Changed by     Changed by     Changed by     Final results
                                    1st entry      2nd entry      3rd entry      of FORCE(*YES)
/DIR1              *ALL             *CHANGE        *NONE          *ALL           *ALL
/DIR1/STMF         *ALL             *CHANGE        -              *ALL           *ALL
/DIR1/STMF2        *ALL             *CHANGE        -              -              *CHANGE
/DIR1/DIR2         *ALL             *CHANGE        *NONE          -              *NONE
/DIR1/DIR2/STMF    *ALL             *CHANGE        *NONE          -              *NONE

Table 37. Example 3 configuration of data group IFS entries

Order processed   Specified object   Object auditing value   Process type
1                 /DIR1/*            OBJAUD(*CHANGE)         PRCTYPE(*INCLD)
2                 /DIR1/DIR2/*       OBJAUD(*NONE)           PRCTYPE(*INCLD)
3                 /DIR1/STMF         OBJAUD(*NONE)           PRCTYPE(*INCLD)

Table 38. Intermediate audit values which occur during FORCE(*YES) processing for example 3.

                                  Auditing values while processing SETDGAUD FORCE(*YES)
Existing objects   Existing value   Changed by     Changed by     Changed by     Final results
                                    1st entry      2nd entry      3rd entry      of FORCE(*YES)
/DIR1              *ALL             *CHANGE        *NONE          -              *NONE
/DIR1/STMF         *ALL             *CHANGE        -              *NONE          *NONE
/DIR1/STMF2        *ALL             *CHANGE        -              -              *CHANGE
/DIR1/DIR2         *ALL             *CHANGE        *NONE          -              *NONE
/DIR1/DIR2/STMF    *ALL             *CHANGE        *NONE          -              *NONE


Example 4: This example begins with the same set of data group IFS entries used in example 3 (Table 37) and uses the results of the forced change in example 3 as the auditing values for the existing objects in Table 39.

Table 39 shows how running the SETDGAUD command with FORCE(*NO) causes changes to auditing values. This scenario is quite possible as a result of a normal STRDG request. Complex data group IFS entries and multiple configured values cause these potentially undesirable results.

Note: Any addition or change to the data group IFS entries can cause these results to occur.

There is no way to maintain the existing values in Table 39 without ensuring that a forced change occurs every time SETDGAUD is run, which may be undesirable. In this example, the next time data groups are started, the objects’ auditing values will be set to those shown in Table 39 for FORCE(*NO).

Any addition or change to the data group IFS entries can potentially cause similar results the next time the data group is started. To avoid this situation, we recommend that you configure a consistent auditing value of *CHANGE across data group IFS entries which identify objects with common parent directories.


Table 39. Example 4: comparison of objects’ actual auditing values

Existing objects   Existing value   After SETDGAUD FORCE(*NO)   After SETDGAUD FORCE(*YES)
/DIR1              *NONE            *CHANGE                     *NONE
/DIR1/STMF         *NONE            *CHANGE                     *NONE
/DIR1/STMF2        *CHANGE          *CHANGE                     *CHANGE
/DIR1/DIR2         *NONE            *CHANGE                     *NONE
/DIR1/DIR2/STMF    *NONE            *CHANGE                     *NONE


Example 5: This scenario illustrates the results of the SETDGAUD command when the object’s auditing value is determined by the user profile that accesses the object (value *USRPRF). Table 40 shows the configured data group IFS entry.

Table 41 compares the results of running the SETDGAUD command with FORCE(*NO) and FORCE(*YES).

Running the command with FORCE(*NO) does not change the value. The value *USRPRF is not in the range of valid values for MIMIX; therefore, an object with an auditing value of *USRPRF is not considered for change.

Running the command with FORCE(*YES) does force a change because the existing value and the configured value are not equal.

Table 40. Example 5 configuration of data group IFS entries

Order processed   Specified object   Object auditing value   Process type
1                 /DIR1/STMF         OBJAUD(*NONE)           PRCTYPE(*INCLD)

Table 41. Example 5: comparison of object’s actual auditing values

Existing objects   Existing value   After SETDGAUD FORCE(*NO)   After SETDGAUD FORCE(*YES)
/DIR1/STMF         *USRPRF          *USRPRF                     *NONE


Checking file entry configuration manually

The Check DG File Entries (CHKDGFE) command provides a means to detect whether the correct data group file entries exist with respect to the data group object entries configured for a specified data group in your MIMIX configuration. When file entries and object entries are not properly matched, your replication results can be affected.

Note: The preferred method of checking is to use MIMIX AutoGuard to automatically schedule the #DGFE audit, which calls the CHKDGFE command and can automatically correct detected problems. For additional information, see “Interpreting results for configuration data - #DGFE audit” on page 580.

To check your file entry configuration manually, do the following:

1. On a command line, type CHKDGFE and press Enter. The Check Data Group File Entries (CHKDGFE) command appears.

2. At the Data group definition prompts, select *ALL to check all data groups or specify the three-part name of the data group.

3. At the Options prompt, you can specify that the command be run with special options. The default, *NONE, uses no special options. If you do not want an error to be reported if a file specified in a data group file entry does not exist, specify *NOFILECHK.

4. At the Output prompt, specify where the output from the command should be sent—to print, to an outfile, or to both. See Step 6.

5. At the User data prompt, you can assign your own 10-character name to the spooled file or choose not to assign a name to the spooled file. The default, *CMD, uses the CHKDGFE command name to identify the spooled file.

6. At the File to receive output prompts, you can direct the output of the command to the name and library of a specific database file. If the database file does not exist, it will be created in the specified library with the name MXCDGFE.

7. At the Output member options prompts, you can direct the output of the command to the name of a specific database file member. You can also specify how to handle new records if the member already exists. Do the following:

a. At the Member to receive output prompt, accept the default *FIRST to direct the output to the first member in the file. If it does not exist, a new member is created with the name of the file specified in Step 6. Otherwise, specify a member name.

b. At the Replace or add records prompt, accept the default *REPLACE if you want to clear the existing records in the file member before adding new records. To add new records to the end of existing records in the file member, specify *ADD.

8. At the Submit to batch prompt, do one of the following:

• If you do not want to submit the job for batch processing, specify *NO and press Enter to check data group file entries.


• To submit the job for batch processing, accept *YES. Press Enter and continue with the next step.

9. At the Job description prompts, specify the name and library of the job description used to submit the batch request. Accept the default, MXAUDIT, to submit the request using Lakeview’s default job description.

10. At the Job name prompt, accept *CMD to use the command name to identify the job or specify a simple name.

11. To start the data group file entry check, press Enter.
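For reference, the prompted values above can also be entered as a single command request. The data group name (APP1 SYS1 SYS2), the library MYLIB, and the parameter keywords shown here are illustrative assumptions based on the prompts described in this procedure; prompt the command with F4 to verify the keywords on your system:

CHKDGFE DGDFN(APP1 SYS1 SYS2) OPTION(*NONE) OUTPUT(*OUTFILE) OUTFILE(MYLIB/MXCDGFE) OUTMBR(*FIRST *REPLACE) BATCH(*YES) JOBD(MXAUDIT) JOB(*CMD)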


Changes to startup programs

If you use startup programs, ensure that you include the following operations when you configure for remote journaling:

• If you use TCP/IP as the communications protocol, you need to start TCP/IP, including the DDM server, before starting replication.

• If you use OptiConnect as the communications protocol, the QSOC subsystem must be active.


Checking DDM password validation level in use

MIMIX Remote Journal support uses the DDM communications infrastructure. This infrastructure can be configured to require a password to be provided when a server connection is made. The MIMIXOWN user profile, which establishes the remote journal connection, ships with a preset password so that it is consistent on all systems. If you have implemented DDM password validation on any systems where MIMIX will be used, you should verify the DDM level in use. If the MIMIXOWN password is not the same on both systems, you may need to change the MIMIXOWN user profile or the DDM security level to allow MIMIX Remote Journal support to function properly. These changes have security implications of which you should be aware.

To check the DDM password validation level in use, do the following on both systems:

1. From a command line, type CHGDDMTCPA and press F4 (prompt).

2. Check the value of the Password required field.

• If the value is *NO or *VLDONLY, no further action is required. Press F12 (Cancel).

• If the field contains any other value, you must take further action to enable MIMIX RJ support to function in your environment. Press F12, then continue with the next step.

3. You have two options for changing your environment to enable MIMIX RJ support to function. Each option has security implications. You must decide which option is best for your environment. The options are:

• “Option 1. Enable MIMIXOWN user profile for DDM environment” on page 306. MIMIX must be installed and transfer definitions must exist before you can make the necessary changes. For new installations, this should be automatically configured for you.

• “Option 2. Allow user profiles without passwords” on page 307. You can use this option before or after MIMIX is installed. However, this option should be performed before configuring MIMIX RJ support.

Option 1. Enable MIMIXOWN user profile for DDM environment

This option changes the MIMIXOWN user profile to have a password and adds server authentication entries to recognize the MIMIXOWN user profile.

Do the following from both systems:

1. Access the Work with Transfer Definitions (WRKTFRDFN) display. Then do the following:

a. Type a 5 (Display) next to each transfer definition that will be used with MIMIX RJ support and press Enter.

b. Page down to locate the value for Relational database (RDB parameter) and record the value indicated.


c. If you selected multiple transfer definitions, press Enter to advance to the next selection and record its RDB value. Ensure that you record the values for all transfer definitions you selected.

Note: If the RDB value was generated by MIMIX, it will be in the form of the characters MX followed by the System1 definition, System2 definition, and the name of the transfer definition, with up to 18 characters.

2. On the source system, change the MIMIXOWN user profile to have a password and to prevent signing on with the profile. To do this, enter the following command:

CHGUSRPRF USRPRF(MIMIXOWN) PASSWORD(user-defined-password) INLMNU(*SIGNOFF)

Note: The password is case sensitive and must be the same on all systems in the MIMIX network. If the password does not match on all systems, some MIMIX functions will fail with security error message LVE0127.

3. You need a server authentication entry for the MIMIXOWN user profile for each RDB entry you recorded in Step 1. To add a server authentication entry, type the following command, using the password you specified in Step 2 and the RDB value from Step 1. Then press Enter.

ADDSVRAUTE USRPRF(MIMIXOWN) SERVER(recorded-RDB-value) PASSWORD(user-defined-password)

4. Repeat Step 2 and Step 3 on the target system.
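Taken together, Step 2 and Step 3 might look like the following. This sketch assumes system definitions SYSA and SYSB and a transfer definition named TDF1, so the MIMIX-generated RDB value would be MXSYSASYSBTDF1; substitute the RDB values you recorded in Step 1 and your own password:

CHGUSRPRF USRPRF(MIMIXOWN) PASSWORD(user-defined-password) INLMNU(*SIGNOFF)

ADDSVRAUTE USRPRF(MIMIXOWN) SERVER(MXSYSASYSBTDF1) PASSWORD(user-defined-password)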

Option 2. Allow user profiles without passwords

This option changes DDM TCP attributes to allow user profiles without passwords to function in environments that use DDM password validation. Do the following:

1. From a command line on the source system, type CHGDDMTCPA PWDRQD(*VLDONLY) and press Enter.

2. From a command line on the target system, type CHGDDMTCPA PWDRQD(*VLDONLY) and press Enter.


Starting the DDM TCP/IP server

Use this procedure if you need to start the DDM TCP/IP server in an environment configured for MIMIX RJ support.

From the system on which you want to start the TCP server, do the following:

1. Ensure that the DDM TCP/IP attributes allow the DDM server to be automatically started when the TCP/IP server is started (STRTCP). Do the following:

a. Type the command CHGDDMTCPA and press F4 (Prompt).

b. Check the value of the Autostart server prompt. If the value is *YES, it is set appropriately. Otherwise, change the value to *YES and press Enter.

2. To prevent install problems due to locks on the library name, ensure that the MIMIX product library is not in your user library list.

3. To start the DDM server, type the command STRTCPSVR SERVER(*DDM) and press Enter.
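As a sketch, the attribute change from Step 1 and the server start from Step 3 can also be combined from a command line. Both are IBM commands named in this procedure:

CHGDDMTCPA AUTOSTART(*YES)

STRTCPSVR SERVER(*DDM)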


Identifying data groups that use an RJ link

Use this procedure to determine which data groups use a remote journal link before you end a remote journal link or remove a remote journaling environment.

1. Enter the command WRKRJLNK and press Enter.

2. Make a note of the name indicated in the Source Jrn Def column for the RJ Link you want.

3. From the command line, type WRKDGDFN and press Enter.

4. For all data groups listed on the Work with DG Definitions display, check the Journal Definition column for the name of the source journal definition you recorded in Step 2.

• If you do not find the name from Step 2, the RJ link is not used by any data group. The RJ link can be safely ended or can have its remote journaling environment removed without affecting existing data groups.

• If you find the name from Step 2 associated with any data groups, those data groups may be adversely affected if you end the RJ link. A request to remove the remote journaling environment removes configuration elements and system objects that need to be created again before the data group can be used. Continue with the next step.

5. Press F10 (View RJ links). Consider the following and contact your MIMIX administrator before taking action that will end the RJ link or remove the remote journaling environment.

• When *NO appears in the Use RJ Link column, the data group will not be affected by a request to end the RJ link or to remove the remote journaling environment.

Note: If you allow applications other than MIMIX to use the RJ link, they will be affected if you end the RJ link or remove the remote journaling environment.

• When *YES appears in the Use RJ Link column, the data group may be affected by a request to end the RJ link. If you use the procedure for ending a remote journal link independently in the Using MIMIX book, ensure that any data groups that use the RJ link are inactive before ending the RJ link.


Using file identifiers (FIDs) for IFS objects

Commands used by advanced journaling for IFS objects use file identifiers (FIDs) to uniquely identify the correct IFS tracking entries to process. The System 1 file identifier and System 2 file identifier prompts ensure that IFS tracking entries are accurately identified during processing. These prompts can be used alone or in combination with the System 1 object prompt.

These prompts enable the following combinations:

• Processing by object path: A value is specified for the System 1 object prompt and no value is specified for the System 1 file identifier or System 2 file identifier prompts.

When processing by object path, a tracking entry is required for all commands with the exception of the SYNCIFS command. If no tracking entry exists, the command cannot continue processing. If a tracking entry exists, a query is performed using the specified object path name.

• Processing by object path and FIDs: A value is specified for the System 1 object prompt and a value is specified for either or both of the System 1 file identifier or System 2 file identifier prompts.

When processing by object path and FIDs, a tracking entry is required for all commands. If no tracking entry exists, the command cannot continue processing. If a tracking entry exists, a query is performed using the specified FID values. If the specified object path name does not match the object path name in the tracking entry, the command cannot continue processing.

• Processing by FIDs: A value is specified for either or both of the System 1 file identifier or System 2 file identifier prompts and, with the exception of the SYNCIFS command, no value is specified for the System 1 object prompt. In the case of SYNCIFS, the default value *ALL is specified for the System 1 object prompt.

When processing by FIDs, a tracking entry is required for all commands. If no tracking entry exists, the command cannot continue processing. If a tracking entry exists, a query is performed using the specified FID values.


Configuring restart times for MIMIX jobs

Certain MIMIX jobs are restarted, or recycled, on a regular basis in order to maintain the MIMIX environment. The ability to configure this activity can ease conflicts with your scheduled workload by changing when the MIMIX jobs restart to a more convenient time for your environment.

The default operation of MIMIX is to restart MIMIX jobs at midnight (12:00 a.m.). However, you can change the restart time by setting a different value for the Job restart time parameter (RSTARTTIME) on system definitions and data group definitions. The time is based on a 24 hour clock. The values specified in the system definitions and data group definitions are retrieved at the time the MIMIX jobs are started. Changes to the specified values have no effect on jobs that are currently running. Changes are effective the next time the affected MIMIX jobs are started.

For a data group definition you can also specify either *SYSDFN1 or *SYSDFN2 for the Job restart time (RSTARTTIME) parameter. Respectively, these values use the restart time specified in the system definition identified as System 1 or System 2 for the data group.

Both system and data group definition commands support the special value *NONE, which prevents the MIMIX jobs from automatically restarting. Be sure to read “Considerations for using *NONE” on page 315 before using this value.

Configurable job restart time operation

To make effective use of the configurable job restart time, you may need to set the job restart time in as few as one or as many as all of these locations:

• One or more data group definitions

• The system definition for the management system

• The system definitions for one or more network systems.

MIMIX system-level jobs affected by the Job restart time value specified in a system definition are: system manager (SYSMGR), system manager receive (SYSMGRRCV), and journal manager (JRNMGR).

MIMIX data group-level jobs affected by the Job restart time value specified in a data group definition are: object send (OBJSND), object receive (OBJRCV), database send (DBSND), database receive (DBRCV), database reader (DBRDR), object retrieve (OBJRTV), container send (CNRSND), container receive (CNRRCV), status send (STSSND), status receive (STSRCV), and object apply (OBJAPY).

Also, the role of the system on which you change the restart time affects the results. For system definitions, the value you specify for the restart time and the role of the system (management or network) determines which MIMIX system-level jobs will restart and when. For data group definitions, the value you specify for the restart time and the role of the system (source or target) determines which data group-level jobs will restart and when. Time zone differences between systems also influence the results you obtain.

MIMIX system-level jobs restart when they detect that the time specified in the system definition has passed.


The system manager jobs are a pair of jobs that run between a network system and the management system. The management and network systems both have journal manager jobs, but the jobs operate independently. The job restart time specified in the management system’s system definition determines when to restart the journal manager on the management system. The job restart time specified in the network system’s system definition determines when to restart the journal manager job on the network system, when to restart the system manager jobs on both systems, and also affects when cleanup jobs on both systems are submitted. Table 42 shows how the role of the system affects the results of the specified job restart time.

For MIMIX data group-level jobs, a delay of 2 to 35 minutes from the specified time is built into the job restart processing. The actual delay is unique to each job. By distributing the jobs within this range, the load on systems and communications is more evenly distributed, reducing bottlenecks caused by many jobs simultaneously attempting to end, start, and establish communications. MIMIX determines the actual restart time for the object apply (OBJAPY) jobs based on the timestamp of the system on which the jobs run. For all other affected jobs, MIMIX determines the actual start time for object or database jobs based on the timestamp of the system on which the OBJSND or the DBSND job runs. Table 43 shows how these key jobs affect when other data group-level jobs restart.

Table 42. Effect of the system’s role on changing the job restart time in a system definition.

Management system:

• System managers and cleanup jobs - The specified value is not used to determine the restart time. Restart is determined by the value specified for the network system.

• Journal managers and collector services - When a time is specified, the job on the management system restarts at that time. When *NONE is specified, the job on the management system is not restarted.

Network system:

• System managers - When a time is specified, the jobs on both systems restart when the time on the management system reaches the time specified. When *NONE is specified, the jobs are not restarted on either system.

• Cleanup jobs - When a time is specified, jobs are submitted on both systems by the system manager jobs after they restart. When *NONE is specified, jobs are submitted on both systems when midnight occurs on the management system.

• Journal managers and collector services - When a time is specified, the job on the network system restarts at that time. When *NONE is specified, the job on the network system is not restarted.

For more information about MIMIX jobs see “Replication job and supporting job names” on page 47.

Considerations for using *NONE

If you specify the value *NONE for the Job restart time in a data group definition, no MIMIX data group-level jobs are automatically restarted.

If you specify the value *NONE for the Job restart time in a system definition, the cleanup jobs started by the system manager will continue to be submitted based on when midnight occurs on the management system. All other affected MIMIX system-level jobs will not be restarted. Table 42 shows the effect of the value *NONE.

Examples: job restart time

“Restart time examples: system definitions” on page 316 and “Restart time examples: system and data group definition combinations” on page 316 illustrate the effect of using the Job restart time (RSTARTTIME) parameter. These examples assume that the system configured as the management system for MIMIX operations is also the target system for replication during normal operation. For each example, consider the effect it would have on nightly backups that complete between midnight and 1 a.m. on the target system.

Table 43. Systems on which data group-level jobs run. In each row, the first job listed determines the restart time for all jobs in the row.

• Object send (OBJSND), on the source system, determines the restart time for Object retrieve (OBJRTV), Container send (CNRSND), and Status receive (STSRCV) on the source system, and for Object receive (OBJRCV), Container receive (CNRRCV), and Status send (STSSND) on the target system.

• Database send (DBSND) (1), on the source system, determines the restart time for Database receive (DBRCV) (1) on the target system.

• Database reader (DBRDR) (1) runs on the target system and restarts based on the target system’s time.

• Object apply (OBJAPY) runs on the target system and restarts based on the target system’s time.

1. When MIMIX is configured for remote journaling, the DBSND and DBRCV jobs are replaced by the DBRDR job. The DBRDR job restarts when the specified time occurs on the target system.

Attention: The value *NONE for the Job restart time parameter is not recommended. If you specify *NONE in a system definition or a data group definition, you need to develop and implement alternative procedures to ensure that the affected MIMIX jobs are periodically restarted. Restarting the jobs ensures that long running MIMIX jobs are not ended by the system due to resource constraints and refreshes the job log to avoid overflow and abnormal job termination.


Restart time examples: system definitions

These examples show the effect of changing the job restart time only in system definitions.

Example 1: MIMIX is running Monday noon when you change the job restart time to 013000 in system definition NEWYORK, which is the management system. The network system’s system definition uses the default value 000000 (midnight). MIMIX remains up the rest of the day. Because the current jobs use values that existed prior to your change, all the MIMIX system-level jobs on NEWYORK automatically restart at midnight. As a result of your change, the journal manager on NEWYORK restarts at 1:30 a.m. Tuesday and thereafter. The network system’s journal manager restarts when midnight occurs on that system. The system manager jobs on both systems restart and submit the cleanup jobs when the management system reaches midnight.

Example 2: It is Friday evening and all MIMIX processes on the system CHICAGO are ended while you perform planned maintenance. During that time you change the job restart time to 040000 in system definition CHICAGO, which is a network system. You start MIMIX processing again at 11:07 p.m. so your changes are in effect. The MIMIX system-level jobs that restart Saturday and thereafter at 4 a.m. Chicago time are:

• The journal manager job on CHICAGO

• The system manager jobs on the management system and on CHICAGO

• The cleanup jobs are submitted on the management system and on CHICAGO

Because the management system’s system definition uses the default value of midnight, the journal manager on the management system restarts when midnight occurs on that system.

Example 3: Friday afternoon you change system definition HONGKONG to have a job restart time value of *NONE. HONGKONG is the management system. LONDON is the associated network system and its system definition uses the default setting 000000 (midnight). You end and restart the MIMIX jobs to make the change effective. The journal manager on HONGKONG is no longer restarted. At midnight (00:00 a.m. Saturday and thereafter) HONGKONG time, the system manager jobs on both systems restart and submit cleanup jobs on both systems. In your runbook you document the new procedures to manually restart the journal manager on HONGKONG.

Example 4: Wednesday evening you change the system definitions for LONDON and HONGKONG to both have a job restart time of *NONE. HONGKONG is the management system. You restart the MIMIX jobs to make the change effective. At midnight HONGKONG time, only the cleanup jobs on both systems are submitted. In your runbook you document the new procedures to manually restart the journal managers and system managers.

Restart time examples: system and data group definition combinations

These examples show the effect of changing the job restart time in various combinations of system definitions and data group definitions.


Example 5: You have a data group that operates between SYSTEMA and SYSTEMB, which are both in the same time zone. Both the system definitions and the data group definition use the default value 000000 (midnight) for the job restart time. For both systems, the MIMIX system-level jobs restart at midnight. The data group jobs on both systems restart between 2 and 35 minutes after midnight.

Example 6: At 10:30 Tuesday morning you change data group definition APP1 to have a job restart time value of 013500. The data group operates between SYSTEMA and SYSTEMB, which are both in the same time zone. Both system definitions use the default restart time of midnight. MIMIX jobs remain up and running. At midnight, the system-level jobs on both systems restart using the values from the preexisting configuration; the data group-level jobs restart on both systems between 0:02 and 0:35 a.m. On Wednesday and thereafter, APP1 data group-level jobs restart between 1:37 and 2:10 a.m. while the MIMIX system-level jobs and jobs for other data groups restart at midnight.

Example 7: You have a data group that operates between SYSTEMA and SYSTEMB which are both in the same time zone and are defined as the values of System 1 and System 2, respectively. The data group definition specifies a job restart time value of *SYSDFN2. The system definition for SYSTEMA specifies the default job restart time of 000000 (midnight). SYSTEMB is the management system and its system definition specifies the value *NONE for the job restart time. The journal manager on SYSTEMB does not restart and the data group jobs do not restart on either system because of the *NONE value specified for SYSTEMB. The journal manager on SYSTEMA restarts at midnight. System manager jobs on both systems restart and submit cleanup jobs at midnight as a result of the value in the network system and the fact that the systems are in the same time zone.

Example 8A: You have a data group defined between CHICAGO and NEWYORK (System 1 and System 2, respectively) and the data group’s job restart time is set to 030000 (3 a.m.). CHICAGO is the source system as well as a network system; its system definition uses the default job restart time of midnight. NEWYORK is the target system as well as the management system; its system definition uses a job restart time of 020000 (2 a.m.). There is a one hour time difference between the two systems; said another way, NEWYORK is an hour ahead of CHICAGO. Figure 17 shows the effect of the time zone difference on this configuration.

The journal manager on CHICAGO restarts at midnight Chicago time and the journal manager on NEWYORK restarts at 2 a.m. New York time. The system manager jobs on both systems restart when the management system (NEWYORK) reaches the restart time specified for the network system (CHICAGO). The cleanup jobs are submitted by the system manager jobs when they restart.

With the exception of the object apply jobs (OBJAPY), the data group jobs restart during the same 2 to 35 minute timeframe based on Chicago time (between 2 and 35 minutes after 3 a.m. in Chicago; after 4 a.m. in New York). Because the OBJAPY jobs are based on the time on the target system, which is an hour ahead of the source system time used for the other jobs, the OBJAPY jobs restart between 3:02 and 3:35 a.m. New York time.

Figure 17. Results of Example 8A. This is configured as a standard MIMIX environment.

Example 8B: This scenario is the same as example 8A with one exception. In this scenario, the MIMIX environment is configured to use MIMIX Remote Journal support. Figure 18 shows that the database reader (DBRDR) job restarts based on the time on the target system. Because the database send (DBSND) and database receive (DBRCV) jobs are not used in a remote journaling environment, those jobs do not restart.

Figure 18. Results of example 8B. This environment is configured to use MIMIX Remote Journal support.


Configuring the restart time in a system definition

To configure the restart time for MIMIX system-level jobs in an existing environment, do the following:

1. On the Work with System Definitions display, type a 2 (Change) next to the system definition you want and press F4 (Prompt).

2. Press F10 (Additional parameters), then scroll down to the bottom of the display.

3. At the Job restart time prompt, specify the value you want. You need to consider the role of the system definition (management or network system) and the effect of any time zone differences between the management system and the network system.

Notes:

• The time is based on a 24 hour clock and must be specified in HHMMSS format. Although seconds are ignored, the complete time format must be specified. Valid values range from 000000 to 235959. The value 000000 is the default and is equivalent to midnight (00:00:00).

• If you specify *NONE, cleanup jobs are submitted on both the network and management systems based on when midnight occurs on the management system. System manager and journal manager jobs will not restart. The value *NONE is not recommended. For more information, see “Considerations for using *NONE” on page 315.

4. To accept the change, press Enter.

The change has no effect on jobs that are currently running. The value for the Job restart time is retrieved from the system definition at the time the jobs are started. The change is effective the next time the jobs are started.
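The same change can be requested without the display. This sketch assumes that CHGSYSDFN is the command behind option 2 (Change) and uses NEWYORK as a placeholder system definition name; it sets the restart time to 1:30 a.m.:

CHGSYSDFN SYSDFN(NEWYORK) RSTARTTIME(013000)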

Configuring the restart time in a data group definition

To configure the restart time for MIMIX data group-level jobs in an existing environment, do the following:

1. On the Work with Data Group Definitions display, type a 2 (Change) next to the data group definition you want and press F4 (Prompt).

2. Press F10 (Additional parameters), then scroll down to the bottom of the display.

3. At the Job restart time prompt, specify the value you want. You need to consider the effect of any time zone differences between the systems defined to the data group.

Notes:

• The time is based on a 24 hour clock and must be specified in HHMMSS format. Although seconds are ignored, the complete time format must be specified. Valid values range from 000000 to 235959. The value 000000 is the default and is equivalent to midnight (00:00:00).

• The value *NONE is not recommended. For more information, see “Considerations for using *NONE” on page 315.


4. To accept the change, press Enter.

Changes have no effect on jobs that are currently running. The value for the Job restart time is retrieved at the time the jobs are started. The change is effective the next time the jobs are started.
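As with system definitions, the change can be sketched as a direct command request. This assumes that CHGDGDFN is the command behind option 2 (Change) and uses APP1 SYSA SYSB as a placeholder three-part data group name:

CHGDGDFN DGDFN(APP1 SYSA SYSB) RSTARTTIME(*SYSDFN2)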


Chapter 14 Starting, ending, and verifying journaling

This chapter describes procedures for starting and ending journaling. Journaling must be active on all files, IFS objects, data areas and data queues that you want to replicate through a user journal. Normally, journaling is started during configuration. However, there are times when you may need to start or end journaling on items identified to a data group.

The topics in this chapter include:

• “What objects need to be journaled” on page 323 describes, for supported configuration scenarios, what types of objects must have journaling started before replication can occur. It also describes when journaling is started implicitly, as well as the authority requirements necessary for user profiles that create the objects to be journaled when they are created.

• “MIMIX commands for starting journaling” on page 325 identifies the MIMIX commands available for starting journaling and describes the checking performed by the commands.

• “Journaling for physical files” on page 326 includes procedures for displaying journaling status, starting journaling, ending journaling, and verifying journaling for physical files identified by data group file entries.

• “Journaling for IFS objects” on page 330 includes procedures for displaying journaling status, starting journaling, ending journaling, and verifying journaling for IFS objects replicated cooperatively (advanced journaling). IFS tracking entries are used in these procedures.

• “Journaling for data areas and data queues” on page 334 includes procedures for displaying journaling status, starting journaling, ending journaling, and verifying journaling for data area and data queue objects replicated cooperatively (advanced journaling). Object tracking entries are used in these procedures.


What objects need to be journaled

A data group can be configured in a variety of ways that involve a user journal in the replication of files, data areas, data queues and IFS objects. Journaling must be started for any object to be replicated through a user journal or to be replicated by cooperative processing between a user journal and the system journal.

Requirements for system journal replication - System journal replication processes use a special journal, the security audit (QAUDJRN) journal. The IBM i system logs events in this journal to create a security audit trail. When data group object entries, IFS entries, and DLO entries are configured, each entry specifies an object auditing value that determines the type of activity on the objects to be logged in the journal. Object auditing is automatically set for all objects defined to a data group when the data group is first started, or any time a change is made to the object entries, IFS entries, or DLO entries for the data group. Because security auditing logs the object changes in the system journal, no special action is needed.

Requirements for user journal replication - User journal replication processes require that the journaling be started for the objects identified by data group file entries. Both MIMIX Dynamic Apply and legacy cooperative processing use data group file entries and therefore require journaling to be started. Configurations that include advanced journaling for replication of data areas, data queues, or IFS objects also require that journaling be started on the associated object tracking entries and IFS tracking entries, respectively. Starting journaling ensures that changes to the objects are recorded in the user journal, and are therefore available for MIMIX to replicate.

During initial configuration, the configuration checklists direct you when to start journaling for objects identified by data group file entries, IFS tracking entries, and object tracking entries. The MIMIX commands STRJRNFE, STRJRNIFSE, and STRJRNOBJE simplify the process of starting journaling. For more information about these commands, see “MIMIX commands for starting journaling” on page 325.

Although MIMIX commands for starting journaling are preferred, you can also use IBM commands (STRJRNPF, STRJRN, STRJRNOBJ) to start journaling if you have the appropriate authority for starting journaling.

Requirements for implicit starting of journaling - Journaling can be automatically started for newly created database files, data areas, data queues, or IFS objects when certain requirements are met.

The user ID creating the new objects must have the required authority to start journaling and the following requirements must be met:

• IFS objects - A new IFS object is automatically journaled if the directory in which it is created is journaled as a result of a request that permitted journaling inheritance for new objects. Typically, if MIMIX started journaling on the parent directory, inheritance is permitted. If you manually start journaling on the parent directory using the IBM command STRJRN, specify INHERIT(*YES). This will allow IFS objects created within the journaled directory to inherit the journal options and journal state of the parent directory.

• Database files created by SQL statements - A new file created by a CREATE TABLE statement is automatically journaled if the library in which it is created contains a journal named QSQJRN.

• New *FILE, *DTAARA, *DTAQ objects - A new object is automatically journaled if it is created in a library that contains a QDFTJRN data area and the data area has enabled automatic journaling for the object type. The Journal at creation (JRNATCRT) parameter in the data group definition enables MIMIX to create the QDFTJRN data area and enable automatic journaling for an object type.

When a data group is started, MIMIX may automatically create the QDFTJRN data area. If the data group configuration meets the requirements for MIMIX Dynamic Apply, MIMIX evaluates all data group entries for each object type to determine whether to create the QDFTJRN data area. MIMIX uses the data group entry with the most specific match to the object type and library that also specifies *ALL for its System 1 object (OBJ1) and Attribute (OBJATR) prompts.

Note: MIMIX prevents the QDFTJRN data area from being created in the following libraries: QSYS*, QRECOVERY, QRCY*, QUSR*, QSPL*, QRPL*, QRCL*, QGPL, QTEMP, and SYSIB*. Automatic journaling of new *DTAARA or *DTAQ objects is only supported on IBM i V5R4 and higher.

For example, if MIMIX finds only the following data group object entries for library MYLIB, it would use the first entry when determining whether to create the QDFTJRN data area because it is the most specific entry that also meets the OBJ1(*ALL) and OBJATR(*ALL) requirements. The second entry is not considered in the determination because its OBJ1 and OBJATR values do not meet these requirements.

LIB1(MYLIB) OBJ1(*ALL) OBJTYPE(*FILE) OBJATR(*ALL) COOPDB(*YES) PRCTYPE(*INCLD)
LIB1(MYLIB) OBJ1(MYAPP) OBJTYPE(*FILE) OBJATR(DSPF) COOPDB(*YES) PRCTYPE(*INCLD)
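The implicit-journaling setups described above can be sketched with IBM commands as follows; all library, directory, and journal names are hypothetical:

```
/* IFS: journal a parent directory so new objects inherit journaling */
STRJRN OBJ(('/myapp/data')) JRN('/QSYS.LIB/MYJRNLIB.LIB/MYJRN.JRN') INHERIT(*YES)

/* SQL: a journal named QSQJRN in the library causes new SQL tables there
   to be journaled automatically */
CRTJRNRCV JRNRCV(MYLIB/QSQJRNRCV)
CRTJRN JRN(MYLIB/QSQJRN) JRNRCV(MYLIB/QSQJRNRCV)
```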

Updated for 5.0.02.00.

Authority requirements for starting journaling

Normal MIMIX processes run under the MIMIXOWN user profile, which ships with *ALLOBJ special authority. Therefore, it is not necessary for other users to account for journaling authority requirements when using the MIMIX commands (STRJRNFE, STRJRNIFSE, STRJRNOBJE) to start journaling.

When the MIMIX journal managers are started, or when the Build Journaling Environment (BLDJRNENV) command is used, MIMIX checks the public authority (*PUBLIC) for the journal. If necessary, MIMIX changes public authority so the user ID in use has the appropriate authority to start journaling.

Authority requirements must be met to enable automatic journaling of newly created objects and when you use IBM commands instead of MIMIX commands to start journaling.

• If you create database files, data areas, or data queues for which you expect automatic journaling at creation, the user ID creating these objects must have the required authority to start journaling.

• If you use the IBM commands (STRJRNPF, STRJRN, STRJRNOBJ) to start journaling, the user ID that performs the start journaling request must have the appropriate authority requirements.

For journaling to be successfully started on an object, one of the following authority requirements must be satisfied:

• The user profile of the user attempting to start journaling for an object must have *ALLOBJ special authority.

• The user profile of the user attempting to start journaling for an object must have explicit *ALL object authority for the journal to which the object is to be journaled.

• Public authority (*PUBLIC) must have *OBJALTER, *OBJMGT, and *OBJOPR object authorities for the journal to which the object is to be journaled.
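The last alternative above can be satisfied with the IBM Grant Object Authority (GRTOBJAUT) command; the journal name is hypothetical:

```
/* Give *PUBLIC the object authorities needed to start journaling to this journal */
GRTOBJAUT OBJ(MYJRNLIB/MYJRN) OBJTYPE(*JRN) USER(*PUBLIC) +
          AUT(*OBJALTER *OBJMGT *OBJOPR)
```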

MIMIX commands for starting journaling

Before you use any of the MIMIX commands for starting journaling, the data group file entries, IFS tracking entries, or object tracking entries associated with the command’s object class must be loaded.

The MIMIX commands for starting journaling are:

• Start Journal Entry (STRJRNFE) - This command starts journaling for files identified by data group file entries.

• Start Journaling IFS Entries (STRJRNIFSE) - This command starts journaling of IFS objects configured for advanced journaling. Data group IFS entries must be configured and IFS tracking entries must be loaded (LODDGIFSTE command) before running the STRJRNIFSE command to start journaling.

• Start Journaling Obj Entries (STRJRNOBJE) - This command starts journaling of data area and data queue objects configured for advanced journaling. Data group object entries must be configured and object tracking entries must be loaded (LODDGOBJTE command) before running the STRJRNOBJE command to start journaling.

If you attempt to start journaling for a data group file entry, IFS tracking entry, or object tracking entry whose files or objects are already journaled, MIMIX checks that the physical file, IFS object, data area, or data queue is journaled to the journal associated with the data group. If the file or object is journaled to the correct journal, the journaling status of the data group file entry, IFS tracking entry, or object tracking entry is changed to *YES. If the file or object is not journaled to the correct journal, or if the attempt to start journaling fails, an error occurs and the journaling status is changed to *NO.

Journaling for physical files

Data group file entries identify physical files to be replicated. When data group file entries are added to a configuration, they may have an initial status of *ACTIVE. However, the physical files they identify may not be journaled. In order for replication to occur, journaling must be started for the files on the source system.

This topic includes procedures to display journaling status, and to start, end, or verify journaling for physical files.

Displaying journaling status for physical files

Use this procedure to display journaling status for physical files identified by data group file entries. Do the following:

1. From the MIMIX Intermediate Main Menu, type 1 and press Enter to access the Work with Data Groups display.

2. On the Work with Data Groups display, type 17 (File entries) next to the data group you want and press Enter.

3. The Work with DG File Entries display appears. The initial view shows the current and requested status of the data group file entry. Press F10 (Journaled view).

At the right side of the display, the Journaled System 1 and System 2 columns indicate whether the physical file associated with the file entry is journaled on each system.

Note: Logical files will have a status of *NA. Data group file entries exist for logical files only in data groups configured for MIMIX Dynamic Apply.

Starting journaling for physical files

Use this procedure to start journaling for physical files identified by data group file entries. In order for replication to occur, journaling must be started for the file on the source system.

This procedure invokes the Start Journal Entry (STRJRNFE) command. The command can also be entered from a command line.

Do the following:

1. Access the journaled view of the Work with DG File Entries display as described in “Displaying journaling status for physical files” on page 326.

2. From the Work with DG File Entries display, type a 9 (Start journaling) next to the file entries you want. Then do one of the following:

• To start journaling using the command defaults, press Enter.

• To modify command defaults, press F4 (Prompt) then continue with the next step.

3. The Start Journal Entry (STRJRNFE) display appears. The Data group definition prompts and the System 1 file prompts identify your selection. Accept these values or specify the values you want.

4. Specify the value you want for the Start journaling on system prompt. Press F4 to see a list of valid values.

When *DGDFN, *SRC, or *TGT is specified, MIMIX considers whether the data group is configured for journaling on the target system (JRNTGT) and starts or prevents journaling from starting as required.

5. If you want to use batch processing, specify *YES for the Submit to batch prompt.

6. To start journaling for the physical file associated with the selected data group, press Enter.

The system returns a message to confirm the operation was successful.

Ending journaling for physical files

Use this procedure to end journaling for a physical file associated with a data group file entry. Once journaling for a file is ended, changes to that file are no longer captured or replicated. You may need to end journaling if a file no longer needs to be replicated, to prepare for upgrading MIMIX software, or to correct an error.

This procedure invokes the End Journaling File Entry (ENDJRNFE) command. The command can also be entered from a command line.

To end journaling, do the following:

1. Access the journaled view of the Work with DG File Entries display as described in “Displaying journaling status for physical files” on page 326.

2. From the Work with DG File Entries display, type a 10 (End journaling) next to the file entry you want and do one of the following:

Note: MIMIX cannot end journaling on a file that is journaled to the wrong journal, for example, a file journaled to a journal that does not match the journal definition for that data group. If you want to end journaling outside of MIMIX, use the ENDJRNPF command.

• To end journaling using command defaults, press Enter. Journaling is ended.

• To modify additional prompts for the command, press F4 (Prompt) and continue with the next step.

3. The End Journaling File Entry (ENDJRNFE) display appears. If you want to end journaling for all files in the library, specify *ALL at the System 1 file prompt.

4. Specify the value you want for the End journaling on system prompt. Press F4 to see a list of valid values.

When *DGDFN, *SRC, or *TGT is specified, MIMIX considers whether the data group is configured for journaling on the target system (JRNTGT) and ends or prevents journaling from ending as required.

5. If you want to use batch processing, specify *YES for the Submit to batch prompt.

6. To end journaling, press Enter.
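As noted above, journaling can be ended outside of MIMIX with the IBM End Journal Physical File (ENDJRNPF) command; the library and file names are hypothetical:

```
/* End journaling for a physical file outside of MIMIX */
ENDJRNPF FILE(MYLIB/MYFILE)
```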

Verifying journaling for physical files

Use this procedure to verify whether a physical file defined by a data group file entry is journaled correctly. This procedure invokes the Verify Journaling File Entry (VFYJRNFE) command to determine whether the file is journaled and whether it is journaled to the journal defined in the journal definition. When these conditions are met, the journal status on the Work with DG File Entries display is set to *YES. The command can also be entered from a command line.

To verify journaling for a physical file, do the following:

1. Access the journaled view of the Work with DG File Entries display as described in “Displaying journaling status for physical files” on page 326.

2. From the Work with DG File Entries display, type a 11 (Verify journaling) next to the file entry you want and do one of the following:

• To verify journaling using command defaults, press Enter.

• To modify additional prompts for the command, press F4 (Prompt) and continue with the next step.

3. The Verify Journaling File Entry (VFYJRNFE) display appears. The Data group definition prompts and the System 1 file prompts identify your selection. Accept these values or specify the values you want.

4. Specify the value you want for the Verify journaling on system prompt. When *DGDFN is specified, MIMIX considers whether the data group is configured for journaling on the target system (JRNTGT) when determining where to verify journaling.

5. If you want to use batch processing, specify *YES for the Submit to batch prompt.

6. Press Enter.
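Outside of MIMIX, a file’s journaling attributes can also be inspected with the IBM Display File Description (DSPFD) command; the library and file names are hypothetical:

```
/* The journaling information in the attribute output shows whether the
   file is journaled and to which journal */
DSPFD FILE(MYLIB/MYFILE) TYPE(*ATR)
```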

Journaling for IFS objects

IFS tracking entries are loaded for a data group after the data group IFS entries have been configured for replication through the user journal (advanced journaling). However, loading IFS tracking entries does not automatically start journaling on the IFS objects they identify. In order for replication to occur, journaling must be started on the source system for the IFS objects identified by IFS tracking entries.

This topic includes procedures to display journaling status, and to start, end, or verify journaling for IFS objects identified for replication through the user journal.


You should be aware of the information in “Long IFS path names” on page 119.

Displaying journaling status for IFS objects

Use this procedure to display journaling status for IFS objects identified by IFS tracking entries. Do the following:

1. From the MIMIX Intermediate Main Menu, type 1 and press Enter to access the Work with Data Groups display.

2. On the Work with Data Groups display, type 50 (IFS trk entries) next to the data group you want and press Enter.

3. The Work with DG IFS Trk. Entries display appears. The initial view shows the object type and status at the right of the display. Press F10 (Journaled view).

At the right side of the display, the Journaled System 1 and System 2 columns indicate whether the IFS object identified by the tracking entry is journaled on each system.

Starting journaling for IFS objects

Use this procedure to start journaling for IFS objects identified by IFS tracking entries.

This procedure invokes the Start Journaling IFS Entries (STRJRNIFSE) command. The command can also be entered from a command line.

To start journaling for IFS objects, do the following:

1. If you have not already done so, load the IFS tracking entries for the data group. Use the procedure in “Loading IFS tracking entries” on page 284.

2. Access the journaled view of the Work with DG IFS Trk. Entries display as described in “Displaying journaling status for IFS objects” on page 330.

3. From the Work with DG IFS Trk. Entries display, type a 9 (Start journaling) next to the IFS tracking entries you want. Then do one of the following:

• To start journaling using the command defaults, press Enter.

• To modify the command defaults, press F4 (Prompt) and continue with the next step.

4. The Start Journaling IFS Entries (STRJRNIFSE) display appears. The Data group definition and IFS objects prompts identify the IFS object associated with the tracking entry you selected. You cannot change the values shown for the IFS objects prompts1.

5. Specify the value you want for the Start journaling on system prompt. Press F4 to see a list of valid values.

When *DGDFN, *SRC, or *TGT is specified, MIMIX considers whether the data group is configured for journaling on the target system (JRNTGT) and starts or prevents journaling from starting as required.

6. To use batch processing, specify *YES for the Submit to batch prompt and press Enter. Additional prompts for Job description and Job name appear. Either accept the default values or specify other values.

7. The System 1 file identifier and System 2 file identifier prompts identify the file identifier (FID) of the IFS object on each system. You cannot change the values2.

8. To start journaling on the IFS objects specified, press Enter.

Ending journaling for IFS objects

Use this procedure to end journaling for IFS objects identified by IFS tracking entries.

This procedure invokes the End Journaling IFS Entries (ENDJRNIFSE) command. The command can also be entered from a command line.

To end journaling for IFS objects, do the following:

1. Access the journaled view of the Work with DG IFS Trk. Entries display as described in “Displaying journaling status for IFS objects” on page 330.

2. From the Work with DG IFS Trk. Entries display, type a 10 (End journaling) next to the IFS tracking entries you want. Then do one of the following:

• To end journaling using the command defaults, press Enter.

• To modify the command defaults, press F4 (Prompt) and continue with the next step.

3. The End Journaling IFS Entries (ENDJRNIFSE) display appears. The Data group definition and IFS objects prompts identify the IFS object associated with the tracking entry you selected. You cannot change the values shown for the IFS objects prompts1.

4. Specify the value you want for the End journaling on system prompt. Press F4 to see a list of valid values.

When *DGDFN, *SRC, or *TGT is specified, MIMIX considers whether the data group is configured for journaling on the target system (JRNTGT) and ends or prevents journaling from ending as required.

1. When the command is invoked from a command line, you can change values specified for the IFS objects prompts. Also, you can specify as many as 300 object selectors by using the + for more values prompt.

2. When the command is invoked from a command line, use F10 to see the FID prompts. Then you can optionally specify the unique FID for the IFS object on either system. The FID values can be used alone or in combination with the IFS object path name.

5. To use batch processing, specify *YES for the Submit to batch prompt and press Enter. Additional prompts for Job description and Job name appear. Either accept the default values or specify other values.

6. The System 1 file identifier and System 2 file identifier identify the file identifier (FID) of the IFS object on each system. You cannot change the values shown2.

7. To end journaling on the IFS objects specified, press Enter.

Verifying journaling for IFS objects

Use this procedure to verify whether an IFS object identified by an IFS tracking entry is journaled correctly. This procedure invokes the Verify Journaling IFS Entries (VFYJRNIFSE) command to determine whether the IFS object is journaled, whether it is journaled to the journal defined in the data group definition, and whether it is journaled with the attributes defined in the data group definition. The command can also be entered from a command line.

To verify journaling for IFS objects, do the following:

1. Access the journaled view of the Work with DG IFS Trk. Entries display as described in “Displaying journaling status for IFS objects” on page 330.

2. From the Work with DG IFS Trk. Entries display, type a 11 (Verify journaling) next to the IFS tracking entries you want. Then do one of the following:

• To verify journaling using the command defaults, press Enter.

• To modify the command defaults, press F4 (Prompt) and continue with the next step.

3. The Verify Journaling IFS Entries (VFYJRNIFSE) display appears. The Data group definition and IFS objects prompts identify the IFS object associated with the tracking entry you selected. You cannot change the values shown for the IFS objects prompts1.

4. Specify the value you want for the Verify journaling on system prompt. Press F4 to see a list of valid values.

When *DGDFN is specified, MIMIX considers whether the data group is configured for journaling on the target system (JRNTGT) and verifies journaling on the appropriate systems as required.

5. To use batch processing, specify *YES for the Submit to batch prompt and press Enter. Additional prompts for Job description and Job name appear. Either accept the default values or specify other values.

6. The System 1 file identifier and System 2 file identifier identify the file identifier (FID) of the IFS object on each system. You cannot change the values shown2.

7. To verify journaling on the IFS objects specified, press Enter.

For more information, see “Using file identifiers (FIDs) for IFS objects” on page 312.

Journaling for data areas and data queues

Object tracking entries are loaded for a data group after the data group object entries have been configured for replication through the user journal (advanced journaling). However, loading object tracking entries does not automatically start journaling on the objects they identify. In order for replication to occur, journaling must be started on the source system for the objects identified by object tracking entries.

This topic includes procedures to display journaling status, and to start, end, or verify journaling for data areas and data queues identified for replication through the user journal.

Displaying journaling status for data areas and data queues

Use this procedure to display journaling status for data areas and data queues identified by object tracking entries. Do the following:

1. From the MIMIX Intermediate Main Menu, type 1 and press Enter to access the Work with Data Groups display.

2. On the Work with Data Groups display, type 52 (Obj trk entries) next to the data group you want and press Enter.

3. The Work with DG Obj. Trk. Entries display appears. The initial view shows the object type and status at the right of the display. Press F10 (Journaled view).

At the right side of the display, the Journaled System 1 and System 2 columns indicate whether the object identified by the tracking entry is journaled on each system.

Starting journaling for data areas and data queues

Use this procedure to start journaling for data areas and data queues identified by object tracking entries.

This procedure invokes the Start Journaling Obj Entries (STRJRNOBJE) command. The command can also be entered from a command line.

To start journaling for data areas and data queues, do the following:

1. If you have not already done so, load the object tracking entries for the data group. Use the procedure in “Loading object tracking entries” on page 285.

2. Access the journaled view of the Work with DG Obj. Trk. Entries display as described in “Displaying journaling status for data areas and data queues” on page 334.

3. From the Work with DG Obj. Trk. Entries display, type a 9 (Start journaling) next to the object tracking entries you want. Then do one of the following:

• To start journaling using the command defaults, press Enter.

• To modify the command defaults, press F4 (Prompt) and continue with the next step.

4. The Start Journaling Obj Entries (STRJRNOBJE) display appears. The Data group definition and Objects prompts identify the object associated with the tracking entry you selected. Although you can change the values shown for these prompts, it is not recommended unless the command was invoked from a command line.

5. Specify the value you want for the Start journaling on system prompt. Press F4 to see a list of valid values.

When *DGDFN, *SRC, or *TGT is specified, MIMIX considers whether the data group is configured for journaling on the target system (JRNTGT) and starts or prevents journaling from starting as required.

6. To use batch processing, specify *YES for the Submit to batch prompt and press Enter. Additional prompts for Job description and Job name appear. Either accept the default values or specify other values.

7. To start journaling on the objects specified, press Enter.

Ending journaling for data areas and data queues

Use this procedure to end journaling for data areas and data queues identified by object tracking entries.

This procedure invokes the End Journaling Obj Entries (ENDJRNOBJE) command. The command can also be entered from a command line.

To end journaling for data areas and data queues, do the following:

1. Access the journaled view of the Work with DG Obj. Trk. Entries display as described in “Displaying journaling status for data areas and data queues” on page 334.

2. From the Work with DG Obj. Trk. Entries display, type a 10 (End journaling) next to the object tracking entries you want. Then do one of the following:

• To end journaling using the command defaults, press Enter.

• To modify the command defaults, press F4 (Prompt) and continue with the next step.

3. The End Journaling Obj Entries (ENDJRNOBJE) display appears. The Data group definition and Objects prompts identify the object associated with the tracking entry you selected. Although you can change the values shown for these prompts, it is not recommended unless the command was invoked from a command line.

4. Specify the value you want for the End journaling on system prompt. Press F4 to see a list of valid values.

When *DGDFN, *SRC, or *TGT is specified, MIMIX considers whether the data group is configured for journaling on the target system (JRNTGT) and ends or prevents journaling from ending as required.

5. To use batch processing, specify *YES for the Submit to batch prompt and press Enter. Additional prompts for Job description and Job name appear. Either accept the default values or specify other values.

6. To end journaling on the objects specified, press Enter.

Verifying journaling for data areas and data queues

Use this procedure to verify whether an object identified by an object tracking entry is journaled correctly. This procedure invokes the Verify Journaling Obj Entries (VFYJRNOBJE) command to determine whether the object is journaled, whether it is journaled to the journal defined in the data group definition, and whether it is journaled with the attributes defined in the data group definition. The command can also be entered from a command line.

To verify journaling for objects, do the following:

1. Access the journaled view of the Work with DG Obj. Trk. Entries display as described in “Displaying journaling status for data areas and data queues” on page 334.

2. From the Work with DG Obj. Trk. Entries display, type a 11 (Verify journaling) next to the object tracking entries you want. Then do one of the following:

• To verify journaling using the command defaults, press Enter.

• To modify the command defaults, press F4 (Prompt) and continue with the next step.

3. The Verify Journaling Obj Entries (VFYJRNOBJE) display appears. The Data group definition and Objects prompts identify the object associated with the tracking entry you selected. Although you can change the values shown for these prompts, it is not recommended unless the command was invoked from a command line.

4. Specify the value you want for the Verify journaling on system prompt. Press F4 to see a list of valid values.

When *DGDFN is specified, MIMIX considers whether the data group is configured for journaling on the target system (JRNTGT) and verifies journaling on the appropriate systems as required.

5. To use batch processing, specify *YES for the Submit to batch prompt and press Enter. Additional prompts for Job description and Job name appear. Either accept the default values or specify other values.

6. To verify journaling on the objects specified, press Enter.

Chapter 15

Configuring for improved performance

This chapter describes how to modify your configuration to use advanced techniques to improve journal performance and MIMIX performance.

Journal performance: The following topics describe how to improve journal performance:

• “Minimized journal entry data” on page 339 describes benefits of and restrictions for using minimized user journal entries for *FILE and *DTAARA objects. A discussion of large object (LOB) data in minimized entries and configuration information are included.

• “Configuring for high availability journal performance enhancements” on page 341 describes journal caching and journal standby state within MIMIX, which support the Journal Standby and journal caching features of IBM i5/OS option 42, High Availability Journal Performance. Requirements and restrictions are included.

MIMIX performance: The following topics describe how to improve MIMIX performance:

• “Caching extended attributes of *FILE objects” on page 345 describes how to change the maximum size of the cache used to store extended attributes of *FILE objects replicated from the system journal.

• “Increasing data returned in journal entry blocks by delaying RCVJRNE calls” on page 346 describes how you can improve object send performance by changing the size of the block of data from a receive journal entry (RCVJRNE) call and delaying the next call based on a percentage of the requested block size.

• “Configuring high volume objects for better performance” on page 350 describes how to change your configuration to improve system journal performance.

• “Improving performance of the #MBRRCDCNT audit” on page 351 describes how to use the CMPRCDCNT commit threshold policy to limit comparisons and thereby improve performance of this audit in environments which use commitment control.

Minimized journal entry data

MIMIX supports the ability to process minimized journal entries placed in a user journal for object types of file (*FILE) and data area (*DTAARA).

The i5/OS operating system can create journal entries using an internal format that minimizes the entry-specific data stored in the journal entry for these object types. This support is enabled through the MIMIX commands for creating or changing journal definitions and is put into effect when the journaling environment is built with the Build Journaling Environment (BLDJRNENV) command.

When a journal entry for one of these object types is generated, the system compares the size of the minimized format to the standard format and places whichever is smaller in the journal. For database files, only update journal entries (R-UP and R-UB) and rollback-type update entries (R-BR and R-UR) can be minimized.

If MINENTDTA(*FILE) or MINENTDTA(*FLDBDY) is in effect and a database record includes LOB fields, LOB data is journaled only when that LOB is changed. Changes to other fields in the record will not cause the LOB data to be journaled unless the LOB is also changed. When database files have records with static LOB values, minimized journal entries can produce considerable savings.

The benefit of using minimized journal entries is that less data is stored in the journal. In a MIMIX replication environment, you also benefit by having less data sent over communications lines and saved in MIMIX log spaces. Factors in your environment, such as the percentage of journal entries that are updates (R-UP), the size of database records, and the number of bytes typically changed in an update, may influence how much benefit you achieve.

Restrictions of minimized journal entry data

The following MIMIX and operating system restrictions apply:

• If you plan to use keyed replication, do not use minimized journal entry data. Minimized journal entries cannot be used when MIMIX support for keyed replication is in use, because the key may not be present in a minimized journal entry.

• The use of the value *FLDBDY for minimized journal entry data is limited to systems running i5/OS V5R4 or higher.

• Minimized before-images cannot be selected for automatic before-image synchronization checking.

Your environment may impose additional restrictions:

• If you rely on full image captures in the receiver as part of your auditing rules, do not configure for minimized entry data.

• Even if you do not rely on full image captures for auditing purposes, consider the effect of how data is minimized. The minimizing that results from specifying *FILE does not occur on field boundaries. Therefore, the entry-specific data may not be viewable and may not be used for auditing purposes. When *FLDBDY is specified, file data for modified fields is minimized on field boundaries. With *FLDBDY, entry-specific data is viewable and may be used for auditing purposes.


• Configuring for minimized journal entry data may affect your ability to use the Work with Data Group File Entries on Hold (WRKDGFEHLD) command. For example, using option 2 (Change) on WRKDGFEHLD to convert a minimized record update (R-UP) to a record put (R-PT) will result in a failure when the entry is applied, because an R-PT requires a full, non-minimized record.

See the IBM book Backup and Recovery for restrictions and usage of journal entries with minimized entry-specific data.

Updated for 5.0.02.00.

Configuring for minimized journal entry data

By default, MIMIX user journal replication processes use complete journal entry data. To enable MIMIX to use minimized journal entry data for specific object types, do the following:

1. From the Work with Journal Definitions display, use option 2 (Change) to access the journal definition you want.

2. On the following display, press Enter twice to see all prompts for the display. Page down to the bottom of the display.

3. Press F10 (Additional parameters) to access the Minimize entry specific data prompt.

4. Specify the values you want at the Minimize entry specific data prompt and press Enter.

5. In order for the changes to be effective, you must build the journaling environment using the updated journal definition. To do this, type 14 (Build) next to the definition you just modified on the Work with Journal Definitions display and press Enter.


Configuring for high availability journal performance enhancements

MIMIX supports IBM's high availability journal performance enhancements of i5/OS option 42 (HA Journal Performance): the Journal Standby feature and journal caching. These enhancements improve replication performance on the target system and provide significant switch-time improvement by eliminating the need to start journaling at switch time.

MIMIX support of IBM’s high availability performance enhancements consists of two independent components: journal standby state and journal caching. These components work individually or together, although when used together, each component must be enabled separately. Journal standby state minimizes replication impact on the target system by providing the benefits of an active journal without writing the journal entries to disk. As such, journal standby state is particularly helpful in saving disk space in environments that do not rely on journal entries for other purposes. Moreover, journal standby state minimizes switch times by retaining the journal relationship for replicated objects.

Journal caching provides a means by which to cache journal entries and their corresponding database records into main storage and write to disks only as necessary. Journal caching is particularly helpful during batch operations when large numbers of add, update, and delete operations against journaled objects are performed.

Journal standby state and journal caching can be used in source send configuration environments as well as in environments where remote journaling is enabled. For restrictions of MIMIX support of IBM’s high availability performance enhancements, see “Restrictions of high availability journal performance enhancements” on page 343.

Note: For more information, also see the topics on journal management and system performance in the IBM eServer iSeries Information Center.

Journal standby state

Journal standby state minimizes replication impact by providing the benefits of an active journal without writing the journal entries to disk. As such, journal standby state is particularly helpful in saving disk space in environments that do not rely on journal entries for other purposes. Moreover, if you are journaling on apply, journal standby state can provide a performance improvement on the apply session.

If you are not using journaling on the target and want to have a switchable data group, using journal standby state may offer a benefit in reduced switch time. When a journal is in standby state, it is not necessary to start journaling for objects on the target system prior to switching. All that is necessary prior to switching is to change the journal state to active.

You can start or stop journaling while the journal standby state is enabled. However, commitment control cannot be used for files that are journaled to any journal in standby state. Most referential constraints cannot be used when the journal is in standby state. When journal standby state is not an option because of these restrictions, journal caching can be used as an alternative. See "Journal caching" on page 342.

Minimizing potential performance impacts of standby state

It is possible to experience degraded performance of database apply (DBAPY) processing after enabling journal standby state. You can reduce potential impacts by using the Change Recovery for Access Paths (CHGRCYAP) command, which allows you to change the target access path recovery time for the system.

Note: While this procedure improves performance, it can cause potentially longer initial program loads (IPLs). Deciding to use standby state is a trade-off between run-time performance and IPL duration.

Do the following:

1. On a command line, type the following and press Enter:

CHGRCYAP

2. At the Include access paths prompt, specify *ELIGIBLE to include only eligible access paths in the recovery time specification.

Journal caching

Journal caching is an attribute of the journal that is defined. When journal caching is enabled, the system caches journal entries and their corresponding database records into main storage. This means that neither the journal entries nor their corresponding database records are written to disk until an efficient disk write can be scheduled. This usually occurs when the buffer is full, or at the first commit, close, or file end of data. Because most database transactions must no longer wait for a synchronous write of the journal entries to disk, the performance gain can be significant.

For example, batch operations must usually wait for each new journal entry to be written to disk. Journal caching can be helpful during batch operations when large numbers of add, update, and delete operations against journaled objects are performed.

The default value for journal caching is *BOTH. It is recommended that you use the default value of *BOTH to perform journal caching on both the source and the target systems.

For more information about journal caching, see the IBM Redbooks technote "Journal Caching: Understanding the Risk of Data Loss".

MIMIX processing of high availability journal performance enhancements

You can enable both journal standby state and journal caching using a combination of MIMIX and IBM commands. For example, the Journal state (JRNSTATE) parameter, available on the IBM command Change Journal (CHGJRN), offers equivalent and complementary function to the MIMIX parameter Target journal state (TGTSTATE).

Note: For purposes of this document, only MIMIX parameters are described in detail.


To enable journal standby state or journal caching in a MIMIX environment, two parameters have been added to the Create Journal Definition (CRTJRNDFN) and Change Journal Definition (CHGJRNDFN) commands: Target journal state (TGTSTATE) and Journal caching (JRNCACHE). See “Creating a journal definition” on page 215 and “Changing a journal definition” on page 217.

When journaling is used on the target system, the TGTSTATE parameter specifies the requested status of the target journal. Valid values for the TGTSTATE parameter are *ACTIVE and *STANDBY. When *ACTIVE is specified and the data group associated with the journal definition is journaling on the target system (JRNTGT(*YES)), the target journal state is set to active when the data group is started. When *STANDBY is specified, objects are journaled on the target system, but most journal entries are prevented from being deposited into the target journal. An additional value, *SAME, is valid for the CHGJRNDFN command, which indicates the TGTSTATE value should remain unchanged.

The JRNCACHE parameter specifies whether the system should cache journal entries in main storage before writing them to disk. Valid values for the JRNCACHE parameter are *TGT, *BOTH, *NONE, or *SRC. Although journal caching can be configured on the target system, source system, or both, it is recommended to be performed on both (*BOTH) the target system and source system. The recommended value of *BOTH is the default. An additional value, *SAME, is valid for the CHGJRNDFN command, which indicates the JRNCACHE value should remain unchanged.

Requirements of high availability journal performance enhancements

Table 44 identifies the software required in order to use MIMIX support of IBM's high availability performance enhancements. Each system in the replication environment must have this software installed and be up to date with the latest PTFs and service packs applied.

Table 44. Software requirements for MIMIX support of IBM's high availability performance enhancements

Software                       Minimum level
i5/OS                          V5R3M0 or higher
LPP installed and available    Product 5722SS1, option 42, feature 5117, i5/OS - HA Journal Performance

Restrictions of high availability journal performance enhancements

MIMIX support of IBM's high availability performance enhancements has a unique set of restrictions and high availability considerations. Make sure that you are aware of these restrictions before using journal standby state or journal caching in your MIMIX environment.

When using journal standby state or journal caching, be aware of the following restrictions documented by IBM:

• Do not use these high availability performance enhancements in conjunction with commitment control. For journals in standby mode, commitment control entries are not sent to or deposited in the journal.

Note: MIMIX does not use commitment control on the target system. As such, MIMIX support of IBM's high availability performance enhancements can be configured on the target system even if commitment control is being used on the source system.

• Do not use these high availability performance enhancements in conjunction with referential constraints, with the exception of referential constraint types of *RESTRICT.

Also be aware of the following additional restrictions:

• Do not change journal standby state or journal caching on IBM-supplied journals. These journal names begin with "Q" and reside in libraries whose names also begin with "Q" (other than QGPL). Attempting to change these journals results in an error message.

• Do not place a remote journal in journal standby state. Journal caching is also not allowed on remote journals.

• Do not use MIMIX support of IBM’s high availability performance enhancements in a cascading environment.


Caching extended attributes of *FILE objects

In order to accurately replicate actions against *FILE objects, it is sometimes necessary to retrieve the extended attribute of a *FILE object, such as PF, LF, or DSPF. Whenever large volumes of journal entries for *FILE objects are replicated from the security audit journal (system journal), MIMIX caches this information for a fixed set of *FILE objects to prevent unnecessary retrievals of the extended attribute. The result is a potential reduction of CPU consumption by the object send job and a significant performance improvement.

This function can be tailored to suit your environment. The maximum size of the cache is controlled through the use of a data area in the MIMIX product library. The cache size indicates the number of entries that can be contained in the cache. If the data area is not created or does not exist in the MIMIX product library, the size of the cache defaults to 15.

To configure the extended attribute cache, do the following:

1. Create the data area on the systems on which the object send jobs are running. Type the following command:

CRTDTAARA DTAARA(installation_library/MXOBJSND) TYPE(*CHAR) LEN(2)

2. Specify the cache size (xx). Valid cache values are numbers 00 through 99. Type the following command:

CHGDTAARA DTAARA(installation_library/MXOBJSND) VALUE('xx, RCVJRNE_delay_values')

Notes:

• The four RCVJRNE delay values are specified in this string along with the cache size. See topic "Increasing data returned in journal entry blocks by delaying RCVJRNE calls" on page 346 for more information.

• Using 00 for the cache size value disables the extended attribute cache.
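The behavior described above can be sketched as a small bounded cache. This Python sketch is illustrative only (not MIMIX internals); the class and method names are hypothetical, but it shows how a fixed-size cache avoids repeated attribute retrievals and how a size of 0 disables caching.

```python
from collections import OrderedDict

class ExtAttrCache:
    """Bounded cache of *FILE extended attributes (PF, LF, DSPF, ...).

    size mirrors the MXOBJSND cache size (00-99, default 15); the oldest
    entry is evicted once the cache is full, and 0 disables caching.
    """

    def __init__(self, size: int = 15):
        self.size = size
        self._cache = OrderedDict()

    def lookup(self, obj: str, retrieve) -> str:
        """Return the cached attribute, calling retrieve(obj) on a miss."""
        if self.size == 0:                 # cache disabled
            return retrieve(obj)
        if obj in self._cache:
            return self._cache[obj]        # hit: no retrieval needed
        attr = retrieve(obj)
        if len(self._cache) >= self.size:  # evict the oldest entry
            self._cache.popitem(last=False)
        self._cache[obj] = attr
        return attr
```

Under this model, repeated journal entries for the same *FILE object cost only one retrieval, which is the CPU saving the object send job realizes.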

Increasing data returned in journal entry blocks by delaying RCVJRNE calls

Enhancements have been made to MIMIX to increase the performance of the object send job when a small number of journal entries are present during the Receive Journal Entry (RCVJRNE) call. Journal entries are received in configurable-sized blocks that have a default size of 99,999 bytes. When multiple RCVJRNE calls are performed and each block retrieved is less than 99,999 bytes, unnecessary overhead is created.

Through additional controls added to the MXOBJSND *DTAARA objects within the MIMIX installation library, you can now specify the size of the block of data received from RCVJRNE and delay the next RCVJRNE call based on a percentage of the requested block size. Doing so increases the probability of receiving a full journal entry block and improves object send performance by reducing the number of RCVJRNE calls while simultaneously increasing the quantity of data returned in each block. This delay, along with the extended file attribute cache capability, also reduces CPU consumption by the object send job. See "Caching extended attributes of *FILE objects" on page 345 for related information.

Understanding the data area format

This enhancement allows you to provide byte values for the block size to receive data from RCVJRNE, as well as specify the percentage of that block size to use for both a small delay block and a medium delay block in the data area. These values are added in segments to the string of characters used by the file attribute cache size. Each block segment is followed by a multiplier value, which determines how long the previously specified journal entry block is delayed. The duration of the delay is the multiplier value multiplied by the value specified on the Reader wait time (seconds) (RDRWAIT) parameter in the data group definition. The RDRWAIT default value is 1 second. The RCVJRNE block size is specified in kilobytes, ranging from 32 Kb to 4000 Kb. If not specified, the default size is 99,999 bytes (100 Kb - 1).

The following defines each segment; the digit after each segment name indicates the number of characters that segment can contain:

DTAARA VALUE('cache_size2, small_block_percentage2, small_multiplier2, medium_block_percentage2, medium_multiplier2, block_size4')

To illustrate the effect of specific delay and multiplier values, let us assume the following:

DTAARA VALUE('15,10,02,30,01,0200')

In this example, a small block is defined as any journal entry block of up to 10 percent of the RCVJRNE block size of 200 Kb, that is, up to 20,000 bytes. Assuming the RDRWAIT default is in effect, small journal entry blocks will be delayed for 2 seconds before the next RCVJRNE call. Similarly, a medium block is defined as any journal entry block containing between 10 and 30 percent of the RCVJRNE block size, between 20,001 and 60,000 bytes. Medium blocks are then delayed for 1 second, assuming the default RDRWAIT value is used.


Note: Delays are not applied to blocks larger than the specified medium block percentage. In the previous example, no delays will be applied to blocks larger than 30 percent of the RCVJRNE block size, or 60,000 bytes.
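The delay computation in the worked example above can be sketched in Python. This is illustrative only, not MIMIX code; the function name is hypothetical, and 1 Kb is treated as 1,000 bytes to match the example's arithmetic.

```python
def rcvjrne_delay(block_bytes: int, dtaara_value: str, rdrwait: int = 1) -> int:
    """Seconds to wait before the next RCVJRNE call.

    dtaara_value has the segments described above:
    'cache_size,small_pct,small_mult,med_pct,med_mult,block_size_kb'
    """
    _cache, s_pct, s_mult, m_pct, m_mult, blk_kb = (
        int(v) for v in dtaara_value.split(","))
    size = blk_kb * 1000                       # block size in bytes
    if block_bytes <= size * s_pct // 100:     # small block
        return s_mult * rdrwait
    if block_bytes <= size * m_pct // 100:     # medium block
        return m_mult * rdrwait
    return 0                                   # larger blocks: no delay

# With '15,10,02,30,01,0200' and the default RDRWAIT of 1 second:
# blocks up to 20,000 bytes wait 2 s, 20,001-60,000 bytes wait 1 s,
# and anything larger is not delayed.
```

Tracing the example values ('15,10,02,30,01,0200') through this sketch reproduces the 2-second and 1-second delays described above.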

Determining if the data area should be changed

Before changing the data area, it is recommended that you contact a Certified MIMIX Consultant for assistance with running object send processing with diagnostic messages enabled. Review the set of LVI0001 messages returned as a result.

By default, the RCVJRNE block size is 99,999 bytes, with the small block value set to 5,000 bytes and the medium block value set to 20,000 bytes. If the resulting messages indicate that you are processing full journal entry blocks, there is no need to add a delay to the RCVJRNE call. In this case, the object send job is already running as efficiently as possible. Note that a block is considered full when the next journal entry in the sequence cannot fit within the size limitations of the block currently being processed.

Note: Reviewing these messages can also be helpful once you have changed the default values, to ensure that the object send job is operating efficiently.

The following is an example of LVI0001 messages:

LVI0001 OM2120 Block Sizes (in Kb): Small=20; Medium=60

LVI0001 OM2120 Block Counts: Small=129; Medium=461; Large=46; Full=1

LVI0001 OM2120 Using RCVJRNE Block Size (in Kb): 200

LVI0001 OM2120 - Range Counts: 0%=80; 2%=28; 5%=21; 10%=23; 15%=56; 20%=161; 25%=221; 30%=23

LVI0001 OM2120 - Range Counts: 40%=10; 50%=4; 60%=5; 70%=3; 80%=0; 90%=1; Full=1

OM2120 File Attr Cache: Size= 30, no cache lookup attempts

In the above example, 636 blocks were sent but only one of the sent blocks was full. Making changes to the delay multiplier or altering the small or medium block size specification would probably make sense in this scenario. Lakeview provides recommendations for changing the block size values in "Configuring the RCVJRNE call delay and block values" on page 347.

Configuring the RCVJRNE call delay and block values

To configure the delay and block values when retrieving journal entry blocks, do the following:

Note: Prior to configuring the RCVJRNE call delay, carefully read the information provided in “Understanding the data area format” on page 346 and “Determining if the data area should be changed” on page 347.

1. Create the data area on the systems on which the object send jobs are running. Type the following command:

CRTDTAARA DTAARA(installation_library/MXOBJSND) TYPE(*CHAR) LEN(20)

Note: Although you will see improvements from the file attribute cache with the default character length (LEN(2)), enhancements are maximized by recreating the MXOBJSND data area with LEN(20) to use the RCVJRNE call delays.

2. Specify the RCVJRNE block size, percentages, and multipliers to be used for the delay. Valid values for the RCVJRNE block size are 32 Kb to 4000 Kb. Valid values for the percentages and multipliers are numbers 01 through 99. Lakeview recommends typing the following as a starting point, where cache_size is the two-character size of the file attribute cache:

CHGDTAARA DTAARA(installation_library/MXOBJSND) VALUE('cache_size,10,02,30,01,0100')

Note: For information about the cache size, see “Caching extended attributes of *FILE objects” on page 345.


Configuring high volume objects for better performance

Some objects, such as data areas and data queues, can have significant activity against them and can cause MIMIX to use significant CPU resource.

One or several programs can use the QSNDDTAQ and QRCVDTAQ APIs to generate thousands of journal entries for a single *DTAQ. For each journal entry, system journal replication processes package all of the entries of the *DTAQ and send them to the apply system. MIMIX then individually applies each *DTAQ entry using the QSNDDTAQ API.

If the data group is configured for multiple Object retrieve processing (OBJRTVPRC) jobs, then several object retrieve jobs could be started (up to the maximum configured) to handle the activity against the *DTAQ.

MIMIX contains redundancy logic that eliminates multiple journal entries for the same object when the entire object is replicated. When you configure a data group for system journal replication, you should:

• Place all *DTAQs in the same object-only data group

• Limit the maximum number of object retrieve jobs for the data group to one. Defaults can be used for the other object data group jobs.


Improving performance of the #MBRRCDCNT audit

Environments that use commitment control may find that, in some conditions, a request to run the #MBRRCDCNT audit or the Compare Record Count (CMPRCDCNT) command can be extremely long-running. This is possible in environments that use commitment control with long-running commit transactions that include large numbers (tens of thousands) of record operations within one transaction. In such an environment, the compare request can be long-running when the number of members to be compared is very large and there are uncommitted changes present at the time of the request.

The Set MIMIX Policies (SETMMXPCY) command includes the CMPRCDCNT commit threshold policy (CMPRCDCMT parameter), which provides the ability to specify a threshold at which requests to compare record counts will no longer perform the comparison due to commit cycle activity on the source system.

The shipped default values for this policy (CMPRCDCMT parameter) permit record count comparison requests without regard to commit cycle activity on the source system. These policy default values are suitable for environments that do not have the commitment control environment indicated, or that can tolerate a long-running comparison.

If your environment cannot tolerate a long-running request, you can specify a numeric value for the CMPRCDCMT parameter for either the MIMIX installation or for a specific data group. This will change the behavior of MIMIX by affecting what is compared, and can improve performance of #MBRRCDCNT and CMPRCDCNT requests.

Note: Equal record counts suggest but do not guarantee that files are synchronized. When a threshold is specified for the CMPRCDCNT commit threshold policy, record count comparisons can have a higher number of file members that are not compared. This must be taken into consideration when using the comparison results as a gauge of whether systems are synchronized.

A numeric value for the CMPRCDCMT parameter defines the maximum number of uncommitted record operations that can exist for files waiting to be applied in an apply session at the time a compare record count request is invoked. The number specified must be representative of the number of uncommitted record operations.

When a numeric value is specified, MIMIX recognizes whether the number of uncommitted record operations for an apply session exceeds the threshold at the time a compare request is invoked. If an apply session has not reached the threshold, the comparison is performed. If the threshold is exceeded, MIMIX will not attempt to compare members from that apply session. Instead, the results will display the *CMT value for the difference indicator, indicating that commit cycle activity on the source system prevented active processing from comparing counts of current records and deleted records in the selected member.

Each database apply session is evaluated against the threshold independently. As a result, it is possible for record counts to be compared for files in one apply session but not be compared in another apply session, as illustrated in the following example.


Example: This example shows the result of setting the policy for a data group to a value of 10,000. Table 45 shows the files replicated by each of the apply sessions used by the data group and the result of comparison. Because of the number of uncommitted record operations present at the time of the request, files processed by apply sessions A and C are not compared.

Table 45. Sample results with a policy threshold value of 10,000

Apply Session   File   Uncommitted Record Operations   Apply Session Total   Result
                       (Per File)
A               A01    11,000                          > 10,000              Not compared, *CMT
A               A02    0                                                     Not compared, *CMT
B               B01    5,000                           < 10,000              Compared
B               B02    0                                                     Compared
C               C01    7,000                           > 10,000              Not compared, *CMT
C               C02    6,000                                                 Not compared, *CMT
D               D01    50                              < 10,000              Compared
D               D02    500                                                   Compared
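The per-apply-session evaluation illustrated in Table 45 can be sketched in Python. This is illustrative only, not MIMIX code; the function name is hypothetical, and a session is skipped only when its total exceeds the threshold, matching the "threshold is exceeded" wording above.

```python
def compare_decisions(sessions: dict, threshold: int) -> dict:
    """Map each file to 'Compared' or '*CMT'.

    sessions maps an apply session to a dict of its files'
    uncommitted record-operation counts. Each session is evaluated
    independently: if its total exceeds the threshold, none of its
    files are compared and they are flagged *CMT.
    """
    results = {}
    for session, per_file in sessions.items():
        total = sum(per_file.values())
        verdict = "*CMT" if total > threshold else "Compared"
        for f in per_file:
            results[f] = verdict
    return results

# The Table 45 scenario with a threshold of 10,000:
sessions = {
    "A": {"A01": 11_000, "A02": 0},
    "B": {"B01": 5_000, "B02": 0},
    "C": {"C01": 7_000, "C02": 6_000},
    "D": {"D01": 50, "D02": 500},
}
decisions = compare_decisions(sessions, 10_000)
```

Running the Table 45 values through this sketch flags the files of sessions A and C as *CMT while the files of sessions B and D are compared.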


Chapter 16

Configuring advanced replication techniques

This chapter describes how to modify your configuration to support advanced replication techniques for user journal (database) and system journal (object) replication.

User journal replication: The following topics describe advanced techniques for user journal replication:

• “Keyed replication” on page 355 describes the requirements and restrictions of replication that is based on key values within the data. This topic also describes how to configure keyed replication at the data group or file entry level as well as how to verify key attributes.

• “Data distribution and data management scenarios” on page 361 defines and identifies configuration requirements for the following techniques: bi-directional data flow, file combining, file sharing, file merging, broadcasting, and cascading.

• “Trigger support” on page 368 describes how MIMIX handles triggers and how to enable trigger support. Requirements and considerations for replication of triggers, including considerations for synchronizing files with triggers, are included.

• “Constraint support” on page 370 identifies the types of constraints MIMIX supports. This topic also describes delete rules for referential constraints that can cause dependent files to change and MIMIX considerations for replication of constraint-induced modifications.

• “Handling SQL identity columns” on page 373 describes the problem of duplicate identity column values and how the Set Identity Column Attribute (SETIDCOLA) command can be used to support replication of SQL tables with identity columns. Requirements and limitations of the SETIDCOLA command as well as alternative solutions are included.

• “Collision resolution” on page 381 describes available support within MIMIX to automatically resolve detected collisions without user intervention and its requirements. This topic also describes how to define and work with collision resolution classes.

System journal replication: The following topics describe advanced techniques for system journal replication:

• “Omitting T-ZC content from system journal replication” on page 387 describes considerations and requirements for omitting content of T-ZC journal entries from replicated transactions for logical and physical files.

• “Selecting an object retrieval delay” on page 391 describes how to set an object retrieval delay value so that a MIMIX lock on an object does not interfere with your applications. This topic includes several examples.

• "Configuring to replicate SQL stored procedures and user-defined functions" on page 393 describes the requirements for replicating these constructs and how to configure MIMIX to replicate them.


• "Using Save-While-Active in MIMIX" on page 396 describes how to change the type of save-while-active option to be used when saving objects. You can view and change these configuration values for a data group through an interface such as SQL or DFU.


Keyed replication

By default, MIMIX user journal replication processes use positional replication. You can change from positional replication to keyed replication for database files.

Keyed vs positional replication

In data groups that are configured for user journal replication, default values use positional replication. In positional file replication, data on the target system is identified by position, or relative record number (RRN), in the file member. If data exists in a file on the source system, an exact copy must exist in the same position in a file on the target system. When the file on the source system is updated, MIMIX finds the data in the exact location on the target system and updates that data with the changes.

User journal replication processes support the update of files by key, allowing replication to be based on key values within the data instead of on the position of the data within the file. Keyed replication support is subject to the requirements and restrictions described below.

Positional file replication provides the best performance. Keyed file replication offers a greater level of flexibility, but you may notice greater CPU usage when MIMIX must search each file for the specified key. You also need to be aware that data “collisions” can occur when an attempt is made to simultaneously update the same data from two different sources.

Lakeview Technology recommends positional replication for most high availability requirements. Keyed replication is best used for more flexible scenarios, such as file sharing, file routing, or file combining.
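The performance difference between the two approaches can be sketched in Python. This is illustrative only, not MIMIX code; the function names and record layout are hypothetical, but they show why positional apply is a direct index operation while keyed apply must locate the record by key.

```python
def apply_positional(target: list, rrn: int, record: dict) -> None:
    """Positional replication: locate the record by relative record
    number (RRN). A direct index operation; the fastest path."""
    target[rrn - 1] = record

def apply_keyed(target: list, key_field: str, record: dict) -> None:
    """Keyed replication: locate the record by its unique key value.
    The search costs CPU on every update, but the record need not be
    at the same position on the target."""
    for i, row in enumerate(target):
        if row[key_field] == record[key_field]:
            target[i] = record
            return
    raise LookupError("key not found on target")
```

This also illustrates why positional replication requires target records to occupy the same RRNs as on the source, while keyed replication tolerates differing positions at the cost of the key search.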

Requirements for keyed replication

Journal images - MIMIX may need to be configured so that both before and after images of the journal transaction are placed in the journal.

The Journal image element of the File and tracking entry options (FEOPT) parameter controls which journal images are placed in the journal. Default values result in only an after-image of the record. However, some configurations require both before-images and after-images. The Journal image value specified in the data group definition is in effect unless a different value is specified for the FEOPT parameter in a file entry or object entry.

It is recommended that you use the Journal image value of *BOTH whenever there are file entries with keyed replication to prevent before images from being filtered out by the database send process. If the unique key fields of the database file are updated by applications, you must use the value *BOTH.

Unique access path - At least one unique access path must exist for the file being replicated. The access path can be either part of the physical file itself or it can be defined in a logical file dependent on the physical file.


You can use the Verify Key Attributes (VFYKEYATR) command to determine whether a physical file is eligible for keyed replication. See “Verifying key attributes” on page 359.

Restrictions of keyed replication

MIMIX does not support keyed replication in data groups that are configured for MIMIX Dynamic Apply.

The Compare File Data (CMPFILDTA) command cannot compare files that are configured for keyed replication. If you run the #FILDTA audit or the CMPFILDTA command against keyed files, the files are excluded from the comparison and a message indicates that files using *KEYED replication were not processed.

When keyed replication is in use, the journal and journal definition cannot be configured to allow object types to support minimized entry specific data. For more information, see “Minimized journal entry data” on page 339.

Implementing keyed replication

You can implement keyed replication for an entire data group or for individual data group file entries. If you configure a data group for keyed replication, MIMIX uses keyed replication as the default for processing of all associated data group file entries. If you configure individual data group file entries for keyed replication, the values you define in the data group file entry override the defaults used by the data group for the associated file.

Changing a data group configuration to use keyed replication

You can define keyed replication for a data group when you are initially configuring MIMIX or you can change the configuration later. To use keyed replication for all database replication defined for a data group, the following requirements must be met:

1. Before you change a data group definition to support keyed replication, do the following:

a. Verify that the files defined to the data group are journaled correctly.

b. If the files are not currently journaled correctly, you need to end journaling for the file entries defined to the data group. Use topic “Ending Journaling” in the Using MIMIX book.

2. In the data group definition used for replication you must specify the following:

• Data group type of *ALL or *DB.

Attention: If you attempt to change the file replication from *KEYED to *POSITION, a warning message will be returned that indicates that the position of the file may not match the position of the file on the backup system. Attempting to change from keyed to positional replication can result in a mismatch of the relative record numbers (RRN) between the target system and source system.

• DB journal entry processing must have Before images as *SEND for source send configurations. When using remote journaling, all journal entries are sent.

• Verify that you have the value you need specified for the Journal image element of the File and tracking ent. options. *BOTH is recommended.

• File and tracking ent. options must specify *KEYED for the Replication type element.

3. The files identified by the data group file entries for the data group must be eligible for keyed replication. See topic “Verifying Key Attributes” in the Using MIMIX book.

4. If you have modified file entry options on individual data group file entries, you need to ensure that the values used are compatible with keyed replication.

5. Start journaling for the file entries using “Starting journaling for physical files” on page 326.

Changing a data group file entry to use keyed replication

By default, data group file entries use the same file entry options as specified in the data group definition. If you configure individual data group file entries for keyed replication, the values you define in the data group file entry override the defaults used by the data group for the associated file.

If you want to use keyed replication for one or more individual data group file entries defined for a data group, you need the following:

1. Before you change a data group file entry to support keyed replication, you should ensure that the file is already journaled correctly. If the file is not being journaled correctly (for example, the data group file entry is not set as described in Step 4), you will need to end journaling for the file entries.

2. The data group definition used for replication must have a Data group type of *ALL or *DB.

3. DB journal entry processing must have Before images as *SEND for source send configurations. When using remote journaling, all journal entries are sent.

4. The data group file entry must have File and tracking ent. options set as follows:

• To override the defaults from the data group definition to use keyed replication on only selected data group file entries, verify that you have the value you need specified for the Journal image (*BOTH is recommended) and specify *KEYED for the Replication type.

• If you are using keyed replication at the data group level, the data group file entries can use the default value *DGDFT for both Journal image and Replication type.

Note: You can use any of the following ways to configure data group file entries for keyed replication:

• Use either procedure in topic “Loading file entries” on page 272 to add or modify a group of data group file entries. If you are modifying existing file entries in this way, you should specify *UPDADD for the Update option parameter.

• Use topic “Adding a data group file entry” on page 278 to create a new file entry.

• Use topic “Changing a data group file entry” on page 279 to modify an existing file entry.

5. The files identified by the data group file entries for the data group must be eligible for keyed replication. See topic “Verifying Key Attributes” in the Using MIMIX book.

6. After you have changed individual data group file entries, you need to start journaling for the file entries using “Starting journaling for physical files” on page 326.

Verifying key attributes

Before you configure for keyed replication, verify that the file or files for which you want to use keyed replication are actually eligible.

Do the following to verify that the attributes of a file are appropriate for keyed replication:

1. On a command line, type VFYKEYATR (Verify Key Attributes). The Verify Key Attributes display appears.

2. Do one of the following:

• To verify a file in a library, specify a file name and a library.

• To verify all files in a library, specify *ALL and a library.

• To verify files associated with the file entries for a data group, specify *MIMIXDFN for the File prompt and press Enter. Prompts for the Data group definition appear. Specify the name of the data group that you want to check.

3. Press Enter.

4. A spooled file is created that indicates whether you can use keyed replication for the files in the library or data group you specified. Display the spooled file (WRKSPLF command) or use your standard process for printing. You can use keyed replication for the file if *BOTH appears in the Replication Type Allowed column. If a value appears in the Replication Type Defined column, the file is already defined to the data group with the replication type shown.
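The decision described above can be sketched programmatically. This is an illustrative example only; the dictionary keys stand in for the report columns, and the file names are hypothetical:

```python
# Illustrative sketch of interpreting the VFYKEYATR report: a file is
# eligible for keyed replication when its "Replication Type Allowed"
# column shows *BOTH. Row structure and file names are hypothetical.

def eligible_for_keyed(report_row):
    """True when the report row shows the file supports keyed replication."""
    return report_row["replication_type_allowed"] == "*BOTH"

rows = [
    {"file": "ORDERS",  "replication_type_allowed": "*BOTH"},
    {"file": "WORKTMP", "replication_type_allowed": "*POSITION"},
]
print([r["file"] for r in rows if eligible_for_keyed(r)])  # ['ORDERS']
```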

Data distribution and data management scenarios

MIMIX supports a variety of scenarios for data distribution and data management, including bi-directional data flow, file combining, file sharing, and file merging. MIMIX also supports data distribution techniques such as broadcasting and cascading. Often, this support requires a combination of advanced replication techniques as well as customizing. These techniques require additional planning before you configure MIMIX. You may need to consider the technical aspects of implementing a technique as well as how your business practices may be affected. Consider the following:

• Can each system involved modify the data?

• Do you need to filter data before sending it to another system?

• Do you need to implement multiple techniques to accomplish your goal?

• Do you need customized exit programs?

• Do any potential collision points exist and how will each be resolved?

MIMIX user journal replication provides filtering options within the data group definition. Also, MIMIX provides options within the data group definition and for individual data group file entries for resolving most collision points. Additionally, collision resolution classes allow you to specify different resolution methods for each collision point.

Configuring for bi-directional flow

Both MIMIX user journal and system journal replication processes allow data to flow bi-directionally, but their implementations and configuration requirements are distinct.

• In user journal replication processing, bi-directional data flow is a data sharing technique in which the same named database file can be replicated between databases on two systems in two directions at the same time. When MIMIX user journal replication processes are configured for bi-directional data flow, each system is both a source system and a target system.

• System journal replication processing supports the bi-directional flow of objects between two systems, but it does not support simultaneous (bi-directional) updates to the same object on multiple systems. Updating the same object from two systems at the same time can cause a loss of data integrity.

File sharing is a scenario in which a file can be shared among a group of systems and can be updated from any of the systems in the group. MIMIX implements file sharing among systems defined to the same MIMIX installation. To enable file sharing, MIMIX must be configured to allow bi-directional data flow. An example of file sharing is when an enterprise maintains a single database file that must be updated from any of several systems.

Bi-directional requirements: system journal replication

To configure system journal replication processes to support bi-directional flow of objects, you need the following:

• Configure two data group definitions between the two systems. In one data group, specify *SYS1 for the Data source (DTASRC) parameter. In the other data group, specify *SYS2 for this parameter.

• Each data group definition should specify *NO for the Allow to be switched (ALWSWT) parameter.

Note: In system journal replication, MIMIX does not support simultaneous updates to the same object on multiple systems and does not support conflict resolution for objects. Once an object is replicated to a target system, system journal replication processes prevent looping by not allowing the same object, regardless of name mapping, to be replicated back to its original source system.

Bi-directional requirements: user journal replication

To configure user journal replication processes to support bi-directional data flow, you need the following:

• Configure two data group definitions between the two systems. In one data group, specify *SYS1 for the Data source (DTASRC) parameter. In the other data group, specify *SYS2 for this parameter.

• For each data group definition, set the DB journal entry processing (DBJRNPRC) parameter so that its Generated by MIMIX element is set to *IGNORE. This prevents any journal entries that are generated by MIMIX from being sent to the target system and prevents looping.

• The files defined to each data group must be configured for keyed replication. Use topic “Verifying key attributes” on page 359 to determine if files can use keyed replication.

• Analyze your environment to determine the potential collision points in your data. You need to understand how each collision point will be resolved. Consider the following:

– Can the collision be resolved using the collision resolution methods provided in MIMIX or do you need customized exit programs? See “Collision resolution” on page 381.

– How will your business practices be affected by collision scenarios?

For example, say that you have an order entry application that updates shared inventory records such as Figure 19. If two locations attempt to access the last item in stock at the same time, which location will be allowed to fill the order? Does the other location automatically place a backorder or generate a report?

Figure 19. Example of bi-directional configuration to implement file sharing.
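The loop-prevention role of the *IGNORE setting described above can be sketched as follows. This is an illustrative simulation, not MIMIX internals: the `by_replication` flag stands in for however the real product identifies journal entries generated by its own apply process.

```python
# Illustrative sketch of the *IGNORE behavior for the "Generated by MIMIX"
# element of DBJRNPRC: entries written by the replication apply process
# itself are filtered out of the send stream, so an update replicated
# from system A to system B is not sent back to A (no looping).
# Entry structure and the by_replication flag are hypothetical.

def entries_to_send(journal_entries, generated_by_mimix="*IGNORE"):
    """Select which journal entries the send process forwards."""
    if generated_by_mimix == "*IGNORE":
        return [e for e in journal_entries if not e["by_replication"]]
    return list(journal_entries)

journal_b = [
    {"seq": 1, "by_replication": True,  "data": "update replicated from A"},
    {"seq": 2, "by_replication": False, "data": "local update on B"},
]
print([e["seq"] for e in entries_to_send(journal_b)])  # [2]
```

Only the locally originated update (sequence 2) flows back toward system A; the entry that was itself produced by replication is dropped.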

Configuring for file routing and file combining

File routing and file combining are data management techniques supported by MIMIX user journal replication processes. The way in which data is used can affect the configuration requirements for a file routing or file combining operation. Evaluate the needs for each pair of systems (source and target) separately. Consider the following:

• Does the data need to be updated in both directions between the systems? If you need bi-directional data flow, see topic “Configuring for bi-directional flow” on page 361.

• Will users update the data from only one or both systems? If users can update data from both systems, you need to prevent the original data from being returned to its original source system (recursion).

• Is the file routing or file combining scenario a complete solution or is it part of a larger solution? Your complete solution may be a combination of multiple data management and data distribution techniques. Evaluate the requirements for each technique separately for a pair of systems (source and target). Each technique that you need to implement may have different configuration requirements.

File combining is a scenario in which all or partial information from files on multiple systems can be sent to and combined in a single file on a target system. In its user journal replication processes, MIMIX implements file combining between multiple source systems and a target system that are defined to the same MIMIX installation. MIMIX determines what data from the multiple source files is sent to the target system based on the contents of a journal transaction. An example of file combining is when many locations within an enterprise update a local file and the updates from all local files are sent to one location to update a composite file. The example in Figure 20 shows file combining from multiple source systems onto a composite file on the management system.

Figure 20. Example of file combining

To enable file combining between two systems, MIMIX user journal replication must be configured as follows:

• Configure the data group definition for keyed replication. See topic “Keyed replication” on page 355.

• If only part of the information from the source system is to be sent to the target system, you need an exit program to filter out transactions that should not be sent to the target system.

• If you allow the data group to be switched (by specifying *YES for Allow to be switched (ALWSWT) parameter) and a switch occurs, the file combining operation effectively becomes a file routing operation. To ensure that the data group will perform file combining operations after a switch, you need an exit program that allows the appropriate transactions to be processed regardless of which system is acting as the source for replication.

• After the combining operation is complete, if the combined data will be replicated or distributed again, you need to prevent it from returning to the system on which it originated.

File routing is a scenario in which information from a single file can be split and sent to files on multiple target systems. In user journal replication processes, MIMIX implements file routing between a source system and multiple target systems that are defined to the same MIMIX installation. To enable file routing, MIMIX calls a user exit program that makes the file routing decision. The user exit program determines what data from the source file is sent to each of the target systems based on the contents of a journal transaction. An example of file routing is when one location within an enterprise performs updates to a file for all other locations, but only updated information relevant to a location is sent back to that location. The example in Figure 21 shows the management system routing only the information relevant to each network system to that system.

Figure 21. Example of file routing

To enable file routing, MIMIX user journal replication processes must be configured as follows:

• Configure the data group definition for keyed replication. See topic “Keyed replication” on page 355.

• The data group definition must call an exit program that filters transactions so that only those transactions which are relevant to the target system are sent to it.

• If you allow the data group to be switched (by specifying *YES for Allow to be switched (ALWSWT) parameter) and a switch occurs, the file routing operation effectively becomes a file combining operation. To ensure that the data group will perform file routing operations after a switch, you need an exit program that allows the appropriate transactions to be processed regardless of which system is acting as the source for replication.
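The decision logic of such a file-routing exit program can be sketched as follows. This is an illustrative example only: the routing field ("location"), system names, and transaction structure are hypothetical, and a real exit program would be written against the MIMIX exit interface rather than this Python stand-in.

```python
# Illustrative sketch of a file-routing exit program's decision: given
# one journal transaction, return the target systems that should receive
# it. Field names and system names are hypothetical.

ROUTING = {"NYC": ["SYSNYC"], "LON": ["SYSLON"], "HKG": ["SYSHKG"]}

def route_transaction(txn, routing=ROUTING):
    """Return the list of target systems for one journal transaction."""
    return routing.get(txn["record"]["location"], [])

txn = {"journal_code": "R", "entry_type": "UP",
       "record": {"order": 1234, "location": "LON"}}
print(route_transaction(txn))  # ['SYSLON']
```

A file-combining exit program is the mirror image of this logic: instead of choosing which targets receive a transaction, it decides whether a given source transaction belongs in the composite file at all.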

Configuring for cascading distributions

Cascading is a distribution technique in which data passes through one or more intermediate systems before reaching its destination. MIMIX supports cascading in both its user journal and system journal replication paths. However, the paths differ in their implementation.

Data can pass through one intermediate system within a MIMIX installation. Additional MIMIX installations allow you to support cascading in scenarios that require data to flow through two or more intermediate systems before reaching its destination. Figure 22 shows the basic cascading configuration that is possible within one MIMIX installation.

Figure 22. Example of a simple cascading scenario

To enable cascading you must have the following:

• Within a MIMIX installation, the management system must be the intermediate system.

• Configure a data group between the originating system (a network system) to the intermediate (management) system. Configure another data group for the flow from the intermediate (management) system to the destination system.

• For user journal replication, you also need the following:

– The data groups should be configured to send journal entries that are generated by MIMIX. To do this, specify *SEND for the Generated by MIMIX element of the DB journal entry processing (DBJRNPRC) parameter. When this is the case, MIMIX performs the database updates.

– If it is possible for the data to be routed back to the originating or any intermediate systems, you need to use keyed replication.

Note: Once an object is replicated to a target system, MIMIX system journal replication processes prevent looping by not allowing the same object, regardless of name mapping, to be replicated back to its original source system.

Cascading may be used with other data management techniques to accomplish a specific goal. Figure 23 shows an example where the Chicago system is a management system in a MIMIX installation that collects data from the network systems and broadcasts the updates to the other participating systems. The network systems send unfiltered data to the management system. Figure 23 is a cascading scenario because changes that originate on the Hong Kong system pass through an intermediate system (Chicago) before being distributed to the Mexico City system and other network systems in the MIMIX installation. Exit programs are required for the data groups acting between the management system and the destination systems and need to prevent updates from flowing back to their system of origin.

Figure 23. Bi-directional example that implements cascading for file distribution.

Trigger support

A trigger program is a user exit program that is called by the database when a database modification occurs. Trigger programs can be used to make other database modifications, which are called trigger-induced database modifications.

How MIMIX handles triggers

The method used for handling triggers is determined by settings in the data group definition and file entry options. MIMIX supports database trigger replication in one of the following ways:

• Using i5/OS trigger support to prevent the triggers from firing on the target system and replicating the trigger-induced modifications.

• Ignoring trigger-induced modifications found in the replication stream and allowing the triggers to fire on the target system.

Considerations when using triggers

You should choose only one of these methods for each data group file entry. Which method you use depends on a variety of considerations:

• The default replication type for data group file entry options is positional replication. With positional replication, each file is replicated based on the position of the record within the file. The value of the relative record number used in the journal entry is used to locate a database record being updated or deleted. When positional replication is used and triggers fire on the target system they can cause trigger-induced modifications to the files being replicated. These trigger-induced modifications can change the relative record number of the records in the file because the relative record numbers of the trigger-induced modifications are not likely to match the relative record numbers generated by the same triggers on the source system. Because of this, triggers should not be allowed to fire on the target system. You should prevent the triggers from firing on the target system and replicate the trigger-induced modifications from source to the target system.

• When trigger-induced modifications are made by replicated files to files not replicated by MIMIX, you may want the triggers to fire on the target system. This will ensure that the files that are not replicated receive the same trigger-induced modifications on the target system as they do on the source system.

• When triggers do not cause database record changes, you may choose to allow them to fire on the target system. However, if non-database changes occur and you are using object replication, the object replication will replicate trigger-induced object changes from the source system. In this case, the triggers should not be permitted to fire.

• When triggers are allowed to fire on the target system, the files being updated by these triggers should be replicated using the same apply session as the parent files to avoid lock contention.

• A slight performance advantage may be achieved by replicating the trigger-induced modifications instead of ignoring them and allowing the triggers to fire. This is because the database apply process checks each transaction before processing to see if filtering is required, and firing the trigger adds additional overhead to database processing.
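The relative record number problem described in the first consideration above can be sketched with a small simulation. This is illustrative only, not MIMIX behavior: it simply shows that if a trigger fires independently on both systems, any interleaved activity on the target makes the trigger-induced row land at a different RRN than it did on the source.

```python
# Illustrative sketch of why firing triggers on the target breaks
# positional (RRN-based) replication: a trigger-induced insert lands at
# a different relative record number on each system when other activity
# interleaves, so later RRN-addressed journal entries hit wrong rows.

def insert(file_records, rec):
    """Append a record; RRN is the 1-based position in the file."""
    file_records.append(rec)
    return len(file_records)  # the RRN assigned to this record

source, target = [], []
insert(source, "application row")            # RRN 1 on source
insert(source, "trigger-induced row")        # RRN 2 on source

insert(target, "application row")            # replicated, lands at RRN 1
insert(target, "other trigger-induced row")  # target trigger fired first
rrn = insert(target, "trigger-induced row")
print(rrn)  # 3 on target, but 2 on source: positional mismatch
```

Preventing the trigger from firing on the target and replicating the source's trigger-induced entries keeps the RRNs on both systems in step.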

Enabling trigger support

Trigger support is enabled for user journal replication by specifying the appropriate file entry option values for parameters on the Create Data Group Definition (CRTDGDFN) and Change Data Group Definition (CHGDGDFN) commands. You can also enable trigger support at a file level by specifying the appropriate file entry options associated with the file.

If you already have a trigger solution in place you can continue to use that implementation or you can use the MIMIX trigger support.

Synchronizing files with triggers

When you are synchronizing a file with triggers and you are using MIMIX trigger support, you must specify *DATA on the Sending mode parameter on the Synchronize DG File Entry (SYNCDGFE) command.

On the Disable triggers on file parameter, you can specify if you want the triggers disabled on the target system during file synchronization. The default is *DGFE, which will use the value indicated for the data group file entry. If you specify *YES, triggers will be disabled on the target system during synchronization. A value of *NO will leave triggers enabled.

For more information on synchronizing files with triggers, see “About synchronizing file entries (SYNCDGFE command)” on page 480.

Constraint support

A constraint is a restriction or limitation placed on a file. There are four types of constraints: referential, unique, primary key and check. Unique, primary key and check constraints are single file operations transparent to MIMIX. If a constraint is met for a database operation on the source system, the same constraint will be met for the replicated database operation on the target. Referential constraints, however, ensure the integrity between multiple files. For example, you could use a referential constraint to:

• Ensure when an employee record is added to a personnel file that it has an associated department from a company organization file.

• Empty a shopping cart and remove the order records if an internet shopper exits without placing an order.

When constraints are added, removed or changed on files replicated by MIMIX, these constraint changes will be replicated to the target system. With the exception of files that have been placed on hold, MIMIX always enables constraints and applies constraint entries. MIMIX tolerates mismatched before images or minimized journal entry data CRC failures when applying constraint-generated activity. Because the parent record was already applied, entries with mismatched before images are applied and entries with minimized journal entry data CRC failures are ignored. To use this support:

• Ensure that your target system is at the same release level as, or a later release level than, the source system so that the target system is able to use all of the i5/OS function that is available on the source system. If an earlier i5/OS level is installed on the target system, the operation will be ignored.

• You must have your MIMIX environment configured for either MIMIX Dynamic Apply or legacy cooperative processing.

Referential constraints with delete rules

Referential constraints can cause changes to dependent database files when the parent file is changed. Referential constraints defined with the following delete rules cause dependent files to change:

• *CASCADE: Record deletion in a parent file causes records in the dependent file to be deleted when the parent key value matches the foreign key value.

• *SETNULL: Record deletion in a parent file updates those records in the dependent file where the value of the parent non-null key matches the foreign key value. For those dependent records that meet the preceding criteria, all null capable fields in the foreign key are set to null. Foreign key fields with the non-null attribute are not updated.

• *SETDFT: Record deletion in a parent file updates those records in the dependent file where the value of the parent non-null key matches the foreign key value. For those dependent records that meet the preceding criteria, the foreign key field or fields are set to their corresponding default values.
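The effect of the three delete rules can be sketched with a small simulation. This is illustrative only; the record layout and field names are hypothetical, and the real behavior is enforced by the i5/OS database, not by application code.

```python
# Illustrative sketch of the *CASCADE, *SETNULL, and *SETDFT delete
# rules: what happens to dependent records when a parent key is deleted.
# Record structure and field names are hypothetical.

def apply_delete_rule(dependents, fkey, deleted_parent_key, rule, default=0):
    """Return the dependent file's records after the parent is deleted."""
    out = []
    for rec in dependents:
        if rec[fkey] != deleted_parent_key:
            out.append(rec)               # unrelated record: unchanged
        elif rule == "*CASCADE":
            continue                      # dependent record is deleted
        elif rule == "*SETNULL":
            out.append({**rec, fkey: None})     # null-capable FK set null
        elif rule == "*SETDFT":
            out.append({**rec, fkey: default})  # FK set to default value
    return out

deps = [{"id": 1, "dept": 10}, {"id": 2, "dept": 20}]
print(apply_delete_rule(deps, "dept", 10, "*CASCADE"))
# [{'id': 2, 'dept': 20}]
print(apply_delete_rule(deps, "dept", 10, "*SETNULL"))
# [{'id': 1, 'dept': None}, {'id': 2, 'dept': 20}]
```

Each of these rules produces journal entries for the dependent file on the source system, which is why the dependent files must be replicated by the same apply session as the parent, as described below.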

Referential constraint handling for these dependent files is supported through the replication of constraint-induced modifications.

MIMIX does not provide the ability to disable constraints because i5/OS would check every record in the file to ensure constraints are met once the constraint is re-enabled. This would cause a significant performance impact on large files and could impact switch performance. If the need exists, this can be done through automation.

Replication of constraint-induced modifications

MIMIX always attempts to apply constraint-induced modifications. Earlier levels of MIMIX provided the Process constraint entries element in the File entry options (FEOPT) parameter, which has since been removed.1 Any previously specified value is now mapped to *YES so that processing always occurs.

The considerations for replication of constraint-induced modifications are:

• Files with referential constraints and any dependent files must be replicated by the same apply session.

• When referential constraints cause changes to dependent files not replicated by MIMIX, enabling the same constraints on the target system will allow changes to be made to the dependent files.


1. This element was removed in version 5 service pack 5.0.08.00.

Handling SQL identity columns

If you replicate an SQL table that has an identity column through a switchable data group, you may experience problems following a switch to the backup system. The next identity column value generated on the backup system may not be as expected.

In environments with both systems running i5/OS V5R4 or higher and MIMIX service pack 5.0.09.00 or higher, MIMIX automatically checks for scenarios that can cause duplicate identity column values and, if possible, attempts to prevent the problem from occurring. Even in this environment, MIMIX cannot prevent all troublesome scenarios from occurring.

As a result, the Set Identity Column Attribute (SETIDCOLA) command is available to help support SQL tables with identity columns. This command is useful for handling scenarios that would otherwise result in errors caused by duplicate identity column values when inserting rows into tables.

The identity column problem explained

In SQL, a table may have a single numeric column which is designated an identity column. When rows are inserted into the table, the database automatically generates a value for this column, incrementing the value with each insertion. Several attributes define the behavior of the identity column, including: Minimum value, Maximum value, Increment amount, Start value, Cycle/No Cycle, Cache amount. This discussion is limited to the following attributes:

• Increment amount - the amount by which each new row’s identity column differs from the previously inserted row. This can be a positive or negative value.

• Start value - the value used for the next row added. This can be any value, including one that is outside of the range defined by the minimum and maximum values.

• Cycle/No Cycle - indicates whether or not values cycle from maximum back to minimum, or from minimum to maximum if the increment is negative.

Nothing prevents identity column values from being generated more than once. However, in typical usage, the identity column is also a primary, unique key and set to not cycle.

The value generator for the identity column is stored internally with the table. Following certain actions which transfer table data from one system to another, the next identity column value generated on the receiving system may not be as expected. This can occur after a MIMIX switch and after other actions such as certain save/restore operations on the backup system. Similarly, other actions such as applying journaled changes (APYJRNCHG), also do not keep the value generator synchronized.

Any SQL table with an identity column that is replicated by a switchable data group can potentially experience this problem. Journal entries used to replicate inserted rows on the production system do not contain information that would allow the value generator to remain synchronized. The result is that after a switch to the backup system, rows can be inserted on the backup system using identity column values other than the next expected value. The starting value for the value generator on the backup system is used instead of the next expected value based on the table’s content. This can result in the reuse of identity column values, which in turn can cause a duplicate key exception.
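Conceptually, repairing the value generator means restarting it past every value already present in the table. The following sketch is illustrative only (it is not how SETIDCOLA is implemented); the headroom parameter mirrors the idea behind the command's INCREMENTS parameter, which allows extra margin for rows not yet replicated after an unplanned switch.

```python
# Illustrative sketch of resetting an identity column's value generator:
# choose a restart value beyond the furthest value already in the table,
# with optional headroom (compare the INCREMENTS parameter).

def restart_value(existing_values, increment, headroom_increments=1):
    """Next safe identity value: furthest existing value plus headroom."""
    if increment > 0:
        edge = max(existing_values)   # ascending column: past the maximum
    else:
        edge = min(existing_values)   # descending column: past the minimum
    return edge + increment * headroom_increments

# Table content on the backup system after a switch; the stale value
# generator would otherwise restart at 1 and reuse existing values.
print(restart_value([1, 2, 3, 7], increment=1))  # 8
# SQL effect (illustrative):
#   ALTER TABLE orders ALTER COLUMN id RESTART WITH 8
```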

Detailed technical descriptions of all attributes are available in the IBM eServer iSeries Information Center. Look in the Database section for the SQL Reference for CREATE TABLE and ALTER TABLE statements.

When the SETIDCOLA command is useful

Important! The SETIDCOLA command should not be used in all environments. Its use is subject to the limitations described in “SETIDCOLA command limitations” on page 374. If you cannot use the SETIDCOLA command, see “Alternative solutions” on page 375.

Examples of when you may need to run the SETIDCOLA command are:

• The SETIDCOLA command can be used to determine whether a data group replicates tables which contain identity columns and report the results. To do so, specify ACTION(*CHECKONLY) on the command. It is recommended that you initially use this capability before setting values. You may want to perform this type of check whenever new tables are created that might contain identity columns. See “Checking for replication of tables with identity columns” on page 378.

• For many environments, default values on the SETIDCOLA command are appropriate for use following a planned switch to the backup system to ensure that the identity column values inserted on the backup system start at the proper point. After performing a switch to the backup system, run the command from the backup system before starting replication in the reverse direction.

• After a restore (RSTnnn command) from a save of the backup system. For this scenario, run the command on the system on which you performed the restore.

• Before saving files to tape or other media from the backup system. For this scenario, run the command from the backup system. By doing this, you avoid the need to run the command after restoring.

Also, the SETIDCOLA command is needed in any environment in which you are attempting to restore from a save that was created while replication processes were running.

SETIDCOLA command limitations

In general, SETIDCOLA only works correctly for the most typical scenario where all values for identity columns have been generated by the system, and no cycles are allowed. In other scenarios, it may not restart the identity column at a useful value.

Limited support for unplanned switch - Following an unplanned switch, the backup system may not be caught up with all the changes that occurred on the production system. Using the SETIDCOLA command on the backup system may result in the generation of identity column values that were used on the production system but not yet replicated to the backup system. Careful selection of the value of the INCREMENTS parameter can minimize the likelihood of this problem, but the value chosen must be valid for all tables in the data group. See “Examples of choosing a value for INCREMENTS” on page 377.

Not supported - The following scenarios are known to be problematic and are not supported. If you cannot use the SETIDCOLA command in your environment, consider the “Alternative solutions” on page 375.

• Columns that have cycled - If an identity column allows cycling and adding a row increments its value beyond the maximum range, the restart value is reset to the beginning of the range. Because cycles are allowed, the assumption is that duplicate keys will not be a problem. However, unexpected behavior may occur when cycles are allowed and old rows are removed from the table with a frequency such that the identity column values never actually complete a cycle. In this scenario, the ideal starting point would be wherever there is the largest gap between existing values. The SETIDCOLA command cannot address this scenario; it must be handled manually.

• Rows deleted on production table - An application may require that an identity column value never be generated twice. For example, the value may be stored in a different table, data area or data queue, given to another application, or given to a customer. The application may also require that the value always locate either the original row or, if the row is deleted, no row at all. If rows with values at the end of the range are deleted and you perform a switch followed by the SETIDCOLA command, the identity column values of the deleted rows will be re-generated for newly inserted rows. The SETIDCOLA command is not recommended for this environment. This must be handled manually.

• No rows in backup table - If there are no rows in the table on the backup system, the restart value will be set to the initial start value. Running the SETIDCOLA command on the backup system may result in re-generating values that were previously used. The SETIDCOLA command cannot address this scenario; it must be handled manually.

• Application generated values - Optionally, applications can supply identity column values at the time they insert rows into a table. These application-generated identity values may be outside the minimum and maximum values set for the identity column. For example, a table’s identity column range may be from 1 through 100,000,000 but an application occasionally supplies values in the range of 200,000,000 through 500,000,000. If cycling is permitted and the SETIDCOLA command is run, the command would recognize the higher values from the application and would cycle back to the minimum value of 1. Because the result would be problematic, the SETIDCOLA command is not recommended for tables which allow application-generated identity values. This must be handled manually.

Alternative solutions

If you cannot use the SETIDCOLA command because of its known limitations, you have these options.

Manually reset the identity column starting point: Following a switch to the backup system, you can manually reset the restart value for tables with identity columns. The SQL statement ALTER TABLE name ALTER COLUMN can be used for this purpose.

Convert to SQL sequence objects: To overcome the limitations of identity column switching and to avoid the need to use the SETIDCOLA command, SQL sequence objects can be used instead of identity columns. Sequence objects are implemented using a data area which can be replicated by MIMIX. The data area for the sequence object must be configured for replication through the user journal (cooperatively processed).

SETIDCOLA command details

The Set Identity Column Attribute (SETIDCOLA) command performs a RESTART WITH alteration on the identity column of any SQL tables defined for replication in the specified data group. For each table, the new restart value determines the identity column value for the next row added to the table. Careful selection of values can ensure that, when applications are started, the identity column starting values exceed the last values used prior to the switch or save/restore operation.

If you use Lakeview-provided product-level security, the minimum authority level for this command is *OPR.

Note: For systems running i5/OS V5R3, it is recommended that you apply IBM PTFs before you use the SETIDCOLA command. For more information, log in to Support Central and refer to the Technical Documents page for a list of recommended operating system fixes.

The Data group definition (DGDFN) parameter identifies the data group against which the specified action is taken. Only tables that are identified for replication by the specified data group are addressed.

The Action (ACTION) parameter specifies what action is to be taken by the command. Only tables which can be replicated by the specified data group are acted upon. Possible values are:

*SET The command checks and sets the attribute of the identity column of each table which meets the criteria. This is the default value.

*CHECKONLY The command checks for tables which have identity columns. It does not set the attributes of the identity columns. The result of the check is reported in the job log. If there are affected tables, message LVE3E2C will be issued. If no tables are affected, message LVI3E26 will be issued.

The Number of jobs (JOBS) parameter specifies the number of jobs to use to process tables which meet the criteria for processing by the command. A table will only be updated by one job; each job can update multiple tables. The default value, *DFT, is currently set to one job. You can specify as many as 30 jobs.

The Number of increments to skip (INCREMENTS) parameter specifies how many increments of the counter which generates the starting value for the identity column to skip. The value specified is used for all tables which meet the criteria for processing by the command. Be sure to read the information in “Examples of choosing a value for INCREMENTS” on page 377. Possible values are:

*DFT Skips the default number of increments, currently set to 1 increment. Following a planned switch where tables are synchronized, you can usually use *DFT.

number-of-increments-to-skip Specify the number of increments to skip. Valid values are 1 through 2,147,483,647. Following an unplanned switch, use a larger value to ensure that you skip any values used on the production system that may not have been replicated to the backup system.

Usage notes

• The reason you are using this command determines which system you should run it from. See “When the SETIDCOLA command is useful” on page 374 for details.

• The command can be invoked manually or as part of a MIMIX Model Switch Framework custom switching program. Evaluation of your environment to determine an appropriate increment value is highly recommended before using the command.

• This command can be long running when many files defined for replication by the specified data group contain identity columns. This is especially true when affected identity columns do not have indexes over them or when they are referenced by constraints. Specifying a higher number of jobs (JOBS) can reduce this time.

• This command creates a work library named SETIDCOLA which is used by the command. The SETIDCOLA library is not deleted so that it can be used for any error analysis.

• Internally, the SETIDCOLA command builds RUNSQLSTM scripts (one for each job specified) and uses RUNSQLSTM in spawned jobs to execute the scripts. RUNSQLSTM produces spooled files showing the ALTER TABLE statements executed, along with any error messages received. If any statement fails, the RUNSQLSTM command also fails, returning the failing status to the job where SETIDCOLA is running, and an escape message is issued.
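As an illustration of the kind of script the command generates, the following sketch builds ALTER TABLE ... RESTART WITH statements from a list of table descriptions. The function name and the dictionary layout are hypothetical; they are not the actual format SETIDCOLA uses internally.

```python
def build_restart_script(tables, increments):
    # Build one ALTER TABLE statement per identity column, restarting the
    # value generator 'increments' steps past the last value in use.
    stmts = []
    for t in tables:
        restart = t["last_value"] + increments * t["increment_by"]
        stmts.append(
            f"ALTER TABLE {t['library']}.{t['table']} "
            f"ALTER COLUMN {t['column']} RESTART WITH {restart};"
        )
    return "\n".join(stmts)
```

For example, for a table ORDERS.A with identity column ID whose last used value is 75, counting by 1, INCREMENTS(1) would produce ALTER TABLE ORDERS.A ALTER COLUMN ID RESTART WITH 76;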

Examples of choosing a value for INCREMENTS

When choosing a value for INCREMENTS, consider the rate at which each table consumes its available identity values. Account for the needs of the table which consumes numbers at the highest rate, as well as any backlog in MIMIX processing and the activity causing you to run the command. If you have available numbers to use, add a safety factor of at least 100 percent. For example, if the rate of the fastest file is 1,000 numbers per hour and MIMIX is 15 minutes behind (0.25 hours), the value you specify for INCREMENTS needs to result in at least 250 numbers (1,000 x 0.25) being skipped. Adding 100% to 250 results in an increment of 500.

Note: The MIMIX backlog, sometimes called the latency of changes being transferred to the backup system, is the amount of time from when an operation occurs on the production system until it is successfully sent to the backup system by MIMIX. It does not include the time it takes for MIMIX to apply the entry. Use the DSPDGSTS command to view the Unprocessed entry count for the DB Apply process; this value is the size of the backlog. You need to approximate how long it would take for this value to become zero (0) if application activity were to be stopped on the production system.
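The calculation described above can be sketched as a small helper; the function name and the default safety factor are illustrative, not part of MIMIX.

```python
import math

def increments_to_skip(rows_per_hour, backlog_hours, safety_factor=1.0):
    # Numbers consumed while MIMIX is behind, padded by a safety factor
    # of at least 100 percent (safety_factor=1.0 doubles the estimate).
    consumed = rows_per_hour * backlog_hours
    return math.ceil(consumed * (1 + safety_factor))
```

Using the example above, increments_to_skip(1000, 0.25) returns 500.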


For example, data group ORDERS contains tables A and B. Each row added to table A increases the identity value by 1 and each row added to table B increases the identity value by 1,000. Rows are inserted into table A at a rate of approximately 600 rows per hour. Rows are inserted into table B at a rate of approximately 20 rows per hour. Prior to a switch, on the production system the latest value for table A was 75 and the latest value for table B was 30,000. Consider the following scenarios:

• Scenario 1. You performed a planned switch for test purposes. Because replication of all transactions completed before the switch and no users have been allowed on the backup system, the backup system has the same values as the production. Before starting replication in the reverse direction you run the SETIDCOLA command with an INCREMENTS value of 1. The next rows added to table A and B will have values of 76 and 31,000, respectively.

• Scenario 2. You performed an unplanned switch. From previous experience, you know that the latency of changes being transferred to the backup system is approximately 15 minutes. Rows are inserted into table A at the highest rate. In 15 minutes, approximately 150 rows will have been inserted into table A (600 rows/hour x 0.25 hours). This suggests an INCREMENTS value of 150. However, because all measurements are approximations or based on historical data, this amount should be adjusted by a factor of at least 100%, to 300, to ensure that duplicate identity column values are not generated on the backup system. The next rows added to tables A and B will have values of 75 + (300 x 1) = 375 and 30,000 + (300 x 1,000) = 330,000, respectively.
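The arithmetic in these scenarios can be expressed as a one-line helper (the name is illustrative):

```python
def next_identity_value(last_value, increment_by, increments_skipped):
    # Value the generator produces for the next inserted row after
    # SETIDCOLA skips the given number of increments.
    return last_value + increments_skipped * increment_by
```

With INCREMENTS(300), next_identity_value(75, 1, 300) returns 375 for table A and next_identity_value(30000, 1000, 300) returns 330000 for table B, matching Scenario 2.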

Checking for replication of tables with identity columns

To determine whether any files being replicated by a data group have identity columns, do the following.

1. From the production system, specify the data group to check in the following command:

SETIDCOLA DGDFN(name system1 system2) ACTION(*CHECKONLY)

2. Check the job log for the following messages. Message LVE3E2C identifies the number of tables found with identity columns. Message LVI3E26 indicates that no tables were found with identity columns.

3. If the results found tables with identity columns, you need to evaluate the tables and determine whether you can use the SETIDCOLA command to set values.

Setting the identity column attribute for replicated files

At a high level, the steps you need to perform to set the identity columns of files being replicated by a data group are listed below. Plan for the time required for the investigation steps as well as the time needed to run the command to set values.

1. Run the SETIDCOLA command in check only mode first to determine if you need to set values. See “Checking for replication of tables with identity columns” on page 378.

2. Determine whether limitations exist in the replicated tables that would prevent you from running the command to set values. See “SETIDCOLA command limitations” on page 374.

3. Determine what increment value is appropriate for use for all tables replicated by the data group. Consider the needs of each table. Also consider the MIMIX backlog at the time you plan to use the command. See “Examples of choosing a value for INCREMENTS” on page 377.

4. From the appropriate system, as defined in “When the SETIDCOLA command is useful” on page 374, specify a data group and the number of increments to skip in the command:

SETIDCOLA DGDFN(name system1 system2) ACTION(*SET) INCREMENTS(number)


Collision resolution

Collision resolution is a function within MIMIX user journal replication that automatically resolves detected collisions without user intervention. MIMIX supports the following choices for collision resolution that you can specify in the file entry options (FEOPT) parameter in either a data group definition or in an individual data group file entry:

• Held due to error: (*HLDERR) This is the default value for collision resolution in the data group definition and data group file entries. MIMIX flags file collisions as errors and places the file entry on hold. Any data group file entry for which a collision is detected is placed in a "held due to error" state (*HLDERR). This results in the journal entries being replicated to the target system but they are not applied to the target database. If the file entry specifies member *ALL, a temporary file entry is created for the member in error and only that file entry is held. Normal processing will continue for all other members in the file. You must take action to apply the changes and return the file entry to an active state. When held due to error is specified in the data group definition or the data group file entry, it is used for all 12 of the collision points.

• Automatic synchronization: (*AUTOSYNC) MIMIX attempts to automatically synchronize file members when an error is detected. The member is put on hold while the database apply process continues with the next transaction. The file member is synchronized using copy active file processing, unless the collision occurred at the compare attributes collision point. In the latter case, the file is synchronized using save and restore processing. When automatic synchronization is specified in the data group definition or data group file entry, it is used for all 12 of the collision points.

• Collision resolution class: A collision resolution class is a named definition which provides more granular control of collision resolution. Some collision points also provide additional methods of resolution that can only be accessed by using a collision resolution class. With a defined collision resolution class, you can specify how to handle collision resolution at each of the 12 collision points. You can specify multiple methods of collision resolution to attempt at each collision point. If the first method specified does not resolve the problem, MIMIX uses the next method specified for that collision point.

Additional methods available with CR classes

Automatic synchronization (*AUTOSYNC) and held due to error (*HLDERR) are essentially predefined resolution methods. When you specify *HLDERR or *AUTOSYNC in a data group definition or a data group file entry, that method is used for all 12 of the collision points. If you specify a named collision resolution class in a data group definition or data group file entry, you can customize what resolution method to use at each collision point.

Within a collision resolution class, you can specify one or more resolution methods to use for each collision point. *AUTOSYNC and *HLDERR are available for use at each collision point. Additionally, the following resolution methods are also available:

• Exit program: (*EXITPGM) A specified user exit program is called to handle the data collision. This method is available for all collision points.

The MXCCUSREXT service program dynamically links your exit program. The MXCCUSREXT service program is shipped with MIMIX and runs on the target system.

The exit program is called on three occasions. The first occasion is when the data group is started. This call allows the exit program to handle any initialization or set up you need to perform.

The MXCCUSREXT service program (and your exit program) is called if a collision occurs at a collision point for which you have indicated that an exit program should perform collision resolution actions.

Finally, the exit program is called when the data group is ended.

• Field merge: (*FLDMRG) This method is only available for the update collision point 3, used with keyed replication. If certain rules are met, fields from the after-image are merged with the current image of the file to create a merged record that is written to the file. Each field within the record is checked using the series of algorithms below.

In the following algorithms, these abbreviations are used:

RUB = before-image of the source file

RUP = after-image of the source file

RCD = current record image of the target file

a. If the RUB equals the RUP and the RUB equals the RCD, do not change the RUP field data.

b. If the RUB equals the RUP and the RUB does not equal the RCD, copy the RCD field data into the RUP record.

c. If the RUB does not equal the RUP and the RUB equals the RCD, do not change the RUP field data.

d. If the RUB does not equal the RUP and the RUB does not equal the RCD, fail the field-level merge.

• Applied: (*APPLIED) This method is only available for the update collision point 3 and the delete collision point 1. For update collision point 3, the transaction is ignored if the record to be updated already equals the data in the updated record. For delete collision point 1, the transaction is ignored because the record does not exist.
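The field-merge rules (a through d) used by *FLDMRG can be sketched as follows, treating each record image as a list of field values. This is an illustration of the algorithm described above, not the MIMIX implementation.

```python
def merge_fields(rub, rup, rcd):
    # rub = before-image of the source file, rup = after-image of the
    # source file, rcd = current record image of the target file.
    # Returns the merged record, or None if the merge fails (rule d),
    # in which case the member would be held due to error.
    merged = []
    for b, p, c in zip(rub, rup, rcd):
        if b == p:
            # Rules a and b: the source did not change this field, so
            # keep the target's current value (in rule a it equals b).
            merged.append(c)
        elif b == c:
            # Rule c: the source changed the field and the target still
            # holds the old value, so take the source's new value.
            merged.append(p)
        else:
            # Rule d: both sides changed the field; fail the merge.
            return None
    return merged
```

For example, merge_fields([1, 2], [1, 5], [9, 2]) keeps the target's change to the first field and the source's change to the second, returning [9, 5].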

If multiple collision resolution methods are specified and do not resolve the problem, MIMIX will always use *HLDERR as the last resort, placing the file on hold.
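The method chain behaves like the following sketch, where each configured method reports whether it resolved the collision. The callable shape and names are hypothetical, not a MIMIX API.

```python
def resolve_collision(methods, collision):
    # Try each configured resolution method in order; each callable
    # returns True if it resolved the collision.
    for name, method in methods:
        if method(collision):
            return name
    # If no configured method succeeds, the member is always placed on
    # hold due to error as the last resort.
    return "*HLDERR"
```

For example, with methods = [("*APPLIED", lambda c: c["already_applied"]), ("*AUTOSYNC", lambda c: c["can_sync"])], a collision that cannot be treated as already applied but can be synchronized resolves as "*AUTOSYNC".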

Requirements for using collision resolution

To use a collision resolution method other than the default *HLDERR, you must have the following:

• The data group definition used for replication must specify a data group type of *ALL or *DB.


• You must specify either *AUTOSYNC or the name of a collision resolution class for the Collision resolution element of the File entry option (FEOPT) parameter. Specify the value as follows:

– If you want to implement collision resolution for all files processed by a data group, specify a value in the parameter within the data group definition.

– If you want to implement collision resolution for only specific files, specify a value in the parameter within an individual data group file entry.

Note: Ensure that data group activity is ended before you change a data group definition or a data group file entry.

• If you plan to use an exit program for collision resolution, you must first create a named collision resolution class. In the collision resolution class, specify *EXITPGM for each of the collision points that you want to be handled by the exit program and specify the name of the exit program.

Working with collision resolution classes

Do the following to access options for working with collision resolution:

1. From the MIMIX Main Menu, select option 11 (Configuration menu) and press Enter.

2. From the MIMIX Configuration Menu, select option 5 (Work with collision resolution classes) and press Enter. The Work with CR Classes display appears.

Creating a collision resolution class

To create a collision resolution class, do the following:

1. From the Work with CR Classes display, type a 1 (Create) next to the blank line at the top of the display and press Enter.

2. The Create Collision Res. Class (CRTCRCLS) display appears. Specify a name at the Collision resolution class prompt.

3. At each of the collision point prompts on the display, specify the value for the type of collision resolution processing you want to use. Press F1 (Help) to see a description of the collision point.

Note: You can specify more than one method of collision resolution for each prompt by typing a + (plus sign) at the prompt. With the exception of the *HLDERR method, the methods are attempted in the order you specify. If the first method you specify does not successfully resolve the collision, then the next method is run. *HLDERR is always the last method attempted. If all other methods fail, the member is placed on hold due to error.

4. Press Page Down to see additional prompts.

5. At each of the collision point prompts on the second display, specify the value for the type of collision resolution processing you want to use.

6. If you specified *EXITPGM at any of the collision point prompts, specify the name and library of the program to use at the Exit point prompt.


7. At the Number of retry attempts prompt, specify the number of times to try to automatically synchronize a file. If this number is exceeded within the time specified for the Retry time limit, the file will be placed on hold due to error.

8. At the Retry time limit prompt, specify the maximum number of hours to retry a process if a failure occurs due to a locking condition or an in-use condition.

Note: If a file encounters repeated failures, an error condition that requires manual intervention is likely to exist. Allowing excessive synchronization requests can cause communications bandwidth degradation and negatively impact communications performance.

9. To create the collision resolution class, press Enter.

Changing a collision resolution class

To change an existing collision resolution class, do the following:

1. From the Work with CR Classes display, type a 2 (Change) next to the collision resolution class you want and press Enter.

2. The Change CR Class Details display appears. Make any changes you need. Page Down to see all of the prompts.

3. Provide the required values in the appropriate fields. Inspect the default values shown on the display and either accept the defaults or change the value.

4. You can specify as many as 3 values for each collision point prompt. To expand this field for multiple entries, type a plus sign (+) in the entry field opposite the phrase "+ for more" and press Enter.

5. To accept the changes, press Enter.

Deleting a collision resolution class

To delete a collision resolution class, do the following:

1. From the Work with CR Classes display, type a 4 (Delete) next to the collision resolution class you want and press Enter.

2. A confirmation display appears. Verify that the collision resolution class shown on the display is what you want to delete.

3. Press Enter.

Displaying a collision resolution class

To display a collision resolution class, do the following:

1. From the Work with CR Classes display, type a 5 (Display) next to the collision resolution class you want and press Enter.

2. The Display CR Class Details display appears. Press Page Down to see all of the values.


Printing a collision resolution class

Use this procedure to create a spooled file of a collision resolution class which you can print.

1. From the Work with CR Classes display, type a 6 (Print) next to the collision resolution class you want and press Enter.

2. A spooled file is created with the name MXCRCLS on which you can use your standard printing procedure.


Omitting T-ZC content from system journal replication

For logical and physical files configured for replication solely through the system journal, MIMIX provides the ability to prevent replication of predetermined sets of T-ZC journal entries associated with changes to object attributes or content.

Default T-ZC processing: Files that have an object auditing value of *CHANGE or *ALL will generate T-ZC journal entries whenever changes to the object attributes or contents occur. The access type field within the T-ZC journal entry indicates what type of change operation occurred. Table 46 lists the T-ZC journal entry access types that are generated by PF-DTA, PF38-DTA, PF-SRC, PF38-SRC, LF, and LF-38 file types.

Table 46. T-ZC journal entry access types generated by file objects. These T-ZC journal entries are eligible for replication through the system journal. Each access type is a file, member, or data operation, as indicated.

1 Add - Member operation. Add member for physical files and logical files (ADDPFM).

7 Change (1) - File and member operation. Change Physical File (CHGPF), Change Logical File (CHGLF), Change Physical File Member (CHGPFM), Change Logical File Member (CHGLFM), Change Object Description (CHGOBJD).

10 Clear - Member operation. Clear member for physical files (CLRPFM).

25 Initialize - Member operation. Initialize member for physical files (INZPFM).

30 Open - Data operation. Opening a member for write for physical files.

36 Reorganize - Member operation. Reorganize member for physical files (RGZPFM).

37 Remove - Member operation. Remove member for physical files and logical files (RMVM).

38 Rename - Member operation. Rename member for physical files and logical files (RNMM).

62 Add constraint - File operation. Adding a constraint for physical files (ADDPFCST).

63 Change constraint - File operation. Changing a constraint for physical files (CHGPFCST).

64 Remove constraint - File operation. Removing a constraint for physical files (RMVPFCST).

(1) These T-ZC journal entries may or may not have a member name associated with them. If a member name is associated with the journal entry, the T-ZC is a member operation. If no member name is associated with the journal entry, the T-ZC is assumed to be a file operation.


By default, MIMIX replicates file attributes and file member data for all T-ZC entries generated for logical and physical files configured for system journal replication. While MIMIX recreates attribute changes on the target system, member additions and data changes require MIMIX to replicate the entire object using save, send, and restore processes. This can cause unnecessary replication of data and can impact processing time, especially in environments where the replication of file data transactions is not necessary.

Omitting T-ZC entries: Through the Omit content (OMTDTA) parameter on data group object entry commands, you can specify a predetermined set of access types for *FILE objects to be omitted from system journal replication. T-ZC journal entries with access types within the specified set are omitted from processing by MIMIX.

The OMTDTA parameter is useful when a file or member’s data does not need to be replicated. For example, when replicating work files and temporary files, it may be desirable to replicate the file layout but not the file members or data. The OMTDTA parameter can also help you reduce the number of transactions that require substantial processing time to replicate, such as T-ZC journal entries with access type 30 (Open).

Each of the following values for the OMTDTA parameter define a set of access types that can be omitted from replication:

*NONE - No T-ZCs are omitted from replication. All file, member, and data operations in transactions for the access types listed in Table 46 are replicated. This is the default value.

*MBR - Data operations are omitted from replication. File and member operations in transactions for the access types listed in Table 46 are replicated; access type 7 (Change) is replicated for both file and member operations.

*FILE - Member and data operations are omitted from replication. Only file operations in transactions for the access types listed in Table 46 are replicated. Only file operations in transactions with access type 7 (Change) are replicated.
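A sketch of these selection rules, grouping the access types from Table 46 by operation type. The function and set names are illustrative, not part of MIMIX.

```python
# Access types from Table 46, grouped by operation type. Type 7 (Change)
# can be either a file or a member operation, so it appears in both sets.
FILE_OPS = {7, 62, 63, 64}
MEMBER_OPS = {1, 7, 10, 25, 36, 37, 38}
DATA_OPS = {30}

def replicates(omtdta, access_type, is_member_op=False):
    # Whether a T-ZC entry with this access type is selected for
    # replication under the given OMTDTA setting.
    if omtdta == "*NONE":
        return True                           # nothing is omitted
    if omtdta == "*MBR":
        return access_type not in DATA_OPS    # omit only data operations
    if omtdta == "*FILE":
        # Keep only file operations; a member-level type 7 entry is omitted.
        return access_type in FILE_OPS and not (access_type == 7 and is_member_op)
    raise ValueError(omtdta)
```

For example, replicates("*MBR", 30) is False because access type 30 (Open) is a data operation, while replicates("*FILE", 62) is True because adding a constraint is a file operation.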

Configuration requirements and considerations for omitting T-ZC content

To omit transactions, logical and physical files must be configured for system journal replication and meet these configuration requirements:

• The data group definition must specify *ALL or *OBJ for the Data group type (TYPE).

• The file for which you want to omit transactions must be identified by a data group object entry that specifies the following:

– Cooperate with database (COOPDB) must be *NO when Cooperating object types (COOPTYPE) specifies *FILE. If COOPDB is *YES, then COOPTYPE cannot specify *FILE.

– Omit content (OMTDTA) must be either *FILE or *MBR.

Object auditing value considerations - The file must have an object auditing value of *CHANGE or *ALL in order for any T-ZC journal entry resulting from a change operation to be created in the system journal. To ensure that changes to the file continue to be journaled and replicated, the data group object entry should also specify *CHANGE or *ALL for the Object auditing value (OBJAUD) parameter.

For all library-based objects, MIMIX evaluates the object auditing level when starting a data group after a configuration change. If the configured value specified for the OBJAUD parameter is higher than the object’s actual value, MIMIX will change the object to use the higher value. If you use the SETDGAUD command to force the object to have an auditing level of *NONE and the data group object entry also specifies *NONE, any changes to the file will no longer generate T-ZC entries in the system journal. For more information about object auditing, see “Managing object auditing” on page 57.

Object attribute considerations - When MIMIX evaluates a system journal entry and finds a possible match to a data group object entry which specifies an attribute in its Attribute (OBJATR) parameter, MIMIX must retrieve the attribute from the object in order to determine which object entry is the most specific match.

If the object attribute is not needed to determine the most specific match to a data group object entry, it is not retrieved.

After determining which data group object entry has the most specific match, MIMIX evaluates that entry to determine how to proceed with the journal entry. When the matching object entry specifies *FILE or *MBR for OMTDTA, MIMIX does not need to consider the object attribute in any other evaluations. As a result, the performance of the object send job may improve.

Updated for 5.0.03.00.

Omit content (OMTDTA) and cooperative processing

The OMTDTA and COOPDB parameters are mutually exclusive. MIMIX allows only a value of *NONE for OMTDTA when a data group object entry specifies cooperative processing of files with COOPDB(*YES) and COOPTYPE(*FILE).

When using MIMIX Dynamic Apply for cooperative processing, logical files and physical files (source and data) are replicated primarily through the user journal.

Legacy cooperative processing replicates only physical data files. When using legacy cooperative processing, system journal replication processes select only file attribute transactions. File attribute transactions are T-ZC journal entries with access types 7 (Change), 62 (Add constraint), 63 (Change constraint), and 64 (Remove constraint). These transactions are replicated by system journal replication during legacy cooperative processing, while most other transactions are replicated by user journal replication.

Updated for 5.0.03.00.

Omit content (OMTDTA) and comparison commands

All T-ZC journal entries for files are replicated when *NONE is specified for the OMTDTA parameter. However, when OMTDTA is enabled by specifying *FILE or *MBR, some T-ZC journal entries for file objects are omitted from system journal replication. This may affect whether replicated files on the source and target systems are identical.

For example, recall how a file with an object auditing attribute value of *NONE is processed. After MIMIX replicates the initial creation of the file through the system journal, the file on the target system reflects the original state of the file on the source system when it was retrieved for replication. However, any subsequent changes to file data are not replicated to the target system. According to the configuration information, the files are synchronized between source and target systems, but the files are not the same.

A similar situation can occur when OMTDTA is used to prevent replication of predetermined types of changes. For example, if *MBR is specified for OMTDTA, the file and member attributes are replicated to the target system but the member data is not. The file is not identical between source and target systems, but it is synchronized according to configuration. Comparison commands will report these attributes as *EC (equal configuration) even though member data is different. MIMIX audits, which call comparison commands with a data group specified, will have the same results.

Running a comparison command without specifying a data group will report all the synchronized-but-not-identical attributes as *NE (not equal) because no configuration information is considered.

Consider how the following comparison commands behave when faced with non-identical files that are synchronized according to the configuration.

• The Compare File Attributes (CMPFILA) command has access to configuration information from data group object entries for files configured for system journal replication. When a data group is specified on the command, files that are configured to omit data will report those omitted attributes as *EC (equal configuration). When CMPFILA is run without specifying a data group, the synchronized-but-not-identical attributes are reported as *NE (not equal).

• The Compare File Data (CMPFILDTA) command uses data group file entries for configuration information. As a result, when a data group is specified on the command, any file objects configured for OMTDTA will not be compared. When CMPFILDTA is run without specifying a data group, the synchronized-but-not-identical file member attributes are reported as *NE (not equal).

• The Compare Object Attributes (CMPOBJA) command can be used to check for the existence of a file on both systems and to compare its basic attributes (those which are common to all object types). This command never compares file-specific attributes or member attributes and should not be used to determine whether a file is synchronized.


Selecting an object retrieval delay

When replicating objects, particularly documents (*DOC) and stream files (*STMF), MIMIX will obtain a lock on the object that can prevent your applications from accessing the object in a timely manner.

Some of your applications may be unable to recover from this condition and may fail in an unexpected manner.

You can reduce, or eliminate, contention for an object between MIMIX and your applications if the object retrieval processing is delayed for a predetermined amount of time before obtaining a lock on the object to retrieve it for replication.

You can use the Object retrieval delay element within the Object processing parameter on the change or create data group definition commands to set the delay time between the time the object was last changed on the source system and the time MIMIX attempts to retrieve the object on the source system.

Although you can specify this value at the data group level, you can override the data group value at the object level by specifying an Object retrieval delay value on the commands for creating or changing data group entries.

You can specify a delay time from 0 through 999 seconds. The default is 0.

If the object retrieval latency time (the difference between when the object was last changed and the current time) is less than the configured delay value, MIMIX will delay its object retrieval processing until the latency time meets or exceeds the configured delay value.

If the object retrieval latency time is equal to or greater than the configured delay value, MIMIX will not delay and will continue with the object retrieval processing.
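The delay decision just described can be sketched in a few lines. This is illustrative Python, not MIMIX code; times are expressed as seconds since an arbitrary epoch:

```python
def retrieval_delay(last_changed, now, configured_delay):
    """Seconds the object retrieve job should wait before retrieving
    the object, per the rule above."""
    latency = now - last_changed           # object retrieval latency time
    if latency < configured_delay:
        return configured_delay - latency  # wait out the remaining delay
    return 0                               # latency already meets or exceeds the delay
```

For instance, an object changed 4 seconds ago with a configured delay of 3 seconds is retrieved immediately, while one changed 1 second ago must wait 2 more seconds.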

Object retrieval delay considerations and examples

You should use care when choosing the object retrieval delay. A long delay may impact the ability of system journal replication processes to move data from a system in a timely manner. Too short a delay may allow MIMIX to retrieve an object before an application is finished with it. You should make the value large enough to reduce or eliminate contention between MIMIX and applications, but small enough to allow MIMIX to maintain a suitable high availability environment.

Example 1 - The object retrieval delay value is configured to be 3 seconds:

• Object A is created or changed at 9:05:10.

• The Object Retrieve job encounters the create/change journal entry at 9:05:14. It retrieves the “last change date/time” attribute from the object and determines that the delay time (object last changed date/time of 9:05:10 + configured delay value of :03 = 9:05:13) is less than the current date/time (9:05:14). Because the object retrieval delay time has already been exceeded, the object retrieve job continues normal processing and attempts to package the object.

Example 2 - The object retrieval delay value is configured to be 2 seconds:

• Object A is created or changed at 10:45:51.


• The Object Retrieve job encounters the create/change journal entry at 10:45:52. It retrieves the “last change date/time” attribute from the object and determines that the delay time (object last changed date/time of 10:45:51 + configured delay value of :02 = 10:45:53) exceeds the current date/time (10:45:52). Because the object retrieval delay value has not been met or exceeded, the object retrieve job delays for 1 second to satisfy the configured delay value.

• After the delay (at time 10:45:53), the Object Retrieve job again retrieves the “last change date/time” attribute from the object and determines that the delay time (object last changed date/time of 10:45:51 + configured delay value of :02 = 10:45:53) is equal to the current date/time (10:45:53). Because the object retrieval delay value has been met, the object retrieve job continues with normal processing and attempts to package the object.

Example 3 - The object retrieval delay value is configured to be 4 seconds:

• Object A is created or changed at 13:20:26.

• The Object Retrieve job encounters the create/change journal entry at 13:20:27. It retrieves the “last change date/time” attribute from the object and determines that the delay time (object last changed date/time of 13:20:26 + configured delay value of :04 = 13:20:30) exceeds the current date/time (13:20:27) and delays for 3 seconds to satisfy the configured delay value.

• While the object retrieve job is waiting to satisfy the configured delay value, the object is changed again at 13:20:28.

• After the delay (at time 13:20:30), the Object Retrieve job again retrieves the “last change date/time” attribute from the object and determines that the delay time (object last changed date/time of 13:20:28 + configured delay value of :04 = 13:20:32) again exceeds the current date/time (13:20:30) and delays for 2 seconds to satisfy the configured delay value.

• After the delay (at time 13:20:32), the Object Retrieve job again retrieves the “last change date/time” attribute from the object and determines that the delay time (object last changed date/time of 13:20:28 + configured delay value of :04 = 13:20:32) is equal to the current date/time (13:20:32). Because the object retrieval delay value has now been met, the object retrieve job continues with normal processing and attempts to package the object.
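Example 3 shows that the check repeats whenever the object changes during the wait. That loop can be sketched as follows, using hypothetical helper callables `get_last_changed` and `sleep`; this is an illustration of the behavior, not MIMIX internals:

```python
def wait_until_quiesced(get_last_changed, now, delay, sleep):
    """Delay until the object has gone `delay` seconds without change,
    re-reading the last-changed time after every wait (sketch of the
    behavior shown in Example 3). Returns the time at which retrieval
    may proceed."""
    while True:
        target = get_last_changed() + delay  # earliest time retrieval may proceed
        if target <= now:
            return now                       # delay satisfied; package the object
        sleep(target - now)                  # wait out the remaining delay
        now = target                         # then re-check the last-changed time
```

Replaying Example 3 (object changed at 26, job starts at 27, delay of 4, object changed again at 28 during the first wait) produces waits of 3 and then 2 seconds before retrieval proceeds at 32.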


Configuring to replicate SQL stored procedures and user-defined functions

DB2 UDB for System i5 supports external stored procedures and SQL stored procedures. This information is specifically for replicating SQL stored procedures and user-defined functions. SQL stored procedures are defined entirely in SQL and may contain SQL control statements. MIMIX can replicate operations related to stored procedures that are written in SQL (SQL stored procedures), such as CREATE PROCEDURE (create), DROP PROCEDURE (delete), GRANT PRIVILEGES ON PROCEDURE (authority), and REVOKE PRIVILEGES ON PROCEDURE (authority).

An SQL procedure is a program created and linked to the database as the result of a CREATE PROCEDURE statement that specifies the language SQL and is called using the SQL CALL statement. For example, the following statement creates program SQLPROC in LIBX and establishes it as a stored procedure associated with LIBX:

CREATE PROCEDURE LIBX/SQLPROC(OUT NUM INT) LANGUAGE SQL

SELECT COUNT(*) INTO NUM FROM FILEX

For SQL stored procedures, an independent program object is created by the system and contains the code for the procedure. The program object usually shares the name of the procedure and resides in the same library with which the procedure is associated. A DROP PROCEDURE statement for an SQL procedure removes the procedure from the catalog and deletes the external program object.

Procedures are associated with a particular library. Because information about the procedure is stored in the database catalog and not the library, it cannot be seen by looking at the library. Use System i5 Navigator to view the stored procedures associated with a particular library (select Databases > Libraries).

Requirements for replicating SQL stored procedure operations

The following configuration requirements and restrictions must be met:

• Apply any IBM PTFs (or their supersedes) associated with i5/OS releases as they pertain to your environment. Log in to Support Central and refer to the Technical Documents page for a list of required and recommended IBM PTFs.

• To correctly replicate a create operation, name mapping cannot be used for either the library or program name.

• GRANT and REVOKE only affect the associated program object. MIMIX replicates these operations correctly.

• The COMMENT statement cannot be replicated.

To replicate SQL stored procedure operations

Do the following:

1. Ensure that the replication requirements for the various operations are followed. See “Requirements for replicating SQL stored procedure operations” on page 393.


2. Ensure that you have a data group object entry that includes the associated program object. For example:

ADDDGOBJE DGDFN(name system1 system2) LIB1(library) OBJ1(*ALL) OBJTYPE(*PGM)


Using Save-While-Active in MIMIX

MIMIX system journal replication processes use save/restore when replicating most types of objects. If there is a conflict for the use of an object between MIMIX and some other process, the initial save of the object may fail. When such a failure occurs, MIMIX will attempt to process the object by automatically starting delay or retry processing using the values configured in the data group definition.

For the initial save of *FILE objects, save-while-active capabilities will be used unless save-while-active is disabled. By default, save-while-active is only used when saving *FILE objects; it is not used when saving other library-based object types, DLOs, or IFS objects. However, you can specify to have MIMIX attempt saves of DLOs and IFS objects using save-while-active.

Values for retry processing are specified in the First retry delay interval (RTYDLYITV1) and Number of times to retry (RTYNBR) parameters in the data group definition. After the initial failed save attempt, MIMIX delays for the number of seconds specified in the RTYDLYITV1 value, before retrying the save operation. This is repeated for the number of times that is specified for the RTYNBR value in the data group definition. If the object cannot be saved after the attempts specified in RTYNBR, then MIMIX uses the delay interval value which is specified in the RTYDLYITV2 parameter. The save is then attempted for the number of retries specified in the RTYNBR parameter. For the initial default values for a data group, this calculates to be 7 save attempts (1 initial attempt, 3 attempts using the first delay value of 5 seconds, and 3 attempts using the second delay value of 300 seconds), in a time frame of approximately 20 minutes. For more information on retry processing, see the parameters for automatic retry processing in “Tips for data group parameters” on page 234.
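The attempt schedule described above can be sketched as follows. The parameter names mirror the data group definition, but the function itself is an illustration, not part of MIMIX:

```python
def save_attempt_schedule(rtydlyitv1, rtydlyitv2, rtynbr):
    """Delays (in seconds) preceding each save attempt: one initial
    attempt, RTYNBR retries at the first delay interval, then RTYNBR
    retries at the second delay interval (sketch of the retry scheme
    described above)."""
    return [0] + [rtydlyitv1] * rtynbr + [rtydlyitv2] * rtynbr
```

With the initial default values cited above (first interval 5 seconds, second interval 300 seconds, 3 retries), this yields the 7 save attempts the text describes.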

Considerations for save-while-active

If a file is being saved and it shares a journal with another file that has uncommitted transactions, then the file may be successfully saved by using a normal (non save-while-active) save. This assumes that the file being saved does not have uncommitted transactions. If you disable save-while-active, attempts to save any type of object will use a normal save.

In addition to providing the ability to enable the use of save-while-active for object types other than *FILE, MIMIX provides the abilities to control the wait time when using save-while-active or to disable the use of save-while-active for all object types.

Save-while-active wait time

For the default (*FILE objects), MIMIX uses save-while-active with a wait time of 120 seconds on the initial save attempt. MIMIX then uses normal (non save-while-active) processing on all subsequent save attempts if the initial save attempt fails.

You can configure the save-while-active wait time when specifying to use save-while-active for the initial save attempt of a *FILE, a DLO, or an IFS object. When specifying to use save-while-active, the first attempt to save the object after delaying the amount of time configured for the Second retry delay interval (RTYDLYITV2) value will also use save-while-active. All other attempts to save the object will use a normal save.

Note: Although MIMIX has the capability to replicate DLOs using save/restore techniques, it is recommended that DLOs be replicated using optimized techniques, which can be configured using the DLO transmission method under Object processing in the data group definition.

Types of save-while-active options

MIMIX uses the configuration value (DGSWAT) to select the type of save-while-active option to be used when saving objects. You can view and change these configuration values for a data group through an interface such as SQL or DFU.

DGSWAT: Save-while-active type. You can specify the following values:

• A value of 0 (the default) indicates that save-while-active is to be used when saving files, with a save-while-active wait time of 120 seconds. For DLOs and IFS objects, a normal save will be attempted.

• A value of 1 through 99999 indicates that save-while-active is to be used when saving files, DLOs and IFS objects. The value specified will be used as the save-while-active wait time, such as when passed to the SAVACTWAIT parameter on the SAVOBJ and SAVDLO commands.

• A value of -1 indicates that save-while-active is disabled and is not to be used when saving files, DLOs or IFS objects. Normal saves will always be used to save any type of object.
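The three DGSWAT cases can be summarized in a small sketch. This is an illustrative mapping of the rules above, not a MIMIX API:

```python
def swa_plan(dgswat):
    """Interpret a DGSWAT value per the rules above. Returns a tuple:
    (use save-while-active for files, use it for DLOs/IFS objects,
    save-while-active wait time in seconds or None)."""
    if dgswat == -1:
        return (False, False, None)  # disabled: normal saves for all object types
    if dgswat == 0:
        return (True, False, 120)    # default: files only, 120-second wait
    if 1 <= dgswat <= 99999:
        return (True, True, dgswat)  # all types; value is passed as the wait time
    raise ValueError('unsupported DGSWAT value')
```

For example, DGSWAT=30 enables save-while-active for files, DLOs, and IFS objects with a 30-second wait, matching the modification example below.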

Example configurations

The following examples describe the SQL statements that could be used to view or set the configuration settings for a data group definition (data group name, system 1 name, system 2 name) of MYDGDFN, SYS1, SYS2.

Example - Viewing: Use this SQL statement to view the values for the data group definition:

SELECT DGDGN, DGSYS, DGSYS2, DGSWAT FROM MIMIX/DM0200P WHERE DGDGN='MYDGDFN' AND DGSYS='SYS1' AND DGSYS2='SYS2'

Example - Disabling: If you want to modify the values for a data group definition to disable use of save-while-active for a data group and use a normal save, you could use the following statement:

UPDATE MIMIX/DM0200P SET DGSWAT=-1 WHERE DGDGN='MYDGDFN' AND DGSYS='SYS1' AND DGSYS2='SYS2'

Example - Modifying: If you want to modify a data group definition to enable use of save-while-active with a wait time of 30 seconds for files, DLOs and IFS objects, you could use the following statement:

UPDATE MIMIX/DM0200P SET DGSWAT=30 WHERE DGDGN='MYDGDFN' AND DGSYS='SYS1' AND DGSYS2='SYS2'

Note: You only have to make this change on the management system; the network system will be automatically updated by MIMIX.


Chapter 17

Object selection for Compare and Synchronize commands

Many of the Compare and Synchronize commands, which provide underlying support for MIMIX AutoGuard, use an enhanced set of common parameters and a common processing methodology that is collectively referred to as ‘object selection.’ Object selection provides powerful, granular capability for selecting objects by data group, object selection parameter, or a combination.

The following commands use the MIMIX object selection capability:

• Compare File Attributes (CMPFILA)

• Compare Object Attributes (CMPOBJA)

• Compare IFS Attributes (CMPIFSA)

• Compare DLO Attributes (CMPDLOA)

• Compare File Data (CMPFILDTA)

• Compare Record Count (CMPRCDCNT)

• Synchronize Object (SYNCOBJ)

• Synchronize IFS Object (SYNCIFS)

• Synchronize DLO (SYNCDLO)

The topics in this chapter include:

• “Object selection process” on page 399 describes object selection which interacts with your input from a command so that the objects you expect are selected for processing.

• “Parameters for specifying object selectors” on page 402 describes object selectors and elements which allow you to work with classes of objects

• “Object selection examples” on page 407 provides examples and graphics with detailed information about object selection processing, object order precedence, and subtree rules.

• “Report types and output formats” on page 418 describes the output of compare commands: spooled files and output files (outfiles).

Object selection process

It is important to be able to predict the manner in which object selection interacts with your input from a command so that the objects you expect are selected for processing.

The object selection capability provides you with the option to select objects by data group, object selection parameter, or a combination. Object selection supports four classes of objects: files, objects, IFS objects, and DLOs.


The object selection process takes a candidate group of objects, subsets it as defined by a list of object selectors, and produces the list of objects to be processed. Figure 24 illustrates the process flow for object selection.

Figure 24. Object selection process flow

Candidate objects are those objects eligible for selection. They are input to the object selection process. Initially, candidate objects consist of all objects on the system. Based on the command, the set of candidate objects may be narrowed down to objects of a particular class (such as IFS objects).

The values specified on the command determine the object selectors used to further refine the list of candidate objects in the class. An object selector identifies an object or group of objects. Object selectors can come from the configuration information for a specified data group, from items specified in the object selector parameter, or both.

MIMIX processing for object selection consists of two distinct steps. Depending on what is specified on the command, one or both steps may occur.

The first major selection step is optional and is performed only if a data group definition is entered on the command. In that case, data group entries are the source for object selectors. Data group entries represent one of four classes of objects: files, library-based objects, IFS objects, and DLOs. Only those entries that correspond to the class associated with the command are used. The data group entries subset the list of candidate objects for the class to only those objects that are eligible for replication by the data group.

If the command specifies a data group and items on the object selection parameter, the data group entries are processed first to determine an intermediate set of candidate objects that are eligible for replication by the data group. That intermediate set is input to the second major selection step. The second step then uses the input specified on the object selection parameter to further subset the objects selected by the data group entries.

If no data group is specified on the data group definition parameter, the object selection parameter can be used independently to select from all objects on the system.

The second major object selection step subsets the candidate objects based on object selectors from the command’s object selector parameter (file, object, IFS object, or DLO). Up to 300 object selectors may be specified on the parameter. If none are specified, the default is to select all candidate objects.

Note: A single object selector can select multiple objects through the use of generic names and special values such as *ALL, so the resulting object list can easily exceed the limit of 300 object selectors that can be entered on a command.

The selection parameter is separate and distinct from the data group configuration entries. When a data group is specified, the number of possible object selectors ranges from 1 to N, where N is defined by the number of data group entries. The candidate objects that remain after both steps make up the resultant list of objects to be processed.

Each object selector consists of multiple object selector elements, which serve as filters on the object selector. The object selector elements vary by object class. Elements provide information about the object such as its name, an indicator of whether the objects should be included in or omitted from processing, and name mapping for dual-system and single-system environments. See Table 47 for a list of object selector elements by object class.

Order precedence

Object selectors are always processed in a well-defined sequence, which is important when an object matches more than one selector.


Selectors from a data group follow data group rules and are processed in most- to least-specific order. Selectors from the object selection parameter are always processed last to first. If a candidate object matches more than one object selector, the last matching selector in the list is used.

As a general rule when specifying items on an object selection parameter, first specify selectors that have a broad scope and then gradually narrow the scope in subsequent selectors. In an IFS-based command, for example, include /A/B* and then omit /A/B1.
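The last-match rule on the object selection parameter can be sketched as follows. Python's `fnmatch` generic matching is an illustrative stand-in for MIMIX's generic-name rules, and the function is not MIMIX code:

```python
import fnmatch

def selected(path, selectors):
    """Selectors are ordered (pattern, include) pairs as entered on the
    object selection parameter; per the precedence rule above, the last
    matching selector in the list decides."""
    decision = None
    for pattern, include in selectors:
        if fnmatch.fnmatch(path, pattern):
            decision = include          # a later match overrides earlier ones
    return decision is True             # unmatched objects are not selected
```

With the IFS example above — include /A/B* first, then omit /A/B1 — /A/B2 is selected while /A/B1 is not, because the narrower omit selector matches last.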

“Object selection examples” on page 407 illustrates the precedence of object selection.

For each object selector, the elements are checked according to a priority defined for the object class. The most specific element is checked for a match first, then the subsequent elements are checked according to their priority. For additional, detailed information about order precedence and priority of elements, see the following topics:

• “How MIMIX uses object entries to evaluate journal entries for replication” on page 101

• “Identifying IFS objects for replication” on page 118

• “How MIMIX uses DLO entries to evaluate journal entries for replication” on page 124

• “Processing variations for common operations” on page 130

Parameters for specifying object selectors

The object selectors and elements allow you to work with classes of objects. These objects can be library-based, directory-based, or folder-based. An object selector consists of several elements that identify an object or group of objects, indicate whether those objects should be included in or omitted from processing, and may describe name mapping for those objects. The elements vary, depending on the class of objects with which a particular command works.

Library-based selection allows you to work with files or objects based on object name, library name, member name, object type, or object attribute. Directory-based selection allows you to work with objects based on an IFS object path name and includes a subtree option that determines the scope of directory-based objects to include. Folder-based selection allows you to work with objects based on DLO path name. Folder-based selection also includes a subtree object selector.

Object selection supports generic object name values for all object classes. A generic name is a character string that contains one or more characters followed by an asterisk (*). When a generic name is specified, all candidate objects that match the generic name are selected.

For all classes of objects, you can specify as many as 300 object selectors. However, the specific object selector elements that you can specify on the command are determined by the class of object.

Object selector elements provide four functions:

• Object identification elements define the selected object by name, including generic name specifications.

• Filtering elements provide additional filtering capability for candidate objects.

• Name mapping elements are required primarily for environments where objects exist in different libraries or paths.

• Include or omit elements identify whether the object should be processed or explicitly excluded from processing.

Table 47 lists object selection elements by function and identifies which elements are available on the commands.

File name and object name elements: The File name and Object name elements allow you to identify a file or object by name. These elements allow you to choose a specific name, a generic name, or the special value *ALL.

Using a generic name, you can select a group of files or objects based on a common character string. If you want to work with all objects beginning with the letter A, for example, you would specify A* for the object name.

To process all files within the related selection criteria, select *ALL for the file or object name. When a data group is also specified on the command, a value of *ALL results in the selection of files and objects defined to that data group by the respective data group file entries or data group object entries. When no data group is specified on the command and *ALL is specified with a library name, only the objects that reside within the given library are selected.

Table 47. Object selection parameters and parameter elements by class

Class                  File                   Library-based object   IFS                     DLO
Commands:              CMPFILA, CMPFILDTA,    CMPOBJA, SYNCOBJ       CMPIFSA, SYNCIFS        CMPDLOA, SYNCDLO
                       CMPRCDCNT (1)
Parameter:             FILE                   OBJ                    OBJ                     DLO
Identification         File, Library,         Object, Library        Path, Subtree,          Path, Subtree,
elements:              Member                                        Name pattern            Name pattern
Filtering elements:    Attribute (1)          Type, Attribute        Type                    Type, Owner
Processing elements:   Include/Omit           Include/Omit           Include/Omit            Include/Omit
Name mapping           System 2 file (1),     System 2 object,       System 2 path,          System 2 path,
elements:              System 2 library (1)   System 2 library       System 2 name pattern   System 2 name pattern

1. The Compare Record Count (CMPRCDCNT) command does not support elements for attributes or name mapping.

Library name element: The library name element specifies the name of the library that contains the files or objects to be included or omitted from the resultant list of objects. Like the file or object name, this element allows you to define a library by a specific name, a generic name, or the special value *ALL.

Note: The library value *ALL is supported only when a data group is specified.

Member element: For commands that support the ability to work with file members, the Member element provides a means to select specific members. The Member element can be a specific name, a generic name, or the special value *ALL.

Refer to the individual commands for detailed information on member processing.

Object path name (IFS) and DLO path name elements: The Object path name (IFS) and DLO path name elements identify an object or DLO by path name. They allow a specific path, a generic path, or the special value *ALL.

Traditionally, DLOs are identified by a folder path and a DLO name. Object selection uses an element called DLO path, which combines the folder path and the DLO name.

If you specify a data group, only those objects defined to that data group by the respective data group IFS entries or data group DLO entries are selected.

Directory subtree and folder subtree elements: The Directory subtree and Folder subtree elements allow you to expand the scope of selected objects and include the descendants of objects identified by the given object or DLO path name. By default, the subtree element is *NONE, and only the named objects are selected. However, if *ALL is used, all descendants of the named objects are also selected.
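The effect of the subtree element can be sketched with simple path-prefix logic. This is illustrative only and does not reflect MIMIX's actual traversal:

```python
def in_subtree(candidate, named_path, subtree):
    """True if `candidate` is selected by an object selector naming
    `named_path` with the given subtree value (*NONE or *ALL)."""
    if candidate == named_path:
        return True                     # the named object itself always matches
    if subtree != '*ALL':
        return False                    # *NONE: descendants are not selected
    # *ALL: any descendant of the named path is also selected
    return candidate.startswith(named_path.rstrip('/') + '/')
```

For example, with a named path of /corporate, subtree *NONE selects only /corporate itself, while *ALL also selects /corporate/accounting and everything beneath it.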

Figure 25 illustrates the hierarchical structure of folders and directories prior to processing, and is used as the basis for the path, pattern, and subtree examples shown later in this document. For more information, see the graphics and examples beginning with “Example subtree” on page 410.

Figure 25. Directory or folder hierarchy


Directory subtree elements for IFS objects: When selecting IFS objects, only the objects in the specified file system will be included. Object selection will not cross file system boundaries when processing subtrees with IFS objects. Objects from other file systems do not need to be explicitly excluded; however, you must explicitly specify any other file systems whose objects you want to include. For more information, see the graphic and examples beginning with “Example subtree for IFS objects” on page 415.

Name pattern element: The Name pattern element provides a filter on the last component of the object path name. The Name pattern element can be a specific name, a generic name, or the special value *ALL.

If you specify a pattern of $*, for example, only those candidate objects with names beginning with $ that reside in the named DLO path or IFS object path are selected.

Keep in mind that improper use of the Name pattern element can have undesirable results. Assume, for example, that you specified a path name of /corporate, a subtree of *NONE, and a pattern of $*. Because the path name, /corporate, does not match the pattern $*, the object selector will identify no objects. Thus, the Name pattern element is generally most useful when subtree is *ALL.

For more information, see the “Example Name pattern” on page 414.
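The pitfall described above can be illustrated with a small Python sketch. The helper and paths are hypothetical, and `fnmatch` stands in for generic name matching; this is an illustrative model only, not MIMIX code.

```python
from fnmatch import fnmatch

def select(path, subtree, pattern, candidates):
    # Subtree *NONE keeps only the named object; *ALL also keeps its
    # descendants.
    if subtree == "*NONE":
        scope = [c for c in candidates if c == path]
    else:  # "*ALL"
        scope = [c for c in candidates if c == path or c.startswith(path + "/")]
    # The Name pattern then filters on the LAST path component only.
    if pattern == "*ALL":
        return scope
    return [c for c in scope if fnmatch(c.rsplit("/", 1)[-1], pattern)]

tree = ["/corporate", "/corporate/$123", "/corporate/reports"]

# Path /corporate, subtree *NONE, pattern $*: only /corporate itself is in
# scope, and "corporate" does not match $*, so nothing is selected.
print(select("/corporate", "*NONE", "$*", tree))   # []

# With subtree *ALL, descendants are in scope and $123 matches the pattern.
print(select("/corporate", "*ALL", "$*", tree))    # ['/corporate/$123']
```

As the output shows, the pattern only becomes useful once subtree *ALL brings descendants into scope.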

Object type element: The Object type element provides the ability to filter objects based on an object type. The object type is valid for library-based objects, IFS objects, or DLOs, and can be a specific value or *ALL. The list of allowable values varies by object class.

When you specify *ALL, only those object types which MIMIX supports for replication are included. For a list of replicated object types, see “Supported object types for system journal replication” on page 549.

Supported object types for CMPIFSA and SYNCIFS are listed in Table 48.

Supported object types for CMPDLOA and SYNCDLO are listed in Table 49.

Table 48. Supported object types for CMPIFSA and SYNCIFS

Object type Description

*ALL All directories, stream files, and symbolic links are selected

*DIR Directories

*STMF Stream files

*SYMLNK Symbolic links

Table 49. Supported DLO types for CMPDLOA and SYNCDLO

DLO type Description

*ALL All documents and folders are selected

*DOC Documents

*FLR Folders


For unique object types supported by a specific command, see the individual commands.

Object attribute element: The Object attribute element provides the ability to filter based on the extended object attribute. For example, file attributes include PF, LF, SAVF, and DSPF, and program attributes include CLP and RPG. The attribute can be a specific value, a generic value, or *ALL.

Although any value can be entered on the Object attribute element, a list of supported attributes is available on the command. Refer to the individual commands for the list of supported attributes.

Owner element: The Owner element allows you to filter DLOs based on DLO owner. The Owner element can be a specific name or the special value *ALL. Only candidate DLOs owned by the designated user profile are selected.

Include or omit element: The Include or omit element determines whether candidate objects are included in or omitted from the resultant list of objects to be processed by the command.

Included entries are added to the resultant list and become candidate objects for further processing. Omitted entries are not added to the list and are excluded from further processing.

System 2 file and system 2 object elements: The System 2 file and System 2 object elements provide support for name mapping. Name mapping is useful when working with multiple sets of files or objects in a dual-system or single-system environment.

This element may be a specific name or the special value *FILE1 for files or *OBJ1 for objects. If the File or Object element is not a specific name, then you must use the default value of *FILE1 or *OBJ1. This specification indicates that the name of the file or object on system 2 is the same as on system 1 and that no name mapping occurs. Generic values are not supported for the system 2 value if a generic value was specified on the File or Object parameter.

System 2 library element: The System 2 library element allows you to specify a system 2 library name that differs from the system 1 library name, providing name mapping between files or objects in different libraries.

This element may be a specific name or the special value *LIB1. If the System 2 library element is not a specific name, then you must use the default value of *LIB1. This specification indicates that the name of the library on system 2 is the same as on system 1 and that no name mapping occurs. Generic values are not supported for the system 2 value if a generic value was specified on the Library object selector.

System 2 object path name and system 2 DLO path name elements: The System 2 object path name and System 2 DLO path name elements support name mapping for the path specified in the Object path name or DLO path name element. Name mapping is useful when working with two sets of IFS objects or DLOs in different paths in either a dual-system or single-system environment.

Generic values are not supported for the system 2 value if you specified a generic value for the IFS Object or DLO element. Instead, you must choose the default values of *OBJ1 for IFS objects or *DLO1 for DLOs. These values indicate that the name of the file or object on system 2 is the same as that value on system 1. The default provides support for a two-system environment without name mapping.

System 2 name pattern element: The System 2 name pattern provides support for name mapping for the descendants of the path specified for the Object path name or DLO path name element.

The System 2 name pattern element may be a specific name or the special value *PATTERN1. If the Object path name or DLO path name element is not a specific name, then you must use the default value of *PATTERN1. This specification indicates that no name mapping occurs. Generic values are not supported for the System 2 name pattern element if you specified a generic value for the Name pattern element.

Object selection examples

In this section, examples and graphics provide you with detailed information about object selection processing, object order precedence, and subtree rules. These illustrations show how objects are selected based on specific selection criteria.

Processing example with a data group and an object selection parameter

Using the CMPOBJA command, let us assume you want to compare the objects defined to data group DG1. For simplicity, all candidate objects in this example are defined to library LIBX.

Table 50 lists all candidate objects on your system.

Next, Table 51 represents the object selectors based on the data group object entry configuration for data group DG1. Objects are evaluated against data group entries in the same order of precedence used by replication processes.

Table 50. Candidate objects on system

Object Library Object type

ABC LIBX *FILE

AB LIBX *SBSD

A LIBX *OUTQ

DEF LIBX *PGM

DE LIBX *DTAARA

D LIBX *CMD

Table 51. Object selectors from data group entries for data group DG1

Order Processed  Object  Library  Object type  Include or omit
3                A*      LIBX     *ALL         *INCLUDE
2                ABC*    LIBX     *FILE        *OMIT
1                DEF     LIBX     *JOBQ        *INCLUDE

The object selectors from the data group subset the candidate object list, resulting in the list of objects defined to the data group shown in Table 52. This list is internal to MIMIX and not visible to users.

Note: Although job queue DEF in library LIBX did not appear in Table 50, it would be added to the list of candidate objects when you specify a data group for some commands that support object selection. These commands are required to identify or report candidate objects that do not exist.

Table 52. Objects selected by data group DG1

Object  Library  Object type
A       LIBX     *OUTQ
AB      LIBX     *SBSD
DEF     LIBX     *JOBQ

Perhaps you now want to include or omit specific objects from the filtered candidate objects listed in Table 52. Table 53 shows the object selectors to be processed based on the values specified on the object selection parameter. These object selectors serve as an additional filter on the candidate objects.

Table 53. Object selectors for CMPOBJA object selection parameter

Order Processed  Object  Library  Object type  Include or omit
1                *ALL    LIBX     *OUTQ        *INCLUDE
2                *ALL    LIBX     *SBSD        *INCLUDE
3                *ALL    LIBX     *JOBQ        *OMIT

The objects compared by the CMPOBJA command are shown in Table 54. These are the result of the candidate objects selected by the data group (Table 52) that were subsequently filtered by the object selectors specified for the Object parameter on the CMPOBJA command (Table 53).

Table 54. Resultant list of objects to be processed

Object  Library  Object type
A       LIBX     *OUTQ
AB      LIBX     *SBSD

In this example, the CMPOBJA command is used to compare a set of objects. The input source is a selection parameter. No data group is specified.
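The two-stage filtering in the first example, where data group entries subset the candidate objects (Table 52) and the command's object selectors then filter that result (Table 54), can be modeled with a short Python sketch. The tuples mirror Tables 50 through 54; the helper functions, and the use of `fnmatch` to approximate generic names, are illustrative assumptions, not MIMIX code.

```python
from fnmatch import fnmatch

def matches(selector, obj):
    # A selector matches when the name (specific or generic), library,
    # and object type all match the candidate object.
    name_pat, lib, otype, _ = selector
    name, olib, ot = obj
    name_ok = (name_pat == "*ALL") or fnmatch(name, name_pat)
    return name_ok and olib == lib and otype in ("*ALL", ot)

def apply_selectors(objects, selectors):
    # Selectors are given in order of precedence; the first matching
    # selector decides include or omit, and unmatched objects drop out.
    result = []
    for obj in objects:
        for sel in selectors:
            if matches(sel, obj):
                if sel[3] == "*INCLUDE":
                    result.append(obj)
                break
    return result

# Candidate objects (Table 50), plus job queue DEF added as a candidate
# because a data group was specified (see the Note above).
candidates = [("ABC", "LIBX", "*FILE"), ("AB", "LIBX", "*SBSD"),
              ("A", "LIBX", "*OUTQ"), ("DEF", "LIBX", "*PGM"),
              ("DE", "LIBX", "*DTAARA"), ("D", "LIBX", "*CMD"),
              ("DEF", "LIBX", "*JOBQ")]

# Data group object entries (Table 51), in order processed.
dg_entries = [("DEF", "LIBX", "*JOBQ", "*INCLUDE"),
              ("ABC*", "LIBX", "*FILE", "*OMIT"),
              ("A*", "LIBX", "*ALL", "*INCLUDE")]

# Object selectors on the CMPOBJA Object parameter (Table 53).
cmd_selectors = [("*ALL", "LIBX", "*OUTQ", "*INCLUDE"),
                 ("*ALL", "LIBX", "*SBSD", "*INCLUDE"),
                 ("*ALL", "LIBX", "*JOBQ", "*OMIT")]

stage1 = apply_selectors(candidates, dg_entries)    # Table 52: A, AB, DEF
stage2 = apply_selectors(stage1, cmd_selectors)     # Table 54: A, AB
```

Running the two stages in sequence reproduces the resultant list of Table 54.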


The data in the following tables show how candidate objects would be processed in order to achieve a resultant list of objects.

Table 55 lists all the candidate objects on your system.

Table 56 represents the object selectors chosen on the object selection parameter. The sequence column identifies the order in which object selectors were entered. The object selectors serve as filters to the candidate objects listed in Table 55.

The last object selector entered on the command is the first one used when determining whether or not an object matches a selector. Thus, generic object selectors with the broadest scope, such as A*, should be specified ahead of more specific generic entries, such as ABC*. Specific entries should be specified last.

Table 57 illustrates how the candidate objects are selected.

Table 55. Candidate objects on system

Object Library Object type

ABC LIBX *FILE

AB LIBX *SBSD

A LIBX *OUTQ

DEFG LIBX *PGM

DEF LIBX *PGM

DE LIBX *DTAARA

D LIBX *CMD

Table 56. Object selectors entered on CMPOBJA selection parameter

Sequence Entered  Object  Library  Object type  Include or omit

1 A* LIBX *ALL *INCLUDE

2 D* LIBX *ALL *INCLUDE

3 ABC* LIBX *ALL *OMIT

4 *ALL LIBX *PGM *OMIT

5 DEFG LIBX *PGM *INCLUDE

Table 57. Candidate objects selected by object selectors

Sequence Processed  Object  Library  Object type  Include or omit  Selected candidate objects
5                   DEFG    LIBX     *PGM         *INCLUDE         DEFG
4                   *ALL    LIBX     *PGM         *OMIT            DEF
3                   ABC*    LIBX     *ALL         *OMIT            ABC
2                   D*      LIBX     *ALL         *INCLUDE         D, DE
1                   A*      LIBX     *ALL         *INCLUDE         A, AB

Table 58 represents the included objects from Table 57. This filtered set of candidate objects is the resultant list of objects to be processed by the CMPOBJA command.

Table 58. Resultant list of objects to be processed

Object  Library  Object type
A       LIBX     *OUTQ
AB      LIBX     *SBSD
D       LIBX     *CMD
DE      LIBX     *DTAARA
DEFG    LIBX     *PGM

Example subtree

In the following graphics, the shaded area shows the objects identified by the combination of the Object path name and Subtree elements of the Object parameter for an IFS command. Circled objects represent the final list of objects selected for processing.
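The precedence rule, the last selector entered is the first one checked and the first match decides include or omit, can be sketched in Python against the data of Tables 55 and 56. The helper names are hypothetical and `fnmatch` approximates generic name matching; this is an illustration, not MIMIX code.

```python
from fnmatch import fnmatch

# Candidate objects on the system (Table 55): (name, library, type).
candidates = [("ABC", "LIBX", "*FILE"), ("AB", "LIBX", "*SBSD"),
              ("A", "LIBX", "*OUTQ"), ("DEFG", "LIBX", "*PGM"),
              ("DEF", "LIBX", "*PGM"), ("DE", "LIBX", "*DTAARA"),
              ("D", "LIBX", "*CMD")]

# Object selectors in the sequence ENTERED on the command (Table 56).
selectors = [("A*", "LIBX", "*ALL", "*INCLUDE"),
             ("D*", "LIBX", "*ALL", "*INCLUDE"),
             ("ABC*", "LIBX", "*ALL", "*OMIT"),
             ("*ALL", "LIBX", "*PGM", "*OMIT"),
             ("DEFG", "LIBX", "*PGM", "*INCLUDE")]

def matches(selector, obj):
    name_pat, lib, otype, _ = selector
    name, olib, ot = obj
    name_ok = (name_pat == "*ALL") or fnmatch(name, name_pat)
    return name_ok and olib == lib and otype in ("*ALL", ot)

def resultant(candidates, selectors):
    selected = []
    for obj in candidates:
        # The last selector entered is the first one checked, and the
        # first matching selector decides include or omit.
        for sel in reversed(selectors):
            if matches(sel, obj):
                if sel[3] == "*INCLUDE":
                    selected.append(obj)
                break
    return sorted(selected)

print(resultant(candidates, selectors))
# [('A', 'LIBX', '*OUTQ'), ('AB', 'LIBX', '*SBSD'), ('D', 'LIBX', '*CMD'),
#  ('DE', 'LIBX', '*DTAARA'), ('DEFG', 'LIBX', '*PGM')]  -- Table 58
```

Note how the specific entry DEFG, entered last, is checked first and rescues DEFG from the broader *PGM omit.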


Figure 26 illustrates a path name value of /corporate/accounting, a subtree specification of *ALL, a pattern value of *ALL, and an object type of *ALL. The candidate objects selected include /corporate/accounting and all descendants.

Figure 26. Directory of /corporate/accounting/

Figure 27 shows a path name of /corporate/accounting/*, a subtree specification of *NONE, a pattern value of *ALL, and an object type of *ALL. In this case, no additional filtering is performed on the objects identified by the path and subtree. The candidate objects selected consist of the specified objects only.

Figure 27. Subtree *NONE for /corporate/accounting/*


Figure 28 displays a path name of /corporate/accounting/*, a subtree specification of *ALL, a pattern value of *ALL, and an object type of *ALL. All descendants of /corporate/accounting/* are selected.

Figure 28. Subtree *ALL for /corporate/accounting/*


Figure 29 is a subset of Figure 28. Figure 29 shows a path name of /corporate/accounting, a subtree specification of *NONE, a pattern value of *ALL, and an object type of *ALL, where only the specified directory is selected.

Figure 29. Subtree *NONE for /corporate/accounting

Example Name pattern

The Name pattern element acts as a filter on the last component of the object path name. Figure 30 specifies a path name of /corporate/accounting, a subtree specification of *ALL, a pattern value of $*, and an object type of *ALL. In this scenario, only those candidate objects which match the generic pattern value ($123, $236, and $895) are selected for processing.

Figure 30. Pattern $* for /corporate/accounting

Example subtree for IFS objects

In the following graphic, the shaded areas show file systems containing IFS objects. When selecting objects in file systems that contain IFS objects, only the objects in the file system specified will be included. The non-generic part of a path name indicates the file system to be searched. Object selection does not cross file system boundaries when processing subtrees with IFS objects.
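This boundary behavior is analogous to pruning a directory walk at file system mount points, as the Unix `find -xdev` option does. The following Python sketch of that general idea is an illustration only, not MIMIX code; it uses `st_dev` to detect when a subdirectory belongs to a different file system.

```python
import os

def walk_within_file_system(root):
    """Yield root's directories and files, but do not descend into a
    directory that resides on a different file system (different st_dev)."""
    root_dev = os.stat(root).st_dev
    for dirpath, dirnames, filenames in os.walk(root):
        # Prune subdirectories that are mount points of another file system,
        # so the walk never crosses a file system boundary.
        dirnames[:] = [d for d in dirnames
                       if os.stat(os.path.join(dirpath, d)).st_dev == root_dev]
        yield dirpath
        for name in filenames:
            yield os.path.join(dirpath, name)
```

Pruning `dirnames` in place is what keeps `os.walk` from descending further, mirroring how subtree selection stops at a file system boundary.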


Figure 31 illustrates a directory with a subtree that contains IFS objects. The shaded areas are the file systems. Table 59 contains examples showing what file systems would be selected with the path names specified and a subtree specification of *ALL.

Figure 31. Directory with a subtree containing IFS objects.


Table 59. Examples of specified paths and objects selected for Figure 31

Path specified  File system                                Objects selected
/qsy*           Root file system                           /qsyabc
/PARIS/*        Root file system in independent ASP PARIS  /PARIS/qsyabc
/PARIS*         Root file system                           None


Report types and output formats

The following compare commands support output in spooled files and in output files (outfiles): the Compare Attributes commands (CMPFILA, CMPOBJA, CMPIFSA, CMPDLOA), the Compare Record Count (CMPRCDCNT) command, the Compare File Data (CMPFILDTA) command, and the Check DG File Entries (CHKDGFE) command.

The spooled output is a human-readable print format that is intended to be delivered as a report. The output file, on the other hand, is primarily intended for automated purposes such as automatic synchronization. It is also a format that is easily processed using SQL queries.

The level of information in the output is determined by the value specified on the Report type parameter. These values vary by command. For the CMPFILA, CMPOBJA, CMPIFSA, and CMPDLOA commands, the levels of output available are *DIF, *SUMMARY, and *ALL. The report type of *DIF includes information on objects with detected differences. A report type of *SUMMARY provides a summary of all objects compared as well as an object-level indication whether differences were detected. *SUMMARY does not, however, include details about specific attribute differences. Specifying *ALL for the report type will provide you with information found on both *DIF and *SUMMARY reports.

The CMPRCDCNT command supports the *DIF and *ALL report types. The report type of *DIF includes information on objects with detected differences. Specifying *ALL for the report type will provide you with information found on all objects and attributes that were compared.

The CMPFILDTA command supports the *DIF and *ALL report types, as well as *RRN. The *RRN value allows you to output, using the MXCMPFILR outfile format, the relative record number of the first 1,000 objects that failed to compare. Using this value can help resolve situations where a discrepancy is known to exist, but you are unsure which system contains the correct data. In this case, the *RRN value provides information that enables you to display the specific records on the two systems and to determine the system on which the file should be repaired.

Spooled files

The spooled output is generated when a value of *PRINT is specified on the Output parameter. The spooled output consists of four main sections: the input or header section, the object selection list section, the differences section, and the summary section.

First, the header section of the spooled report includes all of the input values specified on the command, including the data group value (DGDFN), comparison level (CMPLVL), report type (RPTTYPE), attributes to compare (CMPATR), actual attributes compared, number of files, objects, IFS objects or DLOs compared, and number of detected differences. It also provides a legend that describes the special values used throughout the report.


The second section of the report is the object selection list. This section lists all of the object selection entries specified on the comparison command. Similar to the header section, it provides details on the input values specified on the command.

The detail section is the third section of the report, and provides details on the objects and attributes compared. The level of detail in this section is determined by the report type specified on the command. A report type value of *ALL will list all objects compared, and will begin with a summary status that indicates whether or not differences were detected. The summary row indicates the overall status of the object compared. Following the summary row, each attribute compared is listed—along with the status of the attribute and the attribute value. In the event the attribute compared is an indicator, a special value of *INDONLY will be displayed in the value columns.

A report type value of *DIF will list details only for those objects with detected attribute differences. A value of *SUMMARY will not include the detail section for any object.

The fourth section of the report is the summary, which provides a one row summary for each object compared. Each row includes an indicator that indicates whether or not attribute differences were detected.

Outfiles

The output file is generated when a value of *OUTFILE is specified on the Output parameter. Similar to the spooled output, the level of detail in the output file depends on the value specified on the Report type parameter.

Each command is shipped with an outfile template that uses a normalized database to deliver a self-defined record, or row, for every attribute you compare. Key information, including the attribute type, data group name, timestamp, command name, and system 1 and system 2 values, helps define each row. A summary row precedes the attribute rows. The normalized database feature ensures that new object attributes can be added to the audit capabilities without disruption to current automation processing.

The template files for the various commands are located in the MIMIX product library.
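The row structure described above, a summary row for each object followed by one row per attribute, lends itself to simple grouping logic in automation. The sketch below assumes a hypothetical simplified row layout; the field names and values shown are illustrative, not the actual outfile column names.

```python
# Hypothetical attribute rows from a compare command outfile: one summary
# row per object followed by one row per attribute compared. The keys
# "object", "attribute", "status" and the marker "*SUMMARY" are
# illustrative assumptions, not the real outfile schema.
rows = [
    {"object": "ABC", "attribute": "*SUMMARY", "status": "*NE"},
    {"object": "ABC", "attribute": "OWNER",    "status": "*EQ"},
    {"object": "ABC", "attribute": "AUT",      "status": "*NE"},
    {"object": "DEF", "attribute": "*SUMMARY", "status": "*EQ"},
    {"object": "DEF", "attribute": "OWNER",    "status": "*EQ"},
]

# Collect, per object, the attributes that compared not equal (*NE);
# summary rows are skipped because the attribute rows carry the detail.
differences = {}
for row in rows:
    if row["attribute"] != "*SUMMARY" and row["status"] == "*NE":
        differences.setdefault(row["object"], []).append(row["attribute"])

print(differences)   # {'ABC': ['AUT']}
```

Because every attribute is a self-defined row, automation like this keeps working when new attributes are added, which is the point of the normalized layout.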


Chapter 18

Comparing attributes

This chapter describes the commands that compare attributes: Compare File Attributes (CMPFILA), Compare Object Attributes (CMPOBJA), Compare IFS Attributes (CMPIFSA), and Compare DLO Attributes (CMPDLOA). These commands are designed to audit the attributes, or characteristics, of the objects within your environment and report on the status of replicated objects. These commands are collectively referred to as the compare attributes commands.

You may already be using the compare attributes commands when they are called by audit functions within MIMIX AutoGuard. When used in combination with the automatic recovery features in MIMIX AutoGuard, the compare attributes commands provide robust functionality to help you determine whether your system is in a state to ensure a successful rollover for planned events or failover for unplanned events.

The topics in this chapter include:

• “About the Compare Attributes commands” on page 420 describes the unique features of the Compare Attributes commands (CMPFILA, CMPOBJA, CMPIFSA, and CMPDLOA).

• “Comparing file and member attributes” on page 425 includes the procedure to compare the attributes of files and members.

• “Comparing object attributes” on page 428 includes the procedure to compare object attributes.

• “Comparing IFS object attributes” on page 431 includes the procedure to compare IFS object attributes.

• “Comparing DLO attributes” on page 434 includes the procedure to compare DLO attributes.

About the Compare Attributes commands

With the Compare Attributes commands (CMPFILA, CMPOBJA, CMPIFSA, and CMPDLOA), you have significant flexibility in selecting objects for comparison, the attributes to be compared, and the format in which the resulting report is created.

Each command generates a candidate list of objects on both systems and can detect objects missing from either system. For each object compared, the command checks for the existence of the object on the source and target systems and then compares the attributes specified on the command. The results from the comparisons performed are placed in a report.

Each command offers several unique features as well.

• CMPFILA provides significant capability to audit file-based attributes such as triggers, constraints, ownership, authority, database relationships, and the like. Although the CMPFILA command does not specifically compare the data within the database file, it does check attributes such as record counts, deleted records, and others that check the size of data within a file. Comparing these attributes provides you with assurance that files are most likely synchronized.

• The CMPOBJA command supports many attributes important to other library-based objects, including extended attributes. Extended attributes are attributes unique to given objects, such as auto-start job entries for subsystems.

• The CMPIFSA and CMPDLOA commands provide enhanced audit capability for IFS objects and DLOs, respectively.

Choices for selecting objects to compare

You can select objects to compare by using a data group, the object selection parameters, or both. The compare attributes commands do not require active data groups to run.

• By data group only: If you specify only a data group, all of the objects of the same class as the command that are within the name space configured for the data group are compared. For example, specifying a data group on the CMPIFSA command would compare all IFS objects in the name space created by data group IFS entries associated with the data group.

• By object selection parameters only: You can compare objects that are not replicated by a data group. By specifying *NONE for the data group and specifying objects on the object selection parameters, you define a name space—the library for CMPFILA or CMPOBJA, or the directory path for CMPIFSA or CMPDLOA. Detailed information about object selection is available in “Object selection for Compare and Synchronize commands” on page 399.

• By data group and object selection parameters: When you specify a data group name as well as values on the object selection parameters, the values specified in object selection parameters act as a filter for the items defined to the data group.

Unique parameters

The following parameters for object selection are unique to the compare attributes commands and allow you to specify an additional level of detail when comparing objects or files.

Unique File and Object elements: The following are unique elements on the File parameter (CMPFILA command) and Objects parameter (CMPOBJA command):

• Member: On the CMPFILA command, the value specified on the Member element is only used when *MBR is also specified on the Comparison level parameter.

• Object attribute: The Object attribute element enables you to select particular characteristics of an object or file, and provides a level of filtering. For details, see “CMPFILA supported object attributes for *FILE objects” on page 423 and “CMPOBJA supported object attributes for *FILE objects” on page 423.

System 2: The System 2 parameter identifies the remote system name, and represents the system to which objects on the local system are compared. This parameter is ignored when a data group is specified, since the system 2 information is derived from the data group. A value is required if no data group is specified.

Comparison level (CMPFILA only): The Comparison level parameter indicates whether attributes are compared at the file level or at the member level.

System 1 ASP group and System 2 ASP group (CMPFILA and CMPOBJA only): The System 1 ASP group and System 2 ASP group parameters identify the name of the auxiliary storage pool (ASP) group where objects configured for replication may reside. The ASP group name is the name of the primary ASP device within the ASP group. This parameter is ignored when a data group is specified.

Choices for selecting attributes to compare

The Attributes to compare parameter allows you to select which combination of attributes to compare.

Each compare attribute command supports an extensive list of attributes. Each command provides the ability to select pre-determined sets of attributes (basic or extended), all supported attributes, as well as any other unique combination of attributes that you require.

The basic set of attributes is intended to compare attributes that provide an indication that the objects compared are the same, while avoiding attributes that may be different but do not provide a valid indication that objects are not synchronized, such as the create timestamp (CRTTSP) attribute. Some objects, for example, cannot be replicated using IBM's save and restore technology. Therefore, the creation date established on the source system is not maintained on the target system during the replication process. The comparison commands take this factor into consideration and check the creation date for only those objects whose values are retained during replication.

The extended set of attributes includes the basic set of attributes and some additional attributes.

The following topics list the supported attributes for each command:

• “Attributes compared and expected results - #FILATR, #FILATRMBR audits” on page 591

• “Attributes compared and expected results - #OBJATR audit” on page 596

• “Attributes compared and expected results - #IFSATR audit” on page 604

• “Attributes compared and expected results - #DLOATR audit” on page 606

All comparison attributes supported by a specific compare attributes command may not be applicable to all object types supported by that command. For example, CMPOBJA supports a large number of object types and related comparison attributes, and in many cases a specific comparison attribute is valid only for a particular object type.

Comparison attributes not supported by a given object type are ignored. For example, auto-start job entries is a valid comparison attribute for subsystem descriptions (*SBSD); for all other object types selected by the request, the auto-start job entry attribute is ignored.

If a data group is specified on a compare request, configuration data is used when comparing objects that are identified for replication through the system journal. If an object’s configured object auditing value (OBJAUD) is *NONE, its attribute changes are not replicated. When differences are detected on attributes of such an object, they are reported as *EC (equal configuration) instead of being reported as *NE (not equal).

For *FILE objects configured for replication through the system journal and configured to omit T-ZC journal entries, also see “Omit content (OMTDTA) and comparison commands” on page 389.

CMPFILA supported object attributes for *FILE objects

When you specify a data group to compare, the CMPFILA command obtains information from the configured data group entries for all PF and LF files and their subtypes. Those files that are within the name space created by data group entries are compared.

Table 60 lists the extended attributes for objects of type *FILE that are supported as values on the Object attribute element.

CMPOBJA supported object attributes for *FILE objects

When you specify a data group to compare, the CMPOBJA command obtains data group information from the data group object entries. Those objects defined to the data group object entries are compared.

The default value on the Object attribute element is *ALL, which represents the entire list of supported attributes. Any value is supported, but a list of recommended attributes is available in the online help.

Table 60. CMPFILA supported extended attributes for *FILE objects

Object attribute Description

*ALL All physical and logical file types are selected for processing

LF Logical file

LF38 Files of type LF38

PF Physical file types, including PF, PF-SRC, and PF-DTA

PF-DTA Files of type PF-DTA

PF-SRC Files of type PF-SRC

PF38 Files of type PF38, including PF38, PF38-SRC, and PF38-DTA

PF38-DTA Files of type PF38-DTA

PF38-SRC Files of type PF38-SRC


Comparing file and member attributes

You can compare file attributes to ensure that files and members needed for replication exist on both systems, or any time you need to verify that files are synchronized between systems. You can optionally specify that results of the comparison are placed in an outfile.

Note: If you have automation programs monitoring escape messages for differences in file attributes, be aware that differences due to active replication (Step 16) are signaled via a new difference indicator (*UA) and escape message. See the auditing and reporting topics in this book.

To compare the attributes of files and members, do the following:

1. From the MIMIX Intermediate Main Menu, select option 12 (Compare, verify, and synchronize menu) and press Enter.

2. From the MIMIX Compare, Verify, and Synchronize menu, select option 1 (Compare file attributes) and press Enter.

3. The Compare File Attributes (CMPFILA) command appears. At the Data group definition prompts, do one of the following:

• To compare attributes for all files defined by the data group file entries for a particular data group definition, specify the data group name and skip to Step 6.

• To compare files by name only, specify *NONE and continue with the next step.

• To compare a subset of files defined to a data group, specify the data group name and continue with the next step.

4. At the File prompts, you can specify elements for one or more object selectors that either identify files to compare or that act as filters to the files defined to the data group indicated in Step 3. For more information, see “Object selection for Compare and Synchronize commands” on page 399.

You can specify as many as 300 object selectors by using the + for more prompt. For each selector, do the following:

a. At the File and library prompts, specify the name or the generic value you want.

b. At the Member prompt, accept *ALL or specify a member name to compare a particular member within a file.

c. At the Object attribute prompt, accept *ALL to compare the entire list of supported attributes or press F4 to see a valid list of attributes.

d. At the Include or omit prompt, specify the value you want.

e. At the System 2 file and System 2 library prompts, if the file and library names on system 2 are equal to system 1, accept the defaults. Otherwise, specify the name of the file and library to which files on the local system are compared.

Note: The System 2 file and System 2 library values are ignored if a data group is specified on the Data group definition prompts.


f. Press Enter.

5. The System 2 parameter prompt appears if you are comparing files not defined to a data group. If necessary, specify the name of the remote system to which files on the local system are compared.

6. At the Comparison level prompt, accept the default to compare files at a file level only. Otherwise, specify *MBR to compare files at a member level.

Note: If *FILE is specified, the Member prompt is ignored (see Step 4b).

7. At the Attributes to compare prompt, accept *BASIC to compare a pre-determined set of attributes based on whether the comparison is at a file or member level or press F4 to see a valid list of attributes.

8. At the Attributes to omit prompt, accept *NONE to compare all attributes specified in Step 7, or enter the attributes to exclude from the comparison. Press F4 to see a valid list of attributes.

9. At the System 1 ASP group prompt, accept the default if no objects from any ASP group are to be compared on system 1. Otherwise, specify the name of the ASP group that contains objects to be compared on system 1.

Note: This parameter is ignored when a data group definition is specified.

10. At the System 2 ASP group prompt, accept the default if no objects from any ASP group are to be compared on system 2. Otherwise, specify the name of the ASP group that contains objects to be compared on system 2.

Note: This parameter is ignored when a data group definition is specified.

11. At the Report type prompt, specify the level of detail for the output report.

12. At the Output prompt, do one of the following:

• To generate print output, accept *PRINT and press Enter.

• To generate both print output and an outfile, specify *BOTH and press Enter. Skip to Step 14.

• To generate an outfile, specify *OUTFILE and press Enter. Skip to Step 14.

13. The User data prompt appears if you selected *PRINT or *BOTH in Step 12. Accept the default to use the command name to identify the spooled output or specify a unique name. Skip to Step 18.

14. At the File to receive output prompts, specify the file and library to receive the output. (Press F1 (Help) to see the name of the supplied database file.)

15. At the Output member options prompts, do the following:

a. At the Member to receive output prompt, specify the name of the database file member to receive the output of the command.

b. At the Replace or add prompt, specify whether new records should replace existing file members or be added to the existing list.

16. At the Maximum replication lag prompt, specify the maximum amount of time between when a file in the data group changes and when replication of the change is expected to be complete, or accept *DFT to use the default maximum time of 300 seconds (5 minutes). You can also specify *NONE, which indicates that comparisons should occur without consideration for replication in progress.

Note: This parameter is only valid when a data group is specified in Step 3.

17. At the Object difference messages prompt, specify whether you want detail messages placed in the job log. The value *INCLUDE places detail messages in the job log, and is the default used outside of shipped rules. When used as part of shipped rules, the default value is *OMIT since the results are already placed in an outfile.

18. At the Submit to batch prompt, do one of the following:

• If you do not want to submit the job for batch processing, specify *NO and press Enter to start the comparison.

• To submit the job for batch processing, accept the default. Press Enter and continue with the next step.

19. At the Job description and Library prompts, specify the name and library of the job description used to submit the batch request.

20. At the Job name prompt, accept *CMD to use the command name to identify the job or specify a simple name.

21. To start the comparison, press Enter.
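Entered directly from a command line rather than through the menus, such a request might look as follows. This is an illustrative sketch only: the parameter keywords, the data group and library names, and the MIMIX installation library are assumptions; prompt the command with F4 to see the actual keywords and defaults.

```
/* Illustrative only: compare member-level attributes for all files    */
/* defined to data group MYDG, reporting only differences to an        */
/* outfile. Keyword names and library names are assumptions.           */
MIMIX/CMPFILA DGDFN(MYDG SYSTEM1 SYSTEM2) +
              CMPLVL(*MBR) CMPATR(*BASIC) OMITATR(*NONE) +
              RPTTYPE(*DIF) OUTPUT(*OUTFILE) +
              OUTFILE(MYLIB/FILADIF) OUTMBR(FILADIF *REPLACE) +
              BATCH(*YES)
```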


Comparing object attributes

You can compare object attributes to ensure that objects needed for replication exist on both systems or any time you need to verify that objects are synchronized between systems. You can optionally specify that results of the comparison are placed in an outfile.

Note: If you have automation programs monitoring escape messages for differences in object attributes, be aware that differences due to active replication (Step 15) are signaled via a new difference indicator (*UA) and escape message. See the auditing and reporting topics in this book.

To compare the attributes of objects, do the following:

1. From the MIMIX Intermediate Main Menu, select option 12 (Compare, verify, and synchronize menu) and press Enter.

2. From the MIMIX Compare, Verify, and Synchronize menu, select option 2 (Compare object attributes) and press Enter.

3. The Compare Object Attributes (CMPOBJA) command appears. At the Data group definition prompts, do one of the following:

• To compare attributes for all objects defined by the data group object entries for a particular data group definition, specify the data group name and skip to Step 6.

• To compare objects by object name only, specify *NONE and continue with the next step.

• To compare a subset of objects defined to a data group, specify the data group name and continue with the next step.

4. At the Object prompts, you can specify elements for one or more object selectors that either identify objects to compare or that act as filters to the objects defined to the data group indicated in Step 3. For more information, see “Object selection for Compare and Synchronize commands” on page 399.

You can specify as many as 300 object selectors by using the + for more prompt. For each selector, do the following:

a. At the Object and library prompts, specify the name or the generic value you want.

b. At the Object type prompt, accept *ALL or specify a specific object type to compare.

c. At the Object attribute prompt, accept *ALL to compare the entire list of supported attributes or press F4 to see a valid list of attributes.

d. At the Include or omit prompt, specify the value you want.

e. At the System 2 object and System 2 library prompts, if the object and library names on system 2 are equal to system 1, accept the defaults. Otherwise, specify the name of the object and library to which objects on the local system are compared.

Note: The System 2 object and System 2 library values are ignored if a data group is specified on the Data group definition prompts.

f. Press Enter.

5. The System 2 parameter prompt appears if you are comparing objects not defined to a data group. If necessary, specify the name of the remote system to which objects on the local system are compared.

6. At the Attributes to compare prompt, accept *BASIC to compare a pre-determined set of attributes or press F4 to see a valid list of attributes.

7. At the Attributes to omit prompt, accept *NONE to compare all attributes specified in Step 6, or enter the attributes to exclude from the comparison. Press F4 to see a valid list of attributes.

8. At the System 1 ASP group prompt, accept the default if no objects from any ASP group are to be compared on system 1. Otherwise, specify the name of the ASP group that contains objects to be compared on system 1.

Note: This parameter is ignored when a data group definition is specified.

9. At the System 2 ASP group prompt, accept the default if no objects from any ASP group are to be compared on system 2. Otherwise, specify the name of the ASP group that contains objects to be compared on system 2.

Note: This parameter is ignored when a data group definition is specified.

10. At the Report type prompt, specify the level of detail for the output report.

11. At the Output prompt, do one of the following:

• To generate print output, accept *PRINT and press Enter.

• To generate both print output and an outfile, specify *BOTH and press Enter. Skip to Step 13.

• To generate an outfile, specify *OUTFILE and press Enter. Skip to Step 13.

12. The User data prompt appears if you selected *PRINT or *BOTH in Step 11. Accept the default to use the command name to identify the spooled output or specify a unique name. Skip to Step 17.

13. At the File to receive output prompts, specify the file and library to receive the output. (Press F1 (Help) to see the name of the supplied database file.)

14. At the Output member options prompts, do the following:

a. At the Member to receive output prompt, specify the name of the database file member to receive the output of the command.

b. At the Replace or add prompt, specify whether new records should replace existing file members or be added to the existing list.

15. At the Maximum replication lag prompt, specify the maximum amount of time between when an object in the data group changes and when replication of the change is expected to be complete, or accept *DFT to use the default maximum time of 300 seconds (5 minutes). You can also specify *NONE, which indicates that comparisons should occur without consideration for replication in progress.

Note: This parameter is only valid when a data group is specified in Step 3.


16. At the Object difference messages prompt, specify whether you want detail messages placed in the job log. The value *INCLUDE places detail messages in the job log, and is the default used outside of shipped rules. When used as part of shipped rules, the default value is *OMIT since the results are already placed in an outfile.

17. At the Submit to batch prompt, do one of the following:

• If you do not want to submit the job for batch processing, specify *NO and press Enter to start the comparison.

• To submit the job for batch processing, accept the default. Press Enter and continue with the next step.

18. At the Job description and Library prompts, specify the name and library of the job description used to submit the batch request.

19. At the Job name prompt, accept *CMD to use the command name to identify the job or specify a simple name.

20. To start the comparison, press Enter.
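The same comparison can be requested without a data group by naming the objects and the remote system directly. A hedged sketch, in which the parameter keywords, the object selector element order, and all names are assumptions; confirm them by prompting the command with F4:

```
/* Illustrative only: compare basic attributes of all *PGM objects in  */
/* library APPLIB against SYSTEM2, printing only the differences.      */
/* Keywords, selector element order, and names are assumptions.        */
MIMIX/CMPOBJA DGDFN(*NONE) +
              OBJ((*ALL APPLIB *PGM *ALL *INCLUDE)) +
              SYS2(SYSTEM2) +
              CMPATR(*BASIC) RPTTYPE(*DIF) OUTPUT(*PRINT)
```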


Comparing IFS object attributes

You can compare IFS object attributes to ensure that IFS objects needed for replication exist on both systems or any time you need to verify that IFS objects are synchronized between systems. You can optionally specify that results of the comparison are placed in an outfile.

Note: If you have automation programs monitoring for differences in IFS object attributes, be aware that differences due to active replication (Step 13) are signaled via a new difference indicator (*UA) and escape message. See the auditing and reporting topics in this book.

To compare the attributes of IFS objects, do the following:

1. From the MIMIX Intermediate Main Menu, select option 12 (Compare, verify, and synchronize menu) and press Enter.

2. From the MIMIX Compare, Verify, and Synchronize menu, select option 3 (Compare IFS attributes) and press Enter.

3. The Compare IFS Attributes (CMPIFSA) command appears. At the Data group definition prompts, do one of the following:

• To compare attributes for all IFS objects defined by the data group IFS object entries for a particular data group definition, specify the data group name and skip to Step 6.

• To compare IFS objects by object path name only, specify *NONE and continue with the next step.

• To compare a subset of IFS objects defined to a data group, specify the data group name and continue with the next step.

4. At the IFS objects prompts, you can specify elements for one or more object selectors that either identify IFS objects to compare or that act as filters to the IFS objects defined to the data group indicated in Step 3. For more information, see “Object selection for Compare and Synchronize commands” on page 399.

You can specify as many as 300 object selectors by using the + for more prompt. For each selector, do the following:

a. At the Object path name prompt, accept *ALL or specify the name or the generic value you want.

b. At the Directory subtree prompt, accept *NONE or specify *ALL to define the scope of IFS objects to be processed.

c. At the Name pattern prompt, specify a value if you want to place an additional filter on the last component of the IFS object path name.

Note: The *ALL default is not valid if a data group is specified on the Data group definition prompts.

d. At the Object type prompt, accept *ALL or specify a specific IFS object type to compare.

e. At the Include or omit prompt, specify the value you want.


f. At the System 2 object path name and System 2 name pattern prompts, if the IFS object path name and name pattern on system 2 are equal to system 1, accept the defaults. Otherwise, specify the name of the path name and pattern to which IFS objects on the local system are compared.

Note: The System 2 object path name and System 2 name pattern values are ignored if a data group is specified on the Data group definition prompts.

g. Press Enter.

5. The System 2 parameter prompt appears if you are comparing IFS objects not defined to a data group. If necessary, specify the name of the remote system to which IFS objects on the local system are compared.

6. At the Attributes to compare prompt, accept *BASIC to compare a pre-determined set of attributes or press F4 to see a valid list of attributes.

7. At the Attributes to omit prompt, accept *NONE to compare all attributes specified in Step 6, or enter the attributes to exclude from the comparison. Press F4 to see a valid list of attributes.

8. At the Report type prompt, specify the level of detail for the output report.

9. At the Output prompt, do one of the following:

• To generate print output, accept *PRINT and press Enter.

• To generate both print output and an outfile, specify *BOTH and press Enter. Skip to Step 11.

• To generate an outfile, specify *OUTFILE and press Enter. Skip to Step 11.

10. The User data prompt appears if you selected *PRINT or *BOTH in Step 9. Accept the default to use the command name to identify the spooled output or specify a unique name. Skip to Step 15.

11. At the File to receive output prompts, specify the file and library to receive the output. (Press F1 (Help) to see the name of the supplied database file.)

12. At the Output member options prompts, do the following:

a. At the Member to receive output prompt, specify the name of the database file member to receive the output of the command.

b. At the Replace or add prompt, specify whether new records should replace existing file members or be added to the existing list.

13. At the Maximum replication lag prompt, specify the maximum amount of time between when an IFS object in the data group changes and when replication of the change is expected to be complete, or accept *DFT to use the default maximum time of 300 seconds (5 minutes). You can also specify *NONE, which indicates that comparisons should occur without consideration for replication in progress.

Note: This parameter is only valid when a data group is specified in Step 3.

14. At the Object difference messages prompt, specify whether you want detail messages placed in the job log. The value *INCLUDE places detail messages in the job log, and is the default used outside of shipped rules. When used as part of shipped rules, the default value is *OMIT since the results are already placed in an outfile.

15. At the Submit to batch prompt, do one of the following:

• If you do not want to submit the job for batch processing, specify *NO and press Enter to start the comparison.

• To submit the job for batch processing, accept the default. Press Enter and continue with the next step.

16. At the Job description and Library prompts, specify the name and library of the job description used to submit the batch request.

17. At the Job name prompt, accept *CMD to use the command name to identify the job or specify a simple name.

18. To start the comparison, press Enter.
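For IFS objects, a command-line request against a data group might look as follows. A hedged sketch: the parameter keywords, the selector element order (path name, subtree, pattern, type, include/omit), and all names are assumptions; prompt with F4 to confirm the actual syntax.

```
/* Illustrative only: compare basic attributes of the /orders subtree  */
/* of IFS objects defined to data group MYDG, writing differences to   */
/* an outfile. Keywords, element order, and names are assumptions.     */
MIMIX/CMPIFSA DGDFN(MYDG SYSTEM1 SYSTEM2) +
              OBJ(('/orders' *ALL *NONE *ALL *INCLUDE)) +
              CMPATR(*BASIC) RPTTYPE(*DIF) +
              OUTPUT(*OUTFILE) OUTFILE(MYLIB/IFSADIF)
```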


Comparing DLO attributes

You can compare DLO attributes to ensure that DLOs needed for replication exist on both systems or any time you need to verify that DLOs are synchronized between systems. You can optionally specify that results of the comparison are placed in an outfile.

Note: If you have automation programs monitoring escape messages for differences in DLO attributes, be aware that differences due to active replication (Step 13) are signaled via a new difference indicator (*UA) and escape message. See the auditing and reporting topics in this book.

To compare the attributes of DLOs, do the following:

1. From the MIMIX Intermediate Main Menu, select option 12 (Compare, verify, and synchronize menu) and press Enter.

2. From the MIMIX Compare, Verify, and Synchronize menu, select option 4 (Compare DLO attributes) and press Enter.

3. The Compare DLO Attributes (CMPDLOA) command appears. At the Data group definition prompts, do one of the following:

• To compare attributes for all DLOs defined by the data group DLO entries for a particular data group definition, specify the data group name and skip to Step 6.

• To compare DLOs by path name only, specify *NONE and continue with the next step.

• To compare a subset of DLOs defined to a data group, specify the data group name and continue with the next step.

4. At the Document library objects prompts, you can specify elements for one or more object selectors that either identify DLOs to compare or that act as filters to the DLOs defined to the data group indicated in Step 3. For more information, see “Object selection for Compare and Synchronize commands” on page 399.

You can specify as many as 300 object selectors by using the + for more prompt. For each selector, do the following:

a. At the DLO path name prompt, accept *ALL or specify the name or the generic value you want.

b. At the Folder subtree prompt, accept *NONE or specify *ALL to define the scope of DLOs to be processed.

c. At the Name pattern prompt, specify a value if you want to place an additional filter on the last component of the DLO path name.

Note: The *ALL default is not valid if a data group is specified on the Data group definition prompts.

d. At the DLO type prompt, accept *ALL or specify a specific DLO type to compare.

e. At the Owner prompt, accept *ALL or specify the owner of the DLO.


f. At the Include or omit prompt, specify the value you want.

g. At the System 2 DLO path name and System 2 DLO name pattern prompts, if the DLO path name and name pattern on system 2 are equal to system 1, accept the defaults. Otherwise, specify the name of the path name and pattern to which DLOs on the local system are compared.

Note: The System 2 DLO path name and System 2 DLO name pattern values are ignored if a data group is specified on the Data group definition prompts.

h. Press Enter.

5. The System 2 parameter prompt appears if you are comparing DLOs not defined to a data group. If necessary, specify the name of the remote system to which DLOs on the local system are compared.

6. At the Attributes to compare prompt, accept *BASIC to compare a pre-determined set of attributes or press F4 to see a valid list of attributes.

7. At the Attributes to omit prompt, accept *NONE to compare all attributes specified in Step 6, or enter the attributes to exclude from the comparison. Press F4 to see a valid list of attributes.

8. At the Report type prompt, specify the level of detail for the output report.

9. At the Output prompt, do one of the following:

• To generate print output, accept *PRINT and press Enter.

• To generate both print output and an outfile, specify *BOTH and press Enter. Skip to Step 11.

• To generate an outfile, specify *OUTFILE and press Enter. Skip to Step 11.

10. The User data prompt appears if you selected *PRINT or *BOTH in Step 9. Accept the default to use the command name to identify the spooled output or specify a unique name. Skip to Step 15.

11. At the File to receive output prompts, specify the file and library to receive the output. (Press F1 (Help) to see the name of the supplied database file.)

12. At the Output member options prompts, do the following:

a. At the Member to receive output prompt, specify the name of the database file member to receive the output of the command.

b. At the Replace or add prompt, specify whether new records should replace existing file members or be added to the existing list.

13. At the Maximum replication lag prompt, specify the maximum amount of time between when a DLO in the data group changes and when replication of the change is expected to be complete, or accept *DFT to use the default maximum time of 300 seconds (5 minutes). You can also specify *NONE, which indicates that comparisons should occur without consideration for replication in progress.

Note: This parameter is only valid when a data group is specified in Step 3.

14. At the Object difference messages prompt, specify whether you want detail messages placed in the job log. The value *INCLUDE places detail messages in the job log, and is the default used outside of shipped rules. When used as part of shipped rules, the default value is *OMIT since the results are already placed in an outfile.

15. At the Submit to batch prompt, do one of the following:

• If you do not want to submit the job for batch processing, specify *NO and press Enter to start the comparison.

• To submit the job for batch processing, accept the default. Press Enter and continue with the next step.

16. At the Job description and Library prompts, specify the name and library of the job description used to submit the batch request.

17. At the Job name prompt, accept *CMD to use the command name to identify the job or specify a simple name.

18. To start the comparison, press Enter.
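A DLO comparison by path name, without a data group, might be entered as follows. A hedged sketch: the parameter keywords, the selector element order (path name, subtree, pattern, type, owner, include/omit), and all names are assumptions; prompt with F4 to confirm.

```
/* Illustrative only: compare basic attributes of all DLOs in folder   */
/* ACCTG against SYSTEM2, printing the results. Keywords, element      */
/* order, and names are assumptions.                                   */
MIMIX/CMPDLOA DGDFN(*NONE) +
              DLO((ACCTG *NONE *ALL *ALL *ALL *INCLUDE)) +
              SYS2(SYSTEM2) +
              CMPATR(*BASIC) RPTTYPE(*DIF) OUTPUT(*PRINT)
```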


Chapter 19

Comparing file record counts and file member data

This chapter describes the features and capabilities of the Compare Record Counts (CMPRCDCNT) command and the Compare File Data (CMPFILDTA) command.

The topics in this chapter include:

• “Comparing file record counts” on page 437 describes the CMPRCDCNT command and provides a procedure for performing the comparison.

• “Significant features for comparing file member data” on page 440 identifies enhanced capabilities available for use when comparing file member data.

• “Considerations for using the CMPFILDTA command” on page 441 describes recommendations and restrictions of the command. This topic also describes considerations for security, use with firewalls, comparing records that are not allocated, as well as comparing records with unique keys, triggers, and constraints.

• “Specifying CMPFILDTA parameter values” on page 445 provides additional information about the parameters for selecting file members to compare and using the unique parameters of this command.

• “Advanced subset options for CMPFILDTA” on page 451 describes how to use the capability provided by the Advanced subset options (ADVSUBSET) parameter.

• “Ending CMPFILDTA requests” on page 454 describes how to end a CMPFILDTA request that is in progress and describes the results of ending the job.

• “Comparing file member data - basic procedure (non-active)” on page 455 describes how to compare file data in a data group that is not active.

• “Comparing and repairing file member data - basic procedure” on page 458 describes how to compare and repair file data in a data group that is not active.

• “Comparing and repairing file member data - members on hold (*HLDERR)” on page 461 describes how to compare and repair file members that are held due to error using active processing.

• “Comparing file member data using active processing technology” on page 464 describes how to use active processing to compare file member data.

• “Comparing file member data using subsetting options” on page 467 describes how to use the subset feature of the CMPFILDTA command to compare a portion of member data at one time.

Comparing file record counts

The Compare Record Counts (CMPRCDCNT) command allows you to compare the record counts of members of a set of physical files between two systems. This command compares the number of current records (*NBRCURRCDS) and the number of deleted records (*NBRDLTRCDS) for members of physical files that are defined for replication by an active data group. In resource-constrained environments, this capability provides a less-intensive means to gauge whether files are likely to be synchronized.

Note: Equal record counts suggest but do not guarantee that members are synchronized. To check for file data differences, use the Compare File Data (CMPFILDTA) command. To check for attribute differences, use the Compare File Attributes (CMPFILA) command.

Members to be processed must be defined to a data group that permits replication from a user journal. Journaling is required on the source system. User journal replication processes must be active when this command is used.

Members on both systems can be actively modified by applications and by MIMIX apply processes while this command is running.

For information about the results of a comparison, see “What differences were detected by #MBRRCDCNT” on page 583.

The #MBRRCDCNT audit calls the CMPRCDCNT command during its compare phase. Unlike other audits, the #MBRRCDCNT audit does not have an associated recovery phase. Differences detected by this audit appear as not recovered in the Audit Summary user interfaces. Any repairs must be undertaken manually, in one of the following ways:

• In MIMIX Availability Manager, repair actions are available for specific errors when viewing the output file for the audit.

• Run the #FILDTA audit for the data group to detect and correct problems.

• Run the Synchronize DG File Entry (SYNCDGFE) command to correct problems.

To compare file record counts

Do the following to compare record counts for an active data group:

1. From a command line, type installation_library/CMPRCDCNT and press F4 (Prompt).

2. The Compare Record Counts (CMPRCDCNT) display appears. At the Data group definition prompts, do one of the following:

• To compare data for all files defined by the data group file entries for a particular data group definition, specify the data group name and skip to Step 4.

• To compare a subset of files defined to a data group, specify the data group name and continue with the next step.

3. At the File prompts, you can specify elements for one or more object selectors to act as filters to the files defined to the data group indicated in Step 2. For more information, see “Object selection for Compare and Synchronize commands” on page 399.

You can specify as many as 300 object selectors by using the + for more prompt. For each selector, do the following:

a. At the File and library prompts, specify the name or the generic value you want.

b. At the Member prompt, accept *ALL or specify a member name to compare a particular member within a file.

c. At the Include or omit prompt, specify the value you want.

4. At the Report type prompt, do one of the following:

• If you want all compared objects to be included in the report, accept the default.

• If you only want objects with detected differences to be included in the report, specify *DIF.

5. At the Output prompt, do one of the following:

• To generate spooled output that is printed, accept the default, *PRINT. Press Enter and continue with the next step.

• To generate an outfile and spooled output that is printed, specify *BOTH. Press Enter and continue with the next step.

• If you do not want to generate output, specify *NONE. Press Enter and skip to Step 9.

• To generate an outfile, specify *OUTFILE. Press Enter and continue with the next step.

6. At the File to receive output prompts, specify the file and library to receive the output. (Press F1 (Help) to see the name of the supplied database file.)

7. At the Output member options prompts, do the following:

a. At the Member to receive output prompt, specify the name of the database file member to receive the output of the command.

b. At the Replace or add prompt, specify whether new records should replace existing file members or be added to the existing list.

8. At the Object difference messages prompt, specify whether you want detail messages placed in the job log. The value *INCLUDE places detail messages in the job log, and is the default used outside of shipped rules. When used as part of shipped rules, the default value is *OMIT since the results are already placed in an outfile.

9. At the Submit to batch prompt, do one of the following:

• If you do not want to submit the job for batch processing, specify *NO and press Enter to start the comparison.

• To submit the job for batch processing, accept the default. Press Enter and continue with the next step.

10. At the Job description and Library prompts, specify the name and library of the job description used to submit the batch request.

11. At the Job name prompt, accept *CMD to use the command name to identify the job or specify a simple name.

12. To start the comparison, press Enter.
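From a command line, a record-count comparison for one library of replicated files might look as follows. A hedged sketch: the parameter keywords and all names are assumptions; type installation_library/CMPRCDCNT and press F4 to see the actual prompts.

```
/* Illustrative only: compare current and deleted record counts for    */
/* all files in APPDTA replicated by data group MYDG, reporting only   */
/* differences to an outfile. Keywords and names are assumptions.      */
MIMIX/CMPRCDCNT DGDFN(MYDG SYSTEM1 SYSTEM2) +
                FILE((*ALL APPDTA *ALL *INCLUDE)) +
                RPTTYPE(*DIF) OUTPUT(*OUTFILE) +
                OUTFILE(MYLIB/RCDCNT) OUTMBR(RCDCNT *REPLACE)
```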

Significant features for comparing file member data

The Compare File Data (CMPFILDTA) command provides the ability to compare data within members of physical files. The CMPFILDTA command is called programmatically by MIMIX AutoGuard functions that help you determine whether files are synchronized and whether your MIMIX environment is prepared for switching. You can also use the CMPFILDTA command interactively or call it from a program.

Unique features of the CMPFILDTA command include active server technology and isolated data correction capability. Together, these features enable the detection and correction of file members that are not synchronized while applications and replication processes remain active. File members that are held due to an error can also be compared and repaired.

Repairing data

You can optionally choose to have the CMPFILDTA command repair differences it detects in member data between systems.

When files are not synchronized, the CMPFILDTA command provides the ability to resynchronize the file at the record level by sending only the data for the incorrect member to the target system. (In contrast, the Synchronize DG File Entry (SYNCDGFE) command would resynchronize the file by transferring all data for the file from the source system to the target system.)

Active and non-active processing

The Process while active (ACTIVE) parameter determines whether a requested comparison can occur while application and replication activity is present.

Two modes of operation are available: active and non-active. In non-active mode, CMPFILDTA assumes that all files are quiesced and performs file comparisons and repairs without regard to application or replication activity. In active mode, processing begins in the same manner, performing an internal compare and generating a list of records that are not synchronized. This list is not reported, however. Instead, CMPFILDTA checks the mismatched records against the activity that is happening on the source system and the apply activity that is occurring on the target. If there is a member that needs repair, CMPFILDTA will then report the error. At that time, the command will also repair the target file member if *YES was specified on the Repair parameter.

During active processing of a member, the DB apply threshold (DBAPYTHLD) parameter can be used to specify what action CMPFILDTA should take if the database apply session backlog exceeds the threshold warning value configured for the database apply process.
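An active compare-and-repair request might be entered as follows. A hedged sketch: the Repair (*YES) and Process while active (ACTIVE) values follow the text above, but the remaining parameter keywords and all names are assumptions; prompt the command with F4 to confirm, including the values accepted by the DB apply threshold (DBAPYTHLD) parameter.

```
/* Illustrative only: compare and repair member data for file ORDERS   */
/* while applications and replication remain active. Keywords other    */
/* than REPAIR and ACTIVE, and all names, are assumptions.             */
MIMIX/CMPFILDTA DGDFN(MYDG SYSTEM1 SYSTEM2) +
                FILE((ORDERS APPDTA *ALL *INCLUDE)) +
                REPAIR(*YES) ACTIVE(*YES)
```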


Comparing file record counts and file member data

Processing members held due to error

The CMPFILDTA command also provides the ability to compare and repair members being held due to error (*HLDERR). When members in *HLDERR status are processed, the CMPFILDTA command works cooperatively with the database apply (DBAPY) process to compare and repair the file members—and when possible, restore them to an active state. To repair members in *HLDERR status, you must also specify that the repair be performed on the target system and request that active processing be enabled.

To support the cooperative efforts of CMPFILDTA and DBAPY, the following transitional states are used for file entries undergoing compare and repair processing:

• *CMPRLS - The file in *HLDERR status has been released. DBAPY will clear the journal entry backlog by applying the file entries in catch-up mode.

• *CMPACT - The journal entry backlog has been applied. CMPFILDTA and DBAPY are cooperatively repairing the member previously in *HLDERR status, and incoming journal entries continue to be applied in forgiveness mode.

When a member held due to error is being processed by the CMPFILDTA command, the entry transitions from *HLDERR status to *CMPRLS to *CMPACT. The member then changes to *ACTIVE status if compare and repair processing is successful. In the event that compare and repair processing is unsuccessful, the member-level entry is set back to *HLDERR.
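The state flow described above can be sketched as a small lookup. This is an illustrative sketch only; the function and its names are not MIMIX APIs.

```python
def next_state(state: str, repair_succeeded: bool = True) -> str:
    """Return the next member status in the CMPFILDTA/DBAPY cooperative
    compare-and-repair flow for a member held due to error (*HLDERR)."""
    if state == "*HLDERR":
        return "*CMPRLS"   # released; DBAPY clears the backlog in catch-up mode
    if state == "*CMPRLS":
        return "*CMPACT"   # backlog applied; cooperative repair, forgiveness mode
    if state == "*CMPACT":
        # success restores the member; failure returns it to held status
        return "*ACTIVE" if repair_succeeded else "*HLDERR"
    return state
```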

Additional features

The CMPFILDTA command incorporates many other features to increase performance and efficiency.

Subsetting and advanced subsetting options provide a significant degree of flexibility for performing periodic checks of a portion of the data within a file.

Parallel processing uses multi-threaded jobs to break up file processing into smaller groups for increased throughput. Rather than having a single-threaded job on each system, multiple “thread groups” break up the file into smaller units of work. This technology can benefit environments with multiple processors as well as systems with a single processor.

Considerations for using the CMPFILDTA command

Before you use the CMPFILDTA command, you should be aware of the information in this topic.

Recommendations and restrictions

It is recommended that the CMPFILDTA command be used in tandem with the CMPFILA command. Use the CMPFILA command to determine whether you have a matching set of files and attributes on both systems and use the CMPFILDTA command to compare the actual data within the files.


Keyed replication - Although you can run the CMPFILDTA command on keyed files, the command only supports files configured for *POSITIONAL replication. The CMPFILDTA command cannot compare files configured for *KEYED replication.

SNA environments - CMPFILDTA requires a TCP/IP transfer definition; SNA is not supported. If your environment is configured for SNA, you must override the configuration by specifying a TCP/IP transfer definition on the CMPFILDTA command. For more information, see “System-level communications” on page 159.

Apply threshold and apply backlog - Do not compare data using active processing technology if the apply process is 180 seconds or more behind, or has exceeded a threshold limit.

Using the CMPFILDTA command with firewalls

The CMPFILDTA command uses a communications port based on the port number specified in the transfer definition. If you need to run simultaneous CMPFILDTA jobs, you must open the equivalent number of ports in your firewall. For example, if the port number in your transfer definition is 5000 and you want to run 10 CMPFILDTA jobs at once, you should open at least 10 ports in your firewall—minimally, ports 5001 through 5010. If you attempt to run more jobs than there are open ports, those jobs will fail.
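The port arithmetic in the example above can be sketched as follows. The helper name is hypothetical, not part of MIMIX.

```python
def cmpfildta_ports(base_port: int, job_count: int) -> list[int]:
    """Firewall ports needed to run `job_count` simultaneous CMPFILDTA jobs
    when the transfer definition specifies `base_port`. Mirrors the manual's
    example: base port 5000 with 10 jobs needs ports 5001 through 5010."""
    return list(range(base_port + 1, base_port + job_count + 1))
```

For example, `cmpfildta_ports(5000, 10)` yields the ports 5001 through 5010.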

Security considerations

You should take extra precautions when using CMPFILDTA’s repair function, as it is capable of accessing and modifying data on your system.

To compare file data, you must have read access on both systems. When using the repair function, write access on the system to be repaired may also be necessary when active technology is not used.

CMPFILDTA builds upon the RUNCMD support in MIMIX. CMPFILDTA starts a remote process using RUNCMD, which requires two conditions to be true. First, the user profile of the job that is invoking CMPFILDTA must exist on the remote system and have the same password on the remote system as it does on the local system. Second, the user profile must have appropriate read or update access to the members to be compared or repaired. If active processing and repair is requested, only read access is needed. In this case, the repair processing would be done by the database apply process.

Comparing allocated records to records not yet allocated

In some situations, members differ in the number of records allocated. One member may have allocated records, while the corresponding records of the other member are not yet allocated. If the member to be repaired is the smaller of the two members, records are added to make the members the same size.

If the member to be repaired is the larger of the two members, however, the excess records are deleted. When MIMIX replication encounters these situations, no error is generated nor is the member placed on error hold.


If one or more members differ in the manner described above, a distinct escape message is issued. If you use CMPFILDTA in a CL program, you may wish to monitor these escape messages specifically.

Comparing files with unique keys, triggers, and constraints

If members being repaired have unique keys, active triggers, or constraints, special care should be taken. An update or insert repair action that results in one or more duplicate key exceptions automatically results in the deletion of records with duplicate keys.

Note: The records that could be deleted include those outside the subset of records being compared. Deletion of records with duplicate keys is not recorded in the outfile statistics.

If triggers are enabled, any compare or repair action causes the applicable trigger to be invoked. Triggers should be disabled if this action is not desired by the user. When a compare is specified, read triggers are invoked as records are read. If repair action is specified, update, insert, and delete triggers are invoked as records are repaired.

Table 61 describes the interaction of triggers with CMPFILDTA repair and active processing.

Attention: If an attempt is made to use one of the unsupported situations listed in Table 61, the job that invokes the trigger will end abruptly. You will see a CEE0200 information message in the job log shortly before the job ends. You may also see an MCH2004 message.

Table 61. CMPFILDTA and trigger support
(ACTGRP = Trigger activation group; REPAIR = CMPFILDTA Repair on system; ACTIVE = CMPFILDTA Process while active)

Trigger type                  ACTGRP             REPAIR                        ACTIVE      CMPFILDTA support
Read                          *NEW               Any value                     Any value   Not supported
Read                          NAMED or *CALLER   Any value                     Any value   Supported
Update, insert, and delete    *NEW               *NONE                         Any value   Supported
Update, insert, and delete    *NEW               Any value other than *NONE    *NO         Not supported
Update, insert, and delete    *NEW               Any value other than *NONE    *YES        Supported
Update, insert, and delete    NAMED or *CALLER   Any value                     Any value   Supported


Avoiding issues with triggers

It is possible to avoid potential trigger restrictions. You can use any one of the following techniques, which are listed in the preferred order:

• Recreate the trigger program, specifying ACTGRP(*CALLER) or a named activation group

• Use the Update Program (UPDPGM) command to change the trigger program to a named activation group

• Disable trigger programs on the file

• Use the Synchronize Objects (SYNCOBJ) command rather than CMPFILDTA

• Use the Synchronize Data Group File Entries (SYNCDGFE) command rather than CMPFILDTA

• Use the Copy Active File (CPYACTF) command rather than CMPFILDTA

• Save and restore outside of MIMIX

Referential integrity considerations

Referential integrity enforcement can present complex CMPFILDTA repair scenarios. As with triggers, a delete rule of “cascade”, “set null”, or “set default” can cause records in other tables to be modified or deleted as a result of a repair action. In other situations, a repair action may be prevented by referential integrity constraints.

Consider the case where a foreign key is defined between a “department” table and an “employee” table. The referential integrity constraint requires that records in the employee table only be permitted if the department number of the employee record corresponds to a row in the department table with the same department number.

It will not be possible for CMPFILDTA repair processing to add a row to the employee table if the corresponding parent row is not present in the department table. Because of this, you should use CMPFILDTA to repair parent tables before using CMPFILDTA to repair dependent tables. Note that the order in which you specify the tables on the CMPFILDTA command is not necessarily the order in which they will be processed, so you must issue the command once for the parent table, and then again for the dependent table.

Repairing the parent department table first may present its own problems. If CMPFILDTA attempts to delete a row in the department table and the delete rule for the constraint is “restrict”, the row deletion may fail if the employee table still contains records corresponding to the department to be deleted. Such constraints should use a delete rule of “cascade”, “set null”, or “set default”. Otherwise, CMPFILDTA may not be able to make all repairs.

See the IBM Database Programming manual (SC41-5701) for more information on referential integrity.

Job priority

When run, the remote CMPFILDTA job uses the run priority of the local CMPFILDTA job. However, the run priority of either CMPFILDTA job is superseded if a CMPFILDTA class object (*CLS) exists in the installation library of the system on which the job is running.

Note: Use the Change Job (CHGJOB) command on the local system to modify the run priority of the local job. CMPFILDTA uses the priority of the local job to set the priority of the remote job, so that both jobs have the same run priority. To set the remote job to run at a different priority than the local job, use the Create Class (CRTCLS) command to create a *CLS object for the job you want to change.

Specifying CMPFILDTA parameter values

This topic provides information about specific parameters of the CMPFILDTA command.

Specifying file members to compare

The CMPFILDTA command allows you to work with physical file members only. You can select the files to compare by using a data group, the object selection parameters, or both.

• By data group only: If you specify only by data group, the list of candidate objects to compare is determined by the data group configuration from the local system only. If a file exists on the remote system that meets the object selection criteria but it does not exist on the local system, the data within that file is not compared. If a file exists on the local system but not on the remote system, however, the command will signal an error condition.

• By object selection parameters only: You can compare file members that are not replicated by a data group. By specifying *NONE for the data group and specifying file and member information on the object selection parameters, you define a name space on the local system from which a list of candidate objects is created.

The Object attribute element on the File parameter enables you to select particular characteristics of a file. Table 62 lists the extended attributes for objects of type *FILE that are supported as values for the Object attribute element.

• By data group and object selection parameters: When you specify a data group name as well as values on the object selection parameters, the values specified in object selection parameters act as a filter for the items defined to the data group.

Detailed information about object selection is available in “Object selection for Compare and Synchronize commands” on page 399.

Table 62. CMPFILDTA supported extended attributes for *FILE objects

Object attribute   Description
PF                 Physical file types, including PF, PF-SRC, and PF-DTA
PF-DTA             Files of type PF-DTA


Tips for specifying values for unique parameters

The CMPFILDTA command includes several parameters that are unique among MIMIX commands.

Repair on system: When you choose to repair files that do not match, CMPFILDTA allows you to select the system on which the repair should be made.

File repairs can be performed on system 1, system 2, local, target, source, or you can specify the system definition name.

Note: *TGT and *SRC are only valid when a data group is specified. However, you cannot select *SRC when *YES is specified for the Process while active parameter. Refer to the “Process while active” section.

Process while active: CMPFILDTA includes while-active support. This parameter allows you to indicate whether compares should be made while file activity is taking place. For efficiency’s sake, it is always best to perform active repairs during a period of low activity. CMPFILDTA, however, uses a mechanism that retries comparison activity until it detects no interference from active files.

Three values are allowed on the Process while active parameter—*DFT, *NO, and *YES. The *NO option should be used when the files being compared are not actively being updated by either application activity or MIMIX replication activity. All file repairs are handled directly by CMPFILDTA. *YES is only allowed when a data group is specified and should be used when the files being compared are actively being updated by application activity or MIMIX replication activity. In this case, all file repairs are routed through the data group and require that the data group is active. If a data group is specified, the default value of *DFT is equivalent to *YES. If a data group is not specified, *DFT is the same as *NO.
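The rules for resolving the Process while active value can be summarized as follows. The helper is hypothetical, not a MIMIX API; it only encodes the defaulting and validity rules stated above.

```python
def resolve_process_while_active(value: str, data_group_specified: bool) -> str:
    """Resolve the Process while active (ACTIVE) value: *DFT is equivalent
    to *YES when a data group is specified, otherwise to *NO; *YES itself
    is only allowed when a data group is specified."""
    if value == "*DFT":
        return "*YES" if data_group_specified else "*NO"
    if value == "*YES" and not data_group_specified:
        raise ValueError("*YES is only allowed when a data group is specified")
    return value
```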

Specifying *NO for the Process while active parameter is the recommended option for running in a quiesced environment. When used in combination with an active data group, it assumes there is no application activity and MIMIX replication is current. If you specify *NO for the Process while active parameter in combination with repairing the file, the data group apply process must be configured not to lock the files on the apply system. This configuration can be accomplished by specifying *NO on the Lock on apply parameter of the data group definition.

Note: Do not compare data using active processing technology if the apply process is 180 seconds or more behind, or has exceeded a threshold limit.

File entry status: The File entry status parameter provides options for selecting members with specific statuses, including members held due to error (*HLDERR).

Table 62. CMPFILDTA supported extended attributes for *FILE objects (continued)

Object attribute   Description
PF-SRC             Files of type PF-SRC
PF38               Files of type PF38, including PF38, PF38-SRC, and PF38-DTA
PF38-DTA           Files of type PF38-DTA
PF38-SRC           Files of type PF38-SRC


When members in *HLDERR status are processed, the CMPFILDTA command works cooperatively with the database apply (DBAPY) process to compare and repair members held due to error—and when possible, restore them to an active state.

Valid values for the File entry status parameter are *ALL, *ACTIVE, and *HLDERR. A data group must also be specified on the command or the parameter is ignored. The default value, *ALL, indicates that all supported entry statuses (*ACTIVE and *HLDERR) are included in compare and repair processing. The value *ACTIVE processes only those members that are active (see note 1). When *HLDERR is specified, only member-level entries being held due to error are selected for processing. To repair members held due to error using *ALL or *HLDERR, you must also specify that the repair be performed on the target system and request that active processing be used.

System 1 ASP group and System 2 ASP group: The System 1 ASP group and System 2 ASP group parameters identify the name of the auxiliary storage pool (ASP) group where objects configured for replication may reside. The ASP group name is the name of the primary ASP device within the ASP group. These parameters are ignored when a data group is specified. You must be running i5/OS V5R2 or greater to use these parameters.

Subsetting option: The Subsetting option parameter provides a robust means by which to compare a subset of the data within members. In some instances, the value you select will determine which additional elements are used when comparing data.

Several options are available on this parameter: *ALL, *ADVANCED, *ENDDTA, or *RANGE. If *ALL is specified, all data within all selected files is compared, and no additional subsetting is performed. The other options compare only a subset of the data.

The following are common scenarios in which comparing a subset of your data is preferable:

• If you only need to check a specific range of records, use *RANGE.

• When a member, such as a history file, is primarily modified with insert operations, only recently inserted data needs to be compared. In this situation, use *ENDDTA.

• If time does not permit a full comparison, you can compare a random sample using *ADVANCED.

• If you do not have time to perform a full comparison all at once but you want all data to be compared over a number of days, use *ADVANCED.

*RANGE indicates that the Subset range parameter will be used to specify the subset of records to be compared. For more information, see the “Subset range” section.

If you select *ENDDTA, the Records at end of file parameter specifies how many trailing records are compared. This value allows you to compare a selected number of records at the end of all selected members. For more information, see the section titled “Records at end of file.”

Advanced subsetting can be used to audit your entire database over a number of days or to request that a random subset of records be compared. To specify advanced subsetting, select *ADVANCED. For more information, see “Advanced subset options for CMPFILDTA” on page 451.

1. The File entry status parameter was introduced in V4R4 SPC05SP2. If you want to preserve previous behavior, specify STATUS(*ACTIVE).

Subset range: Subset range is enabled when *RANGE is specified on the Subsetting option parameter, as described in the “Subsetting option” section.

Two elements are included, First record and Last record. These elements allow you to specify a range of records to compare. If more than one member is selected for processing, all members are compared using the same relative record number range. Thus, using the range specification is usually only useful for a single member or a set of members with related records.

The First record element can be specified as *FIRST or as a relative record number. In the case of *FIRST, records in the member are compared beginning with the first record.

The Last record element can be specified as *LAST or as a relative record number. In the case of *LAST, records in the member are compared up to, and including, the last record.

Advanced subset options: The Advanced subset options (ADVSUBSET) parameter provides the ability to use sophisticated comparison techniques. For detailed information and examples, see “Advanced subset options for CMPFILDTA” on page 451.

Records at end of file: The Records at end of file (ENDDTA) parameter allows you to compare recently inserted data without affecting the other subsetting criteria. If you specified *ENDDTA in the Subsetting option parameter, as indicated in the “Subsetting option” section, only those records specified in the Records at end of file parameter will be processed.

This parameter is also valid if values other than *ENDDTA were specified in the Subsetting option. In this case, both the records at the end of the file and any additional subsetting options factor into the compare. If some records are selected both by the ENDDTA parameter and by another subsetting option, those records are only processed once.

The Records at end of file parameter can be specified as *NONE or number-of-records. When *NONE is specified, records at the end of the members are not compared unless they are selected by other subset criteria. To compare particular records at the end of each member, you must specify the number of records.

The ENDDTA value is always applied to the smaller of the System 1 and System 2 members, and the compare continues through the end of the larger member. Let us assume that you specify 200 for the ENDDTA value. If one system has 1000 records while the other has 1100, relative records 801-1100 would be checked: the last 200 relative record numbers of the smaller member, plus the additional 100 relative record numbers that result from the difference in member size.
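The record-range arithmetic in this example can be sketched as follows. The helper is hypothetical, not a MIMIX API.

```python
def enddta_compare_range(enddta: int, size1: int, size2: int) -> tuple[int, int]:
    """Relative record numbers compared for ENDDTA: start at the last
    `enddta` records of the smaller member and continue through the end of
    the larger member. Per the manual's example, ENDDTA(200) with members
    of 1000 and 1100 records compares relative records 801-1100."""
    smaller = min(size1, size2)
    larger = max(size1, size2)
    first = max(smaller - enddta + 1, 1)  # never start before record 1
    return (first, larger)
```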

Using the Records at end of file parameter in daily processing can keep you from missing records that were inserted recently.


Specifying the report type, output, and type of processing

The options for selecting the processing method, output format, and the contents of the reported differences are similar to those provided for other MIMIX compare commands. For additional details, see “Report types and output formats” on page 418.

System to receive output

The System to receive output (OUTSYS) parameter indicates the system on which the output will be created. By default, the output is created on the local system.

When Output is *OUTFILE and Process while active is *YES, complete outfile information is only available if the System to receive output parameter indicates that the output file is on the data group target system. In this case, the outfile will be updated as the database apply encounters journal entries relating to possible mismatched records.

The Wait time (seconds) parameter can be used to ensure that all such outfile updates are complete before the command completes.

Interactive and batch processing

On the Submit to batch parameter, the *YES default submits a multi-thread capable batch job. When *NO is specified for the parameter, CMPFILDTA generates a batch immediate job to do the bulk of the processing. A batch immediate job is not processed through a job queue and is identified with a job type of BCI on the WRKACTJOB screen. Similarly, if CMPFILDTA is issued from a batch job whose ALWMLTTHD attribute is *NO, a batch immediate job will also be spawned.

In cases where a batch immediate job is generated, the original job waits for the batch immediate job to complete and re-issues any messages generated by CMPFILDTA. Interactive jobs are not permitted to have multiple threads, which are required for CMPFILDTA processing. Thus, you need to be aware of the following issues when a batch immediate job is generated:

• The identity of the job will be issued in a message in the original job.

• Since the batch immediate job cannot access the interactive job’s QTEMP library, outfiles and files to be compared may not reside in QTEMP, even when CMPFILDTA is issued from a multi-thread capable batch job.

• Re-issued messages will not have the original “from” and “to” program information. Instead, you must view the job log of the generated job to determine this information.

• Escape messages created prior to the final message will be converted to diagnostic messages.

• Canceling the interactive request will not cancel the batch immediate job.

Using the additional parameters

The following parameters allow you to specify an additional level of detail regarding CMPFILDTA command processing. These parameters are available by pressing F10 (Additional parameters).


Transfer definition: The default for the Transfer definition parameter is *DFT. If a data group was specified, the default uses the transfer definition associated with the data group. If no data group was specified, the transfer definition associated with system 2 is used.

The CMPFILDTA command requires that you have a TCP/IP transfer definition for communication with the remote system. If your data group is configured for SNA, override the SNA configuration by specifying the name of the transfer definition on the command.

Number of thread groups: The Number of thread groups parameter indicates how many thread groups should be used to perform the comparison. You can specify from 1 to 100 thread groups.

When using this parameter, it is important to balance the time required for processing against the available resources. If you increase the number of thread groups in order to reduce processing time, for example, you also increase processor and memory use. The default, *CALC, will determine the number of thread groups automatically. To maximize processing efficiency, the value *CALC does not calculate more than 25 thread groups.

The actual number of threads used in the comparison is based on the result of the formula 2x + 1, where x is the value specified or the value calculated internally as the result of specifying *CALC. When *CALC is specified, the CMPFILDTA command displays a message showing the value calculated as the number of thread groups.

Note: Thread groups are created for primary compare processing only. During setup, multiple threads may be utilized to improve performance, depending on the number of members selected for processing. The number of threads used during setup will not exceed the total number of threads used for primary compare processing. During active processing, only one thread will be used.
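The thread arithmetic above can be expressed directly. The helper is hypothetical, not a MIMIX API.

```python
def total_threads(thread_groups: int) -> int:
    """Total threads used for primary compare processing: 2x + 1, where x
    is the number of thread groups (valid range 1-100). With *CALC, x is
    computed internally and does not exceed 25, so at most 51 threads."""
    if not 1 <= thread_groups <= 100:
        raise ValueError("Number of thread groups must be between 1 and 100")
    return 2 * thread_groups + 1
```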

Wait time (seconds): The Wait time (seconds) value is only valid when active processing is in effect and specifies the amount of time to wait for active processing to complete. You can specify from 0 to 3600 seconds, or the default *NOMAX.

If active processing is enabled and a wait time is specified, CMPFILDTA processing waits the specified time for all pending compare operations processed through the MIMIX replication path to complete. In most cases, the *NOMAX default is highly recommended.

DB apply threshold: The DB apply threshold parameter is only valid during active processing and requires that a data group be specified. The parameter specifies what action CMPFILDTA should take if the database apply session backlog exceeds the threshold warning value configured for the database apply process. The default value *END stops the requested compare and repair action when the database apply threshold is reached; any repair actions that have not been completed are lost. The value *NOMAX allows the compare and repair action to continue even when the database apply threshold has been reached. Continuing processing when the apply process has a large backlog may adversely affect performance of the CMPFILDTA job and its ability to compare a file with an excessive number of outstanding entries. Therefore, *NOMAX should only be used in exceptional circumstances.


Advanced subset options for CMPFILDTA

You can use the Advanced subset options (ADVSUBSET) parameter on the Compare File Data (CMPFILDTA) command for advanced techniques such as comparing records over time and comparing a random sample of data. These techniques provide additional assurance that files are replicated correctly.

For example, let us assume you have a limited batch window. You do not have time to run a total compare every day, but you must ensure that all data is compared over the course of a week. Using the advanced CMPFILDTA capability, you can divide this work over a number of days.

Advanced subsetting makes it simple to accomplish this task by comparing 10 percent of your data each weeknight and completing the remaining 50 percent over the weekend. However, as the following example demonstrates, it is always best to compare a random representative sampling of data. The Advanced subset options also provides this capability.

For example, if a member contains 1000 records on Monday, records 1 through 100 will be compared on Monday. By Tuesday, perhaps the member has grown to 1500 records. The second 10 percent, to be processed on Tuesday, will contain records 151 through 300. Records 101 through 150 will not get checked at all. Advanced subsetting provides you with an alternative that does not skip records when members are growing.

Advanced subset options are applied independently for each member processed. The advanced subset function assigns the data in each member to multiple non-overlapping subsets in one of two ways. A specified range of these subsets can then be compared, which permits a representative sample of the data to be checked. Alternatively, a full compare can be partitioned into multiple CMPFILDTA requests that, in combination, assure that all data that existed at the time of the first request is compared.

To use advanced subsetting, you will need to identify the following:

• The number of subsets or “bins” to define for the compare

• The manner in which records are assigned to bins

• The specific bins to process

Number of subsets: The first issue to consider when using advanced subsetting is how many subsets, or bins, to establish. The Number of subsets element is the number of approximately equal-sized bins to define. These bins are numbered from 1 up to the number specified (N). You must specify at least one bin. Each record is assigned to one of these bins.

The Interleave element specifies the manner in which records are assigned to bins.

Interleave: The Interleave factor specifies the mapping between the relative record number and the bin number. There are two approaches that can be used.


If you specify *NONE, records in each member are divided on a percentage basis. For example:

Table 63. Interleave *NONE

                             Member A on Monday   Member A on Tuesday
Total records in member:     30                   45
Number of subsets (bins):    3                    3
Interleave:                  *NONE                *NONE
Records assigned to bin 1:   1-10                 1-15
Records assigned to bin 2:   11-20                16-30
Records assigned to bin 3:   21-30                31-45

Note that when the total number of records in a member changes, the mapping also changes. Records that were once assigned to bin 2 may in the future be assigned to bin 1. If you wish to compare all records over the course of a few days, the changing mapping may cause you to miss records. A specific Interleave value is preferable in this case.

When specified in bytes, the Interleave value determines the number of contiguous records that are assigned to each bin before moving to the next bin. Once the last bin is filled, assignment restarts at the first bin. Let us assume you have specified an interleave value of 20 bytes. The following example is based on the one provided in Table 63:

Table 64. Interleave(20)

                             Member A on Monday   Member A on Tuesday
Total records in member:     30                   45
Record length:               10 bytes             10 bytes
Number of subsets (bins):    3                    3
Interleave (bytes):          20                   20
Interleave (records):        2                    2
Records assigned to bin 1:   1-2, 7-8, 13-14,     1-2, 7-8, 13-14,
                             19-20, 25-26         19-20, 25-26, 31-32,
                                                  37-38, 43-44
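The two bin-assignment schemes above can be sketched as follows. The manual does not state the exact rounding used for *NONE; ceiling division is assumed here because it reproduces Table 63. Both helpers are hypothetical, not MIMIX APIs.

```python
def bin_for_record_pct(rrn: int, total_records: int, subsets: int) -> int:
    """Interleave *NONE: divide the member into `subsets` contiguous,
    approximately equal-sized bins (ceiling division matches Table 63)."""
    per_bin = -(-total_records // subsets)  # ceiling division
    return (rrn - 1) // per_bin + 1

def bin_for_record_ilv(rrn: int, record_len: int, interleave_bytes: int,
                       subsets: int) -> int:
    """Explicit interleave: assign `interleave_bytes` of contiguous records
    to each bin in rotation, restarting at bin 1 (matches Table 64)."""
    per_bin = interleave_bytes // record_len  # contiguous records per bin
    return ((rrn - 1) // per_bin) % subsets + 1
```

Note that under *NONE, record 16 moves from bin 2 on Monday (30 records) to bin 1 on Tuesday (45 records), whereas under Interleave(20) the Monday mapping of records 1-30 is unchanged on Tuesday, which is why a fixed interleave does not miss records as members grow.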


If the Interleave and Number of Subsets is constant, the mapping of relative record numbers to bins is maintained, despite the growth of member size. Because every bin is eventually selected, comparisons made over several days will compare every record that existed on the first day.

In most circumstances, *CALC is recommended for the interleave specification. When you select *CALC, the system determines how many contiguous bytes are assigned to each bin before subsequent bytes are placed in the next bin. This calculated value will not change due to member size changes.

Specifying *NONE or a very large interleave factor maximizes processing efficiency, since data in each bin is processed sequentially. Specifying a very small interleave factor can greatly reduce efficiency, as little sequential processing can be done before the file must be repositioned. If you wish to compare a random sample, a smaller interleave factor provides a more random, or scattered, sample to compare.
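The record-to-bin mapping for a fixed byte interleave can be sketched as follows (Python is used only for illustration; `bin_for_record` is a hypothetical helper, not a MIMIX interface):

```python
def bin_for_record(rrn, interleave_bytes, record_length, num_subsets):
    """Map a relative record number (1-based) to its bin (1-based) for a
    fixed byte interleave: contiguous chunks assigned round-robin to bins."""
    interleave_records = interleave_bytes // record_length  # e.g. 20 // 10 = 2
    chunk = (rrn - 1) // interleave_records                 # which contiguous chunk
    return chunk % num_subsets + 1                          # round-robin over the bins

# Table 64: records 1-2 go to bin 1, 3-4 to bin 2, 5-6 to bin 3, 7-8 to bin 1...
assignments = {rrn: bin_for_record(rrn, 20, 10, 3) for rrn in range(1, 9)}
print(assignments)  # {1: 1, 2: 1, 3: 2, 4: 2, 5: 3, 6: 3, 7: 1, 8: 1}
```

Because the mapping depends only on the record number, the interleave, and the number of bins, it does not change when the member grows; record 45, added on Tuesday, simply lands in bin 2 without disturbing the earlier assignments.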

The next parameters, the First subset and the Last subset, allow you to specify which bin to process.

First and last subset: The First subset and Last subset values work in combination to determine a range of bins to compare. For the First subset, the possible values are *FIRST and subset-number. If you select *FIRST, the range to compare will start with bin 1. Last subset has similar values, *LAST and subset-number. When you specify *LAST, the highest numbered bin is the last one processed.

To compare a random sample of your data, specify a range of subsets that represents the size of the sample. For example, suppose you wish to compare seven percent of your data. If the number of subsets is 100, the first subset is 1, and the last subset is 7, seven percent of the data is compared. A first subset value of 21 and a last subset value of 27 would also compare seven percent of your data, but it would compare a different seven percent than the first example.
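The fraction of data selected by a First subset/Last subset range follows directly from the subset count (a minimal sketch, assuming approximately equal-sized subsets; `percent_compared` is a hypothetical name):

```python
def percent_compared(num_subsets, first, last):
    """Percentage of the data selected by a First subset/Last subset range,
    assuming the subsets are approximately equal in size."""
    return 100.0 * (last - first + 1) / num_subsets

print(percent_compared(100, 1, 7))    # 7.0
print(percent_compared(100, 21, 27))  # 7.0 -- same size sample, different records
```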

Table 64. (Continued) Interleave(20)

                              Member A on Monday         Member A on Tuesday
Records assigned to bin 2:    3-4, 9-10, 15-16,          3-4, 9-10, 15-16, 21-22,
                              21-22, 27-28               27-28, 33-34, 39-40, 45
Records assigned to bin 3:    5-6, 11-12, 17-18,         5-6, 11-12, 17-18, 23-24,
                              23-24, 29-30               29-30, 35-36, 41-42


To compare all of your data over the course of several days, specify a number of subsets and an interleave factor that let you size each day's workload as your needs require. Keep the Number of subsets and the interleave factor constant, and vary the First subset and Last subset values each day. The settings shown in Table 65 could be used over the course of a week to compare all of your data:

Note: You can automate these tasks using MIMIX Monitor. Refer to the MIMIX Monitor documentation for more information.

Ending CMPFILDTA requests

The Compare File Data (CMPFILDTA) command, or a rule which calls it, can be long running and may exceed the time you have available for it to run.

The CMPFILDTA command recognizes requests to end the job in a controlled manner (ENDJOB OPTION(*CNTRLD)). Messages indicate the step within CMPFILDTA processing at which the end was requested. The report and output file contain as much information as possible with the data available at the step in progress when the job ended. The output may not be accurate because the full CMPFILDTA request did not complete.

The content of the report and output file is most valuable if the command completed processing through the end of phase 1 compare. The output may be incomplete if the end occurred earlier. If processing did not complete to a point where MIMIX can accurately determine the result of the compare, the value *UN (unknown) is placed in the Difference Indicator.

Note: If the CMPFILDTA command has been long running or has encountered many errors, you may need to specify more time on the ENDJOB command’s Delay time, if *CNTRLD (DELAY) parameter. The default value of 30 seconds may not be adequate in these circumstances.

Table 65. Using First and last subset to compare data

Day of week   Number of        Interleave   First    Last     Percentage
              subsets (bins)                subset   subset   compared
Monday        100              *CALC        1        10       10
Tuesday       100              *CALC        11       20       10
Wednesday     100              *CALC        21       30       10
Thursday      100              *CALC        31       40       10
Friday        100              *CALC        41       50       10
Saturday      100              *CALC        51       65       15
Sunday        100              *CALC        66       100      35
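A quick check confirms that the weekly schedule in Table 65 covers every subset exactly once (Python used only for illustration, not part of MIMIX):

```python
# The weekly schedule from Table 65, as (First subset, Last subset) ranges.
week = {
    "Monday":    (1, 10),
    "Tuesday":   (11, 20),
    "Wednesday": (21, 30),
    "Thursday":  (31, 40),
    "Friday":    (41, 50),
    "Saturday":  (51, 65),
    "Sunday":    (66, 100),
}

covered = set()
for first, last in week.values():
    covered.update(range(first, last + 1))

# Together the daily ranges cover all 100 subsets, so every record that
# existed on the first day is eventually compared.
print(covered == set(range(1, 101)))  # True
```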


Comparing file member data - basic procedure (non-active)

You can use the CMPFILDTA command to ensure that the data required for replication exists on both systems, as well as any time you need to verify that files are synchronized between systems. You can optionally specify that results of the comparison are placed in an outfile.

Before you begin, see the recommendations, restrictions, and security considerations described in “Considerations for using the CMPFILDTA command” on page 441. You should also read “Specifying CMPFILDTA parameter values” on page 445 for additional information about parameters and values that you can specify.

To perform a basic data comparison, do the following:

1. From the MIMIX Intermediate Main Menu, select option 12 (Compare, verify, and synchronize menu) and press Enter.

2. From the MIMIX Compare, Verify, and Synchronize menu, select option 7 (Compare file data) and press Enter.

3. The Compare File Data (CMPFILDTA) command appears. At the Data group definition prompts, do one of the following:

• To compare data for all files defined by the data group file entries for a particular data group definition, specify the data group name and skip to Step 6.

• To compare data by file name only, specify *NONE and continue with the next step.

• To compare a subset of files defined to a data group, specify the data group name and continue with the next step.

4. At the File prompts, you can specify elements for one or more object selectors that either identify files to compare or that act as filters to the files defined to the data group indicated in Step 3. For more information, see “Object selection for Compare and Synchronize commands” on page 399.

You can specify as many as 300 object selectors by using the + for more prompt for each selector. For each selector, do the following:

a. At the File and library prompts, specify the name or the generic value you want.

b. At the Member prompt, accept *ALL or specify a member name to compare a particular member within a file.

c. At the Object attribute prompt, accept *ALL to compare the entire list of supported attributes or press F4 to see a valid list of attributes.

d. At the Include or omit prompt, specify the value you want.

e. At the System 2 file and System 2 library prompts, if the file and library names on system 2 are equal to system 1, accept the defaults. Otherwise, specify the name of the file and library to which files on the local system are compared.


Note: The System 2 file and System 2 library values are ignored if a data group is specified on the Data group definition prompts.

f. Press Enter.

5. The System 2 parameter prompt appears if you are comparing files not defined to a data group. If necessary, specify the name of the remote system to which files on the local system are compared.

6. At the Repair on system prompt, accept *NONE to indicate that no repair action is done.

7. At the Process while active prompt, specify *NO to indicate that active processing technology should not be used in the comparison.

8. At the File entry status prompt, specify *ACTIVE to process only those file members that are active.

9. At the System 1 ASP group prompt, accept the default if no objects from any ASP group are to be compared on system 1. Otherwise, specify the name of the ASP group that contains objects to be compared on system 1.

Note: This parameter is ignored when a data group definition is specified.

10. At the System 2 ASP group prompt, accept the default if no objects from any ASP group are to be compared on system 2. Otherwise, specify the name of the ASP group that contains objects to be compared on system 2.

Note: This parameter is ignored when a data group definition is specified.

11. At the Subsetting option prompt, specify *ALL to select all data and to indicate that no subsetting is performed.

12. At the Report type prompt, do one of the following:

• If you want all compared objects to be included in the report, accept the default.

• If you only want objects with detected differences to be included in the report, specify *DIF.

• If you want to include the member details and relative record number (RRN) of the first 1,000 objects that have differences, specify *RRN.

Notes:

• The *RRN value can only be used when *NONE is specified for the Repair on system prompt and *OUTFILE is specified for the Output prompt.

• The *RRN value outputs to a unique outfile (MXCMPFILR). Specifying *RRN can help resolve situations where a discrepancy is known to exist but you are unsure which system contains the correct data. This value provides the information that enables you to display the specific records on the two systems and determine the system on which the file should be repaired.

13. At the Output prompt, do one of the following:

• To generate spooled output that is printed, accept the default, *PRINT. Press Enter and continue with the next step.


• To generate an outfile and spooled output that is printed, specify *BOTH. Press Enter and continue with the next step.

• If you do not want to generate output, specify *NONE. Press Enter and skip to Step 18.

• To generate an outfile, specify *OUTFILE. Press Enter and continue with the next step.

14. At the File to receive output prompts, specify the file and library to receive the output. (Press F1 (Help) to see the name of the supplied database file.)

15. At the Output member options prompts, do the following:

a. At the Member to receive output prompt, specify the name of the database file member to receive the output of the command.

b. At the Replace or add prompt, specify whether new records should replace existing file members or be added to the existing list.

16. At the System to receive output prompt, specify the system on which the output should be created.

Note: If *YES is specified on the Process while active prompt and *OUTFILE was specified on the Outfile prompt, you must select *SYS2 for the System to receive output prompt.

17. At the Object difference messages prompt, specify whether you want detail messages placed in the job log. The value *INCLUDE places detail messages in the job log, and is the default used outside of shipped rules. When used as part of shipped rules, the default value is *OMIT since the results are already placed in an outfile.

18. At the Submit to batch prompt, do one of the following:

• If you do not want to submit the job for batch processing, specify *NO and press Enter to start the comparison.

• To submit the job for batch processing, accept the default. Press Enter and continue with the next step.

19. At the Job description and Library prompts, specify the name and library of the job description used to submit the batch request.

20. At the Job name prompt, accept *CMD to use the command name to identify the job or specify a simple name.

21. To start the comparison, press Enter.


Comparing and repairing file member data - basic procedure

You can use the CMPFILDTA command to repair data on the local or remote system.

Before you begin, see the recommendations, restrictions, and security considerations described in “Considerations for using the CMPFILDTA command” on page 441. You should also read “Specifying CMPFILDTA parameter values” on page 445 for additional information about parameters and values that you can specify.

To compare and repair data, do the following:

1. From the MIMIX Intermediate Main Menu, select option 12 (Compare, verify, and synchronize menu) and press Enter.

2. From the MIMIX Compare, Verify, and Synchronize menu, select option 7 (Compare file data) and press Enter.

3. The Compare File Data (CMPFILDTA) command appears. At the Data group definition prompts, do one of the following:

• To compare data for all files defined by the data group file entries for a particular data group definition, specify the data group name and skip to Step 6.

• To compare data by file name only, specify *NONE and continue with the next step.

• To compare a subset of files defined to a data group, specify the data group name and continue with the next step.

4. At the File prompts, you can specify elements for one or more object selectors that either identify files to compare or that act as filters to the files defined to the data group indicated in Step 3. For more information, see “Object selection for Compare and Synchronize commands” on page 399.

You can specify as many as 300 object selectors by using the + for more prompt for each selector. For each selector, do the following:

a. At the File and library prompts, specify the name or the generic value you want.

b. At the Member prompt, accept *ALL or specify a member name to compare a particular member within a file.

c. At the Object attribute prompt, accept *ALL to compare the entire list of supported attributes or press F4 to see a valid list of attributes.

d. At the Include or omit prompt, specify the value you want.

e. At the System 2 file and System 2 library prompts, if the file and library names on system 2 are equal to system 1, accept the defaults. Otherwise, specify the name of the file and library to which files on the local system are compared.

Note: The System 2 file and System 2 library values are ignored if a data group is specified on the Data group definition prompts.

f. Press Enter.


5. The System 2 parameter prompt appears if you are comparing files not defined to a data group. If necessary, specify the name of the remote system to which files on the local system are compared.

6. At the Repair on system prompt, specify *SYS1, *SYS2, *LOCAL, *TGT, *SRC, or the system definition name to indicate the system on which repair action should be performed.

Note: *TGT and *SRC are only valid if you are comparing files defined to a data group. *SRC is not valid if active processing is in effect.

7. At the Process while active prompt, specify *NO to indicate that active processing technology should not be used in the comparison.

8. At the File entry status prompt, specify *ACTIVE to process only those file members that are active.

9. At the System 1 ASP group prompt, accept the default if no objects from any ASP group are to be compared on system 1. Otherwise, specify the name of the ASP group that contains objects to be compared on system 1.

Note: This parameter is ignored when a data group definition is specified.

10. At the System 2 ASP group prompt, accept the default if no objects from any ASP group are to be compared on system 2. Otherwise, specify the name of the ASP group that contains objects to be compared on system 2.

Note: This parameter is ignored when a data group definition is specified.

11. At the Subsetting option prompt, specify *ALL to select all data and to indicate that no subsetting is performed.

12. At the Report type prompt, do one of the following:

• If you want all compared objects to be included in the report, accept the default.

• If you only want objects with detected differences to be included in the report, specify *DIF.

13. At the Output prompt, do one of the following:

• To generate spooled output that is printed, accept the default, *PRINT. Press Enter and continue with the next step.

• To generate an outfile and spooled output that is printed, specify *BOTH. Press Enter and continue with the next step.

• If you do not want to generate output, specify *NONE. Press Enter and skip to Step 18.

• To generate an outfile, specify *OUTFILE. Press Enter and continue with the next step.

14. At the File to receive output prompts, specify the file and library to receive the output. (Press F1 (Help) to see the name of the supplied database file.)

15. At the Output member options prompts, do the following:


a. At the Member to receive output prompt, specify the name of the database file member to receive the output of the command.

b. At the Replace or add prompt, specify whether new records should replace existing file members or be added to the existing list.

16. At the System to receive output prompt, specify the system on which the output should be created.

Note: If *YES is specified on the Process while active prompt and *OUTFILE was specified on the Outfile prompt, you must select *SYS2 for the System to receive output prompt.

17. At the Object difference messages prompt, specify whether you want detail messages placed in the job log. The value *INCLUDE places detail messages in the job log, and is the default used outside of shipped rules. When used as part of shipped rules, the default value is *OMIT since the results are already placed in an outfile.

18. At the Submit to batch prompt, do one of the following:

• If you do not want to submit the job for batch processing, specify *NO and press Enter to start the comparison.

• To submit the job for batch processing, accept the default. Press Enter.

19. At the Job description and Library prompts, specify the name and library of the job description used to submit the batch request.

20. At the Job name prompt, accept *CMD to use the command name to identify the job or specify a simple name.

21. To start the comparison, press Enter.


Comparing and repairing file member data - members on hold (*HLDERR)

Members that are being held due to error (*HLDERR) can be repaired with the Compare File Data (CMPFILDTA) command during active processing. When members in *HLDERR status are processed, the CMPFILDTA command works cooperatively with the database apply (DBAPY) process to compare and repair the members—and when possible, restore them to an active state.

Before you begin, see the recommendations, restrictions, and security considerations described in “Considerations for using the CMPFILDTA command” on page 441. You should also read “Specifying CMPFILDTA parameter values” on page 445 for additional information about parameters and values that you can specify.

The following procedure repairs a member without transmitting the entire member. As such, this method is generally faster than other methods of repairing members in *HLDERR status that transmit the entire member or file. However, if significant activity has occurred on the source system that has not been replicated on the target system, it may be faster to synchronize the member using the Synchronize Data Group File Entry (SYNCDGFE) command.

To repair a member with a status of *HLDERR, do the following:

1. From the MIMIX Intermediate Main Menu, select option 12 (Compare, verify, and synchronize menu) and press Enter.

2. From the MIMIX Compare, Verify, and Synchronize menu, select option 7 (Compare file data) and press Enter.

3. The Compare File Data (CMPFILDTA) command appears. At the Data group definition prompts, you must specify a data group name.

Note: If you want to compare data for all files defined by the data group file entries for a particular data group definition, skip to Step 5.

4. At the File prompts, you can optionally specify elements for one or more object selectors that act as filters to the files defined to the data group indicated in Step 3. For more information, see “Object selection for Compare and Synchronize commands” on page 399.

You can specify as many as 300 object selectors by using the + for more prompt for each selector. For each selector, do the following:

a. At the File and library prompts, specify the name or the generic value you want.

b. At the Member prompt, accept *ALL or specify a member name to compare a particular member within a file.

c. At the Object attribute prompt, accept *ALL to compare the entire list of supported attributes or press F4 to see a valid list of attributes.

d. At the Include or omit prompt, specify the value you want.

e. Press Enter.

Note: The System 2 file and System 2 library values are ignored when a data group is specified on the Data group definition prompts.

5. At the Repair on system prompt, specify *TGT to indicate that repair action be performed on the target system.

6. At the Process while active prompt, specify *YES to indicate that active processing technology should be used in the comparison.

7. At the File entry status prompt, specify *HLDERR to process members being held due to error only.

8. At the System 1 ASP group prompt, accept the default if no objects from any ASP group are to be compared on system 1. Otherwise, specify the name of the ASP group that contains objects to be compared on system 1.

Note: This parameter is ignored when a data group definition is specified.

9. At the System 2 ASP group prompt, accept the default if no objects from any ASP group are to be compared on system 2. Otherwise, specify the name of the ASP group that contains objects to be compared on system 2.

Note: This parameter is ignored when a data group definition is specified.

10. At the Output prompt, do one of the following:

• To generate spooled output that is printed, accept the default, *PRINT. Press Enter and continue with the next step.

• To generate an outfile and spooled output that is printed, specify *BOTH. Press Enter and continue with the next step.

• If you do not want to generate output, specify *NONE. Press Enter and skip to Step 15.

• To generate an outfile, specify *OUTFILE. Press Enter and continue with the next step.

11. At the File to receive output prompts, specify the file and library to receive the output. (Press F1 (Help) to see the name of the supplied database file.)

12. At the Output member options prompts, do the following:

a. At the Member to receive output prompt, specify the name of the database file member to receive the output of the command.

b. At the Replace or add prompt, specify whether new records should replace existing file members or be added to the existing list.

13. At the System to receive output prompt, specify the system on which the output should be created.

14. At the Object difference messages prompt, specify whether you want detail messages placed in the job log. The value *INCLUDE places detail messages in the job log, and is the default used outside of shipped rules. When used as part of shipped rules, the default value is *OMIT since the results are already placed in an outfile.

15. At the Submit to batch prompt, do one of the following:


• If you do not want to submit the job for batch processing, specify *NO and press Enter to start the comparison.

• To submit the job for batch processing, accept the default. Press Enter.

16. At the Job description and Library prompts, specify the name and library of the job description used to submit the batch request.

17. At the Job name prompt, accept *CMD to use the command name to identify the job or specify a simple name.

18. To compare and repair the file, press Enter.


Comparing file member data using active processing technology

You can set the CMPFILDTA command to use active processing technology when a data group is specified on the command.

Before you begin, see the recommendations, restrictions, and security considerations described in “Considerations for using the CMPFILDTA command” on page 441. You should also read “Specifying CMPFILDTA parameter values” on page 445 for additional information about parameters and values that you can specify.

Note: Do not compare data using active processing technology if the apply process is 180 seconds or more behind, or has exceeded a threshold limit.

To compare data using active processing, do the following:

1. From the MIMIX Intermediate Main Menu, select option 12 (Compare, verify, and synchronize menu) and press Enter.

2. From the MIMIX Compare, Verify, and Synchronize menu, select option 7 (Compare file data) and press Enter.

3. The Compare File Data (CMPFILDTA) command appears. At the Data group definition prompts, do one of the following:

• To compare data for all files defined by the data group file entries for a particular data group definition, specify the data group name and skip to Step 6.

• To compare a subset of files defined to a data group, specify the data group name and continue with the next step.

4. At the File prompts, you can specify elements for one or more object selectors that either identify files to compare or that act as filters to the files defined to the data group indicated in Step 3. For more information, see “Object selection for Compare and Synchronize commands” on page 399.

You can specify as many as 300 object selectors by using the + for more prompt for each selector. For each selector, do the following:

a. At the File and library prompts, specify the name or the generic value you want.

b. At the Member prompt, accept *ALL or specify a member name to compare a particular member within a file.

c. At the Object attribute prompt, accept *ALL to compare the entire list of supported attributes or press F4 to see a valid list of attributes.

d. At the Include or omit prompt, specify the value you want.

e. At the System 2 file and System 2 library prompts, accept the defaults.

f. Press Enter.

5. At the Repair on system prompt, specify *TGT to indicate that repair action be performed on the target system of the data group.


6. At the Process while active prompt, specify *YES or *DFT to indicate that active processing technology be used in the comparison. Since a data group is specified on the Data group definition prompts, *DFT will render the same results as *YES.

7. At the File entry status prompt, specify *ACTIVE to process only those file members that are active.

8. At the System 1 ASP group prompt, accept the default if no objects from any ASP group are to be compared on system 1. Otherwise, specify the name of the ASP group that contains objects to be compared on system 1.

Note: This parameter is ignored when a data group definition is specified.

9. At the System 2 ASP group prompt, accept the default if no objects from any ASP group are to be compared on system 2. Otherwise, specify the name of the ASP group that contains objects to be compared on system 2.

Note: This parameter is ignored when a data group definition is specified.

10. At the Subsetting option prompt, specify *ALL to select all data and to indicate that no subsetting is performed.

11. At the Report type prompt, do one of the following:

• If you want all compared objects to be included in the report, accept the default.

• If you only want objects with detected differences to be included in the report, specify *DIF.

12. At the Output prompt, do one of the following:

• To generate spooled output that is printed, accept the default, *PRINT. Press Enter and continue with the next step.

• To generate an outfile and spooled output that is printed, specify *BOTH. Press Enter and continue with the next step.

• If you do not want to generate output, specify *NONE. Press Enter and skip to Step 17.

• To generate an outfile, specify *OUTFILE. Press Enter and continue with the next step.

13. At the File to receive output prompts, specify the file and library to receive the output. (Press F1 (Help) to see the name of the supplied database file.)

14. At the Output member options prompts, do the following:

a. At the Member to receive output prompt, specify the name of the database file member to receive the output of the command.

b. At the Replace or add prompt, specify whether new records should replace existing file members or be added to the existing list.

15. At the System to receive output prompt, specify the system on which the output should be created.

Note: If *OUTFILE was specified on the Outfile prompt, it is recommended that you select *SYS2 for the System to receive output prompt.


16. At the Object difference messages prompt, specify whether you want detail messages placed in the job log. The value *INCLUDE places detail messages in the job log, and is the default used when the command is invoked from outside of shipped audits. When used as part of shipped audits, the default value is *OMIT since the results are already placed in an outfile.

17. At the Submit to batch prompt, do one of the following:

• If you do not want to submit the job for batch processing, specify *NO and press Enter to start the comparison.

• To submit the job for batch processing, accept the default. Press Enter and continue with the next step.

18. At the Job description and Library prompts, specify the name and library of the job description used to submit the batch request.

19. At the Job name prompt, accept *CMD to use the command name to identify the job or specify a simple name.

20. To start the comparison, press Enter.


Comparing file member data using subsetting options

You can use the CMPFILDTA command to audit your entire database over a number of days.

Before you begin, see the recommendations, restrictions, and security considerations described in “Considerations for using the CMPFILDTA command” on page 441. You should also read “Specifying CMPFILDTA parameter values” on page 445 for additional information about parameters and values that you can specify.

Note: Do not compare data using active processing technology if the apply process is 180 seconds or more behind, or has exceeded a threshold limit.

To compare data using subsetting options, do the following:

1. From the MIMIX Intermediate Main Menu, select option 12 (Compare, verify, and synchronize menu) and press Enter.

2. From the MIMIX Compare, Verify, and Synchronize menu, select option 7 (Compare file data) and press Enter.

3. The Compare File Data (CMPFILDTA) command appears. At the Data group definition prompts, do one of the following:

• To compare data for all files defined by the data group file entries for a particular data group definition, specify the data group name and skip to Step 6.

• To compare data by file name only, specify *NONE and continue with the next step.

• To compare a subset of files defined to a data group, specify the data group name and continue with the next step.

4. At the File prompts, you can specify elements for one or more object selectors that either identify files to compare or that act as filters to the files defined to the data group indicated in Step 3. For more information, see “Object selection for Compare and Synchronize commands” on page 399.

You can specify as many as 300 object selectors by using the + for more prompt for each selector. For each selector, do the following:

a. At the File and library prompts, specify the name or the generic value you want.

b. At the Member prompt, accept *ALL or specify a member name to compare a particular member within a file.

c. At the Object attribute prompt, accept *ALL to compare the entire list of supported attributes or press F4 to see a valid list of attributes.

d. At the Include or omit prompt, specify the value you want.

e. At the System 2 file and System 2 library prompts, if the file and library names on system 2 are equal to system 1, accept the defaults. Otherwise, specify the name of the file and library to which files on the local system are compared.

Note: The System 2 file and System 2 library values are ignored if a data group is specified on the Data group definition prompts.

f. Press Enter.

5. The System 2 parameter prompt appears if you are comparing files not defined to a data group. If necessary, specify the name of the remote system to which files on the local system are compared.

6. At the Repair on system prompt, specify a value if you want repair action performed.

Note: To process members in *HLDERR status, you must specify *TGT. See Step 8.

7. At the Process while active prompt, specify whether active processing technology should be used in the comparison.

Notes:
• To process members in *HLDERR status, you must specify *YES. See Step 8.

• If you are comparing files associated with a data group, *DFT uses active processing. If you are comparing files not associated with a data group, *DFT does not use active processing.

• Do not compare data using active processing technology if the apply process is 180 seconds or more behind, or has exceeded a threshold limit.

8. At the File entry status prompt, you can select files with specific statuses for compare and repair processing. Do one of the following:

a. To process active members only, specify *ACTIVE.

b. To process both active members and members being held due to error (*ACTIVE and *HLDERR), specify the default value *ALL.

c. To process members being held due to error only, specify *HLDERR.

Note: When *ALL or *HLDERR is specified for the File entry status prompt, *TGT must also be specified for the Repair on system prompt (Step 6) and *YES must be specified for the Process while active prompt (Step 7).
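The relationships among these prompts can be expressed as a simple validation rule. The following Python sketch is illustrative only; the prompt names and values come from the steps above, and the function is not part of MIMIX:

```python
def check_hlderr_constraints(status, repair, active):
    """Validate the prompt combinations described in Steps 6-8.

    status: File entry status (*ACTIVE, *ALL, or *HLDERR)
    repair: Repair on system (*NONE, *TGT, ...)
    active: Process while active (*YES, *NO, or *DFT)

    Members held due to error (*HLDERR) can only be processed when
    repairs are applied on the target and active processing is used.
    """
    if status in ("*ALL", "*HLDERR"):
        if repair != "*TGT":
            return "Repair on system must be *TGT"
        if active != "*YES":
            return "Process while active must be *YES"
    return "OK"
```

For example, requesting *HLDERR processing while repair is disabled would be rejected before the comparison starts.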

9. At the Subsetting option prompt, you must specify a value other than *ALL to use additional subsetting. Do one of the following:

• To compare a fixed range of data, specify *RANGE then press Enter to see additional prompts. Skip to Step 10.

• To define how many subsets should be established, how member data is assigned to the subsets, and which range of subsets to compare, specify *ADVANCED and press Enter to see additional prompts. Skip to Step 11.

• To indicate that only data specified on the Records at end of file prompt is compared, specify *ENDDTA and press Enter to see additional prompts. Skip to Step 12.

10. At the Subset range prompts, do the following:


a. At the First record prompt, specify the relative record number of the first record to compare in the range.

b. At the Last record prompt, specify the relative record number of the last record to compare in the range.

c. Skip to Step 12.

11. At the Advanced subset options prompts, do the following:

a. At the Number of subsets prompt, specify the number of approximately equal-sized subsets to establish. Subsets are numbered beginning with 1.

b. At the Interleave prompt, specify the interleave factor. In most cases, the default *CALC is highly recommended.

c. At the First subset prompt, specify the first subset in the sequence of subsets to compare.

d. At the Last subset prompt, specify the last subset in the sequence of subsets to compare.

12. At the Records at end of file prompt, specify the number of records at the end of the member to compare. These records are compared regardless of other subsetting criteria.

Note: If *ENDDTA is specified on the Subsetting option prompt, you must specify a value other than *NONE.
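To illustrate how the subsetting options interact with the Records at end of file prompt, the following Python sketch computes which relative record numbers would be selected. It is one plausible interpretation of the rules above, not MIMIX code, and it omits the *ADVANCED option (subset count, interleave, and subset range):

```python
def records_to_compare(total, option, first=1, last=None, end_records=0):
    """Return the set of relative record numbers (RRNs) selected for
    comparison, given the Subsetting option (*ALL, *RANGE, or *ENDDTA)
    and the Records at end of file value. Illustrative only.
    """
    rrns = set()
    if option == "*ALL":
        rrns = set(range(1, total + 1))
    elif option == "*RANGE":
        last = total if last is None else last
        rrns = set(range(first, last + 1))
    elif option == "*ENDDTA" and end_records == 0:
        # Step 12 note: *ENDDTA requires a value other than *NONE.
        raise ValueError("*ENDDTA requires Records at end of file other than *NONE")
    # Records at the end of the member are compared regardless of the
    # other subsetting criteria.
    rrns |= set(range(max(1, total - end_records + 1), total + 1))
    return rrns
```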

13. At the Report type prompt, do one of the following:

• If you want all compared objects to be included in the report, accept the default.

• If you only want objects with detected differences to be included in the report, specify *DIF.

• If you want to include the member details and relative record number (RRN) of the first 1,000 objects that have differences, specify *RRN.

Notes:
• The *RRN value can only be used when *NONE is specified for the Repair on system prompt and *OUTFILE is specified for the Output prompt.

• The *RRN value outputs to a unique outfile (MXCMPFILR). Specifying *RRN can help resolve situations where a discrepancy is known to exist but you are unsure which system contains the correct data. This value provides the information that enables you to display the specific records on the two systems and determine the system on which the file should be repaired.

14. At the Output prompt, do one of the following:

• To generate spooled output that is printed, accept the default, *PRINT. Press Enter and continue with the next step.

• To generate an outfile and spooled output that is printed, specify *BOTH. Press Enter and continue with the next step.


• If you do not want to generate output, specify *NONE. Press Enter and skip to Step 19.

• To generate an outfile, specify *OUTFILE. Press Enter and continue with the next step.

15. At the File to receive output prompts, specify the file and library to receive the output. (Press F1 (Help) to see the name of the supplied database file.)

16. At the Output member options prompts, do the following:

a. At the Member to receive output prompt, specify the name of the database file member to receive the output of the command.

b. At the Replace or add prompt, specify whether new records should replace existing file members or be added to the existing list.

17. At the System to receive output prompt, specify the system on which the output should be created.

Note: If *YES is specified on the Process while active prompt and *OUTFILE was specified on the Output prompt, you must select *SYS2 for the System to receive output prompt.

18. At the Object difference messages prompt, specify whether you want detail messages placed in the job log. The value *INCLUDE places detail messages in the job log, and is the default used outside of shipped rules. When used as part of shipped rules, the default value is *OMIT since the results are already placed in an outfile.

19. At the Submit to batch prompt, do one of the following:

• If you do not want to submit the job for batch processing, specify *NO and press Enter to start the comparison.

• To submit the job for batch processing, accept the default. Press Enter and continue with the next step.

20. At the Job description and Library prompts, specify the name and library of the job description used to submit the batch request.

21. At the Job name prompt, accept *CMD to use the command name to identify the job or specify a simple name.

22. To start the comparison, press Enter.


Chapter 20

Synchronizing data between systems

This chapter contains information about support provided by MIMIX commands for synchronizing data between two systems. The data that MIMIX replicates must be synchronized on several occasions.

• During initial configuration of a data group, you need to ensure that the data to be replicated is synchronized between both systems defined in a data group.

• If you change the configuration of a data group to add new data group entries, the objects must be synchronized.

• You may also need to synchronize a file or object if an error occurs that causes the two systems to become unsynchronized.

• The automatic recovery features of MIMIX® AutoGuard™ also use synchronize commands to recover differences detected during replication and audits. If automatic recovery policies are disabled, you may need to use synchronize commands to correct a file or object in error or to correct differences detected by audits or compare commands.

The Lakeview-provided synchronize commands can be loosely grouped by common characteristics and the level of function they provide. Topic “Considerations for synchronizing using MIMIX commands” on page 474 describes subjects that apply to more than one group of commands, such as the maximum size of an object that can be synchronized, how large objects are handled, and how user profiles are addressed.

Initial synchronization: Initial synchronization can be performed manually with a variety of MIMIX and IBM commands, or by using the Synchronize Data Group (SYNCDG) command. The SYNCDG command is intended especially for performing the initial synchronization of one or more data groups and uses the auditing and automatic recovery support provided by MIMIX AutoGuard. The command can be long-running. For information about initial synchronization, see these topics:

• “Performing the initial synchronization” on page 483 describes how to establish a synchronization point and identifies other key information.

• Environments using MIMIX support for IBM WebSphere MQ have additional requirements for the initial synchronization of replicated queue managers. For more information, see the MIMIX for IBM WebSphere MQ book.

Synchronize commands: The commands Synchronize Object (SYNCOBJ), Synchronize IFS Object (SYNCIFS), and Synchronize DLO (SYNCDLO) provide robust support in MIMIX environments for synchronizing library-based objects, IFS objects, and DLOs, as well as their associated object authorities. Each command has considerable flexibility for selecting objects associated with or independent of a data group. Additionally, these commands are often called by other functions, such as by the automatic recovery features of MIMIX AutoGuard and by options to synchronize objects identified in tracking entries used with advanced journaling. For additional information, see:

• “About MIMIX commands for synchronizing objects, IFS objects, and DLOs” on page 478

• “About synchronizing tracking entries” on page 482

Synchronize Data Group Activity Entry: The Synchronize DG Activity Entry (SYNCDGACTE) command provides the ability to synchronize library-based objects, IFS objects, and DLOs that are associated with data group activity entries which have specific status values. The contents of the object and its attributes and authorities are synchronized. For additional information, see “About synchronizing data group activity entries (SYNCDGACTE)” on page 479.

Synchronize Data Group File Entry: The Synchronize DG File Entry (SYNCDGFE) command provides the means to synchronize database files associated with a data group by data group file entries. Additional options provide the means to address triggers, referential constraints, logical files, and related files. For more information about this command, see “About synchronizing file entries (SYNCDGFE command)” on page 480.

Send Network commands: The Send Network Object (SNDNETOBJ), Send Network IFS Object (SNDNETIFS), and Send Network DLO (SNDNETDLO) commands support fewer usage options and usability benefits than the Synchronize commands. These commands may require multiple invocations per library, path, or directory, respectively. These commands do not support synchronizing based on a data group name.

Procedures: The procedures in this chapter are for commands that are accessible from the MIMIX Compare, Verify, and Synchronize menu. Typically, when you need to synchronize individual items in your configuration, the best approach is to use the options provided on the displays where they are appropriate to use. The options call the appropriate command and, in many cases, pre-select some of the fields. The following procedures are included:

• “Synchronizing database files” on page 489

• “Synchronizing objects” on page 491

• “Synchronizing IFS objects” on page 495

• “Synchronizing DLOs” on page 499

• “Synchronizing data group activity entries” on page 503

• “Synchronizing tracking entries” on page 505

• “Sending library-based objects” on page 506

• “Sending IFS objects” on page 508

• “Sending DLO objects” on page 509


Considerations for synchronizing using MIMIX commands

For discussion purposes, the synchronize commands are grouped as follows:

• Synchronize commands (SYNCOBJ, SYNCIFS, and SYNCDLO)

• Synchronize Data Group Activity Entry (SYNCDGACTE)

• Synchronize Data Group File Entry (SYNCDGFE)

The following subtopics apply to more than one group of commands. Before you synchronize you should be aware of information in the following topics:

• “Limiting the maximum sending size” on page 474

• “Synchronizing user profiles” on page 474

• “Synchronizing large files and objects” on page 476

• “Status changes caused by synchronizing” on page 476

• “Synchronizing objects in an independent ASP” on page 477

Limiting the maximum sending size

The Synchronize commands (SYNCOBJ, SYNCIFS, and SYNCDLO) and the Synchronize Data Group File Entry (SYNCDGFE) command provide the ability to limit the size of files or objects transmitted during synchronization with the Maximum sending size (MAXSIZE) parameter. By default, no maximum value is specified. You can also specify the value *TFRDFN to use the threshold size from the transfer definition associated with the data group1, or specify a value between 1 and 9,999,999 megabytes (MB). On the SYNCDGFE command, the value *TFRDFN is only allowed when the Sending mode (METHOD) parameter specifies *SAVRST.

When automatic recovery actions initiate a Synchronize or SYNCDGFE command, the policies in effect determine the value used for the command’s MAXSIZE parameter. The Set MIMIX Policies (SETMMXPCY) command sets policies for automatic recovery actions and for the synchronize threshold used by the commands MIMIX invokes to perform recovery actions. When any of the automatic recovery policies are enabled (DBRCY, OBJRCY, or AUDRCY), the value of the Sync. threshold size (SYNCTHLD) policy is used for the MAXSIZE value on the command. You can adjust the SYNCTHLD policy value for the installation or optionally set a value for a specific data group.
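The way automatic recovery derives the MAXSIZE value can be sketched as follows. The function and argument names are illustrative; only the policy names (DBRCY, OBJRCY, AUDRCY, SYNCTHLD) come from the text:

```python
def resolve_maxsize(recovery_policies, syncthld_by_dg, syncthld_install, dg=None):
    """Illustrative sketch of how automatic recovery determines the
    MAXSIZE value used on a Synchronize or SYNCDGFE command.

    recovery_policies: dict of policy name -> enabled, e.g. {"DBRCY": True}
    syncthld_by_dg: optional per-data-group SYNCTHLD overrides (MB)
    syncthld_install: installation-wide SYNCTHLD value (MB)
    """
    if any(recovery_policies.get(p) for p in ("DBRCY", "OBJRCY", "AUDRCY")):
        # A value set for a specific data group overrides the
        # installation-wide policy value.
        if dg is not None and dg in syncthld_by_dg:
            return syncthld_by_dg[dg]
        return syncthld_install
    return None  # no maximum specified
```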

Synchronizing user profiles

User profile objects (*USRPRF) can be synchronized explicitly or implicitly.

The Synchronize commands (SYNCOBJ, SYNCIFS, and SYNCDLO) and the Send Network Objects (SNDNETOBJ) command can synchronize user profiles either implicitly or explicitly. The following information describes slight variations in processing.

1. To preserve behavior prior to changes made in V4R4 service pack SPC05SP4, specify *TFRDFN.

Synchronizing user profiles with SYNCnnn commands

The SYNCOBJ command explicitly synchronizes user profiles when you specify *USRPRF for the object type on the command. The status of the user profile on the target system is affected as follows:

• If you specified a data group and a user profile which is configured for replication, the status of the user profile on the target system is the value specified in the configured data group object entry.

• If you specified a user profile but did not specify a data group, the following occurs:

– If the user profile exists on the target system, its status on the target system remains unchanged.

– If the user profile does not exist on the target system, it is synchronized and its status on the target system is set to *DISABLED.

When synchronizing other object types, the SYNCOBJ, SYNCIFS, and SYNCDLO commands implicitly synchronize user profiles associated with the object if they do not exist on the target system. Although only the requested object type, such as *PGM, is specified on these commands, the owning user profile, the primary group profile, and user profiles that have private authorities to an object are implicitly synchronized, as follows:

• When the Synchronize command specifies a data group and that data group has a data group object entry which includes the user profile, the object and the user profile are synchronized. The status of the user profile on the target system is set to match the value from the data group object entry.

• If a data group object entry excludes the user profile from replication, the object is synchronized and its owner is changed to the default owner indicated in the data group definition. The user profile is not synchronized.

• When the Synchronize command specifies a data group and that data group does not have a data group object entry for the user profile, the object and the associated user profile are synchronized. The status of the user profile on the target system is set to *DISABLED.
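The resulting status of a user profile on the target system can be summarized as a decision table. This Python sketch is illustrative only; it condenses the explicit and implicit rules above and is not a MIMIX API:

```python
def target_profile_status(dg_specified, object_entry, entry_status=None,
                          exists_on_target=False, current_status=None):
    """Illustrative decision table for the status of a *USRPRF on the
    target system after a SYNCOBJ request, per the rules above.

    object_entry: None (no entry), "INCLUDE", or "EXCLUDE"
    entry_status: status configured in the data group object entry
    """
    if dg_specified:
        if object_entry == "INCLUDE":
            return entry_status   # value from the data group object entry
        if object_entry == "EXCLUDE":
            return None           # profile is not synchronized
        return "*DISABLED"        # no object entry for the profile
    # No data group specified
    if exists_on_target:
        return current_status     # status remains unchanged
    return "*DISABLED"            # created disabled on the target
```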

Synchronizing user profiles with the SNDNETOBJ command

The Send Network Objects (SNDNETOBJ) command explicitly synchronizes user profiles when you specify *USRPRF for the object type on the command. The status of the user profile on the target system is affected as follows:

• If the user profile exists on the target system, its status on the target system remains unchanged.

• If the user profile does not exist on the target system, it is synchronized and its status on the target system is set to *DISABLED.


When synchronizing other object types, this command implicitly synchronizes user profiles associated with the object if they do not exist on the target system. Although only the requested object type, such as *PGM, is specified on the command, the owning user profile, the primary group profile, and user profiles that have private authorities to an object are implicitly synchronized. The object and associated user profiles are synchronized. The status of the user profile on the target system is set to *DISABLED.

Missing system distribution directory entries automatically added

When a missing user profile is detected during replication or synchronization of an object, MIMIX automatically adds any missing system distribution directory entries for user profiles. The synchronize (SYNCnnn) and the SNDNETOBJ commands provide this capability.

If replication or a synchronization request determines that a user profile is missing on the target system and a system directory entry exists on the source system for that user profile, MIMIX adds the system distribution directory entry for the user profile on the target system and specifies these values:

• User ID: same value as retrieved from the source system

• Description: same value as retrieved from the source system

• Address: local-system name

• User profile: user-profile name

• All other directory entry fields are blank
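The directory entry that MIMIX adds can be pictured as follows. Field names in this Python sketch are illustrative; the values assigned are those listed above:

```python
def build_directory_entry(src_entry, local_system, profile_name):
    """Illustrative sketch of the system distribution directory entry
    added on the target system for a missing user profile.
    """
    return {
        "user_id": src_entry["user_id"],          # same value as the source system
        "description": src_entry["description"],  # same value as the source system
        "address": local_system,                  # local-system name
        "user_profile": profile_name,             # user-profile name
        # all other directory entry fields are left blank
    }
```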

Synchronizing large files and objects

When configured for advanced journaling, large objects (LOBs) can be synchronized through the user (database) journal. You can synchronize a database file that contains LOB data using the Synchronize Data Group File Entry (SYNCDGFE) command.

If advanced journaling is not used in your environment, you may want to consider synchronizing large files or objects (over 1 GB) outside of MIMIX. During traditional synchronization, large files or objects can negatively impact performance by consuming too much bandwidth. Certain commands for synchronizing provide the ability to limit the size of files or objects transmitted during synchronization. See “Limiting the maximum sending size” on page 474 for more information.

On certain commands, it is possible to control the size of files and objects sent to another system. The Threshold size (THLDSIZE) parameter on the transfer definition can be used to limit the size of objects transmitted with the Send Network Object commands.

Status changes caused by synchronizing

In some circumstances the Synchronize Data Group Activity Entry (SYNCDGACTE) command changes the status of activity entries when the command completes. For additional details, see “About synchronizing data group activity entries (SYNCDGACTE)” on page 479.


The Synchronize commands (SYNCOBJ, SYNCIFS and SYNCDLO) do not change the status of activity entries associated with the objects being synchronized. Activity entries retain the same status after the command completes.

Note: The SYNCIFS command will change the status of an activity entry for an IFS object configured for advanced journaling.

When advanced journaling is configured, each replicated activity has associated tracking entries. When you use the SYNCOBJ or SYNCIFS commands to synchronize an object that has a corresponding tracking entry, the status of the tracking entry will change to *ACTIVE upon successful completion of the synchronization request. If the synchronization is not successful, the status of the tracking entry will remain in its original status or have a status of *HLD. If the data group is not active, the status of the tracking entry will be updated once the data group is restarted.

Synchronizing objects in an independent ASP

When synchronizing data that is located in an independent ASP, be aware of the following:

• In order for MIMIX to access objects located in an independent ASP, do one of the following on the Synchronize Object (SYNCOBJ) command:

– Specify the data group definition.

– If no data group is specified, you must specify values for the System 1 ASP group or device and System 2 ASP device number parameters.

• In order for the Send Network Object (SNDNETOBJ) command to access objects that are located in an independent auxiliary storage pool (ASP) on the source system, you must first use the IBM command Set ASP Group (SETASPGRP) on the local system before using the SNDNETOBJ command.


About MIMIX commands for synchronizing objects, IFS objects, and DLOs

The Synchronize Object (SYNCOBJ), Synchronize IFS (SYNCIFS), and Synchronize DLO (SYNCDLO) commands provide versatility for synchronizing objects and their authority attributes.

Where to run: The synchronize commands can be run from either system. However, if you run these commands from a target system, you must specify the name of a data group to avoid overwriting the objects on the source system.

Identifying what to synchronize: On each command, you can identify objects to synchronize by specifying a data group, a subset of a data group, or by specifying objects independently of a data group.

• When you specify a data group, its source system determines the objects to synchronize. The objects to be synchronized by the command are the same as those identified for replication by the data group. For example, specifying a data group on the SYNCOBJ command will synchronize the same library-based objects as those configured for replication by the data group.

• If you specify a data group as well as specify additional object information in command parameters, the additional parameter information is used to filter the list of objects identified for the data group.

• When no data group is specified, the local system becomes the source system and a target system must be identified. The list of objects to synchronize is generated on the local system. For more information about the object selection criteria used when no data group is specified on these commands, see “Object selection for Compare and Synchronize commands” on page 399.

Each command has a Synchronize authorities parameter to indicate whether authority attributes are synchronized. By default, the object and all authority-related attributes are synchronized. You can also synchronize only the object or only the authority attributes of an object. Authority attributes include ownership, authorization list, primary group, public and private authorities.

When you use the SYNCOBJ command to synchronize only the authorities for an object without specifying a data group name, the command could fail if any files it processes are cooperatively processed by an active data group and the database apply job holds a lock on those files.

When to run: Each command can be run whether the data group is in an active or an inactive state.

Using the SYNCOBJ, SYNCIFS, and SYNCDLO commands during off-peak usage or when the objects being synchronized are in a quiesced state reduces contention for object locks.

When using the SYNCIFS command for a data group configured for advanced journaling, the data group can be active but it should not have a backlog of unprocessed entries.


Additional parameters: On each command, the following parameters provide additional control of the synchronization process.

• The Save active parameter provides the ability to save the object in an active environment using IBM's save while active support. Values supported are the same as those used in related IBM commands.

• The Save active wait time parameter specifies the amount of time to wait for a commit boundary or for a lock on an object. If a lock is not obtained in the specified time, the object is not saved. If a commit boundary is not reached in the specified time, the save operation ends and the synchronization attempt fails.

• The Maximum sending size (MB) parameter specifies the maximum size that an object can be in order to be synchronized. For more information, see “Limiting the maximum sending size” on page 474.

About synchronizing data group activity entries (SYNCDGACTE)

The Synchronize Data Group Activity Entry (SYNCDGACTE) command supports the ability to synchronize library-based objects, IFS objects, or DLOs associated with data group activity entries. Activity entries whose status falls in the following categories can be synchronized: *ACTIVE, *COMPLETED, *DELAYED, or *FAILED. The contents of the object, its attributes, and its authorities are synchronized between the source and target systems.

Note: From the 5250 emulator, data group activity and the status category of the represented object are listed on the Work with Data Group Activity display (WRKDGACT command). The specific status of individual activity entries appear on the Work with DG Activity Entries display (WRKDGACTE command).

The data group can either be active or inactive during the synchronization request.

If the item you are synchronizing has multiple activity entries with varying statuses (for example, an entry with a status of completed, followed by a failed entry, and subsequent delayed entries), the SYNCDGACTE command will find the first non-completed activity entry and synchronize it. The same SYNCDGACTE request will then find the next non-completed entry and synchronize it. The SYNCDGACTE request will continue to synchronize these non-completed entries until all entries for that object have been synchronized.

Any existing active, delayed, or failed activity entries for the specified object are processed and set to ‘completed by synchronization’ (CZ) when the synchronization request completes successfully.

If all activity entries for the specified object are already completed, only the status of the most recent completed entry is changed from completed (CP) to ‘completed by synchronization’ (CZ) when the synchronization request completes successfully.

Not supported: Spooled files and cooperatively processed files are not eligible to be synchronized using the SYNCDGACTE command.


Status changes during synchronization: During synchronization processing, if the data group is active, the status of the activity entries being synchronized is set to ‘pending synchronization’ (PZ) and then to ‘pending completion’ (PC). When the synchronization request completes, the status of the activity entries is set to either ‘completed by synchronization’ (CZ) or ‘failed synchronization’ (FZ).

If the data group is inactive, the status of the activity entries remains either ‘pending synchronization’ (PZ) or ‘pending completion’ (PC) when the synchronization request completes. When the data group is restarted, the status of the activity entries is set to either ‘completed by synchronization’ (CZ) or to ‘failed synchronization’ (FZ).
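The status transitions described above can be sketched as follows. The two-character status codes are from the text; the function itself is illustrative, not MIMIX code:

```python
def finish_sync_statuses(entries, dg_active, success=True):
    """Illustrative sketch of the SYNCDGACTE status changes.

    Each entry is a dict with a 'status' code: CP (completed),
    PZ (pending synchronization), PC (pending completion), etc.
    """
    final = "CZ" if success else "FZ"  # completed/failed by synchronization
    if not dg_active:
        # Statuses remain pending until the data group is restarted.
        return entries
    for e in entries:
        if e["status"] in ("PZ", "PC"):
            e["status"] = final
    return entries
```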

About synchronizing file entries (SYNCDGFE command)

The Synchronize Data Group File Entry (SYNCDGFE) command synchronizes database files associated with a data group by data group file entries.

Active data group required: Because the SYNCDGFE command runs through a database apply job, the data group must be active when the command is used.

Choice of what to synchronize: The Sending mode (METHOD) parameter provides granularity in specifying what is synchronized. Table 66 describes the choices.

Files with triggers: The SYNCDGFE command provides the ability to optionally disable triggers during synchronization processing and enable them again when processing is complete. The Disable triggers on file (DSBTRG) parameter specifies whether the database apply process (used for synchronization) disables triggers when processing a file.

The default value *DGFE uses the data group file entry to determine whether triggers should be disabled. The value *YES disables triggers on the target system during synchronization.

Table 66. Sending mode (METHOD) choices on the SYNCDGFE command.

*DATA: This is the default value. Only the physical file data is replicated using MIMIX Copy Active File processing. File attributes are not replicated using this method. If the file exists on the target system, MIMIX refreshes its contents. If the file format is different on the target system, the synchronization will fail. If the file does not exist on the target system, MIMIX uses save and restore operations to create the file on the target system and then uses copy active file processing to fill it with data from the file on the source system.

*ATR1: Only the physical file attributes are replicated and synchronized.

*AUT1: Only the authorities for the physical file are replicated and synchronized.

*SAVRST: The content and attributes are replicated using the IBM i save and restore commands. This method allows save-while-active operations. This method also has the capability to save associated logical files.

1. Available when service pack SP070.00.0 or higher is installed.


If configuration options for the data group, or optionally for a data group file entry, allow MIMIX to replicate trigger-generated entries and disable triggers, you must specify *DATA as the sending mode when synchronizing a file with triggers.

Including logical files: The Include logical files (INCLF) parameter allows you to include any attached logical files in the synchronization request. This parameter is only valid when *SAVRST is specified for the Sending mode prompt.

Physical files with referential constraints: Physical files with referential constraints require a field in another physical file to be valid. When synchronizing physical files with referential constraints, ensure all files in the referential constraint structure are synchronized concurrently during a time of minimal activity on the source system. Doing so will ensure the integrity of synchronization points.

Including related files: To include files related to the specified file in the synchronization request, specify *YES for the Include related (RELATED) parameter. Related files are those physical files which have a relationship with the selected physical file by means of one or more join logical files. Join logical files are logical files attached to fields in two or more physical files.

The Include related (RELATED) parameter defaults to *NO. In some environments, specifying *YES could result in a high number of files being synchronized and could potentially strain available communications and take a significant amount of time to complete.

A physical file being synchronized cannot be name mapped if it is not in the same library as the logical file associated with it. Logical files may be mapped by using object entries.
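The parameter interactions described in this topic can be collected into one validation sketch. The parameter keywords (METHOD, MAXSIZE, INCLF) are from the text; the checks themselves are an illustrative summary, not a MIMIX API:

```python
def validate_syncdgfe(method, maxsize=None, inclf="*NO", has_triggers=False,
                      replicate_trigger_entries=False):
    """Illustrative sketch of SYNCDGFE parameter interactions.

    Returns a list of error strings; an empty list means the
    combination is valid under the rules described above.
    """
    errors = []
    if maxsize == "*TFRDFN" and method != "*SAVRST":
        errors.append("MAXSIZE(*TFRDFN) requires METHOD(*SAVRST)")
    if inclf == "*YES" and method != "*SAVRST":
        errors.append("INCLF(*YES) requires METHOD(*SAVRST)")
    if has_triggers and replicate_trigger_entries and method != "*DATA":
        errors.append("files with replicated trigger entries require METHOD(*DATA)")
    return errors
```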


About synchronizing tracking entries

Tracking entries provide status of IFS objects, data areas, and data queues that are replicated using MIMIX advanced journaling. Object tracking entries represent data areas or data queues. IFS tracking entries represent IFS objects. IFS tracking entries also track the file identifier (FID) of the object on the source and target systems.

You can synchronize the object represented by a tracking entry by using the synchronize option available on the Work with DG Object Tracking Entries display or the Work with DG IFS Tracking Entries display. For object tracking entries, the option calls the Synchronize Object (SYNCOBJ) command. For IFS tracking entries, the option calls the Synchronize IFS Object (SYNCIFS) command.

The contents, attributes, and authorities of the item are synchronized between the source and target systems.

Notes:

• Before starting data groups for the first time, any existing objects to be replicated from the source system must be synchronized to the target system.

• Tracking entries may not exist for existing IFS objects, data areas, or data queues that have been configured for replication with advanced journaling since the last start of the data group.

• If tracking entries do not exist, you must create them by doing one of the following:

  – Change the data group IFS entry or object entry configuration as needed, then end and restart the data groups.

  – Load tracking entries using the Load DG IFS Tracking Entries (LODDGIFSTE) or Load DG Obj Tracking Entries (LODDGOBJTE) commands. See “Loading tracking entries” on page 284.

• For status changes to be effective for a tracking entry that is being synchronized, the data group must be active. When the apply session receives notification that the object represented by the tracking entry is synchronized successfully, the tracking entry status changes to *ACTIVE.
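As an illustration of loading tracking entries for a specific data group, the commands can be run as follows. This is a sketch only: the installation library name MIMIX and the data group name INVDG are hypothetical placeholders; substitute your own values.

```
/* Load tracking entries for data group INVDG (hypothetical name) */
MIMIX/LODDGOBJTE DGDFN(INVDG)   /* data areas and data queues */
MIMIX/LODDGIFSTE DGDFN(INVDG)   /* IFS objects                */
```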


Performing the initial synchronization

Ensuring that data is synchronized before you begin replication is crucial to its success. How you perform the initial synchronization can be influenced by the available communications bandwidth, the complexity of describing the data, the size of the data, and the available time.

Note: If you have configured or migrated a MIMIX configuration to use integrated support for IBM WebSphere MQ, you must use the procedure ‘Initial synchronization for replicated queue managers’ in the MIMIX for IBM WebSphere MQ book. Large IBM WebSphere MQ environments should plan to perform this during off-peak hours.

Establish a synchronization point

Just before you start the initial synchronization, establish a known start point for replication by changing journal receivers. The information gathered in this procedure will be used when you start replication for the first time.

From the source system, do the following:

1. Quiesce your applications before continuing with the next step.

2. For each data group that will replicate from a user journal, use the following command to change the user journal receiver. Record the new receiver names shown in the posted message. On a command line, type:

(installation-library-name)/CHGDGRCV DGDFN(data-group-name) TYPE(*DB)

3. Change the system journal receiver and record the new receiver name shown in the posted message. On a command line, type:

CHGJRN JRN(QAUDJRN) JRNRCV(*GEN)
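The two receiver changes above can be sketched as a short sequence, assuming the product is installed in library MIMIX and a single data group named INVDG replicates from a user journal (both names are hypothetical):

```
MIMIX/CHGDGRCV DGDFN(INVDG) TYPE(*DB)   /* change the user journal receiver   */
CHGJRN JRN(QAUDJRN) JRNRCV(*GEN)        /* change the system journal receiver */
```

Record the new receiver names from the posted messages; you will need them when you start replication for the first time.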

Resources for synchronizing

The available choices for synchronizing are, in order of preference:

• SYNCDG command: The SYNCDG command is intended especially for performing the initial synchronization of one or more data groups and uses the auditing and automatic recovery support provided by MIMIX AutoGuard. Using the SYNCDG command may shorten the initial synchronization time because only data that is not already synchronized is identified and replicated. The command can be long-running. MIMIX IntelliStart uses this command for automatic replication and synchronization.

• IBM save and restore commands: IBM save and restore commands are best suited for performing a manual initial synchronization. While the MIMIX SYNCDG, SYNC, and SNDNET commands can be used, the communications bandwidth required for the size and quantity of objects may exceed capacity.

• SYNC commands: Of the MIMIX commands for manual synchronization, the Synchronize commands (SYNCOBJ, SYNCIFS, SYNCDLO) should be your starting point. These commands provide significantly more flexibility in object selection and also provide the ability to synchronize object authorities. By specifying a data group on any of these commands, you can synchronize the data defined by its data group entries.

You can also use the Synchronize Data Group File Entry (SYNCDGFE) command to synchronize database files and members. This command provides the ability to choose between MIMIX copy active file processing and save/restore processing and provides choices for handling trigger programs during synchronization.

If you have configured or migrated to integrated advanced journaling, follow the SYNCIFS procedures for IFS objects, SYNCOBJ procedures for data areas and data queues, and SYNCDGFE procedures for files containing LOB data. You can also use options to synchronize objects associated with tracking entries from the Work with DG IFS Trk. Entries display and the Work with DG Obj. Trk. Entries display.

• SNDNET commands: The Send Network commands (SNDNETIFS, SNDNETDLO, SNDNETOBJ) support fewer options for selecting and specifying multiple objects and do not provide a way to specify by data group. These commands may require multiple invocations per path, folder, or library, respectively.

This chapter (“Synchronizing data between systems” on page 472) includes additional information about the MIMIX SYNC and SNDNET commands.

Using SYNCDG to perform the initial synchronization

This topic describes the procedure for performing the initial synchronization using the Synchronize Data Group (SYNCDG) command prior to beginning replication. The initial synchronization ensures that data is the same on each system and reduces the time and complexity involved with starting replication for the first time.

The SYNCDG command utilizes the auditing and automatic recovery functions of MIMIX® AutoGuard™ to synchronize an enabled data group between the source system and the target system. The SYNCDG command is intended for the initial synchronization of a data group and can also be used in other situations where data groups are not synchronized. The SYNCDG command can only be run on the management system, and only one instance of the command per data group can be running at any time. This command submits a batch program that can run for several days. The SYNCDG command can be performed automatically through MIMIX IntelliStart.

Note: The SYNCDG command will not process a request to synchronize a data group that is currently using the MIMIX CDP™ feature. This feature is in use if a recovery window is configured or when a recovery point is set for a data group. Also, do not configure a recovery window or set a recovery point if a SYNCDG request is in progress for the data group. The MIMIX CDP feature may not protect data under these circumstances.

Ensure the following conditions are met for each data group that you want to synchronize, before running this command:


• Apply any IBM PTFs (or their superseding PTFs) associated with IBM i releases as they pertain to your environment. Log in to Support Central and access the Technical Documents page for a list of required and recommended IBM PTFs.

• Journaling is started on the source system for everything defined to the data group.

• All replication processes are active.

• The user ID submitting the SYNCDG request has *MGT authority in product-level security, if product-level security is enabled for the installation.

• No other audits (comparisons or recoveries) are in progress when the SYNCDG is requested.

• Collector services has been started.

While the synchronization is in progress, other audits for the data group are prevented from running. MIMIX Availability Manager displays initialization mode on the Audit Summary and Compliance interfaces while running this command if the data group definition (DGDFN) specifies *ALL.

To perform the initial synchronization using the SYNCDG command defaults

From MIMIX Availability Manager, do the following:

1. Select the following from the navigation bar:

a. Systems - select the system for which you want to perform the initial synchronization.

b. Installations - select the installation for which you want to perform the initial synchronization.

c. Details - select Data Groups.

2. From the upper portion of the Data Groups Status window, select Start All from the Action drop-down.

3. The Start Data Groups window appears. Accept the defaults and click OK.

4. From the Details section of the navigation bar, select Command History.

5. In the Command History window type SYNCDG and click on the Prompt button.

6. The Synchronize Data Group (SYNCDG) command prompt opens. Click Advanced and specify the following values by pressing F4 for valid options on each parameter or use the drop-down menu:

• Data group definition (DGDFN).

• Job description (JOBD).

7. Click on OK to perform the initial synchronization.

8. Verify your configuration is using MIMIX AutoGuard. This step includes performing audits to verify that journaling and other aspects of your environment are ready to use. Audits automatically check for and attempt to correct differences found between the source system and the target system. Use “Verifying the initial synchronization” on page 487.

From a 5250 emulator, do the following:

1. Use the command STRDG DGDFN(*ALL).

2. Type the command SYNCDG and press Enter. Specify the following values, pressing F4 for valid options on each parameter:

• Data group definition (DGDFN).

• Job description (JOBD).

3. Press Enter to perform the initial synchronization.

4. Verify your configuration is using MIMIX AutoGuard. This step includes performing audits to verify that journaling and other aspects of your environment are ready to use. Audits automatically check for and attempt to correct differences found between the source system and the target system. Use “Verifying the initial synchronization” on page 487.
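The 5250 steps above reduce to a short command sequence. In this sketch, the installation library MIMIX and data group INVDG are hypothetical; prompt SYNCDG with F4 to review the DGDFN and JOBD parameters before submitting:

```
STRDG DGDFN(*ALL)           /* start all data groups                   */
MIMIX/SYNCDG DGDFN(INVDG)   /* submit the long-running synchronization */
```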


Verifying the initial synchronization

This procedure uses MIMIX AutoGuard™ to ensure your environment is ready to start replication. Shipped policy settings for MIMIX allow audits to automatically attempt recovery actions for any problems they detect. You should not use this procedure if you have already synchronized your systems using the Synchronize Data Group (SYNCDG) command or the automatic synchronization method in MIMIX IntelliStart.

The audits used in this procedure will:

• Verify that journaling is started on the source and target systems for the items you identified in the deployed replication patterns. Without journaling, replication will not occur.

• Verify that data is synchronized between systems. Audits will detect potential problems with synchronization and attempt to automatically recover differences found.

Do the following:

1. Check whether all necessary journaling is started for each data group. Enter the following command:

(installation-library-name)/DSPDGSTS DGDFN(data-group-name) VIEW(*DBFETE)

On the File and Tracking Entry Status display, the File Entries column identifies how many file entries were configured from your replication patterns and indicates whether any file entries are not journaled on the source and target systems. If you are configured for advanced journaling, the Tracking Entries columns provide similar information.

2. Use MIMIX AutoGuard to audit your environment. To access the audits, enter the following command:

(installation-library-name)/WRKAUD

3. Each audit listed on the Work with Audits display is a unique combination of data group and MIMIX rule. When verifying an initial configuration, you need to perform a subset of the available audits for each data group in a specific order, shown in Table 67. Do the following:

a. To change the number of active audits at any one time, enter the following command:

CHGJOBQE SBSD(MIMIXQGPL/MIMIXSBS) JOBQ(MIMIXQGPL/MIMIXVFY) MAXACT(*NOMAX)

b. Use F18 (Subset) to subset the audits by the name of the rule you want to run.

c. Type a 9 (Run rule) next to the audit for each data group and press Enter.


Repeat Step 3b and Step 3c for each rule in Table 67 until you have started all the listed audits for all data groups.

d. Reset the number of active audit jobs to values consistent with regular auditing:

CHGJOBQE SBSD(MIMIXQGPL/MIMIXSBS) JOBQ(MIMIXQGPL/MIMIXVFY) MAXACT(5)

4. Wait for all audits to complete. Some audits may take time to complete. Then check the results and resolve any problems. You may need to change subsetting values again so you can view all rule and data group combinations at once. On the Work with Audits display, check the Audit Status column for the following value:

*NOTRCVD - The comparison performed by the rule detected differences. Some of the differences were not automatically recovered. Action is required. View notifications for more information and resolve the problem.

Note: See the MIMIX AutoGuard document for more information about viewing audit results.

Table 67. Rules for initial validation, listed in the order to be performed.

Rule Name
1. #DGFE
2. #OBJATR
3. #FILATR
4. #IFSATR
5. #FILATRMBR
6. #DLOATR
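The audit-verification flow in steps 1 through 3 can be sketched as a command sequence. The installation library MIMIX and data group INVDG are hypothetical; the CHGJOBQE commands are taken verbatim from the steps above:

```
MIMIX/DSPDGSTS DGDFN(INVDG) VIEW(*DBFETE)   /* confirm journaling status       */
CHGJOBQE SBSD(MIMIXQGPL/MIMIXSBS) JOBQ(MIMIXQGPL/MIMIXVFY) MAXACT(*NOMAX)
MIMIX/WRKAUD                                /* run each rule in Table 67 order */
CHGJOBQE SBSD(MIMIXQGPL/MIMIXSBS) JOBQ(MIMIXQGPL/MIMIXVFY) MAXACT(5)
```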


Synchronizing database files

The procedures in this topic use the Synchronize DG File Entry (SYNCDGFE) command to synchronize selected database files associated with a data group between two systems. If you use this command when performing the initial synchronization of a data group, use the procedure from the source system to send database files to the target system.

You should be aware of the information in the following topics:

• “Considerations for synchronizing using MIMIX commands” on page 474

• “About synchronizing file entries (SYNCDGFE command)” on page 480.

To synchronize a database file between two systems using the SYNCDGFE command defaults, do the following or use the alternative process described below:

1. From the Work with DG Definitions display, type 17 (File entries) next to the data group to which the file you want to synchronize is defined and press Enter.

2. The Work with DG File Entries display appears. Type 16 (Sync DG file entry) next to the file entry for the file you want to synchronize and press Enter.

Note: If you are synchronizing file entries as part of your initial configuration, you can type 16 next to the first file entry and then press F13 (Repeat). When you press Enter, all file entries will be synchronized.

Alternative Process:

You will need to identify the data group and data group file entry in this procedure. In Step 8 and Step 9, you will need to make choices about the sending mode and trigger support. For additional information, see “About synchronizing file entries (SYNCDGFE command)” on page 480.

1. From the MIMIX Intermediate Main Menu, select option 12 (Compare, verify, and synchronize menu) and press Enter.

2. From the MIMIX Compare, Verify, and Synchronize menu, select option 41 (Synchronize DG File Entry) and press Enter.

3. The Synchronize DG File Entry (SYNCDGFE) display appears. At the Data group definition prompts, specify the name of the data group to which the file is associated.

4. At the System 1 file and Library prompts, specify the name of the database file you want to synchronize and the library in which it is located on system 1.

5. If you want to synchronize only one member of a file, specify its name at the Member prompt.

6. At the Data source prompt, ensure that the value matches the system that you want to use as the source for the synchronization.

7. The default value *YES for the Release wait prompt indicates that MIMIX will hold the file entry in a release-wait state until a synchronization point is reached. Then it will change the status to active. If you want to hold the file entry for your intervention, specify *NO.


8. At the Sending mode prompt, specify the value for the type of data to be synchronized.

9. At the Disable triggers on file prompt, specify whether the database apply process should disable triggers when processing the file. Accept *DGFE to use the value specified in the data group file entry or specify another value. If you specified *DATA for the Sending mode prompt, skip to Step 14.

10. At the Save active prompt, accept *NO so that objects in use are not saved, or specify another value.

11. At the Save active wait time prompt, specify the number of seconds to wait for a commit boundary or a lock on the object before continuing the save.

12. At the Allow object differences prompt, accept the default or specify *YES to indicate whether certain differences encountered during the restore of the object on the target system should be allowed.

13. At the Include logical files prompt, accept the default or specify *NO to indicate whether you want to include attached logical files when sending the file.

14. To change any of the additional parameters, press F10 (Additional parameters). Verify that the values shown for Include related files, Maximum sending file size (MB) and Submit to batch are what you want.

15. To synchronize the file, press Enter.
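From a command line, the prompted procedure above can also be started directly. This is a sketch only: the installation library MIMIX and data group INVDG are hypothetical, and the remaining parameters (file, member, data source, sending mode, trigger handling) are best supplied through the prompt display:

```
MIMIX/SYNCDGFE DGDFN(INVDG)   /* press F4 to prompt the remaining parameters */
```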


Synchronizing objects

The procedures in this topic use the Synchronize Object (SYNCOBJ) command to synchronize library-based objects between two systems. The objects to be synchronized can be defined to a data group or can be independent of a data group.

You should be aware of the information in the following topics:

• “Considerations for synchronizing using MIMIX commands” on page 474

• “About MIMIX commands for synchronizing objects, IFS objects, and DLOs” on page 478

To synchronize library-based objects associated with a data group

To synchronize objects between two systems that are identified for replication by data group object entries, do the following:

1. From the MIMIX Intermediate Main Menu, select option 12 (Compare, verify, and synchronize menu) and press Enter.

2. From the MIMIX Compare, Verify, and Synchronize Menu, select option 42 (Synchronize object) and press Enter. The Synchronize Object (SYNCOBJ) command appears.

3. At the Data group definition prompts, specify the data group for which you want to synchronize objects.

Note: If you run this command from a target system, you must specify the name of a data group to avoid overwriting the objects on the source system.

4. To synchronize all objects identified by data group object entries for this data group, skip to Step 5. To synchronize a subset of objects defined to the data group, at the Object prompts specify elements for one or more object selectors to act as filters on the objects defined to the data group. For more information, see “Object selection for Compare and Synchronize commands” on page 399.

You can specify as many as 300 object selectors by using the + for more prompt for each selector. For each selector, do the following:

a. At the Object and library prompts, specify the name or the generic value you want.

b. At the Object type prompt, accept *ALL or specify a specific object type to synchronize.

c. At the Object attribute prompt, accept *ALL to synchronize the entire list of supported attributes or press F4 to select from a list of attributes.

d. At the Include or omit prompt, accept *INCLUDE to include the object for synchronization or specify *OMIT to omit the object from synchronization.

Note: The System 2 object and System 2 library prompts are ignored when a data group is specified.

e. Press Enter.

5. At the Synchronize authorities prompt, accept *YES to synchronize both authorities and objects or specify another value.

6. At the Save active prompt, accept *NO to specify that objects in use are not saved, or specify another value.

7. At the Save active wait time prompt, specify the number of seconds to wait for a commit boundary or a lock on the object before continuing the save.

8. At the Maximum sending size (MB) prompt, specify the maximum size that an object can be and still be synchronized.

Note: When a data group is specified the following parameters are ignored: System 1 ASP group or device, System 2 ASP device number, and System 2 ASP device name.

9. Determine how the synchronize request will be processed. Choose one of the following:

• To submit the job for batch processing, accept the default value *YES for the Submit to batch prompt and press Enter. Continue with the next step.

• To not use batch processing for the job, specify *NO for the Submit to batch prompt and press Enter. The request to synchronize will be started.

10. At the Job description and Library prompts, specify the name and library of the job description used to submit the batch request.

11. At the Job name prompt, accept *CMD to use the command name to identify the job or specify a simple name.

12. To start the synchronization, press Enter.
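A minimal sketch of the data-group form of the command follows. The names MIMIX and INVDG are hypothetical, and the element order shown for the Object (OBJ) selector is an assumption based on the prompts described above; prompt with F4 to confirm:

```
/* synchronize everything defined by the data group's object entries */
MIMIX/SYNCOBJ DGDFN(INVDG)

/* synchronize a subset: one file (hypothetical names), included */
MIMIX/SYNCOBJ DGDFN(INVDG) OBJ((ORDHIST APPLIB *FILE *ALL *INCLUDE))
```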

To synchronize library-based objects without a data group

To synchronize objects between two systems, do the following:

1. From the MIMIX Intermediate Main Menu, select option 12 (Compare, verify, and synchronize menu) and press Enter.

2. From the MIMIX Compare, Verify, and Synchronize Menu, select option 42 (Synchronize object) and press Enter. The Synchronize Object (SYNCOBJ) command appears.

3. At the Data group definition prompts, specify *NONE.

4. At the Object prompts, specify elements for one or more object selectors that identify objects to synchronize.

You can specify as many as 300 object selectors by using the + for more prompt for each selector. For more information, see “Object selection for Compare and Synchronize commands” on page 399.

For each selector, do the following:

a. At the Object and library prompts, specify the name or the generic value you want.

b. At the Object type prompt, accept *ALL or specify a specific object type to synchronize.


c. At the Object attribute prompt, accept *ALL to synchronize the entire list of supported attributes or press F4 to see a valid list of attributes.

d. At the Include or omit prompt, accept *INCLUDE to include the object for synchronization or specify *OMIT to omit the object from synchronization.

e. At the System 2 object and System 2 library prompts, if the object and library names on system 2 are equal to the system 1 names, accept the defaults. Otherwise, specify the name of the object and library on system 2 to which you want to synchronize the objects.

f. Press Enter.

5. At the System 2 parameter prompt, specify the name of the remote system to which to synchronize the objects.

6. At the Synchronize authorities prompt, accept *YES to synchronize both authorities and objects or specify another value.

Note: When you specify *ONLY and no data group name is specified, the command could fail if any files it processes are cooperatively processed by an active data group and the database apply job holds a lock on those files.

7. At the Save active prompt, accept *NO to specify that objects in use are not saved or specify another value.

8. At the Save active wait time prompt, specify the number of seconds to wait for a commit boundary or a lock on the object before continuing the save.

9. At the Maximum sending size (MB) prompt, specify the maximum size that an object can be and still be synchronized.

10. At the System 1 ASP group or device prompt, specify the name of the auxiliary storage pool (ASP) group or device where objects configured for replication may reside on system 1. Otherwise, accept the default to use the current job’s ASP group name.

11. At the System 2 ASP device number prompt, specify the number of the auxiliary storage pool (ASP) where objects configured for replication may reside on system 2. Otherwise, accept the default to use the same ASP number from which the object was saved (*SAVASP). Only the libraries in the system ASP and any basic user ASPs from system 2 will be in the library name space.

12. At the System 2 ASP device name prompt, specify the name of the auxiliary storage pool (ASP) device where objects configured for replication may reside on system 2. Otherwise, accept the default to use the value specified for the system 1 ASP group or device (*ASPGRP1).

13. Determine how the synchronize request will be processed. Choose one of the following:

• To submit the job for batch processing, accept the default value *YES for the Submit to batch prompt and press Enter.

• To not use batch processing for the job, specify *NO for the Submit to batch prompt and press Enter. The request to synchronize will be started.


14. At the Job description and Library prompts, specify the name and library of the job description used to submit the batch request.

15. At the Job name prompt, accept *CMD to use the command name to identify the job or specify a simple name.

16. To start the synchronization, press Enter.
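When no data group is named, the target system must be identified explicitly. In this sketch the SYS2 keyword is assumed from the System 2 parameter prompt, and all object and system names are hypothetical:

```
/* synchronize APPLIB/ORDHIST to system SYSB without a data group */
MIMIX/SYNCOBJ DGDFN(*NONE) OBJ((ORDHIST APPLIB *FILE *ALL *INCLUDE)) SYS2(SYSB)
```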


Synchronizing IFS objects

The procedures in this topic use the Synchronize IFS Object (SYNCIFS) command to synchronize IFS objects between two systems. The IFS objects to be synchronized can be defined to a data group or can be independent of a data group.

You should be aware of the information in the following topics:

• “Considerations for synchronizing using MIMIX commands” on page 474

• “About MIMIX commands for synchronizing objects, IFS objects, and DLOs” on page 478

To synchronize IFS objects associated with a data group

To synchronize IFS objects between two systems that are identified for replication by data group IFS entries, do the following:

1. From the MIMIX Intermediate Main Menu, select option 12 (Compare, verify, and synchronize menu) and press Enter.

2. From the MIMIX Compare, Verify, and Synchronize menu, select option 43 (Synchronize IFS object) and press Enter. The Synchronize IFS Object (SYNCIFS) command appears.

3. At the Data group definition prompts, specify the data group for which you want to synchronize objects.

Note: If you run this command from a target system, you must specify the name of a data group to avoid overwriting the objects on the source system.

4. To synchronize all IFS objects identified by data group IFS entries for this data group, skip to Step 5. To synchronize a subset of IFS objects defined to the data group, at the IFS objects prompts specify elements for one or more object selectors to act as filters to the objects defined to the data group. For more information, see “Object selection for Compare and Synchronize commands” on page 399.

You can specify as many as 300 object selectors by using the + for more prompt for each selector. For each selector, do the following:

a. At the Object path name prompt, you can optionally accept *ALL or specify the name or generic value you want.

Note: The IFS object path name can be used alone or in combination with FID values. See Step 12.

b. At the Directory subtree prompt, accept *NONE or specify *ALL to define the scope of IFS objects to be processed.

c. At the Name pattern prompt, specify a value if you want to place an additional filter on the last component of the IFS object path name.

d. At the Object type prompt, accept *ALL or specify a specific IFS object type to synchronize.


e. At the Include or omit prompt, accept *INCLUDE to include the object for synchronization or specify *OMIT to omit the object from synchronization.

Note: The System 2 object path name and System 2 name pattern values are ignored when a data group is specified.

f. Press Enter.

5. At the Synchronize authorities prompt, accept *YES to synchronize both authorities and objects or specify another value.

6. At the Save active prompt, accept *NO to specify that objects in use are not saved or specify another value.

7. If you chose values in Step 6 to save active objects, you can optionally specify additional options at the Save active option prompt. Press F1 (Help) for additional information.

8. At the Maximum sending size (MB) prompt, specify the maximum size that an object can be and still be synchronized.

9. Determine how the synchronize request will be processed. Choose one of the following:

• To submit the job for batch processing, accept the default value *YES for the Submit to batch prompt and press Enter. Continue with the next step.

• To not use batch processing for the job, specify *NO for the Submit to batch prompt and press Enter. Continue with Step 12.

10. At the Job description and Library prompts, specify the name and library of the job description used to submit the batch request.

11. At the Job name prompt, accept *CMD to use the command name to identify the job or specify a simple name.

12. To optionally specify a file identifier (FID) for the object on either system, do the following:

a. At the System 1 file identifier prompt, specify the file identifier (FID) of the IFS object on system 1. Values for System 1 file identifier prompt can be used alone or in combination with the IFS object path name.

b. At the System 2 file identifier prompt, specify the file identifier (FID) of the IFS object on system 2. Values for System 2 file identifier prompt can be used alone or in combination with the IFS object path name.

Note: For more information, see “Using file identifiers (FIDs) for IFS objects” on page 312.

13. To start the synchronization, press Enter.
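A minimal sketch of the data-group form of SYNCIFS follows. The names MIMIX and INVDG are hypothetical, and the OBJ selector element order is an assumption based on the prompts described above; prompt with F4 to confirm:

```
/* synchronize all IFS objects defined by the data group's IFS entries */
MIMIX/SYNCIFS DGDFN(INVDG)

/* synchronize one path (hypothetical), including its directory subtree */
MIMIX/SYNCIFS DGDFN(INVDG) OBJ(('/orders' *ALL *ALL *ALL *INCLUDE))
```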

To synchronize IFS objects without a data group

To synchronize IFS objects not associated with a data group between two systems, do the following:

1. From the MIMIX Intermediate Main Menu, select option 12 (Compare, verify, and synchronize menu) and press Enter.


2. From the MIMIX Compare, Verify, and Synchronize menu, select option 43 (Synchronize IFS object) and press Enter. The Synchronize IFS Object (SYNCIFS) command appears.

3. At the Data group definition prompts, specify *NONE.

4. At the IFS objects prompts, specify elements for one or more object selectors that identify IFS objects to synchronize. You can specify as many as 300 object selectors by using the + for more prompt for each selector. For more information, see “Object selection for Compare and Synchronize commands” on page 399.

For each selector, do the following:

a. At the Object path name prompt, you can optionally accept *ALL or specify the name or generic value you want.

Note: The IFS object path name can be used alone or in combination with FID values. See Step 13.

b. At the Directory subtree prompt, accept *NONE or specify *ALL to define the scope of IFS objects to be processed.

c. At the Name pattern prompt, specify a value if you want to place an additional filter on the last component of the IFS object path name.

d. At the Object type prompt, accept *ALL or specify a specific IFS object type to synchronize.

e. At the Include or omit prompt, accept *INCLUDE to include the object for synchronization or specify *OMIT to omit the object from synchronization.

f. At the System 2 object path name and System 2 name pattern prompts, if the IFS object path name and name pattern on system 2 are equal to the system 1 names, accept the defaults. Otherwise, specify the path name and pattern on system 2 to which you want to synchronize the IFS objects.

g. Press Enter.

5. At the System 2 parameter prompt, specify the name of the remote system on which to synchronize the IFS objects.

6. At the Synchronize authorities prompt, accept *YES to synchronize both authorities and objects or specify another value.

7. At the Save active prompt, accept *NO to specify that objects in use are not saved or specify another value.

8. If you chose values in Step 7 to save active objects, you can optionally specify additional options at the Save active option prompt. Press F1 (Help) for additional information.

9. At the Maximum sending size (MB) prompt, specify the maximum size that an object can be and still be synchronized.

10. Determine how the synchronize request will be processed. Choose one of the following:

• To submit the job for batch processing, accept the default value *YES for the Submit to batch prompt and press Enter. Continue with the next step.


• To not use batch processing for the job, specify *NO for the Submit to batch prompt and press Enter. Continue with Step 13.

11. At the Job description and Library prompts, specify the name and library of the job description used to submit the batch request.

12. At the Job name prompt, accept *CMD to use the command name to identify the job or specify a simple name.

13. To optionally specify a file identifier (FID) for the object on either system, do the following:

a. At the System 1 file identifier prompt, specify the file identifier (FID) of the IFS object on system 1. Values for System 1 file identifier prompt can be used alone or in combination with the IFS object path name.

b. At the System 2 file identifier prompt, specify the file identifier (FID) of the IFS object on system 2. Values for System 2 file identifier prompt can be used alone or in combination with the IFS object path name.

Note: For more information, see “Using file identifiers (FIDs) for IFS objects” on page 312.

14. To start the synchronization, press Enter.
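As a reference, the prompts in this procedure correspond to a single command string. The following is a hypothetical sketch: the keyword names (DGDFN, OBJ, SYS2, SYNCAUT), the element order within the object selector, and the path and system names are assumptions, not taken from the command definition. Prompt SYNCIFS with F4 to confirm the actual keywords and values:

```
/* Hypothetical sketch; keyword names and selector element order are  */
/* assumed. Synchronizes objects under a directory to system CHICAGO. */
SYNCIFS DGDFN(*NONE)
        OBJ(('/home/payroll/*' *NONE *ALL *INCLUDE))
        SYS2(CHICAGO)
        SYNCAUT(*YES)
```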


Synchronizing DLOs

The procedures in this topic use the Synchronize DLO (SYNCDLO) command to synchronize document library objects (DLOs) between two systems. The DLOs to be synchronized can be defined to a data group or can be independent of a data group.

You should be aware of the information in the following topics:

• “Considerations for synchronizing using MIMIX commands” on page 474

• “About MIMIX commands for synchronizing objects, IFS objects, and DLOs” on page 478

To synchronize DLOs associated with a data group

To synchronize DLOs between two systems that are identified for replication by data group DLO entries, do the following:

1. From the MIMIX Intermediate Main Menu, select option 12 (Compare, verify, and synchronize menu) and press Enter.

2. From the MIMIX Compare, Verify, and Synchronize Menu, select option 44 (Synchronize DLO) and press Enter. The Synchronize DLO (SYNCDLO) command appears.

3. At the Data group definition prompts, specify the data group for which you want to synchronize DLOs.

Note: If you run this command from a target system, you must specify the name of a data group to avoid overwriting the objects on the source system.

4. To synchronize all objects identified by data group DLO entries for this data group, skip to Step 5. To synchronize a subset of objects defined to the data group, at the Document library objects prompts specify elements for one or more object selectors to act as filters to DLOs defined to the data group. For more information, see “Object selection for Compare and Synchronize commands” on page 399.

You can specify as many as 300 object selectors by using the + for more prompt for each selector. For each selector, do the following:

a. At the DLO path name prompt, accept *ALL or specify the name or the generic value you want.

b. At the Folder subtree prompt, accept *NONE or specify *ALL to define the scope of DLOs to be processed.

c. At the Name pattern prompt, specify a value if you want to place an additional filter on the last component of the DLO path name.

d. At the DLO type prompt, accept *ALL or specify a specific DLO type to synchronize.

e. At the Owner prompt, accept *ALL or specify the owner of the DLO.

f. At the Include or omit prompt, accept *INCLUDE to include the object for synchronization or specify *OMIT to omit the object from synchronization.

Note: The System 2 DLO path name and System 2 DLO name pattern values are ignored when a data group is specified.

g. Press Enter.

5. At the Synchronize authorities prompt, accept *YES to synchronize both authorities and objects or specify another value.

6. At the Save active prompt, accept *NO to specify that objects in use are not saved or specify another value.

7. At the Save active wait time prompt, specify the number of seconds to wait for a lock on the object before continuing the save.

8. At the Maximum sending size (MB) prompt, specify the maximum size that an object can be and still be synchronized.

9. Determine how the synchronize request will be processed. Choose one of the following:

• To submit the job for batch processing, accept the default value *YES for the Submit to batch prompt and press Enter. Continue with the next step.

• To not use batch processing for the job, specify *NO for the Submit to batch prompt and press Enter. The request to synchronize will be started.

10. At the Job description and Library prompts, specify the name and library of the job description used to submit the batch request.

11. At the Job name prompt, accept *CMD to use the command name to identify the job or specify a simple name.

12. To start the synchronization, press Enter.
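For reference, steps 3 through 12 could be entered as one command string. This sketch assumes the keyword names and the three-part data group name (name, system 1, system 2); press F4 on SYNCDLO to verify:

```
/* Hypothetical sketch; keyword names assumed. Synchronizes all DLOs  */
/* defined to data group MYDG.                                        */
SYNCDLO DGDFN(MYDG SYSA SYSB)
        DLO((*ALL *NONE *ALL *ALL *ALL *INCLUDE))
```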

To synchronize DLOs without a data group

To synchronize DLOs not associated with a data group between two systems, do the following:

1. From the MIMIX Intermediate Main Menu, select option 12 (Compare, verify, and synchronize menu) and press Enter.

2. From the MIMIX Compare, Verify, and Synchronize Menu, select option 44 (Synchronize DLO) and press Enter. The Synchronize DLO (SYNCDLO) command appears.

3. At the Data group definition prompts, specify *NONE.

4. At the Document library objects prompts, specify elements for one or more object selectors that identify DLOs to synchronize.

You can specify as many as 300 object selectors by using the + for more prompt for each selector. For more information, see “Object selection for Compare and Synchronize commands” on page 399.

For each selector, do the following:

a. At the DLO path name prompt, accept *ALL or specify the name or the generic value you want.

b. At the Folder subtree prompt, accept *NONE or specify *ALL to define the scope of DLOs to be processed.


c. At the Name pattern prompt, specify a value if you want to place an additional filter on the last component of the DLO path name.

d. At the DLO type prompt, accept *ALL or specify a specific DLO type to synchronize.

e. At the Owner prompt, accept *ALL or specify the owner of the DLO.

f. At the Include or omit prompt, accept *INCLUDE to include the object for synchronization or specify *OMIT to omit the object from synchronization.

g. At the System 2 DLO path name and System 2 DLO name pattern prompts, if the DLO path name and name pattern on system 2 are equal to the system 1 names, accept the defaults. Otherwise, specify the path name and pattern on system 2 to which you want to synchronize the DLOs.

h. Press Enter.

5. At the System 2 parameter prompt, specify the name of the remote system on which to synchronize the DLOs.

6. At the Synchronize authorities prompt, accept *YES to synchronize both authorities and objects or specify another value.

7. At the Save active prompt, accept *NO to specify that objects in use are not saved or specify another value.

8. At the Save active wait time prompt, specify the number of seconds to wait for a lock on the object before continuing the save.

9. At the Maximum sending size (MB) prompt, specify the maximum size that an object can be and still be synchronized.

10. Determine how the synchronize request will be processed. Choose one of the following:

• To submit the job for batch processing, accept the default value *YES for the Submit to batch prompt and press Enter. Continue with the next step.

• To not use batch processing for the job, specify *NO for the Submit to batch prompt and press Enter. The request to synchronize will be started.

11. At the Job description and Library prompts, specify the name and library of the job description used to submit the batch request.

12. At the Job name prompt, accept *CMD to use the command name to identify the job or specify a simple name.

13. To start the synchronization, press Enter.
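The equivalent command string for this procedure might look like the following sketch. The keyword names, selector element order, and names shown are assumptions; prompt the command with F4 to confirm:

```
/* Hypothetical sketch; keyword names assumed. Synchronizes DLOs in   */
/* folder FINANCE to system CHICAGO without a data group.             */
SYNCDLO DGDFN(*NONE)
        DLO(('FINANCE/*' *NONE *ALL *ALL *ALL *INCLUDE))
        SYS2(CHICAGO)
        SYNCAUT(*YES)
```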


Synchronizing data group activity entries

The procedures in this topic use the Synchronize DG Activity Entry (SYNCDGACTE) command to synchronize an object that is identified by a data group activity entry with any status value: *ACTIVE, *DELAYED, *FAILED, or *COMPLETED.

You should be aware of the information in the following topics:

• “Considerations for synchronizing using MIMIX commands” on page 474

• “About synchronizing data group activity entries (SYNCDGACTE)” on page 479

To synchronize an object identified by a data group activity entry, do the following:

1. From the Work with Data Group Activity Entry display, type 16 (Synchronize) next to the activity entry that identifies the object you want to synchronize and press Enter.

2. The Confirm Synchronize of Object display appears. Press Enter to confirm the synchronization.

Alternative Process:

You will need to identify the data group and data group activity entry in this procedure.

1. From the MIMIX Intermediate Main Menu, select option 12 (Compare, verify, and synchronize menu) and press Enter.

2. From the MIMIX Compare, Verify, and Synchronize menu, select option 45 (Synchronize DG File Entry) and press Enter.

3. At the Data group definition prompts, specify the data group name.

4. At the Object type prompt, specify a specific object type to synchronize or press F4 to see a valid list.

5. Additional parameters appear based on the object type selected. Do one of the following:

• For files, you will see the Object, Library, and Member prompts. Specify the object, library and member that you want to synchronize.

• For objects, you will see the Object and Library prompts. Specify the object and library of the object you want to synchronize.

• For IFS objects, you will see the IFS object prompt. Specify the IFS object that you want to synchronize.

• For DLOs, you will see the Document library object and Folder prompts. Specify the folder path and DLO name of the DLO you want to synchronize.

6. Determine how the synchronize request will be processed. Choose one of the following:

• To submit the job for batch processing, accept the default value *YES for the Submit to batch prompt and press Enter. Continue with the next step.

• To not use batch processing for the job, specify *NO for the Submit to batch prompt and press Enter. The request to synchronize will be started.


7. At the Job description and Library prompts, specify the name and library of the job description used to submit the batch request.

8. At the Job name prompt, accept *CMD to use the command name to identify the job or specify a simple name.

9. To start the synchronization, press Enter.
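For reference, the alternative process might be entered as a single command string such as the following sketch. The keyword names and the object, library, and data group names are assumptions; prompt the command with F4 to confirm:

```
/* Hypothetical sketch; keyword names assumed. Synchronizes the file  */
/* identified by a data group activity entry.                         */
SYNCDGACTE DGDFN(MYDG SYSA SYSB)
           OBJTYPE(*FILE)
           OBJ(ORDERS) LIB(APPLIB) MBR(*ALL)
```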


Synchronizing tracking entries

Tracking entries are MIMIX constructs that identify IFS objects, data areas, or data queues configured for replication with MIMIX advanced journaling. You can use a tracking entry to synchronize the contents, attributes, and authorities of the item it represents.

You should be aware of the information in the following topics:

• “Considerations for synchronizing using MIMIX commands” on page 474

• “About MIMIX commands for synchronizing objects, IFS objects, and DLOs” on page 478

• “About synchronizing tracking entries” on page 482

To synchronize an IFS tracking entry

To synchronize an object represented by an IFS tracking entry, do the following:

1. From the Work with DG IFS Tracking Entries (WRKDGIFSTE) display, type option 16 (Synchronize) next to the IFS tracking entry you want to synchronize. If you want to change options on the SYNCIFS command, press F4 (Prompt).

2. To synchronize the associated IFS object, press Enter.

3. When the apply session has been notified that the object has been synchronized, the status will change to *ACTIVE. To monitor the status, press F5 (Refresh).

4. If the synchronization fails, correct the errors and repeat the previous steps.

To synchronize an object tracking entry

To synchronize an object represented by an object tracking entry, do the following:

1. From the Work with DG Object Tracking Entries (WRKDGOBJTE) display, type option 16 (Synchronize) next to the object tracking entry you want to synchronize. If you want to change options on the SYNCOBJ command, press F4 (Prompt).

2. To synchronize the associated data area or data queue, press Enter.

3. When the apply session has been notified that the object has been synchronized, the status will change to *ACTIVE. To monitor the status, press F5 (Refresh).

4. If the synchronization fails, correct the errors and repeat the previous steps.


Sending library-based objects

This procedure sends one or more library-based objects between two systems using the Send Network Object (SNDNETOBJ) command.

Use the appropriate command: In general, you should use the SYNCOBJ command to synchronize objects between systems. For more information about differences between commands, see “Performing the initial synchronization” on page 483.

You should be familiar with the information in the following topics before you use this command:

• “Considerations for synchronizing using MIMIX commands” on page 474

• “Synchronizing user profiles with the SNDNETOBJ command” on page 475

• “Missing system distribution directory entries automatically added” on page 476

To send library-based objects between two systems, do the following:

1. If the objects you are sending are located in an independent auxiliary storage pool (ASP) on the source system, you must use the IBM command Set ASP Group (SETASPGRP) on the local system to change the ASP group for your job. This allows MIMIX to access the objects.

2. From the MIMIX Intermediate Main Menu, select option 13 (Utilities menu) and press Enter.

3. The MIMIX Utilities Menu appears. Select option 11 (Send object) and press Enter.

4. The Send Network Object (SNDNETOBJ) display appears. At the Object prompt, specify either *ALL, the name of an object, or a generic name.

Note: You can specify as many as 50 objects. To expand this prompt for multiple entries, type a plus sign (+) at the prompt and press Enter.

5. Specify the name of the library that contains the objects at the Library prompt.

6. Specify the type of objects to be sent from the specified library at the Object type prompt.

Notes:

• If you specify *ALL, all object types supported by the i5/OS Save Object (SAVOBJ) command are selected. The single values that are listed for this parameter are not included when *ALL is specified because they are not supported by the i5/OS SAVOBJ command.

• To expand this field for multiple entries, type a plus sign (+) at the prompt and press Enter.

7. Press Enter.

8. Additional prompts appear on the display. Do the following:

a. Specify the name of the system to which you are sending objects at the Remote system prompt.


b. If the library on the remote system has a different name, specify its name at the Remote library prompt.

c. The remaining prompts on the display are used for objects synchronized via a save and restore operation. Verify that the values shown are what you want. To see a description of each prompt and its available values, place the cursor on the prompt and press F1 (Help).

9. By default, objects are restored to the same ASP device or number from which they were saved. To change the location where objects are restored, press F10 (Additional parameters), then specify a value for either the Restore to ASP device prompt or the Restore to ASP number prompt.

Note: Object types *JRN, *JRNRCV, *LIB, and *SAVF can be restored to any ASP. IBM restricts which object types are allowed in user ASPs. Some object types may not be restored to user ASPs. Specifying a value of 1 restores objects to the system ASP. Specifying a value of 2 through 32 restores objects to the specified basic user ASP. If the specified ASP number does not exist on the target system or if it has overflowed, the objects are placed in the system ASP on the target system.

10. By default, authority to the object on the remote system is determined by that system. To have the authorities on the remote system determined by the settings of the local system, press F10 (Additional parameters), then specify *SRC at the Target authority prompt.

11. To start sending the specified objects, press Enter.
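For reference, the prompts above might correspond to a command string like the following sketch. The keyword names (OBJ, LIB, OBJTYPE, RMTSYS) and the object and system names are assumptions; prompt SNDNETOBJ with F4 to confirm:

```
/* Hypothetical sketch; keyword names assumed. Sends file ORDERS in   */
/* library APPLIB to system CHICAGO.                                  */
SNDNETOBJ OBJ(ORDERS) LIB(APPLIB) OBJTYPE(*FILE) RMTSYS(CHICAGO)
```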


Sending IFS objects

This procedure uses i5/OS save and restore functions to send one or more integrated file system (IFS) objects between two systems with the Send Network IFS (SNDNETIFS) command.

Use the appropriate command: In general, you should use the SYNCIFS command to synchronize IFS objects between systems. For more information about differences between commands, see “Performing the initial synchronization” on page 483.

You should be familiar with the information in “Considerations for synchronizing using MIMIX commands” on page 474.

To send IFS objects between two systems, do the following:

1. From the MIMIX Intermediate Main Menu, select option 13 (Utilities menu) and press Enter.

2. The MIMIX Utilities Menu appears. Select option 13 (Send IFS object) and press Enter.

3. The Send Network IFS (SNDNETIFS) display appears. At the Object prompt, specify the name of the IFS object to send.

Note: You can specify as many as 30 IFS objects. To expand this prompt for multiple entries, type a plus sign (+) at the prompt and press Enter.

4. Specify the name of the system to which you are sending IFS objects at the Remote system prompt.

5. Press F10 (Additional parameters).

6. Additional parameters appear which MIMIX uses in the save and restore operations. Verify that the values shown for the additional prompts are what you want. To see a description of each prompt and its available values, place the cursor on the prompt and press F1 (Help).

7. To start sending the specified IFS objects, press Enter.
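The equivalent command string might look like the following sketch; the keyword names and the path and system names are assumptions, so prompt SNDNETIFS with F4 to confirm:

```
/* Hypothetical sketch; keyword names assumed. Sends one IFS object   */
/* to system CHICAGO.                                                 */
SNDNETIFS OBJ('/home/payroll/rates.dat') RMTSYS(CHICAGO)
```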


Sending DLO objects

This procedure uses i5/OS save and restore functions to send one or more document library objects (DLOs) between two systems using the Send Network DLO (SNDNETDLO) command. When you are configuring for system journal replication, use this procedure from the source system to send DLOs to the target system for replication.

Use the appropriate command: In general, you should use the SYNCDLO command to synchronize objects between systems. For more information about differences between commands, see “Performing the initial synchronization” on page 483.

You should be familiar with the information in “Considerations for synchronizing using MIMIX commands” on page 474.

To send DLO objects between systems, do the following:

1. From the MIMIX Intermediate Main Menu, select option 13 (Utilities menu) and press Enter.

2. The MIMIX Utilities Menu appears. Select option 12 (Send DLO object) and press Enter.

3. The Send Network DLO (SNDNETDLO) display appears. At the Document library object prompt, specify either *ALL or the name of the DLO.

Note: You can specify multiple DLOs. To expand this prompt for multiple entries, type a plus sign (+) at the prompt and press Enter.

4. Specify the name of the folder that contains the DLOs at the Folder prompt.

5. Specify the name of the system to which you are sending DLOs at the Remote system prompt.

6. Press F10 (Additional parameters).

7. Additional parameters appear on the display. MIMIX uses the Remote folder, Save active, Save active wait time, and Allow object differences prompts in the save and restore operations. Verify that the values shown are what you want. To see a description of each prompt and its available values, place the cursor on the prompt and press F1 (Help).

8. By default, authority to the object on the remote system is determined by that system. To have the authorities on the remote system determined by the settings of the local system, specify *SRC at the Target authority prompt.

9. To start sending the specified DLOs, press Enter.
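The equivalent command string might look like the following sketch; the keyword names (DLO, FLR, RMTSYS) and the names shown are assumptions, so prompt SNDNETDLO with F4 to confirm:

```
/* Hypothetical sketch; keyword names assumed. Sends document         */
/* BUDGET01 in folder FINANCE to system CHICAGO.                      */
SNDNETDLO DLO(BUDGET01) FLR(FINANCE) RMTSYS(CHICAGO)
```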


Chapter 21

Introduction to programming

MIMIX includes a variety of functions that you can use to extend MIMIX capabilities through automation and customization.

The topics in this chapter include:

• “Support for customizing” on page 511 describes several functions you can use to customize your replication environment.

• “Completion and escape messages for comparison commands” on page 514 lists completion, diagnostic, and escape messages generated by comparison commands.

• The MIMIX message log provides a common location to see messages from all MIMIX products. “Adding messages to the MIMIX message log” on page 521 describes how you can include your own messaging from automation programs in the MIMIX message log.

• MIMIX supports batch output jobs on numerous commands and provides several forms of output, including outfiles. For more information, see “Output and batch guidelines” on page 523.

• “Displaying a list of commands in a library” on page 528 describes how to display the superset of all Lakeview commands known to License Manager or to subset the list by a particular library.

• “Running commands on a remote system” on page 529 describes how to run a single command or multiple commands on a remote system.

• “Procedures for running commands RUNCMD, RUNCMDS” on page 530 provides procedures for using run commands with a specific protocol or by specifying a protocol through existing MIMIX configuration elements.

• “Using lists of retrieve commands” on page 536 identifies how to use MIMIX list commands to include retrieve commands in automation.

• Commands are typically set with default values that reflect the recommendation of Lakeview Technology. “Changing command defaults” on page 537 provides a method for customizing default values should your business needs require it.


Support for customizing

MIMIX includes several functions that you can use to customize processing within your replication environment.

User exit points

User exit points are predefined points within a MIMIX process at which you can call customized programs to perform additional processing before the process continues.

MIMIX provides user exit points for journal receiver management. For more information, see Chapter 22, “Customizing with exit point programs.”

Collision resolution

In the context of high availability, a collision is a clash of data that occurs when a target object and a source object are both updated at the same time. When the change to the source object is replicated to the target object, the data does not match and the collision is detected.

With MIMIX user journal replication, the definition of a collision is expanded to include any condition where the status of a file or a record is not what MIMIX determines it should be when MIMIX applies a journal transaction. Examples of these detected conditions include the following:

• Updating a record that does not exist

• Deleting a record that does not exist

• Writing to a record that already exists

• Updating a record for which the current record information does not match the before image

The database apply process contains 12 collision points at which MIMIX can attempt to resolve a collision.

When a collision is detected, by default the file is placed on hold due to an error (*HLDERR) and user action is needed to synchronize the files. MIMIX provides additional ways to automatically resolve detected collisions without user intervention. This process is called collision resolution. With collision resolution, you can specify different resolution methods to handle these different types of collisions. If a collision does occur, MIMIX attempts the specified collision resolution methods until either the collision is resolved or the file is placed on hold.

You can specify collision resolution methods for a data group or for individual data group file entries. If you specify *AUTOSYNC for the collision resolution element of the file entry options, MIMIX attempts to fix any problems it detects by synchronizing the file.

You can also specify a named collision resolution class. A collision resolution class allows you to define what type of resolution to use at each of the collision points, lets you specify several resolution methods to try for each point, and supports the use of an exit program. These additional choices allow customized solutions for resolving collisions without requiring user action. For more information, see “Collision resolution” on page 381.


Completion and escape messages for comparison commands

When the comparison commands finish processing, a completion or escape message is issued. In the event of an escape message, a diagnostic message is issued prior to the escape message. The diagnostic message provides additional information regarding the error that occurred.

All completion or escape messages are sent to the MIMIX message log. You can work with the message log from either MIMIX Availability Manager or the 5250 emulator. To find messages for comparison commands, specify the name of the command as the process type. For more information about using the message log, see the Using MIMIX book.

CMPFILA messages

The following are the messages for CMPFILA, with a comparison level specification of *FILE:

• Completion LVI3E01 – This message indicates that all files were compared successfully.

• Diagnostic LVE3E0D – This message indicates that a particular attribute compared differently.

• Diagnostic LVE3385 – This message indicates that differences were detected for an active file.

• Diagnostic LVE3E12 – This message indicates that a file was not compared. The reason the file was not compared is included in the message.

• Escape LVE3E05 – This message indicates that files were compared with differences detected. If the cumulative differences include files that were different but active within the time span specified on the Maximum replication lag (MAXREPLAG) parameter, this message also includes those differences.

• Escape LVE3381 – This message indicates that compared files were different but active within the time span specified on the Maximum replication lag (MAXREPLAG) parameter.

• Escape LVE3E09 – This message indicates that the CMPFILA command ended abnormally.

• Escape LVE3E17 – This message indicates that no object matched the specified selection criteria.

• Informational LVI3E06 – This message indicates that no object was selected to be processed.

The following are the messages for CMPFILA, with a comparison level specification of *MBR:

• Completion LVI3E05 – This message indicates that all members compared successfully.

• Diagnostic LVE3388 – This message indicates that differences were detected for an active member.

• Escape LVE3E16 – This message indicates that members were compared with differences detected. If the cumulative differences include members that were different but active within the time span specified on the Maximum replication lag (MAXREPLAG) parameter, this message also includes those differences.

CMPOBJA messages

The following are the messages for CMPOBJA:

• Completion LVI3E02 – This message indicates that objects were compared but no differences were detected.

• Diagnostic LVE3384 – This message indicates that differences were detected for an active object.

• Escape LVE3E06 – This message indicates that objects were compared and differences were detected. If the cumulative differences include objects that were different but active within the time span specified on the Maximum replication lag (MAXREPLAG) parameter, this message also includes those differences.

• Escape LVE3380 – This message indicates that compared objects were different but active within the time span specified on the Maximum replication lag (MAXREPLAG) parameter.

• Escape LVE3E17 – This message indicates that no object matched the specified selection criteria.

• Informational LVI3E06 – This message indicates that no object was selected to be processed.

The LVI3E02 message includes message data containing the number of objects compared, the system 1 name, and the system 2 name. The LVE3E06 message includes the same message data and also the number of differences detected.

CMPIFSA messages

The following are the messages for CMPIFSA:

• Completion LVI3E03 – This message indicates that all IFS objects were compared successfully.

• Diagnostic LVE3E0F – This message indicates that a particular attribute compared differently.

• Diagnostic LVE3386 – This message indicates that differences were detected for an active IFS object.

• Diagnostic LVE3E14 – This message indicates that an IFS object was not compared. The reason the IFS object was not compared is included in the message.

• Escape LVE3E07 – This message indicates that IFS objects were compared with differences detected. If the cumulative differences include IFS objects that were different but active within the time span specified on the Maximum replication lag (MAXREPLAG) parameter, this message also includes those differences.


• Escape LVE3382 – This message indicates that compared IFS objects were different but active within the time span specified on the Maximum replication lag (MAXREPLAG) parameter.

• Escape LVE3E17 – This message indicates that no object matched the specified selection criteria.

• Escape LVE3E0B – This message indicates that the CMPIFSA command ended abnormally.

• Informational LVI3E06 – This message indicates that no object was selected to be processed.

CMPDLOA messages

The following are the messages for CMPDLOA:

• Completion LVI3E04 – This message indicates that all DLOs were compared successfully.

• Diagnostic LVE3E11 – This message indicates that a particular attribute compared differently.

• Diagnostic LVE3387 – This message indicates that differences were detected for an active DLO.

• Diagnostic LVE3E15 – This message indicates that a DLO was not compared. The reason the DLO was not compared is included in the message.

• Escape LVE3E08 – This message indicates that DLOs were compared and differences were detected. If the cumulative differences include DLOs that were different but active within the time span specified on the Maximum replication lag (MAXREPLAG) parameter, this message also includes those differences.

• Escape LVE3383 – This message indicates that compared objects were different but active within the time span specified on the Maximum replication lag (MAXREPLAG) parameter.

• Escape LVE3E17 – This message indicates that no object matched the specified selection criteria.

• Escape LVE3E0C – This message indicates that the CMPDLOA command ended abnormally.

• Informational LVI3E06 – This message indicates that no object was selected to be processed.

CMPRCDCNT messages

The following are the messages for CMPRCDCNT:

• Escape LVE3D4D – This message indicates that ACTIVE(*YES) outfile processing failed and identifies the reason code.

• Escape LVE3D5A – This message indicates that system journal replication is not active.

• Escape LVE3D5F – This message indicates that an apply session exceeded the unprocessed entry threshold.

• Escape LVE3D6D – This message indicates that user journal replication is not active.

• Escape LVE3D6F – This message identifies the number of members compared and how many compared members had differences.

• Escape LVE3D72 – This message identifies a child process that ended unexpectedly.

• Escape LVE3E17 – This message indicates that no object was found for the specified selection criteria.

• Informational LVI306B – This message identifies a child process that started successfully.

• Informational LVI306D – This message identifies a child process that completed successfully.

• Informational LVI3D45 – This message indicates that active processing completed.

• Informational LVI3D50 – This message indicates that work files are not deleted.

• Informational LVI3D5A – This message indicates that system journal replication is not active.

• Informational LVI3D5F – This message identifies an apply session that has exceeded the unprocessed entry threshold.

• Informational LVI3D6D – This message indicates that user journal replication is not active.

• Informational LVI3E05 – This message identifies the number of members compared. No differences were detected.

• Informational LVI3E06 – This message indicates that no object was selected for processing.

CMPFILDTA messages

The following are the messages for CMPFILDTA:

• Completion LVI3D59 – This message indicates that all members compared were identical or that one or more members differed but were then completely repaired.

• Diagnostic LVE3031 – This message indicates that the name of the local system was entered on the System 2 (SYS2) prompt. Using the name of the local system on the SYS2 prompt is not valid.

• Diagnostic LVE3D40 – This message indicates that a record in one of the members cannot be processed. In this case, another job is holding an update lock on the record and the wait time has expired.

• Diagnostic LVE3D42 – This message indicates that a selected member cannot be processed and provides a reason code.

• Diagnostic LVE3D46 – This message indicates that a file member contains one or more field types that are not supported for comparison. These fields are excluded from the data compared.

• Diagnostic LVE3D50 – This message indicates that a file member contains one or more large object (LOB) fields and a value other than *NONE was specified on the Repair on system (REPAIR) prompt. Files containing LOB fields cannot be repaired. In this case, the request to process the file member is ignored. Specify REPAIR(*NONE) to process the file member.

• Diagnostic LVE3D64 – This message indicates that the compare detected minor differences in a file member. In this case, one member has more records allocated. Excess allocated records are deleted. This difference, however, does not affect replication processing.

• Diagnostic LVE3D65 – This message indicates that processing failed for the selected member. The member cannot be compared. Error message LVE0101 is returned.

• Escape LVE3358 – This message indicates that the compare has ended abnormally, and is shown only when the conditions of messages LVI3D59, LVE3D5D, and LVE3D59 do not apply.

• Escape LVE3D5D – This message indicates that insignificant differences were found or remain after repair. The message provides a statistical summary of the differences found. Insignificant differences may occur when a member has deleted records while the corresponding member has no records yet allocated at the corresponding positions. It is also possible that one or more selected members contains excluded fields, such as large objects (LOBs).

• Escape LVE3D5E – This message indicates that the compare request ended because the data group was not fully active. The request included active processing (ACTIVE), which requires a fully active data group. Output may not be complete or accurate.

• Escape LVE3D5F – This message indicates that the apply session exceeded the specified threshold for unprocessed entries. The DB apply threshold (DBAPYTHLD) parameter determines what action should be taken when the threshold is exceeded. In this case, the value *END was specified for DBAPYTHLD, thereby ending the requested compare and repair action.

• Escape LVE3D59 – This message indicates that significant differences were found or remain after repair, or that one or more selected members could not be compared. The message provides a statistical summary of the differences found.

• Escape LVE3D56 – This message indicates that no member was selected by the object selection criteria.

• Escape LVE3D60 – This message indicates that the status of the data group could not be determined. The WRKDG (MXDGSTS) outfile returned a value of *UNKNOWN for one or more fields used in determining the overall status of the data group.

• Escape LVE3D62 – This message indicates the number of mismatches that will not be fully processed for a file due to the large number of mismatches found for this request. The compare will stop processing the affected file and will continue to process any other files specified on the same request.

• Escape LVE3D67 – This message indicates that the value specified for the File entry status (STATUS) parameter is not valid. To process members in *HLDERR status, a data group must be specified on the command and *YES must be specified for the Process while active parameter.

• Escape LVE3D68 – This message indicates that a switch cannot be performed due to members undergoing compare and repair processing.

• Escape LVE3D69 – This message indicates that the data group is not configured for database. Data groups used with the CMPFILDTA command must be configured for database, and all processes for that data group must be active.

• Escape LVE3D6C – This message indicates that the CMPFILDTA command ended before it could complete the requested action. The processing step in progress when the end was received is indicated. The message provides a statistical summary of the differences found.

• Escape LVE3E41 – This message indicates that a database apply job cannot process a journal entry with the indicated code, type, and sequence number because a supporting function failed. The journal information and the apply session for the data group are indicated. See the database apply job log for details of the failed function.

• Informational LVI3727 – This message indicates that the database apply process (DBAPY) is currently processing a repair request for a specific member. The member was previously being held due to error (*HLDERR) and is now in *CMPRLS state.

• Informational LVI3728 – This message indicates that the database apply process (DBAPY) is currently processing a repair request for a specific member. The member was previously being held due to error (*HLDERR) and has been changed from *CMPRLS to *CMPACT state.

• Informational LVI3729 – This message indicates that the repair request for a specific member was not successful. As a result, the CMPFILDTA command has changed the data group file entry for the member back to *HLDERR status.

• Informational LVI372C – The CMPFILDTA command is ending controlled because of a user request. The command did not complete the requested compare or repair. Its output may be incomplete or incorrect.

• Informational LVI372D – The CMPFILDTA command exceeded the maximum rule recovery time policy and is ending. The command did not complete the requested compare or repair. Its output may be incomplete or incorrect.

• Informational LVI372E – The CMPFILDTA command is ending unexpectedly. It received an unexpected request from the remote CMPFILDTA job to shut down and is ending. The command did not complete the requested compare or repair. Its output may be incomplete or incorrect.

• Informational LVI3D4B – This message indicates that work files are not automatically deleted because the time specified on the Wait time (seconds) (ACTWAIT) prompt expired or an internal error occurred.

• Informational LVI3D59 – This message indicates that the CMPFILDTA command completed successfully. The message also provides a statistical summary of compare processing.

• Informational LVI3D5E – This message indicates that the compare request ended because the request required active processing and the data group was not active. Results of the comparison may not be complete or accurate.

• Informational LVI3D5F – This message indicates that the apply session exceeded the specified threshold for unprocessed entries, thereby ending the requested compare and repair action. In this case, the value *END was specified for the DB apply threshold (DBAPYTHLD) parameter, which determines what action should be taken when the threshold is exceeded.

• Informational LVI3D60 – This message indicates that the status of the data group could not be determined. The MXDGSTS outfile returned a value of *UNKNOWN for one or more status fields associated with systems, journals, system managers, journal managers, system communications, remote journal link, and database send and apply processes.

• Informational LVI3E06 – This message indicates that the data group specified contains no data group file entries.

When active processing is requested with ACTWAIT(*NONE), or when the active wait time expires, some members will have unconfirmed differences if none of the differences initially found was verified by the MIMIX database apply process.

The CMPFILDTA outfile contains more detail on the results of each member compare, including information on the types of differences that are found and the number of differences found in each member.

Messages LVI3D59, LVE3D5D, and LVE3D59 include message data containing the number of members selected, the number of members compared, the number of members with confirmed differences, the number of members with unconfirmed differences, the number of members successfully repaired, and the number of members for which repair was unsuccessful.
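In automation programs, this message data can be captured after monitoring for the escape message. The following CL sketch is illustrative only: the CMPFILDTA selection parameters are omitted, BATCH(*NO) is assumed to be supported as described under "General batch considerations," and the exact layout of the message data should be confirmed against the message description.

    PGM
      DCL VAR(&MSGDTA) TYPE(*CHAR) LEN(512)
      /* Run the compare synchronously so escape messages return here */
      CMPFILDTA ... BATCH(*NO)
      /* LVE3D59: significant differences found or remain after repair */
      MONMSG MSGID(LVE3D59) EXEC(DO)
        /* Capture the statistical summary carried in the message data */
        RCVMSG MSGTYPE(*EXCP) RMV(*NO) MSGDTA(&MSGDTA)
        /* Parse &MSGDTA or notify an operator here */
      ENDDO
    ENDPGM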

Updated for 5.0.02.00.

Adding messages to the MIMIX message log

The Add Message Log Entry (ADDMSGLOGE) command allows you to add an entry to the MIMIX message log. This is helpful when you want to include messages from your automation programs in the MIMIX message log for easier tracking. To see the parameters for this command, type the command and press F4 (Prompt). The help text for the parameters describes the available options.

The message is written to the message log file. The message is also sent to the primary and secondary message queues if it meets the filter criteria for those queues. The message can also be sent to a program message queue.

Messages generated on a network system will be automatically sent to the management system. However, messages generated on a management system may not be sent to any network systems. The system manager on the management system does not send messages to network systems when it cannot determine which system should receive the message.

Output and batch guidelines

This topic provides guidelines for display, print, and file output. It also describes the user interface, the mechanics of selecting and producing output, and content issues such as formatting.

Batch job submission guidelines are also provided. These guidelines address the user interface as well as the mechanics of submitting batch jobs that are not part of the mainline replication process.

General output considerations

Commands can produce many forms of output, including messages, display output (interactive panels), printer output (spooled files), and file output. This section focuses primarily on display, print, and file-related output. In most cases, the output information can be selectively directed to a display, a printer, or an outfile. Messages, on the other hand, are intended to provide diagnostic or status-related information, or an indication of error conditions; they are not intended for general output.

Several commands support display, print, output files, or some combination thereof. The Work (WRK) and Display (DSP) commands are the most common classes of commands that support various forms of output. Other classes of commands, such as Compare (CMP) and Verify (VFY), also support various forms of output in many cases. As part of an on-going effort to ensure consistent capabilities across similar classes of commands, most commands in the same class support the same output formats. For example, all Work (WRK) commands typically support display, print, and output formats. This section describes the general guidelines used throughout the product. However, there are some exceptions, which are described in the sections about specific commands.

Display support is intended primarily for Display (DSP) commands for displaying detailed information about a specific entry, or for Work (WRK) related commands that display lists of entries. Audit-based commands, such as Compare (CMP) and Verify (VFY), are often long-running requests and do not typically provide display support.

Spooled output support provides a more easily readable form of output for print or distribution purposes. Output is generated in the form of spooled output files that can easily be printed or distributed. Nearly all Display (DSP) or Work (WRK) commands support this form of output. In some cases, other command-specific options may affect the contents of the spooled output file.

Output files are intended primarily for automation purposes, providing MIMIX-related information in a manner that facilitates programming automation for various purposes—such as additional monitoring support, auditing support, automatic detection, and the correction of error conditions. Output files are also beneficial as intermediate data for advance reporting using SQL query support.

Output parameter

Some commands can produce output of more than one type: display, print, or output file. In these cases, the selection is made with the Output parameter. Table 68 lists the values supported by the Output parameter.

Note: Not all values are supported for all commands. For some commands, a combination of values is supported.

Commands that support OUTPUT(*) and can also run in batch are required to support the other forms of output as well.

Commands called from a program or submitted to batch with a specification of OUTPUT(*) default to OUTPUT(*PRINT). Displaying a panel during batch processing or when called from another program would otherwise fail.

With the exception of messages generated as a result of running a command, commands that support OUTPUT(*NONE) will generate no other forms of output.

Commands that support combinations of output values do not support OUTPUT(*) in combination with other output values.

Display output

Commands that support OUTPUT(*) provide the ability to display information interactively. Display (DSP) and Work (WRK) commands commonly use display support. Display commands typically display detailed information for a specific entity, such as a data group definition. Work commands display a list of entries and provide a summary view of those entries. Display support is required to work interactively with the MIMIX product.

Work commands often provide subsetting capabilities that allow you to select a subset of information. Rather than viewing all configuration entries for all data groups, for example, subsetting allows you to view the configuration entries for a specific data group. This ability allows you to easily view data that is important or relevant to you at a given time.

Print output

Spooled output is generated by specifying OUTPUT(*PRINT) and is intended to provide a readable form of output for print or distribution purposes. Output is generated as spooled files that can easily be printed or distributed. Most Display (DSP) and Work (WRK) commands support this form of output. Other commands, such as Compare (CMP) and Verify (VFY), also support spooled output in most cases.

Table 68. Values supported by the Output parameter

*          Display only
*NONE      No output is generated
*PRINT     Spooled output is generated
*OUTFILE   An output file is generated
*BOTH      Both spooled output and an output file are generated
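For example, assuming a Work command such as WRKDGDFN (named later in this topic) supports these values, the output destination might be selected as follows. The library and file names are placeholders.

    WRKDGDFN OUTPUT(*)                                /* display interactively */
    WRKDGDFN OUTPUT(*PRINT)                           /* create a spooled file */
    WRKDGDFN OUTPUT(*OUTFILE) OUTFILE(MYLIB/DGDFNS)   /* write to an outfile   */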

The Work (WRK) and Display (DSP) commands support different categories of reports. The following are standard categories of reports available from these commands:

• The detail report contains information for one item, such as an object, definition, or entry. A detail report is usually obtained by using option 6 (Print) on a Work (WRK) display, or by specifying *PRINT on the Output parameter on a Display (DSP) command.

• The list summary report contains summary information for multiple objects, definitions, or entries. A list summary is usually obtained by pressing F21 (Print) on a Work (WRK) display. You can also get this report by specifying *BASIC on the Detail parameter on a Work (WRK) command.

• The list detail report contains detailed information for multiple objects, definitions, or entries. A list detail report is usually obtained by specifying *PRINT on the Output parameter of a Work (WRK) command.

Certain parameters, which vary from command to command, can affect the contents of spooled output. The following list represents a common set of parameters that directly impact spooled output:

• EXPAND(*YES or *NO) – The Expand parameter is available on the Work with Data Group Object Entries (WRKDGOBJE), Work with Data Group IFS Entries (WRKDGIFSE), and Work with Data Group DLO Entries (WRKDGDLOE) commands. Configuration for objects, IFS objects, and DLOs can be accomplished using generic entries, which represent one or more actual objects on the system. The object entry ABC*, for example, can represent many entries on a system. Expand support provides a means to determine which actual objects on a system are represented by a MIMIX configuration. Specifying *NO on the EXPAND parameter prints the configured data group entries.

• DETAIL(*FULL or *BASIC) – Available on the Work (WRK) commands, the detail option determines the level of detail in the generated spooled file. Specifying DETAIL(*BASIC) prints a summary list of entries. For example, this specification on the Work with Data Group Definitions (WRKDGDFN) command will print a summary list of data group definitions. Specifying DETAIL(*FULL) prints each data group definition in detail, including all attributes of the data group definition.

Note: This parameter is ignored when OUTPUT(*) or OUTPUT(*OUTFILE) is specified.

• RPTTYPE(*DIF, *ALL, *SUMMARY or *RRN, depending on command) - The Report Type (RPTTYPE) parameter controls the amount of information in the spooled file. The values available for this parameter vary, depending on the command.

The values *DIF, *ALL, and *SUMMARY are available on the Compare File Attributes (CMPFILA), Compare Object Attributes (CMPOBJA), Compare IFS Attributes (CMPIFSA), and Compare DLO Attributes (CMPDLOA) commands. Specifying *DIF reports only detected differences. A value of *SUMMARY reports a summary of objects compared, including an indication of differences detected. *ALL provides a comprehensive listing of objects compared as well as difference detail.

The Compare File Data (CMPFILDTA) command supports the *DIF and *ALL values, as well as the value *RRN. Specifying *RRN outputs the relative record numbers of the first 1,000 records that failed to compare. Using the *RRN value can help resolve situations where a discrepancy is known to exist but you are unsure which system contains the correct data. In this case, *RRN provides the information that enables you to display the specific records on the two systems and to determine the system on which the file should be repaired.
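As a hedged sketch (the commands' object selection parameters are omitted here), the report type might be specified as follows:

    CMPFILA ... RPTTYPE(*DIF) OUTPUT(*PRINT)      /* report only detected differences */
    CMPFILDTA ... RPTTYPE(*RRN) OUTPUT(*PRINT)    /* include relative record numbers  */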

File output

Output files can be generated by specifying OUTPUT(*OUTFILE). Having full outfile support across the MIMIX product is important for a number of reasons. Outfile support is a key enabler for advanced automation purposes, and it allows MIMIX customers and qualified MIMIX consultants to develop and deliver solutions tailored to the individual needs of the user.

As with the other forms of output, output files are commonly supported across certain classes of commands. The Work (WRK) commands commonly support output files. In addition, many audit-based reports, such as Comparison (CMP) commands, also provide output file support. Output file support for Work (WRK) commands provides access to the majority of MIMIX configuration and status-related data. The Compare (CMP) commands also provide output files as a key enabler for automatic error detection and correction capabilities.

When you specify OUTPUT(*OUTFILE), you must also specify the OUTFILE and OUTMBR parameters. The OUTFILE parameter requires a qualified file and library name. When the command runs, the specified output file is used; if the file does not exist, it is created automatically.

Note: If a new file is created for CMPFILA, for example, the record format used is from the Lakeview-supplied model database file MXCMPFILA, found in the installation library. The text description of the created file is “Output file for CMPFILA.” The file cannot reside in the product library.

The Outmember (OUTMBR) parameter allows you to specify which member to use in the output file. If no member exists, the default value of *FIRST will create a member name with the same name as the file name. A second element on the Outmember parameter indicates the way in which information is stored for an existing member. A value of *REPLACE will clear the current contents of the member and add the new records. A value of *ADD will append the new records to the existing data.
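Putting the OUTFILE and OUTMBR parameters together, a request might look like the following sketch. MYLIB/CMPRESULT is a placeholder, and the comparison's selection parameters are omitted.

    CMPFILA ... OUTPUT(*OUTFILE) OUTFILE(MYLIB/CMPRESULT) OUTMBR(*FIRST *REPLACE)

Here *REPLACE clears any existing records in the member before writing; *ADD would append to them.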

Expand support: Expand support was developed specifically as a feature for data group configuration entries that support generic specifications. Data group object entries, IFS entries, and DLO entries can all be configured using generic name values. For example, if you specify an object entry with an object name of ABC* in library XYZ and accept the default values for all other fields, all objects in library XYZ are replicated. Specifying EXPAND(*NO) writes the specific configuration entries to the output file. Specifying EXPAND(*YES) lists all objects from the local system that match the configuration specified. Thus, if the object name ABC* for library XYZ represented 1000 actual objects on the system, EXPAND(*YES) would add 1000 rows to the output file; EXPAND(*NO) would add a single generic entry.

Note: EXPAND(*YES) support locates all objects on the local system.
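For example, if the data group object entry ABC* in library XYZ is configured, the two settings might be used as follows (MYLIB/OBJENTS is a placeholder outfile, and other selection parameters are omitted):

    WRKDGOBJE ... OUTPUT(*OUTFILE) OUTFILE(MYLIB/OBJENTS) EXPAND(*NO)   /* one row: the generic entry ABC*  */
    WRKDGOBJE ... OUTPUT(*OUTFILE) OUTFILE(MYLIB/OBJENTS) EXPAND(*YES)  /* one row per matching local object */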

General batch considerations

MIMIX functions that are identified as long-running processes typically allow you to submit the requests to batch, avoiding unnecessary use of interactive resources. Parameters typically associated with the Batch (BATCH) parameter include Job description (JOBD) and Job name (JOB).

Batch (BATCH) parameter

Values supported on the Batch (BATCH) parameter include *YES and *NO. A value of *YES indicates that the request will be submitted to batch; a value of *NO causes the request to run interactively. The default value varies from command to command and is based on the general usage of the command. If a command usually requires significant resources to run, the default will likely be *YES.

Some commands, such as Start Data Group (STRDG), perform a number of interactive tasks and start numerous jobs by submitting requests to batch. Likewise, some jobs, such as the data group apply process, run on a continuous basis and do not end until specifically requested. These jobs represent the various processes required to support an active data group. Commands of this type do not provide a Batch (BATCH) parameter because submitting to batch is the only method available.

For commands that are called from other programs, it is important to understand the difference between BATCH(*YES) and BATCH(*NO). Implementing automatic audit detection and correction support is easier to accomplish using BATCH(*NO). Let us assume you are running the Compare File Attributes (CMPFILA) command as part of an audit. If differences are detected, specifying BATCH(*NO) allows you to monitor for specific exceptions and implement automatic correction procedures. This capability would not be available if you submitted the request to BATCH(*YES).
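The audit scenario above might be sketched in CL as follows. This is illustrative only: the CMPFILA selection parameters are omitted, MYLIB/AUDITOUT is a placeholder, and MSGID(LVE0000) is a generic monitor that traps any escape message with the LVE prefix.

    /* Run the compare synchronously so escape messages return here */
    CMPFILA ... BATCH(*NO) OUTPUT(*OUTFILE) OUTFILE(MYLIB/AUDITOUT)
    MONMSG MSGID(LVE0000) EXEC(DO)
      /* Differences were detected; start correction procedures here */
    ENDDO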

Job description (JOBD) parameter

The Job description (JOBD) parameter allows the user of the command to specify which job description to use when submitting the batch request. Newer MIMIX commands use the job descriptions MXAUDIT, MXSYNC, and MXDFT, which are automatically created in the MIMIX installation library when MIMIX is installed. Jobs and related output are associated with the user profile submitting the request. Older commands that provided job description support for batch processing have not been altered; refer to individual commands for default values.

Job name (JOB) parameter

The Job name (JOB) parameter allows the user of the command to specify the job name used for the submitted job request. By default, the job name is the name of the command. The Job name parameter is intended to make it easier to identify the active job as well as the spooled files generated as a result of running the command. For spooled files, the job name is also used for the user data information. Only newer features provide this capability.
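Combining these parameters, a batch submission might look like the following sketch. NIGHTAUDIT is an arbitrary job name, MIMIX stands in for the installation library, and the comparison's selection parameters are omitted.

    CMPFILA ... BATCH(*YES) JOBD(MIMIX/MXAUDIT) JOB(NIGHTAUDIT)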

Displaying a list of commands in a library

You can use the IBM Select Command (SLTCMD) command to display a list of all commands contained within a particular library on the system. This list includes any commands you have added to the associated library, including copies of other commands.

Note: This list does not indicate whether you are licensed to the command or if authority to the command exists.

Do the following:

1. From the library you want, access the MIMIX Intermediate Main Menu.

2. Select option 13 (Utilities menu) and press Enter.

3. When the MIMIX Utilities Menu is displayed, select option 1 (Select all commands).

Running commands on a remote system

The Run Command (RUNCMD) and Run Commands (RUNCMDS) commands provide a convenient way to run a single command or multiple commands on a remote system. The RUNCMD and RUNCMDS commands replace and extend the capabilities available in the IBM commands, Submit Remote Command (SBTRMTCMD) and Run Remote Command (RUNRMTCMD).

The MIMIX commands provide a protocol-independent way of running commands using MIMIX constructs such as system definitions, data group definitions, and transfer definitions. The MIMIX commands enable you to run commands and receive messages from the remote system.

In addition, the RUNCMD and RUNCMDS commands use the current data group direction to determine where the command is to be run. This capability simplifies automation by eliminating the need to manually enter source and target information at the time a command is run.

Note: Do not change the RUNCMD or RUNCMDS commands to PUBLIC(*EXCLUDE) without giving MIMIXOWN proper authority.

Benefits - RUNCMD and RUNCMDS commands

Individually, the RUNCMD command can be used as a convenient tool to debug base communications problems. The RUNCMD command also provides the ability to prompt on any command. The RUNCMDS command, while supporting up to 300 commands, does not allow command prompting. When multiple commands are run on a single RUNCMDS command, only one communications session is established. The target program environment, including QTEMP and the local data area, is also kept intact. Additionally, the RUNCMDS command has options for monitoring escape and completion messages. All messages are sent to the same program level as the program or command line running the command, enabling you to program remote commands in the same manner as local commands.

Both RUNCMD and RUNCMDS allow you to specify commands to be sent through the journal stream and run by the database apply process. The request is sent through the journal stream as a MIMIX journal entry (code U-MX). The value *DGJRN on the Protocol prompt enables this capability, thereby replacing conventional U-EX support. In addition, the When to run (RUNOPT) prompt can be used to specify when the journal entry associated with the command is processed by the target system for the specified data group. See “Procedures for running commands RUNCMD, RUNCMDS” on page 530 for additional details about the RUNOPT parameter.

Benefits of the RUNCMD and RUNCMDS commands also include the following:

• Provides a convenient and consistent interface to automate tasks across a network.

• Centralizes the management and control of networked systems.

• Enables protocol-independent testing and verification of MIMIX communications setups.

• Supports sending and receiving local data area (LDA) data.

• Allows commands to be run under other user profiles as long as the user ID and password are the same on both systems. The password is validated before the command is run on the remote system, thus the user must have authority to the user profile being used.
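As an illustrative sketch (the CMD keyword name is an assumption; prompt the command with F4 to confirm the actual parameters), a quick test using the local protocol from Table 69 might look like:

    RUNCMD CMD(SNDMSG MSG(TEST) TOUSR(QSYSOPR)) PROTOCOL(*LOCAL)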

Procedures for running commands RUNCMD, RUNCMDS

There are two ways to use the RUNCMD or RUNCMDS commands. You can use them with a specific protocol, or you can use them by specifying a protocol through existing MIMIX configuration elements. To use the commands with a specific protocol, use the procedure “Running commands using a specific protocol” on page 530. To use the commands using an existing MIMIX configuration, use the procedure “Running commands using a MIMIX configuration element” on page 532.

Running commands using a specific protocol

1. From the MIMIX Main Menu, select option 13 (Utilities menu). The MIMIX Utilities Menu appears.

2. From the MIMIX Utilities Menu, select option 1 (Select all commands). The Select Command display appears.

3. Page down and do one of the following:

• To run a single command on a remote system, type a 1 next to RUNCMD. The Run Command (RUNCMD) display appears.

• To run multiple commands on a remote system, type a 1 next to RUNCMDS. The Run Commands (RUNCMDS) display appears.

4. Specify the commands to run or messages to monitor for the command as follows:

a. At the Command prompt, specify the command to run on the remote system. When using the RUNCMDS command, you can specify up to 300 commands.

b. If you are using the RUNCMDS command, you can specify as many as ten escape, notify, or status messages to be monitored for each command. Specify these at the Monitor for messages prompt.

5. Specify the protocol and protocol-specific implementation using Table 69.

Table 69. Specific protocols and specifications used for RUNCMD and RUNCMDS

How to run (protocol)

Specify

Run on local system

At the Protocol prompt, specify *LOCAL.


Page 531: MIMIX Reference

6. Do one of the following:

• To access additional options, skip to Step 7.

• To run the commands or monitor for messages, press Enter.

7. Press F10 (Additional parameters).

8. At the Check syntax only prompt, specify whether to check the syntax of the command only. If *YES is specified, the syntax is checked but the command is not run.

9. At the Local data area length prompt, specify the amount of the current local data area (LDA) to copy. This is useful for automating application processing that is dependent on the local data area and for passing binary information to command programs.

10. At the Return LDA prompt, specify whether to return the contents of the local data area (LDA) from the remote system after the commands are run. The value specified in the Local data area length prompt in Step 9 determines how much data is returned.

Run using TCP/IP

Do the following:
1. At the Protocol prompt, specify *TCP to run the commands using Transmission Control Protocol/Internet Protocol (TCP/IP) communications. Press Enter for additional prompts.

2. At the Host name or address prompt, specify the host alias or address of the TCP protocol.

3. At the Port number or alias prompt, specify the port number or port alias on the local system to communicate with the remote system. This value is a 14-character mixed-case TCP port alias or port number.

Run using SNA

Do the following:
1. At the Protocol prompt, specify *SNA to run the commands using System Network Architecture (SNA) communications. Press Enter for additional prompts.

2. At the Remote location prompt, specify the name or address of the remote location.

3. At the Local location prompt, specify the unique location name that identifies the system to remote devices.

4. At the Remote network identifier prompt, specify the identifier of the remote network.

5. At the Mode prompt, specify the name of the mode description used for communications. The product default for this parameter is MIMIX.

Run using OptiConnect

Do the following:
1. At the Protocol prompt, specify *OPTI to run the commands using OptiConnect fiber optic network communications. Press Enter for additional prompts.

2. At the Remote location prompt, specify the name or address of the remote location.
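The *TCP row of Table 69 can be sketched as a single command string. This is a hypothetical example: the HOST and PORT keywords are assumptions based on the "Host name or address" and "Port number or alias" prompts, and the address and alias shown are illustrative.

```cl
/* Hedged sketch of the *TCP protocol entries from Table 69.        */
/* HOST and PORT keyword names are assumptions inferred from the    */
/* prompts; the address and MIMIXPORT alias are illustrative.       */
RUNCMD     CMD('WRKACTJOB OUTPUT(*PRINT)') +
           PROTOCOL(*TCP)                  +
           HOST('10.1.1.20')               +
           PORT(MIMIXPORT)   /* 14-char alias or a port number */
```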


Page 532: MIMIX Reference


11. At the User prompt, specify the user profile to use when the command is run on the remote system.

12. To run the commands or monitor for messages, press Enter.

Running commands using a MIMIX configuration element

To use RUNCMD or RUNCMDS with a MIMIX configuration element, do the following:

1. From the MIMIX Main Menu, select option 13 (Utilities menu). The MIMIX Utilities Menu appears.

2. From the MIMIX Utilities Menu, select option 1 (Select all commands). The Select Command display appears.

3. Page down and do one of the following:

• To run a single command on a remote system, type a 1 next to RUNCMD. The Run Command (RUNCMD) display appears.

• To run multiple commands on a remote system, type a 1 next to RUNCMDS. The Run Commands (RUNCMDS) display appears.

4. Specify the commands to run or messages to monitor for the command as follows:

a. At the Command prompt specify the command to run on the remote system. When using the RUNCMDS command, you can specify up to 300 commands.

b. If you are using the RUNCMDS command, you can specify as many as ten escape, notify, or status messages to be monitored for each command. Specify these at the Monitor for messages prompt.

5. Specify the MIMIX configuration element using Table 70.

Table 70. MIMIX configuration protocols and specifications

Protocol using MIMIX configuration element

Protocol prompt value

Also specify

Run on system defined by the default transfer definition

*SYSDFN System definition prompt:

• Specify the name of the system definition or press F4 for a list of valid definitions.

• Press Enter for additional prompts


Page 533: MIMIX Reference

Run on the system specified in the transfer definition (TFRDFN parameter) that is not the local system

*TFRDFN Transfer definition prompt:

• Press F1 Help for assistance in specifying the three-part qualified name of the transfer definition.

• Press Enter for additional prompts.

Run on the system specified in the data group definition that is not the local system

*DGDFN Data group definition prompt:

• Press F1 Help for assistance in specifying the three-part qualified name of the data group definition.

Run on the current source system defined for the data group

*DGSRC Data group definition prompt:

• Press F1 Help for assistance in specifying the three-part qualified name of the data group definition.

Run on the current target system defined for the data group

*DGTGT Data group definition prompt:

• Press F1 Help for assistance in specifying the three-part qualified name of the data group definition.

Run by the database apply process when the journal entry is processed

*DGJRN Data group definition prompt:

• Press F1 Help for assistance in specifying the three-part qualified name of the data group definition.

Run on the system defined as System 1 for the data group

*DGSYS1 Data group definition prompt:

• Press F1 Help for assistance in specifying the three-part qualified name of the data group definition.


Page 534: MIMIX Reference


6. Do one of the following:

• To access additional options, skip to Step 7.

• To run the commands or monitor for messages, press Enter.

7. Press F10 (Additional parameters).

8. At the Check syntax only prompt, specify whether to check the syntax of the command only. If *YES is specified, the syntax is checked but the command is not run.

9. At the Local data area length prompt, specify the amount of the current local data area (LDA) to copy. This is useful for automating application processing that is dependent on the local data area and for passing binary information to command programs.

10. At the Return LDA prompt, specify whether to return the contents of the local data area (LDA) from the remote system after the commands are run. The value specified in the Local data area length prompt in Step 9 determines how much data is returned.

11. At the User prompt, specify the user profile to use when the command is run on the remote system.

12. If you specified *DGJRN for the Protocol prompt, you will see the File prompts. Do the following:

a. At the File name prompt, specify the name of the file to use when the journal entry generated by the commands is sent.

Note: Use these prompts if you want the command to run in the database apply job associated with the named file. If a file is not specified, database apply (DBAPY) session A is selected.

b. At the Library prompt, specify the name of the library associated with the file.

13. If you specified a file name for the File prompt, you will see the When to run prompt. Using Table 71, specify when the journal entry associated with the command is processed by the target system for the specified data group.

14. To run the commands or monitor for messages, press Enter.

Run on the system defined as System 2 for the data group

*DGSYS2 Data group definition prompt:

• Press F1 Help for assistance in specifying the three-part qualified name of the data group definition.


Page 535: MIMIX Reference

Table 71. Options for processing journal entries with MIMIX *DGJRN protocol

When to run (RUNOPT)

Specify

Run when the database apply job for the specified file receives the journal entry

Do the following:
1. At the Protocol prompt, specify *DGJRN.
2. At the When to run prompt, specify *RCV.

Run in sequence with all other entries for the file.

Do the following:
1. At the Protocol prompt, specify *DGJRN.
2. At the When to run prompt, specify *APY.
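The two RUNOPT choices in Table 71 can be contrasted in a sketch. This is a hypothetical example: the DGDFN, FILE, and RUNOPT keywords are assumptions based on the prompts in this section, and the data group, file, and programs are illustrative.

```cl
/* Hedged sketch contrasting the two RUNOPT values for *DGJRN.      */
/* Keyword names are assumptions; MYDG, APPLIB, and ORDERS are      */
/* illustrative names.                                              */

/* Run as soon as the apply job for ORDERS receives the entry:      */
RUNCMDS    CMD(('SNDMSG MSG(ARRIVED) TOUSR(QSYSOPR)')) +
           PROTOCOL(*DGJRN) DGDFN(MYDG SYS1 SYS2)      +
           FILE(APPLIB/ORDERS) RUNOPT(*RCV)

/* Run in sequence with all other journal entries for ORDERS:       */
RUNCMDS    CMD(('CALL PGM(APPLIB/POSTCHK)'))           +
           PROTOCOL(*DGJRN) DGDFN(MYDG SYS1 SYS2)      +
           FILE(APPLIB/ORDERS) RUNOPT(*APY)
```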


Page 536: MIMIX Reference

Using lists of retrieve commands


The following additional commands make working with retrieve commands easier:

Note: Although the current retrieve commands will be supported indefinitely, they will not be enhanced. You are encouraged to use the extensive outfile support that is now available. Outfile support provides the means to generate a list of entries, while the retrieve commands are primarily intended to retrieve information for a specific entry only. For more information, see “Output and batch guidelines” on page 523.

• Open MIMIX List (OPNMMXLST): This command opens a list of specified MIMIX definitions or data group entries for use with the MIMIX retrieve commands. You specify the type of definitions or data group entries to include in the list, a CL variable to receive the list identifier, and a data group definition. The CL variable for the list identifier is needed by the MIMIX retrieve commands.

• Close MIMIX List (CLOMMXLST): This command closes a list of specified MIMIX definitions or data group entries opened by the Open MIMIX List (OPNMMXLST) command. A close is necessary in order to free resources. You specify the list identifier to close.
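The open/close pattern described above can be sketched in CL. This is a hypothetical sketch only: the TYPE, LSTID, and DGDFN keywords, and the *DGFE value, are assumptions based on the values this section says you specify; verify them against the actual command prompts.

```cl
/* Hedged sketch: bracket retrieve-command processing between       */
/* OPNMMXLST and CLOMMXLST. Keyword names and the *DGFE value are   */
/* assumptions, not confirmed syntax; MYDG is illustrative.         */
PGM
  DCL        VAR(&LISTID) TYPE(*CHAR) LEN(10)
  OPNMMXLST  TYPE(*DGFE) LSTID(&LISTID)  +
             DGDFN(MYDG SYS1 SYS2)        /* open list of entries */
  /* ... call the MIMIX retrieve commands here, passing &LISTID ... */
  CLOMMXLST  LSTID(&LISTID)               /* free list resources  */
ENDPGM
```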

Page 537: MIMIX Reference


Changing command defaults

Nearly all MIMIX processes are based on commands that are shipped with default values reflecting best-practice recommendations. MIMIX implements named configuration definitions through which you can customize your configuration by using options on commands, without resorting to changing command defaults.

If you wish to change command defaults to fit a specific business need, use the IBM Change Command Default (CHGCMDDFT) command. Be aware that changing a command default may affect the operation of other MIMIX processes. Also, any changes you make are lost each time the MIMIX software is updated.
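CHGCMDDFT is an IBM i command; its general form takes the qualified command name and a quoted keyword/value string. In this sketch, the library name and the keyword being defaulted are illustrative, not confirmed MIMIX names.

```cl
/* General form of the IBM CHGCMDDFT command. The MIMIXLIB library  */
/* and PROTOCOL keyword shown here are illustrative assumptions.    */
/* As noted above, this change is lost at the next MIMIX update.    */
CHGCMDDFT  CMD(MIMIXLIB/RUNCMD)      +
           NEWDFT('PROTOCOL(*TCP)')  /* keyword and new default */
```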

Page 538: MIMIX Reference

Chapter 22

Customizing with exit point programs

The MIMIX family of products provides a variety of exit points that enable you to extend and customize your operations.

The topics in this chapter include:

• “Summary of exit points” on page 538 provides tables that summarize the exit points available for use.

• “Working with journal receiver management user exit points” on page 541 describes how to use user exit points safely.

Summary of exit points

The following tables summarize the exit points available for use.

MIMIX user exit points

MIMIX provides the exit points identified in Table 72 for journal receiver management. For additional information, see “Working with journal receiver management user exit points” on page 541.

MIMIX also supports a generic interface to existing database and object replication process exit points that provides enhanced filtering capability on the source system. This generic user exit capability is only available through a Certified MIMIX Consultant.

MIMIX Monitor user exit points

Table 73 identifies the user exit points available in MIMIX Monitor. You can use the exit points through programs controlled by a monitor. Monitors can be set up to operate with other products, including MIMIX. You can also use the MIMIX Monitor User Access API (MMUSRACCS) for all interfaces to MIMIX Monitor.

MIMIX Monitor also contains the MIMIX Model Switch Framework. This support provides powerful customization opportunities through a set of programs and commands that are designed to provide a consistent switch framework for you to use in your switching environment.

Table 72. MIMIX exit points for journal receiver management

Type Exit Point Name

Journal receiver management exit points

Receiver change management pre-change
Receiver change management post-change
Receiver delete management pre-check
Receiver delete management pre-delete
Receiver delete management post-delete


Page 539: MIMIX Reference


The Using MIMIX Monitor book documents the user exit points, the API, and MIMIX Model Switch Framework.

MIMIX Promoter user exit points

Table 74 identifies the exit points within MIMIX Promoter. If you perform concurrent operations between MIMIX Promoter and MIMIX, you might consider using these exit points within automation.

Table 73. MIMIX Monitor exit points

Type Exit Point Name

Interface exit points:
Pre-create, Post-create
Pre-change, Post-change
Pre-copy, Post-copy
Pre-delete, Post-delete
Pre-display, Post-display
Pre-print, Post-print
Pre-rename, Post-rename
Pre-start, Post-start
Pre-end, Post-end
Pre-work with information, Post-work with information
Pre-hold, Post-hold
Pre-release, Post-release
Pre-status, Post-status
Pre-change status, Post-change status
Pre-run, Post-run
Pre-export, Post-export
Pre-import, Post-import

Condition program exit point: After pre-defined condition check

Event program exit point: After condition check (pre-defined and user-defined)

Table 74. MIMIX Promoter exit points

Type Exit Point Name

Control exit points (supported by the control exit service program):
Transfer complete
Lock failure
After lock
Copy failure
Copy finalize
After temporary journal delete

Data exit points (supported by the data exit service program):
Data initialize
Data transfer
Data finalize


Page 540: MIMIX Reference


Requesting customized user exit programs

If you need a specialized user exit program designed for your applications, contact us at [email protected] or through the online tools at www.mimix.com/support. Our personnel will ask about your requirements and design a customized program to work with your applications.


Page 541: MIMIX Reference

Working with journal receiver management user exit points

User exit points in critical processing areas enable you to incorporate specialized processing with MIMIX, extending its function to meet additional needs of your environment. Access to user exit processing is provided through an exit program, which can be written in any language supported by i5/OS.

Since user exit programming allows for user code to be run within MIMIX processes, great care must be exercised to prevent the user code from interfering with the proper operation of MIMIX. For example, a user exit program that inadvertently causes an entry to be discarded that is needed by MIMIX could result in a file not being available in case of a switch. Use caution in designing a configuration for use with user exit programming. You can safely use user exit processing with proper design, programming, and testing. Lakeview services are also available to help customers implement specialized solutions.

Journal receiver management exit points

MIMIX includes support for user exit programming in the journal receiver change management and journal receiver delete management processes. With this support, you can customize change management and delete management of journal receivers according to the needs of your environment.

Journal receiver management exit points are enabled when you specify an exit program to use in a journal definition.

Change management exit points

MIMIX can change journal receivers when a specified time is reached, when the receiver reaches a specified size, or when the sequence number reaches a specified threshold. You specify these values when you create a journal definition. MIMIX also changes the journal receiver at other times, such as during a switch and when a user requests a change with the Change Data Group Receiver (CHGDGRCV) command.

The following user exit points are available for customizing change management processing:

• Receiver Change Management Pre-Change User Exit Point. This exit point is located immediately before the point in processing where MIMIX changes a journal receiver. Either the user forced a journal receiver change (CHGDGRCV command) or MIMIX processing determined that the journal receiver needs to change. The return code from the exit program can prevent MIMIX from changing the journal receiver, which can be useful when the exit program changes the receiver.

• Receiver Change Management Post-Change User Exit Point. This exit point is located immediately after the point in processing where MIMIX changes a journal receiver. MIMIX ignores the return code from the exit program. This exit point is useful for processing that does not affect MIMIX processing, such as saving the journal receiver to media. (The example program in Table 75 on page 545 shows how you can determine the name of the previously attached journal receiver by retrieving the name of the first entry in the currently attached journal receiver.)

Page 542: MIMIX Reference

Restrictions for Change Management Exit Points: The following restrictions apply when the exit program is called from either of the change management exit points:

• Do not include the Change Data Group Receiver (CHGDGRCV) command in your exit program.

• Do not submit batch jobs for journal receiver change or delete management from the exit program. Submitting a batch job would allow the in-line exit point processing to continue and potentially return to normal MIMIX journal management processing, conflicting with journal manager operations. Keeping journal receiver change management out of batch also prevents a potential problem where the journal receiver is locked when it is accessed by a batch program.
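As an illustrative pattern (not taken from the manual), the distinction between acceptable and forbidden batch use inside an exit program can be sketched as follows. The &RCVNAME and &RCVLIB variables are the exit program parameters documented later in this topic; the job name and tape device are illustrative.

```cl
/* Illustrative pattern for an exit program: long-running work that */
/* does not affect journal management (here, a save) may go to      */
/* batch, but receiver change/delete management itself must stay    */
/* in line, per the restrictions above.                             */
SBMJOB     CMD(SAVOBJ OBJ(&RCVNAME) LIB(&RCVLIB)  +
             DEV(TAP01) OBJTYPE(*JRNRCV))         +
           JOB(SAVRCV)
/* Never do this from the exit program:                             */
/* SBMJOB CMD(CHGDGRCV ...) -- conflicts with journal manager ops   */
```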

Delete management exit points

MIMIX can delete journal receivers when the send process has completed processing the journal receiver and other configurable conditions are met. When you create a journal definition, you specify whether unsaved journal receivers can be deleted, the number of receivers that must be retained, and how many days to retain the receivers.

The following user exit points are available for customizing delete management processing:

• Receiver Delete Management Pre-Check User Exit Point. This exit point is located before MIMIX determines whether to delete a journal receiver. When called at this exit point, actions specified in a user exit program can affect conditions that MIMIX processing checks before the pre-delete exit point. For example, an exit program that saves the journal receiver may make the journal receiver eligible for deletion by MIMIX processing. The return code from the exit program can prevent MIMIX from deleting the journal receiver and any other journal receiver in the chain.

• Receiver Delete Management Pre-Delete User Exit Point. This exit point is located immediately before the point in processing where MIMIX deletes a journal receiver. MIMIX processing determined that the journal receiver is eligible for deletion. The return code from the exit program can prevent MIMIX from deleting the journal receiver, which is useful when the receiver is being used by another application.

• Receiver Delete Management Post-Delete User Exit Point. This exit point is immediately after the point in processing where MIMIX deletes a journal receiver. The return code from the exit program can prevent MIMIX from deleting any other (newer) journal receivers attached to the journal.

Requirements for journal receiver management exit programs

This exit program allows you to include specialized processing in your MIMIX environment at points that handle journal receiver management. The exit program runs with the authority of the user profile that owns the exit program. If your exit program fails and signals an exception to MIMIX, MIMIX processing continues as if the exit program was not specified.

Page 543: MIMIX Reference

Attention: It is possible to cause undesirable, long delays in MIMIX processing when you use this exit program. When the exit program is called, MIMIX passes control to the exit program. MIMIX will not continue change management or delete management processing until the exit program returns. Consider placing long running processes that will not affect journal management in a batch job that is called by the exit program.

Return Code
OUTPUT; CHAR (1)
This value indicates how to continue processing the journal receiver when the exit program returns control to the MIMIX process. This parameter must be set. When the exit program is called from Function C2, the value of the return code is ignored. Possible values are:
0 Do not continue with MIMIX journal management processing for this journal receiver.
1 Continue with MIMIX journal management processing.

Function
INPUT; CHAR (2)
The exit point from which this exit program is called. Possible values are:
C1 Pre-change exit point for receiver change management.
C2 Post-change exit point for receiver change management.
D0 Pre-check exit point for receiver delete management.
D1 Pre-delete exit point for receiver delete management.
D2 Post-delete exit point for receiver delete management.
Note: Restrictions for exit programs called from the C1 and C2 exit points are described within topic “Change management exit points” on page 541.

Journal Definition
INPUT; CHAR (10)
The name that identifies the journal definition.

System
INPUT; CHAR (8)
The name of the system defined to MIMIX on which the journal is defined.

Reserved1
INPUT; CHAR (10)
This field is reserved and contains blank characters.

Journal Name
INPUT; CHAR (10)
The name of the journal that MIMIX is processing.

Page 544: MIMIX Reference


Journal Library
INPUT; CHAR (10)
The name of the library in which the journal is located.

Receiver Name
INPUT; CHAR (10)
The name of the journal receiver associated with the specified journal. This is the journal receiver on which journal management functions will operate. For receiver change management functions, this always refers to the currently attached journal receiver. For receiver delete management functions, this always refers to the same journal receiver.

Receiver Library
INPUT; CHAR (10)
The library in which the journal receiver is located.

Sequence Option
INPUT; CHAR (6)
The value of the Sequence option (SEQOPT) parameter on the CHGJRN command that MIMIX processing would have used to change the journal receiver. Lakeview Technology recommends that you specify this parameter to prevent synchronization problems if you change the journal receiver. This parameter is only used when the exit program is called at the C1 (pre-change) exit point. Possible values are:
*CONT The journal sequence number of the next journal entry created is 1 greater than the sequence number of the last journal entry in the currently attached journal receiver.
*RESET The journal sequence number of the first journal entry in the newly attached journal receiver is reset to 1. The exit program should either reset the sequence number or set the return code to 0 to allow MIMIX to change the journal receiver and reset the sequence number.

Threshold Value
INPUT; DECIMAL (15, 5)
The value to use for the THRESHOLD parameter on the CRTJRNRCV command. This parameter is only used when the exit program is called at the C1 (pre-change) exit point. Possible values are:
0 Do not change the threshold value. The exit program must not change the threshold size for the journal receiver.
value The exit program must create a journal receiver with this threshold value, specified in kilobytes. The exit program must also change the journal to use that receiver, or send a return code value of 0 so that MIMIX processing can change the journal receiver.

Reserved2
INPUT; CHAR (1)
This field is reserved and contains blank characters.


Page 545: MIMIX Reference

Reserved3
INPUT; CHAR (1)
This field is reserved and contains blank characters.

Journal receiver management exit program example

The following example shows how an exit program can customize changing and deleting journal receivers. This exit program only processes journal receivers when it is called at the pre-change exit point (C1), the post-change exit point (C2), or the pre-check exit point (D0).

When called at the pre-change exit point, the sample exit program handles changing any journal receiver in library MYLIB. For any other journal library, MIMIX handles change management processing.

When called at the post-change exit point, the exit program saves the recently detached journal receiver if the journal is in library ABCLIB. (The recently detached journal receiver was the attached receiver at the pre-change exit point.)

When called at the pre-check exit point, if the journal library is TEAMLIB, the exit program saves the journal receiver to tape and allows MIMIX receiver delete management to continue processing.

Table 75. Sample journal receiver management exit program

/*--------------------------------------------------------------*/
/* Program....: DMJREXIT                                         */
/* Description: Example user exit program using CL               */
/*--------------------------------------------------------------*/
PGM        PARM(&RETURN &FUNCTION &JRNDEF &SYSTEM +
             &RESERVED1 &JRNNAME &JRNLIB &RCVNAME +
             &RCVLIB &SEQOPT &THRESHOLD &RESERVED2 +
             &RESERVED3)

DCL        VAR(&RETURN)    TYPE(*CHAR) LEN(1)
DCL        VAR(&FUNCTION)  TYPE(*CHAR) LEN(2)
DCL        VAR(&JRNDEF)    TYPE(*CHAR) LEN(10)
DCL        VAR(&SYSTEM)    TYPE(*CHAR) LEN(8)
DCL        VAR(&RESERVED1) TYPE(*CHAR) LEN(10)
DCL        VAR(&JRNNAME)   TYPE(*CHAR) LEN(10)
DCL        VAR(&JRNLIB)    TYPE(*CHAR) LEN(10)
DCL        VAR(&RCVNAME)   TYPE(*CHAR) LEN(10)
DCL        VAR(&RCVLIB)    TYPE(*CHAR) LEN(10)
DCL        VAR(&SEQOPT)    TYPE(*CHAR) LEN(6)
DCL        VAR(&THRESHOLD) TYPE(*DEC)  LEN(15 5)
DCL        VAR(&RESERVED2) TYPE(*CHAR) LEN(1)
DCL        VAR(&RESERVED3) TYPE(*CHAR) LEN(1)


Page 546: MIMIX Reference


/*--------------------------------------------------------------*/
/* Constants and misc. variables                                 */
/*--------------------------------------------------------------*/
DCL        VAR(&STOP)     TYPE(*CHAR) LEN(1) VALUE('0')
DCL        VAR(&CONTINUE) TYPE(*CHAR) LEN(1) VALUE('1')
DCL        VAR(&PRECHG)   TYPE(*CHAR) LEN(2) VALUE('C1')
DCL        VAR(&POSTCHG)  TYPE(*CHAR) LEN(2) VALUE('C2')
DCL        VAR(&PRECHK)   TYPE(*CHAR) LEN(2) VALUE('D0')
DCL        VAR(&PREDLT)   TYPE(*CHAR) LEN(2) VALUE('D1')
DCL        VAR(&POSTDLT)  TYPE(*CHAR) LEN(2) VALUE('D2')
DCL        VAR(&RTNJRNE)  TYPE(*CHAR) LEN(165)
DCL        VAR(&PRVRCV)   TYPE(*CHAR) LEN(10)
DCL        VAR(&PRVRLIB)  TYPE(*CHAR) LEN(10)

/*--------------------------------------------------------------*/
/* MAIN                                                          */
/*--------------------------------------------------------------*/
CHGVAR     &RETURN &CONTINUE   /* Continue processing receiver */

/*--------------------------------------------------------------*/
/* Handle processing for the pre-change exit point.              */
/*--------------------------------------------------------------*/
IF         (&FUNCTION *EQ &PRECHG) THEN(DO)
/*--------------------------------------------------------------*/
/* If the journal library is my library (MYLIB), exit program    */
/* will do the changing of the receivers.                        */
/*--------------------------------------------------------------*/
IF         (&JRNLIB *EQ 'MYLIB') THEN(DO)
IF         (&THRESHOLD *GT 0) THEN(DO)
CRTJRNRCV  JRNRCV(&RCVLIB/NEWRCV0000) +
             THRESHOLD(&THRESHOLD)
CHGJRN     JRN(&JRNLIB/&JRNNAME) +
             JRNRCV(&RCVLIB/NEWRCV0000) SEQOPT(&SEQOPT)
ENDDO      /* There has been a threshold change */
ELSE       (CHGJRN JRN(&JRNLIB/&JRNNAME) JRNRCV(*GEN) +
             SEQOPT(&SEQOPT))  /* No threshold change */
CHGVAR     &RETURN &STOP       /* Stop processing entry */
ENDDO      /* &JRNLIB is MYLIB */
ENDDO      /* &FUNCTION *EQ &PRECHG */

/*--------------------------------------------------------------*/
/* At the post-change user exit point, if the journal library is */
/* ABCLIB, save the just detached journal receiver.              */
/*--------------------------------------------------------------*/
ELSE       IF (&FUNCTION *EQ &POSTCHG) THEN(DO)
IF         COND(&JRNLIB *EQ 'ABCLIB') THEN(DO)
RTVJRNE    JRN(&JRNLIB/&JRNNAME) +
             RCVRNG(&RCVLIB/&RCVNAME) FROMENT(*FIRST) +
             RTNJRNE(&RTNJRNE)



Page 547: MIMIX Reference

/*----------------------------------------------------------*/
/* Retrieve the journal entry, extract the previous receiver */
/* name and library to do the save with.                     */
/*----------------------------------------------------------*/
CHGVAR     &PRVRCV  (%SUBSTRING(&RTNJRNE 126 10))
CHGVAR     &PRVRLIB (%SUBSTRING(&RTNJRNE 136 10))
SAVOBJ     OBJ(&PRVRCV) LIB(&PRVRLIB) DEV(TAP02) +
             OBJTYPE(*JRNRCV)  /* Save detached receiver */
ENDDO      /* &JRNLIB is ABCLIB */
ENDDO      /* &FUNCTION is &POSTCHG */

/*--------------------------------------------------------------*/
/* Handle processing for the pre-check exit point.               */
/*--------------------------------------------------------------*/
ELSE       IF (&FUNCTION *EQ &PRECHK) THEN(DO)
IF         (&JRNLIB *EQ 'TEAMLIB') THEN( +
             SAVOBJ OBJ(&RCVNAME) LIB(&RCVLIB) DEV(TAP01) +
             OBJTYPE(*JRNRCV))
ENDDO      /* &FUNCTION is &PRECHK */
ENDPGM



Page 548: MIMIX Reference



Page 549: MIMIX Reference

Appendix A

Supported object types for system journal replication

This list identifies IBM i object types and indicates whether MIMIX can replicate these through the system journal.

Note: Not all object types exist in all releases of IBM i.

Object Type   Description                                        Replicated
*ALRTBL       Alert table                                        Yes
*AUTL         Authorization list                                 Yes
*BLKSF        Block special file                                 No
*BNDDIR       Binding directory                                  Yes
*CFGL         Configuration list                                 No6
*CHTFMT       Chart format                                       No9
*CLD          C locale description                               Yes
*CLS          Class                                              Yes
*CMD          Command                                            Yes
*CNNL         Connection list                                    Yes
*COSD         Class-of-service description                       Yes
*CRG          Cluster resource group                             No9
*CRQD         Change request description                         Yes
*CSI          Communications side information                    Yes
*CTLD         Controller description                             Yes1
*DDIR         Distributed file directory                         No2
*DEVD         Device description                                 Yes1,13
*DIR          Directory                                          Yes2
*DOC          Document                                           Yes
*DSTMF        Distributed stream file                            No2
*DTAARA       Data area                                          Yes
*DTADCT       Data dictionary                                    No
*DTAQ         Data queue                                         Yes
*EDTD         Edit description                                   Yes
*EXITRG       Exit registration                                  Yes
*FCT          Forms control table                                Yes
*FILE         File                                               Yes3,11
*FLR          Folder                                             Yes
*FNTRSC       Font resource                                      Yes
*FNTTBL       Font mapping table                                 No9
*FORMDF       Form definition                                    Yes
*FTR          Filter                                             Yes
*GSS          Graphics symbol set                                Yes
*IGCDCT       Double-byte character set conversion dictionary    No9
*IGCSRT       Double-byte character set sort table               No9
*IGCTBL       Double-byte character set font table               No9
*IPXD         Internetwork packet exchange description           Yes
*JOBD         Job description                                    Yes
*JOBQ         Job queue                                          Yes4


Page 550: MIMIX Reference

*JOBSCD       Job schedule                                       Yes
*JRN          Journal                                            No7
*JRNRCV       Journal receiver                                   No7
*LIB          Library                                            Yes4
*LIND         Line description                                   Yes1
*LOCALE       Locale space                                       Yes
*M36          AS/400 Advanced 36 machine                         No8
*M36CFG       AS/400 Advanced 36 machine configuration           No8
*MEDDFN       Media definition                                   Yes
*MENU         Menu                                               Yes
*MGTCOL       Management collection                              Yes
*MODD         Mode description                                   Yes
*MODULE       Module                                             Yes
*MSGF         Message file                                       Yes
*MSGQ         Message queue                                      Yes4
*NODGRP       Node group                                         No9
*NODL         Node list                                          Yes
*NTBD         NetBIOS description                                Yes
*NWID         Network interface description                      Yes1
*NWSD         Network server description                         Yes
*OOPOOL       Persistent pool (for OO objects)                   No
*OUTQ         Output queue                                       Yes4,5
*OVL          Overlay                                            Yes
*PAGDFN       Page definition                                    Yes
*PAGSEG       Page segment                                       Yes
*PDG          Print descriptor group                             Yes
*PGM          Program                                            Yes12
*PNLGRP       Panel group                                        Yes
*PRDAVL       Product availability                               No6
*PRDDFN       Product definition                                 No6
*PRDLOD       Product load                                       No6
*PSFCFG       Print Services Facility (PSF) configuration        Yes
*QMFORM       Query management form                              Yes
*QMQRY        Query management query                             Yes
*QRYDFN       Query definition                                   Yes
*RCT          Reference code translate table                     No9
*S36          System/36 machine description                      No9
*SBSD         Subsystem description                              Yes
*SCHIDX       Search index                                       Yes
*SOCKET       Local socket                                       No
*SOMOBJ       System Object Model (SOM) object                   No
*SPADCT       Spelling aid dictionary                            Yes
*SPLF         Spool file                                         Yes
*SQLPKG       Structured query language package                  Yes
*SQLUDT       User-defined SQL type                              Yes
*SRVPGM       Service program                                    Yes
*SSND         Session description                                Yes
*STMF         Bytestream file                                    Yes2
*SVRSTG       Server storage space                               No8
*SYMLNK       Symbolic link                                      Yes2
*TBL          Table                                              Yes

Object Type Description Replicated

550

Page 551: MIMIX Reference

Supported object types for system journal replication

*USRIDX User index Yes*USRPRF User profile Yes *USRQ User queue Yes4 *USRSPC User space Yes10 *VLDL Validation list Yes13 *WSCST Workstation customizing object Yes Notes: 1. Replicating configuration objects to a previous version of IBM i may cause unpredictable

results.2. Objects in QDLS, QSYS.LIB, QFileSvr.400, QLANSrv, QOPT, QNetWare, QNTC, QSR,

and QFPNWSSTG file systems are not currently supported via Data Group IFS Entries. Objects in QSYS.LIB and QDLS are supported via Data Group Object Entries and Data Group DLO Entries. Excludes stream files associated with a server storage space.

3. File attribute types include: DDMF, DSPF, DSPF36, DSPF38, ICFF, LF, LF38, MXDF38, PF-DTA, PF-SRC, PF38-DTA, PF38-SRC, PRTF, PRTF38, and SAVF.

4. Content is not replicated.5. Spooled files are replicated separately from the output queue.6. These objects are system specific. Duplicating them could cause unpredictable results

on the target system.7. Duplicating these objects can potentially cause problems on the target system.8. These objects are not duplicated due to size and IBM recommendation.9. These object types can be supported by MIMIX for replication through the system journal,

but are not currently included. Contact Lakeview Technology Support if you need support for these object types.

10.Changes made though external interfaces such as APIs and commands are replicated. Direct update of the content through a pointer is not supported.

11.The SQL field type of DATALINK is not supported. Files containing these types of fields must be excluded from replication.

12.To replicate *PGM objects to an earlier release of IBM i you must be able to save them to that earlier release of IBM i.

13.Device description attributes include: APPC, ASC, ASP, BSC, CRP, DKT, DSPLCL, DSPRMT, DSPVRT, FNC, HOST, INTR, MLB, NET, OPT, PRTLAN, PRTLCL, PRTRMT, PRTVRT, RTL, SNPTUP, SNPTDN, SNUF, and TAP.


Appendix B

Copying configurations

This section provides information about how you can copy configuration data between systems.

• “Supported scenarios” on page 552 identifies the scenarios supported in version 5 of MIMIX.

• “Checklist: copy configuration” on page 553 directs you through the correct order of steps for copying a configuration and completing the configuration.

• “Copying configuration procedure” on page 558 documents how to use the Copy Configuration Data (CPYCFGDTA) command.

Supported scenarios

The Copy Configuration Data (CPYCFGDTA) command supports copying configuration data from one library to another library on the same system. After MIMIX is installed, you can use the CPYCFGDTA command.

The supported scenarios are as follows:

Table 76. Supported scenarios for copying configuration

From                  To
MIMIX version 5       MIMIX version 5 (1)
MIMIX version 4 (2)   MIMIX version 5

Notes:
1. The installation you are copying to must be at the same or a higher level service pack.
2. V4R4 service pack SPC070.00.0 or higher must be installed.


Checklist: copy configuration

Use this checklist when you have installed MIMIX in a new library and you want to copy an existing configuration into the new library.

To configure MIMIX with configuration information copied from one or more existing product libraries, do the following:

1. Review “Supported scenarios” on page 552.

2. Use the procedure “Copying configuration procedure” on page 558 to copy the configuration information from one or more existing libraries.

3. Verify that the system definitions created by the CPYCFGDTA command have the required message queues, output queues, and job descriptions. Be sure to check system definitions for the management system and all of the network systems.

4. Verify that the transfer definitions created have the correct three-part name and that the values specified for each transfer protocol are correct. For *TCP, verify the port number. For *SNA, verify that the SNA mode matches what is defined in your SNA configuration.

Note: One of the transfer definitions should be named PRIMARY if you intend to create additional data group definitions or system definitions that will use the default value PRIMARY for the Primary transfer definition PRITFRDFN parameter.

5. Verify that the journal definitions created have the information you want for the journal receiver prefix name, auxiliary storage pool, and journal receiver change management and delete management. The default journal receiver prefix for the user journal is generated; for the system journal, the default journal receiver prefix is AUDRCV. If you want to use a prefix other than these defaults, you will need to modify the journal definition using topic “Changing a journal definition” on page 217.

6. If you change the names of any of the system, transfer, or journal definitions created by the copy configuration command, ensure that you also update that name in other locations within the configuration.

Table 77. Changing named definitions after copying a configuration

If you change this name                   Also change the name in this location
System definition, SYSDFN parameter       • Transfer definition, TFRDFN parameter
                                          • Data group definition, DGDFN parameter
Transfer definition, TFRDFN parameter     • System definition, PRITFRDFN and SECTFRDFN parameters
                                          • Data group definition, PRITFRDFN and SECTFRDFN parameters
Journal definition, JRNDFN parameter      Data group definition, JRNDFN1 and JRNDFN2 parameters


7. Verify the data group definitions created have the correct job descriptions. Verify that the values of parameters for job descriptions are what you want to use. MIMIX provides default job descriptions that are tailored for their specific tasks.

Note: You may have multiple data groups created that you no longer need. Consider whether or not you can combine information from multiple data groups into one data group. For example, it may be simpler to have both database files and objects for an application be controlled by one data group.

8. Verify that the options which control data group file entries are set appropriately.

a. For data group definitions, ensure that the values for file entry options (FEOPT) are what you want as defaults for the data group.

b. Check the file entry options specified in each data group file entry. Any file entry options (FEOPT) specified in a data group file entry will override the default FEOPT values specified in the data group definition. You may need to modify individual data group file entries.

9. Check the data group entries for each data group. Ensure that all of the files and objects that you need to replicate are represented by entries for the data group. Be certain that you have checked the data group entries for your critical files and objects. Use the procedures in the Using MIMIX book to verify your configuration.

10. Check how the apply sessions are mapped for data group file entries. You may need to adjust the apply sessions.

11. Use Table 78 to create entries for any additional database files or objects that you need to add to the data group.

Table 78. How to configure data group entries for the preferred configuration.

Class: Library-based objects
Do the following:
1. Create object entries using “Creating data group object entries” on page 267.
2. After creating object entries, load file entries for LF and PF (source and data) *FILE objects using “Loading file entries from a data group’s object entries” on page 273.
   Note: If you cannot use MIMIX Dynamic Apply for logical files or PF data files, you should still create file entries for PF source files to ensure that legacy cooperative processing can be used.
3. After creating object entries, load object tracking entries for *DTAARA and *DTAQ objects that are journaled to a user journal. Use “Loading object tracking entries” on page 285.
Planning and Requirements Information:
• “Identifying library-based objects for replication” on page 100
• “Identifying logical and physical files for replication” on page 105
• “Identifying data areas and data queues for replication” on page 112

Class: IFS objects
Do the following:
1. Create IFS entries using “Creating data group IFS entries” on page 282.
2. After creating IFS entries, load IFS tracking entries for IFS objects that are journaled to a user journal. Use “Loading IFS tracking entries” on page 284.
Planning and Requirements Information:
• “Identifying IFS objects for replication” on page 118

Class: DLOs
Do the following:
Create DLO entries using “Creating data group DLO entries” on page 287.
Planning and Requirements Information:
• “Identifying DLOs for replication” on page 124

12. Use the #DGFE audit to confirm and automatically correct any problems found in file entries associated with data group object entries. Do the following:

a. Type WRKAUD RULE(#DGFE) and press Enter.

b. Next to the data group you want to confirm, type 9 (Run rule) and press Enter.

c. The results are placed in an outfile. For additional information, see “Interpreting results for configuration data - #DGFE audit” on page 580.

13. If you anticipate a delay between configuring and starting the data group and the data group contains object information, you should set object auditing to ensure that any transactions that occur during the delay will be replicated. Use the procedure “Setting data group auditing values manually” on page 297.

14. Verify that system-level communications are configured correctly.

a. If you are using SNA as a transfer protocol, verify that the MIMIX mode exists and that the communications entries are added to the MIMIXSBS subsystem.

b. If you are using TCP as a transfer protocol, verify that the MIMIX TCP server is started on each system (on each "side" of the transfer definition). You can use the WRKACTJOB command for this. Look for a job under the MIMIXSBS subsystem with a function of LV-SERVER.

c. Use the Verify Communications Link (VFYCMNLNK) command to ensure that a MIMIX installation on one system can communicate with a MIMIX installation on another system. Refer to topic “Verifying the communications link for a data group” on page 195.

15. Ensure that there are no users on the system that will be the source for replication for the rest of this procedure. Do not allow users onto the source system until you have successfully completed the last step of this procedure.

16. Start journaling using the following procedures as needed for your configuration.

• For user journal replication, use “Journaling for physical files” on page 326 to start journaling on both source and target systems.

• For IFS objects configured for advanced journaling, use “Journaling for IFS objects” on page 330.

• For data areas or data queues configured for advanced journaling, use “Journaling for data areas and data queues” on page 334.
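The procedures referenced in step 16 ultimately issue IBM i journaling commands. As a hedged illustration for a physical file (the library, file, and journal names here are placeholders, and the referenced topics define the values MIMIX actually requires):

STRJRNPF FILE(APPLIB/ORDERS) JRN(JRNLIB/APPJRN) IMAGES(*BOTH)

IMAGES(*BOTH) records both before-images and after-images of changed records.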

17. Synchronize the database files and objects on the systems between which replication occurs. Topic “Performing the initial synchronization” on page 483 includes instructions for how to establish a synchronization point and identifies the options available for synchronizing.

18. Start the system managers using topic “Starting the system and journal managers” on page 296.

19. Clear pending entries when you start the data groups. Use topic “Starting Selected Data Group Processes” in the Using MIMIX book.


Copying configuration procedure

This procedure addresses only some of the tasks needed to complete your configuration. Use this procedure only when directed from the “Checklist: copy configuration” on page 553.

Note: By default, the CPYCFGDTA command replaces all MIMIX configuration data in the current product library with the information from the specified library. Any configuration created in the product library will be replaced with data from the specified library. This may not be desirable.

To copy existing configuration data to the new MIMIX product, do the following:

1. The products in the installation library that will receive the copied configuration data must be shut down for the duration of this procedure. Use topic “Choices when ending replication” in the Using MIMIX book to end activity for the appropriate products.

2. Sign on to the system with the security officer (QSECOFR) user profile or with a user profile that has security officer class and all special authorities.

3. Access the MIMIX Basic Main Menu in the product library that will receive the copied configuration data. From the command line, type the command CPYCFGDTA and press F4 (Prompt).

4. At the Copy from library prompt, specify the name of the library from which you want to copy data.

5. To start copying configuration data, press Enter.

6. When the copy is complete, return to topic “Checklist: copy configuration” on page 553 to verify your configuration.


Appendix C

Configuring Intra communications

The MIMIX set of products supports a unique configuration called Intra. Intra is a special configuration that allows the MIMIX products to function fully within a single-system environment. Intra support replicates database and object changes to other libraries on the same system by using system facilities that allow for communications to be routed back to the same system. This provides an excellent way to have a test environment on a single machine that is similar to a multiple-system configuration. The Intra environment can also be used to perform backups while the system remains active.

In an Intra configuration, the product is installed into two libraries on the same system and configured in a special way. An Intra configuration uses these libraries to replicate data to additional disk storage on the same system. The second library in effect becomes a "backup" library.

By using an Intra configuration you can reduce or eliminate your downtime for routine operations such as performing daily and weekly backups. When replicating changes to another library, you can suspend the application of the replicated changes. This enables you to concurrently back up the copied library to tape while your application remains active. When the backup completes, you can resume operations that apply replicated changes to the "backup" library.

An Intra configuration enables you to have a "live" copy of data or objects that can be used to offload queries and report generation. You can also use an Intra configuration as a test environment prior to installing MIMIX on another system or connecting your applications to another System i5.

Because both libraries exist on the same system, an Intra configuration does not provide protection from disaster.

Database replication within an Intra configuration requires that the source and target files either have different names or reside in different libraries. Similarly, objects cannot be replicated to identically named objects in the same library, folder, or directory.

Note: Newly created data groups use remote journaling as the default configuration. Remote journaling is not compatible with intra communications, so you must use source send configuration when configuring for intra communications.

This section includes the following procedures:

• “Manually configuring Intra using SNA” on page 559

• “Manually configuring Intra using TCP” on page 561

Manually configuring Intra using SNA

In an Intra environment, MIMIX communicates between two product libraries on the same system instead of between a local system and a remote system. If you manually configure the communications necessary for Intra, consider the default product library (MIMIX) to be the local system and the second product library (in this example, MIMIXI) to be the remote system.

If you need to manually configure SNA communications for an Intra environment, do the following:

1. Create the system definitions for the product libraries used for Intra as follows:

a. For the MIMIX library (local system), use the local location name in the following command:

CRTSYSDFN SYSDFN(local-location-name) TYPE(*MGT) TEXT('Manual creation')

b. For the MIMIXI library (remote system), use the following command:

CRTSYSDFN SYSDFN(INTRA) TYPE(*NET) TEXT('Manual creation')

2. Create the transfer definition between the two product libraries with the following command:

CRTTFRDFN TFRDFN(PRIMARY INTRA local-location-name) PROTOCOL(*SNA) LOCNAME1(INTRA1) LOCNAME2(INTRA2) NETID1(*LOC) TEXT('Manual creation')

3. Create the MIMIX mode description using the following command:

CRTMODD MODD(MIMIX) MAXSSN(100) MAXCNV(100) LCLCTLSSN(12) TEXT('MIMIX INTRA MODE DESCRIPTION – Manual creation.')

4. Create a controller description for MIMIX Intra using the following command:

CRTCTLAPPC CTLD(MIMIXINTRA) LINKTYPE(*LOCAL) TEXT('MIMIX INTRA – Manual creation.')

5. Create a local device description for MIMIX using the following command:

CRTDEVAPPC DEVD(MIMIX) RMTLOCNAME(INTRA1) LCLLOCNAME(INTRA2) CTL(MIMIXINTRA) MODE(MIMIX) APPN(*NO) SECURELOC(*YES) TEXT('MIMIX INTRA – Manual creation.')

6. Create a remote device description for MIMIX using the following command:

CRTDEVAPPC DEVD(MIMIXI) RMTLOCNAME(INTRA2) LCLLOCNAME(INTRA1) CTL(MIMIXINTRA) MODE(MIMIX) APPN(*NO) SECURELOC(*YES) TEXT('MIMIX REMOTE INTRA SUPPORT.')

7. Add a communication entry to the MIMIXSBS subsystem for the local location using the following command:

ADDCMNE SBSD(MIMIXQGPL/MIMIXSBS) RMTLOCNAME(INTRA2) JOBD(MIMIXQGPL/MIMIXCMN) DFTUSR(MIMIXOWN) MODE(MIMIX)

8. Add a communication entry to the MIMIXSBS subsystem for the remote location using the following command:

ADDCMNE SBSD(MIMIXQGPL/MIMIXSBS) RMTLOCNAME(INTRA1) JOBD(MIMIXQGPL/MIMIXCMN) DFTUSR(MIMIXOWN) MODE(MIMIX)

9. Vary on the controller, local device, and remote device using the following commands:

VRYCFG CFGOBJ(MIMIXINTRA) CFGTYPE(*CTL) STATUS(*ON)

VRYCFG CFGOBJ(MIMIX) CFGTYPE(*DEV) STATUS(*ON)

VRYCFG CFGOBJ(MIMIXI) CFGTYPE(*DEV) STATUS(*ON)

10. Start the MIMIX system manager in both product libraries using the following commands:

MIMIX/STRMMXMGR SYSDFN(*INTRA) MGR(*ALL)

MIMIX/STRMMXMGR SYSDFN(*LOCAL) MGR(*JRN)

Note: You still need to configure journal definitions and data group definitions.

Manually configuring Intra using TCP

In an Intra environment, MIMIX communicates between two product libraries on the same system instead of between a local system and a remote system. The libraries for the MIMIX installations need to have the same name, with the Intra library having an 'I' appended to the end of the library name.

In this example, the MIMIX library is the management system and the MIMIXI library is the network system. If you manually configure the communications necessary for Intra, consider the MIMIX library as the local system and the MIMIXI library as the remote system. You may already have a management system defined and need to add an Intra network system. All the configuration should be done in the MIMIX library on the management system.

Note: If you have multiple network systems, you need to configure your transfer definitions to have the same name with system1 and system2 being different. For more information, see “Multiple network system considerations” on page 172.

To add an entry to the host name table, use the Configure TCP/IP (CFGTCP) command to access the Configure TCP/IP menu.

Select option 10 (Work with TCP/IP Host Table Entries) from the menu. From the Work with TCP/IP Host Table display, type a 2 (Change) next to the LOOPBACK entry and add 'INTRA' to that entry.
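The same change can also be made with the IBM Change TCP/IP Host Table Entry (CHGTCPHTE) command. This is a sketch only; it assumes the LOOPBACK entry is at the conventional 127.0.0.1 address and keeps its existing name:

CHGTCPHTE INTNETADR('127.0.0.1') HOSTNAME((LOOPBACK) (INTRA))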

For this example, the host name of the management system is Source and the host name for the network or target system is Intra.

1. Create the system definitions for the product libraries used for Intra as follows:

a. For the MIMIX library (local system), enter the following command:

MIMIX/CRTSYSDFN SYSDFN(SOURCE) TYPE(*MGT) TEXT('management system')

Note: You may have already configured this system.

b. For the MIMIXI library (remote system), use the following command:

MIMIX/CRTSYSDFN SYSDFN(INTRA) TYPE(*NET) TEXT('network system')


2. Create the transfer definition between the two product libraries with the following command. Note that the values for PORT1 and PORT2 must be unique.

MIMIX/CRTTFRDFN TFRDFN(PRIMARY SOURCE INTRA) HOST1(SOURCE) HOST2(INTRA) PORT1(55501) PORT2(55502)

3. Create auto-start jobs in the MIMIX subsystem for the port associated with each library so that the MIMIX TCP server is started automatically when the subsystem is started.

a. Within the MIMIX library, use the following commands:

CRTDUPOBJ OBJ(MIMIXCMN) FROMLIB(MIMIXQGPL) OBJTYPE(*JOBD) TOLIB(MIMIX) NEWOBJ(PORT55501)

CHGJOBD JOBD(MIMIX/PORT55501) RQSDTA('MIMIX/STRSVR HOST(SOURCE) PORT(55501) JOBD(MIMIX/PORT55501)')

ADDAJE SBSD(MIMIXQGPL/MIMIXSBS) JOB(PORT55501) JOBD(MIMIX/PORT55501)

b. Within the MIMIXI library, use the following commands:

CRTDUPOBJ OBJ(MIMIXCMN) FROMLIB(MIMIXQGPL) OBJTYPE(*JOBD) TOLIB(MIMIXI) NEWOBJ(PORT55502)

CHGJOBD JOBD(MIMIXI/PORT55502) RQSDTA('MIMIXI/STRSVR HOST(INTRA) PORT(55502) JOBD(MIMIXI/PORT55502)')

ADDAJE SBSD(MIMIXQGPL/MIMIXSBS) JOB(PORT55502) JOBD(MIMIXI/PORT55502)

4. Start the server for the management system (source) by entering the following command:

MIMIX/STRSVR HOST(SOURCE) PORT(55501) JOBD(MIMIX/PORT55501)

5. Start the server for the network system (Intra) by entering the following command:

MIMIXI/STRSVR HOST(INTRA) PORT(55502) JOBD(MIMIXI/PORT55502)
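To confirm that both servers are listening before continuing, you can use the IBM Work with TCP/IP Network Status (NETSTAT) command; this is a general i5/OS check, not a MIMIX command:

NETSTAT OPTION(*CNN)

Look for entries in Listen state on local ports 55501 and 55502.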

6. Start the system managers from the management system by entering the following command:

MIMIX/STRMMXMGR SYSDFN(INTRA) MGR(*ALL) RESET(*YES)

Start the remaining managers normally.

Note: You will still need to configure journal definitions and data group definitions on the management system.

You may want to add service table entries for ports 55501 and 55502 to ensure that other applications will not try to use these ports.
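Service table entries can be added with the IBM Add Service Table Entry (ADDSRVTBLE) command. The service names below are arbitrary placeholders, not names required by MIMIX:

ADDSRVTBLE SERVICE('mimix_intra1') PORT(55501) PROTOCOL('tcp') TEXT('Reserved for MIMIX Intra - MIMIX library')

ADDSRVTBLE SERVICE('mimix_intra2') PORT(55502) PROTOCOL('tcp') TEXT('Reserved for MIMIX Intra - MIMIXI library')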


Appendix D

MIMIX support for independent ASPs

MIMIX has always supported replication of library-based objects and IFS objects to and from the system auxiliary storage pool (ASP 1) and basic storage pools (ASPs 2-32). Now, MIMIX also supports replication of library-based objects and IFS objects, including journaled IFS objects, data areas, and data queues, located in independent ASPs (33-255).

The system ASP and basic ASPs are collectively known as SYSBAS. Figure 32 shows that MIMIX supports replication to and from SYSBAS and to and from independent ASPs. Figure 33 shows that MIMIX also supports replication from SYSBAS to an independent ASP and from an independent ASP to SYSBAS.

Figure 32. MIMIX supports replication to and from an independent ASP as well as standard replication to and from SYSBAS (the system ASP and basic ASPs).

Figure 33. MIMIX also supports replication between SYSBAS and an independent ASP.

1. An independent ASP is an iSeries construct introduced by IBM in V5R1 and extended in V5R2 of i5/OS.


Restrictions: There are several permanent and temporary restrictions that pertain to replication when an independent ASP is included in the MIMIX configuration. See “Requirements for replicating from independent ASPs” on page 567 and “Limitations and restrictions for independent ASP support” on page 567.

Benefits of independent ASPs

The key characteristic of an independent ASP is its ability to function independently from the rest of the storage on a server. Independent ASPs can also be made available and unavailable at the time of your choosing. The benefits of using independent ASPs in your environment can be significant. You can isolate infrequently used data that does not always need to be available when the system is up and running. If you have a lot of data that is unnecessary for day-to-day business operations, for example, you can isolate it and leave it offline until it is needed. This allows you to shorten processing time for other tasks, such as IPLs, reclaim storage, and system start time.

Additional benefits of independent ASPs allow you to do the following:

• Consolidate applications and data from multiple servers into a single System i5 allowing for simpler system management and application maintenance.

• Decrease downtime, enabling data on your system to be made available or unavailable without an IPL.

• Add storage as necessary, without having to make the system unavailable.

• Avoid the need to recover all data in the event of a system failure, since the data is isolated.

• Streamline naming conventions, since multiple instances of data with the same object and library names can coexist on a single System i5 in separate independent ASPs.

• Protect data that is unique to a specific environment by isolating data associated with specific applications from other groups of users.

Using MIMIX provides a robust solution for high availability and disaster recovery for data stored in independent ASPs.

Auxiliary storage pool concepts at a glance

An independent ASP is actually a part of the larger construct of an auxiliary storage pool (ASP). Each ASP on your system is a group of disk units that can be used to organize data for single-level storage to limit storage device failure and recovery time. The system spreads data across the disk units within an ASP.

Figure 34 shows the types and subtypes of ASPs. The system ASP (ASP 1) is defined by the system and consists of disk unit 1 and any other configured storage not assigned to a basic or independent ASP. The system ASP contains the system objects for the operating system and any user objects not defined to a basic or independent ASP.


User ASPs are additional ASPs defined by the user. A user ASP can either be a basic ASP or an independent ASP.

One type of user ASP is the basic ASP. Data that resides in a basic ASP is always accessible whenever the server is running. Basic ASPs are identified as ASPs 2 through 32. Attributes, such as those for spooled files, authorization, and ownership of an object, stored in a basic ASP reside in the system ASP. When storage for a basic ASP is filled, the data overflows into the system ASP.

Collectively, the system ASP and the basic ASPs are called SYSBAS.

Another type of user ASP is the independent ASP. Identified by device name and numbered 33 through 255, an independent ASP can be made available or unavailable to the server without restarting the system. Unlike basic ASPs, data in an independent ASP cannot overflow into the system ASP. Independent ASPs are configured using iSeries Navigator.

Figure 34. Types of auxiliary storage pools.

Subtypes of independent ASPs consist of primary, secondary, and user-defined file system (UDFS) independent ASPs. Subtypes can be grouped together to function as a single entity known as an ASP group. An ASP group consists of a primary independent ASP and zero or more secondary independent ASPs. For example, if you make one independent ASP unavailable, the others in the ASP group are made unavailable at the same time.

A primary independent ASP defines a collection of directories and libraries and may have associated secondary independent ASPs. A primary independent ASP defines a database for itself and other independent ASPs belonging to its ASP group. The primary independent ASP name is always the name of the ASP group in which it resides.

A secondary independent ASP defines a collection of directories and libraries and must be associated with a primary independent ASP. One common use for a secondary independent ASP is to store the journal receivers for the objects being journaled in the primary independent ASP.

1. MIMIX does not support UDFS independent ASPs. UDFS independent ASPs contain only user-defined file systems and cannot be a member of an ASP group unless they are converted to a primary or secondary independent ASP.


Before an independent ASP is made available (varied on), all primary and secondary independent ASPs in the ASP group undergo a process similar to a server restart. While this processing occurs, the ASP group is in an active state and recovery steps are performed. The primary independent ASP is synchronized with any secondary independent ASPs in the ASP group, and journaled objects are synchronized with their associated journal.

While being varied on, several server jobs are started in the QSYSWRK subsystem to support the independent ASP. To ensure that their names remain unique on the server, server jobs that service the independent ASP are given their own job name when the independent ASP is made available.

Once the independent ASP is made available, it is ready to use. Completion message CPC2605 (vary on completed for device name) is sent to the history log.


Requirements for replicating from independent ASPs

The following requirements must be met before MIMIX can support your independent ASP environment:

• License Program 5722-SS1 option 12 (Host Server) must be installed in order for MIMIX to properly replicate objects in an independent ASP on the source and target systems.

• Any PTFs for i5/OS that are identified as being required need to be installed on both the source and target systems. Log in to Support Central and check the Technical Documents page for a list of i5/OS PTFs that may be required.

• MIMIX product libraries, the LAKEVIEW library, and the MIMIXQGPL library must be installed into SYSBAS.

Limitations and restrictions for independent ASP support

• Although you can use the same library name between independent ASPs, an independent ASP cannot share a library name with a library in the system ASP or basic ASPs (SYSBAS). SYSBAS is a component of every name space, so the presence of a library name in SYSBAS precludes its use in any independent ASP. This will affect how you configure objects for replication with MIMIX, especially for IFS objects. See “Configuring library-based objects when using independent ASPs” on page 569.

• Unlike basic ASPs, when an independent ASP fills, no new objects can be created into the device. Also, updates to existing objects in the independent ASP, such as adding records to a file, may not be successful. If an independent ASP attached to the target system fills, your high-availability and disaster recovery solutions are compromised.

• IBM restricts the object types that can be stored in an independent ASP. For example, DLOs cannot reside in an independent ASP.

Restrictions in MIMIX support for independent ASPs include the following:

• MIMIX supports the replication of objects in primary and secondary independent ASPs only. Replication of IFS objects that reside in user-defined file system (UDFS) independent ASPs is not supported.

• You should not place libraries in independent ASPs within the system portion of a library list. MIMIX commands automatically call the IBM command SETASPGRP, which can result in significant changes to the library list for the associated user job. See “Avoiding unexpected changes to the library list” on page 570.


• MIMIX product libraries, the LAKEVIEW library, and the MIMIXQGPL library must be installed into SYSBAS. These libraries cannot exist in an independent ASP.

• Any *MSGQ libraries, *JOBD libraries, and *OUTFILE libraries specified on MIMIX commands must reside in SYSBAS.

• For successful replication, ASP devices in ASP groups that are configured in data group definitions must be made available (varied on). Objects in independent ASPs attached to the source system cannot be journaled if the device is not available. Objects cannot be applied to an independent ASP on the target system if the device is not available.

• Planned switchovers of data groups that include an ASP group must take place while the ASP devices on both the source and target systems are available. If the ASP device for the data group on either the source or target system is unavailable at the time the planned switchover is attempted, the switchover will not complete.

• To support an unplanned switch (failover), the independent ASP device on the backup system (which will become the temporary production system) must be available in order for the failover to complete successfully.

• You must run the Set ASP Group (SETASPGRP) command on the local system before running the Send Network Object (SNDNETOBJ) command if the object you are attempting to send to a remote system is located in an independent ASP.

Also be aware of the following temporary restrictions:

• MIMIX does not perform validity checking to determine if the ASP group specified in the data group definition actually exists on the systems. This may cause error conditions when running commands.

• Any monitors configured for use with MIMIX must specify the ASP group. Monitors of type *JRN or *MSGQ that watch for events in an independent ASP must specify the name of the ASP group where the journal or message queue exists. This is done with the ASPGRP parameter of the CRTMONOBJ command.

• Information regarding independent ASPs is not provided on the following displays: Display Data Group File Entry (DSPDGFE), Display Data Group Data Area Entry (DSPDGDAE), Display Data Group Object Entry (DSPDGOBJE), and Display Data Group Activity Entry (DSPDGACTE). To determine the independent ASP in which the object referenced in these displays resides, see the data group definition.

Configuration planning tips for independent ASPs

A job can only reference one independent ASP at a time. Storing applications and programs in SYSBAS ensures that they are accessible by any job. Data stored in an independent ASP is not accessible for replication when the independent ASP is varied off.

For database replication and replication of objects through Advanced Journaling support, due to the requirement for one user journal per data group, it is not possible for a single data group to replicate both SYSBAS data and ASP group data.


For object replication of library-based objects through the system journal, you should configure related objects in SYSBAS and an ASP group to be replicated by the same data group. Objects in SYSBAS and an ASP group that are not related should be separated into different data groups. This precaution ensures that the data group will start and that objects residing in SYSBAS will be replicated when the independent ASP is not available.

Note: To avoid replicating an object by more than one data group, carefully plan what generic library names you use when configuring data group object entries in an environment that includes independent ASPs. Make every attempt to avoid replicating both SYSBAS data and independent ASP data for objects within the same data group. See the example in “Configuring library-based objects when using independent ASPs” on page 569.

Journal and journal receiver considerations for independent ASPs

For database replication and replication of objects through Advanced Journaling support, the data to be replicated and the journal used for its replication must exist in the same ASP. When you configure replication for an independent ASP, consider what data you store there and the location of the journal and journal receivers needed to replicate the data.

With independent ASPs, you have the option of placing journal receivers in an associated secondary independent ASP. When you create an independent ASP, an ASP group is automatically created that uses the same name you gave the primary independent ASP.

Configuring IFS objects when using independent ASPs

Replication of IFS objects in an independent ASP is supported through default replication processes and through MIMIX Advanced Journaling support. However, there are differences in how to configure for these different environments.

For IFS replication by default object replication processes, you do not need to identify an ASP group in a data group definition because an IFS object’s path includes the independent ASP device name.

However, for IFS replication through Advanced Journaling support, you must specify the ASP group name in the data group definition so that MIMIX can locate the appropriate user journal.

If you are using Advanced Journaling support and want to limit a data group to only replicate IFS objects from SYSBAS, specify *NONE for the ASP group parameters in the data group definition.

Configuring library-based objects when using independent ASPs

Use care when creating generic data group object entries; otherwise you can create situations where the same object is replicated by multiple data groups. This applies to replication between independent ASPs as well as replication between an independent ASP and SYSBAS.


For example, data group APP1 defines replication between ASP groups named WILLOW on each system. Similarly, data group APP2 defines replication between ASP groups named OAK on each system. Both data groups have a generic data group object entry that includes object XYZ from library names beginning with LIB*. If object LIBASP/XYZ exists in both independent ASPs and matches the generic data group object entry defined in each data group, both data groups replicate the corresponding object. This is considered normal behavior for replication between independent ASPs, as shown in Figure 35.

However, in this example, if SYSBAS contains an object that matches the generic data group object entry defined for each data group, the same object is replicated by both data groups. Figure 35 shows that object LIBBAS/XYZ meets the criteria for replication by both data groups, which is not desirable.

Figure 35. Object XYZ in library LIBBAS is replicated by both data groups APP1 and APP2 because the data groups contain the same generic data group object entry. As a result, this presents a problem if you need to perform a switch.
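The overlap described above can be checked mechanically. The following sketch is illustrative only and is not a MIMIX tool; the entry table, object list, and helper function are invented for the example. It flags objects whose library/object name matches the generic data group object entry of more than one data group:

```python
# Illustrative check, not a MIMIX tool: flag objects whose library/object name
# matches the generic data group object entries of more than one data group.
# The entry table, object list, and helper below are invented for the example.
import fnmatch

# Generic data group object entries: data group -> (library pattern, object name)
entries = {"APP1": ("LIB*", "XYZ"), "APP2": ("LIB*", "XYZ")}

# Candidate objects as library/object; LIBBAS is assumed to live in SYSBAS,
# so both data groups can see it (the problem case from Figure 35).
objects = ["LIBASP/XYZ", "LIBBAS/XYZ"]

def matching_groups(obj):
    """Return the data groups whose generic entry matches this object."""
    lib, name = obj.split("/")
    return [dg for dg, (libpat, objname) in entries.items()
            if fnmatch.fnmatch(lib, libpat) and name == objname]

for obj in objects:
    groups = matching_groups(obj)
    if len(groups) > 1:
        print(f"{obj} matches the generic entry of: {groups}")
```

Note that matching more than one data group is normal between independent ASPs, since each data group sees its own name space; it is only a problem for SYSBAS objects, which a real check would also account for.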

Avoiding unexpected changes to the library list

It is recommended that the system portion of your library list does not include any libraries that exist in an ASP group.

Whenever you run a MIMIX command, MIMIX automatically determines whether the job requires a call to the IBM command Set ASP Group (SETASPGRP). The SETASPGRP command changes the current job's ASP group environment and enables MIMIX to access objects that reside in independent ASP libraries. MIMIX resets the job's ASP group to its initial value as needed before processing is completed.

The SETASPGRP command may modify the library list of the current job. If the library list contains libraries for ASP groups other than those used by the ASP group for which the command was called, SETASPGRP removes the extra libraries from the library list. This can affect the system and user portions of the library list as well as the current library in the library list.

When a MIMIX command runs the SETASPGRP command during processing, MIMIX resets the user portion of the library list and the current library in the library list to their initial values. The system portion of the library list is not restored to its initial value.
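The library-list behavior described above can be sketched as follows. This is an illustration of the documented behavior, not MIMIX code; the tuple layout and helper names are invented for the example:

```python
# Sketch of the documented library-list behavior; this is not MIMIX code, and
# the tuple layout and helper names are invented for the illustration.
# Entries are (library, portion, ASP device or None for SYSBAS), per Figure 36.
initial = [
    ("LIBSYS1", "SYS", "WILLOW"), ("LIBSYS2", "SYS", None),
    ("LIBSYS3", "SYS", None),     ("LIBCUR1", "CUR", "WILLOW"),
    ("LIBUSR1", "USR", "OAK"),    ("LIBUSR2", "USR", None),
]

def during_setaspgrp(libl):
    """While the command runs, independent ASP libraries drop out of the list."""
    return [(lib, part, None) for lib, part, _ in libl]

def after_command(initial, during):
    """MIMIX restores the CUR and USR portions; the SYS portion is left as-is."""
    return [cur if cur[1] == "SYS" else orig
            for cur, orig in zip(during, initial)]

final = after_command(initial, during_setaspgrp(initial))
# LIBSYS1 has lost WILLOW; LIBCUR1 and LIBUSR1 are back to their initial values.
```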

Figure 36, Figure 37, and Figure 38 show how the system portion of the library list is affected on the Display Library List (DSPLIBL) display when the SETASPGRP command is run.

Figure 36. Before a MIMIX command runs. The library list contains three independent ASP libraries, including a library in independent ASP WILLOW in the system portion of the library list.

                            Display Library List
                                                        System:   CHICAGO
    Type options, press Enter.
      5=Display objects in library
    Opt  Library    Type  ASP device  Text
         LIBSYS1    SYS   WILLOW
         LIBSYS2    SYS
         LIBSYS3    SYS
         LIBCUR1    CUR   WILLOW
         LIBUSR1    USR   OAK
         LIBUSR2    USR
    F3=Exit   F12=Cancel   F17=Top   F18=Bottom

Figure 37. During the running of a MIMIX command. The independent ASP libraries are removed from the library list.

                            Display Library List
                                                        System:   CHICAGO
    Type options, press Enter.
      5=Display objects in library
    Opt  Library    Type  ASP device  Text
         LIBSYS1    SYS
         LIBSYS2    SYS
         LIBSYS3    SYS
         LIBCUR1    CUR
         LIBUSR1    USR
         LIBUSR2    USR
    F3=Exit   F12=Cancel   F17=Top   F18=Bottom

Figure 38. After the MIMIX command runs. The library in independent ASP WILLOW in the system portion of the library list is removed. The libraries in independent ASP OAK in the user portion of the library list and the current library are restored.

                            Display Library List
                                                        System:   CHICAGO
    Type options, press Enter.
      5=Display objects in library
    Opt  Library    Type  ASP device  Text
         LIBSYS1    SYS
         LIBSYS2    SYS
         LIBSYS3    SYS
         LIBCUR1    CUR   WILLOW
         LIBUSR1    USR   OAK
         LIBUSR2    USR
    F3=Exit   F12=Cancel   F17=Top   F18=Bottom

The SETASPGRP command can return escape message LVE3786 if License Program 5722-SS1 option 12 (Host Server) is not installed.

Detecting independent ASP overflow conditions

You can take advantage of the independent ASP threshold monitor to detect independent ASP overflow conditions that put your high availability solution at risk due to insufficient storage.

The independent ASP threshold monitor, MMIASPTHLD, monitors the QSYSOPR message queue in library QSYS for messages indicating that the amount of storage used by an independent ASP exceeds a defined threshold. When this condition is detected, the monitor sends a warning notification that the threshold is exceeded. The status of warning notifications is incorporated into overall MIMIX status. Notifications can be displayed from MIMIX Availability Manager or with the Work with Notifications (WRKNFY) command.

Each ASP defaults to 90% as the threshold value. To change the threshold value, you must use IBM's iSeries Navigator.
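As a rough sketch of the threshold check, consider the following. This is illustrative only: the shipped MMIASPTHLD monitor detects the condition by watching QSYSOPR messages, not by polling usage figures as this sketch does, and the ASP names and usage values are invented:

```python
# Illustrative only: the shipped MMIASPTHLD monitor detects this condition by
# watching QSYSOPR messages, not by polling usage as this sketch does. The ASP
# names and usage figures are invented; 90 is the per-ASP default threshold.
DEFAULT_THRESHOLD = 90  # percent; change the real value with iSeries Navigator

def exceeds_threshold(used_pct, threshold=DEFAULT_THRESHOLD):
    """True when storage use has reached the warning threshold."""
    return used_pct >= threshold

asps = {"WILLOW": 72.5, "OAK": 93.1}
warnings = [name for name, used in asps.items() if exceeds_threshold(used)]
print(warnings)
```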

The independent ASP threshold monitor is shipped with MIMIX. The monitor is not automatically started after MIMIX is installed. If you want to use this monitor, you must start it. The monitor is controlled by the master monitor.



Appendix E Interpreting audit results

Audits use commands that compare and synchronize data. The results of the audits are placed in output files associated with the commands. The following topics provide supporting information for interpreting data returned in the output files.

• “Interpreting audit results - MIMIX Availability Manager” on page 575 describes how to check the status of an audit and resolve any problems that occur from within MIMIX Availability Manager.

• “Interpreting audit results - 5250 emulator” on page 576 describes how to check the status of an audit and resolve any problems that occur from a 5250 emulator.

• “Checking the job log of an audit” on page 578 describes how to use an audit’s job log to determine why an audit failed.

• “Interpreting results for configuration data - #DGFE audit” on page 580 describes the #DGFE audit, which uses the Check Data Group File Entries (CHKDGFE) command to verify the configuration data defined for replication.

• “Interpreting results of audits for record counts and file data” on page 582 describes the audits and commands that compare file data or record counts.

• “Interpreting results of audits that compare attributes” on page 586 describes the Compare Attributes commands and their results.


Interpreting audit results - MIMIX Availability Manager

When viewing results of audits, the starting point is the Audit Summary window. You may also need to view the output file or the job log, which are only available from the system where the audits ran. In most cases, this is the management system.

Do the following:

1. Ensure that you have selected the management system for the installation you want from the navigation bar. If you are not certain which system is the management system, you can select Services to check.

2. From the management system, select Audit Summary from the navigation bar.

3. In the Audit Summary window, check the State and Results columns for the values shown in Table 79. Audits with potential problems are at the top of the list.

4. For each audit, flyover text for the status icon identifies the appropriate action to take. Table 79 provides additional information.

Table 79. Addressing audit problems - MIMIX Availability Manager

State: Rule Failed
Results: (blank)
Action: Check the job log or run the rule for the audit again. To run the audit, select Run from the action list and click. To see the job log, refer to “Checking the job log of an audit” on page 578 for more information.

State: Rule Failed
Results: User journal replication is not active
Action: Confirm that data group processes are active and run the rule for the audit again.
1. Check the data group status. Select Data Groups from the navigation bar. Then select the data group from the list.
2. In the Summary area, confirm that replication processes are active. If necessary, select the Start action and click.
3. When processes are active, select Summary from the navigation area.
4. Locate the audit in question. Select the Run action and click.

State: Completed Successfully
Results: Differences detected, recovery disabled
Action: The detected differences must be manually resolved. Do the following:
1. Select Output File from the action list and click.
2. The detected differences are displayed. Look for items with a Difference Indicator value of *NC or *NE. You can display details about the error or attempt the possible recovery action available.
3. Select the action you want and click.
To have MIMIX recover differences on subsequent audits, change the value of the automatic audit recovery policy.

State: Completed Successfully
Results: Differences detected, some objects not recovered
Action: The remaining detected differences must be manually resolved.
Note: For audits using the #MBRRCDCNT rule, automatic recovery is not possible. Other audits, such as #FILDTA, may correct the detected differences.
Do the following:
1. Select Output File from the action list and click.
2. The detected differences are displayed. Look for items with a Difference Indicator value of *NE, *NC, or *RCYFAILED. If automatic audit recovery is disabled, you may see other values as well. For the #MBRRCDCNT results, also look for values of: *HLD, *LCK, *NF1, *NF2, *SJ, *UE, and *UN. You can display details about the error or attempt the possible recovery action available.
3. Select the action you want and click.

For more information about the values displayed in the audit results, see “Interpreting results for configuration data - #DGFE audit” on page 580, “Interpreting results of audits for record counts and file data” on page 582, and “Interpreting results of audits that compare attributes” on page 586.

Interpreting audit results - 5250 emulator

When viewing results of audits, the starting point is the Summary view of the Work with Audits display. You may also need to view the output file or the job log, which are only available from the system where the audits ran. In most cases, this is the management system.

Do the following from the management system:

1. Do one of the following to access the Work with Audits display.

• From a command line, enter WRKAUD VIEW(*AUDSTS)

• From the MIMIX Availability Status display, use option 5 (Display details) next to Audits and notifications. Then, if necessary, use F10 to access the appropriate view.

2. Check the Audit Status column for the values shown in Table 80. Audits with potential problems are at the top of the list. Take the action indicated in Table 80.

Table 80. Addressing audit problems - 5250 emulator

Compliance Status: *FAILED
Action: The audit failed for these possible reasons.
Reason 1: The rule called by the audit failed or ended abnormally.
• To run the rule for the audit again, select option 9 (Run rule).
• To check the job log, see “Checking the job log of an audit” on page 578.
Reason 2: The #FILDTA audit or the #MBRRCDCNT audit required replication processes that were not active.
1. From the MIMIX Availability Status display, check whether there are any problems indicated for replication processes.
2. If there are no problems with replication processes, use F20 to access a command line and type WRKAUD. Then skip to Step 6.
3. If there are replication problems, use option 9 (Troubleshoot) next to the Replication activity.
4. On the Work with Data Groups display, if processes for the data group show a red I, L, or P in the Source and Target columns, use option 9 (Start DG).
5. When processes are active, use F7 to view audits.
6. From the Work with Audits display, use option 9 (Run rule) to run the audit.

Compliance Status: *DIFNORCY
Action: The comparison performed by the audit detected differences. No recovery actions were attempted because automatic audit recovery is disabled.
1. Use option 7 to view notifications for the audit.
2. A subsetted list of the notifications for the audit appears. Use option 8 to view the results in the output file.
3. Check the Difference Indicator column for values of *NC and *NE. You will need to manually resolve these differences. To have MIMIX recover differences on subsequent audits, change the value of the automatic audit recovery policy.

Compliance Status: *NOTRCVD
Action: The comparison performed by the audit detected differences. Some of the differences were not automatically recovered. The remaining detected differences must be manually resolved.
Note: For audits using the #MBRRCDCNT rule, automatic recovery is not possible. Other audits, such as #FILDTA, may correct the detected differences.
Do the following:
1. Use option 7 to view notifications for the audit.
2. A subsetted list of the notifications for the audit appears. Use option 8 to view the results in the output file.
3. Check the Difference Indicator column for values of *NC, *NE, and *RCYFAILED. If automatic audit recovery is disabled, you may see other values as well. For the #MBRRCDCNT results, also look for values of: *HLD, *LCK, *NF1, *NF2, *SJ, *UE, and *UN. You will need to manually resolve these differences.


Checking the job log of an audit

An audit’s job log can provide more information about why an audit failed. The job log may be available from the system on which the notification was sent. Typically, this is the management system.

From MIMIX Availability Manager, to check the job log for an audit, do the following:

1. For the audit in question, select the Job logs action and click. This choice is only available when viewing audits from the sending system.

2. The Job Log window opens. Look at the most recent messages to determine the cause of the audit failure.

Note: If you see ‘no data available’ instead, you may still be able to view the job log from the 5250 emulator as described below.

From a 5250 emulator, you must display the notifications from an audit in order to view the job log. Do the following:

1. From the Work with Audits display, type 7 (Notification) next to the audit and press Enter.

2. The notifications associated with the audit are displayed on the Work with Notifications display. Use option 5 (Display) or F22 to view the description in the Notification column.

3. If the notification is not sufficient to determine the problem, use option 12 (Display job) next to the notification.

4. The Display Job menu opens. Select option 4 (Display spooled files). Then use option 5 (Display) from the Display Job Spooled Files display.

5. Look for a completion message from the rule with the text indicated from Step 2. Usually the most recent messages are at the bottom of the display.


Interpreting results for configuration data - #DGFE audit

The #DGFE audit verifies the configuration data that is defined for replication in your configuration. This audit invokes the Check Data Group File Entries (CHKDGFE) command for the audit’s comparison phase. The CHKDGFE command collects data on the source system and generates a report in a spooled file or an outfile.

One possible reason why actual configuration data in your environment may not match what is defined to your configuration is that a file was deleted but the associated data group file entries were left intact. Another reason is that a data group file entry was specified with a member name, but a member is no longer defined to that file. If you use the automatic scheduling and automatic audit recovery functions of MIMIX AutoGuard, these configuration problems can be automatically detected and recovered for you.

The report is available on the system where the command ran. The report displays values that indicate problems or whether a recovery was attempted. When the Check Data Group File Entries (CHKDGFE) command is run, the following values can be indicated in the report:

• No file entry exists (*NODGFE)

• An extra file entry exists (*EXTRADGFE)

• No file for the existing file entry exists (*NOFILE)

• No file member for the existing file entry exists (*NOMBR)

• File entries are in transition and cannot be compared (*UA).

When the #DGFE rule is called and a recovery is attempted, the following values can also be indicated in the report:

• Recovered by automatic recovery actions (*RECOVERED)

• Automatic audit recovery actions were attempted but failed to correct the detected error (*RCYFAILED)

Table 81 provides examples of when various configuration errors might occur. Table 82 provides possible problem resolution actions for these errors:

Table 81. CHKDGFE - possible error conditions

Result        File exists   Member exists   DGFE exists   DGOBJE exists
*NODGFE       Yes           Yes             No            COOPDB(*YES)
*EXTRADGFE    Yes           Yes             Yes           COOPDB(*NO)
*NOFILE       No            No              Yes           Exclude
*NOMBR        Yes           No              Yes           No entry


Table 82. CHKDGFE - possible error resolution actions

*NODGFE       Create the DGFE or change the DGOBJE to COOPDB(*NO). This change applies to all objects using the object entry; if you do not want all objects changed to this value, copy the existing DGOBJE to a new, specific DGOBJE with the appropriate COOPDB value.

*EXTRADGFE    Delete the DGFE or change the DGOBJE to COOPDB(*YES). This change applies to all objects using the object entry; if you do not want all objects changed to this value, copy the existing DGOBJE to a new, specific DGOBJE with the appropriate COOPDB value.

*NOFILE       Delete the DGFE, re-create the missing file, or restore the missing file.

*NOMBR        Delete the DGFE for the member or add the member to the file.
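When scripting triage of #DGFE audit results, the resolution actions in Table 82 can be kept in a simple lookup. The helper below is hypothetical and not part of MIMIX; only the result values and action text come from the table:

```python
# Hypothetical lookup of the Table 82 resolution actions, keyed by the CHKDGFE
# result value; handy when scripting triage of #DGFE audit output. This helper
# is not part of MIMIX.
RECOVERY_ACTIONS = {
    "*NODGFE":    "Create the DGFE, or change the DGOBJE to COOPDB(*NO)",
    "*EXTRADGFE": "Delete the DGFE, or change the DGOBJE to COOPDB(*YES)",
    "*NOFILE":    "Delete the DGFE, or re-create or restore the missing file",
    "*NOMBR":     "Delete the DGFE for the member, or add the member to the file",
}

for result in ("*NOMBR", "*NOFILE"):
    print(f"{result}: {RECOVERY_ACTIONS[result]}")
```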


Interpreting results of audits for record counts and file data

The audits and commands that compare file data or record counts are as follows:

• #FILDTA audit or Compare File Data (CMPFILDTA) command

• #MBRRCDCNT audit or Compare Record Count (CMPRCDCNT) command

Each record in the output files for these audits or commands identifies a file member that has been compared and indicates whether a difference was detected for that member.

MIMIX Availability Manager displays only detected differences found by each compare command using a subset of the fields from the output file. You can see the full set of fields in each output file by viewing it from a 5250 emulator.

The type of data included in the output file is determined by the report type specified on the compare command. When viewed from a 5250 emulator, the data included for each report type is as follows:

• Difference reports return information about detected differences. Difference reports are the default for these compare commands.

• Full reports return information about all objects and attributes compared. Full reports include both differences and objects that are considered synchronized.

• Relative record number reports return the relative record number of the first 1,000 records of a member that fail to compare. Relative record number reports apply only to the Compare File Data command.

What differences were detected by #FILDTA

The Difference Indicator (DIFIND) field identifies the result of the comparison. Table 83 identifies the values for the Compare File Data command that can appear in this field.

Table 83. Possible values for Compare File Data (CMPFILDTA) output file field Difference Indicator (DIFIND). Updated for 5.0.06.00.

*APY         The database apply (DBAPY) job encountered a problem processing a U-MX journal entry for this member.
*CMT         Commit cycle activity on the source system prevents active processing from comparing records or record counts in the selected member.
*CO          Unable to process selected member. Cannot open file.
*CO (LOB)    Unable to process selected member containing a large object (LOB). The file or the MIMIX-created SQL view cannot be opened.
*DT          Unable to process selected member. The file uses an unsupported data type.
*EQ          Record counts match. No differences were detected. Global difference indicator.
*EQ (OMIT)   No difference was detected. However, fields with unsupported types were omitted.
*FF          The file feature is not supported for comparison. Examples of file features include materialized query tables.
*FMC         Matching entry not found in database apply table.
*FMT         Unable to process selected member. File formats differ between source and target files. Either the record length or the null capability is different.
*HLD         Indicates that a member is held or an inactive state was detected.
*IOERR       Unable to complete processing on selected member. Messages preceding LVE0101 may be helpful.
*NE          Indicates a difference was detected.
*REP         The file member is being processed for repair by another job running the Compare File Data (CMPFILDTA) command.
*SJ          The source file is not journaled, or is journaled to the wrong journal.
*SP          Unable to process selected member. See messages preceding message LVE3D42 in the job log.
*SYNC        The file or member is being processed by the Synchronize DG File Entry (SYNCDGFE) command.
*UE          Unable to process selected member. Reason unknown. Messages preceding message LVE3D42 in the job log may be helpful.
*UN          Indicates that the member’s synchronization status is unknown.

What differences were detected by #MBRRCDCNT

Table 84 identifies the values for the Compare Record Count command that can appear in the Difference Indicator (DIFIND) field.

Table 84. Possible values for Compare Record Count (CMPRCDCNT) output file field Difference Indicator (DIFIND). Updated for 5.0.06.00.

*CMT         Commit cycle activity on the source system prevents active processing from comparing records or record counts in the selected member.
*EQ          Record counts match. No difference was detected. Global difference indicator.
*FF          The file feature is not supported for comparison. Examples of file features include materialized query tables.
*HLD         Indicates that a member is held or an inactive state was detected.
*LCK         Lock prevented access to member.
*NE          Indicates a difference was detected.
*NF1         Member not found on system 1.
*NF2         Member not found on system 2.
*SJ          The source file is not journaled, or is journaled to the wrong journal.
*UE          Unable to process selected member. Reason unknown. Messages preceding LVE3D42 in the job log may be helpful.
*UN          Indicates that the member’s synchronization status is unknown.
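A script that post-processes a CMPRCDCNT output file might classify members by their DIFIND value along the lines below. This is an illustrative sketch: the row layout and helper function are hypothetical, and the value sets follow Table 84 together with the earlier guidance that #MBRRCDCNT cannot automatically recover differences:

```python
# Illustrative sketch for post-processing a CMPRCDCNT output file: classify
# each member by its DIFIND value. The row layout and helper are hypothetical;
# the value sets follow Table 84 (#MBRRCDCNT cannot auto-recover differences).
SYNCHRONIZED = {"*EQ"}
RESOLVE_MANUALLY = {"*NE", "*HLD", "*LCK", "*NF1", "*NF2", "*SJ", "*UE", "*UN"}

def classify(difind):
    """Coarse status for one member's Difference Indicator value."""
    if difind in SYNCHRONIZED:
        return "synchronized"
    if difind in RESOLVE_MANUALLY:
        return "resolve manually"
    return "unable to compare"  # e.g. *CMT or *FF

rows = [("APPLIB", "ORDERS", "*EQ"), ("APPLIB", "ITEMS", "*NF2")]
for lib, mbr, difind in rows:
    print(f"{lib}/{mbr}: {classify(difind)}")
```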


Interpreting results of audits that compare attributes

Each audit that compares attributes does so by calling a Compare Attributes command (1) and places the results in an output file. Each row in an output file for a Compare Attributes command can contain either a summary record format or a detailed record format. Each summary row identifies a compared object and includes a prioritized object-level summary of whether differences were detected. Each detail row identifies a specific attribute compared for an object and the comparison results.

The type of data included in the output file is determined by the report type specified on the Compare Attributes command. When viewed from a 5250 emulator, the data included for each report type is as follows:

• Difference reports (RPTTYPE(*DIF)) return information about detected differences. Only summary rows for objects that had detected differences are included. Detail rows for all compared attributes are included. Difference reports are the default for the Compare Attributes commands.

• Full reports (RPTTYPE(*ALL)) return information about all objects and attributes compared. For each object compared there is a summary row as well as a detail row for each attribute compared. Full reports include both differences and objects that are considered synchronized.

• Summary reports (RPTTYPE(*SUMMARY)) return only a summary row for each object compared. Specific attributes compared are not included.

For difference and full reports of compare attribute commands, several of the attribute selectors return an indicator (*INDONLY) rather than an actual value. Attributes that return indicators are usually variable in length, so an indicator is returned to conserve space. In these instances, the attributes are checked thoroughly, but the report only contains an indication of whether it is synchronized.

For example, an authorization list can contain a variable number of entries. When comparing authorization lists, the CMPOBJA command will first determine if both lists have the same number of entries. If the same number of entries exist, it will then determine whether both lists contain the same entries. If differences in the number of entries are found or if the entries within the authorization list are not equal, the report will indicate that differences are detected. The report will not provide the list of entries—it will only indicate that they are not equal in terms of count or content.
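The two-step comparison described above can be sketched as follows. This is illustrative only; the function name and the *EQ/*NE return values merely mimic the indicator-style result the report contains:

```python
# Sketch of the two-step authorization list comparison described above
# (illustrative only; the *EQ/*NE return values mimic the indicator-style
# result the report contains rather than any actual CMPOBJA interface).
def compare_autl(entries1, entries2):
    """Compare entry counts first, then the entries themselves."""
    if len(entries1) != len(entries2):
        return "*NE"  # counts differ, so contents need not be compared
    return "*EQ" if sorted(entries1) == sorted(entries2) else "*NE"

print(compare_autl(["USERA", "USERB"], ["USERB", "USERA"]))
```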

MIMIX Availability Manager displays only detected differences found by Compare Attributes commands using a subset of the fields from the output file. MIMIX Availability Manager displays summary rows in the Summary List window and detail rows in the Details window for the Compare command type. You can see the full set of fields in the output file by viewing it from a 5250 emulator.

1. The Compare Attribute commands are: Compare File Attributes (CMPFILA), Compare Object Attributes (CMPOBJA), Compare IFS Attributes (CMPIFSA), and Compare DLO Attributes (CMPDLOA).


What attribute differences were detected

The Difference Indicator (DIFIND) field identifies the result of the comparison. Table 85 identifies values that can appear in this field. Not all values may be valid for every Compare command.

Within MIMIX Availability Manager, the value shown in the Summary List window is a prioritized summary of the status of all attributes checked for the object. This summary value is also presented, along with other object-identifying information, at the top of the Details window. For each attribute displayed on the Details window, the results of its comparison are shown.

When the output file is viewed from a 5250 emulator, the summary row is the first record for each compared object and is indicated by an asterisk (*) in the Compared Attribute (CMPATR) field. The summary row’s Difference Indicator value is the prioritized summary of the status of all attributes checked for the object. When included, detail rows appear below the summary row for the object compared and show the actual result for the attributes compared.

The Priority column in Table 85 indicates the order of precedence MIMIX uses when determining the prioritized summary value for the compared object.

Table 85. Possible values for output file field Difference Indicator (DIFIND)

• *CO (priority 1): Unable to process selected member. Cannot open file.
• *CO (LOB): Unable to process selected member containing a large object (LOB). The MIMIX-created SQL view cannot be opened.
• *CMT (priority N/A): An open commit cycle on the source system prevents active processing from comparing one or more records in the selected member.
• *DT (priority 1): Unable to process selected member. The file uses an unsupported data type.
• *EC (priority 5): The values are equal based on the MIMIX configuration settings. The actual values may or may not be equal.
• *EQ (priority 5): Record counts match. No differences were detected. Global difference indicator.
• *EQ (OMIT): No differences were detected. However, fields with unsupported types were omitted.
• *FMT (priority 1): Unable to process selected member. File formats differ between source and target files. Either the record length or the null capability is different.
• *HLD (priority N/A): Indicates that a member is held or an inactive state was detected.
• *IOERR (priority 1): Unable to complete processing on selected member. Messages preceding LVE0101 may be helpful.
• *LCK: Lock prevented access to member.
• *NA (priority 5): The values are not compared. The actual values may or may not be equal.
• *NC (priority 3): The values are not equal based on the MIMIX configuration settings. The actual values may or may not be equal.
• *NE (priority 2): Indicates differences were detected.
• *NF1: Member not found on system 1.
• *NF2: Member not found on system 2.
• *NS (priority 5): Indicates that the attribute is not supported on one of the systems. Will not cause a global not equal condition.
• *RCYSBM: Indicates that MIMIX AutoGuard submitted an automatic audit recovery action that must be processed through the user journal replication processes. The database apply (DBAPY) will attempt the recovery and send an *ERROR or *INFO notification to indicate the outcome of the recovery attempt.
• *RCYFAILED: Indicates that automatic recovery attempts via AutoGuard failed to recover the detected difference.
• *RECOVERED (priority 1; see note 3): Indicates that recovery for this object was successful.
• *SJ (priority 1): Unable to process selected member. The source file is not journaled.
• *SP (priority 1): Unable to process selected member. See messages preceding message LVE3D42 in the job log.
• *SYNC (priority N/A): Unable to process selected member. The file is being processed by the Synchronize DG File Entry (SYNCDGFE) command.
• *UA (priority 2): Object status is unknown due to object activity. If an object difference is found and the comparison has a value specified on the Maximum replication lag prompt, the difference is seen as unknown due to object activity. This status is only displayed in the summary record. Note: The Maximum replication lag prompt is only valid when a data group is specified on the command.
• *UE (priority 1): Unable to process selected member. Reason unknown. Messages preceding message LVE3D42 in the job log may be helpful.
• *UN (priority 4): Indicates that the object's synchronization status is unknown.

Notes for Table 85:
1. Not all values may be possible for every Compare command.
2. Priorities are used to determine the value shown in output files for Compare Attribute commands.
3. The value *RECOVERED can only appear in an output file modified by a recovery action. The object was initially found to be *NE or *NC but MIMIX autonomic functions recovered the object.
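The precedence column works as a minimum: the summary row's Difference Indicator is the detail result with the lowest priority number. The sketch below illustrates this; the priority values are taken from Table 85, but the mapping and helper function are illustrative, not MIMIX code.

```python
# Priorities from Table 85 (1 = highest precedence). Values with no listed
# priority are omitted here.
DIFIND_PRIORITY = {
    "*CO": 1, "*DT": 1, "*FMT": 1, "*IOERR": 1, "*SJ": 1, "*SP": 1,
    "*UE": 1, "*RECOVERED": 1,
    "*NE": 2, "*UA": 2,
    "*NC": 3,
    "*UN": 4,
    "*EC": 5, "*EQ": 5, "*NA": 5, "*NS": 5,
}

def summary_difind(detail_values):
    """Prioritized summary over an object's detail-row Difference Indicators."""
    return min(detail_values, key=lambda v: DIFIND_PRIORITY[v])
```

For example, an object whose attributes compared as *EQ, *UN, and *NC would show *NC in its summary row, because priority 3 outranks priorities 4 and 5.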


For most attributes, when a detailed row contains blanks in either of the System 1 Indicator or System 2 Indicator fields, MIMIX determines the value of the Difference Indicator field according to Table 86. For example, if the System 1 Indicator is *NOTFOUND and the System 2 Indicator is blank (Object found), the resultant Difference Indicator is *NE.

For a small number of specific attributes, the comparison is more complex. The results returned vary according to parameters specified on the compare request and MIMIX configuration values. For more information see the following topics:

• “Comparison results for journal status and other journal attributes” on page 608

• “Comparison results for auxiliary storage pool ID (*ASP)” on page 612

• “Comparison results for user profile status (*USRPRFSTS)” on page 615

• “Comparison results for user profile password (*PRFPWDIND)” on page 619

Where was the difference detected

The System 1 Indicator (SYS1IND) and System 2 Indicator (SYS2IND) fields show the status of the attribute on each system as determined by the compare request. Table 87 identifies the possible values. While these fields are available in both summary and detail rows in the output file, MIMIX Availability Manager only displays them in the Details window.

Table 86. Difference Indicator values that are derived from System Indicator values. Rows are System 2 Indicator values; columns are System 1 Indicator values. A blank indicator value means the object was found.

                      System 1 Indicator
System 2 Indicator    (blank)      *NOTCMPD   *NOTFOUND  *NOTSPT    *RTVFAILED  *DAMAGED
(blank)               (compared)   *NA        *NE        *NS        *UN         *NE
*NOTCMPD              *NA          *NA        *NE        *NS        *UN         *NE
*NOTFOUND             *NE / *UA    *NE / *UA  *EQ        *NE / *UA  *NE / *UA   *NE
*NOTSPT               *NS          *NS        *NE        *NS        *UN         *NE
*RTVFAILED            *UN          *UN        *NE        *UN        *UN         *NE
*DAMAGED              *NE          *NE        *NE        *NE        *NE         *NE

(compared) = both objects were found, so the attribute values themselves are compared; the result is *EQ, *EQ (LOB), *NE, *UA, *EC, or *NC.
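Read as a lookup, Table 86 maps the pair of system indicators to the detail-row Difference Indicator. The sketch below encodes it in Python for illustration only; cells the table shows as "*NE / *UA" are simplified here to *NE, since *UA applies only when a Maximum replication lag is in effect, and "*CMP" is a placeholder meaning "both found, compare the values themselves".

```python
# System indicator values; "" (blank) means the object was found.
IND_ORDER = ["", "*NOTCMPD", "*NOTFOUND", "*NOTSPT", "*RTVFAILED", "*DAMAGED"]

# Rows keyed by System 2 Indicator; columns ordered as IND_ORDER
# (System 1 Indicator). "*NE / *UA" cells are simplified to "*NE".
TABLE_86 = {
    "":           ["*CMP", "*NA", "*NE", "*NS", "*UN", "*NE"],
    "*NOTCMPD":   ["*NA",  "*NA", "*NE", "*NS", "*UN", "*NE"],
    "*NOTFOUND":  ["*NE",  "*NE", "*EQ", "*NE", "*NE", "*NE"],
    "*NOTSPT":    ["*NS",  "*NS", "*NE", "*NS", "*UN", "*NE"],
    "*RTVFAILED": ["*UN",  "*UN", "*NE", "*UN", "*UN", "*NE"],
    "*DAMAGED":   ["*NE",  "*NE", "*NE", "*NE", "*NE", "*NE"],
}

def derive_difind(sys1ind, sys2ind):
    """Look up the derived Difference Indicator for a detail row."""
    return TABLE_86[sys2ind][IND_ORDER.index(sys1ind)]
```

This reproduces the worked example in the text: System 1 Indicator *NOTFOUND with a blank System 2 Indicator yields *NE.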

Table 87. Possible values for output file fields SYS1IND and SYS2IND

• <blank> (priority 5): No special conditions exist for this object.
• *DAMAGED (priority 3): Object damaged condition.
• *MBRNOTFND (priority 2): Member not found.
• *NOTCMPD (priority N/A; see note 2): Attribute not compared. Due to MIMIX configuration settings, this attribute cannot be compared.
• *NOTFOUND (priority 1): Object not found.
• *NOTSPT (priority N/A; see note 2): Attribute not supported. Not all attributes are supported on all IBM i releases. This is the value used to indicate that an unsupported attribute has been specified.
• *RTVFAILED (priority 4): Unable to retrieve the attributes of the object. Reason for failure may be a lock condition.

Notes for Table 87:
1. The priority indicates the order of precedence MIMIX uses when setting the system indicator fields in the summary record.
2. This value is not used in determining the priority of summary level records.

For comparisons which include a data group, the Data Source (DTASRC) field identifies which system is configured as the source for replication. In MIMIX Availability Manager Details windows, the direction of the arrow shown in the data group field identifies the flow of replication.

What attributes were compared

In each detail row, the Compared Attribute (CMPATR) field identifies a compared attribute. The following topics identify the attributes that can be compared by each command and the possible values returned:

• “Attributes compared and expected results - #FILATR, #FILATRMBR audits” on page 591

• “Attributes compared and expected results - #OBJATR audit” on page 596

• “Attributes compared and expected results - #IFSATR audit” on page 604

• “Attributes compared and expected results - #DLOATR audit” on page 606


Attributes compared and expected results - #FILATR, #FILATRMBR audits

The Compare File Attributes (CMPFILA) command supports comparisons at the file and member level. Most of the attributes supported are for file-level comparisons. The #FILATR audit and the #FILATRMBR audit each invoke the CMPFILA command for the comparison phase of the audit.

Some attributes are common file attributes such as owner, authority, and creation date. Most of the attributes, however, are file-specific attributes. Examples of file-specific attributes include triggers, constraints, database relationships, and journaling information.

The Difference Indicator (DIFIND) returned after comparing file attributes may depend on whether the file is defined by file entries or object entries. For instance, an attribute could be equal (*EC) relative to the database configuration but not equal (*NC) relative to the object configuration. See “What attribute differences were detected” on page 587.

Table 88 lists the attributes that can be compared and the value shown in the Compared Attribute (CMPATR) field in the output file. The Returned Values column lists the values you can expect in the System 1 Value (SYS1VAL) and System 2 Value (SYS2VAL) columns as a result of running the comparison.

Table 88. Compare File Attributes (CMPFILA) attributes

• *ACCPTH (note 1): Access path. Returned values: AR - Arrival sequence access path. EV - Encoded vector with a 1-, 2-, or 4-byte vector. KC - Keyed sequence access path with duplicate keys allowed; duplicate keys are accessed in first-changed-first-out (FCFO) order. KF - Keyed sequence access path with duplicate keys allowed; duplicate keys are accessed in first-in-first-out (FIFO) order. KL - Keyed sequence access path with duplicate keys allowed; duplicate keys are accessed in last-in-first-out (LIFO) order. KN - Keyed sequence access path with duplicate keys allowed; no order is guaranteed when accessing duplicate keys. KU - Keyed sequence access path with no duplicate keys allowed (UNIQUE).
• *ACCPTHSIZ (note 1): Access path size. Returned values: *MAX4GB, *MAX1TB.
• *ALWDLT: Allow delete operation. Returned values: *YES, *NO.
• *ALWOPS: Allow operations. Group which checks attributes *ALWDLT, *ALWRD, *ALWUPD, *ALWWRT.
• *ALWRD: Allow read operation. Returned values: *YES, *NO.
• *ALWUPD: Allow update operation. Returned values: *YES, *NO.
• *ALWWRT: Allow write operation. Returned values: *YES, *NO.
• *ASP: Auxiliary storage pool ID. Returned values: 1-16 (pre-V5R2), 1-255 (V5R2); 1 = system ASP. See “Comparison results for auxiliary storage pool ID (*ASP)” on page 612 for details.
• *AUDVAL: Object audit value. Returned values: *NONE, *CHANGE, *ALL.
• *AUT: File authorities. Group which checks attributes *AUTL, *PGP, *PRVAUTIND, *PUBAUTIND.
• *AUTL: Authority list name. Returned values: *NONE, list name.
• *BASIC: Pre-determined set of basic attributes. Group which checks a pre-determined set of attributes. When *FILE is specified for the Comparison level (CMPLVL), these attributes are compared: *CST (group), *NBRMBR, *OBJATR, *RCDFMT, *TEXT, and *TRIGGER (group). When *MBR is specified for the Comparison level (CMPLVL), these attributes are compared: *CURRCDS, *EXPDATE, *NBRDLTRCD, *OBJATR, *SHARE, and *TEXT.
• *CCSID (note 1): Coded character set. Returned values: 1-65535.
• *CST: Constraint attributes. Group which checks attributes *CSTIND, *CSTNBR.
• *CSTIND (note 2): Constraint equal indicator. No value, indicator only (note 4). When this attribute is returned in output, its Difference Indicator value indicates if the number of constraints, constraint names, constraint types, and the check pending attribute are equal. For referential and check constraints, the constraint state as well as whether the constraint status is enabled or disabled is also compared.
• *CSTNBR (note 2): Number of constraints. Returned values: numeric value.
• *CURRCDS: Current number of records. Returned values: 0-4294967295.
• *DBCSCAP: DBCS capable. Returned values: *YES, *NO.
• *DBR: Group which checks *DBRIND, *OBJATR.
• *DBRIND (note 2): Database relations. No value, indicator only (note 4). When this attribute is returned in output, its Difference Indicator value indicates if the number of database relations and the dependent file names are equal.
• *EXPDATE (note 1): Expiration date for member. Returned values: blank for *NONE, or a date in CYYMMDD format, where C equals the century; value 0 is 19nn and 1 is 20nn.
• *EXTENDED: Pre-determined, extended set. Valid only for a Comparison level of *FILE, this group compares the basic set of attributes (*BASIC) plus an extended set of attributes. The following attributes are compared: *ACCPTH, *AUT (group), *CCSID, *CST (group), *CURRCDS, *DBR (group), *MAXKEYL, *MAXMBRS, *MAXRCDL, *NBRMBR, *OBJATR, *OWNER, *PFSIZE (group), *RCDFMT, *REUSEDLT, *SELOMT, *SQLTYP, *TEXT, and *TRIGGER (group).
• *FIRSTMBR (notes 1, 3): Name of member *FIRST. Returned values: 10 character name; *NONE if the file has no members.
• *FRCKEY (note 1): Force keyed access path. Returned values: *YES, *NO.
• *FRCRATIO (note 1): Records to force a write. Returned values: *NONE, 1-32767.
• *INCRCDS (note 1): Increment number of records. Returned values: 0-32767.
• *JOIN: Join logical file. Returned values: *YES, *NO. Add, update, and delete authorities are not checked; differences in these authorities do not result in an *NE condition.
• *JOURNAL: Journal attributes. Group which checks *JOURNALED, *JRN, *JRNLIB, *JRNIMG, *JRNOMIT. Results are described in “Comparison results for journal status and other journal attributes” on page 608.
• *JOURNALED: File is currently journaled. Returned values: *YES, *NO.
• *JRN: Current or last journal. Returned values: 10 character name; blank if never journaled.
• *JRNIMG: Record images. Returned values: *AFTER, *BOTH.
• *JRNLIB: Current or last journal library. Returned values: 10 character name; blank if never journaled.
• *JRNOMIT: Journal entries to be omitted. Returned values: *OPNCLO, *NONE.
• *LANGID (note 1): Language ID. Returned values: 3 character ID.
• *LASTMBR (notes 1, 3): Name of member *LAST. Returned values: 10 character name; *NONE if the file has no members.
• *LVLCHK (note 1): Record format level check. Returned values: *YES, *NO.
• *MAINT (note 1): Access path maintenance. Returned values: *IMMED, *REBLD, *DLY.
• *MAXINC (note 1): Maximum increments. Returned values: 0-32767.
• *MAXKEYL (note 1): Maximum key length. Returned values: 1-2000.
• *MAXMBRS (note 1): Maximum members. Returned values: *NOMAX, 1-32767.
• *MAXPCT (note 1): Maximum percentage of deleted records allowed. Returned values: *NONE, 1-100.
• *MAXRCDL (note 1): Maximum record length. Returned values: 1-32766.
• *NBRDLTRCD (note 1): Current number of deleted records. Returned values: 0-4294967295.
• *NBRMBR (note 1): Number of members. Returned values: 0-32767.
• *NBRRCDS (note 1): Initial number of records. Returned values: *NOMAX, 1-2147483646.
• *OBJCTLLVL (note 1): Object control level. Returned values: 8 character user-defined value.
• *OWNER: File owner. Returned values: user profile name.
• *PFSIZE: File size attributes. Group which checks *CURRCDS, *INCRCDS, *MAXINC, *NBRDLTRCD, *NBRRCDS.
• *PGP: Primary group. Returned values: *NONE, user profile name.
• *PRVAUTIND: Private authority indicator. No value, indicator only (note 4). When this attribute is returned in output, its Difference Indicator value indicates if the number of private authorities and private authority values are equal.
• *PUBAUTIND: Public authority indicator. No value, indicator only (note 4). When this attribute is returned in output, its Difference Indicator value indicates if public authority values are equal.
• *RCDFMT: Number of record formats. Returned values: 1-32.
• *RECOVER (note 1): Access path recovery. Returned values: *IPL, *AFTIPL, *NO.
• *REUSEDLT (note 1): Reuse deleted records. Returned values: *YES, *NO.
• *SELOMT: Select/omit file. Returned values: *YES, *NO.
• *SHARE (note 1): Share open data path. Returned values: *YES, *NO.
• *SQLTYP: SQL file type. Returned values: PF types - NONE, TABLE; LF types - INDEX, VIEW, NONE.
• *TEXT (note 1): Text description. Returned values: 50 character value.
• *TRIGGER: Group which checks *TRGIND, *TRGNBR, *TRGXSTIND.
• *TRGIND (note 2): Trigger equal indicator. No value, indicator only (note 4). When this attribute is returned in output, its Difference Indicator value indicates whether each trigger is enabled or disabled, and if the number of triggers, trigger names, trigger time, trigger event, and trigger condition with an event type of ‘update’ are equal.
• *TRGNBR (note 2): Number of triggers. Returned values: numeric value.
• *TRGXSTIND (note 2): Trigger existence indicator. No value, indicator only (note 4). When this attribute is returned in output, its Difference Indicator value indicates if a trigger program exists on the system.
• *USRATR: User-defined attribute. Returned values: 10 character user-defined value.
• *WAITFILE (note 1): Maximum file wait time. Returned values: *IMMED, *CLS, 1-32767.
• *WAITRCD (note 1): Maximum record wait time. Returned values: *IMMED, *NOMAX, 1-32767.

Notes for Table 88:
1. Differences detected for this attribute are marked as *EC (equal configuration) when the compare request specified a data group and the object is configured for system journal replication with a configured object auditing value of *NONE.
2. This attribute cannot be specified as input for comparing, but it is included in a group attribute. When the group attribute is checked, this value may appear in the output.
3. Differences detected for this attribute are marked as *EC (equal configuration) when the compare request specified a data group and the file is configured for system journal replication with a configured Omit content (OMTDTA) value of *FILE.
4. If *PRINT is specified in the comparison, an indicator appears in the System 1 and System 2 columns. If *OUTFILE is specified, however, these values are blank.

Updated for 5.0.11.00.
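As a side note on the CYYMMDD format used by *EXPDATE, the century digit can be decoded as shown below. The helper is hypothetical, not part of MIMIX.

```python
# Decode the CYYMMDD date format used for *EXPDATE: the leading century
# digit C is 0 for 19nn and 1 for 20nn; a blank value means *NONE.

def cyymmdd_to_iso(value):
    """Convert a CYYMMDD string such as '1080722' to ISO 'YYYY-MM-DD'."""
    if not value.strip():
        return "*NONE"            # blank means no expiration date
    c, yy, mm, dd = int(value[0]), value[1:3], value[3:5], value[5:7]
    return f"{1900 + 100 * c + int(yy):04d}-{mm}-{dd}"
```

For example, '0980315' decodes to 1998-03-15 and '1080722' to 2008-07-22.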


Attributes compared and expected results - #OBJATR audit

The #OBJATR audit calls the Compare Object Attributes (CMPOBJA) command and places the results in an output file. Table 89 lists the attributes that can be compared by the CMPOBJA command and the value shown in the Compared Attribute (CMPATR) field in the output file. The command supports attributes that are common among most library-based objects as well as extended attributes which are unique to specific object types, such as subsystem descriptions, user profiles, and data areas. The Returned Values column lists the values you can expect in the System 1 Value (SYS1VAL) and System 2 Value (SYS2VAL) columns as a result of running the compare.

Table 89. Compare Object Attributes (CMPOBJA) attributes

Attribute Description Returned Values (SYS1VAL, SYS2VAL)

*ACCPTHSIZ1 2 Access path sizeValid for logical files only.

*MAX4GB and *MAX1TB

*AJEIND Auto start job entries.Valid for subsystem descriptions only.

No value, indicator only3 When this attribute is returned in output, its Difference Indicator value indicates if the number of auto start job entries, job entry and associated job description, and library entry values are equal.

*ASP Auxiliary storage pool ID 1-16 (pre-V5R1)1-32 (V5R1)1-255 (V5R2), 1 = System ASPSee “Comparison results for auxiliary storage pool ID (*ASP)” on page 612 for details.

*ASPNBR Number of defined storage pools.Valid for subsystem descriptions only.

Numeric value

*ATTNPGM2 Attention key handling program Valid for user profiles only.

*SYSVAL, *NONE, *ASSIST, attention program name

*AUDVAL Object audit value *NONE, *USRPRF, *CHANGE, *ALL

*AUT Authority attributes Group which checks *AUTL, *PGP, *PRVAUTIND, *PUBAUTIND

*AUTCHK2 Authority to check.Valid for job queues only.

*OWNER, *DTAAUT

*AUTL Authority list name *NONE, list name

*BASIC Pre-determined set of basic attributes

Group which checks a pre-determined set of attributes. These attributes are compared: *CRTTSP, *DOMAIN, *INFSTS, *OBJATR, *TEXT, and *USRATR.

596

Page 597: MIMIX Reference

*CCSID2 Character identifier control.Valid for user profiles only.

*SYSVAL, ccsid-value

*CNTRYID2 Country ID Valid for user profiles only.

*SYSVAL, country-id

*COMMEIND Communications entriesValid for subsystem descriptions only.

No value, indicator only3 When this attribute is returned in output, its Difference Indicator value indicates if the number of communication entries, maximum number of active jobs, communication device, communication mode, associated job description and library, and the default user entry values are equal.

*CRTAUT2 Authority given to users who do not have specific authority to the object.Valid for libraries only.

*SYSVAL, *CHANGE, *ALL, *USE, *EXCLUDE, *SYSVAL, *CHANGE, *ALL, *USE, *EXCLUDE

*CRTOBJAUD2 Auditing value for objects created in this libraryValid for libraries only.

*SYSVAL, *NONE, *USRPRF, *CHANGE, *ALL

*CRTOBJOWN Profile that owns objects created by userValid for user profiles only.

*USRPRF, *GRPPRF, profile-name

*CRTTSP Object creation date YYYY-MM-DD-HH.MM.SS.mmmmmm

*CURLIB Current library Valid for user profiles only.

*CRTDFT, current-library

*DATACRC2 Data cyclic redundancy check (CRC)Valid for data queues only.

10 character value

*DDMCNV2 DDM conversationValid for job descriptions only.

*KEEP, *DROP

*DECPOS Decimal positionsValid for data areas only.

0-9

*DOMAIN Object Domain *SYSTEM, *USER

*DTAARAEXT Data area extended attributes

Group which checks *DECPOS, *LENGTH, *TYPE, *VALUE

Table 89. Compare Object Attributes (CMPOBJA) attributes

Attribute Description Returned Values (SYS1VAL, SYS2VAL)

597

Page 598: MIMIX Reference

*EXTENDED Pre-determined, extended set

Group which compares the basic set of attributes (*BASIC) plus an extended set of attributes. The following attributes are compared: *AUT, *CRTTSP, *DOMAIN, *INFSTS, *OBJATR, *TEXT, and *USRATR.

*FRCRATIO1 2 Records to force a writeValid for logical files only.

*NONE, 1 - 32,767

*GID Group profile ID numberValid for user profiles only.

1 - 4294967294

*GRPAUT Group authority to created objects Valid for user profiles only.

*NONE, *ALL, *CHANGE, *USE, *EXCLUDE

*GRPAUTTYP Group authority typeValid for user profiles only.

*PGP, *PRIVATE

*GRPPRF Group profile nameValid for user profiles only.

*NONE, profile-name

*INFSTS Information status *OK (No errors occurred), *RTVFAILED (No information returned - insufficient authority or object is locked), *DAMAGED (Object is damaged or partially damaged).

*INLMNU Initial menuValid for user profiles only.

Menu - *SIGNOFF, menu nameLibrary - *LIBL, library name

*INLPGM Initial programValid for user profiles only.

Program - *NONE, program name Library - *LIBL, library name

*JOBDEXT Job description extended attributes

Group which checks *DDMCNV, *JOBQ, *JOBQLIB, *JOBQPRI, *LIBLIND, *LOGOUTPUT, *OUTQ, *OUTQLIB, *OUTQPRI, *PRTDEV

*JOBQ2 Job queueValid for job descriptions only.

10 character name

*JOBQEIND Job queue entriesValid for subsystem descriptions only.

No value, indicator only3 When this attribute is returned in output, its Difference Indicator value indicates if the number of job queue entries, job queue names, job queue libraries, and order of entries are the same

Table 89. Compare Object Attributes (CMPOBJA) attributes

Attribute Description Returned Values (SYS1VAL, SYS2VAL)

598

Page 599: MIMIX Reference

*JOBQEXT Job queue extended attributes

Group which checks *AUTCHK, *JOBQSBS, *JOBQSTS, *OPRCTL

*JOBQLIB2 Job queue libraryValid for job descriptions only.

10 character name

*JOBQPRI2 Job queue priorityValid for job descriptions only.

1 (highest) - 9 (lowest)

*JOBQSBS2 Subsystem that receives jobs from this queueValid for job queues only.

Subsystem name

*JOBQSTS2 Job queue statusValid for job queues only.

HELD, RELEASED

*JOURNAL Journal attributes Group which checks *JOURNALED, *JRN, *JRNLIB, *JRNIMG, *JRNOMIT4.Results are described in “Comparison results for journal status and other journal attributes” on page 608.

*JOURNALED Object is currently journaled

*YES, *NO

*JRN Current or last journal 10 character name

*JRNIMG Record images *AFTER, *BOTH

*JRNLIB Current or last journal library

10 character name

*JRNOMIT Journal entries to be omitted

*OPNCLO, *NONE

*LANGID2 Language IDValid for user profiles only.

*SYSVAL, language-id

*LENGTH Data area lengthValid for data areas only

1-2000 (character), 1-24 (decimal), 1 (logical)

*LIBEXT Extended library information attributes

Group which checks *CRTAUT, *CRTOBJAUD

*LIBLIND Initial library listValid for job descriptions only.

No value, indicator only3 When this attribute is returned in output, its Difference Indicator value indicates if the number of library list entries and entry list values are equal. The comparison is order dependent.

Table 89. Compare Object Attributes (CMPOBJA) attributes

Attribute Description Returned Values (SYS1VAL, SYS2VAL)

599

Page 600: MIMIX Reference

*LMTCPB Limit capabilitiesValid for user profiles only.

*PARTIAL, *YES, *NO

*LOGOUTPUT2 Job log outputValid for job descriptions only.

*SYSVAL, *JOBLOGSVR, *JOBEND, *PND

*LVLCHK1 2 Record format level checkValid for logical files only.

*YES, *NO

*MAINT1 2 Access path maintenanceValid for logical files only.

*DLY, *IMMED, *REBLD

*MAXACT 2 Maximum active jobsValid for subsystem descriptions only.

Numeric value, *NOMAX (32,767)

*MAXMBRS1 2 Maximum membersValid for logical files only.

*NOMAX, 1 - 32,767

*MSGQ2 Message queueValid for user profiles only.

Message queue - message queue nameLibrary - *LIBL, library name

*NBRMBR1 2 Number of logical file membersValid for logical files only.

0 - 32,767

*OBJATR Object attribute 10 character object extended attribute

*OBJCTLLVL2 Object control levelValid for object types that support this attribute5.

8 character user-defined value

*OPRCTL2 Operator controlledValid for job queues only.

*YES, *NO

*OUTQ2 Output queueValid for job descriptions only.

*USRPRF, *DEV, *WRKSTN, output queue name

*OUTQLIB2 Output queue libraryValid for job descriptions only.

10 character name

*OUTQPRI2 Output queue priorityValid for job descriptions only.

1 (highest) - 9 (lowest)

Table 89. Compare Object Attributes (CMPOBJA) attributes

Attribute Description Returned Values (SYS1VAL, SYS2VAL)

600

Page 601: MIMIX Reference

*OWNER Object owner 10 character name

*PGP Primary group *NONE, user profile name

*PRESTIND Pre-start job entriesValid for subsystem descriptions only.

No value, indicator only1 When this attribute is returned in output, its Difference Indicator value indicates if the number of prestart jobs, program, user profile, start job, wait for job, initial jobs, maximum jobs, additional jobs, threshold, maximum users, job name, job description, first and second class, and number of first and second class jobs values are equal.

*PRFOUTQ2 Output queueValid for user profiles only.

*LIBL/*WRKSTN, *DEV

*PRFPWDIND User profile password indicator

See “Comparison results for user profile password (*PRFPWDIND)” on page 619 for details.

*PRTDEV2 Printer deviceValid for job descriptions only.

*USRPRF, *SYSVAL, *WRKSTN, printer device name

*PRVAUTIND Private authority indicator No value, indicator only3 When this attribute is returned in output, its Difference Indicator value indicates if the number of private authorities and private authority values are equal

*PUBAUTIND Public authority indicator No value, indicator only3 When this attribute is returned in output, its Difference Indicator value indicates if the public authority values are equal.

*PWDEXPITV Password expiration intervalValid for user profiles only.

*SYSVAL, *NOMAX, 1-366 days

*PWDIND No password indicatorValid for user profiles only.

*YES (no password), *NO (password)

*QUEALCIND Job queue allocation indicatorValid for subsystem descriptions only.

No value, indicator only3 When this attribute is returned in output, its Difference Indicator value indicates if the job queue entries for a subsystem are in the same order and have the same queue names and queue library names. It also compares the allocation indicator values

Table 89. Compare Object Attributes (CMPOBJA) attributes

Attribute Description Returned Values (SYS1VAL, SYS2VAL)

601

Page 602: MIMIX Reference

*RLOCIND Remote location entriesValid for subsystem descriptions only.

No value, indicator only3 When this attribute is returned in output, its Difference Indicator value indicates if the number of remote location entries, remote location, mode, job description and library, maximum active jabs, and default user entry values are equal.

*RTGEIND Routing entriesValid for subsystem descriptions only.

No value, indicator only3 When this attribute is returned in output, its Difference Indicator value indicates if the number of routing entries, sequence number, maximum active, steps, compare start, entry program, class, and compare entry values are equal

*SBSDEXT Subsystem description extended attributes

Group which checks *AJEIND, *ASPNBR, *COMMEIND, *JOBQEIND, *MAXACT, *PRESTIND, *RLOCIND, *RTGEIND, *SBSDSTS

*SBSDSTS2 Subsystem statusValid for subsystem descriptions only.

*ACTIVE, *INACTIVE

*SIZE Object size Numeric value

*SPCAUTIND Special authoritiesValid for user profiles only.

No value, indicator only3 When this attribute is returned in output, its Difference Indicator value indicates if special authority values are equal

*SQLSP SQL stored proceduresValid for programs and service programs only.

*NONE, or indicator only3 *NONE is returned when there are no stored procedures associated with the program or service program.When the indicator only is returned in output, the Difference Indicator value identifies whether SQL stored procedures associated with the object are equal.

*SQLUDF SQL user defined functionsValid for programs and service programs only.

*NONE, or indicator only3 *NONE is returned when there are no user defined functions associated with the program or service program.When the indicator only is returned in output, the Difference Indicator value identifies whether SQL user defined functions associated with the object are equal.

*SUPGRPIND Supplemental GroupsValid for user profiles only.

No value, indicator only3 When this attribute is returned in output, its Difference Indicator value indicates if supplemental group values are equal

*TEXT2 Text description 50 character description

Table 89. Compare Object Attributes (CMPOBJA) attributes

Attribute Description Returned Values (SYS1VAL, SYS2VAL)

602

Page 603: MIMIX Reference

Updated for 5.0.03.00 and 5.0.07.00.

*TYPE Data area type - data area types of DDM resolved to actual data area typesValid for data areas only.

*CHAR, *DEC, *LGL

*UID User profile ID numberValid for user profiles only.

1 - 4294967294

*USRATR2 User-defined attribute 10 character user-defined value

*USRCLS User ClassValid for user profiles only.

*SECOFR, *SECADM, *PGMR, *SYSOPR, *USER

*USRPRFEXT User profile extended attributes

Group which checks *ATTNPGM, *CCSID, *CNTRYID, *CRTOBJOWN, *CURLIB, *GID, *GRPAUT, *GRPAUTTYP, *GRPPRF, *INLMNU, *INLPGM, *LANGID, *LMTCPB, *MSGQ, *PRFOUTQ, *PWDEXPITV, *PWDIND, *SPCAUTIND, *SUPGRPIND, *USRCLS

*USRPRFSTS User profile status *ENABLED, *DISABLED6

For details, see “Comparison results for user profile status (*USRPRFSTS)” on page 615.

*VALUE2 Data area valueValid for data areas only.

Character value of data

1. This attribute only applies to logical files. Use the Compare File Attributes (CMPFILA) command to compare or omit physical file attributes.

2. Differences detected for this attribute are marked as *EC (equal configuration) when the compare request specified a data group and the object is configured for system journal replication with a configured object auditing value of *NONE.

3. If *PRINT is specified for the output format on the compare request, an indicator appears in the System 1 and System 2 columns. If *OUTFILE is specified, these values are blank.

4. These attributes are compared for object types of *FILE, *DTAQ, and *DTAARA. These are the only objects supported by IBM's user journals.

5. The *OBJCTLLVL attribute is only supported on the following object types: *AUTL, *CNNL, *COSD, *CTLD, *DEVD, *DTAARA, *DTAQ, *FILE, *IPXD, *LIB, *LIND, *MODD, *NTBD, *NWID, *NWSD, and *USRPRF.

6. The profile status is only compared if no data group is specified or the USRPRFSTS has a value of *SRC for the specified data group. If a data group is specified on the CMPOBJA command and the USRPRFSTS value on the object entry has a value of *TGT, *ENABLED, or *DISABLED, the user profile status is not compared.
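Footnote 6 above is a small decision rule. As a rough illustration only (the function name and arguments are invented for this sketch and are not part of MIMIX), the rule can be expressed as:

```python
# Illustrative sketch of footnote 6: whether CMPOBJA compares the
# *USRPRFSTS attribute. Names and signature are hypothetical.
def compares_usrprfsts(data_group, configured_sts="*SRC"):
    """Return True when user profile status would be compared."""
    if data_group is None:
        # No data group on the compare request: always compared.
        return True
    # With a data group, only a configured value of *SRC allows the compare.
    return configured_sts == "*SRC"
```

For example, `compares_usrprfsts("MYDG", "*TGT")` is False, matching the footnote.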


Attributes compared and expected results - #IFSATR audit

The #IFSATR audit calls the Compare IFS Attributes (CMPIFSA) command and places the results in an output file. Table 90 lists the attributes that can be compared by the CMPIFSA command and the value shown in the Compared Attribute (CMPATR) field in the output file. The Returned Values column lists the values you can expect in the System 1 Value (SYS1VAL) and System 2 Value (SYS2VAL) columns as a result of running the compare.

Table 90. Compare IFS Attributes (CMPIFSA) attributes

Attribute Description Returned Values (SYS1VAL, SYS2VAL)

*ALWSAV 1 Allow save *YES, *NO

*ASP Auxiliary storage pool 1-16 (pre-V5R1); 1-255 (V5R1); 1 = System ASP. See "Comparison results for auxiliary storage pool ID (*ASP)" on page 612 for details.

*AUDVAL Object auditing value *ALL, *CHANGE, *NONE, *USRPRF

*AUT Authority attributes Group which checks attributes *AUTL, *PGP, *PUBAUTIND, *PRVAUTIND

*AUTL Authority list name *NONE, list name

*BASIC Pre-determined set of basic attributes Group which checks a pre-determined set of attributes. The following set of attributes are compared: *CCSID, *DATASIZE, *OBJTYPE, and the group *PCATTR.

*CCSID 1 Coded character set 1-65535

*CRTTSP 2 Create timestamp SAA format (YY-MM-DD-HH.MM.SS.mmmmmm)

*DATACRC Data cyclic redundancy check (CRC) 8 character value

*DATASIZE 1 Data size 0-4294967295

*EXTENDED Pre-determined, extended set Group which checks a pre-determined set of attributes. Compares the basic set of attributes (*BASIC) plus an extended set of attributes. The following attributes are compared: *AUT (group), *CCSID, *DATASIZE, *OBJTYPE, *OWNER, and *PCATTR (group).

*JOURNAL Journal information Group which checks attributes *JOURNALED, *JRN, *JRNLIB, *JRNIMG, *JRNOPT. Results are described in "Comparison results for journal status and other journal attributes" on page 608.

*JOURNALED File is currently journaled *YES, *NO

*JRN Current or last journal 10 character name

*JRNIMG Record images *AFTER, *BOTH


Updated for 5.0.07.00.

*JRNLIB Current or last journal library 10 character name

*JRNOPT Journal optional entries *YES, *NO

*OBJTYPE Object type *STMF, *DIR, *SYMLNK

*OWNER File owner 10 character name

*PCARCHIVE 1 Archived file *YES, *NO

*PCATTR PC attributes Group which checks *PCARCHIVE, *PCHIDDEN, *PCREADO, *PCSYSTEM

*PCHIDDEN 1 Hidden file *YES, *NO

*PCREADO 1 Read only attribute *YES, *NO

*PCSYSTEM 1 System file *YES, *NO

*PGP Primary group *NONE, user profile name

*PRVAUTIND Private authority indicator No value, indicator only 3. When this attribute is returned in output, its Difference Indicator value indicates whether the number of private authorities and the private authority values are equal.

*PUBAUTIND Public authority indicator No value, indicator only 3. When this attribute is returned in output, its Difference Indicator value indicates whether the public authority values are equal.

1. Differences detected for this attribute are marked as *EC (equal configuration) when the compare request specified a data group and the object is configured for system journal replication with a configured object auditing value of *NONE.

2. The *CRTTSP attribute is not compared for directories (*DIR) or symbolic links (*SYMLNK). For stream files (*STMF), the #IFSATR audit omits the *CRTTSP attribute from comparison since creation timestamps are not preserved during replication. Running the CMPIFSA command directly will detect differences in the creation timestamps for stream files.

3. If *PRINT is specified in the comparison, an indicator appears in the system 1 and system 2 columns. If *OUTFILE is specified, these values are blank.


Attributes compared and expected results - #DLOATR audit

The #DLOATR audit calls the Compare DLO Attributes (CMPDLOA) command and places the results in an output file. Table 91 lists the attributes that can be compared by the CMPDLOA command and the value shown in the Compared Attribute (CMPATR) field in the output file. The Returned Values column lists the values you can expect in the System 1 Value (SYS1VAL) and System 2 Value (SYS2VAL) columns as a result of running the compare.

Table 91. Compare DLO Attributes (CMPDLOA) attributes

Attribute Description Returned Values (SYS1VAL, SYS2VAL)

*ASP Auxiliary storage pool 1-16 (pre-V5R1); 1-32 (V5R1); 1 = System ASP. See "Comparison results for auxiliary storage pool ID (*ASP)" on page 612 for details.

*AUDVAL Object audit value *NONE, *USRPRF, *CHANGE, *ALL

*AUT Authority attributes Group which checks *AUTL, *PGP, *PUBAUTIND, *PRVAUTIND

*AUTL Authority list name *NONE, list name

*BASIC Pre-determined set of basic attributes Group which checks a pre-determined set of attributes. The following set of attributes are compared: *CCSID, *DATASIZE, *OBJTYPE, *PCATTR, and *TEXT.

*CCSID Coded character set 1-65535

*CRTTSP Create timestamp SAA format (YY-MM-DD-HH.MM.SS.mmmmmm)

*DATASIZE Data size 0-4294967295 1

*EXTENDED Pre-determined, extended set Group which checks a pre-determined set of attributes. Compares the basic set of attributes (*BASIC) plus an extended set of attributes. The following attributes are compared: *AUT, *CCSID, *DATASIZE, *OBJTYPE, *OWNER, *PCATTR, and *TEXT.

*MODTSP Modify timestamp SAA format (YY-MM-DD-HH.MM.SS.mmmmmm)

*OBJTYPE Object type *DOC, *FLR 2

*OWNER File owner 10 character name

*PCARCHIVE Archived file *YES, *NO

*PCATTR PC Attributes Group which checks *PCARCHIVE, *PCHIDDEN, *PCREADO, *PCSYSTEM

*PCHIDDEN Hidden file *YES, *NO

*PCREADO Read only attribute *YES, *NO


*PCSYSTEM System file *YES, *NO

*PGP Primary group *NONE, user profile name

*PRVAUTIND Private authority indicator No value, indicator only 3. When this attribute is returned in output, its Difference Indicator value indicates whether the number of private authorities and the private authority values are equal.

*PUBAUTIND Public authority indicator No value, indicator only 3. When this attribute is returned in output, its Difference Indicator value indicates whether the public authority values are equal.

*TEXT Text description 50 character description

1. This attribute is not supported for DLOs with an object type of *FLR.

2. This attribute is always compared.

3. If *PRINT is specified in the comparison, an indicator appears in the system 1 and system 2 columns. If *OUTFILE is specified, these values are blank.


Comparison results for journal status and other journal attributes

The Compare File Attributes (CMPFILA), Compare Object Attributes (CMPOBJA), and Compare IFS Attributes (CMPIFSA) commands support comparing the journaling attributes listed in Table 92 for objects replicated from the user journal. These commands function similarly when comparing journaling attributes.

When a compare is requested, MIMIX determines the result displayed in the Differences Indicator field by considering whether the file is journaled, whether the request includes a data group, and the data group’s configured settings for journaling.

Regardless of which journaling attribute is specified on the command, MIMIX always checks the journaling status first (*JOURNALED attribute). If the file or object is journaled on both systems, MIMIX then considers whether the command specified a data group definition before comparing any other requested attribute.

Compares that do not specify a data group - When no data group is specified on the compare request, MIMIX compares the journaled status (*JOURNALED attribute). Table 93 shows the result displayed in the Differences Indicator field. If the file or object is not journaled on both systems, the compare ends. If both source and target systems are journaled, MIMIX then compares any other specified journaling attribute.

Table 92. Journaling attributes

When specified on the CMPOBJA command, these values apply only to files, data areas, or data queues. When specified on the CMPFILA command, these values apply only to PF-DTA and PF38-DTA files.

*JOURNAL - Object journal information attributes. This value acts as a group selection, causing all other journaling attributes to be selected.

*JOURNALED - Journal status. Indicates whether the object is currently being journaled. This attribute is always compared when any of the other journaling attributes are selected.

*JRN 1 - Journal. Indicates the name of the current or last journal. If blank, the object has never been journaled.

*JRNIMG 1, 2 - Journal image. Indicates the kinds of images that are written to the journal receiver for changes to objects.

*JRNLIB 1 - Journal library. Identifies the library that contains the journal. If blank, the object has never been journaled.

*JRNOMIT 1 - Journal omit. Indicates whether file open and close journal entries are omitted.

1. When these values are specified on a Compare command, the journal status (*JOURNALED) attribute is always evaluated first. The result of the journal status comparison determines whether the command will compare the specified attribute.

2. Although *JRNIMG can be specified on the CMPIFSA command, it is not compared even when the journal status is as expected. The journal image status is reflected as not supported (*NS) because IBM i only supports after (*AFTER) images.

Compares that specify a data group - When a data group is specified on the compare request, MIMIX compares the journaled status (*JOURNALED attribute) to the configuration values. If both source and target systems are journaled according to the expected configuration settings, then MIMIX compares any other specified journaling attribute against the configuration settings.

The Compare commands vary slightly in which configuration settings are checked.

• For CMPFILA requests, if the journaled status is as configured, any other specified journal attributes are compared. Possible results from comparing the *JOURNALED attribute are shown in Table 94.

• For CMPOBJA and CMPIFSA requests, if the journaled status is as configured and the configuration specifies *YES for Cooperate with database (COOPDB), then any other specified journal attributes are compared. Possible results from comparing the *JOURNALED attribute are shown in Table 94 and Table 95. If the configuration specifies COOPDB(*NO), only the journaled status is compared; possible results are shown in Table 96.

Table 94, Table 95, and Table 96 show results for the *JOURNALED attribute that can appear in the Difference Indicator field when the compare request specified a data group and considered the configuration settings.

Table 93. Difference Indicator values for *JOURNALED attribute when no data group is specified

Cells show the Difference Indicator value for each combination of journal status 1 on the source and target systems.

Source \ Target    Yes     No      *NOTFOUND
Yes                *EQ     *NE     *NE
No                 *NE     *EQ     *NE
*NOTFOUND          *NE     *NE     *UN

1. The returned values for journal status found on the Source and Target systems are shown in the SYS1VAL and SYS2VAL fields. Which system is source and which is target is determined by the value of the DTASRC field.
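The mapping in Table 93 can be sketched as a small function. This is a minimal model of the documented table, assuming statuses are given as the strings used there; the function itself is illustrative, not a MIMIX API:

```python
# Model of Table 93: Difference Indicator for *JOURNALED, no data group.
def journaled_diff_no_dg(source, target):
    if source == "*NOTFOUND" and target == "*NOTFOUND":
        return "*UN"   # object missing on both systems: result unknown
    if source == target:
        return "*EQ"   # journal status matches
    return "*NE"       # mismatch, including object missing on one side
```

Any mismatch, including a one-sided *NOTFOUND, yields *NE, exactly as in the table.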


Table 94 shows results when the configured settings for Journal on target and Cooperate with database are both *YES.

Table 95 shows results when the configured settings are *NO for Journal on target and *YES for Cooperate with database.

Table 96 shows results when the configured setting for Cooperate with database is *NO. In this scenario, you may want to investigate further. Even though the Difference Indicator shows values marked as equal configuration (*EC), the object can be not journaled on one or both systems. The actual journal status values are returned in the System 1 Value (SYS1VAL) and System 2 Value (SYS2VAL) fields.

Table 94. Difference Indicator values for *JOURNALED attribute when a data group is specified and the configuration specifies *YES for JRNTGT and COOPDB

Cells show the Difference Indicator value for each combination of journal status 1 on the source and target systems.

Source \ Target    Yes     No      *NOTFOUND
Yes                *EC     *EC     *NE
No                 *NC     *NC     *NE
*NOTFOUND          *NE     *NE     *UN

1. The returned values for journal status found on the Source and Target systems are shown in the SYS1VAL and SYS2VAL fields. Which system is source and which is target is determined by the value of the DTASRC field.

Table 95. Difference Indicator values for *JOURNALED attribute when a data group is specified and the configuration specifies *NO for JRNTGT and *YES for COOPDB

Cells show the Difference Indicator value for each combination of journal status 1 on the source and target systems.

Source \ Target    Yes     No      *NOTFOUND
Yes                *NC     *EC     *NE
No                 *NC     *NC     *NE
*NOTFOUND          *NE     *NE     *UN

1. The returned values for journal status found on the Source and Target systems are shown in the SYS1VAL and SYS2VAL fields. Which system is source and which is target is determined by the value of the DTASRC field.

How configured journaling settings are determined

When a data group is specified on a compare request, MIMIX also considers configuration settings when comparing journaling attributes. For comparison purposes, MIMIX assumes that the source system is journaled and that the target system is journaled according to configuration settings.

Depending on the command used, there are slight differences in what configuration settings are checked. The CMPFILA, CMPOBJA, and CMPIFSA commands retrieve the following configurable journaling attributes from the data group definition:

• The Journal on target (JRNTGT) parameter identifies whether activity replicated through the user journal is journaled on the target system. The default value is *YES.

• The System 1 journal definition (JRNDFN1) and System 2 journal definition (JRNDFN2) values are retrieved and used to determine the source journal, source journal library, target journal, and target journal library.

• Values for elements Journal image and Omit open/close entries specified in the File entry options (FEOPT) parameter are retrieved. The default values are *AFTER and *YES, respectively.

Because the data group’s values for Journal image and Omit open/close entries can be overridden by a data group file entry or a data group object entry, the CMPFILA and CMPOBJA commands also retrieve these values from the entries. The values determined after the order of precedence is resolved, sometimes called the overall MIMIX configuration values, are used for the compare.

For CMPOBJA and CMPIFSA requests, the value of the Cooperate with database (COOPDB) parameter is retrieved from the data group object entry or data group IFS entry. The default value in object entries is *YES, while the default value in IFS entries is *NO.
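The override behavior described above amounts to a simple precedence rule: an entry-level value overrides the data group's value, otherwise the data group value is used. A hedged sketch (the `*DGDFT` sentinel below is a stand-in for "no override at the entry level", not necessarily the literal token MIMIX uses):

```python
# Sketch of the order of precedence for FEOPT values such as Journal image
# (*AFTER by default) and Omit open/close entries (*YES by default): the
# data group file or object entry can override the data group definition.
def overall_config_value(dg_value, entry_value="*DGDFT"):
    """Return the overall MIMIX configuration value used for the compare."""
    return dg_value if entry_value == "*DGDFT" else entry_value
```

For example, a data group default of *AFTER with an entry override of *BOTH resolves to *BOTH.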

Table 96. Difference Indicator values for *JOURNALED attribute when a data group is specified and the configuration specifies *NO for COOPDB

Cells show the Difference Indicator value for each combination of journal status 1 on the source and target systems.

Source \ Target    Yes     No      *NOTFOUND
Yes                *EC     *EC     *NE
No                 *EC     *EC     *NE
*NOTFOUND          *NE     *NE     *UN

1. The returned values for journal status found on the Source and Target systems are shown in the SYS1VAL and SYS2VAL fields. Which system is source and which is target is determined by the value of the DTASRC field.
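Table 96's pattern (any found/found combination is reported as *EC because journaling is not expected to be managed when COOPDB is *NO) can be modeled directly. An illustrative sketch only, not MIMIX code:

```python
# Model of Table 96: *JOURNALED result when a data group is specified and
# the configuration specifies COOPDB(*NO).
def journaled_diff_coopdb_no(source, target):
    if source == "*NOTFOUND" and target == "*NOTFOUND":
        return "*UN"   # object missing on both systems
    if "*NOTFOUND" in (source, target):
        return "*NE"   # object missing on one system
    return "*EC"       # found on both sides: reported as configured
```

This is why the text recommends investigating further: *EC here does not guarantee the object is actually journaled on either system.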


Comparison results for auxiliary storage pool ID (*ASP)

The Compare File Attributes (CMPFILA), Compare Object Attributes (CMPOBJA), Compare IFS Attributes (CMPIFSA), and Compare DLO Attributes (CMPDLOA) commands support comparing the auxiliary storage pool (*ASP) attribute for objects replicated from the user journal. These commands function similarly.

When a compare is requested, MIMIX determines the result displayed in the Differences Indicator field by considering whether a data group was specified on the compare request.

Compares that do not specify a data group - When no data group is specified on the compare request, MIMIX compares the *ASP attribute for all files or objects that match the selection criteria specified in the request. Table 97 shows the possible results in the Difference Indicator field.

Compares that specify a data group - When a data group is specified on the compare request (CMPFILA, CMPDLOA, CMPIFSA commands), MIMIX does not compare the *ASP attribute. When a data group is specified on a CMPOBJA request that specifies any object type except libraries (*LIB), MIMIX does not compare the *ASP attribute. Table 98 shows the possible results in the Difference Indicator field.

Table 97. Difference Indicator values when no data group is specified

Cells show the Difference Indicator value for each combination of ASP values 1 on the source and target systems.

Source \ Target    ASP1    ASP2    *NOTFOUND
ASP1               *EQ     *NE     *NE
ASP2               *NE     *EQ     *NE
*NOTFOUND          *NE     *NE     *EQ

1. The returned values for the *ASP attribute on the Source and Target systems are shown in the SYS1VAL and SYS2VAL fields. Which system is source and which is target is determined by the value of the DTASRC field.

Table 98. Difference Indicator values for non-library objects when the request specified a data group

Cells show the Difference Indicator value for each combination of ASP values 1 on the source and target systems.

Source \ Target    ASP1        ASP2        *NOTFOUND
ASP1               *NOTCMPD    *NOTCMPD    *NE
ASP2               *NOTCMPD    *NOTCMPD    *NE
*NOTFOUND          *NE         *NE         *EQ

1. The returned values for the *ASP attribute on the Source and Target systems are shown in the SYS1VAL and SYS2VAL fields. Which system is source and which is target is determined by the value of the DTASRC field.
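Tables 97 and 98 can be summarized in one hypothetical helper. This is a sketch of the documented behavior for non-library objects, not a MIMIX API:

```python
# Model of Tables 97-98: *ASP Difference Indicator. With a data group,
# the attribute is not compared for non-library objects (*NOTCMPD).
def asp_diff(source, target, data_group=None):
    if source == "*NOTFOUND" and target == "*NOTFOUND":
        return "*EQ"        # per the tables, both-missing is reported equal
    if "*NOTFOUND" in (source, target):
        return "*NE"
    if data_group is not None:
        return "*NOTCMPD"   # data group specified: *ASP not compared
    return "*EQ" if source == target else "*NE"
```

Note that both sides missing reports *EQ here, unlike the *UN result in the journal status tables.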


For CMPOBJA requests that specify a data group and an object type of *LIB, MIMIX considers configuration settings for the library. Values for the System 1 library ASP number (LIB1ASP), System 1 library ASP device (LIB1ASPD), System 2 library ASP number (LIB2ASP), and System 2 library ASP device (LIB2ASPD) are retrieved from the data group object entry and used in the comparison. Table 99, Table 100, and Table 101 show the possible results in the Difference Indicator field.

Note: For Table 99, Table 100, and Table 101, the results are the same even if the system roles are switched.

Table 99 shows the expected values for the ASP attribute when the request specifies a data group and the configuration specifies *SRCLIB for the System 1 library ASP number and the data source is system 2.

Table 100 shows the expected values for the ASP attribute when the request specifies a data group and the configuration specifies 1 for the System 1 library ASP number and the data source is system 2.

Table 101 shows the expected values for the ASP attribute when the request specifies a data group and the configuration specifies *ASPDEV for the System 1 library ASP number, DEVNAME is specified for the System 1 library ASP device, and the data source is system 2.

Table 99. Difference Indicator values for libraries when a data group is specified and configured values are LIB1ASP(*SRCLIB) and DTASRC(*SYS2)

Cells show the Difference Indicator value for each combination of ASP values 1 on the source and target systems.

Source \ Target    ASP1    ASP2    *NOTFOUND
ASP1               *EC     *NC     *NE
ASP2               *NC     *EC     *NE
*NOTFOUND          *NE     *NE     *EQ

1. The returned values for the *ASP attribute on the Source and Target systems are shown in the SYS1VAL and SYS2VAL fields. Which system is source and which is target is determined by the value of the DTASRC field.

Table 100. Difference Indicator values for libraries when a data group is specified and configured values are LIB1ASP(1) and DTASRC(*SYS2)

Cells show the Difference Indicator value for each combination of ASP values 1 on the source and target systems.

Source \ Target    1       2       *NOTFOUND
1                  *EC     *NC     *NE
2                  *EC     *NC     *NE
*NOTFOUND          *NE     *NE     *EQ

1. The returned values for the *ASP attribute on the Source and Target systems are shown in the SYS1VAL and SYS2VAL fields. Which system is source and which is target is determined by the value of the DTASRC field.

Table 101. Difference Indicator values for libraries when a data group is specified and configured values are LIB1ASP(*ASPDEV), LIB1ASPD(DEVNAME) and DTASRC(*SYS2)

Cells show the Difference Indicator value for each combination of ASP values 1 on the source and target systems.

Source \ Target    DEVNAME    2       *NOTFOUND
1                  *EC        *NC     *NE
2                  *EC        *NC     *NE
*NOTFOUND          *NE        *NE     *EQ

1. The returned values for the *ASP attribute on the Source and Target systems are shown in the SYS1VAL and SYS2VAL fields. Which system is source and which is target is determined by the value of the DTASRC field.
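For the LIB1ASP(*SRCLIB) case in Table 99, the expectation is simply that the target library's ASP matches the source's. A hedged sketch of that one case (illustrative names, not a MIMIX API):

```python
# Model of Table 99 only: library *ASP result when LIB1ASP(*SRCLIB) is
# configured, i.e. the target ASP is expected to equal the source ASP.
def lib_asp_diff_srclib(source, target):
    if source == "*NOTFOUND" and target == "*NOTFOUND":
        return "*EQ"
    if "*NOTFOUND" in (source, target):
        return "*NE"
    return "*EC" if source == target else "*NC"
```

With LIB1ASP(1) or LIB1ASP(*ASPDEV) the expected target value comes from the configuration instead of the source, as Tables 100 and 101 show.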


Comparison results for user profile status (*USRPRFSTS)

When comparing the attribute *USRPRFSTS (user profile status) with the Compare Object Attributes (CMPOBJA) command, MIMIX determines the result displayed in the Differences Indicator field by considering the following:

• The status values of the object on both the source and target systems

• Configured values for replicating user profile status, at the data group and object entry levels

• The value of the Data group definition (DGDFN) parameter specified on the CMPOBJA command.

Compares that do not specify a data group - When the CMPOBJA command does not specify a data group, MIMIX compares the status values between source and target systems. The result is displayed in the Differences Indicator field, according to Table 85 in “Interpreting results of audits that compare attributes” on page 586.

Compares that specify a data group - When the CMPOBJA command specifies a data group, MIMIX checks the configuration settings and the values on one or both systems. (For additional information, see “How configured user profile status is determined” on page 616.)

When the configured value is *SRC, the CMPOBJA command compares the values on both systems. The user profile status on the target system must be the same as the status on the source system, otherwise an error condition is reported. Table 102 shows the possible values.

When the configured value is *ENABLED or *DISABLED, the CMPOBJA command checks the target system value against the configured value. If the user profile status on the target system does not match the configured value, an error condition is reported. The source system user profile status is not relevant. Table 103 and Table 104 show the possible values when the configured value is *ENABLED or *DISABLED, respectively.

Table 102. Difference Indicator values when configured user profile status is *SRC

Cells show the Difference Indicator value for each combination of user profile status on the source and target systems.

Source \ Target    *ENABLED    *DISABLED    *NOTFOUND
*ENABLED           *EC         *NC          *NE
*DISABLED          *NC         *EC          *NE
*NOTFOUND          *NE         *NE          *UN

When the configured value is *TGT, the CMPOBJA command does not compare the values because the result is indeterminate. Any differences in user profile status between systems are not reported. Table 105 shows possible values.
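Tables 102 through 105 share one decision rule, parameterized by the configured value. A compact model of those tables (illustrative only; statuses as the strings used in the tables, not a MIMIX API):

```python
# Model of Tables 102-105: *USRPRFSTS Difference Indicator given the
# configured user profile status (*SRC, *ENABLED, *DISABLED, or *TGT).
def usrprfsts_diff(source, target, configured="*SRC"):
    if source == "*NOTFOUND" and target == "*NOTFOUND":
        return "*UN"
    if "*NOTFOUND" in (source, target):
        return "*NE"
    if configured == "*TGT":
        return "*NA"    # result indeterminate; values are not compared
    if configured == "*SRC":
        return "*EC" if source == target else "*NC"
    # *ENABLED / *DISABLED: only the target is checked against the config.
    return "*EC" if target == configured else "*NC"
```

For instance, with a configured value of *ENABLED, a disabled source and enabled target still reports *EC, because only the target matters.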

How configured user profile status is determined

The data group definition determines the behavior for replicating user profile status unless it is explicitly overridden by a non-default value in a data group object entry. The value determined after the order of precedence is resolved is sometimes called the overall MIMIX configuration value. Unless specified otherwise in the data group or in an object entry, the default is to use the value *SRC from the data group definition. Table 106 shows the possible values at both the data group and object entry levels.

Table 103. Difference Indicator values when configured user profile status is *ENABLED

Cells show the Difference Indicator value for each combination of user profile status on the source and target systems.

Source \ Target    *ENABLED    *DISABLED    *NOTFOUND
*ENABLED           *EC         *NC          *NE
*DISABLED          *EC         *NC          *NE
*NOTFOUND          *NE         *NE          *UN

Table 104. Difference Indicator values when configured user profile status is *DISABLED

Cells show the Difference Indicator value for each combination of user profile status on the source and target systems.

Source \ Target    *ENABLED    *DISABLED    *NOTFOUND
*ENABLED           *NC         *EC          *NE
*DISABLED          *NC         *EC          *NE
*NOTFOUND          *NE         *NE          *UN

Table 105. Difference Indicator values when configured user profile status is *TGT

Cells show the Difference Indicator value for each combination of user profile status on the source and target systems.

Source \ Target    *ENABLED    *DISABLED    *NOTFOUND
*ENABLED           *NA         *NA          *NE
*DISABLED          *NA         *NA          *NE
*NOTFOUND          *NE         *NE          *UN

Table 106. Configuration values for replicating user profile status

*DGDFT - Only available for data group object entries, this value indicates that the value specified in the data group definition is used for the user profile status. This is the default value for object entries.

*DISABLE 1 - The status of the user profile is set to *DISABLED when the user profile is created or changed on the target system.

*ENABLE 1 - The status of the user profile is set to *ENABLED when the user profile is created or changed on the target system.

*SRC - This is the default value in the data group definition. The status of the user profile on the source system is always used when the user profile is created or changed on the target system.

*TGT - If a new user profile is created, the status is set to *DISABLED. If an existing user profile is changed, the status of the user profile on the target system is not altered.

1. Data group definitions use these values. In data group object entries, the values *DISABLED and *ENABLED are used but have the same meaning.


Comparison results for user profile password (*PRFPWDIND)

When comparing the attribute *PRFPWDIND (user profile password indicator) with the Compare Object Attributes (CMPOBJA) command, MIMIX assumes that the user profile names are the same on both systems. User profile passwords are only compared if the user profile name is the same on both systems and the user profile of the local system is enabled and has a defined password.

If the local or remote user profile has a password of *NONE, or if the local user profile is disabled or expired, the user profile password is not compared. The System Indicator fields will indicate that the attribute was not compared (*NOTCMPD). The Difference Indicator field will also return a value of not compared (*NA).

The CMPOBJA command does not support name mapping while comparing the *PRFPWDIND attribute. If the user profile names are different, or if you attempt name mapping, the System Indicator fields will indicate that comparing the attribute is not supported (*NOTSPT). The Difference Indicator field will also return a value of not supported (*NS).
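The preconditions above reduce to a short guard. A sketch of that logic (names invented for this illustration; "expired" is folded into the enabled check for brevity):

```python
# Model of when CMPOBJA actually compares *PRFPWDIND.
# Returns "*NS" (not supported), "*NA" (not compared), or "CMP" (compared).
def prfpwdind_action(local_enabled, local_has_pwd, remote_has_pwd,
                     names_match=True):
    if not names_match:
        return "*NS"   # name mapping is not supported for this attribute
    if not local_enabled or not local_has_pwd or not remote_has_pwd:
        return "*NA"   # *NONE password, or local profile disabled/expired
    return "CMP"       # passwords are actually compared
```

Only when every guard passes are the password values themselves compared, per Tables 107 through 109.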

The following tables identify the expected results when user profile password is compared. Note that the local system is the system on which the command is being run, and the remote system is defined as System 2.

Table 107 shows the possible Difference Indicator values when the user profile passwords are the same on the local and remote systems and are not defined as *NONE.

Table 107. Difference Indicator values when user profile passwords are the same, but not *NONE

Cells show the Difference Indicator value; rows are the local system's user profile status, columns the remote system's.

Local \ Remote    *ENABLED    *DISABLED    Expired    Not Found
*ENABLED          *EQ         *EQ          *EQ        *NE
*DISABLED         *NA         *NA          *NA        *NE
Expired           *NA         *NA          *NA        *NE
Not Found         *NE         *NE          *NE        *EQ


Table 108 shows the possible Difference Indicator values when the user profile passwords are different on the local and remote systems and are not defined as *NONE.

Table 109 shows the possible Difference Indicator values when the user profile passwords are defined as *NONE on the local and remote systems.

Table 108. Difference Indicator values when user profile passwords are different, but not *NONE

Cells show the Difference Indicator value; rows are the local system's user profile status, columns the remote system's.

Local \ Remote    *ENABLED    *DISABLED    Expired    Not Found
*ENABLED          *NE         *NE          *NE        *NE
*DISABLED         *NA         *NA          *NA        *NE
Expired           *NA         *NA          *NA        *NE
Not Found         *NE         *NE          *NE        *EQ

Table 109. Difference Indicator values when user profile passwords are *NONE

Cells show the Difference Indicator value; rows are the local system's user profile status, columns the remote system's.

Local \ Remote    *ENABLED    *DISABLED    Expired    Not Found
*ENABLED          *NA         *NA          *NA        *NE
*DISABLED         *NA         *NA          *NA        *NE
Expired           *NA         *NA          *NA        *NE
Not Found         *NE         *NE          *NE        *EQ


Appendix F

Outfile formats

This section contains the output file (outfile) formats for those MIMIX commands that provide outfile support.

Lakeview Technology provides a model database file that defines the record format for the outfile. These database files can be found in the product installation library.

Public authority to the created outfile is the same as the create authority of the library in which the file is created. Use the Display Library Description (DSPLIBD) command to see the create authority of the library.

You can use the Run Query (RUNQRY) command to display outfiles with column headings and data type formatting if you have the licensed program 5722QU1, Query, installed.

Otherwise, you can use the Display File Field Description (DSPFFD) command to see detailed outfile information, such as the field length, type, starting position, and number of bytes.

Outfile support in MIMIX Availability Manager

MIMIX Availability Manager provides enhanced MIMIX output file information for the compare commands used by audits. For these commands, MIMIX Availability Manager provides a subsetted view of the output file in a window unique to the command. Each output file window provides options for taking actions that are appropriate for the errors detected, including problems and recovered items.

Note: Corrective action is only available for output files associated with Notifications. Recovery output files only display entries that have problems or have already been recovered.

All other output files generated by other commands are shown in their entirety in the Output File Information window. The outfile display can be customized using Preferences.

For more information about audit results, see “Interpreting audit results - MIMIX Availability Manager” on page 575 and “Interpreting audit results - 5250 emulator” on page 576.


Work panels with outfile support

The following table lists the work panels with outfile support.

Table 110. Work panels with outfile support

Panel Description

WRKDGDFN Work with DG Definitions

WRKJRNDFN Work with Journal Definitions

WRKTFRDFN Work with Transfer Definitions

WRKSYSDFN Work with System Definitions

WRKDGFE Work with DG File Entries

WRKDGDAE Work with DG Data Area Entries

WRKDGOBJE Work with DG Object Entries

WRKDGDLOE Work with DG DLO Entries

WRKDGIFSE Work with DG IFS Entries

WRKDGACT Work with DG Activity

WRKDGACTE Work with DG Activity Entries

WRKDGIFSTE Work with DG IFS Tracking Entries

WRKDGOBJTE Work with DG Object Tracking Entries


MCAG outfile (WRKAG command)

The following fields are available if you specify *OUTFILE on the Output parameter of the Work with Application Groups (WRKAG) command.

Table 111. MCAG outfile (WRKAG command)

Field Description Type, length Valid values

AGDFN Application group definition CHAR(10) User-defined name

USRPRF User profile CHAR(10) Any valid user profile

APP Application name CHAR(10) *AGDFN, user-defined name

APPLIB Application library CHAR(10) *APP, user-defined name

RLSLVL Application release level CHAR(10) User-defined value

PARENT Parent application group CHAR(10) *AGDFN, *NONE, *PARENT, user-defined name

EXITPGM Application CRG exit program CHAR(10) User-defined name

EXITPGMLIB Application CRG exit program library CHAR(10) *APPLIB, user-defined name

JOB Exit program job name CHAR(10) *APP, *JOBD, user-defined name

EXITDTA Exit program data CHAR(256) User-defined value

NBRRESTART Number of restarts PACKED(5 0) 0-3

Page 624: MIMIX Reference


HOST Takeover IP address CHAR(256) User-defined value

TEXT Description CHAR(50) User-defined value

UPDENV Update cluster environment CHAR(10) *YES, *NO

IDA Input data area name CHAR(10) BLANK, Name of the Input Data Area

AGSTS Application CRG status CHAR(10) BLANK, *ACTIVE, *INACTIVE, *ATTN, *UNKNOWN, *NONE, *NOTAVAIL, *INDOUBT, *RESTORED, *ADDNODPND, *DLTPND, *DLTCMDPND, *CHGPND, *CRTPND, *ENDCRGPND, *RMVNODPND, *STRCRGPND, *SWTPND

AGNODS Application CRG nodes status CHAR(10) BLANK, *ACTIVE, *INACTIVE, *ATTN, *UNKNOWN, *NONE, *NOTAVAIL

DCSTS Data CRGs status CHAR(10) BLANK, *ACTIVE, *INACTIVE, *ATTN, *UNKNOWN, *NONE, *NOTAVAIL, *INDOUBT, *RESTORED, *ADDNODPND, *DLTPND, *DLTCMDPND, *CHGPND, *CRTPND, *ENDCRGPND, *RMVNODPND, *STRCRGPND, *SWTPND


Page 625: MIMIX Reference


DCNODS Data CRG nodes status CHAR(10) BLANK, *ACTIVE, *INACTIVE, *ATTN, *UNKNOWN, *NONE, *NOTAVAIL

REPSTS Data group status CHAR(10) BLANK, *ACTIVE, *INACTIVE, *ATTN, *UNKNOWN, *NONE, *NOTAVAIL

FMSGQL Failover message queue library CHAR(10) *NONE, user-defined name

FMSGQN Failover message queue name CHAR(10) *NONE, user-defined name

FWTIME Failover wait time PACKED(5 0) *NOMAX, 1-32767

FDFTACT Failover default action PACKED(5 0) *CANCEL, *PROCEED


Page 626: MIMIX Reference

MCDTACRGE outfile (WRKDTARGE command)

The following fields are available if you specify *OUTFILE on the Output parameter of the Work with Data CRG Entries (WRKDTARGE) command.

Table 112. MCDTACRGE outfile (WRKDTARGE command)

Field Description Type, length Valid values

DTACRG Data CRG CHAR(10) User-defined name

DGDFN Data group name CHAR(10) *DTACRG, user-defined name

AGDFN Application group definition CHAR(10) User-defined name

JRN Journal name CHAR(10) *DGDFN, user-defined name

JRNLIB Journal library CHAR(10) User-defined name

OSF Object specifier file CHAR(10) *DTACRG, user-defined name

OSFLIB Object specifier file library CHAR(10) *AGDFN, user-defined name

OSFMBR Object specifier file member CHAR(10) *DTACRG, user-defined name

DELIVERY RJ mode CHAR(10) *NONE, *ASYNC, *SYNC

EXITPGM Data CRG exit program CHAR(10) MMXDTACRG, user-defined name

EXITPGMLIB Data CRG exit program library CHAR(10) *MIMIX, user-defined name

DCSTS Data CRGs status CHAR(10) BLANK, *ACTIVE, *INACTIVE, *ATTN, *UNKNOWN, *NONE, *NOTAVAIL, *INDOUBT, *RESTORED, *ADDNODPND, *DLTPND, *DLTCMDPND, *CHGPND, *CRTPND, *ENDCRGPND, *RMVNODPND, *STRCRGPND, *SWTPND

DCNODS Data CRG nodes status CHAR(10) BLANK, *ACTIVE, *INACTIVE, *ATTN, *UNKNOWN, *NONE, *NOTAVAIL

REPSTS Data group status CHAR(10) BLANK, *ACTIVE, *INACTIVE, *ATTN, *UNKNOWN, *NONE, *NOTAVAIL

Page 627: MIMIX Reference


Updated for 5.0.13.00.

DEVCRG Device CRG name CHAR(10) User-defined name

ASPGRP ASP group CHAR(10) *NONE, user-defined name

DTATYPE Data resource group type CHAR(10) *DEV, *DTA, *PEER, *XSM

FMSGQL Failover message queue library CHAR(10) *AGDFN, *NONE, user-defined name

FMSGQN Failover message queue name CHAR(10) *AGDFN, *NONE, user-defined name

FWTIME Failover wait time PACKED(5 0) *AGDFN, *NOMAX, 1-32767

FDFTACT Failover default action PACKED(5 0) *AGDFN, *CANCEL, *PROCEED

ADMDMN Cluster administrative domain CHAR(10) *NONE, User-defined value

SYNCOPT Synchronization option PACKED(10 5) *LASTCHG, *ACTDMN


Page 628: MIMIX Reference

MCNODE outfile (WRKNODE command)

The following fields are available if you specify *OUTFILE on the Output parameter of the Work with Node Entries (WRKNODE) command.

Table 113. MCNODE outfile (WRKNODE command)

Field Description Type, length Valid values

AGDFN Application group definition CHAR(10) User-defined name

CRG CRG name CHAR(10) *AGDFN, user-defined name

NODE System name CHAR(8) User-defined name

CURROLE Current role CHAR(10) *PRIMARY, *BACKUP, *REPLICATE, *UNDEFINED

CURSEQ Current sequence PACKED(5 0) -2, -1, 0-127 (-2 = *UNDEFINED) (-1 = *REPLICATE) (0 = *PRIMARY) (1-127 = *BACKUP sequence)

CURDTAPVD Current data provider CHAR(10) *PRIMARY, *BACKUP, *UNDEFINED, user-defined name

PREFROLE Preferred role CHAR(10) *PRIMARY, *BACKUP, *REPLICATE, *UNDEFINED

PREFSEQ Preferred sequence PACKED(5 0) -2, -1, 0-127 (-2 = *UNDEFINED) (-1 = *REPLICATE) (0 = *PRIMARY) (1-127 = *BACKUP sequence)

CFGROLE Configured role CHAR(10) *PRIMARY, *BACKUP, *REPLICATE, *UNDEFINED

Page 629: MIMIX Reference


CFGSEQ Configured sequence PACKED(5 0) -2, -1, 0-127 (-2 = *UNDEFINED) (-1 = *REPLICATE) (0 = *PRIMARY) (1-127 = *BACKUP sequence)

CFGDTAPVD Configured data provider CHAR(10) *PRIMARY, *BACKUP, *UNDEFINED, user-defined name

STATUS CRG node status CHAR(10) *ACTIVE, *INACTIVE, *ATTN, *NONE, *NOTAVAIL, *UNKNOWN
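The sequence fields (CURSEQ, PREFSEQ, CFGSEQ) pack a role and a backup position into one number using the sentinel values listed above. The helper below is an illustrative decoding of that convention, not MIMIX-supplied code.

```python
# Hedged sketch: decoding an MCNODE sequence field per the sentinel values
# documented for CURSEQ/PREFSEQ/CFGSEQ:
#   -2 = *UNDEFINED, -1 = *REPLICATE, 0 = *PRIMARY, 1-127 = backup sequence.
def decode_sequence(seq: int) -> str:
    if seq == -2:
        return "*UNDEFINED"
    if seq == -1:
        return "*REPLICATE"
    if seq == 0:
        return "*PRIMARY"
    if 1 <= seq <= 127:
        return f"*BACKUP sequence {seq}"
    raise ValueError(f"sequence value out of range: {seq}")
```

A sequence value therefore identifies both the node's role and, for backups, its position in the recovery order.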


Page 630: MIMIX Reference

MXCDGFE outfile (CHKDGFE command)

The following fields are available if you specify *OUTFILE on the Output parameter of the Check Data Group File Entries (CHKDGFE) command. The command is also called by audits which run the #DGFE rule. For additional information, see “Interpreting results for configuration data - #DGFE audit” on page 580.

Table 114. MXCDGFE outfile (CHKDGFE command)

Field Description Type, length Valid values

TIMESTAMP Timestamp (YYYY-MM-DD.HH.MM.SSmmmm) TIMESTAMP SAA timestamp

COMMAND Command name CHAR(10) CHKDGFE

DGSHRTNM Data group short name CHAR(3) Short data group name

DGDFN Data group definition name CHAR(10) User-defined data group name

DGSYS1 System 1 CHAR(8) User-defined system name

DGSYS2 System 2 CHAR(8) User-defined system name

DTASRC Data source CHAR(10) *SYS1, *SYS2

FILE System 1 file name CHAR(10) User-defined name

LIB System 1 library name CHAR(10) User-defined name

MBR System 1 member name CHAR(10) User-defined name

OBJTYPE Object type CHAR(10) *FILE

RESULT Result CHAR(10) *NODGFE, *EXTRADGFE, *NOFILE, *NOMBR, *RCYFAILED, *RECOVERED, *UA

OPTION Option CHAR(100) *NONE, *NOFILECHK

FILE2 System 2 file name CHAR(10) User-defined name

LIB2 System 2 library name CHAR(10) User-defined name

MBR2 System 2 member name CHAR(10) User-defined name

Page 631: MIMIX Reference


ASPDEV Source ASP device CHAR(10) *UNKNOWN if the object was not found or an API error occurred; *SYSBAS if the object is in ASP 1-32; user-defined name if the object is in ASP 33-255

OBJATR Object attribute CHAR(10) PF-DTA, PF-SRC, LF, PF38-DTA, PF38-SRC, LF38
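The ASPDEV value follows a simple rule based on which auxiliary storage pool (ASP) holds the object. The function below is an illustrative restatement of that rule; MIMIX computes the value internally.

```python
# Hedged sketch of the ASPDEV rule: value depends on the object's ASP.
def aspdev_value(asp_number, device_name: str = "") -> str:
    if asp_number is None:         # object not found, or an API error occurred
        return "*UNKNOWN"
    if 1 <= asp_number <= 32:      # system ASP or a basic user ASP
        return "*SYSBAS"
    if 33 <= asp_number <= 255:    # independent ASP: report its device name
        return device_name
    raise ValueError(f"invalid ASP number: {asp_number}")
```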


Page 632: MIMIX Reference

MXCMPDLOA outfile (CMPDLOA command)

For additional supporting information, see “Interpreting results of audits that compare attributes” on page 586.

Table 115. CMPDLOA Output file (MXCMPDLOA)

Field Description Type, length Valid values

TIMESTAMP Timestamp (CCCC-YY-MM-DD.HH.MM.SSmmmm) CHAR(26) SAA timestamp

COMMAND Command name CHAR(10) CMPDLOA

DGSHRTNM Data group short name CHAR(3) Short data group name

DGNAME Data group definition name CHAR(10) User-defined data group name. Note: Blank if no DG is specified on the command.

SYSTEM1 System 1 CHAR(8) User-defined system name. Note: Local system name if no DG is specified.

SYSTEM2 System 2 CHAR(8) User-defined system name. Note: Local system name if no DG is specified.

DTASRC Data source CHAR(10) *SYS1, *SYS2

SYS1DLO System 1 DLO name CHAR(76) User-defined name

SYS2DLO System 2 DLO name CHAR(76) User-defined name

CCSID DLO name CCSID BIN(5) User-defined name

CNTRYID DLO name country ID CHAR(2) System-defined name

LANGID DLO name language ID CHAR(3) System-defined name

CMPATR Compared attribute CHAR(10) See “Attributes compared and expected results - #DLOATR audit” on page 606

SYS1IND System 1 file indicator CHAR(10) See Table 87 in “Where was the difference detected” on page 589

Page 633: MIMIX Reference


SYS2IND System 2 file indicator CHAR(10) See Table 87 in “Where was the difference detected” on page 589

DIFIND Differences indicator CHAR(10) See “What attribute differences were detected” on page 587

SYS1VAL System 1 value of the specified attribute VARCHAR(2048) MINLEN(50) See “Attributes compared and expected results - #DLOATR audit” on page 606

SYS1CCSID System 1 value CCSID BIN(5) 1-65535

SYS2VAL System 2 value of the specified attribute VARCHAR(2048) MINLEN(50) See “Attributes compared and expected results - #DLOATR audit” on page 606

SYS2CCSID System 2 value CCSID BIN(5) 1-65535


Page 634: MIMIX Reference

MXCMPFILA outfile (CMPFILA command)

For additional supporting information, see “Interpreting results of audits that compare attributes” on page 586.

Table 116. CMPFILA Output file (MXCMPFILA)

Field Description Type, length Valid values

TIMESTAMP Timestamp (YYYY-MM-DD.HH.MM.SSmmmmmm) TIMESTAMP SAA timestamp

COMMAND Command name CHAR(10) CMPFILA

DGSHRTNM Data group short name CHAR(3) Short data group name

DGNAME Data group definition name CHAR(10) User-defined data group name; blank if no DG specified on the command

SYSTEM1 System 1 CHAR(8) User-defined system name; local system name if no DG specified

SYSTEM2 System 2 CHAR(8) User-defined system name; remote system name if no DG specified

DTASRC Data source CHAR(10) *SYS1, *SYS2

SYS1OBJ System 1 object name CHAR(10) User-defined name

SYS1LIB System 1 library name CHAR(10) User-defined name

MBR Member name CHAR(10) User-defined name

SYS2OBJ System 2 object name CHAR(10) System-defined name

SYS2LIB System 2 library name CHAR(10) System-defined name

OBJTYPE Object type CHAR(10) *FILE

Page 635: MIMIX Reference


CMPATR Compared attribute CHAR(10) See “Attributes compared and expected results - #FILATR, #FILATRMBR audits” on page 591.

SYS1IND System 1 file indicator CHAR(10) See Table 87 in “Where was the difference detected” on page 589.

SYS2IND System 2 file indicator CHAR(10) See Table 87 in “Where was the difference detected” on page 589.

DIFIND Differences indicator CHAR(10) See “What attribute differences were detected” on page 587.

SYS1VAL System 1 value of the specified attribute VARCHAR(2048) MINLEN(50) See “Attributes compared and expected results - #FILATR, #FILATRMBR audits” on page 591.

SYS1CCSID System 1 value CCSID BIN(5) 1-65535

SYS2VAL System 2 value of the specified attribute VARCHAR(2048) MINLEN(50) See “Attributes compared and expected results - #FILATR, #FILATRMBR audits” on page 591.

SYS2CCSID System 2 value CCSID BIN(5) 1-65535

ASPDEV1 System 1 ASP device CHAR(10) *NONE, User-defined name

ASPDEV2 System 2 ASP device CHAR(10) *NONE, User-defined name


Page 636: MIMIX Reference

MXCMPFILD outfile (CMPFILDTA command)

For additional information for interpreting this outfile, see “Interpreting results of audits for record counts and file data” on page 582.

The following fields require additional explanation:

Major mismatches before - Indicates the number of mismatched records found. A value other than 0 (zero) indicates that there are either missing records or data within records that does not match.

Major mismatches after - Indicates the number of mismatched records remaining. If repair was requested, this value should be 0 (zero); otherwise, the value should equal that shown in the Major mismatches before column.

Minor mismatches after - Indicates the number of differences remaining that do not affect data integrity.

Apply pending - Indicates the number of records for which the database apply process has not yet performed repair processing.
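The relationship among these counters can be summarized as a small decision rule. The sketch below is illustrative only; the classification labels are not MIMIX terminology, and the field names follow the MXCMPFILD layout in the table that follows.

```python
# Hedged sketch: interpreting a CMPFILDTA result row from its mismatch
# counters (MAJMISMBEF = major mismatches before, MAJMISMAFT = after).
def classify(majmismbef: int, majmismaft: int, repair_requested: bool) -> str:
    """Label a compared member based on the counter relationships above."""
    if majmismbef == 0:
        return "equal"                        # no mismatched records found
    if repair_requested and majmismaft == 0:
        return "repaired"                     # all mismatches were corrected
    if not repair_requested and majmismaft == majmismbef:
        return "mismatched (compare only)"    # counts match, as expected
    return "repair incomplete"
```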

Table 117. Compare File Data (CMPFILDTA) output file (MXCMPFILD)

Field Description Type, length Valid values

TIMESTAMP Timestamp (YYYY-MM-DD.HH.MM.SSmmmmmm) TIMESTAMP SAA timestamp

COMMAND Command name CHAR(10) “CMPFILDTA”

DGSHRTNM Data group short name CHAR(3) Short data group name

DGNAME Data group definition name CHAR(10) User-defined data group name; blank if no DG specified on the command

SYSTEM1 System 1 CHAR(8) User-defined system name *local system name if no DG specified

SYSTEM2 System 2 CHAR(8) User-defined system name; remote system name if no DG specified

DTASRC Data source CHAR(10) *SYS1, *SYS2

Page 637: MIMIX Reference


SYS1OBJ System 1 object name CHAR(10) User-defined name

SYS1LIB System 1 library name CHAR(10) User-defined name

MBR Member name CHAR(10) User-defined name

SYS2OBJ System 2 object name CHAR(10) User-defined name

SYS2LIB System 2 library name CHAR(10) User-defined name

OBJTYPE Object type CHAR(10) *FILE

DIFIND Differences indicator CHAR(10) See “What attribute differences were detected” on page 587

REPAIRSYS Repair system CHAR(10) *SYS1, *SYS2

FILEREP File repair successful CHAR(10) Blank, *YES, *NO

TOTRCDS Total records compared DECIMAL(20) 0 - 99999999999999999999

MAJMISMBEF Major mismatches before processing DECIMAL(20) 0 - 99999999999999999999

MAJMISMAFT Major mismatches after processing DECIMAL(20) 0 - 99999999999999999999

MINMISMAFT Minor mismatches after processing DECIMAL(20) 0 - 99999999999999999999


Page 638: MIMIX Reference


APYPENDING Apply pending records DECIMAL(20) 0 - 99999999999999999999

ASPDEV1 System 1 ASP device CHAR(10) *NONE, User-defined name

ASPDEV2 System 2 ASP device CHAR(10) *NONE, User-defined name

TMPSQLVIEW Temporary target system SQL view pathname CHAR(33) i5/OS-format path name or blanks


Page 639: MIMIX Reference


MXCMPFILR outfile (CMPFILDTA command, RRN report)

This output file format is the result of specifying *RRN for the report type on the Compare File Data (CMPFILDTA) command. Output in this format enables you to see the relative record number (RRN) of the first 1,000 objects that failed to compare. This value is useful when resolving situations where a discrepancy is known to exist, but you are unsure which system contains the correct data. Viewing the RRN value provides information that enables you to display the specific records on the two systems and to determine the system on which the file should be repaired.
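One way to display the record at a reported RRN is with SQL, using the RRN scalar function available on DB2 for i; run the same statement on both systems and compare the results. The helper below simply builds such a statement; the library and file names are placeholders.

```python
# Hedged sketch: building an SQL statement to fetch the record at a given
# relative record number, as reported in the MXCMPFILR outfile. RRN() is
# the DB2 for i scalar function; APPLIB/CUSTOMER below are placeholders.
def rrn_select(library: str, file: str, rrn: int) -> str:
    """Return an SQL statement that retrieves the record at the given RRN."""
    return f"SELECT * FROM {library}.{file} AS t WHERE RRN(t) = {rrn}"

print(rrn_select("APPLIB", "CUSTOMER", 57))
```

Running the generated statement on system 1 and system 2 shows the two versions of the record so you can decide which system holds the correct data.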

Table 118. Compare File Data (CMPFILDTA) relative record number (RRN) output file (MXCMPFILR)

Field Description Type, length Valid values Column head-ings

SYSTEM 1 System 1 CHAR(8) User-defined system name; local system name if no DG specified SYSTEM 1

SYSTEM 2 System 2 CHAR(8) User-defined system name; local system name if no DG specified SYSTEM 2

SYS1OBJ System 1 object name CHAR(10) User-defined name SYSTEM 1 OBJECT

SYS1LIB System 1 library name CHAR(10) User-defined name SYSTEM 1 LIBRARY

MBR Member name CHAR(10) User-defined name MEMBER

SYS2OBJ System 2 object name CHAR(10) User-defined name SYSTEM 2 OBJECT

SYS2LIB System 2 library name CHAR(10) User-defined name SYSTEM 2 LIBRARY

RRN Relative record number DECIMAL(10) Number RRN

ASPDEV1 System 1 ASP device CHAR(10) *NONE, user-defined name SYSTEM 1 ASP DEVICE

ASPDEV2 System 2 ASP device CHAR(10) *NONE, user-defined name SYSTEM 2 ASP DEVICE

Page 640: MIMIX Reference

MXCMPRCDC outfile (CMPRCDCNT command)

For additional information for interpreting this outfile, see “Interpreting results of audits for record counts and file data” on page 582.

Table 119. Compare Record Count (CMPRCDCNT) output file (MXCMPRCDC)

Field Description Format Valid values

TIMESTAMP Timestamp (YYYY-MM-DD.HH.MM.SS.mmmmmm) TIMESTAMP SAA timestamp

COMMAND Command name CHAR(10) CMPRCDCNT

DGSHRTNM Data group short name CHAR(3) short data group name

DGNAME Data group definition name CHAR(10) user-defined data group name* blank if not DG specified on the command

SYSTEM1 System 1 CHAR(8) user-defined system name* local system name if no DG specified

SYSTEM2 System 2 CHAR(8) user-defined system name* remote system name if no DG specified

DTASRC Data source CHAR(10) *SYS1, *SYS2

SYS1OBJ System 1 object name CHAR(10) user-defined name

SYS1LIB System 1 library name CHAR(10) user-defined name

MBR Member name CHAR(10) user-defined name

Page 641: MIMIX Reference


DIFIND Differences indicator CHAR(10) Refer to the differences indicator table

SYS1CURCNT System 1 current records DECIMAL(20) 0 - 99999999999999999999

SYS2CURCNT System 2 current records DECIMAL(20) 0 - 99999999999999999999

SYS1DLTCNT System 1 deleted records DECIMAL(20) 0 - 99999999999999999999

SYS2DLTCNT System 2 deleted records DECIMAL(20) 0 - 99999999999999999999

ASPDEV1 System 1 ASP device CHAR(10) *NONE, user-defined name

ASPDEV2 System 2 ASP device CHAR(10) *NONE, user-defined name

ACTRCDPND Active records pending DECIMAL(20) 0 - 99999999999999999999



Page 644: MIMIX Reference

MXCMPIFSA outfile (CMPIFSA command)

For additional supporting information, see “Interpreting results of audits that compare attributes” on page 586.

Table 120. CMPIFSA Output file (MXCMPIFSA)

Field Description Type, length Valid values

TIMESTAMP Timestamp (YYYY-MM-DD.HH.MM.SSmmmmmm) TIMESTAMP SAA timestamp

COMMAND Command name CHAR(10) CMPIFSA

DGSHRTNM Data group short name CHAR(3) Short data group name

DGNAME Data group definition name CHAR(10) User-defined data group name; blank if no DG specified on the command

SYSTEM1 System 1 CHAR(8) User-defined system name; local system name if no DG specified

SYSTEM2 System 2 CHAR(8) User-defined system name; remote system name if no DG specified

DTASRC Data source CHAR(10) *SYS1, *SYS2

SYS1OBJ System 1 object name CHAR(10) User-defined name

SYS2OBJ System 2 object name CHAR(10) User-defined name

CCSID IFS object name CCSID BIN(5) User-defined name

CNTRYID IFS object name country ID CHAR(2) System-defined name

LANGID IFS object name language ID CHAR(3) System-defined name

CMPATR Compared attribute CHAR(10) See “Attributes compared and expected results - #IFSATR audit” on page 604.

SYS1IND System 1 file indicator CHAR(10) See Table 87 in “Where was the difference detected” on page 589.

Page 645: MIMIX Reference


SYS2IND System 2 file indicator CHAR(10) See Table 87 in “Where was the difference detected” on page 589.

DIFIND Differences indicator CHAR(10) See “What attribute differences were detected” on page 587.

SYS1VAL System 1 value of the specified attribute VARCHAR(2048) MINLEN(50) See “Attributes compared and expected results - #IFSATR audit” on page 604.

SYS1CCSID System 1 value CCSID BIN(5) 1-65535

SYS2VAL System 2 value of the specified attribute VARCHAR(2048) MINLEN(50) See “Attributes compared and expected results - #IFSATR audit” on page 604.

SYS2CCSID System 2 value CCSID BIN(5) 1-65535



Page 647: MIMIX Reference

MXCMPOBJA outfile (CMPOBJA command)

For additional supporting information, see “Interpreting results of audits that compare attributes” on page 586.

Table 121. CMPOBJA Output file (MXCMPOBJA)

Field Description Type, length Valid values

TIMESTAMP Timestamp (YYYY-MM-DD.HH.MM.SSmmmm) TIMESTAMP SAA timestamp

COMMAND Command name CHAR(10) CMPOBJA

DGSHRTNM Data group short name CHAR(3) Short data group name

DGNAME Data group definition name CHAR(10) User-defined data group name; blank if no DG specified on the command

SYSTEM1 System 1 CHAR(8) User-defined system name; local system name if no DG specified

SYSTEM2 System 2 CHAR(8) User-defined system name; remote system name if no DG specified

DTASRC Data source CHAR(10) *SYS1, *SYS2

SYS1OBJ System 1 object name CHAR(10) User-defined name

SYS1LIB System 1 library name CHAR(10) User-defined name

MBR Member name CHAR(10) User-defined name

SYS2OBJ System 2 object name CHAR(10) User-defined name

SYS2LIB System 2 library name CHAR(10) User-defined name

OBJTYPE Object type CHAR(10) User-defined name

Page 648: MIMIX Reference


CMPATR Compared attribute CHAR(10) See “Attributes compared and expected results - #OBJATR audit” on page 596

SYS1IND System 1 file indicator CHAR(10) See Table 87 in “Where was the difference detected” on page 589

SYS2IND System 2 file indicator CHAR(10) See Table 87 in “Where was the difference detected” on page 589

DIFIND Differences indicator CHAR(10) See “What attribute differences were detected” on page 587

SYS1VAL System 1 value of the specified attribute VARCHAR(2048) MINLEN(50) See “Attributes compared and expected results - #OBJATR audit” on page 596

SYS1CCSID System 1 value CCSID BIN(5) 1-65535

SYS2VAL System 2 value of the specified attribute VARCHAR(2048) MINLEN(50) See “Attributes compared and expected results - #OBJATR audit” on page 596

SYS2CCSID System 2 value CCSID BIN(5) 1-65535

ASPDEV1 System 1 ASP device CHAR(10) *NONE, User-defined name

ASPDEV2 System 2 ASP device CHAR(10) *NONE, User-defined name


Page 649: MIMIX Reference

MXDGACT outfile (WRKDGACT command)

Table 122. MXDGACT outfile (WRKDGACT command)

Field Description Type, length Valid values

DGDFN Data group name (Data group definition) CHAR(10) User-defined data group name

DGSYS1 System 1 name (Data group definition) CHAR(8) User-defined system name

DGSYS2 System 2 name (Data group definition) CHAR(8) User-defined system name

DTASRC Data source CHAR(10) *SYS1, *SYS2

STATUS Object status category CHAR(10) *COMPLETED, *FAILED, *DELAYED, *ACTIVE

TYPE Object type CHAR(10) Refer to the OM5100P file for the list of valid object types

OBJATR Object attribute CHAR(10) Refer to the OM5200P file for the list of valid object attributes

REASON Failure reason CHAR(11) *INUSE, *RESTRICTED, *NOTFOUND, *OTHER, blank

COUNT Entry count PACKED(5 0) 0-9999 (9999 = maximum value supported)

OBJCAT Object category CHAR(10) *DLO, *IFS, *SPLF, *LIB

OBJLIB Object library CHAR(10) User-defined name, BLANK

OBJ Object name CHAR(10) User-defined name, BLANK

OBJMBR Member name CHAR(10) User-defined name, BLANK

DLO DLO name CHAR(12) User-defined name, BLANK

Page 650: MIMIX Reference


FLR Folder name CHAR(63) User-defined name, BLANK

SPLFJOB Spooled file job name CHAR(26) Three-part spooled file job name, BLANK

SPLF Spooled file name CHAR(10) User-defined name, BLANK

SPLFNBR Spooled file number PACKED(7 0) 1-99999, BLANK

OUTQ Output queue CHAR(10) User-defined name, *NONE, BLANK

OUTQLIB Output queue library CHAR(10) User-defined name, *NONE, BLANK

IFS Object IFS name CHAR(1024) VARLEN(100) User-defined name, BLANK

CCSID Object CCSID BIN(5 0) Defaults to the job CCSID. If the value cannot be converted to the job's CCSID, or the job CCSID is 65535, related fields are written in Unicode

IFSUCS IFS Object (Unicode) GRAPHIC(512) VARLEN(75) CCSID(13488) User-defined name (Unicode), BLANK
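The IFS/IFSUCS pair implements the CCSID fallback described above: the CHAR field carries the path when it converts to the job CCSID, and the Unicode field is used otherwise. The sketch below is an illustrative restatement of that rule using a plain dict as the record; it is not MIMIX code.

```python
# Hedged sketch of the CCSID fallback for the IFS/IFSUCS field pair:
# use the CHAR field (IFS) when the path converted to the job CCSID,
# otherwise (or when the job CCSID is 65535) read the Unicode field.
def ifs_path(record: dict, job_ccsid: int) -> str:
    if job_ccsid == 65535 or not record.get("IFS"):
        return record["IFSUCS"]   # fall back to the Unicode field
    return record["IFS"]
```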


Page 651: MIMIX Reference

MXDGACTE outfile (WRKDGACTE command)

Table 123. MXDGACTE outfile (WRKDGACTE command)

Field Description Type, length Valid values

DGDFN Data group name (Data group definition) CHAR(10) User-defined data group name

DGSYS1 System 1 name (Data group definition) CHAR(8) User-defined system name

DGSYS2 System 2 name (Data group definition) CHAR(8) User-defined system name

DTASRC Data source CHAR(10) *SYS1, *SYS2

STATUS Object status category CHAR(10) *COMPLETED, *FAILED, *DELAYED, *ACTIVE

OBJSTATUS Object status CHAR(2) Refer to the online help for the complete list

TYPE Object type CHAR(10) Refer to the OM5100P file for the list of valid object types

OBJATR Object attribute CHAR(10) Refer to the OM5200P file for the list of valid object attributes

REASON Failure reason CHAR(11) *INUSE, *RESTRICTED, *NOTFOUND, *OTHER, blank

OBJCAT Object category CHAR(10) *DLO, *IFS, *SPLF, *LIB

SEQNBR Journal sequence number PACKED(10 0) 1-9999999999

JRNCODE Journal entry code CHAR(1) Valid journal codes

Page 652: MIMIX Reference


JRNTYPE Journal entry type CHAR(2) Valid journal types

JRNTSP Journal entry timestamp TIMESTAMP YYYY-MM-DD.HH.MM.SS.mmmmmm

JRNSNDTSP Journal entry send timestamp TIMESTAMP YYYY-MM-DD.HH.MM.SS.mmmmmm

JRNRCVTSP Journal entry receive timestamp TIMESTAMP YYYY-MM-DD.HH.MM.SS.mmmmmm

JRNRTVTSP Journal entry retrieve timestamp TIMESTAMP YYYY-MM-DD.HH.MM.SS.mmmmmm

CNRSNDTSP Container send timestamp TIMESTAMP YYYY-MM-DD.HH.MM.SS.mmmmmm

JRNAPYTSP Journal entry apply timestamp TIMESTAMP YYYY-MM-DD.HH.MM.SS.mmmmmm

REQCNRSND Requires container send CHAR(10) *YES, *NO

RTYWAIT Waiting for retry CHAR(10) *YES, *NO

RTYATTEMPT Number of retries attempted PACKED(5 0) 0-1998

RTYREMAIN Number of retries remaining PACKED(5 0) 0-1998


Page 653: MIMIX Reference


DLYITV Delay interval PACKED(5 0) 1-7200

NXTRTYTSP Next retry timestamp TIMESTAMP YYYY-MM-DD.HH.MM.SS.mmmmmm

MSGID Message ID CHAR(7) Valid message ID, BLANK

MSG Message data CHAR(256) VARLEN(50) Valid message data, BLANK

FAILEDJOB Failed job name CHAR(26) Job name, BLANK

JRNENT Journal entry CHAR(400) Journal entry

OBJLIB Object library CHAR(10) User-defined name, BLANK

OBJ Object name CHAR(10) User-defined name, BLANK

OBJMBR Member name CHAR(10) User-defined name, BLANK

DLO DLO name CHAR(12) User-defined name, BLANK

FLR Folder name CHAR(63) User-defined name, BLANK

SPLFJOB Spooled file job name CHAR(26) Three part spooled file name, BLANK

SPLF Spooled file name CHAR(10) User-defined name, BLANK

SPLFNBR Spooled file number PACKED(7 0) 1-99999, BLANK

OUTQ Output queue CHAR(10) User-defined name, *NONE, BLANK

OUTQLIB Output queue library CHAR(10) User-defined name, *NONE, BLANK

IFS Object IFS name CHAR(1024) VARLEN(100) User-defined name, BLANK


Page 654: MIMIX Reference


CCSID Object CCSID BIN(5 0) Default to job CCSID. If unable to convert to job's CCSID or job CCSID is 65535, related fields will be written in Unicode.

TGTOBJLIB Target system object library name CHAR(10) User-defined name, BLANK

TGTOBJ Target system object name CHAR(10) User-defined name, BLANK

TGTOBJMBR Target system object member name CHAR(10) User-defined name, BLANK

TGTDLO Target system DLO name CHAR(12) User-defined name, BLANK

TGTFLR Target system object folder name CHAR(63) User-defined name, BLANK

TGTSPLFJOB Target system spooled file job name CHAR(26) Three part spooled file name, BLANK

TGTSPLF Target system spooled file name CHAR(10) User-defined name, BLANK

TGTSPLFNBR Target system spooled file job number PACKED(7 0) 1-999999, BLANK

TGTOUTQ Target system output queue CHAR(10) User-defined name, BLANK

TGTOUTQLIB Target system output queue library CHAR(10) User-defined name, BLANK

TGTIFS Target system IFS name CHAR(1024) VARLEN(100) User-defined name, BLANK


Page 655: MIMIX Reference


RNMOBJLIB Renamed object library name CHAR(10) User-defined name, BLANK

RNMOBJ Renamed object name CHAR(10) User-defined name, BLANK

RNMOBJMBR Renamed object member name CHAR(10) User-defined name, BLANK

RNMDLO Renamed DLO name CHAR(12) User-defined name, BLANK

RNMFLR Renamed object folder name CHAR(63) User-defined name, BLANK

RNMSPLFJOB Renamed spooled file job name CHAR(26) Three part spooled file name, BLANK

RNMSPLF Renamed spooled file name CHAR(10) User-defined name, BLANK

RNMSPLFNBR Renamed spooled file number PACKED(7 0) 1-999999, BLANK

RNMOUTQ Renamed output queue CHAR(10) User-defined name, BLANK

RNMOUTQLIB Renamed output queue library CHAR(10) User-defined name, BLANK

RNMIFS Renamed IFS object name CHAR(1024) VARLEN(100) User-defined name, BLANK

RNMOBJLIB Renamed target object library name CHAR(10) User-defined name, BLANK


Page 656: MIMIX Reference


RNMTGTOBJ Renamed target object name CHAR(10) User-defined name, BLANK

RNMTOBJMBR Renamed target object member name CHAR(10) User-defined name, BLANK

RNMTGTDLO Renamed target object DLO name CHAR(12) User-defined name, BLANK

RNMTGTFLR Renamed target object folder name CHAR(63) User-defined name, BLANK

RNMTSPLFJ Renamed target spooled file job name CHAR(26) Three part spooled file name, BLANK

RNTTGTSPLF Renamed target spooled file name CHAR(10) User-defined name, BLANK

RNMTSPLFN Renamed target spooled file number PACKED(7 0) 1-999999, BLANK

RNMTGTOUTQ Renamed target output queue CHAR(10) User-defined name, BLANK

RNMTOUTQL Renamed target output queue library CHAR(10) User-defined name, BLANK

RNMTGTIFS Renamed target object IFS name CHAR(1024) VARLEN(100) User-defined name, BLANK


Page 657: MIMIX Reference


COOPDB Cooperate with DB CHAR(10) *YES, *NO, BLANK

OBJFID IFS object file identifier (binary format) BIN(16) Binary representation of file identifier

OBJFIDHEX IFS object file identifier (character format) CHAR(32) Character representation of file identifier

IFSUCS IFS Object (UNICODE) GRAPHIC(512) VARLEN(75) CCSID(13488) User-defined name (Unicode), BLANK

TGTIFSUCS TGT IFS Object (UNICODE) GRAPHIC(512) VARLEN(75) CCSID(13488) User-defined name (Unicode), BLANK

RNMIFSUCS RNM IFS Object (UNICODE) GRAPHIC(512) VARLEN(75) CCSID(13488) User-defined name (Unicode), BLANK

RNMTGTIFSU RNM TGT IFS Object (UNICODE) GRAPHIC(512) VARLEN(75) CCSID(13488) User-defined name (Unicode), BLANK


Page 658: MIMIX Reference


Page 659: MIMIX Reference


MXDGDAE outfile (WRKDGDAE command)

Table 124. MXDGDAE outfile (WRKDGDAE command)

Field Description Type, length Valid values Column headings

DGDFN Data group name (Data group definition) CHAR(10) User-defined data group name DGDFN NAME

DGSYS1 System 1 name (Data group definition) CHAR(8) User-defined system name DGDFN SYSTEM 1

DGSYS2 System 2 name (Data group definition) CHAR(8) User-defined system name DGDFN SYSTEM 2

DTAARA1 System 1 data area CHAR(10) User-defined name, *ALL SYSTEM 1 DATA AREA

DTAARALIB1 System 1 data area library CHAR(10) User-defined name SYSTEM 1 DATA AREA LIBRARY

DTAARA2 System 2 data area CHAR(10) User-defined name, *ALL SYSTEM 2 DATA AREA

DTAARALIB2 System 2 data area library CHAR(10) User-defined name SYSTEM 2 DATA AREA LIBRARY

TEXT Description CHAR(50) User-defined text DESCRIPTION

RTVERR Retrieve error field CHAR(10) *NO, *YES RETRIEVE ERROR FIELD

Page 660: MIMIX Reference


MXDGDFN outfile (WRKDGDFN command)

Table 125. MXDGDFN outfile (WRKDGDFN command)

Field Description Type, length Valid values

DGDFN Data group definition name (Data group definition) CHAR(10) User-defined data group name

DGSYS1 System 1 (Data group definition) CHAR(8) User-defined system name

DGSYS2 System 2 (Data group definition) CHAR(8) User-defined system name

DGSHRTNM Data group short name CHAR(3) Short data group name

DTASRC Data source CHAR(10) *SYS1, *SYS2

ALWSWT Allow to be switched CHAR(10) *YES, *NO

DGTYPE Data group type CHAR(10) *ALL, *OBJ, *DB

PRITFRDFN Configured primary transfer definition CHAR(10) User-defined name, *DGDFN

SECTFRDFN Secondary transfer definition CHAR(10) User-defined name, *NONE

RDRWAIT Reader wait time (seconds) PACKED(5 0) 0-600

JRNTGT Journal on target CHAR(10) *YES, *NO

JRNDFN1 Configured system 1 journal definition CHAR(10) *DGDFN, user-defined name, *NONE

JRNDFN1NM Actual system 1 journal definition name CHAR(10) User-defined name, blank

JRNDFN2SYS System 2 journal definition system name CHAR(8) User-defined name, blank

Page 661: MIMIX Reference


JRNDFN2 Configured system 2 journal definition CHAR(10) *DGDFN, user-defined name, *NONE

JRNDFN2NM Actual system 2 journal definition name CHAR(10) User-defined name, blank

JRNDFN2SYS System 2 journal definition system name CHAR(8) User-defined name, blank

RJLNK Use remote journal link CHAR(10) *YES, *NO

NBRDBAPY Number of DB apply sessions PACKED(3 0) 1-6

RQSDBAPY Requested number of DB apply sessions PACKED(3 0) 1-6

DBBFRIMG Before images (DB journal entry processing) CHAR(10) *IGNORE, *SEND

DBNOTINDG For files not in data group (DB journal entry processing) CHAR(10) *IGNORE, *SEND

DBMMXGEN Generated by MIMIX activity (DB journal entry processing) CHAR(10) *IGNORE, *SEND

DBNOTUSED Not used by MIMIX (DB journal entry processing) CHAR(10) *IGNORE, *SEND

TEXT Description CHAR(50) *BLANK, user-defined text

SYNCCHKITV Synchronization check interval PACKED(5 0) 0 - 999999 (0 = *NONE)

TSPITV Time stamp interval PACKED(5 0) 0 - 999999 (0 = *NONE)


Page 662: MIMIX Reference


VFYITV Verify interval PACKED(5 0) 1000-999999

DTAARAITV Data area polling interval PACKED(5 0) 1-7200

RTYNBR Number of times to retry PACKED(3 0) 0-999

RTYDLYITV1 First retry delay interval PACKED(5 0) 1-3600

RTYDLYITV2 Second retry delay interval PACKED(5 0) 10-7200

ADPCHE Adaptive cache CHAR(10) *YES, *NO

DATACRG Data cluster resource group CHAR(10) User-defined name, blank, *NONE

DFTJRNIMG Journal image (File entry options) CHAR(10) *AFTER, *BOTH

DFTOPNCLO Omit open / close entries (File entry options) CHAR(10) *NO, *YES

DFTREPTYPE Replication type (File entry options) CHAR(10) *POSITION, *KEYED

DFTAPYLOCK Lock member during apply (File entry options) CHAR(10) *YES, *NO

DFTAPYSSN Configured apply session (File entry options) CHAR(10) *ANY, A-F

DFTCRCLS Collision resolution (File entry options) CHAR(10) *HLDERR, *AUTOSYNC, user-defined name


Page 663: MIMIX Reference


DFTSBTRG Disable triggers during apply (File entry options) CHAR(10) *YES, *NO

DFTPRCCST Process constraint entries (File entry options) CHAR(10) *YES

DBFRCITV Force data interval (Database apply processing) PACKED(5 0) 1-99999

DBMAXOPN Maximum open members (Database apply processing) PACKED(5 0) 50 - 32767

DBAPYTWRN Threshold warning (Database apply processing) PACKED(7 0) 0, 100-9999999

DBAPYHST Apply history log spaces (Database apply processing) PACKED(5 0) 0-9999

DBKEEPLOG Keep journal log spaces (Database apply processing) PACKED(5 0) 0-9999

DBLOGSIZE Size of log spaces (MB) (Database apply processing) PACKED(5 0) 1-16

OBJDFTOWN Object default owner (Object processing) CHAR(10) User-defined name

OBJDLOMTH DLO transmission method (Object processing) CHAR(10) *OPTIMIZED, *SAVRST

OBJIFSMTH IFS transmission method (Object processing) CHAR(10) *SAVRST, *OPTIMIZED


Page 664: MIMIX Reference


OBJUSRSTS User profile status (Object processing) CHAR(10) *SRC, *TGT, *ENABLE, *DISABLE

OBJKEEPSPL Keep deleted spooled files (Object processing) CHAR(10) *YES, *NO

OBJKEEPDLO Keep DLO system name (Object processing) CHAR(10) *YES, *NO

OBJRTVDLY Retrieve delay (Object retrieve processing) PACKED(3 0) 0-999

OBJRTVMINJ Minimum number of jobs (Object retrieve processing) PACKED(3 0) 1-99

OBJRTVMAXJ Maximum number of jobs (Object retrieve processing) PACKED(3 0) 1-99

OBJRTVTHLD Threshold for more jobs (Object retrieve processing) PACKED(5 0) 1-99999

CNRSNDMINJ Minimum number of jobs (Container send processing) PACKED(3 0) 1-99

CNRSNDMAXJ Maximum number of jobs (Container send processing) PACKED(3 0) 1-99

CNRSNDTHLD Threshold for more jobs (Container send processing) PACKED(5 0) 1-99999


Page 665: MIMIX Reference


OBJAPYMINJ Minimum number of jobs (Object apply processing) PACKED(3 0) 1-99

OBJAPYMAXJ Maximum number of jobs (Object apply processing) PACKED(3 0) 1-99

OBJAPYTHLD Threshold for more jobs (Object apply processing) PACKED(5 0) 1-99999

OBJAPYTWRN Threshold for warning messages (Object apply processing) PACKED(5 0) 0, 50-99999 (0 = *NONE)

SBMUSR User profile for submit job CHAR(10) *JOBD, *CURRENT

SNDJOBD Send job description CHAR(10) Job description name

SNDJOBDLIB Send job description library CHAR(10) Job description library

APYJOBD Apply job description CHAR(10) Job description name

APYJOBDLIB Apply job description library CHAR(10) Job description library

RGZJOBD Reorganize job description CHAR(10) Job description name

RGZJOBDLIB Reorganize job description library CHAR(10) Job description library

SYNJOBD Synchronize job description CHAR(10) Job description name

SYNJOBDLIB Synchronize job description library CHAR(10) Job description library


Page 666: MIMIX Reference


SAVACT Save while active (seconds) PACKED(5 0) -1, 0, 1-99999 (0 = Save while active for files only with a 120 second wait time) (-1 = No save while active) (1-99999 = Save while active for all object types with specified wait time)

RSTARTTIME Restart Time CHAR(8) 000000 - 235959, *SYSDFN1, *SYSDFN2 (000000 = midnight (default))

ASPGRP1 System 1 ASP group CHAR(10) *NONE, User-defined name

ASPGRP2 System 2 ASP group CHAR(10) *NONE, User-defined name

COOPJRN Cooperative Journal CHAR(10) *SYSJRN, *USRJRN

RCYWINPRC Recovery Window Process CHAR(7) *NONE, *ALL

RCHWINDUR Recovery Window Duration PACKED(5 0) 0-99999

JRNATCRT Journal at creation CHAR(10) *DFT, *YES, *NO

RJLNKTHLDM RJ Link Threshold (Time in minutes) PACKED(4 0) 0-9999 (0 = *NONE)

RJLNKTHLDE RJ Link Threshold (Number of journal entries) PACKED(7 0) 0, 1000-9999999 (0 = *NONE)

DBSNDTHLDM DB Send/Reader Threshold (Time in minutes) PACKED(4 0) 0-9999 (0 = *NONE)


Page 667: MIMIX Reference


Updated for 5.0.08.00 and 5.0.13.00.

DBSNDTHLDE DB Send/Reader Threshold (Number of journal entries) PACKED(7 0) 0, 1000-9999999 (0 = *NONE)

OBJSNDTHDM Object Send Threshold (Time in minutes) PACKED(4 0) 0-9999 (0 = *NONE)

OBJSNDTHDE Object Send Threshold (Number of journal entries) PACKED(7 0) 0, 1000-9999999 (0 = *NONE)

OBJRTVTHDE Object Retrieve Threshold (Number of activity entries) PACKED(5 0) 0, 50-99999 (0 = *NONE)

CNRSNDTHDE Container Send Threshold (Number of activity entries) PACKED(5 0) 0, 50-99999 (0 = *NONE)


Page 668: MIMIX Reference


MXDGDLOE outfile (WRKDGDLOE command)

Table 126. MXDGDLOE outfile (WRKDGDLOE command)

Field Description Type, length Valid values Column headings

DGDFN Data group name (Data group definition) CHAR(10) User-defined data group name DGDFN NAME

DGSYS1 System 1 name (Data group definition) CHAR(8) User-defined system name DGDFN SYSTEM 1

DGSYS2 System 2 name (Data group definition) CHAR(8) User-defined system name DGDFN SYSTEM 2

FLR1 System 1 folder CHAR(63) User-defined name SYSTEM 1 FOLDER

DOC1 System 1 document CHAR(12) User-defined name, *ALL SYSTEM 1 DLO

OWNER Owner CHAR(10) User-defined name, *ALL OWNER

FLR2 System 2 folder CHAR(63) *FLR1, User-defined name SYSTEM 2 FOLDER

DOC2 System 2 document CHAR(12) *DOC1, User-defined name SYSTEM 2 DLO

OBJAUD Object auditing value CHAR(10) *CHANGE, *ALL, *NONE OBJECT AUDITING VALUE

PRCTYPE Process type CHAR(10) *INCLD, *EXCLD PROCESS TYPE

OBJRTVDLY Retrieve delay (Object retrieve processing) PACKED(3 0) 0-999, *DGDFT OBJRTVPRC DELAY

Page 669: MIMIX Reference


Page 670: MIMIX Reference


MXDGFE outfile (WRKDGFE command)

Table 127. MXDGFE outfile (WRKDGFE command)

Field Description Type, length Valid values Column headings

DGDFN Data group name (Data group definition) CHAR(10) User-defined data group name DGDFN NAME

DGSYS1 System 1 name (Data group definition) CHAR(8) User-defined system name DGDFN SYSTEM 1

DGSYS2 System 2 name (Data group definition) CHAR(8) User-defined system name DGDFN SYSTEM 2

FILE1 System 1 file name CHAR(10) User-defined name SYSTEM 1 FILE

LIB1 System 1 library name CHAR(10) User-defined name SYSTEM 1 LIBRARY

MBR1 System 1 member name CHAR(10) User-defined name SYSTEM 1 MEMBER

FILE2 System 2 file name CHAR(10) User-defined name SYSTEM 2 FILE

LIB2 System 2 library name CHAR(10) User-defined name SYSTEM 2 LIBRARY

MBR2 System 2 member name CHAR(10) User-defined name SYSTEM 2 MEMBER

TEXT Description CHAR(50) User-defined text DESCRIPTION

JRNIMG Journal image (File entry options) CHAR(10) *AFTER, *BOTH, *DGDFT FEOPT JOURNAL IMAGE

OPNCLO Omit open/close entries (File entry options) CHAR(10) *YES, *NO, *DGDFT FEOPT OMIT OPEN CLOSE

Page 671: MIMIX Reference


REPTYPE Replication type (File entry options) CHAR(10) *POSITION, *KEYED, *DGDFT FEOPT REPLICATION TYPE

APYLOCK Lock member during apply (File entry options) CHAR(10) *YES, *NO, *DGDFT FEOPT LOCK MBR ON APPLY

FTRBFRIMG Filter before image (File entry options) CHAR(10) *YES, *NO, *DGDFT FEOPT FILTER BFR IMAGE

APYSSN Current apply session (File entry options) CHAR(10) A-F, *DGDFT FEOPT CURRENT APYSSN

RQSAPYSSN Configured or requested apply session (File entry options) CHAR(10) A-F, *DGDFT FEOPT REQUESTED APYSSN

CRCLS Collision resolution class (File entry options) CHAR(10) *HLDERR, *AUTOSYNC, user-defined name FEOPT COLLISION RESOLUTION

DSBTRG Disable triggers during apply (File entry options) CHAR(10) *YES, *NO, *DGDFT FEOPT DISABLE TRIGGERS

PRCTRG Process trigger entries (File entry options) CHAR(10) *YES, *NO, *DGDFT FEOPT PROCESS TRIGGERS

PRCCST Process constraint entries (File entry options) CHAR(10) *YES FEOPT PROCESS CONSTRAINTS

STATUS File status CHAR(10) *ACTIVE, *RLSWAIT, *RLSCLR, *HLD, *HLDIGN, *RLS, *HLDRGZ, *HLDPRM, *HLDRNM, *HLDSYNC, *HLDRTY, *HLDERR, *HLDRLTD, *CMPACT, *CMPRLS, *CMPRPR CURRENT STATUS


Page 672: MIMIX Reference


Updated for 5.0.07.00 and 5.0.08.00.

RQSSTS Requested file status CHAR(10) *ACTIVE, *HLD, *HLDIGN, *RLS, *RLSWAIT REQUESTED STATUS

JRN1STS System 1 journaled CHAR(10) *YES, *NO, *NA SYSTEM 1 JOURNALED

JRN2STS System 2 journaled CHAR(10) *YES, *NO, *NA SYSTEM 2 JOURNALED

ERRCDE Error code CHAR(2) Valid error codes ERROR CODE

JECDE Journal entry code CHAR(1) Valid journal entry code JOURNAL ENTRY CODE

JETYPE Journal entry type CHAR(2) Valid journal entry type JOURNAL ENTRY TYPE


Page 673: MIMIX Reference


Page 674: MIMIX Reference


MXDGIFSE outfile (WRKDGIFSE command)

Table 128. MXDGIFSE outfile (WRKDGIFSE command)

Field Description Type, length Valid values

DGDFN Data group name (Data group definition) CHAR(10) User-defined data group name

DGSYS1 System 1 name (Data group definition) CHAR(8) User-defined system name

DGSYS2 System 2 name (Data group definition) CHAR(8) User-defined system name

OBJ1 System 1 object CHAR(1024) User-defined name

OBJ2 System 2 object CHAR(1024) *OBJ1, user-defined name

CCSID Object CCSID BIN(5 0) Defaults to job CCSID. If job CCSID is 65535 or data cannot be converted to job CCSID, OBJ1 and OBJ2 values remain in Unicode

PRCTYPE Process type CHAR(10) *INCLD, *EXCLD

TYPE Object type CHAR(10) *DIR, *STMF, *SYMLNK

OBJRTVDLY Retrieve delay (Object retrieve processing) CHAR(10) 0-999, *DGDFT

COOPDB Cooperate with database CHAR(10) *YES, *NO, blank

OBJAUD Object auditing CHAR(10) *NONE, *CHANGE, *ALL

Page 675: MIMIX Reference


Page 676: MIMIX Reference

MXDGSTS outfile (WRKDG command)

The MXDGSTS outfile contains status information which corresponds to fields shown in the following interfaces:

• MIMIX Availability Manager: the data group detail status displays

• 5250 emulator: Work with Data Groups (WRKDG) command

The Work with Data Groups (WRKDG) command generates new outfiles based on the MXDGSTSF record format from the MXDGSTS model database file supplied by Lakeview Technology. The content of the outfile is based on the criteria specified on the command. If there are no differences found, the file is empty.

Usage notes:

• When the value *UNKNOWN is returned for either the Data group source system status (DTASRCSTS) field or the Data group target system status (DTATGTSTS) field, status information is not available from the system that is remote relative to where the request was made. For example, if you requested the report from the target system and the value returned for DTASRCSTS is *UNKNOWN, the WRKDG request could not communicate with the source system. Fields which rely on data collected from the remote system will be blank.

• If a data group is configured for only database or only object replication, any fields associated with processes not used by the configured type of replication will be blank.

• See “WRKDG outfile SELECT statement examples” on page 696 for examples of how to query the contents of this output file.

• You can automate the process of gathering status. If you use MIMIX Monitor to create a synchronous interval monitor, the monitor can specify the command to generate the outfile. Through exit programs, you can program the monitor to take action based on the status returned in the outfile. For information about creating interval monitors, see the Using MIMIX Monitor book.
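For example, a SELECT statement like the following lists the data groups whose overall status is in error, along with their database apply backlog. This is a sketch only: the library name MIMIXLIB and the file name DGSTATUS are placeholders for the library and file to which you directed the WRKDG outfile; the field names (DGDFN, DGSYS1, DGSYS2, DGSTS, DBAPYBKLG) are those documented in Table 129.

    SELECT DGDFN, DGSYS1, DGSYS2, DGSTS, DBAPYBKLG
      FROM MIMIXLIB/DGSTATUS
      WHERE DGSTS = '*ERROR'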

Table 129. MXDGSTS outfile (WRKDG command)

Field Description Type, length Valid values

ENTRYTSP Entry timestamp TIMESTAMP SAA format: YYYY-MM-DD-hh.mm.ss.mmmuuu

DGDFN Data group definition name (Data group definition) CHAR(10) User-defined data group name

DGSYS1 System 1 (Data group definition) CHAR(8) User-defined system name

Page 677: MIMIX Reference


DGSYS2 System 2 (Data group definition) CHAR(8) User-defined system name

STSTIME Elapsed time for data group status (seconds) PACKED(10 0) Calculated, 0-9999999999

STSTIMF Elapsed time for data group status (HHH:MM:SS) CHAR(10) Calculated, 0-9999999

STSAVAIL Data group status retrieved from these systems CHAR(10) *ALL, *SOURCE, *TARGET, *NONE

DTASRC Data group source system CHAR(8) User-defined system name

DTASRCSTS Data group source system status CHAR(10) *ACTIVE, *INACTIVE, *UNKNOWN

DTATGT Data group target system CHAR(8) User-defined system name

DTATGTSTS Data group target system status CHAR(10) *ACTIVE, *INACTIVE, *UNKNOWN

SWTSTS1 Switch mode status for system 1 CHAR(10) *NONE, *SWITCH

SWTSTS2 Switch mode status for system 2 CHAR(10) *NONE, *SWITCH

DGSTS Data group status summary CHAR(10) BLANK, *ERROR, *WARNING, *DISABLED

DBCFG Data group configured for data base replication CHAR(10) *YES, *NO

OBJCFG Data group configured for object replication CHAR(10) *YES, *NO


Page 678: MIMIX Reference


SRCSYSSTS Source system manager status summation (system manager plus journal manager status) CHAR(10) *ACTIVE, *INACTIVE, *UNKNOWN

DBSNDSTS Database send process status summation (DBSNDPRC) CHAR(10) *ACTIVE, *INACTIVE, *UNKNOWN, *NONE, *THRESHOLD

OBJSNDSTS Object send process status summation (OBJSNDPRC) CHAR(10) *ACTIVE, *INACTIVE, *UNKNOWN, *NONE, *THRESHOLD

DTAPOLLSTS Data area polling process status (DTAPOLLPRC) CHAR(10) *ACTIVE, *INACTIVE, *UNKNOWN, *NONE

TGTSYSSTS Target system manager status summation (system manager plus journal manager status) CHAR(10) *ACTIVE, *INACTIVE, *UNKNOWN

DBAPYSTS Database apply status summation (Apply sessions A-F) CHAR(10) *ACTIVE, *INACTIVE, *PARTIAL, *UNKNOWN, *NONE, *THRESHOLD

OBJAPYSTS Object apply status summation CHAR(10) *ACTIVE, *INACTIVE, *PARTIAL, *UNKNOWN, *NONE, *THRESHOLD

FECNT Total database file entries PACKED(5 0) 0-99999

FEACTIVE Active database file entries (FEACT) PACKED(5 0) 0-99999

FENOTACT Inactive database file entries PACKED(5 0) 0-99999

FENOTJRNS Database file entries not journaled on source PACKED(5 0) 0-99999

FENOTJRNT Database file entries not journaled on target PACKED(5 0) 0-99999

FEHLDERR Database file entries held due to error PACKED(5 0) 0-99999


Page 679: MIMIX Reference


FEHLDOTHR Database file entries held for other reasons (FEHLD) PACKED(5 0) 0-99999

OBJPENDSRC Objects in pending status, source system PACKED(5 0) 0-99999

OBJPENDAPY Objects in pending status, target system PACKED(5 0) 0-99999

OBJDELAY Objects in delayed status PACKED(5 0) 0-99999

OBJERR Objects in error PACKED(5 0) 0-99999

DLOCFGCHG DLO configuration changed CHAR(10) *YES, *NO

IFSCFGCHG IFS configuration changed CHAR(10) *YES, *NO

OBJCFGCHG Object configuration changed CHAR(10) *YES, *NO

PRITFRDFN Primary transfer definition CHAR(10) User-defined transfer definition name

SECTFRDFN Secondary transfer definition CHAR(10) User-defined transfer definition name

TFRDFN Current transfer definition CHAR(10) User-defined transfer definition name


Page 680: MIMIX Reference


TFRSTS Current transfer definition communications status CHAR(10) *ACTIVE, *INACTIVE

SRCMGRSTS Source system manager status CHAR(10) *ACTIVE, *INACTIVE, *UNKNOWN

SRCJRNSTS Source journal manager status CHAR(10) *ACTIVE, *INACTIVE, *UNKNOWN

CNRSNDSTS Container send process status CHAR(10) *ACTIVE, *INACTIVE, *PARTIAL, *UNKNOWN, *NONE, *THRESHOLD

OBJRTVSTS Object retrieve process status CHAR(10) *ACTIVE, *INACTIVE, *PARTIAL, *UNKNOWN, *NONE, *THRESHOLD

TGTMGRSTS Target system manager status CHAR(10) *ACTIVE, *INACTIVE, *UNKNOWN

TGTJRNSTS Target journal manager status CHAR(10) *ACTIVE, *INACTIVE, *UNKNOWN

CURDBRCV Current database journal entry receiver name CHAR(10) User-defined value

CURDBLIB Current database journal entry receiver library name CHAR(10) User-defined value

CURDBCODE Current database journal code and entry type CHAR(3) Valid journal entry types and codes

CURDBSEQ Current database journal entry sequence number PACKED(10 0) 0-9999999999

CURDBTSP Current database journal entry timestamp TIMESTAMP SAA format: YYYY-MM-DD-hh.mm.ss.mmmuuu


Page 681: MIMIX Reference


CURDBTPH Current database journal entry transactions per hour PACKED(15 0) Calculated, 0-9999999999

RDDBRCV Last read database journal entry receiver name (DBSNTRCV) CHAR(10) User-defined value

RDDBLIB Last read database journal entry receiver library name CHAR(10) User-defined value

RDDBCODE Last read database journal code and entry type CHAR(3) Valid journal entry types and codes

RDDBSEQ Last read database journal entry sequence number (DBSNTSEQ) PACKED(10 0) 0-9999999999

RDDBTSP Last read database journal entry timestamp (DBSNTDATE, DBSNTTIME) TIMESTAMP SAA format: YYYY-MM-DD-hh.mm.ss.mmmuuu

RDDBTPH Last read database journal entry transactions per hour PACKED(15 0) Calculated, 0-9999999999

DBSNDBKLG Number of database entries not sent PACKED(15 0) Calculated, 0-9999999999

DBSNBKTIME Estimated time to process database entries not sent (seconds) PACKED(10 0) Calculated, 0-9999999999

DBSNBKTIMF Estimated time to process database entries not sent (HHH:MM:SS) CHAR(10) Calculated, 0-999:99:99

RCVDBRCV Last received database journal entry receiver name CHAR(10) User-defined value


RCVDBLIB Last received database journal entry receiver library name

CHAR(10) User-defined value

RCVDBCODE Last received database journal code and entry type

CHAR(3) See the IBM OS/400 Backup and Recovery Guide for journal and entry types

RCVDBSEQ Last received database journal entry sequence number

PACKED(10 0) 0-9999999999

RCVDBTSP Last received database journal entry timestamp

TIMESTAMP SAA format: YYYY-MM-DD-hh.mm.ss.mmmuuu

RCVDBTPH Last received database journal entry transactions per hour

PACKED(15 0) Calculated, 0-9999999999

DBAPYREQ Number of database apply sessions requested

PACKED(5 0) 1-6

DBAPYMAX Number of database apply sessions configured

PACKED(5 0) 1-6

DBAPYACT Number of database apply session currently active (DBAPYPRC)

PACKED(5 0) 1-6

DBAPYBKLG Number of database entries not applied PACKED(15 0) Calculated, 0-9999999999

DBAPBKTIME Estimated time to process database entries not applied (seconds)

PACKED(10 0) Calculated, 0-9999999999

DBAPBKTIMF Estimated time to process database entries not applied (HHH:MM:SS)

CHAR(10) Calculated, 0-999:99:99


DBAPYTPH Database apply total transactions per hour

PACKED(15 0) Calculated, 0-9999999999

DBASTS Database apply session A status CHAR(10) *ACTIVE, *INACTIVE, *THRESHOLD, *UNKNOWN

DBARCVSEQ Database apply session A last received sequence number

PACKED(10 0) 0-9999999999

DBAPRCSEQ Database apply session A last processed sequence number

PACKED(10 0) 0-9999999999

DBABKLG Database apply session A number of unprocessed entries

PACKED(15 0) Calculated, 0-9999999999

DBABKTIME Database apply session A estimated time to apply unprocessed transactions (seconds)

PACKED(10 0) Calculated, 0-9999999999

DBABKTIMF Database apply session A estimated time to apply unprocessed transactions (HHH:MM:SS)

CHAR(10) Calculated, 0-999:99:99

DBATPH Database apply session A number of transactions per hour

PACKED(15 0) Calculated, 0-9999999999

DBAOPNCMT Database apply session A open commit indicator

CHAR(10) *YES, *NO

DBACMTID Database apply session A oldest open commit ID

CHAR(10) Journal-defined commit ID

DBAAPYCODE Database apply session A last applied journal code and entry type

CHAR(3) See the IBM OS/400 Backup and Recovery Guide for journal codes and entry types


DBAAPYSEQ Database apply session A last applied sequence number

PACKED(10 0) 0-9999999999

DBAAPYTSP Database apply session A last applied journal entry timestamp

TIMESTAMP SAA format: YYYY-MM-DD-hh.mm.ss.mmmuuu

DBAAPYOBJ Database apply session A object to which last transaction was applied

CHAR(10) User-defined object name

DBAAPYLIB Database apply session A library of object to which last transaction was applied

CHAR(10) User-defined object library

DBAAPYMBR Database apply session A member of object to which last transaction was applied.

CHAR(10) User-defined object member name

DBAAPYTIME Database apply session A last applied journal entry clock time difference (seconds)

PACKED(10 0) Calculated, 0-9999999999

DBAAPYTIMF Database apply session A last applied journal entry clock time difference (HHH:MM:SS)

CHAR(10) Calculated, 0-999:99:99

DBAHLDSEQ Database apply session A hold MIMIX log sequence number

PACKED(10 0) 0-9999999999

DBxnnnnnnn Repeated database apply information for five other apply sessions, with values of 'x' from B-F replacing database apply session 'A'. All DBx fields match the session 'A' fields, and all DBx headings match the DBA headings, with 'x' replacing 'A'.

CUROBJRCV Current object journal entry receiver name

CHAR(10) User-defined value

CUROBJLIB Current object journal entry receiver library name

CHAR(10) User-defined value


CUROBJCODE Current object journal code and entry type

CHAR(3) See the IBM OS/400 Backup and Recovery Guide for journal codes and entry types

CUROBJSEQ Current object journal entry sequence number

PACKED(10 0) 0-9999999999

CUROBJTSP Current object journal entry timestamp TIMESTAMP SAA format: YYYY-MM-DD-hh.mm.ss.mmmuuu

CUROBJTPH Current object journal entry transactions per hour

PACKED(15 0) 0-999999999999999

RDOBJRCV Last read object journal entry receiver name (OBJSNTRCV)

CHAR(10) User-defined value

RDOBJLIB Last read object journal entry receiver library name

CHAR(10) User-defined value

RDOBJCODE Last read object journal code and entry type

CHAR(3) See the IBM OS/400 Backup and Recovery Guide for journal entry codes and types

RDOBJSEQ Last read object journal entry sequence number (OBJSNTSEQ)

PACKED(10 0) 0-9999999999

RDOBJTSP Last read object journal entry timestamp (OBJSNTDATE, OBJSNTTIME)

TIMESTAMP SAA format: YYYY-MM-DD-hh.mm.ss.mmmuuu

RDOBJTPH Last read object journal entry transactions per hour

PACKED(15 0) Calculated, 0-9999999999

OBJSNDBKLG Object entries not processed PACKED(15 0) Calculated, 0-9999999999


OBJSNDNUM Number of object entries sent PACKED(15 0) Calculated, 0-9999999999

OBJSBKTIME Estimated time to process object entries not sent (seconds)

PACKED(10 0) Calculated, 0-9999999999

OBJSBKTIMF Estimated time to process entries not sent (HHH:MM:SS)

CHAR(10) Calculated, 0-999:99:99

RCVOBJRCV Last received object journal entry receiver name

CHAR(10) User-defined value

RCVOBJLIB Last received object journal entry receiver library name

CHAR(10) User-defined value

RCVOBJCODE Last received object journal code and entry type

CHAR(3) See the IBM OS/400 Backup and Recovery Guide for journal codes and entry types

RCVOBJSEQ Last received object journal entry sequence number

PACKED(10 0) 0-9999999999

RCVOBJTSP Last received object journal entry timestamp

TIMESTAMP SAA format: YYYY-MM-DD-hh.mm.ss.mmmuuu

RCVOBJTPH Last received object journal entry transactions per hour

PACKED(15 0) 0-999999999999999

OBJRTVMIN Minimum number of object retriever processes

PACKED(3 0) 1-99

OBJRTVACT Active number of object retriever processes (OBJRTVPRC)

PACKED(3 0) 1-99


OBJRTVMAX Maximum number of object retriever processes

PACKED(3 0) 1-99

OBJRTVBKLG Number of object retriever entries not processed

PACKED(15 0) 0-999999999999999

OBJRTVCODE Last processed object retrieve journal code and entry type

CHAR(3) See the IBM OS/400 Backup and Recovery Guide for journal codes and entry types

OBJRTVSEQ Last processed object retrieve journal sequence number

PACKED(10 0) 0-9999999999

OBJRTVTSP Last processed object retrieve journal entry timestamp (OBJRTVDATE, OBJRTVTIME)

TIMESTAMP SAA format: YYYY-MM-DD-hh.mm.ss.mmmuuu

OBJRTVTYPE Type of object last processed by object retrieve

CHAR(10) Object type of user-defined object

OBJRTVOBJ Qualified name of object last processed by object retrieve

CHAR(1024) User-defined object name and path. Note: Variable length of 75.

CNRSNDMIN Minimum number of container send processes

PACKED(3 0) 1-99

CNRSNDACT Active number of container send processes (CNRSNDPRC)

PACKED(3 0) 1-99

CNRSNDMAX Maximum number of container send processes

PACKED(3 0) 1-99

CNRSNDBKLG Number of container send entries not processed

PACKED(15 0) 0-999999999999999


CNRSNDNUM Number of containers sent PACKED(15 0) 0-999999999999999

CNRSNDCPH Containers per hour PACKED(15 0) 0-999999999999999

CNRSNDCODE Last processed container send journal code and entry type

CHAR(3) See the IBM OS/400 Backup and Recovery Guide for journal codes and entry types

CNRSNDSEQ Last processed container send journal sequence number (CNRSNTSEQ)

PACKED(10 0) 0-9999999999

CNRSNDTSP Last processed container send journal entry timestamp (CNRSNTDATE, CNTRSNTTIME)

TIMESTAMP SAA format: YYYY-MM-DD-hh.mm.ss.mmmuuu

CNRSNDTYPE Type of object last processed by container send

CHAR(10) Object type of user-defined object

CNRSNDOBJ Qualified name of object last processed by container send

CHAR(1024) User-defined object name and path. Note: Variable length of 75.

OBJAPYMIN Minimum number of object apply processes

PACKED(3 0) 1-99

OBJAPYACT Active number of object apply processes (OBJAPYPRC)

PACKED(3 0) 1-99

OBJAPYMAX Maximum number of object apply processes

PACKED(3 0) 1-99

OBJAPYBKLG Number of object apply entries not processed

PACKED(15 0) Calculated, 0-9999999999


OBJAPYACTA Number of active objects PACKED(15 0) Calculated, 0-9999999999

OBJAPYNUM Number of object entries applied PACKED(15 0) Calculated, 0-9999999999

OBJABKTIME Estimated time to process object entries not applied (seconds)

PACKED(10 0) Calculated, 0-9999999999

OBJABKTIMF Estimated time to process object entries not applied (HHH:MM:SS)

CHAR(10) Calculated, 0-999:99:99

OBJAPYTPH Number of object entries applied per hour

PACKED(15 0) Calculated, 0-9999999999

OBJAPYCODE Last applied object journal code and entry type

CHAR(3) See the IBM OS/400 Backup and Recovery Guide for journal codes and entry types

OBJAPYSEQ Last applied object journal sequence number (OBJAPYSEQ)

PACKED(10 0) 0-9999999999

OBJAPYTSP Last applied object journal entry timestamp (OBJAPYDATE, OBJAPYTIME)

TIMESTAMP SAA format: YYYY-MM-DD-hh.mm.ss.mmmuuu

OBJAPYTYPE Type of object last processed by object apply

CHAR(10) Object type of user-defined object

OBJAPYOBJ Qualified name of object last processed by object apply

CHAR(1024) User-defined object name and path. Note: Variable length of 75.

RJINUSE Remote journal (RJ) link used by data group

CHAR(10) *YES, *NO


RJSRCDFN RJ link source journal definition CHAR(10) User-defined journal definition name

RJSRCSYS RJ link source system CHAR(8) User-defined system name

RJTGTDFN RJ link target journal definition CHAR(10) User-defined journal definition name

RJTGTSYS RJ link target system CHAR(8) User-defined system name

RJPRIRDB RJ link primary RDB entry CHAR(18) User-defined or MIMIX generated RDB name

RJPRITFR RJ link primary transfer definition name CHAR(10) User-defined transfer definition name

RJSECRDB RJ link secondary RDB entry CHAR(18) User-defined or MIMIX generated RDB name

RJSECTFR RJ link secondary transfer definition name

CHAR(10) User-defined transfer definition name

RJSTATE RJ link state CHAR(10) BLANK, *FAILED, *CTLINACT, *INACTPEND, *ASYNC, *SYNC, *ASYNPEND, *SYNCPEND, *NOTBUILT, *UNKNOWN

RJDLVRY RJ link delivery mode CHAR(10) *ASYNC, *SYNC, BLANK

RJSNDPTY RJ link send task priority PACKED(3 0) 0-99 0=*SYSDFT


RJRDRSTS RJ reader task status CHAR(10) BLANK, *UNKNOWN, *ACTIVE, *INACTIVE, *THRESHOLD

RJSMONSTS RJ link source monitor status CHAR(10) BLANK, *UNKNOWN, *ACTIVE, *INACTIVE

RJTMONSTS RJ link target monitor status CHAR(10) BLANK, *UNKNOWN, *ACTIVE, *INACTIVE

ITECNT Total IFS tracking entries PACKED(10 0) 0-999999

ITEACTIVE Active IFS tracking entries PACKED(10 0) 0-999999

ITENOTACT Inactive IFS tracking entries PACKED(10 0) 0-999999

ITENOTJRNS IFS tracking entries not journaled on source

PACKED(10 0) 0-999999

ITENOTJRNT IFS tracking entries not journaled on target

PACKED(10 0) 0-999999

ITEHLDERR IFS tracking entries held due to error PACKED(10 0) 0-999999

ITEHLDOTHR IFS tracking entries held for other reasons

PACKED(10 0) 0-999999

OTECNT Total object tracking entries PACKED(10 0) 0-999999

OTEACTIVE Active object tracking entries PACKED(10 0) 0-999999


OTENOTACT Inactive object tracking entries PACKED(10 0) 0-999999

OTENOTJRNS Object tracking entries not journaled on source

PACKED(10 0) 0-999999

OTENOTJRNT Object tracking entries not journaled on target

PACKED(10 0) 0-999999

OTEHLDERR Object tracking entries held due to error PACKED(10 0) 0-999999

OTEHLDOTHR Object tracking entries held for other reasons

PACKED(10 0) 0-999999

JRNCACHETA Journal cache target CHAR(10) *YES, *NO, *UNKNOWN

JRNCACHESA Journal cache source CHAR(10) *YES, *NO, *UNKNOWN

JRNSTATETA Journal state target CHAR(10) *ACTIVE, *STANDBY, *INACTIVE

JRNSTATESA Journal state source CHAR(10) *ACTIVE, *STANDBY, *INACTIVE

JRNCACHETS Journal cache status - target CHAR(10) *ERROR, *NONE, *OK, *WARNING, *NOFEATURE, *UNKNOWN

JRNCACHESS Journal cache status - source CHAR(10) *ERROR, *NONE, *OK, *WARNING, *NOFEATURE, *UNKNOWN


JRNSTATETS Journal state target status CHAR(10) *ERROR, *NONE, *OK, *WARNING, *NOFEATURE, *UNKNOWN

JRNSTATESS Journal state source status CHAR(10) *ERROR, *NONE, *OK, *WARNING, *NOFEATURE, *UNKNOWN

RJTGTRCV Last RJ target journal entry receiver name

CHAR(10) User-defined value

RJTGTLIB Last RJ target journal entry receiver library name

CHAR(10) User-defined value

RJTGTCODE Last RJ target journal code and entry type

CHAR(3) Valid journal entry types and codes

RJTGTSEQ Last RJ target journal entry sequence number

PACKED(10 0) 0-9999999999

RJTGTTSP Last RJ target journal entry timestamp TIMESTAMP SAA format: YYYY-MM-DD-hh.mm.ss.mmmuuu

OBJRTVUCS Qualified name of object last processed by object retrieve - Unicode

GRAPHIC(512) VARLEN(75) CCSID(13488)

User-defined object name

CNRSNDUCS Qualified name of object last processed by container send - Unicode

GRAPHIC(512) VARLEN(75) CCSID(13488)

User-defined object name

OBJAPYUCS Qualified name of object last processed by object apply - Unicode

GRAPHIC(512) VARLEN(75) CCSID(13488)

User-defined object name

FECNT2 Total database file entries PACKED(10 0) 0-9999999999


FEACTIVE2 Active database file entries (FEACT) PACKED(10 0) 0-9999999999

FENOTACT2 Inactive database file entries PACKED(10 0) 0-9999999999

FENOTJRNS2 Database file entries not journaled on source

PACKED(10 0) 0-9999999999

FENOTJRNT2 Database file entries not journaled on target

PACKED(10 0) 0-9999999999

FEHLDERR2 Database file entries held due to error PACKED(10 0) 0-9999999999

FEHLDOTHR2 Database file entries held for other reasons (FEHLD)

PACKED(10 0) 0-9999999999

FECMPRPR2 Database file entries being repaired PACKED(10 0) 0-9999999999

RJLNKTHLDM RJ Link Threshold Exceeded (Time in minutes)

PACKED(4 0) 0-9999

RJLNKTHLDE RJ Link Threshold Exceeded (Number of journal entries)

PACKED(7 0) 0-9999999

DBRDRTHLDM DB Send/Reader Threshold Exceeded (Time in minutes)

PACKED(4 0) 0-9999

DBRDRTHLDE DB Send/Reader Threshold Exceeded (Number of journal entries)

PACKED(7 0) 0-9999999


Updated for 5.0.13.00.

DBAPYATHLD DB Apply A Threshold Exceeded (Number of journal entries)

PACKED(5 0) 0-99999

DBAPYBTHLD DB Apply B Threshold Exceeded (Number of journal entries)

PACKED(5 0) 0-99999

DBAPYCTHLD DB Apply C Threshold Exceeded (Number of journal entries)

PACKED(5 0) 0-99999

DBAPYDTHLD DB Apply D Threshold Exceeded (Number of journal entries)

PACKED(5 0) 0-99999

DBAPYETHLD DB Apply E Threshold Exceeded (Number of journal entries)

PACKED(5 0) 0-99999

DBAPYFTHLD DB Apply F Threshold Exceeded (Number of journal entries)

PACKED(5 0) 0-99999

OBJSNDTHDM Object Send Threshold Exceeded (Time in minutes)

PACKED(4 0) 0-9999

OBJSNDTHDE Object Send Threshold Exceeded (Number of journal entries)

PACKED(7 0) 0-9999999

OBJRTVTHDE Object Retrieve Threshold Exceeded (Number of activity entries)

PACKED(5 0) 0-99999

CNRSNDTHDE Container Send Threshold Exceeded (Number of activity entries)

PACKED(5 0) 0-99999

OBJAPYTHDE Object Apply Threshold Exceeded (Number of activity entries)

PACKED(5 0) 0-99999

RJBKLG RJ Backlog PACKED(15 0) Calculated 0-99999999999


WRKDG outfile SELECT statement examples

Following are some example SELECT statements that query a WRKDG outfile and produce various outfile reports. The first three examples show how to use wild cards to produce reports about specific data groups in the outfile.

The last example adds a few field definitions, in request time sequence, to produce outfile reports with additional data group related information.

These are basic examples; there may be additional formatting options that you may want to apply to your output.

WRKDG outfile example 1

This SELECT statement uses a single wildcard character to query the outfile to retrieve and display all of the data group names that start with an 'A' and have 0 or more characters following the 'A'. The records are listed in record arrival order. The statement would be entered as follows:

SELECT DGDFN, DGSYS1, DGSYS2 FROM library/filename WHERE DGDFN like 'A%'

The outfile report produced follows:
DGN       SYS      SYS
ACCTPAY   CHICAGO  LONDON
ACCTREC   CHICAGO  LONDON
APP1      CHICAGO  LONDON
APP2      CHICAGO  LONDON

WRKDG outfile example 2

This SELECT statement uses wildcard characters to query the outfile for all data group names that are in the outfile. The records are listed in record arrival order. The statement would be entered as follows:

SELECT DGDFN, DGSYS1, DGSYS2 FROM library/filename WHERE DGDFN like '%%'

The outfile report produced follows:
DGN        SYS      SYS
INVENTORY  CHICAGO  LONDON
PAYROLL    CHICAGO  LONDON
ACCTPAY    CHICAGO  LONDON
ORDERS     CHICAGO  LONDON
ACCTREC    CHICAGO  LONDON
APP1       CHICAGO  LONDON
APP2       CHICAGO  LONDON


SUPERAPP CHICAGO LONDON

WRKDG outfile example 3

This SELECT statement uses wildcard characters to find all data groups with names that contain an 'A'. The records are listed in record arrival order. The statement would be entered as follows:

SELECT DGDFN, DGSYS1, DGSYS2 FROM library/filename WHERE DGDFN like '%A%'

The outfile report produced follows:
DGN       SYS      SYS
PAYROLL   CHICAGO  LONDON
ACCTPAY   CHICAGO  LONDON
ACCTREC   CHICAGO  LONDON
APP1      CHICAGO  LONDON
APP2      CHICAGO  LONDON
SUPERAPP  CHICAGO  LONDON

WRKDG outfile example 4

This SELECT statement selects all records that have a data group name containing an 'A'. These records are listed in data group name order, with all duplicate data group names listed by the time the entry was placed in the outfile. All records for a data group are listed together in ascending time sequence. Additionally, the time stamp that the entry was placed in the file and the current top sequence number of the object journal are also listed with the entry. The statement would be entered as follows:

SELECT DGDFN, DGSYS1, DGSYS2, ENTRYTSP, CUROBJSEQ FROM library/filename WHERE DGDFN like '%A%' ORDER BY DGDFN, DGSYS1, DGSYS2, ENTRYTSP

The outfile report produced follows:
DGN       SYS      SYS     ENTRYTSP                    SEQN
PAYROLL   CHICAGO  LONDON  2001-02-06-11.09.59.842000  29,034,877
ACCTPAY   CHICAGO  LONDON  2001-02-06-11.24.05.851000  29,035,093
ACCTREC   CHICAGO  LONDON  2001-02-06-11.09.59.842000  29,034,879
APP1      CHICAGO  LONDON  2001-02-06-11.24.05.851000  29,035,095
APP2      CHICAGO  LONDON  2001-02-06-14.24.49.793000  29,051,130
SUPERAPP  CHICAGO  LONDON  2001-02-06-11.09.59.842000  0
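The same pattern can be applied to other fields in Table 129. The following statement is an illustrative sketch, not from the original example set: it lists the database and object apply backlog fields (DBAPYBKLG and OBJAPYBKLG, documented earlier in Table 129) for every data group, using the same placeholder library/filename as the previous examples:

SELECT DGDFN, DGSYS1, DGSYS2, DBAPYBKLG, OBJAPYBKLG FROM library/filename ORDER BY DGDFN

A nonzero value in either backlog column identifies a data group with unapplied entries.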



MXDGOBJE outfile (WRKDGOBJE command)

Table 130. MXDGOBJE outfile (WRKDGOBJE command)

Field Description Type, length Valid values

DGDFN Data group name (Data group definition)

CHAR(10) User-defined data group name

DGSYS1 System 1 name (Data group definition)

CHAR(8) User-defined system name

DGSYS2 System 2 name (Data group definition)

CHAR(8) User-defined system name

OBJ1 System 1 object CHAR(10) User-defined name, *ALL

LIB1 System 1 library CHAR(10) User-defined name, generic*

TYPE Object type CHAR(10) Refer to the OM5100P file for the list of valid values

OBJATR Object attribute CHAR(10) Refer to the OM5200P file for the list of valid object attributes

OBJ2 System 2 object CHAR(10) User-defined name, *ALL, generic*, *OBJ1

LIB2 System 2 library CHAR(10) User-defined name, generic*, *LIB1

OBJAUD Object auditing value (configured value)

CHAR(10) *CHANGE, *ALL, *NONE

PRCTYPE Process type CHAR(10) *INCLD, *EXCLD

COOPDB Cooperate with database CHAR(10) *YES, *NO

REPSPLF Replicate spooled files CHAR(10) *YES, *NO


KEEPSPLF Keep deleted spooled files CHAR(10) *YES, *NO

OBJRTVDLY Retrieve delay (Object retrieve processing)

CHAR(10) 0-999, *DGDFT

USRPRFSTS User profile status CHAR(10) *DGDFT, *DISABLED, *ENABLED, *SRC, *TGT

JRNIMG Journal image (File entry options) CHAR(10) *DGDFT, *AFTER, *BOTH

OPNCLO Omit open and close entries (File entry options)

CHAR(10) *DGDFT, *YES, *NO

REPTYPE Replication type (File entry options)

CHAR(10) *DGDFT, *POSITION, *KEYED

APYLOCK Lock member during apply (File entry options)

CHAR(10) *DGDFT, *YES, *NO

APYSSN Apply session (File entry options) CHAR(10) A-F, *DGDFT, *ANY

CRCLS Collision resolution (File entry options)

CHAR(10) User-defined name, *DGDFT, *HLDERR, *AUTOSYNC

DSBTRG Disable triggers during apply (File entry options)

CHAR(10) *YES, *NO, *DGDFT

PRCTRG Process trigger entries (File entry options)

CHAR(10) *YES, *NO, *DGDFT

PRCCST Process constraint entries (File entry options)

CHAR(10) *YES

LIB1ASP System 1 library ASP number PACKED(3,0) 0 = *SRCLIB, 1-32, -1 = *ASPDEV


Updated for 5.0.08.00.

LIB1ASPD System 1 library ASP device(File entry options)

CHAR(10) *LIB1ASP, User-defined name

LIB2ASP System 2 library ASP number PACKED(3,0) 0 = *SRCLIB, 1-32, -1 = *ASPDEV

LIB2ASPD System 2 library ASP device(File entry options)

CHAR(10) *LIB2ASP, User-defined name

NBROMTDTA Number of omit content (OMTDTA) values

PACKED(3 0) 1-10

OMTDTA Omit content values (File entry options)

CHAR(100) *NONE, *FILE, *MBR (10 characters each)

SPLFOPT Spooled file options CHAR(10) *NONE, *HLD, *HLDONSAV

NUMCOOPTYP Number of cooperating object types

PACKED(3 0) 0-999

COOPTYPE Cooperating object types CHAR(100) *FILE, *DTAARA, *DTAQ

NBRATROPT Number of attribute options PACKED (3 0) -1, 1-50

ATROPT Attribute options CHAR(500) *ALL


MXDGTSP outfile (WRKDGTSP command)

Table 131. MXDGTSP outfile (WRKDGTSP command)

Field Description Type, length Valid values

DGDFN Data group name (Data group definition)

CHAR(10) User-defined data group name

DGSYS1 System 1 name (Data group definition)

CHAR(8) User-defined system name

DGSYS2 System 2 name (Data group definition)

CHAR(8) User-defined system name

DTASRC Data source CHAR(10) *SYS1, *SYS2

APYSSN Apply session CHAR(10) A-F

CRTTSP Create Timestamp (YYYY-MM-DD.HH.MM.SS.mmmmmm)

TIMESTAMP SAA timestamp - normalized to the target system (Timestamp when the journal entry is created.)

SNDTSP Send Timestamp (YYYY-MM-DD.HH.MM.SS.mmmmmm)

TIMESTAMP SAA timestamp - normalized to the target system (Timestamp value is set equal to the create timestamp (CRTTSP) when using remote journaling. For non-remote journaling, this is the time the journal entry is read on the source system and is sent by the MIMIX send process.)

RCVTSP Receive Timestamp (YYYY-MM-DD.HH.MM.SS.mmmmmm)

TIMESTAMP SAA timestamp - normalized to the target system (Timestamp when the journal entry is received by the journal reader on the target system when using remote journaling, or received by the target system by the MIMIX send process for non-remote journaling.)

APYTSP Apply Timestamp (YYYY-MM-DD.HH.MM.SS.mmmmmm)

TIMESTAMP SAA timestamp - normalized to the target system (Timestamp when the journal entry is applied on the target system.)


CRTSNDET Elapsed time between create and send process (milliseconds)

PACKED(10 0) Calculated, 0-9999999999 (Elapsed time between generation of the timestamp and the time the MIMIX send process is received on the target system for non-remote journaling. For remote journaling, the create and send times are set equal so elapsed time will be a value of 0.)

SNDRCVET Elapsed time between send and receive process (milliseconds)

PACKED(10 0) Calculated, 0-9999999999 (Elapsed time between the send time and the receive time.)

RCVAPYET Elapsed time between receive and apply process (milliseconds)

PACKED(10 0) Calculated, 0-9999999999Elapsed time between the recetime.)

CRTAPYET Elapsed time between create and apply timestamps (milliseconds)

PACKED(10 0) Calculated, 0-9999999999(Elapsed time between generatto the time when the journal entarget system.)

SYSTDIFF The time differential between the source and target systems, where time differential = source time - target time

PACKED(10 0) -9999999999-0, 0-9999999999
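The calculated elapsed-time fields are simple differences between the SAA timestamps, expressed in milliseconds. As a rough illustration only (this is not MIMIX code, and the helper names are invented), elapsed milliseconds between two timestamps in the YYYY-MM-DD.HH.MM.SS.mmmmmm format could be computed like this:

```python
from datetime import datetime

# Hypothetical helper, not part of MIMIX: parses the SAA timestamp
# format shown in the table (YYYY-MM-DD.HH.MM.SS.mmmmmm).
def parse_saa(ts: str) -> datetime:
    return datetime.strptime(ts, "%Y-%m-%d.%H.%M.%S.%f")

def elapsed_ms(start: str, end: str) -> int:
    """Milliseconds between two SAA timestamps (e.g. CRTTSP -> APYTSP)."""
    delta = parse_saa(end) - parse_saa(start)
    return int(delta.total_seconds() * 1000)

crt = "2008-07-22.10.15.00.000000"
apy = "2008-07-22.10.15.01.250000"
print(elapsed_ms(crt, apy))  # 1250
```

Note that SYSTDIFF (source time minus target time) would additionally need to be accounted for when the two timestamps come from clocks on different systems.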


Page 708: MIMIX Reference


Page 709: MIMIX Reference


MXJRNDFN outfile (WRKJRNDFN command)

Table 132. MXJRNDFN outfile (WRKJRNDFN command)

Field Description Type, length Valid values

JRNDFN Journal definition name (Journal definition)

CHAR(10) User-defined journal definition name

JRNSYS System name (Journal definition) CHAR(8) User-defined system name

JRN Journal name (Journal) CHAR(10) Journal name, *JRNDFN

JRNLIB Journal library (Journal) CHAR(10) Journal library

JRNLIBASP Journal library ASP PACKED(3 0) Numeric value: 0 = *CRTDFT, 1-32, -1 = *ASPDEV

JRNRCVPFX Journal receiver prefix (Journal receiver prefix)

CHAR(10) *GEN, user-defined name

JRNRCVLIB Journal receiver library (Journal receiver prefix)

CHAR(10) User-defined name, *JRNLIB

RCVLIBASP Journal receiver library ASP PACKED(3 0) Numeric value: 0 = *CRTDFT, 1-32, -1 = *ASPDEV

CHGMGT Receiver change management CHAR(20) 2 x CHAR(10) - *NONE, *TIME, *SIZE, *SYSTEM. The only valid combinations are: *TIME *SIZE, *TIME *SYSTEM

THRESHOLD Receiver threshold size (MB) PACKED(7 0) 10-1000000
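The CHGMGT combination rule above (only *TIME *SIZE and *TIME *SYSTEM are valid two-value combinations) can be expressed as a small check. This is a hypothetical validator written for illustration, not a MIMIX API:

```python
# Valid single values and the only valid pairs, per the CHGMGT row above.
VALID_SINGLE = {"*NONE", "*TIME", "*SIZE", "*SYSTEM"}
VALID_PAIRS = {("*TIME", "*SIZE"), ("*TIME", "*SYSTEM")}

def chgmgt_valid(raw: str) -> bool:
    # The field is 2 x CHAR(10); splitting on whitespace is a
    # simplification for this sketch.
    parts = tuple(raw.split())
    if len(parts) == 1:
        return parts[0] in VALID_SINGLE
    return parts in VALID_PAIRS

print(chgmgt_valid("*TIME *SIZE"))    # True
print(chgmgt_valid("*SIZE *SYSTEM"))  # False
```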

Page 710: MIMIX Reference


RCVTIME Time of day to change receiver ZONED(6 0) Time

RESETTHLD Reset sequence threshold PACKED(5 0) 10-1000000

DLTMGT Receiver delete management CHAR(10) *YES, *NO

KEEPUNSAV Keep unsaved journal receivers CHAR(10) *YES, *NO

KEEPRCVCNT Keep journal receiver count PACKED(3 0) 0-999

KEEPJRNRCV Keep journal receiver (days) PACKED(3 0) 0-999

TEXT Description CHAR(50) *BLANK, user-defined text

JRNRCVASP Journal receiver ASP PACKED(3 0) Numeric value (0 = *LIBASP)

MSGQ Threshold message queue CHAR(10) User-defined name, *JRNDFN

MSGQLIB Threshold message queue library CHAR(10) *JRNLIB, user-defined name (Refer to the JRNLIB field if this field contains *JRNLIB)

RJLNK Remote journal link CHAR(10) *NONE, *SOURCE, *TARGET

EXITPGM Exit program CHAR(10) *NONE, user-defined name

EXITPGMLIB Exit program library CHAR(10) User-defined name


Page 711: MIMIX Reference


Updated for 5.0.02.00.

MINENTDTA Minimal journal entry data CHAR(100) Array of 10 CHAR(10) fields: *DTAARA, *FLDBDY, *FILE, *NONE

REQTHLDSIZ Requested threshold size PACKED(7 0) Numeric value

SAVTYPE Save type CHAR(10)

JRNLAGLMT Journaling lag limit (seconds) PACKED(3 0)

JRNLIBASPD Journal library ASP device CHAR(10) *JRNLIBASP, user-defined name

RCVLIBASPD Journal receiver library ASP device CHAR(10) *RCVLIBASP, user-defined name

TGTSTATE Target journal state CHAR(10) *ACTIVE, *STANDBY

JRNCACHE Journal cache option CHAR(10) *SRC, *TGT, *BOTH, *NONE


Page 712: MIMIX Reference


Page 713: MIMIX Reference


MXRJLNK outfile (WRKRJLNK command)

Table 133. MXRJLNK outfile (WRKRJLNK command)

Field Description Type, length Valid values

SRCJRNDFN Journal definition name on source

CHAR(10) Journal definition name

SRCSYS Source system name of journal definition

CHAR(8) System name

SRCJEJRNA Source Journal Library ASP DEC(3) 0 = *CRTDFT, -1 = *ASPDEV

SRCJEJLAD Source Journal Library ASP Device

CHAR(10) *JRNLIBASP, *ASPDEV, ASP Primary Group name

SRCJERCVA Source Journal Receiver Library ASP

DEC(3) 0 = *CRTDFT, -1 = *ASPDEV

SRCJERLAD Source Journal Receiver Library ASP Device

CHAR(10) *RCVLIBASP, *ASPDEV, ASP Primary Group name

TGTJRNDFN Journal definition name on target

CHAR(10) Journal definition name

TGTSYS Target system name of journal definition

CHAR(8) System name

TGTJEJRNA Target Journal Library ASP DEC(3) 0 = *CRTDFT, -1 = *ASPDEV

Page 714: MIMIX Reference


TGTJEJLAD Target Journal Library ASP Device

CHAR(10) *JRNLIBASP, *ASPDEV, ASP Primary Group name

TGTJERCVA Target Journal Receiver Library ASP

DEC(3) 0 = *CRTDFT, -1 = *ASPDEV

TGTJERLAD Target Journal Receiver Library ASP Device

CHAR(10) *RCVLIBASP, *ASPDEV, ASP Primary Group name

RJMODE Delivery mode of remote journaling

CHAR(10) *ASYNC, *SYNC, blank

RJSTATE Remote journal state CHAR(10) *ASYNC, *ASYNCPEND, *SYNC, *SYNCPEND, *INACTIVE, *CTLINACT, *FAILED, *NOTBUILT, *UNKNOWN

PRITFRDFN Primary transfer definition CHAR(10) Transfer definition name, *SYSDFN

SECTFRDFN Secondary transfer definition CHAR(10) Transfer definition name, *SYSDFN, *NONE

PRIORITY Async process priority PACKED(3 0) 0 = *SYSDFN, 1-99

TEXT Text description CHAR(50) Plain text


Page 715: MIMIX Reference


Page 716: MIMIX Reference


MXSYSDFN outfile (WRKSYSDFN command)

Table 134. MXSYSDFN outfile (WRKSYSDFN command)

Field Description Type, length Valid values

SYSDFN System definition CHAR(8) User-defined name

TYPE System type CHAR(10) *MGT, *NET

PRITFRDFN Configured primary transfer definition CHAR(10) User-defined name

SECTFRDFN Configured secondary transfer definition

CHAR(10) User-defined name

CLUMBR Cluster member CHAR(10) *YES, *NO

CLUTFRDFN Cluster transfer definition CHAR(20) User-defined name, *PRITFRDFN (Refer to the PRITFRNAME, PRITFRSYS1, and PRITFRSYS2 fields if this field contains *PRITFRDFN)

PRIMSGQ Primary message queue (Primary message handling)

CHAR(10) User-defined name

PRIMSGQLIB Primary message queue library (Primary message handling)

CHAR(10) User-defined name, *LIBL

PRISEV Primary message queue severity (Primary message handling)

CHAR(10) *SEVERE, *INFO, *WARNING, *ERROR, *TERM, *ALERT, *ACTION, 0-99

PRISEVNBR Primary message queue severity number (Primary message handling)

PACKED(3 0) 0-99

PRIINFLVL Primary message queue information level (Primary message handling)

CHAR(10) *SUMMARY, *ALL

SECMSGQ Secondary message queue (Secondary message handling)

CHAR(10) User-defined name

Page 717: MIMIX Reference


SECMSGQLIB Secondary message queue library (Secondary message handling)

CHAR(10) User-defined name, *LIBL

SECSEV Secondary message queue severity (Secondary message handling)

CHAR(10) *SEVERE, *INFO, *WARNING, *ERROR, *TERM, *ALERT, *ACTION, 0-99

SECSEVNBR Secondary message queue severity number (Secondary message handling)

PACKED(3 0) 0-99

SECINFLVL Secondary message queue information level (Secondary message handling)

CHAR(10) *SUMMARY, *ALL

TEXT Description CHAR(50) *BLANK, user-defined text

JRNMGRDLY Journal manager delay (seconds) PACKED(3 0) 5-900

SYSMGRDLY System manager delay (seconds) PACKED(3 0) 5-900

OUTQ Output queue (Output queue) CHAR(10) User-defined name

OUTQLIB Output queue library (Output queue) CHAR(10) User-defined name

HOLD Hold on output queue CHAR(10) *YES, *NO

SAVE Save on output queue CHAR(10) *YES, *NO

KEEPSYSHST Keep system history (days) PACKED(3 0) 1-365

KEEPDGHST Keep data group history (days) PACKED(3 0) 1-365

KEEPMMXDTA Keep MIMIX data (days) PACKED(3 0) 1-365, 0 = *NOMAX

DTALIBASP MIMIX data library ASP PACKED(3 0) Numeric value, 0 = *CRTDFT


Page 718: MIMIX Reference

MXSYSDF

DISK STORAGE LIMIT (GB)USRPRF FOR SUBMIT JOBMANAGER JOBD

MANAGER JOBD LIBRARY DEFAULT JOBD

DEFAULT JOBD LIBRARYMIMIX PRODUCT LIBRARY

d)RESTART TIME

KEEP NEW NFY (DAYS)KEEP ACKNFY (DAYS)ASP GROUP

Column head-ings

N outfile (WRKSYSDFN command)

718

DSKSTGLMT Disk storage limit (GB) PACKED(5 0) 1-9999, 0 = *NOMAX

SBMUSR User profile for submit job CHAR(10) *JOBD, *CURRENT

MGRJOBD Manager job description (Manager job description)

CHAR(10) User-defined name

MGRJOBDLIB Manager job description library (Manager job description)

CHAR(10) User-defined name

DFTJOBD Default job description (Default job description)

CHAR(10) User-defined name

DFTJOBDLIB Default job description library (Default job description)

CHAR(10) User-defined name

PRDLIB MIMIX product library CHAR(10) User-defined name

RSTARTTIME Job restart time CHAR(8) 000000-235959, *NONE (Values are returned left-justified.)

KEEPNEWNFY Keep new notification (days) PACKED(3 0) 1-365, 0 = *NOMAX

KEEPACKNFY Keep acknowledged notification (days)

PACKED(3 0) 1-365, 0 = *NOMAX

ASPGRP ASP Group CHAR(10) *NONE, User-defined name


Page 719: MIMIX Reference


Page 720: MIMIX Reference


MXTFRDFN outfile (WRKTFRDFN command)

The Work with Transfer Definitions (WRKTFRDFN) command generates new outfiles based on the MXTFRDFN record format.

Table 135. MXTFRDFN outfile (WRKTFRDFN command)

Field Description Type, length Valid values

TFRDFN Transfer definition name (Transfer definition)

CHAR(10) User-defined transfer definition name

TFRSYS1 System 1 name (Transfer definition)

CHAR(8) User-defined system name

TFRSYS2 System 2 name (Transfer definition)

CHAR(8) User-defined system name

PROTOCOL Transfer protocol CHAR(10) *TCP, *SNA, *OPTI

HOST1 System 1 host name or address CHAR(256) *SYS1, user-defined name (Refer to the TFRSYS1 field if this field contains *SYS1)

HOST2 System 2 host name or address CHAR(256) *SYS2, user-defined name (Refer to the TFRSYS2 field if this field contains *SYS2)

PORT1 System 1 port number or alias CHAR(14) User-defined port number

PORT2 System 2 port number or alias CHAR(14) User-defined port number

LOCNAME1 System 1 location name CHAR(8) *SYS1, user-defined name

LOCNAME2 System 2 location name CHAR(8) *SYS2, user-defined name

NETID1 System 1 network identifier CHAR(8) *LOC, user-defined name, *NETATR, *NONE

NETID2 System 2 network identifier CHAR(8) *LOC, user-defined name, *NETATR, *NONE

Page 721: MIMIX Reference


MODE SNA mode CHAR(8) User-defined name, *NETATR

TEXT Description CHAR(50) *BLANK, user-defined text

THLDSIZE Threshold size PACKED(7 0) 0-9999999

RDB Relational database CHAR(18) *GEN, user-defined name

RDBSYS1 System 1 Relational database name

CHAR(18) *SYS1, User-defined name

RDBSYS2 System 2 Relational database name

CHAR(18) *SYS2, User-defined name

MNGRDB Manage RDB Directory Entries Indicator

CHAR(10) *DFT, *YES, *NO

TFRSHORTN Transfer definition short name CHAR(4) Name


Page 722: MIMIX Reference


MZPRCDFN outfile (WRKPRCDFN command)

Table 136. MZPRCDFN outfile (WRKPRCDFN command)

Field Description Type, length Valid values Column headings

PRCDFN Process definition name (Process definition) CHAR(10) *ANY, user-defined name PRCDFN NAME

PRCSYS System name (Process definition) CHAR(10) *ANY, *BACKUP, *PRIMARY, *REPLICATE, user-defined name PRCDFN SYSTEM

TYPE Process type CHAR(10) *ANY, *CRGADDNOD, *CRGCHG, *CRGCRT, *CRGDLT, *CRGDLTCMD, *CRGEND, *CRGENDNOD, *CRGFAIL, *CRGREJOIN, *CRGRESTR, *CRGRMVNOD, *CRGSTR, *CRGSWT, *CRGUNDO, User-defined value PROCESS TYPE

PRDLIB Product library CHAR(10) User-defined name PRODUCT LIBRARY

TEXT Description CHAR(50) User-defined value DESCRIPTION

Page 723: MIMIX Reference


MZPRCE outfile (WRKPRCE command)

Table 137. MZPRCE outfile (WRKPRCE command)

Field Description Type, length Valid values

PRCDFN Process definition name (Process definition)

CHAR(10) *ANY, user-defined name

PRCSYS System name (Process definition)

CHAR(10) *ANY, *BACKUP, *PRIMARY, *REPLICATE, user-defined name

TYPE Process type CHAR(10) *ANY, *CRGADDNOD, *CRGCHG, *CRGCRT, *CRGDLT, *CRGDLTCMD, *CRGEND, *CRGENDNOD, *CRGFAIL, *CRGREJOIN, *CRGRESTR, *CRGRMVNOD, *CRGSTR, *CRGSWT, *CRGUNDO, User-defined value

SEQNBR Sequence number PACKED(6 0) 1-999999

LABEL Label CHAR(10) User-defined name

MSGID Message identifier CHAR(10) *ANY, user-defined value

ACTION Action CHAR(10) *CMD, *CMDPMT, *CMP, *CMT, *GOTO, *RTN

Page 724: MIMIX Reference


OPERAND1 Compare operand 1 CHAR(10) BLANK, *ACTCODE, *APPCRGSTS, *BCKNOD1, *BCKNOD2, *BCKNOD3, *BCKNOD4, *BCKNOD5, *BCKSTS1, *BCKSTS2, *BCKSTS3, *BCKSTS4, *BCKSTS5, *CHGNOD, *CHGROLE, *CLUNAME, *CRGNAME, *CRGTYPE, *DTACRGSTS, *ENDOPT, *LCLNOD, *LCLPRVROL, *LCLPRVSTS, *LCLROLE, *LCLSTS, *NODCNT, *PRDLIB, *PRINOD, *PRIPRVROL, *PRIPRVSTS, *PRISTS, *PRVACTCDE, *PRVROL1, *PRVROL2, *PRVROL3, *PRVROL4, *PRVROL5, *PRVSTS1, *PRVSTS2, *PRVSTS3, *PRVSTS4, *PRVSTS5, *REPNOD1, *REPNOD2, *REPNOD3, *REPNOD4, *REPNOD5, *REPSTS1, *REPSTS2, *REPSTS3, *REPSTS4, *REPSTS5, *ROLETYPE, User-defined type

OPERATOR Compare operator CHAR(10)

OPERAND2 Compare operand 2 CHAR(10) BLANK, *ACTCODE, *APPCRGSTS, *BCKNOD1, *BCKNOD2, *BCKNOD3, *BCKNOD4, *BCKNOD5, *BCKSTS1, *BCKSTS2, *BCKSTS3, *BCKSTS4, *BCKSTS5, *CHGNOD, *CHGROLE, *CLUNAME, *CRGNAME, *CRGTYPE, *DTACRGSTS, *ENDOPT, *LCLNOD, *LCLPRVROL, *LCLPRVSTS, *LCLROLE, *LCLSTS, *NODCNT, *PRDLIB, *PRINOD, *PRIPRVROL, *PRIPRVSTS, *PRISTS, *PRVACTCDE, *PRVROL1, *PRVROL2, *PRVROL3, *PRVROL4, *PRVROL5, *PRVSTS1, *PRVSTS2, *PRVSTS3, *PRVSTS4, *PRVSTS5, *REPNOD1, *REPNOD2, *REPNOD3, *REPNOD4, *REPNOD5, *REPSTS1, *REPSTS2, *REPSTS3, *REPSTS4, *REPSTS5, *ROLETYPE, User-defined type

CMD Command details CHAR(1000) BLANK, user-defined value


Page 725: MIMIX Reference


ACTLBL Action label CHAR(10) BLANK, user-defined value

RTNVAL Return value CHAR(10) *FAIL, *SUCCESS

COMMENT Comment text CHAR(50) BLANK, user-defined value
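To make the compare-entry fields concrete, here is a hypothetical sketch of how a *CMP entry might drive a jump to an action label using OPERAND1, OPERATOR, OPERAND2, and ACTLBL. The *EQ operator value and the evaluation logic are assumptions for this example; the real processing is performed by MIMIX:

```python
# Illustrative only: evaluate one process entry of type *CMP.
# `values` maps special operand names (e.g. *LCLROLE) to their
# runtime values; literal operands are compared as-is.
def eval_cmp(entry, values):
    """Return the ACTLBL to jump to when a *CMP entry matches, else None."""
    if entry["ACTION"] != "*CMP":
        return None
    op1 = values.get(entry["OPERAND1"], entry["OPERAND1"])
    op2 = values.get(entry["OPERAND2"], entry["OPERAND2"])
    if entry["OPERATOR"] == "*EQ" and op1 == op2:
        return entry["ACTLBL"]
    return None

entry = {"ACTION": "*CMP", "OPERAND1": "*LCLROLE", "OPERATOR": "*EQ",
         "OPERAND2": "PRIMARY", "ACTLBL": "DOSWITCH"}
print(eval_cmp(entry, {"*LCLROLE": "PRIMARY"}))  # DOSWITCH
```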


Page 726: MIMIX Reference


MXDGIFSTE outfile (WRKDGIFSTE command)

Table 138. MXDGIFSTE outfile (WRKDGIFSTE command)

Field Description Type, length Valid values

DGDFN Data group name (Data group definition)

CHAR(10) User-defined data group name

DGSYS1 System 1 name (Data group definition)

CHAR(8) User-defined system name

DGSYS2 System 2 name (Data group definition)

CHAR(8) User-defined system name

OBJ1 System 1 object name (unicode) GRAPHIC(512) VARLEN(75) User-defined name

FID1 System 1 file identifier (binary) BIN(16 0) i5/OS-defined file identifier

FID1HEX System 1 file identifier (hexadecimal-readable)

CHAR(32) i5/OS-defined file identifier

OBJ2 System 2 object name (unicode) GRAPHIC(512) VARLEN(75) User-defined name

FID2 System 2 file identifier (binary) BIN(16 0) i5/OS-defined file identifier

FID2HEX System 2 file identifier (hexadecimal-readable)

CHAR(32) i5/OS-defined file identifier

CCSID Object CCSID BIN(5 0) Defaults to job CCSID. If job CCSID is 65535 or data cannot be converted to job CCSID, OBJ1 and OBJ2 values remain in Unicode.
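The CCSID behavior described above, where converted values fall back to zero length when conversion to the job CCSID is not possible, resembles a common encode-with-fallback pattern. A loose sketch, using Python codec names to stand in for CCSIDs (an assumption for this example):

```python
# Illustrative only: convert a Unicode object name to a target encoding,
# returning a zero-length string when conversion is not possible,
# mirroring the OBJ1CVT/OBJ2CVT behavior described in the table.
def convert_to_job_ccsid(name: str, codec: str) -> str:
    try:
        return name.encode(codec).decode(codec)
    except (UnicodeEncodeError, LookupError):
        return ""  # zero length if conversion not possible

print(convert_to_job_ccsid("/home/payroll", "ascii"))       # /home/payroll
print(len(convert_to_job_ccsid("/home/données", "ascii")))  # 0
```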

Page 727: MIMIX Reference


OBJ1CVT System 1 object name (converted to job CCSID) CHAR(512) VARLEN(75) User-defined name converted using the job CCSID value. Zero length if conversion not possible.

OBJ2CVT System 2 object name (converted to job CCSID) CHAR(512) VARLEN(75) User-defined name converted using the job CCSID value. Zero length if conversion not possible.

TYPE Object type CHAR(10) *DIR, *STMF, *SYMLNK

STSVAL Entry status CHAR(10) *ACTIVE, *HLD, *HLDERR, *HLDIGN, *HLDRNM, *RLSWAIT

JRN1STS Journaled on system 1 CHAR(10) *YES, *NO

JRN2STS Journaled on system 2 CHAR(10) *YES, *NO

APYSSN Apply session CHAR(10) ‘A’ (only supported apply session)


Page 728: MIMIX Reference


MXDGOBJTE outfile (WRKDGOBJTE command)

Table 139. MXDGOBJTE outfile (WRKDGOBJTE command)

Field Description Type, length Valid values

DGDFN Data group name (Data group definition)

CHAR(10) User-defined data group name

DGSYS1 System 1 name (Data group definition)

CHAR(8) User-defined system name

DGSYS2 System 2 name (Data group definition)

CHAR(8) User-defined system name

OBJ1 System 1 object CHAR(10) User-defined name

LIB1 System 1 library CHAR(10) User-defined name

TYPE Object type CHAR(10) *DTAARA, *DTAQ

OBJ2 System 2 object CHAR(10) User-defined name

LIB2 System 2 library CHAR(10) User-defined name

STSVAL Entry status CHAR(10) *ACTIVE, *HLD, *HLDERR, *HLDIGN, *RLSWAIT

JRN1STS Journaled on system 1 CHAR(10) *YES, *NO

JRN2STS Journaled on system 2 CHAR(10) *YES, *NO

APYSSN Current apply session CHAR(10) ‘A’ (only supported apply session)

RQSAPYSSN Requested apply session CHAR(10) ‘A’ (only supported apply session)

Page 729: MIMIX Reference


OBJ1APY System 1 object (known by apply) CHAR(10) User-defined name

LIB1APY System 1 library (known by apply) CHAR(10) User-defined name

OBJ2APY System 2 object (known by apply) CHAR(10) User-defined name

LIB2APY System 2 library (known by apply) CHAR(10) User-defined name
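Outfile records such as MXDGOBJTE are fixed-width, with field widths given in the Type, length column. As an illustration only (the sample record is invented, the offsets are derived solely from the CHAR lengths shown above, and a real record contains all of the remaining fields as well), the first few fields could be sliced like this:

```python
# Leading MXDGOBJTE fields and their widths from the Type, length column:
# DGDFN CHAR(10), DGSYS1 CHAR(8), DGSYS2 CHAR(8), OBJ1 CHAR(10), LIB1 CHAR(10).
LAYOUT = [("DGDFN", 10), ("DGSYS1", 8), ("DGSYS2", 8),
          ("OBJ1", 10), ("LIB1", 10)]

def parse_record(record: str) -> dict:
    fields, pos = {}, 0
    for name, width in LAYOUT:
        # Fixed-width fields are blank-padded; strip trailing blanks.
        fields[name] = record[pos:pos + width].rstrip()
        pos += width
    return fields

# Sample record built from adjacent blank-padded literals (widths 10/8/8/10/10).
rec = "PAYROLL   " "SYSA    " "SYSB    " "CUSTDTA   " "APPLIB    "
print(parse_record(rec)["LIB1"])  # APPLIB
```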


Page 730: MIMIX Reference

Notices

© Copyright 1999, 2008, Lakeview Technology Inc. All rights reserved. This document may not be copied, reproduced, translated, or transmitted in whole or in part, except under license of Lakeview Technology Inc.

® MIMIX is a registered trademark of Lakeview Technology Inc.

™ MIMIX AutoGuard, MIMIX AutoNotify, MIMIX Availability Manager, MIMIX ha1, MIMIX ha Lite, MIMIX DB2 Replicator, MIMIX Object Replicator, MIMIX Monitor, MIMIX Promoter, IntelliStart, RJ Link, and MIMIX Switch Assistant are trademarks of Lakeview Technology Inc.

AS/400, DB2, eServer, i5/OS, IBM, iSeries, OS/400, Power, System i, and WebSphere are trademarks of International Business Machines Corporation.

All other trademarks are the property of their respective owners.

Lakeview Technology Inc. is an IBM Business Partner.

If you are an entity of the U.S. government, you agree that this documentation and the program(s) referred to in this document are Commercial Computer Software, as defined in the Federal Acquisition Regulations (FAR), and the DoD FAR Supplement, and are delivered with only those rights set forth within the license agreement for such documentation and program(s). Use, duplication or disclosure by the Government is subject to restrictions as set forth in subparagraph (c)(1)(ii) of the Rights in Technical Data and Computer Software clause at DFAR 252.227-7013 (48 CFR) or subparagraphs (c)(1) & (2) of the Commercial Computer Software - Restricted Rights clause at FAR 52.227-19.

The information in this document is subject to change without notice. Lakeview Technology Inc. makes no warranty of any kind regarding this material and assumes no responsibility for any errors that may appear in this document. The program(s) referred to in this document are not specifically developed, or licensed, for use in any nuclear, aviation, mass transit, or medical application or in any other inherently dangerous applications, and any such use shall remove Lakeview Technology Inc. from liability. Lakeview Technology Inc. shall not be liable for any claims or damages arising from such use of the Program(s) for any such applications.

Examples and Example Programs:

This book contains examples of reports and data used in daily operation. To illustrate them as completely as possible the examples may include names of individuals, companies, brands, and products. All of these names are fictitious. Any similarity to the names and addresses used by an actual business enterprise is entirely coincidental.

This book contains small programs that are furnished by Lakeview Technology Inc. as simple examples to provide an illustration. These examples have not been thoroughly tested under all conditions. Lakeview Technology, therefore, cannot guarantee or imply reliability, serviceability, or function of these example programs. All programs contained herein are provided to you “AS IS.” THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE EXPRESSLY DISCLAIMED.

Lakeview Technology Inc.
1901 South Meyers, Suite 600
Oakbrook Terrace, IL 60181 USA
www.lakeviewtech.com
Phone: 630-282-8100
Fax: 630-282-8500

Page 731: MIMIX Reference
Page 732: MIMIX Reference

Index

Symbols
*FAILED activity entry 43
*HLD, files on hold 103
*HLDERR, held due to error 381
*HLDERR, hold error status 77
*MSGQ, maintaining private authorities 104

A
access paths, journaling 220
access types (file) for T-ZC entries 387
accessing
   MIMIX Main Menu 91
active server technology 440
additional resources 17
advanced journaling
   add to existing data group 85
   apply session balancing 87
   benefits 72
   conversion examples 86
   convert data group to 85
   ending journaling 331, 335
   loading tracking entries 284
   planning for 85
   replication process 73
   serialized transactions with database 85
   starting journaling 330, 334
advanced journaling, data areas and data queues
   synchronizing 505
   verifying journaling 336
advanced journaling, IFS objects
   file IDs (FIDs) 312
   journal receiver size 213
   restrictions 121
   synchronizing 505
   verifying journaling 332
advanced journaling, large objects (LOBs)
   journal receiver size 213
   synchronizing 476
APPC/SNA, configuring 163
apply session
   constraint induced changes 371
   default value 240
   specifying 236
apply session, database
   load balancing 87
ASP
   basic 565
   concepts 564
   group 565
   independent 565
   independent, benefits 564
   independent, configuration tips 568
   independent, configuring 568
   independent, configuring IFS objects 569
   independent, configuring library-based objects 569
   independent, effect on library list 570
   independent, journal receiver considerations 569
   independent, limitations 567
   independent, primary 565
   independent, replication 563
   independent, requirements 567
   independent, restrictions 567
   independent, secondary 565
   SYSBAS 563
   system 564
   user 565
asynchronous delivery 65
attributes, supported
   CMPDLOA command 606
   CMPFILA command 591
   CMPIFSA command 604
   CMPOBJA command 596
audit results
   #DGFE rule 580, 630
   #DLOATR rule 606, 632
   #DLOATR rule, ASP attributes 612
   #FILATR rule 591, 634
   #FILATR rule, ASP attributes 612
   #FILATR rule, journal attributes 608
   #FILATRMBR rule 591, 634
   #FILATRMBR rule, ASP attributes 612
   #FILATRMBR rule, journal attributes 608
   #FILDTA rule 582, 636
   #IFSATR rule 604, 644
   #IFSATR rule, ASP attributes 612
   #IFSATR rule, journal attributes 608
   #MBRRCDCNT rule 582, 640
   #OBJATR rule 596, 647
   #OBJATR rule, ASP attributes 612
   #OBJATR rule, journal attributes 608
   #OBJATR rule, user profile password attribute 619
   #OBJATR rule, user profile status attribute 615
   interpreting 573, 575, 576
   interpreting, attribute comparisons 586

Page 733: MIMIX Reference

   interpreting, file data comparisons 582
   timestamp difference 129
   troubleshoot 578
auditing and reporting, compare commands
   DLO attributes 434
   file and member attributes 425
   file data using active processing 464
   file data using subsetting options 467
   file data with repair capability 458
   file data without active processing 455
   files on hold 461
   IFS object attributes 431
   object attributes 428
auditing value, i5/OS object
   set by MIMIX 58
auditing, i5/OS object 25
   performed by MIMIX 297
audits 487
   job log 578
authorities, private 104
automation 510
autostart job entry 190
   changing 191
   configuring 190
   identifying 191

B
backlog
   comparing file data restriction 442
backup system 23
   restricting access to files 240
basic ASP 565
batch output 527
benefits
   independent ASPs 564
   LOB replication 107
bi-directional data flow 361
broadcast configuration 68

C
candidate objects
   defined 400
cascade configuration 68
cascading distributions, configuring 365
catchup mode 63
change management
   journal receivers 202
   overview 37
   remote journal environment 37
changing
   RJ link 227
   startup programs, remote journaling 305
changing from RJ to MIMIX processing
   permanently 229
   temporarily 228
checklist
   convert *DTAARA, *DTAQ to user journaling 154
   convert IFS objects to user journaling 154
   converting to remote journaling 147
   copying configuration data 553
   legacy cooperative processing 157
   manual configuration (source-send) 143
   MIMIX Dynamic Apply 150
   new preferred configuration 139
   pre-configuration 81
collision points 511
collision resolution 511
   default value 240
   requirements 382
   working with 381
commands
   changing defaults 537
   displaying a list of 528
commands, by mnemonic
   ADDDGDAE 290
   ADDMSGLOGE 521
   ADDRJLNK 225
   CHGDGDAE 290
   CHGJRNDFN 217
   CHGRJLNK 227
   CHGSYSDFN 171
   CHGTFRDFN 186
   CHKDGFE 303, 580
   CLOMMXLST 536
   CMPDLOA 420
   CMPFILA 420
   CMPFILDTA 440, 455
   CMPIFSA 420
   CMPOBJA 420
   CMPRCDCNT 437
   CPYCFGDTA 552
   CPYDGDAE 291
   CPYDGFE 291
   CPYDGIFSE 291
   CRTCRCLS 383
   CRTDGDFN 247, 251
   CRTJRNDFN 215
   CRTSYSDFN 170

Page 734: MIMIX Reference

   CRTTFRDFN 184
   DLTCRCLS 384
   DLTDGDFN 256
   DLTJRNDFN 256
   DLTSYSDFN 256
   DLTTFRDFN 256
   DSPDGDAE 293
   DSPDGFE 293
   DSPDGIFSE 293
   ENDJRNFE 327
   ENDJRNIFSE 331
   ENDJRNOBJE 335
   ENDJRNPF 327
   LODDGDAE 289
   LODDGFE 272
   LODDGOBJE 268
   MIMIX 91
   OPNMMXLST 536
   RMVDGDAE 292
   RMVDGFE 292
   RMVDGFEALS 292
   RMVDGIFSE 292
   RMVRJCNN 231
   RUNCMD 529
   RUNCMDS 529
   SETDGAUD 297
   SETIDCOLA 373
   SNDNETDLO 509
   SNDNETIFS 508
   SNDNETOBJ 475, 506
   STRJRNFE 326
   STRJRNIFSE 330
   STRJRNOBJE 334
   STRMMXMGR 296
   STRSVR 189
   SWTDG 25
   SYNCDFE 473
   SYNCDGACTE 473, 479
   SYNCDGFE 480, 489
   SYNCDLO 472, 478, 499
   SYNCIFS 472, 478, 495, 505
   SYNCOBJ 472, 478, 491, 505
   VFYCMNLNK 194, 195
   VFYJRNFE 328
   VFYJRNIFSE 332
   VFYJRNOBJE 336
   VFYKEYATR 359
   WRKCRCLS 383
   WRKDGDAE 289, 291
   WRKDGDFN 255
   WRKDGDLOE 291
   WRKDGFE 291
   WRKDGIFSE 291
   WRKDGOBJE 291
   WRKJRNDFN 255
   WRKRJLNK 310
   WRKSYSDFN 255
   WRKTFRDFN 255
commands, by name
   Add Data Group Data Area Entry 290
   Add Message Log Entry 521
   Add Remote Journal Link 225
   Change Data Group Data Area Entry 290
   Change Journal Definition 217
   Change RJ Link 227
   Change System Definition 171
   Change Transfer Definition 186
   Check Data Group File Entries 303, 580
   Close MIMIX List 536
   Compare DLO Attributes 420
   Compare File Attributes 420
   Compare File Data 440, 455
   Compare IFS Attributes 420
   Compare Object Attributes 420
   Compare Record Counts 437
   Copy Configuration Data 552
   Copy Data Group Data Area Entry 291
   Copy Data Group File Entry 291
   Copy Data Group IFS Entry 291
   Create Collision Resolution Class 383
   Create Data Group Definition 247, 251
   Create Journal Definition 215
   Create System Definition 170
   Create Transfer Definition 184
   Delete Collision Resolution Class 384
   Delete Data Group Definition 256
   Delete Journal Definition 256
   Delete System Definition 256
   Delete Transfer Definition 256
   Display Data Group Data Area Entry 293
   Display Data Group File Entry 293
   Display Data Group IFS Entry 293
   End Journal Physical File 327
   End Journaling File Entry 327
   End Journaling IFS Entries 331
   End Journaling Obj Entries 335
   Load Data Group Data Area Entries 289
   Load Data Group File Entries 272
   Load Data Group Object Entries 268
   MIMIX 91

Page 735: MIMIX Reference

    Open MIMIX List 536
    Remove Data Group Data Area Entry 292
    Remove Data Group File Entry 292
    Remove Data Group IFS Entry 292
    Remove Remote Journal Connection 231
    Run Command 529
    Run Commands 529
    Send Network DLO 509
    Send Network IFS 508
    Send Network Object 506
    Send Network Objects 475
    Set Data Group Auditing 297
    Set Identity Column Attribute 373
    Start Journaling File Entry 326
    Start Journaling IFS Entries 330
    Start Journaling Obj Entries 334
    Start Lakeview TCP Server 189
    Start MIMIX Managers 296
    Switch Data Group 25
    Synchronize Data Group Activity Entry 479
    Synchronize Data Group File Entry 480, 489
    Synchronize DG Activity Entry 473
    Synchronize DG File Entry 473
    Synchronize DLO 472, 478, 499
    Synchronize IFS 478
    Synchronize IFS Object 472, 495, 505
    Synchronize Object 472, 478, 491, 505
    Verify Communications Link 194, 195
    Verify Journaling File Entry 328
    Verify Journaling IFS Entries 332
    Verify Journaling Obj Entries 336
    Verify Key Attributes 359
    Work with Collision Resolution Classes 383
    Work with Data Group Data Area Entries 289, 291
    Work with Data Group Definition 255
    Work with Data Group DLO Entries 291
    Work with Data Group File Entries 291
    Work with Data Group IFS Entries 291
    Work with Data Group Object Entries 291
    Work with Journal Definition 255
    Work with RJ Links 310
    Work with System Definition 255
    Work with Transfer Definition 255

commands, run on remote system 529

commit cycles
    effect on audit comparison 582, 583
    effect on audit results 587
    policy effect on compare record count 351

commitment control 107
    #MBRRCDCNT audit performance 351
    journal standby state, journal cache 341, 344
    journaled IFS objects 73

communications
    APPC/SNA 163
    configuring system level 159
    job names 48
    native TCP/IP 159
    OptiConnect 163
    protocols 159
    starting TCP server 189

compare commands
    completion and escape messages 514
    outfile formats 419
    report types and outfiles 418
    spooled files 418

comparing
    DLO attributes 434
    file and member attributes 425
    IFS object attributes 431
    object attributes 428
    when file content omitted 389

comparing attributes
    attributes to compare 422
    overview 420
    supported object attributes 421, 445

comparing file data 440
    active server technology 440
    advanced subsetting 451
    allocated and not allocated records 442
    comparing a random sample 451
    comparing a range of records 448
    comparing recently inserted data 448
    comparing records over time 451
    data correction 440
    first and last subset 453
    interleave factor 451
    keys, triggers, and constraints 443
    multi-threaded jobs 441
    number of subsets 451
    parallel processing 441
    processing with DBAPY 441, 461
    referential integrity considerations 444
    repairing files in *HLDERR 441
    restrictions 441
    security considerations 442
    thread groups 450
    transfer definition 450
    transitional states 441
    using active processing 464


    using subsetting options 467
    wait time 450
    with repair capability 458
    with repair capability when files are on hold 461
    without active processing 455

comparing file record counts 437

configuration
    additional supporting tasks 294
    auditing 580
    copying existing data 558

configuring
    advanced replication techniques 353
    bi-directional data flow 361
    cascading distributions 365
    choosing the correct checklist 137
    classes, collision resolution 383
    data areas and data queues 112
    DLO documents and folders 124
    file routing, file combining 363
    for improved performance 337
    IFS objects 118
    independent ASP 568
    Intra communications 560, 561
    job restart time 313
    keyed replication 356
    library-based objects 100
    message queue objects for user profiles 104
    omitting T-ZC journal entry content 388
    spooled file replication 102
    to replicate SQL stored procedures 393
    unique key replication 356

configuring, collision resolution 382
confirmed journal entries 64

considerations
    journal for independent ASP 569
    what to not replicate 83

constraints
    *CST attribute for CMPFILA 591
    apply session for dependent files 371
    auditing with CMPFILA 420
    CMPFILA file-specific attribute 591
    comparing file data 443
    omit content and legacy cooperative processing 389
    referential integrity considerations 444
    requirements 370
    requirements when synchronizing 481
    restrictions with high availability journal performance enhancements 344
    support 370
    when journal is in standby state 341

constraints, physical files with
    apply session ignored 111
    configuring 107
    legacy cooperative processing 111

constraints, referential 111
contacting Lakeview Technology 19

container send process 56
    defaults 243
    description 54
    threshold 243

contextual transfer definitions
    considerations 183
    RJ considerations 182

continuous mode 63

conventions
    product 14
    publications 14

convert data group
    to advanced journaling 154

COOPDB (Cooperate with database) 113, 120

cooperative journal (COOPJRN)
    behavior 106

cooperative processing
    and omitting content 389
    configuring files 105
    file, preferred method for 50
    introduction 50
    journaled objects 51
    legacy 51
    legacy limitations 111
    MIMIX Dynamic Apply limitations 110

cooperative processing, legacy
    limitations 111
    requirements and limitations 111

COOPJRN 106
COOPJRN (Cooperative journal) 236
COOPTYPE (Cooperating object types) 113

copying
    data group entries 291
    definitions 255

create operation, how replicated 129
customer support 19

customizing 510
    replication environment 511

D

data area


    restrictions of journaled 113

data areas
    journaling 72
    polling interval 238
    polling process 77
    synchronizing an object tracking entry 505

data distribution techniques 361

data group 24
    convert to remote journaling 147
    database only 110
    determining if RJ link used 310
    ending 40, 67
    RJ link differences 67
    sharing an RJ link 66
    short name 234
    starting 40
    switching 24
    switching, RJ link considerations 70
    timestamps, automatic 237
    type 235

data group data area entry 289
    adding individual 290
    loading from a library 289

data group definition 35, 233
    creating 247
    parameter tips 234

data group DLO entry 287
    adding individual 288
    loading from a folder 287

data group entry 401
    defined 93
    description 24
    object 267
    procedures for configuring 265

data group file entry 272
    adding individual 278
    changing 279
    loading from a journal definition 276
    loading from a library 275, 276
    loading from FEs from another data group 277
    loading from object entries 273
    sources for loading 272

data group IFS entry 282
    with independent ASPs 569

data group object entry
    adding individual 268
    custom loading 267
    independent ASP 569
    with independent ASP 569

data library 34, 168
data management techniques 361

data queue
    restrictions of journaled 113

data queues
    journaling 72
    synchronizing journaled objects 505

data source 234

database apply
    serialization 85
    with compare file data (CMPFILDTA) 441, 461

database apply process 76
    description 66
    threshold warning 241

database reader process 66
    description 66
    threshold 241

database receive process 76

database send process 76
    description 76
    filtering 236
    threshold 241

DDM
    password validation 306
    server in startup programs 305
    server, starting 308

defaults, command 537

definitions
    data group 35
    journal 35
    named 34
    remote journal link 35
    renaming 258
    RJ link 35
    system 35
    transfer 35

delay times 167

delay/retry processing
    first and second 238
    third 239

delete management
    journal receivers 203
    overview 37
    remote journal environment 38

delete operations
    journaled *DTAARA, *DTAQ, IFS objects 134
    legacy cooperative processing 134

deleting
    data group entries 292


    definitions 256

delivery mode
    asynchronous 65
    synchronous 63

detail report 525

detected differences
    viewing and resolving 575, 576

directory entries
    managing 178
    RDB 178

display output 524

displaying
    data group entries 293
    definitions 257

distribution request, data-retrieval 55

DLOs
    example, entry matching 125
    generic name support 124
    keeping same name 242
    object processing 124

duplicate identity column values 373

dynamic updates
    adding data group entries 278
    removing data group entries 292

E

end journaling
    data areas and data queues 335
    files 327
    IFS objects 331
    IFS tracking entry 331
    object tracking entry 335

ending CMPFILDTA jobs 454

examples
    convert to advanced journaling 86
    DLO entry matching 125
    IFS object selection, subtree 415
    job restart time 316
    journal definitions for multimanagement environment 209
    journal definitions for switchable data group 207
    journal receiver exit program 545
    load file entries for MIMIX Dynamic Apply 273
    object entry matching 102
    object retrieval delay 391
    object selection process 407
    object selection, order precedence in 408
    object selection, subtree 410
    port alias, complex 161
    port alias, simple 160
    querying content of an output file 696
    SETIDCOLA command increment values 377
    WRKDG SELECT statements 696

exit points 511
    journal receiver management 538, 541
    MIMIX Monitor 538
    MIMIX Promoter 539

exit programs
    journal receiver management 204, 542
    requesting customized programs 540

expand support 526

extended attribute cache 345
    configuring 345

F

failed request resolution 43
FEOPT (file and tracking entry options) 239
file id (FID) 75

files
    combining 363
    omitting content 387
    output 526
    routing 364
    sharing 361
    synchronizing 480

filtering
    database replication 76
    messages 45
    on database send 236
    on source side 237
    remote journal environment 66

firewall, using CMPFILDTA with 442
folder path names 124

G

generic name support 402
    DLOs 124

generic user exit 538

H

help, accessing 14
history retention 168
hot backup 21

I

IBM i5/OS option 42 341

IBM OS/400 objects


    to not replicate 83

IFS directory, created during installation 29

IFS file systems 118
    unsupported 118

IFS object selection
    examples, subtree 415
    subtree 405

IFS objects 118
    file id (FID) use with journaling 75
    journaled entry types, commitment control and 73
    journaling 72
    not supported 118
    path names 119
    supported object types 118

IFS objects, journaled
    restrictions 121
    supported operations 130
    synchronizing 482, 505

independent ASP 565
    limitations 567
    primary 565
    replication 563
    requirements 567
    restrictions 567
    secondary 565
    synchronizing data within an 477

information and additional resources 17
installations, multiple MIMIX 23
interleave factor 451
Intra configuration 559
IPL, journal receiver change 37

J

job classes 30
job description parameter 527

job descriptions 30, 168
    in data group definition 243
    in product library 30
    list of MIMIX 30

job log
    for audit 578

job name parameter 527
job names 47

job restart time 313
    data group definition procedure 319
    examples 315
    overview 313
    parameter 168, 244
    system definition procedure 319

jobs, restarted automatically 313

journal 25
    improving performance of 337
    maximum number of objects in 26
    security audit 53
    system 53

journal analysis 43

journal at create 127, 238
    requirements 323
    requirements and restrictions 324

journal caching 202, 342

journal definition 35
    configuring 197
    created by other processes 200
    creating 215
    fields on data group definition 235
    parameter tips 201
    remote journal environment considerations 205
    remote journal naming convention 206
    remote journal naming convention, multimanagement 208
    remote journaling example 207

journal entries 25
    confirmed 64
    filtering on database send 236
    minimized data 339
    OM journal entry 130
    receive journal entry (RCVJRNE) 346
    unconfirmed 64, 70

journal entry codes
    for data area and data queues 114
    supported by MIMIX user journal processing 122

journal image 239, 355
journal manager 33

journal receiver 25
    change management 37, 202
    delete management 37, 38, 203
    prefix 202
    RJ processing earlier receivers 38
    size for advanced journaling 213
    starting point 26
    stranded on target 39

journal receiver management
    interaction with other products 38
    recommendations 37

journal sequence number, change during IPL 37


journal standby state 341

journaled data areas, data queues
    planning for 85

journaled IFS objects
    planning for 85

journaled object types
    user exit program considerations 87

journaling 25
    cannot end 327
    data areas and data queues 72
    ending for data areas and data queues 335
    ending for IFS objects 331
    ending for physical files 327
    IFS objects 72
    IFS objects and commitment control 73
    implicitly started 323
    requirements for starting 323
    starting for data areas and data queues 334
    starting for IFS objects 330
    starting for physical files 326
    starting, ending, and verifying 322
    verifying 487
    verifying for data areas and data queues 336
    verifying for IFS objects 332
    verifying for physical files 328

journaling environment
    automatically creating 236
    building 219
    removing 231
    source for values (JRNVAL) 219

journaling on target, RJ environment considerations 39

journaling status
    data areas and data queues 334
    files 326
    IFS objects 330

journaling, starting
    files 326

K

keyed replication 355
    comparing file data restriction 442
    file entry option defaults 239
    preventing before-image filtering 237
    restrictions 356
    verifying file attributes 359

L

large object (LOB) support
    user exit program 108

large objects (LOBs)
    minimized journal entry data 339

legacy cooperative processing
    configuring 108
    limitations 111
    requirements 111

libraries
    to not replicate 83

library list
    adding QSOC to 164

library list, effect of independent ASP 570
library-based objects, configuring 100

limitations
    database only data group 110

list detail report 525
list summary report 525
load leveling 57

loading
    tracking entries 284

LOB replication 107
local-remote journal pair 63
log space 26
logical files 105, 106
long IFS path names 119

M

manage directory entries 178
management system 24
maximum size transmitted 177
MAXOPT2 value 213

menu
    MIMIX Configuration 295
    MIMIX Main 91

message handling 167
message log 521

message queues
    associated with user profiles 104
    journal-related threshold 204

messages 44
    CMPDLOA 516
    CMPFILA 514
    CMPFILDTA 517
    CMPIFSA 515
    CMPOBJA 515
    CMPRCDCNT 516
    comparison completion and escape 514

MIMIX AutoGuard 487

MIMIX Dynamic Apply


    configuring 105, 108
    recommended for files 105
    requirements and limitations 110

MIMIX environment 29
MIMIX installation 23
MIMIX jobs, restart time for 313
MIMIX Model Switch Framework 538
MIMIX performance, improving 337
MIMIX Retry Monitor 43
MIMIXOWN user profile 31, 306
MIMIXQGPL library 34
MIMIXSBS subsystem 34, 90

minimized journal entry data 339
    LOBs 107

MMNFYNEWE monitor 127

monitor
    new objects not configured to MIMIX 127

move/rename operations
    system journal replication 130
    user journal replication 131

multimanagement
    journal definition naming 208

multi-threaded jobs 441

N

name pattern 405
name space 53
names, displaying long 119

naming conventions
    data group definitions 234
    journal definitions 201, 206, 208
    multi-part 27
    transfer definitions 176
    transfer definitions, contextual (*ANY) 183
    transfer definitions, multiple network systems 172

network systems 24
    multiple 172

new objects
    automatically journal 238
    automatically replicate 127
    files 127
    files processed by legacy cooperative processing 128
    files processed with MIMIX Dynamic Apply 127
    IFS object journal at create requirements 323
    IFS objects, data areas, data queues 128
    journal at create selection criteria 324

notification of objects not in configuration 127
notification retention 168

O

object apply process
    defaults 243
    description 54
    threshold 243

object attributes, comparing 422
object auditing 323

object auditing level, i5/OS
    manually set for a data group 297
    set by MIMIX 58, 297

object auditing value
    data areas, data queues 112
    DLOs 124
    IFS objects 120
    library-based objects 98
    omit T-ZC entry considerations 388

object entry, data group
    creating 267

object locking retry interval 238

object processing
    data areas, data queues 112
    defaults 241
    DLOs 124
    high volume objects 350
    IFS objects 118
    retry interval 238
    spooled files 102

object retrieval delay
    considerations 391
    examples 391
    selecting 391

object retrieve process 56
    defaults 243
    description 53
    threshold 243
    with high volume objects 350

object selection 399
    commands which use 399
    examples, order precedence 408
    examples, process 407
    examples, subtree 410
    name pattern 405
    order precedence 401
    parameter 401
    process 399
    subtree 404


object selector elements 401
    by function 402

object selectors 401

object send process 54
    description 53
    threshold 242

object types supported 96, 549

Omit content (OMTDTA) parameter 388
    and comparison commands 389
    and cooperative processing 389

open commit cycles
    audit results 582, 583, 587

OptiConnect, configuring 163

outfiles 621
    MCAG 623
    MCDTACRGE 626
    MCNODE 628
    MXCDGFE 630
    MXCMPDLOA 632
    MXCMPFILA 634
    MXCMPFILD 636
    MXCMPFILR 639
    MXCMPIFSA 644
    MXCMPOBJA 647
    MXCMPRCDC 640
    MXDGACT 649
    MXDGACTE 651
    MXDGDAE 659
    MXDGDFN 660
    MXDGDLOE 668
    MXDGFE 670
    MXDGIFSE 674, 726, 728
    MXDGIFSTE 726
    MXDGOBJE 703
    MXDGOBJTE 728
    MXDGSTS 676
    MXDGTSP 706
    MXJRNDFN 709
    MXSYSDFN 716
    MXTFRDFN 720
    MZPRCDFN 722
    MZPRCE 723
    user profile password 619
    user profile status 615
    WRKRJLNK 713

outfiles, supporting information
    record format 621
    work with panels 622

output
    batch 527
    considerations 523
    display 524
    expand support 526
    file 526
    parameter 523
    print 524

output file
    querying content, examples of 696

output file fields
    Difference Indicator 582, 587
    System 1 Indicator field 589
    System 2 Indicator field 589

output queues 168

overview
    MIMIX operations 40
    remote journal support 61
    starting and ending replication 40
    support for resolving problems 42
    support for switching 24, 44
    working with messages 44

P

parallel processing 441
path names, IFS 119
policy, CMPRCDCNT commit threshold 351
polling interval 238

port alias 160
    complex example 161
    creating 162
    simple example 160

print output 524

printing
    controlling characteristics of 168
    data group entries 293
    definitions 257

private authorities, *MSGQ replication of 104

problems, journaling
    data areas and data queues 334
    files 326
    IFS objects 330

process
    container send and receive 56
    database apply 76
    database reader 66
    database receive 76
    database send 76
    names 47
    object apply 56
    object retrieve 56


    object send 54

process, object selection 399

processing defaults
    container send 243
    database apply 241
    file entry options 239
    object apply 243
    object retrieve 243
    user journal entry 236

production system 23

publications
    conventions 14
    formatting used in 15
    IBM 17

Q

QAUDCTL system value 53
QAUDLVL system value 53, 103

QDFTJRN data area 238
    restrictions 324
    role in processing new objects 324

QSOC
    library 164
    subsystem 305

R

RCVJRNE (Receive Journal Entry) 346
    configuring values 347
    determining whether to change the value of 347
    understanding its values 346

RDB 178
    directory entries 178

RDB directory entry 188
reader wait time 235
receiver library, changing for RJ target journal 222

receivers
    change management 202
    delete management 203

recommendation
    multimanagement journal definitions 208

relational database (RDB) 178
    entries 178, 186

remote journal
    benefits 61
    i5/OS function 25, 61
    i5/OS function, asynchronous delivery 65
    i5/OS function, synchronous delivery 63
    MIMIX support 61
    relational database 178

remote journal environment
    changing 222
    contextual transfer definitions 182
    receiver change management 37
    receiver delete management 38
    restrictions 62
    RJ link 66
    security implications 306
    switch processing changes 44

remote journal link 35, 66
remote journal link, See also RJ link

remote journaling
    data group definition 236

repairing
    file data 458
    files in *HLDERR 441
    files on hold 461

replicating
    user profiles 476
    what to not replicate 83

replication
    advanced topic parameters 237
    by object type 96
    configuring advanced techniques 353
    constraint-induced modifications 371
    data area 77
    defaults for object types 96
    direction of 23
    ending data group 40
    ending MIMIX 40
    independent ASP 563
    maximum size threshold 177
    positional vs. keyed 355
    process, remote journaling environment 66
    retrieving extended attributes 345
    spooled files 102
    SQL stored procedures 393
    starting data group 40
    starting MIMIX 40
    system journal process 53
    unit of work for 24
    user-defined functions 393
    what to not replicate 83

replication path 46

reports
    detail 525
    list detail 525
    list summary 525


    types for compare commands 418

requirement
    objects and journal in same ASP 26

requirements
    independent ASP 567
    journal at create 323
    keyed replication 355
    legacy cooperative processing 111
    MIMIX Dynamic Apply 110
    standby journaling 343
    user journal replication of data areas and data queues 112

restarted 313

restore operations, journaled *DTAARA, *DTAQ, IFS objects 134

restrictions
    comparing file data 441
    data areas and data queues 113
    independent ASP 567
    journal at create 324
    journal receiver management 38
    journaled *DTAARA, *DTAQ objects 113
    journaled IFS objects 121
    keyed replication (unique key) 356
    legacy cooperative processing 111
    LOBs 108
    MIMIX Dynamic Apply 110
    number of objects in journal 26
    QDFTJRN data area 324
    remote journaling 62
    standby journaling 343

retrying, data group activity entries 43

RJ link 35
    adding 225
    changing 227
    data group definition parameter 236
    description 66
    end options 67
    identifying data groups that use 310
    sharing among data groups 66
    switching considerations 70
    threshold 237

RJ link monitors
    description 68
    displaying status of 68
    ending 68
    not installed, status when 68
    operation 68

S

save-while-active 396
    considerations 396
    examples 397
    options 397
    wait time 396

search process, *ANY transfer definitions 181

security
    considerations, CMPFILDTA command 442
    general information 80
    remote journaling implications 306

security audit journal 53

sending
    DLOs 509
    IFS objects 508
    library-based objects 506

serialization
    database files and journaled objects 85
    object changes with database 72

servers
    starting DDM 308
    starting TCP 189

short transfer definition name 176
source physical files 105, 106
source system 23

spooled files 102
    compare commands 418
    keeping deleted 103
    options 103
    retaining on target system 242

SQL stored procedures 393
    replication requirements 393

SQL table identity columns 373
    alternatives to SETIDCOLA 375
    check for replication of 378
    problem 373
    SETIDCOLA command details 376
    SETIDCOLA command examples 377
    SETIDCOLA command limitations 374
    SETIDCOLA command usage notes 377
    setting attribute 378
    when to use SETIDCOLA 374

standby journaling
    IBM i5/OS option 42 341
    journal caching 342
    journal standby state 341
    MIMIX processing with 342
    overview 341
    requirements 343


    restrictions 343

start journaling
    data areas and data queues 334
    file entry 326
    files 326
    IFS objects 330
    IFS tracking entry 330
    object tracking entry 334

starting
    system and journal managers 296
    TCP server 189
    TCP server automatically 190

startup programs
    changes for remote journaling 305
    MIMIX subsystem 90
    QSOC subsystem 305

status, values affecting updates to 238
storage, data libraries 168
stranded journal on target, journal entries 39

subsystem
    MIMIXSBS, starting 90
    QSOC 305

subtree 404
    IFS objects 405

switching
    allowing 234
    data group 24
    enabling journaling on target system 235
    example RJ journal definitions for 207
    independent ASP restriction 568
    MIMIX Model Switch Framework with RJ link 70
    preventing identity column problems 373
    remote journaling changes to 44
    removing stranded journal receivers 39
    RJ link considerations 70

synchronization check, automatic 237

synchronizing 472
    activity entries overview 479
    commands for 474
    considerations 474
    data group activity entries 503
    database files 489
    database files overview 480
    DLOs 499
    DLOs in a data group 499
    DLOs without a data group 500
    establish a start point 483
    file entry overview 480
    files with triggers 480
    IFS objects 495
    IFS objects by path name only 496
    IFS objects in a data group 495
    IFS objects without a data group 496
    IFS tracking entries 505
    including logical files 481
    independent ASP, data in an 477
    initial 484
    initial configuration 483
    initial configuration MQ environment 483
    limit maximum size 474
    LOB data 476
    object tracking entries 505
    object, IFS, DLO overview 478
    objects 491
    objects in a data group 491
    objects without a data group 492
    related file 481
    resources for 483
    status changes caused by 476
    tracking entries 482
    user profiles 474, 476

synchronous delivery 63
    unconfirmed entries 64

SYSBAS 563, 565
system ASP 564

system definition 35, 166
    changing 171
    creating 170
    parameter tips 167

system journal 53

system journal replication
    advanced techniques 353
    omitting content 387

system library list 163, 570
system manager 32

system user profiles
    to not replicate 83

system value
    QAUDCTL 53
    QAUDLVL 53, 103
    QSYSLIBL 164

system, roles 23

T

target journal state 202
target system 23

TCP/IP
    adding to startup program 305


    configuring native 159
    creating port aliases for 160

temporary files
    to not replicate 83

thread groups 450

threshold, backlog
    adjusting 251
    container send 243
    database apply 241
    database reader/send 241
    object apply 243
    object retrieve 243
    object send 242
    remote journal link 237

threshold, CMPRCDCNT commit 351
timestamps, automatic 237

tracking entries
    loading 284
    loading for data areas, data queues 285
    loading for IFS objects 284
    purpose 74

tracking entry
    file identifiers (FIDs) 312

transfer definition 35, 174, 450
    changing 186
    contextual system support (*ANY) 28, 181
    fields in data group definition 235
    fields in system definition 167
    multiple network system environment 172
    other uses 174
    parameter tips 176
    short name 176

transfer protocols
    OptiConnect parameters 177
    SNA parameters 177
    TCP parameters 176

trigger programs
    defined 368
    synchronizing files 369

triggers
    avoiding problems 444
    comparing file data 443
    disabling during synchronization 480
    read 443
    update, insert, and delete 443

T-ZC journal entries
    access types 387
    configuring to omit 388
    omitting 387

U

unconfirmed journal entries 64, 70

unique key
    comparing file data restriction 442
    file entry options for replicating 239
    replication of 355

user ASP 565
user exit points 541

user exit program
    data areas and data queues 87
    IFS objects 87
    large objects (LOBs) 108

user exit, generic 538

user journal replication
    advanced techniques 353
    requirements for data areas and data queues 112
    supported journal entries for data areas, data queues 114
    tracking entry 74

user profile
    MIMIXOWN 306
    password 619
    status 615

user profiles
    default 168
    MIMIX 31
    replication of 104
    specifying status 242
    synchronizing 474
    system distribution directory entries 476
    to not replicate 83

user-defined functions 393

V

verifying
    communications link 194, 195
    initial synchronization 487
    journaling, IFS tracking entries 332
    journaling, object tracking entries 336
    journaling, physical files 328
    key attributes 359
    send and receive processes automatically 238

W

wait time
    comparing file data 450
    reader 235


WRKDG SELECT statement 696
