This is one of the things you might look at while troubleshooting degradation, but which you really ought to check periodically, so that a developing problem does not go unnoticed. Doubtless someone will update this article to reflect improvements in later models of IBM eServers. What is actually needed will vary greatly with your 400 hardware, your system load, and your version of OS/400 or i5/OS.
Al Mac has similar notes at work in FIXTUNING, an SEU "document." Some of the terminology Al Mac uses may not be the correct, official terms ... sometimes Al Mac has assigned a name to a technique learned, or figured out, outside formal 400 education, without knowing what name would be better.
- 1 User Friendly
- 2 Brain Overload
- 3 Errors and other Clues
- 3.1 Communications Lines
- 3.2 Data Base Monitor
- 3.3 File hit Maximum records
- 3.4 Hogging System Resources
- 3.5 JOBLOG
- 3.6 Job Tracking
- 3.7 Library List vs. Qualified Calls
- 3.8 Messages Management
- 3.9 Performance Measurement
- 3.10 Users work day
- 3.11 Workload Job Accounting
- 4 Application Software Review
- 4.1 Batch Considerations
- 4.2 Blocking
- 4.3 CPU Sharing
- 4.4 Call Frequency
- 4.5 Data Slicing
- 4.6 Debug
- 4.7 Disk Access
- 4.8 Logicals
- 4.9 Message Over Load
- 4.10 Open Query File Optimization
- 4.11 Program Performance Optimization
- 4.12 Query
- 4.13 Random Inefficiency
- 4.14 Read Slowly
- 4.15 Run Priority
- 4.16 Screen Restructuring
- 4.17 SQL ADVISOR
- 4.18 Start Stop Start Stop to Infinity
- 4.19 Update Productively
- 4.20 Write Slowly
- 5 Backup Management
- 6 Disk Space Cleaning
- 7 Management Central
- 8 System Values
- 9 Task Scheduling
- 10 Other Resources
Performance Tuning is not just about helping our 400 run more efficiently; it is also about helping our end users be more productive with the 400 data supplied to them.
This is a collection of complicated topics that can take a while to wrap our minds around and thoroughly "Grok," so pick one area, study it, and leave the others alone until you are ready to move on to something else.
What prior knowledge of the 400 is it smart for you to have some of, to help you swim in these waters?
Symptoms Told Computer Doctor
Errors and other Clues
Data Base Monitor
File hit Maximum records
Hogging System Resources
Library List vs. Qualified Calls
Users work day
Workload Job Accounting
Application Software Review
Typically software is written "OK," but then demands on the system, and the nature of the data, lead to an evolution in how the programs are used, such that they are no longer optimized for current usage, and some kind of review becomes worthwhile.
Some software, or modifications, may have been written under rushed conditions, with inefficiency that can be reduced by reprogramming. When we are drowning in individual programs, this effort obviously ought to be directed towards programs that are both identifiable as having serious inefficiencies and run very frequently.
Notice that we can dump information about software executables to an *OUTFILE for Query or other analysis. One factoid is the number of days the software was used after creation. The highest numbers mean a program is used almost every day.
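As a minimal sketch of that dump: DSPOBJD supports OUTPUT(*OUTFILE), and the outfile takes the QADSPOBJ record format, which (as I recall — verify on your release) includes a days-used count and last-used date. The library and outfile names below are placeholders.

```
/* Dump object descriptions for all programs in a library         */
/* (MYLIB and PGMINFO are placeholder names)                      */
DSPOBJD    OBJ(MYLIB/*ALL) OBJTYPE(*PGM) +
             OUTPUT(*OUTFILE) OUTFILE(QTEMP/PGMINFO)
/* PGMINFO now has the QADSPOBJ record format; Query/400 or SQL   */
/* can sort it by days-used count to find the busiest programs    */
```

Sorting that outfile descending by the usage fields puts the most heavily exercised programs, the best tuning candidates, at the top.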
This is a technique Al Mac figured out during performance tuning; Al has no idea what the correct terminology is, and came up with this name to describe the process.
Data Slicing Theory
This is a program modification that alters the software design. The issue is identifying where it can improve performance sufficiently to justify the effort of doing so.
Most BPCS RPG programs, in Al Mac's experience, are fronted by a Prompt Screen where the user supplies the criteria the user is interested in, such as facility, warehouse range, item range, customer range, and many other factors; then the program either selects that stuff using OPNQRYF, or launches a program that looks at an entire file, rejecting from consideration the records that are not relevant to the selection criteria.
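The OPNQRYF route looks roughly like the CL below. The file, library, field, and program names are hypothetical, not actual BPCS objects; the essential pieces are the shared open data path, the selection expression, and the cleanup.

```
OVRDBF     FILE(ORDERS) SHARE(*YES)       /* let the called program share the open */
OPNQRYF    FILE((MYLIB/ORDERS)) +
             QRYSLT('FACIL *EQ "F1" *AND +
                     ITEM *GE "A000" *AND ITEM *LE "M999"')
CALL       PGM(MYLIB/ORDRPT)              /* report program sees only selected records */
CLOF       OPNID(ORDERS)
DLTOVR     FILE(ORDERS)
```

The called RPG program needs no change at all: it simply reads the file, and the query open feeds it only the records that passed the QRYSLT test.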
We can compare several programs that in theory are looking at similar data. How long do they take to run? We can send DSPLOG data to an *OUTFILE capturing when batch jobs started and ended, to get typical statistics on how long they take to run, sorted by program name or run time. If we have two or more programs that in theory ought to be looking at the same data, but have wildly different run times, then the ones that take longer are perhaps candidates for some kind of performance improvement.
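Assuming the captured start/end times have been massaged into a file with one row per job run, an SQL summary like this surfaces the long runners (JOBSTATS, JOBPGM, and RUNSECS are invented names for illustration, where RUNSECS holds the elapsed seconds):

```
-- JOBSTATS is a hypothetical file built from the captured log data
SELECT   JOBPGM, COUNT(*) AS RUNS, AVG(RUNSECS) AS AVGSECS
  FROM   MYLIB/JOBSTATS
  GROUP BY JOBPGM
  ORDER BY AVGSECS DESC
```

Programs at the top of this list, especially those whose siblings process similar data far faster, are the ones worth a closer look.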
How much faster might a program execute if it did not look at records that are outside the criteria of the user Prompt Screen? How much could this reduce the drain on disk access felt by all the other 400 users?
If particular combinations are rarely to be repeated, consider creating a temporary logical on the fly, based on the Prompt Screen selection criteria.
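The same effect as a throwaway select/omit logical can be had by building the OPNQRYF selection string at run time from the prompt-screen values, which avoids creating and deleting an object per run. Everything named below (&FACIL, the files, the program) is a placeholder.

```
DCL        VAR(&FACIL)  TYPE(*CHAR) LEN(3)
DCL        VAR(&QRYSLT) TYPE(*CHAR) LEN(256)
/* &FACIL would be filled in from the user's Prompt Screen */
CHGVAR     VAR(&QRYSLT) +
             VALUE('FACIL *EQ "' *CAT &FACIL *CAT '"')
OVRDBF     FILE(ORDERS) SHARE(*YES)
OPNQRYF    FILE((MYLIB/ORDERS)) QRYSLT(&QRYSLT)
CALL       PGM(MYLIB/ORDRPT)
CLOF       OPNID(ORDERS)
DLTOVR     FILE(ORDERS)
```

Because the selection string is just a character variable, any combination of prompt criteria can be concatenated into it, however rarely that combination recurs.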
Before modification to apply the data slicing concept, this report typically ran to 10,000 pages or more. After data slicing it typically ran to no more than a few hundred pages.
Some managers were only interested in those items for which there was on-hand inventory and a cost variance between standard and actual.
Shop Order Creation
This is a BPCS function we modified, with heavy help from consultants; then, over the years, it received a heavy stream of modification requests from the users.
Basically the software creates factory paperwork associated with new production orders. The core software accesses 100% of the relevant files, when most users only need to be futzing with the portion of the data for their facility, planner code, or the portion of the factory that they manage.
Via Embedded SQL Cursors, it was possible to go after joins of selected fields of multiple files, such that the total data processed from disk was a tiny fraction of what would have been read had RPG had to read entire records, analyse the content, use that to chain to other files, do the same kind of thing there, and end up reading portions of records not needed, and many whole records not needed, just to find the data actually wanted.
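As an illustration of the idea (the file and field names here are invented, not actual BPCS names), a single cursor over a join pulls just the columns needed and lets the database do the chaining and the filtering:

```
-- Hypothetical shop order header/detail files;
-- :FACIL and :PLANNER are host variables from the prompt screen
DECLARE C1 CURSOR FOR
  SELECT  H.ORDNUM, D.ITEM, D.QTYREQ - D.QTYISS AS QTYOPEN
    FROM  MYLIB/SHOPHDR H
    JOIN  MYLIB/SHOPDTL D  ON D.ORDNUM = H.ORDNUM
   WHERE  H.FACIL   = :FACIL
     AND  H.PLANNER = :PLANNER
     AND  D.QTYISS  < D.QTYREQ
```

The RPG program then just opens the cursor and fetches rows in a loop, never touching the 90-odd percent of the data that a full-file read-and-chain approach would have dragged off disk.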
Other ideas to add later:
- For those files in the program, accessed via RPG as opposed to SQL, that contain facility and other markers relevant to the up-front user selection, substitute a logical that limits the data accessed to that facility or whatever user constraints apply.
This is a very heavily used (popular) in-house program that has experienced a stream of modification requests. Think of a humongous spreadsheet where the vertical axis is facility, customer, item, and info about the item, and the horizontal axis is the grand total due in various date-sensitive buckets.
Long ago this was removed from being able to run interactively because of the big drain it imposed, but even in batch processing it can take several hours to run to a conclusion.
This software is actually a string of programs.
- At one point it is reading 100% of the customer order lines file.
- Certain kinds of records are excluded because:
- they do not match facility of user Prompt Screen or Menu option request;
- order line is coded completed, one way or another
- e.g. total shipments greater than or equal to original order quantity.
Of interest to data slicing: the customer order records are both current requirements and a historical record of recent months' shipments, thus less than 10% of the contents of the file are current requirements.
A logical that says to select only those records in which total order requirements exceed total quantity shipped so far would dramatically reduce the number of records that need to be read from the file ... in other words, instead of the RPG program reading 20,000 records, or whatever the number is, then deciding which 1,000 to process, the logical would deliver only the 1,000 relevant records in the first place.
Such a logical could also exclude records on the basis of coding that says an order has been cancelled or is otherwise no longer relevant.
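A field-to-field test such as "requirements exceed shipments" is easy to express as an SQL view, which the 400 treats much like a (non-keyed) logical file, so far as reading it goes. The names below are placeholders; the status code test assumes a single-character flag, which will vary by shop:

```
-- Placeholder names; the view delivers only open, uncancelled lines
CREATE VIEW MYLIB/OPENLINES AS
  SELECT *
    FROM MYLIB/ORDLINES
   WHERE QTYSHP < QTYORD          -- still something left to ship
     AND LSTATUS <> 'C'           -- not flagged cancelled/closed
```

Pointing the program at OPENLINES instead of ORDLINES means the 90%+ of historical lines are filtered out before the program ever sees them.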