Performance Tuning
This is one of the things you might look at while troubleshooting degradation, but it ought to be checked periodically, to avoid a problem developing without being spotted. Doubtless someone will update this article to reflect improvements in later IBM eServer models. What is actually needed will vary greatly with your 400 hardware, your system load, and your version of OS/400 or i5/OS.
Al Mac has similar notes at work in FIXTUNING, an SEU "document." Some of the terminology Al Mac uses may not be the official terms; in some cases Al Mac has assigned names to techniques learned or figured out outside formal 400 education, without knowing what the proper name would be.
Contents
- 1 User Friendly
- 2 Brain Overload
- 3 Errors and other Clues
- 4 Application Software Review
- 4.1 Batch Considerations
- 4.2 Blocking
- 4.3 CPU Sharing
- 4.4 Call Frequency
- 4.5 Data Slicing
- 4.6 Debug
- 4.7 Disk Access
- 4.8 Logicals
- 4.9 Message Over Load
- 4.10 Open Query File Optimization
- 4.11 Program Performance Optimization
- 4.12 Query
- 4.13 Random Inefficiency
- 4.14 Read Slowly
- 4.15 Run Priority
- 4.16 Screen Restructuring
- 4.17 SQL ADVISOR
- 4.18 Start Stop Start Stop to Infinity
- 4.19 Update Productively
- 4.20 Write Slowly
- 5 Backup Management
- 6 Disk Space Cleaning
- 7 Management Central
- 8 System Values
- 9 Task Scheduling
- 10 Other Resources
User Friendly
Performance Tuning is not just about helping our 400 run more efficiently; it also helps our end users be more productive with the 400 data supplied to them.
Brain Overload
This is a collection of complicated topics that can take a while to wrap our minds around and thoroughly "grok," so pick one area, study it, and leave the others alone until you are ready to move on to something else.
Choices
Pre Requisites
What prior knowledge of the 400 is it smart for you to have, to help you swim in these waters?
Symptoms Told Computer Doctor
Errors and other Clues
Communications Lines
Data Base Monitor
File hit Maximum records
Data File
Spool File
Hogging System Resources
JOBLOG
Job Tracking
Library List vs. Qualified Calls
Messages Management
Performance Measurement
Users work day
Workload Job Accounting
Application Software Review
Typically software is written "Ok," but demands on the system, and the nature of the data, lead to an evolution in how programs are used, such that they are no longer optimized for current usage, and some kind of review becomes worthwhile.
Some software, or modifications, may have been written under rushed conditions, leaving inefficiency that can be reduced by reprogramming. When we are drowning in individual programs, this effort obviously ought to be directed toward programs that are both identifiable as having serious inefficiencies and run very frequently.
Notice that we can dump information about software executables to an *OUTFILE for Query or other analysis. One factoid is the number of days the software was used after creation. The highest numbers mean it is used almost every day.
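As a sketch of one way to capture those usage statistics (the library and outfile names here are placeholders, not anything from this article), the DSPOBJD command can write full object descriptions, including last-used date and days-used count, to an outfile:

<pre>
/* Dump descriptions of all programs in a library to an outfile.  */
/* MYLIB and PGMSTATS are placeholder names.                      */
DSPOBJD OBJ(MYLIB/*ALL) OBJTYPE(*PGM) +
        DETAIL(*FULL) OUTPUT(*OUTFILE) +
        OUTFILE(QTEMP/PGMSTATS)
</pre>

The resulting outfile (model format QADSPOBJ) can then be sorted with Query/400 or SQL on the days-used-count field to surface the most heavily used programs, which are the best candidates for tuning effort.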
Batch Considerations
CPU Intensive
Blocking
CPU Sharing
Call Frequency
Data Slicing
This is a technique Al Mac figured out during performance tuning. Al has no idea what the correct terminology is, and came up with this name to describe the process.
Al Mac has applied this thinking to several BPCS programs, some that came from SSA, some from consultants, and some that we developed in-house.
Data Slicing Theory
This is a program modification that alters the software design. The issue is identifying where it can improve performance sufficiently to justify the effort of doing so.
In Al Mac's experience, most BPCS RPG programs are driven by a Prompt Screen where the user supplies the criteria of interest, such as facility, warehouse range, item range, customer range, and many other factors. The program then either selects that data using OPNQRYF, or launches a program that reads an entire file, rejecting from consideration the records that do not match the selection criteria.
We can compare several programs that in theory are looking at similar data: how long do they take to run? We can send DSPLOG data to an *OUTFILE, capturing when batch jobs started and ended, to get typical statistics on run times, sorted by program name or by run time. If two or more programs ought in theory to be looking at the same data, but have wildly different run times, then the ones that take longer are candidates for some kind of performance improvement.
How much faster might a program execute if it did not look at records outside the criteria of the user's Prompt Screen? How much could this reduce the drain on disk access contended for by all other 400 users?
If particular combinations are rarely to be repeated, consider creating a temporary logical on the fly, based on the Prompt Screen selection criteria.
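As a hedged sketch of the selection idea (the file, field, and literal names below are made-up placeholders, not actual BPCS definitions), OPNQRYF can both select the records matching the Prompt Screen criteria and build a temporary keyed access path before the report program is called, so the report never touches the rejected records:

<pre>
/* Select only the facility and item range the user asked for,    */
/* and key the result on item number.                             */
/* ITEMHIST, FACIL, ITEM, and the literals are placeholders.      */
OVRDBF  FILE(ITEMHIST) SHARE(*YES)
OPNQRYF FILE((MYLIB/ITEMHIST)) +
        QRYSLT('FACIL *EQ "01" *AND +
                ITEM *GE "A000" *AND ITEM *LE "B999"') +
        KEYFLD((ITEM))
CALL    PGM(INVRPT)        /* report program reads the open query */
CLOF    OPNID(ITEMHIST)
DLTOVR  FILE(ITEMHIST)
</pre>

Creating a temporary logical on the fly, as suggested above, accomplishes the same thing with a permanent-style object; the OPNQRYF approach is one alternative that leaves nothing behind when the job ends.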
INV260
This is a BPCS program that came from SSA. It lists inventory on-hand by item or by class for whole company or for a particular facility. There is no provision to exclude inactive items.
Before modification to apply the data slicing concept, this report typically ran to 10,000 pages or more. After data slicing it typically ran to no more than a few hundred pages.
Some managers were only interested in those items for which there was on-hand inventory and a cost variance between standard and actual.