Performance Tuning
This is one of the things you might look at while troubleshooting Degradation, but which in fact ought to be checked periodically, so a problem does not develop without being spotted. Doubtless someone may update this article to reflect improvements in later models of IBM eServers. The reality of what is needed will vary greatly by what you have on your 400 hardware, in terms of system load, and by which version of OS/400 or i5/OS you run.
Al Mac has similar notes at work in FIXTUNING, an SEU "document." Some of the terminology Al Mac uses may not be the correct terms; in some cases Al Mac has assigned a name to a technique learned, or figured out, outside formal 400 education, not knowing what a better name would be.
Contents
- 1 Brain Overload
- 2 Errors and other Clues
- 3 Application Software Review
- 3.1 Batch Considerations
- 3.2 Blocking
- 3.3 CPU Sharing
- 3.4 Call Frequency
- 3.5 Data Slicing
- 3.6 Debug
- 3.7 Disk Access
- 3.8 Logicals
- 3.9 Message Over Load
- 3.10 Open Query File Optimization
- 3.11 Program Performance Optimization
- 3.12 Query
- 3.13 Random Inefficiency
- 3.14 Read Slowly
- 3.15 Run Priority
- 3.16 Screen Restructuring
- 3.17 SQL ADVISOR
- 3.18 Start Stop Start Stop to Infinity
- 3.19 Update Productively
- 3.20 Write Slowly
- 4 Backup Management
- 5 Disk Space Cleaning
- 6 Management Central
- 7 System Values
- 8 Task Scheduling
- 9 Other Resources
Brain Overload
This is a collection of complicated topics that can take a while to wrap our minds around and thoroughly "Grok," so pick one area, study it, and leave the others alone until you are ready to move on to something else.
Choices
Pre Requisites
What prior knowledge of the 400 is it smart for you to have some of, to help you swim in these waters?
Symptoms Told Computer Doctor
Errors and other Clues
Communications Lines
Data Base Monitor
File hit Maximum records
Data File
Spool File
Hogging System Resources
JOBLOG
Job Tracking
Library List vs. Qualified Calls
Messages Management
Performance Measurement
Users work day
Workload Job Accounting
Application Software Review
Typically software is written "Ok," but then demands on the system and the nature of the data lead to an evolution in how programs are used, such that they are no longer optimized for current usage, and some kind of review is worthwhile.
Some software, or modifications, may have been written under rushed conditions, with inefficiencies that could be reduced by reprogramming. Since we are drowning in individual programs, this effort obviously ought to be directed toward programs that are both identifiable as having serious inefficiencies and run very frequently.
Notice that we can dump information about software executables to an *OUTFILE for Query or other analysis. One useful factoid is the number of days a program has been used since creation; the highest counts mean it is used almost every day.
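As a hedged sketch of one way to build that *OUTFILE (the library name BPCSPGM and outfile PGMSTATS are placeholders, and the QADSPOBJ field names are recalled rather than guaranteed):

 /* Dump object descriptions for every program in a library (library   */
 /* and outfile names are placeholders) to a work file in QTEMP.       */
 DSPOBJD    OBJ(BPCSPGM/*ALL) OBJTYPE(*PGM) +
              OUTPUT(*OUTFILE) OUTFILE(QTEMP/PGMSTATS)

 /* The outfile uses the QADSPOBJ record format; fields such as        */
 /* ODOBNM (object), ODCDAT (created), ODUDAT (last used), and         */
 /* ODUCNT (days used count) can then be ranked with Query or SQL.     */
 RUNQRY     QRY(*NONE) QRYFILE((QTEMP/PGMSTATS))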
Batch Considerations
CPU Intensive
Blocking
CPU Sharing
Call Frequency
Data Slicing
This is a technique Al Mac figured out during performance tuning. Al has no idea what the correct terminology is, and came up with this name to describe the process.
Al Mac has applied this thinking to several BPCS programs, some that came from SSA, some from consultants, and some that we developed in-house.
Data Slicing Theory
This is a program modification that alters the software design. The issue is identifying where it can improve performance sufficiently to justify the effort of doing so.
In Al Mac's experience, most BPCS RPG programs are driven by a Prompt Screen where the user supplies the criteria of interest, such as facility, warehouse range, item range, customer range, and many other factors. The program then either selects the relevant records using OPNQRYF, or launches a program that reads an entire file and rejects from consideration the records that do not meet the selection criteria.
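For the OPNQRYF route, the usual pattern is a small CL wrapper that shares its open data path with the report program. A minimal sketch, in which the file ECH, field CUST, program ORD500, and the &CUSTFR/&CUSTTO values are only stand-ins for whatever the Prompt Screen actually collects:

 PGM        PARM(&CUSTFR &CUSTTO)
 DCL        VAR(&CUSTFR) TYPE(*CHAR) LEN(8)
 DCL        VAR(&CUSTTO) TYPE(*CHAR) LEN(8)
 DCL        VAR(&QRYSLT) TYPE(*CHAR) LEN(256)

 /* Build the record selection string from the Prompt Screen values */
 CHGVAR     VAR(&QRYSLT) VALUE('CUST *GE "' *CAT &CUSTFR +
              *CAT '" *AND CUST *LE "' *CAT &CUSTTO *CAT '"')

 /* Share the open data path so the called RPG program sees only    */
 /* the selected records instead of reading the whole file          */
 OVRDBF     FILE(ECH) SHARE(*YES)
 OPNQRYF    FILE((ECH)) QRYSLT(&QRYSLT)
 CALL       PGM(ORD500)   /* the report program; name is a stand-in */
 CLOF       OPNID(ECH)
 DLTOVR     FILE(ECH)
 ENDPGM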
We can compare several programs that, in theory, are looking at similar data: how long do they take to run? We can send DSPLOG data to an *OUTFILE, capturing when batch jobs started and ended, to get typical statistics on run times, sorted by program name or by run time. If two or more programs ought in theory to be looking at the same data but have wildly different run times, then the ones that take longer are perhaps candidates for some kind of performance improvement.
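However the DSPLOG data ends up in a file on your release, the history-log entries of interest can be narrowed to the job start and job completion messages; the completion message text also reports CPU time used. A minimal hedged sketch of that filter, covering today's entries:

 /* CPF1124 = "Job ... started", CPF1164 = "Job ... ended";            */
 /* filtering QHST on these two IDs isolates batch run-time data.      */
 DSPLOG     LOG(QHST) +
              PERIOD((*AVAIL *CURRENT) (*AVAIL *CURRENT)) +
              MSGID(CPF1124 CPF1164) +
              OUTPUT(*PRINT)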
How much faster might a program execute if it did not look at records that are outside the criteria of the user's Prompt Screen? How much could that reduce the drain on disk access, to the benefit of all other 400 users?
If a particular combination of criteria is rarely repeated, consider creating a temporary logical on the fly, based on the Prompt Screen selection criteria.
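OPNQRYF can itself play the role of that temporary logical: its KEYFLD parameter builds a keyed access path over just the selected records, existing only for the duration of the open. A minimal sketch, reusing the stand-in names (ECH, FACIL, CUST, &QRYSLT, ORD500) from the earlier example:

 /* Same pattern as above, but KEYFLD also builds a temporary keyed */
 /* access path over the selected records; it lasts only until CLOF,*/
 /* so nothing permanent is left behind for a rarely used request.  */
 OVRDBF     FILE(ECH) SHARE(*YES)
 OPNQRYF    FILE((ECH)) QRYSLT(&QRYSLT) KEYFLD((FACIL) (CUST))
 CALL       PGM(ORD500)
 CLOF       OPNID(ECH)
 DLTOVR     FILE(ECH)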