After you complete this chapter, you should be able to:
An index is an efficiency mechanism for quickly finding rows within a table. It is needed because data is stored in a table in the order in which it was added (the order of arrival).
Picture yourself ripping all the pages out of this book and then throwing them into the air and mixing them all up. You then gather them into a stack without regard to any sort of order. If you were to look for a single page number within that stack, how would you do it?
Some people might just hunt and peck through the stack hoping to find it. I'm sure that most would soon give up and then turn the pages one by one. Obviously, this job would be easier if the pages were all back in order. With the pages in order, you could start somewhere in the middle and, knowing which way to turn, quickly zero in on the page you want.
Imagine that you have now put the pages back in order. Picking up the newly ordered book, perhaps you begin to look for the page that describes SE16. Where do you look? You don't know the page number, so you look in the index. The index contains the most important words sorted in alphabetical order. You quickly locate SE16 in the index and obtain a page number. Using that page number, you search through the book to find the page.
To create an index, the publisher duplicates important information and then sorts it, placing it at the end of the book apart from the rest. When you look at it, are you looking at the original information? No, of course not. You are looking at a copy of the original information; it has been duplicated. The book takes up a little more space on your bookshelf because of the index, it adds a little to the price of the book because of the time and effort required to create it, but no one complains because it saves so much time when you want to find something.
A table in the database is a lot like this book. Each row is given a sequential number when it is added to a table, much like a page number. The important columns are copied to an index and sorted, much like the index at the back of the book. To find any piece of data, the system searches the index first to find the row number and then quickly finds the row.
Each R/3 table has a primary index built on the primary key of the table. This index is needed because the data in the table is stored in the order it was added to the table. When a row is added, the key information is sorted and copied to the index, along with a pointer to the original row. Figure 5.1 shows an example using table ztxlfa1. The data from the lifnr field in ztxlfa1 is copied to the index, sorted alphabetically, and stored along with the row number of the original row. (The mandt field has been left out to simplify the example.)
NOTE |
The row number is internally assigned by the database; you never see it in the DDIC. It is used by the index to point back to the original row from which the data came. |
When the statement select * from ztxlfa1 where lifnr = '0000001050' is executed, the system recognizes that it has lifnr already sorted in an index, and thus uses it automatically. It finds 1050 in the index and obtains row number 2. It then looks in the table for row number 2.
Now suppose the statement select * from ztxlfa1 where land1 = 'US' is executed. The index is sorted by lifnr, not land1. The land1 column is not in any particular order in the table. The system will have to scan the table from the beginning to the end looking for all rows where land1 = 'US'. The table in Figure 5.1 is very small, but this is a very time-consuming process on a larger table.
Now think back to the book example. Sometimes a book has more than one index. For example, a book on the medicinal uses of herbs might have the usual index, plus an index only for common plant names, another only for botanical names, and a third only for medicinal effects. To create these indexes, information has to be sorted and duplicated in three more places at the back of the book. Then, there are four ways to find a page number quickly: via the commonly used index or any of the three additional indexes.
The commonly used index is analogous to the primary index of a table. The additional indexes are analogous to secondary indexes. To support a where clause on a non-primary key field, a secondary index can be defined. A table can have multiple secondary indexes.
Figure 5.2 illustrates a secondary index on ztxlfa1-land1.
An index increases performance tremendously. In a table containing a million rows, a select on a column containing unique values that is not supported by an index would read, on average, 500,000 records to find the matching one, with a worst-case scenario of one million compares.
If you add an index, the system can use a binary search algorithm to find the record after comparing an average of only 10 records, with a worst-case scenario of 20 records (log2 of 1,000,000 is about 20). That's 50,000 times faster. For this reason, you should ensure that fields in a where clause are always supported by an index. You can do this either by displaying the existing indexes and organizing your logic to use those fields, or by creating a secondary index.
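To get a feel for why a sorted copy of the data can be searched so quickly, here is a minimal sketch in ABAP/4 that copies the vendor numbers into a sorted internal table and then performs a binary search on it. The internal table only illustrates the principle; it is not how the database actually implements its index.

data: begin of itab occurs 0,
        lifnr like ztxlfa1-lifnr,
      end of itab.

* copy the key values and sort them - conceptually, this is
* what building an index does
select lifnr from ztxlfa1 into table itab.
sort itab by lifnr.

* a binary search halves the remaining search area with each
* comparison, so even a huge table needs only a few compares
read table itab with key lifnr = '0000001050' binary search.
if sy-subrc = 0.
  write: / 'Found vendor', itab-lifnr.
endif.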
Each index in R/3 has a unique one- to three-character identifier, or id. The primary index always has an id of 0 (zero). A secondary index id can be any other number or an alphabetic character.
Start the ScreenCam "How to Display Secondary Indexes" now.
To display the existing secondary indexes on a database table, follow this procedure:
Figure 5.3: The secondary index Z in table ztxlfa1 is composed of the fields mandt and land1.
The status of the index is displayed in the Status field. Whether the index exists in the database is indicated by a message appearing below the Status field. In the DB Index Name field is the name of the index as it is known in the database. The index name in the database is formed from the table name, followed by underscores, and then the index id. In the lower part of the window are the fields that comprise the index. The index sort order is the same as the field order.
The preceding procedure will only show secondary indexes. The index on the primary key (index 0) does not appear in this list.
Start the ScreenCam "How to Display All Indexes" now.
To display all indexes, including the primary index, follow these steps:
An index can consist of more than one field. For example, building a secondary index on land1 and regio enables select * from ztxlfa1 where land1 = 'US' and regio = 'MA' to quickly find matching rows. When multiple indexes exist (a primary and one or more secondary indexes), the RDBMS uses an optimizer to pick the best one. To choose an index, the optimizer looks at the field names in the where clause and then it looks for an index having the same field names in the same order as they were specified in the where clause. Therefore, to ensure the system chooses the index you intend, specify the fields in the where clause in the same order as they appear in the index.
If a table begins with the mandt field, so should its indexes. If a table begins with mandt and an index doesn't, the optimizer might not use the index.
Remember, if you will, Open SQL's automatic client handling feature. When select * from ztxlfa1 where land1 = 'US' is executed, the actual SQL sent to the database is select * from ztxlfa1 where mandt = sy-mandt and land1 = 'US'. Sy-mandt contains the current logon client. When you select rows from a table using Open SQL, the system automatically adds sy-mandt to the where clause, which causes only those rows pertaining to the current logon client to be found.
When you create an index on a table containing mandt, therefore, you should also include mandt in the index. It should come first in the index, because it will always appear first in the generated SQL.
For the rest of this discussion, please assume that mandt is included in all indexes.
If you code select * from ztxlfa1 where regio = 'MA', there needs to be an index starting with mandt and regio to ensure that the optimizer uses it. If you code select * from ztxlfa1 where land1 = 'US' and regio = 'MA', there should be an index containing mandt, land1, and regio. The ideal index would also have them in that order. If there wasn't an index in that order, but there was an index starting with mandt and land1, the optimizer would use it to match the first two fields of the where clause, and then perform an index range scan to find a match on regio. An index range scan is a sequential scan through a portion of an index.
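As an illustration of this ordering, assume a secondary index composed of mandt, land1, and regio has been created on ztxlfa1 (the index itself is an assumption for this example; the fields are those used above). The where clause below names the fields in the same order as the index:

tables ztxlfa1.

* Open SQL adds mandt = sy-mandt to the front of the where clause,
* so the generated SQL names the index fields in index order:
* mandt, land1, regio
select * from ztxlfa1
       where land1 = 'US'
         and regio = 'MA'.
  write: / ztxlfa1-lifnr, ztxlfa1-land1, ztxlfa1-regio.
endselect.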
If the where clause contains more fields than the index, the system uses the index to narrow down the search. It then reads records from the table and scans them to find matches. For example, if an index is composed of fields F1 and F2, and you code where F1 = 1 and F2 = 2 and F3 = 3, the system can only use the index to locate records where F1=1 and F2=2. It will read those records from the table to find out which ones have F3=3. Therefore, the index is only partially effective. It would be more effective if it contained all fields: F1, F2, and F3.
How much more effective is the index with F3? It depends on the way the data is distributed in the table. If the combination of F1 and F2 is very specific (or selective) and results in only a few matching rows, the index is already very effective. Adding F3 to the index in this case might not be worthwhile because of the overhead of maintaining the index during table updates. If the combination of F1 and F2 results in a large number of the rows from the table being selected, the index is not very selective and adding F3 will probably increase performance and reduce resource consumption.
When creating or modifying indexes, try not to use a field that already appears in another index. If you do, the optimizer might choose the new index when a preexisting program is run, and that program might run slower than before.
As a rule of thumb, follow these guidelines when you create an index:
NOTE |
There is a maximum of 15 secondary indexes per table allowed by R/3. |
Start the ScreenCam "How to Create a Secondary Index" now.
To create a secondary index:
You have learned how to create a secondary index. Indexes improve the efficiency of the select statement.
NOTE |
If you tickmark the Unique Index check box, the combination of fields in the index is guaranteed to be unique; this is enforced by the RDBMS. An insert or modify statement will fail if the combination of values in index fields is already present in the table. |
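The following minimal sketch shows what this looks like from an ABAP/4 program; the vendor number is hypothetical, and checking sy-subrc is the usual way to detect that the insert was rejected:

tables ztxlfa1.

ztxlfa1-lifnr = '0000009999'.   "hypothetical vendor number
ztxlfa1-land1 = 'US'.
insert ztxlfa1.
if sy-subrc <> 0.
  write: / 'Insert failed: the values already exist in a unique index'.
endif.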
Start the ScreenCam "How to Delete a Secondary Index" now.
To delete a secondary index:
In step six, pressing the Yes button causes the index to be deleted from the database. When you return to the Dictionary: Table/Structure: Display Fields screen, you do not need to save the table. Also note that you cannot undo the delete by pressing the Cancel button on the table display screen.
When a select statement is executed, the optimizer attempts to choose the most appropriate index to speed up data retrieval. If you have several indexes on a table, how can you tell which index is actually being used, or whether any index is being used at all?
To find out, you can use the SQL trace tool.
CAUTION |
Only one person can perform an SQL trace at a time. Remember to turn off the trace when you are finished. SQL trace slows the system down. |
Start the ScreenCam "How to Use SQL Trace to Determine the Index Used" now.
To determine which index is being used for a select statement:
You have learned how to execute an SQL trace. This will help you determine which index is being used.
To display the technical settings for a table, display the table and press the Technical Settings button on the Application toolbar. (If the button is not there, your window might be too narrow. Try widening your window or use the menu path Goto->Technical Settings.) To modify these settings, press the Display <-> Change button on the Application toolbar. The ABAP/4 Dictionary: Maintain Technical Settings screen is displayed, as shown in Figure 5.4.
The data class determines the tablespace that the table is assigned to. (The term "tablespace" applies to Oracle databases. For Informix, substitute the term "DB space.") A tablespace is a physical file on disk that is used to hold tables. Every table is assigned to one tablespace. Tables with similar characteristics are usually grouped into one tablespace, so tablespaces are the administrative units used by the DBA to manage the database. For example, tables that grow quickly will be grouped together in a tablespace on a disk with a lot of free space.
Each data class has an associated tablespace. When you activate a table, it is created in the tablespace associated with that data class. If you change the data class when the table is active, nothing happens; it doesn't move to another tablespace.
The important data classes are:
In addition to these categories, there might be USER data classes as well. These are created by your DBA, who will direct you in choosing them at the appropriate times.
NOTE |
Other data classes can be displayed by pressing the down arrow at the end of the Data Class field and then pressing the System Data Types button. These classes should only be used by SAP; they are for R/3 system tables that hold, for example, Data Dictionary information and program source code. |
The Size Category field enables you to specify the maximum number of records you expect this table to hold. You are setting the size of the initial extent and next extents, as well as the maximum number of extents allowed for this table. An extent is a block of space allocated to a table. The initial extent is the space allocated for a table when it is created. If the table uses up that space, another extent is allocated, extending the table's space.
The size categories are always 0 through 4, but the number of expected records in each size category will change with every table based on the length of a row. Choose an appropriate category based on the maximum number of records you expect that table to hold.
CAUTION |
When choosing a size category, be generous. In R/3 it is much better to overestimate than underestimate this parameter. If the size category is too small and the table grows larger than the initial allocation, the RDBMS allocates another extent. Secondary extents reduce performance and require a tablespace reorganization to remove. Reorganizations can be very difficult to perform on a large database, especially one that requires high availability. It is much easier to reduce an overly generous size category than it is to increase an underestimated one. |
You can display the number of extents allocated for a table by using the following procedure.
Start the ScreenCam "How to Display the Number of Extents Used by a Table" now.
To display the number of extents for a table:
You have learned how to look at the number of extents allocated to a table. This information indicates whether the table needs to be reorganized.
As mentioned on Day 1, data can be buffered in RAM on the application server. This is controlled on a per-table basis, and is specified in the technical settings for each table (refer to Figure 5.4).
Whenever an Open SQL statement is used to read a record, the data buffer is checked first to see whether it is there. If not, the data is read from the database. If the table's attributes indicate that the data should be buffered, the record is saved in RAM on the application server in data buffers. Later, if that record is read again, it is read from the buffer instead of the database. This process is diagrammed in Figure 5.5.
Figure 5.5: Data can be buffered in RAM on the application server.
By buffering data, you increase performance in two important ways:
Buffering a table can cause a select to run 10 to 100 times faster or more. At first thought, it might seem like a good idea to buffer every table in the system. However, buffers are stored entirely in RAM, so space is limited by the amount of RAM available. In fact, there is so much more data than there is RAM that tables must be buffered judiciously to prevent overruns. If a buffer overruns, it might swap to disk, which can obliterate any performance gained by buffering.
Tables containing a numeric data type in the primary key cannot be buffered. The DDIC numeric data types are CURR, DEC, FLTP, INT1, INT2, INT4, PREC, and QUAN.
TIP |
Transaction ST02 displays buffers and usage statistics. On the Tune Summary screen, the data buffers are named Generic Key and Single Record. A double-click on each of these lines will display a detailed analysis. From there, the objects in the buffer can be displayed by pressing the Buffered Objects button on the Application toolbar. |
The Basis consultant is responsible for monitoring the buffers and preventing buffer shortages from slowing the system down. Before buffering a table, it is a good idea to talk to your Basis consultant.
If you have two application servers, the same record can be buffered on each server. This would not be a problem if it were not for the fact that data can be updated. If a buffered record is updated, the changes must be communicated to the other application server. This is done in a process known as buffer synchronization, which occurs at 1- to 4-minute intervals depending on your system configuration.
For example, as shown in Figure 5.6, suppose user 1 is logged on to application server 1 and user 2 is logged on to application server 2. User 1 reads record 1. Record 1 is buffered on application server 1.
Figure 5.6: Step A: User 1 reads record 1, causing it to be buffered on application server 1.
In Figure 5.7, user 2 reads record 1. The same record is now in the buffers of both application servers.
Figure 5.7 : Step B: User 2 also reads record 1, causing it to be buffered on application server 2.
In Figure 5.8, user 1 updates record 1. The update is reflected in both the database and the buffers on server 1, but the old record is still on server 2. An entry is made in the synchronization log on the database server indicating that record 1 has been updated.
In Figure 5.9, user 2 reads record 1. The old version of record 1 is found in the buffer, so user 2 sees old data. User 2 will not see the new version of record 1 until a buffer synchronization occurs.
Figure 5.9: Step D: User 2 reads record 1 and gets stale data.
In Figure 5.10, the buffer synchronization process detects that the synchronization interval has expired and begins synchronization. It reads the synchronization log on the database server and determines that record 1 is out of sync. It then marks that record as invalid in the buffer. It does not yet refresh the record in the buffer from the database.
In Figure 5.11, user 2 reads record 1 again. This time, record 1 has been marked invalid in the buffer, so it is fetched from the database, thereby refreshing the buffer on application server 2.
As shown in Figures 5.6 through 5.11, buffer synchronization uses a synchronization process on each application server and a synchronization log on the database server. At a predefined interval, each synchronization process checks the log to see whether any buffered data has been modified. Data that has been modified is marked invalid in the buffer. The next time that data is read, it is refreshed from the database.
The Basis consultant sets the length of the buffer synchronization interval using the rdisp/bufreftime parameter.
To display it, use transaction RZ10 as follows:
If you code a transaction that updates a buffered table, you must always be sure to go to the database and get the most up-to-date information before allowing a user to change it. The following example illustrates this point. Record locking is mentioned in the following paragraph for the sake of completeness.
Suppose record 1 contains fields F1 and F2. This time, user 2 reads it first, causing it to be buffered on server 2 (see Figure 5.12).
Figure 5.12: Step A: User 2 reads record 1, causing it to be buffered on application server 2.
User 1, now wanting to update record 1, locks it and reads it (see Figure 5.13).
User 1 changes F1, writes it to the database (see Figure 5.14), and unlocks it.
At this point, the buffer on application server 2 still has the original copy of record 1 with the original contents of F1. If user 2 locks record 1 and reads it (from the buffer), changes F2 (see Figure 5.15), and then saves it to the database, the changes made by user 1 will be lost (see Figure 5.16).
Figure 5.15: Step D: User 2 reads the old record 1 and locks it.
Figure 5.16: Step E: User 2 saves record 1 to the database, overwriting user 1's change.
Use bypassing buffer on the select statement when you need the most up-to-date information, for example, select * from ztxlfa1 bypassing buffer. Doing so causes the buffer to be ignored and the data to be read directly from the database. You should use this whenever you are reading for update, that is, reading a record that will be updated and written back to the database.
For example, in step D, if user 2 used bypassing buffer to read the record, the most up-to-date version of the record would have been read and user 1's changes would not have been lost.
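A minimal sketch of this read-for-update pattern follows. The lock would normally be set and released with the enqueue and dequeue function modules generated for your lock object (not shown, because their names depend on the lock object); the point here is that the read goes straight to the database:

tables ztxlfa1.

* (call the enqueue function module for your lock object here)

* read the current record from the database, not from the buffer
select single * from ztxlfa1 bypassing buffer
       where lifnr = 'V1'.
if sy-subrc = 0.
  ztxlfa1-land1 = 'CA'.     "change a field
  update ztxlfa1.           "write the changed record back
endif.

* (call the dequeue function module for your lock object here)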
TIP |
Bypassing the buffer for updates provides the best balance between safety and speed. However, if you can be sure that all updates are done from a single application server, you don't need to code bypassing buffer. Bypassing the buffer does slow the update process a little because a database access is required. If you need the maximum amount of speed for updates, and if you can be sure that only one application server will be used, this alternative might work for you. Talk to your Basis consultant to find out whether it can be arranged. |
You can take one of three buffering approaches for tables that can be updated. In order of best to worst performance, they are as follows:
If a table is updated often, do not use the bypassing buffer approach. Each update will cause the buffers on all other application servers to be reloaded at the next synchronization. In this case, buffering could actually increase the total network traffic instead of decreasing it.
The lower half of the Technical Settings screen (shown previously in Figure 5.4) contains the controls for buffer settings. You can set the buffering for a table when you create it, or you can change it later. If you created the table yourself, you have complete control over these settings. To change them on a SAP table, you need to obtain a modification key from SAP.
These three radio buttons switch buffering off and on:
Choose Buffering Not Allowed if the table should never be buffered, which occurs in the following cases:
Choose either Buffering Allowed But Turned Off or Buffering Turned On if the users of this data can tolerate a 1- to 4-minute buffer latency. Don't forget that this latency will not apply when you update data, because you will code bypassing buffer to get the latest information.
CAUTION |
Buffering Not Allowed on a SAP table means you should not turn buffering on for this table. Doing so can cause unreliable operation of R/3 applications and possible loss or corruption of data. |
As shown on the Technical Settings screen in Figure 5.4, three types of buffering are possible:
Although you use a check box to indicate the buffering type, only one type can be selected at a time. If you choose more than one, an error message will be displayed.
There are two data buffers on each application server, and the buffering type chosen determines which buffer is used. On an application server there are:
The generic record buffer is called TABL (short for "table"). The single-record buffer is TABLP ("P" stands for "partial").
To activate the full buffering type, tickmark the Full check box. When an attempt is made to read data from a fully buffered table, the system looks in the TABL buffer for it. If it is not found there, all rows are loaded from the database into TABL (see Figure 5.17). This happens whenever a select statement for it is executed, no matter how many records match the where clause. Even if no records match, all are loaded into the buffer if the table is not already there or if it has been invalidated due to buffer synchronization.
Figure 5.17: All rows of a fully buffered table are loaded into TABL.
Loading of TABL does not occur with select single statements; only select/endselect does this. If the table is fully buffered and a select single statement is executed, no records are loaded into TABL. However, if the table is already loaded into TABL, select single will read records from it.
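Assuming ztxlfa1 is fully buffered, the sketch below contrasts the two statements; only the first one can load the TABL buffer:

tables ztxlfa1.

* select/endselect on a fully buffered table: loads all rows into
* TABL (if they are not already there) and reads from TABL
select * from ztxlfa1 where land1 = 'US'.
  write: / ztxlfa1-lifnr.
endselect.

* select single: never loads TABL; it reads from TABL only if the
* table already happens to be there
select single * from ztxlfa1 where lifnr = 'V1'.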
During buffer synchronization, the entire table is invalidated if any record within it changes. The next time data is read the entire table is reloaded.
TIP |
If a select statement is executed but quite often does not find data in the table, full buffering can still make the table accesses more efficient. Because the entire table is in the buffer, the system can determine whether the records exist before going to the database, thus avoiding a database access to find out. |
Full buffering is appropriate for small tables that seldom change. Check tables and their text tables, such as ztxt005, ztxt005t, ztxt005s, and ztxt005u, should be fully buffered. They are set up with an initial set of valid values and descriptions and modified infrequently thereafter.
With generic buffering, a group of records is loaded into TABL instead of the entire table. To activate this type of buffering, tickmark the Generic check box and put a number n in the No. of Key Fields so that the n left-most fields of the primary key will group records. When a record is read and n is 1, all records having the same value in the first field of the primary key are loaded into the buffer. When a record is read and n is 2, all records having the same value in the first two fields of the primary key are loaded into the buffer. This is illustrated in Figure 5.18.
Figure 5.18: Generic buffering. n must be less than the number of fields in the primary key.
In another example, suppose you choose generic buffering for table ztxlfa1 and set the No. of Key Fields to 1. Because the first field is mandt, all records for a client are loaded into the buffer at one time.
NOTE |
If you specify full buffering for a client-dependent table (having mandt in the first column), the system automatically buffers it generically with n equal to 1. That is, when a record is read, only records having the same client number are loaded into the buffer. You cannot see this reflected in the technical settings; they continue to indicate full buffering. It just happens "under the covers." |
Generic buffering is suitable for tables in which records are usually accessed in sets or groups. It is suitable if, when one record of the group is accessed, more accesses will usually occur for records from the same group. For example, if a user logs on in English, it is highly probable that only English descriptions will be read by that user and by other users on the same application server. Text tables, where descriptions are stored, always have mandt and spras (the language code) as the first two fields. Therefore, generic buffering with n equal to 2 is appropriate.
CAUTION |
Each record group in the buffer requires administrative overhead. Try to have a relatively large number of records per group. With generic buffering, if you end up loading most of the rows of a table and they are loaded into many small groups, a fully buffered table will be more efficient. |
During buffer synchronization, if a table is buffered generically and a record is invalidated in the buffer, all records in the group are invalidated. The next read of any record in that group will cause the entire group to be reloaded.
Tickmarking the Single Records check box turns on single-record buffering. With this type of buffering, select single reads one record at a time into the single-record buffer TABLP, as shown in Figure 5.19. With this buffering type, records are only buffered when the select single statement is executed. Select/endselect does not load or read TABLP.
For example, if table ztxlfa1 has a buffering type of single records, select single * from ztxlfa1 where lifnr = 'V1' causes the TABLP buffer to be searched for record V1. If it is found there, that copy is used. If it is not found, it is loaded into TABLP from the database.
Even if a record is not found, an entry is still made in the single-record buffer. The entry indicates that the record does not exist in the database. The next time that record is requested, the buffer can be used to resolve the request and a database access is not necessary.
For example, suppose vendor X1 does not exist in the database. Executing select single * from ztxlfa1 where lifnr = 'X1' would cause an entry to be created in the single record buffer to indicate that record X1 does not exist in the database. The next time vendor X1 is requested, the system looks in the buffer and discovers that it does not exist and does not need to go to the database.
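From the program's point of view the result looks the same either way; only the source of the "not found" answer differs. A minimal sketch, using vendor X1 from the example above:

tables ztxlfa1.

select single * from ztxlfa1 where lifnr = 'X1'.
if sy-subrc <> 0.
  " the first such read checks the database; later reads are
  " answered by the "does not exist" entry in TABLP
  write: / 'Vendor X1 does not exist'.
endif.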
Single-record buffering is appropriate for very large tables in which a small percentage of the records are read often. For single-record buffering to work, records must be read using the select single statement.
In summary, there are two buffers: TABL (the generic record buffer) and TABLP (the single record buffer). There are also two varieties of the select statement: select and select single. The select statement loads TABL and the select single statement loads TABLP.
When reading, select only reads TABL; it ignores TABLP. select single reads both buffers. This behavior is summarized in Figure 5.20. Remember that a record can only be in one buffer at a time because a table can only have one buffering type.
Figure 5.20: How the select statements interact with the buffers.
TIP |
To clear the contents of both TABL and TABLP, type /$tab in the Command field and press the Enter key. You can use this method after you turn off buffering to clear the buffers before testing the difference in a program's performance. Note that this can slow system performance in general for up to a couple of hours until the buffers are again repopulated. |
In TABL, old records are not discarded when new ones are loaded. Instead, entire tables are periodically unloaded from the buffer. The length of time between unloads and which tables are unloaded is determined by a caching algorithm within R/3. This complex algorithm is based on the amount of space used by a table in the buffer versus the number of read accesses on it in the previous period, and also on the amount of free space available in the buffer and the current access quality of the buffer.
If a table is frequently accessed without being updated often, it should be buffered. You will need to consult your Basis administrator to find out if there is enough room in the buffer to add your table. If there is not, and your table will be accessed often, it might be possible to increase the amount of RAM allocated to the buffers. Alternatively, another table accessed less frequently than yours could be changed so that it is no longer buffered.
Records in a buffer have their own primary index built in RAM. When records are needed, the index is used to locate them in the buffer the same way the primary index in the database is used to locate records in a table. However, secondary indexes are not created in the buffer, nor are the database secondary indexes used to access buffered data. Therefore, when coding a select ... where statement on a buffered table, use primary key fields in the where clause beginning with the first field of the primary key and using as many in sequence from the primary key as you can. If the first field of the primary key is missing from the where clause (not counting mandt), a full scan of the table in the buffer will be performed. For example, if table ztxlfa1 has been fully buffered, select * from ztxlfa1 where lifnr = 'V1' will be supported by the buffer's primary index. Select * from ztxlfa1 where land1 = 'CA' will cause a full table scan in the buffer to occur.
Being unable to use secondary indexes causes a peculiar catch-22 effect if you fully buffer large tables that have secondary indexes. Buffering such a table will cause selects on the primary index to speed up, but it will possibly slow selects on secondary indexes down. A select statement on a large table (size category 1 or greater) that is fully supported by a secondary index in the database could run slower against the buffer because a secondary index is not available there. How much slower (if at all) will depend on the amount of data in the table. To compensate for this problem, avoid the buffer and use the secondary indexes by adding bypassing buffer to the select statement where a secondary index should be used.
For example, table TSTC contains over 12,000 rows, is size category 1, is fully buffered, and has a secondary index on its pgmna field. Measurements show that select * from tstc bypassing buffer where pgmna = 'ZTXTEST' ran 18 times faster on average than select * from tstc where pgmna = 'ZTXTEST', even though the table is fully buffered. The first select was resolved in the database using a secondary index on the pgmna field; the second performed a full table scan in the buffer.
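A minimal sketch of how such a comparison could be timed in a test program follows; remove bypassing buffer to time the buffered variant (and note that the very first buffered run also includes the cost of loading the table into TABL):

tables tstc.
data: t1 type i, t2 type i.

get run time field t1.
select * from tstc bypassing buffer
       where pgmna = 'ZTXTEST'.
endselect.
get run time field t2.
write: / 'Runtime in microseconds:', t2 - t1.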
CAUTION |
Be very careful when using bypassing buffer on buffered tables that are also updated. If you use it on one select statement, you should use it on all select statements for that table and all related tables within a single transaction. If you don't, you will in effect be mixing buffered and unbuffered data. This can cause data inconsistencies in your program and possibly in the database if your program also does updates. |
DO place the fields in the where clause in the same order as they appear in the index. | DON'T code a select that is unsupported by an index. |
DO be generous when choosing a size category. | DON'T choose a size category that will cause a table to require a secondary extent. |
DO use bypassing buffer when reading a record for update from a buffered table. |
Q I can't display some of the screens that you mentioned. I get the error message You are not authorized to use transaction XXXX. How can I see these screens?
A Ask your security administrator for authorization.
Q When I display the data buffers using ST02, I see a lot of other buffers there, too. What are they all for?
A Data buffers are only two of the many buffers kept on an application server. The nametabs (table runtime objects) are all buffered, as well as all programs, menus and toolbars (a.k.a. CUA or GUI status), screens, and calendars. All of these buffers are listed in ST02.
The Workshop provides two ways for you to affirm what you've learned in this chapter. The Quiz section poses questions to help you solidify your understanding of the material covered, and the Exercise section provides you with experience in using what you have learned. You can find answers to the quiz questions and exercises in Appendix B, "Answers to Quiz Questions and Exercises."
What is the data class for tables MARA, LFA1, and KNA1? What is the size category for each?