Jul 22, 2024
Database Block Buffer Cache- Memory Structures

So far, we have looked at relatively small components of the SGA. Now we are going to look at one that is potentially huge in size. The block buffer cache is where Oracle stores database blocks before writing them to disk and after reading them in from disk.

This is a crucial area of the SGA for us. Make it too small and our queries will take forever to run. Make it too big and we’ll starve other processes (e.g., we won’t leave enough room for a dedicated server to create its PGA, and we won’t even get started).

We have three places to store cached blocks from individual segments in the SGA:

• Default pool: The location where all segment blocks are normally cached. This is the original—and, previously, the only—buffer pool.
• Keep pool: An alternate buffer pool where by convention you assign segments that are accessed fairly frequently, but still get aged out of the default buffer pool due to other segments needing space.
• Recycle pool: An alternate buffer pool where by convention you assign large segments that you access very randomly and which would therefore cause excessive buffer flushing of many blocks from many segments. There’s no benefit to caching such segments because by the time you wanted the block again, it would have been aged out of the cache. You would separate these segments out from the segments in the default and keep pools so they would not cause those blocks to age out of the cache.

Note that in the keep and recycle pool descriptions I used the phrase “by convention.” There is nothing in place to ensure that you use either the keep pool or the recycle pool in the fashion described. In fact, the three pools manage blocks in a mostly identical fashion; they do not have radically different algorithms for aging or caching blocks.

The goal here was to give the DBA the ability to segregate segments to hot, warm, and “do not care to cache” areas. The theory was that objects in the default pool would be hot enough (i.e., used enough) to warrant staying in the cache all by themselves.

The cache would keep them in memory since they were very popular blocks. If you had some segments that were fairly popular but not really hot, these would be considered the warm blocks. These segments’ blocks could get flushed from the cache to make room for blocks you used infrequently (the “do not care to cache” blocks).

To keep these warm segments’ blocks cached, you could do one of the following:

• Assign these segments to the keep pool, in an attempt to let the warm blocks stay in the buffer cache longer (see the sizing sketch just after this list).

• Assign the “do not care to cache” segments to the recycle pool, keeping the recycle pool fairly small so as to let the blocks come into the cache and leave the cache rapidly (decreasing the overhead of managing them all).
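As a concrete sketch of that assignment: the keep and recycle pools are sized with their own parameters, and a segment is placed in a pool via its STORAGE clause. The table names and sizes below are hypothetical, purely for illustration (and depending on how your SGA is configured, resizing the pools may require a restart):

SQL> alter system set db_keep_cache_size = 256m scope=both;
SQL> alter system set db_recycle_cache_size = 64m scope=both;
SQL> alter table warm_lookup storage ( buffer_pool keep );
SQL> alter table big_random storage ( buffer_pool recycle );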

Having to do one of these two things increased the management work the DBA had to perform, as there were three caches to think about, size, and assign objects to. Remember also that there is no sharing among them, so if the keep pool has lots of unused space, it won’t give it to the overworked default or recycle pool.

All in all, these pools were generally regarded as a very fine, low-level tuning device, only to be used after most other tuning alternatives had been looked at (if I could rewrite a query to do one-tenth the I/O rather than set up multiple buffer pools, that would be my choice).

There are up to four more optional caches, the DB_nK_CACHE_SIZE, to consider in addition to the default, keep, and recycle pools. These caches were added in support of multiple block sizes in the database. A database can have a default block size, which is the size of the blocks stored in the default, keep, or recycle pool, as well as up to four nondefault block sizes, as explained in Chapter 3.
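For example, to store a tablespace with 16KB blocks in a database whose default block size is 8KB, you would size the corresponding cache first and then create the tablespace. A minimal sketch; the tablespace name, file name, and sizes here are my own, not from the original text:

SQL> alter system set db_16k_cache_size = 128m scope=both;
SQL> create tablespace sixteen_k
     datafile '/tmp/sixteen_k.dbf' size 100m
     blocksize 16k;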

The blocks in these buffer caches are managed in the same way as the blocks in the original default pool—there are no special algorithm changes for them either. Let’s now move on to see how the blocks are managed in these pools.

Jun 22, 2024
Fixed SGA- Memory Structures

The fixed SGA is a component of the SGA that varies in size from platform to platform and from release to release. It is “compiled” into the Oracle binary itself at installation time (hence the name “fixed”). The fixed SGA contains a set of variables that point to the other components of the SGA, as well as variables that contain the values of various parameters. The size of the fixed SGA is something over which we have no control, and it is generally very small. Think of this area as a “bootstrap” section of the SGA—something Oracle uses internally to find the other bits and pieces of the SGA.

Redo Buffer

The redo buffer is where data that needs to be written to the online redo logs is cached temporarily, before it is written to disk. Since a memory-to-memory transfer is much faster than a memory-to-disk transfer, use of the redo log buffer speeds up database operation. The data will not reside in the redo buffer for very long. In fact, LGWR initiates a flush of this area to disk in one of the following scenarios:

• Every three seconds
• Whenever a COMMIT or ROLLBACK is issued
• When LGWR is asked to switch log files
• When the redo buffer gets one-third full or contains 1MB of cached redo log data

Oracle recommends setting the redo log buffer size to a minimum of 8MB. If you’re using flashback functionality and have an SGA greater than 4GB, then Oracle recommends setting the log buffer to at least 64MB. If you’re using Data Guard with asynchronous redo transport and have a high redo generation rate, then Oracle recommends setting this to at least 256MB.

A large system with lots of concurrent transactions might benefit somewhat from a large redo log buffer because while LGWR (the process responsible for flushing the redo log buffer to disk) is writing a portion of the log buffer, other sessions could be filling it up. In general, a long-running transaction that generates a lot of redo will benefit the most from a larger than normal log buffer, as it will be continuously filling up part of the redo log buffer while LGWR is busy writing out some of it (we’ll cover the phenomenon of writing uncommitted data at length in Chapter 9). The larger and longer the transaction, the more benefit it could receive from a generous log buffer.

The default size of the redo buffer, as controlled by the LOG_BUFFER parameter, varies widely by operating system, database version, and other parameter settings. Rather than try to explain what the most common default size is (there isn’t such a thing), I’ll refer you to the documentation for your release of Oracle (the Oracle Database Reference guide). My default LOG_BUFFER—given the instance we just started earlier with a 1.5GB SGA—is shown by the following query:

SQL> select value, isdefault from v$parameter where name = 'log_buffer';

The size is about 7MB. The minimum size of the default log buffer is OS dependent. If you’d like to find out what that is, just set your LOG_BUFFER to 1 byte and restart your database. For example, on my Oracle Linux instance, I see the following:

SQL> alter system set log_buffer=1 scope=spfile;
SQL> startup force;
SQL> show parameter log_buffer

The smallest log buffer I can really have, regardless of my settings, is going to be 1.680MB on this system.

Note For most database applications, the default value for the LOG_BUFFER parameter is sufficient. If you see a large number of waits associated with the log buffer space wait event, then consider increasing the LOG_BUFFER parameter.
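If you’d like to check for those waits, a quick query against V$SYSTEM_EVENT will show them; no rows (or trivially small counts) means the default log buffer is doing fine:

SQL> select event, total_waits, time_waited
       from v$system_event
      where event = 'log buffer space';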

Mar 22, 2024
Determining How the Memory Is Allocated- Memory Structures-1

Questions that come up frequently are “How is this memory allocated?” and “What will be the amount of RAM used by my session?” These are hard questions to answer for the simple reason that the algorithms for serving out memory under the automatic scheme are not documented and can and will change from release to release. When using things that begin with “A”—for automatic—you lose a degree of control, as the underlying algorithms decide what to do and how to control things.

We can make some observations based on information from Oracle Support notes 147806.1 and 223730.1:

• The PGA_AGGREGATE_TARGET is a goal, an upper limit. It is not a value that is preallocated when the database is started up. You can observe this by setting the PGA_AGGREGATE_TARGET to a value much higher than the amount of physical memory you have available on your server. You will not see any large allocation of memory as a result (one caveat: if you’ve set MEMORY_TARGET, and then set PGA_AGGREGATE_TARGET to a value larger than MEMORY_TARGET, on instance startup Oracle throws an ORA-00838 error and won’t let you start your instance).

• The amount of PGA memory available for a given session is derived from the setting of PGA_AGGREGATE_TARGET. The algorithm for determining the maximum size used by a process varies by database version. The amount of PGA memory a process is allocated is typically a function of the amount of memory available and the number of processes competing for space.

• As the workload on your instance goes up (more concurrent queries, concurrent users), the amount of PGA memory allocated to your work areas will go down. The database will try to keep the sum of all PGA allocations under the threshold set by PGA_AGGREGATE_TARGET. This is analogous to having a DBA sit at a console all day, setting the SORT_AREA_SIZE and HASH_AREA_SIZE parameters based on the amount of work being performed in the database. We will directly observe this behavior shortly in a test.
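One easy way to observe this balancing act at the instance level is V$PGASTAT, which reports the target alongside what has actually been allocated. For example:

SQL> select name, value
       from v$pgastat
      where name in ( 'aggregate PGA target parameter',
                      'total PGA allocated',
                      'maximum PGA allocated' );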

OK, so how can we observe the different work area sizes being allocated to our session? By running some test scripts to observe the memory used by our session and the amount of I/O to temp we performed. I performed the following test on an Oracle Linux machine with four CPUs and dedicated server connections. We begin by creating a table to hold the metrics we’d like to monitor (the following code is placed in a file named stats.sql):
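The contents of stats.sql are not reproduced in this excerpt; a minimal sketch consistent with the column descriptions that follow would be something like this (the table name and datatypes are my assumptions):

SQL> create table sess_stats
     ( name   varchar2(64),
       value  number,
       active number
     );
Table created.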

The columns in this table that we’ll be using for the metrics represent the following:

• NAME: The name of the statistic we are gathering (PGA and UGA information from V$SESSTAT for the current session, plus all of the memory information for the database instance as well as temporary tablespace writes).

• VALUE: The value of the given metric.

• ACTIVE: The number of other sessions doing work in the instance. Before we start, we assume an “idle” instance; we are the only user session right now, hence the value of zero.

Next, create a table T as follows:
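The DDL for T is not included in this excerpt. Given that the test needs a table large enough to force sorting to the temporary tablespace, a plausible stand-in (my assumption, not the original definition) would be:

SQL> create table t as select * from all_objects;
Table created.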

Oct 22, 2023
Fast Recovery Area (FRA)- Files

The FRA in Oracle is a location where the database will manage many of the files related to database backup and recovery. In this area (an area being a part of a disk set aside for this purpose; a directory, for example), you could find the following:

• RMAN backup pieces (full and/or incremental backups)
• RMAN image copies (byte-for-byte copies of datafiles and controlfiles)
• Online redo logs
• Archived redo logs
• Multiplexed control files
• Flashback logs

Oracle uses this new area to manage these files, so the server will know what is on disk and what is not on disk (and perhaps on tape elsewhere). Using this information, the database can perform operations like a disk-to-disk restore of a damaged datafile or the flashing back (a “rewind” operation) of the database to undo an operation that should not have taken place. For example, you could use the FLASHBACK DATABASE command to put the database back the way it was five minutes ago (without doing a full restore of the database and a point-in-time recovery). That would allow you to “undrop” that accidentally dropped user account.
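As a sketch of that rewind operation: FLASHBACK DATABASE requires flashback logging to be enabled and the database mounted rather than open, so the sequence looks roughly like this:

SQL> shutdown immediate
SQL> startup mount
SQL> flashback database to timestamp systimestamp - interval '5' minute;
SQL> alter database open resetlogs;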

To set up an FRA, you need to set two parameters: db_recovery_file_dest_size and db_recovery_file_dest. You can view the current settings of these parameters via

SQL> show parameter db_recovery_file_dest

NAME                           TYPE        VALUE
------------------------------ ----------- ----------------
db_recovery_file_dest          string      /opt/oracle/fra
db_recovery_file_dest_size     big integer 100G
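Getting to those settings is just a pair of ALTER SYSTEM commands; note that the size must be in place before the destination can be set (the values here simply match the output above):

SQL> alter system set db_recovery_file_dest_size = 100g scope=both;
SQL> alter system set db_recovery_file_dest = '/opt/oracle/fra' scope=both;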

The Fast Recovery Area is more of a logical concept. It is a holding area for the file types discussed in this chapter. Its use is optional—you don’t need to use it, but if you want to use some advanced features, such as the Flashback Database, you must use this area to store the information.

Data Pump Files

Data Pump is a file format used by at least two tools in Oracle. External tables can load and unload data in the Data Pump format, and the import/export tools IMPDP and EXPDP use this file format.

They are cross-platform (portable) binary files that contain metadata (not stored in CREATE/ALTER statements, but rather in XML) and possibly data.

The use of XML as the metadata representation is actually relevant to you and me as end users of the tools. IMPDP and EXPDP have some sophisticated filtering and translation capabilities.

This is possible in part because a CREATE TABLE statement is not stored as literal CREATE TABLE text, but rather as a marked-up document.

That permits easy implementation of a request like “Please replace all references to tablespace FOO with tablespace BAR”: IMPDP just has to apply a simple XML transformation. FOO, when it refers to a tablespace, would be surrounded by TABLESPACE tags (or some other similar representation), making such references easy to identify and rewrite.
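In fact, that exact request is exposed as a command-line parameter of IMPDP. A sketch, with hypothetical credentials and file names:

$ impdp tkyte/password directory=TMP dumpfile=export.dmp remap_tablespace=FOO:BAR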

In Chapter 15, we’ll take a closer look at these tools. Before we get there, however, let’s see how we can use this Data Pump format to quickly extract some data from database A and move it to database B. We’ll be using an “external table in reverse” here.

External tables give us the ability to read flat files—plain old text files—as if they were database tables. We have the full power of SQL to process them. They are read-only and designed to get data from outside Oracle in. External tables can also go the other way: they can be used to get data out of the database in the Data Pump format to facilitate moving the data to another machine or another platform. To start this exercise, we’ll need a DIRECTORY object, telling Oracle the location to unload to:

SQL> create or replace directory tmp as '/tmp';
Directory created.

SQL> create table all_objects_unload
     organization external
     ( type oracle_datapump
       default directory TMP
       location( 'allobjects.dat' )
     )
     as
     select * from all_objects;

Table created.

And that literally is all there is to it: we have a file in /tmp named allobjects.dat that contains the contents of the query select * from all_objects. We can peek at this information:

SQL> !strings /tmp/allobjects.dat | head
x86_64/Linux 2.4.xx
AL32UTF8
19.00.00.00.00
001:001:000001:000001
i<?xml version="1.0" encoding="UTF-8"?…

That’s just the head, or top, of the file. Now, using a binary SCP, you can move that file to any other platform where you have Oracle installed and by issuing a CREATE DIRECTORY statement (to tell the database where the file is) and a CREATE TABLE statement, such as this:

SQL> create table t
     ( OWNER          VARCHAR2(30),
       OBJECT_NAME    VARCHAR2(30),
       SUBOBJECT_NAME VARCHAR2(30),
       OBJECT_ID      NUMBER,
       DATA_OBJECT_ID NUMBER,
       OBJECT_TYPE    VARCHAR2(19),
       CREATED        DATE,
       LAST_DDL_TIME  DATE,
       TIMESTAMP      VARCHAR2(19),
       STATUS         VARCHAR2(7),
       TEMPORARY      VARCHAR2(1),
       GENERATED      VARCHAR2(1),
       SECONDARY      VARCHAR2(1)
     )
     organization external
     ( type oracle_datapump
       default directory TMP
       location( 'allobjects.dat' )
     );

You would be set to read that unloaded data using SQL immediately. That is the power of the Data Pump file format: immediate transfer of data from system to system, over “sneakernet” if need be. Think about that the next time you’d like to take a subset of data home to work with over the weekend while testing.

Even if the database character sets differ (they did not in this example), Oracle has the ability now to recognize the differing character sets due to the Data Pump format and deal with them. Character set conversion can be performed on the fly as needed to make the data “correct” in each database’s representation.

Again, we’ll come back to the Data Pump file format in Chapter 15, but this section should give you an overall feel for what it is about and what might be contained in the file.

Jul 22, 2023
Manual PGA Memory Management- Memory Structures

There are only a few scenarios that I can think of where manual PGA memory management may be appropriate. One is if you’re running an old version of Oracle and do not have the option to upgrade.

Another is when you run large batch jobs during periods when they are the only activity in the instance; in that case, manual PGA management may provide performance benefits.

It’s therefore possible you’ll find yourself in an environment where this type of PGA memory management is used. Other than in those scenarios, the manual management style should be avoided.

In manual PGA memory management, the following are the parameters that have the largest impact on the size of your PGA, outside of the memory allocated by your session for PL/SQL tables and other variables:

• SORT_AREA_SIZE: The total amount of RAM that will be used to sort information before swapping out to disk (using disk space in the temporary tablespace the user is assigned to).

• SORT_AREA_RETAINED_SIZE: The amount of memory that will be used to hold sorted data after the sort is complete. That is, if SORT_AREA_SIZE is 512KB and SORT_AREA_RETAINED_SIZE is 256KB, your server process would use up to 512KB of memory to sort data during the initial processing of the query. When the sort is complete, the sorting area would “shrink” down to 256KB, and any sorted data that does not fit in that 256KB would be written out to the temporary tablespace.

• HASH_AREA_SIZE: The amount of memory your server process can use to store hash tables in memory. These structures are used during a hash join, typically when joining a large set with another set. The smaller of the two sets would be hashed into memory, and anything that didn’t fit in the hash area region of memory would be stored in the temporary tablespace by the join key.

Note Before using the parameters in the prior list, you must set the WORKAREA_SIZE_POLICY parameter to MANUAL.
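Putting that together, a session experimenting with manual management might issue something like the following (the sizes are arbitrary, for illustration only):

SQL> alter session set workarea_size_policy = manual;
SQL> alter session set sort_area_size = 10485760;
SQL> alter session set hash_area_size = 10485760;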

These parameters control the amount of space Oracle will use to sort or hash data in memory before using the temporary tablespace on disk, and how much of that memory segment will be retained after the sort is done. The SORT_AREA_SIZE (minus SORT_AREA_RETAINED_SIZE) calculated value is generally allocated out of your PGA, and the SORT_AREA_RETAINED_SIZE value will be in your UGA.

Here are the important things to remember about using the *_AREA_SIZE parameters:

• These parameters control the maximum amount of memory used by a SORT, HASH, or BITMAP MERGE operation.

• A single query may have many operations taking place that use this memory, and multiple sort/hash areas could be created. Remember that you may have many cursors opened simultaneously, each with its own SORT_AREA_RETAINED needs. So, if you set the sort area size to 10MB, you could use 10, 100, 1000, or more megabytes of RAM in your session. These settings are not session limits; rather, they are limits on a single operation, and your session could have many sorts in a single query or many queries open that require a sort.

• The memory for these areas is allocated on an “as needed” basis. If you were to set the sort area size to 1GB, it wouldn’t mean you’ll allocate 1GB of RAM. It only means that you’ve given the Oracle process permission to allocate that much memory for a sort/hash operation.

Now that we’ve briefly reviewed manual PGA memory management, let’s move on to what you should be using, automatic PGA memory management.

Aug 22, 2022
Online Redo Log- Files-1

Every Oracle database has at least two online redo log file groups. Each redo log group consists of one or more redo log members (redo is managed in groups of members). The individual redo log file members of these groups are true mirror images of each other.

These online redo log files are fixed in size and are used in a circular fashion. Oracle will write to log file group 1, and when it gets to the end of this set of files, it will switch to log file group 2 and overwrite the contents of those files from start to end.

When it has filled log file group 2, it will switch back to log file group 1 (assuming we have only two redo log file groups; if we have three, it would, of course, proceed to the third group). This is shown in Figure 3-4.

Figure 3-4.  Writing to log file groups
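You can see this circular arrangement for yourself by querying V$LOG; the group marked CURRENT is the one being written to at the moment:

SQL> select group#, sequence#, status from v$log;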

The act of switching from one log file group to another is called a log switch. It is important to note that a log switch may cause a temporary “pause” in a poorly configured database. Since the redo logs are used to recover transactions in the event of a failure, we must be certain we won’t need the contents of a redo log file before we are able to use it. If Oracle isn’t sure that it won’t need the contents of a log file, it will suspend operations in the database momentarily and make sure that the data in the cache that this redo “protects” is safely written (checkpointed) onto disk. Once Oracle is sure of that, processing will resume and the redo file will be reused.

We’ve just started to talk about a key database concept: checkpointing. To understand how online redo logs are used, you’ll need to know something about checkpointing, how the database buffer cache works, and what a process called database block writer (DBWn) does. The database buffer cache and DBWn are covered in more detail later on, but we’ll skip ahead a little anyway and touch on them now.

The database buffer cache is where database blocks are stored temporarily. This is a structure in Oracle’s SGA. As blocks are read, they are stored in this cache, hopefully so we won’t have to physically reread them later.

The buffer cache is first and foremost a performance tuning device. It exists solely to make the very slow process of physical I/O appear to be much faster than it is.

When we modify a block by updating a row on it, these modifications are done in memory to the blocks in the buffer cache. Enough information to redo or to replay this modification is stored in the redo log buffer, another SGA data structure.

When we COMMIT our modifications, making them permanent, Oracle does not go to all of the blocks we modified in the SGA and write them to disk. Rather, it just writes the contents of the redo log buffer out to the online redo logs.

As long as that modified block is in the buffer cache and not on disk, we need the contents of that online redo log in case the database fails. If, at the instant after we committed, the power was turned off, the database buffer cache would be wiped out. If this happens, the only record of our change is in that redo log file.

Upon restart of the database, Oracle will actually replay our transaction, modifying the block again in the same way we did and committing it for us. So, as long as that modified block is cached and not written to disk, we can’t reuse (overwrite) that redo log file. 

This is where DBWn comes into play. This Oracle background process is responsible for making space in the buffer cache when it fills up and, more important, for performing checkpoints. A checkpoint is the writing of dirty (modified) blocks from the buffer cache to disk.

Oracle does this in the background for us. Many things can cause a checkpoint to occur, the most common being a redo log switch. As we filled up log file 1 and switched to log file 2, Oracle initiated a checkpoint.

At this point, DBWn started writing to disk all of the dirty blocks that are protected by log file group 1. Until DBWn flushes all of these blocks protected by that log file, Oracle can’t reuse (overwrite) it. If we attempt to use it before DBWn has finished its checkpoint, we’ll get a message like this in our database’s ALERT log:

Thread 1 cannot allocate new log, sequence 16
Checkpoint not complete
  Current log# 3 seq# 15 mem# 0: /opt/oracle/oradata/CDB/redo03.log
...
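If you’d like to observe this mechanism on a quiet test system, you can trigger the two events yourself; both commands are standard:

SQL> alter system switch logfile;
SQL> alter system checkpoint;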

Jul 22, 2022
Password Files- Files

The password file is an optional file that permits remote SYSDBA (administrator) access to the database. When you attempt to start Oracle, there is no database available that can be consulted to verify passwords. When you start Oracle on the local system (i.e., not over the network, but from the machine the database instance resides on), Oracle will use the OS to perform the authentication.

When Oracle was installed, the person performing the installation was asked to specify an OS group for the administrators. Normally, on UNIX/Linux, this group will be DBA by default, and ORA_DBA on Windows. It can be any legitimate group name on that platform, however. That group is “special,” in that any user in that group can connect to Oracle “as SYSDBA” without specifying a username or password.
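That is why, logged in to the database server as an OS user in that group, the classic local connection needs no username or password at all:

$ sqlplus / as sysdba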

However, what happens if you attempt to connect as SYSDBA over the network to a remote database:

$ sqlplus sys/foo@localhost:1521/CDB as sysdba

ERROR:

ORA-01017: invalid username/password; logon denied

OS authentication won’t work over the network for SYSDBA, even if the very unsafe (for security reasons) parameter REMOTE_OS_AUTHENT is set to true. So, OS authentication won’t work, and, as discussed earlier, if you’re trying to start up an instance to mount and open a database, then by definition there’s no database yet in which to look up authentication details. It is the proverbial chicken and egg problem.

Enter the password file. The password file stores a list of usernames and passwords that are allowed to remotely authenticate as SYSDBA over the network. Oracle must use this file to authenticate them, not the normal list of passwords stored in the database.

So, let’s correct our situation. First, verify that the REMOTE_LOGIN_PASSWORDFILE parameter is set to the default of EXCLUSIVE, meaning only one database uses a given password file:

SQL> show parameter remote_login_passwordfile

NAME                           TYPE        VALUE
------------------------------ ----------- ----------
remote_login_passwordfile      string      EXCLUSIVE

Note Other valid values for this parameter are NONE, meaning there is no password file (there are no remote SYSDBA connections), and SHARED (more than one database can use the same password file).

The next step is to use the command-line tool (on UNIX/Linux and Windows) named orapwd to create and populate the initial password file:

$ orapwd

Usage: orapwd file=<fname> entries=<users> force=<y/n> asm=<y/n>
       dbuniquename=<dbuniquename> format=<legacy/12> sysbackup=<y/n>
       sysdg=<y/n> syskm=<y/n> delete=<y/n> input_file=<input-fname>

Usage: orapwd describe file=<fname>

where ...

There must be no spaces around the equal-to (=) character.

The command we’ll use when logged into the operating system account that owns the Oracle software is

$ orapwd file=orapw$ORACLE_SID password=foo entries=20

This creates a password file named orapwCDB in my case (my ORACLE_SID is CDB). That’s the naming convention for this file on most UNIX/Linux platforms (see your installation/OS admin guide for details on the naming of this file on your platform), and it resides in the $ORACLE_HOME/dbs directory. On Windows, this file is named PW%ORACLE_SID%.ora, and it’s located in the %ORACLE_HOME%\database directory. You should navigate to the correct directory prior to running the command to create that file, or move that file into the correct directory afterward.

Now, currently the only user in that file is SYS, even if there are other SYSDBA accounts on that database (they are not in the password file yet). Using that knowledge, however, we can for the first time connect as SYSDBA over the network:

$ sqlplus sys/foo@localhost:1521/CDB as sysdba
SQL> show user
USER is “SYS”

Note If you experience an ORA-12505 “TNS:listener does not currently know of SID given in connect descriptor” error during this step, that means that the database listener is not configured with a static registration entry for this server. The DBA has not permitted remote SYSDBA connections when the database instance is not up. You would need to configure static server registration in your listener.ora configuration file. Please search for “Configuring Static Service Information” (in quotes) on the OTN (Oracle Technology Network) documentation search page for the version of the database you are using for details on configuring this static service. If you encounter an ORA-12528 “TNS:listener: all appropriate instances are blocking new connections” error, you can also configure the tnsnames.ora file with the UR=A parameter, which will allow you to connect to a blocked instance.

We have been authenticated, so we are in. We can now successfully start up, shut down, and remotely administer this database using the SYSDBA account. Now, we have another user, TKYTE, who has been granted SYSDBA, but will not be able to connect remotely yet:

$ sqlplus tkyte/foobar@PDB1 as sysdba

ERROR:

ORA-01017: invalid username/password; logon denied

The reason for this is that TKYTE is not yet in the password file. In order to get TKYTE into the password file, we need to “regrant” that account the SYSDBA privilege:

$ sqlplus / as sysdba
SQL> alter session set container=PDB1;
SQL> grant sysdba to tkyte;
Grant succeeded.

SQL> exit
$ sqlplus tkyte/foobar@PDB1 as sysdba
SQL>

This created an entry in the password file for us, and Oracle will now keep the password in sync. If TKYTE alters their password, the old one will cease working for remote SYSDBA connections, and the new one will start working. The same process is repeated for any user who was a SYSDBA but is not yet in the password file.
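At any point, you can see exactly who is in the password file by querying V$PWFILE_USERS:

SQL> select username, sysdba, sysoper from v$pwfile_users;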

May 22, 2022
The Storage Hierarchy in an Oracle Database- Files-1

A database is made up of one or more tablespaces. A tablespace is a logical storage container in Oracle that comes at the top of the storage hierarchy and is made up of one or more datafiles.

These files might be cooked files in a file system, raw partitions, ASM-­managed database files, or files on a clustered file system. A tablespace contains segments, as described next.

Segments

Segments are the major organizational structure within a tablespace. Segments are simply your database objects that consume storage—typically objects such as tables, indexes, undo segments, and so on.

Most times, when you create a table, you create a table segment. When you create a partitioned table, you are not creating a single table segment; rather, you create a segment per partition.

When you create an index, you normally create an index segment, and so on. Every object that consumes storage is ultimately stored in a single segment. There are undo segments, temporary segments, cluster segments, index segments, and so on.

Note It might be confusing to read “Every object that consumes storage is ultimately stored in a single segment.” You will find many CREATE statements that create multisegment objects. The confusion lies in the fact that a single CREATE statement may ultimately create objects that consist of zero, one, or more segments!

For example, CREATE TABLE T ( x int primary key, y clob ) will create four segments: one for the TABLE T, one for the index that will be created in support of the primary key, and two for the CLOB (one segment for the CLOB is the LOB index, and the other segment is the LOB data itself). On the other hand, CREATE TABLE T ( x int, y date ) cluster MY_CLUSTER will create zero segments (the cluster is the segment in this case). We’ll explore this concept further in Chapter 10.
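You can verify the segment count after such a CREATE by querying USER_SEGMENTS (bearing in mind that, with the deferred segment creation discussed in the next section, a segment may not appear until a row is inserted):

SQL> select segment_name, segment_type from user_segments;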

Extents

Segments consist of one or more extents. An extent is a logically contiguous allocation of space in a file. (Files themselves, in general, are not contiguous on disk; otherwise, we would never need a disk defragmentation tool!

Also, with disk technologies such as Redundant Array of Independent Disks (RAID), you might find that a single file also spans many physical disks.) Traditionally, every segment starts with at least one extent. Oracle 11g Release 2 introduced the concept of a “deferred” segment—a segment that will not immediately allocate an extent—so in that release and going forward, a segment might defer allocating its initial extent until data is inserted into it.

When an object needs to grow beyond its initial extent, it will request another extent be allocated to it. This second extent will not necessarily be located right next to the first extent on disk—it may very well not even be allocated in the same file as the first extent.

The second extent may be located very far away from the first extent, but the space within an extent is always logically contiguous in a file. Extents vary in size from one Oracle data block (explained shortly) to gigabytes.
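To see the extents behind one of your segments, and how their sizes can vary, query USER_EXTENTS; the segment name T here is just an example:

SQL> select extent_id, bytes, blocks
       from user_extents
      where segment_name = 'T'
      order by extent_id;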

Blocks

Extents, in turn, consist of blocks. A block is the smallest unit of space allocation in Oracle. Blocks are where your rows of data, or index entries, or temporary sort results are stored. A block is what Oracle typically reads from and writes to disk. Blocks in Oracle are generally one of four common sizes: 2KB, 4KB, 8KB, or 16KB (although 32KB is also permissible in some cases; there are restrictions in place as to the maximum size by operating system).

Note Per Oracle’s database reference documentation, the minimum block size is 2048, and larger block sizes must be a multiple of the operating system physical block size.
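Your database’s default block size is visible as the DB_BLOCK_SIZE parameter, and the block size of each tablespace (default or not) is recorded in DBA_TABLESPACES:

SQL> show parameter db_block_size
SQL> select tablespace_name, block_size from dba_tablespaces;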

Mar 22, 2022
Tagging Trace Files- Files-1

There is a way to “tag” your trace file so that you can find it even if you are not permitted access to V$PROCESS and V$SESSION. Assuming you have access to read the trace directory, you can use the session parameter tracefile_identifier.
With this, you can add a uniquely identifiable string to the trace file name, for example:

SQL> alter session set tracefile_identifier = 'Look_For_Me';
Session altered.

SQL> !ls /opt/oracle/diag/rdbms/cdb/CDB/trace/*Look_For_Me*
ls: cannot access /opt/oracle/diag/rdbms/cdb/CDB/trace/*Look_For_Me*: No such file or directory

SQL> exec dbms_monitor.session_trace_enable
PL/SQL procedure successfully completed.

SQL> !ls /opt/oracle/diag/rdbms/cdb/CDB/trace/*Look_For_Me*
/opt/oracle/diag/rdbms/cdb/CDB/trace/CDB_ora_15972_Look_For_Me.trc
/opt/oracle/diag/rdbms/cdb/CDB/trace/CDB_ora_15972_Look_For_Me.trm

The * character is a wildcard instructing ls to look for any files with the string of Look_For_Me included in the file name. As you can see, the trace file is now named in the standard ora format, but it also has the unique string we specified associated with it, making it easy to find “our” trace file name. The trace file ends with the extension of .trc. There’s also a corresponding trace map file (with the extension of .trm) which contains structural information about the trace file. Usually, you’ll only be interested in the contents of the .trc file.
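Another way to locate your session’s trace file, with no wildcards or directory listing at all, is to ask V$DIAG_INFO, which reports it directly:

SQL> select value from v$diag_info where name = 'Default Trace File';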

Trace Files Generated in Response to Internal Errors

I’d like to close this section with a discussion about those other kinds of trace files—the ones we did not expect that were generated as a result of an ORA-00600 or some other internal error. Is there anything we can do with them?

The short answer is that, in general, they are not for you and me. They are useful to Oracle Support. However, they can be helpful when we file a service request with Oracle Support. That point is crucial: if you are getting internal errors, the only way they will ever be corrected is if you file a service request (SR). If you just ignore them, they will not get fixed by themselves, except by accident.

For example, in Oracle 10g Release 1, if you create the following table and run the query, you may well get an internal error (or not; it was filed as a bug and is corrected in later patch releases):

SQL> create table t ( x int primary key );
Table created.

SQL> insert into t values ( 1 );
1 row created.

SQL> exec dbms_stats.gather_table_stats( user, 'T' );
PL/SQL procedure successfully completed.
SQL> select count(x) over () from t;

ERROR at line 2:

ORA-00600: internal error code, arguments: [12410], [], [], [], [], [], [], []

Now, suppose you are the DBA and all of a sudden this trace file pops up in the trace area. Or you are the developer and your application raises an ORA-00600 error, and you want to find out what happened. There is a lot of information in that trace file (some 35,000 lines, in fact), but in general it’s not useful to you and me. We would generally just compress the trace file and upload it as part of our service request processing.

A command-line tool, in conjunction with a user interface via Enterprise Manager, allows you to review the trace information in the ADR and package and transmit it to Oracle Support. The Automatic Diagnostic Repository Command Interpreter (ADRCI) utility allows you to review “problems” (critical errors in the database) and incidents (occurrences of those critical errors) and to package them up for transmission to support. The packaging step includes retrieving not only the trace information but also details from the database alert log and other configuration/test case information. For example, I set up a situation in my database that raised a critical error. (No, I won’t say what it is. You have to generate your own critical errors.) I knew I had a “problem” in my database because the ADRCI tool told me so:

$ adrci
adrci> show problem
ADR Home = /opt/oracle/diag/rdbms/cdb/CDB:
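From there, packaging a problem for Oracle Support is a short sequence of ADRCI commands. A sketch; the problem and package numbers will differ on your system:

adrci> show incident
adrci> ips create package problem 1 correlate all
adrci> ips generate package 1 in /tmp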

Dec 22, 2021
Pluggable Databases- Files

Pluggable databases are designed to be a set of files you can move from one root container database to another.

That is, we can unplug a pluggable database, and upon plugging it back into either the same root container database or some other root container database, we would have our original pluggable database back—with all of the application schemas, users, metadata, grants, data, and even our pluggable database parameter settings (settings that were not inherited from the root container).

This is achieved by storing pluggable database–specific parameter settings in a data dictionary table: SYS.PDB_SPFILE$.

You can view parameters modifiable at the PDB level via this query:

SQL> SELECT name, value
FROM v$system_parameter
WHERE ispdb_modifiable = ‘TRUE’
ORDER BY name;

NAME                                 VALUE
------------------------------------ ------

adg_account_info_tracking
allow_deprecated_rpcs
allow_rowid_column_type
approx_for_aggregation
approx_for_count_distinct
approx_for_percentile
aq_tm_processes
asm_diskstring
auto_start_pdb_services
awr_pdb_autoflush_enabled
bitmap_merge_area_size
blank_trimming
blockchain_table_max_no_drop

You set the PDB-level parameters by using the ALTER SYSTEM … CONTAINER clause or by connecting to the pluggable database and issuing the ALTER SYSTEM command:

$ sqlplus / as sysdba

SQL> alter session set container=pdb1;
SQL> alter system set statistics_level=all;

It is in this fashion that pluggable databases can override a parameter setting for some parameters in an SPFILE and have those parameter settings travel with them as they move from root container database to root container database.

Parameter File Wrap-Up

In this section, we covered the basics of managing Oracle initialization parameters and parameter files.

We looked at how to set parameters, view parameter values, and have those settings persist across database restarts.

We explored the two types of database parameter files: legacy PFILEs (simple text files) and the newer SPFILEs.

For all existing databases, using SPFILEs is recommended for the ease of administration and clarity they bring.

The ability to have a single source of parameter “truth” for the database, along with the ability of the ALTER SYSTEM command to persist the parameter values, makes SPFILEs a compelling feature. I started using them the instant they became available and haven’t looked back.
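If you have a database still running on a legacy PFILE, converting is a one-line command, and you can always extract a readable copy going the other way (a restart is needed before the instance begins using the new SPFILE):

SQL> create spfile from pfile;
SQL> create pfile='/tmp/init.ora' from spfile;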
