Nov 22, 2022
Control Files- Files

Control files are fairly small files (usually megabytes in size) that contain a directory of the other files Oracle needs.

The parameter file tells the instance where the control files are, and the control files tell the instance where the database and online redo log files are. You can view the location of the control files via the following:

SQL> show parameter control_files

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
control_files                        string      /opt/oracle/oradata/CDB/control01.ctl,
                                                 /opt/oracle/oradata/CDB/control02.ctl
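
You can also query the V$CONTROLFILE view for the same information; a minimal sketch, which on this instance should return the same two files shown above:

SQL> select name from v$controlfile;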

The control files also tell Oracle other things, such as information about checkpoints that have taken place, the name of the database (which should match the db_name parameter in the parameter file), the timestamp of the database as it was created, an archive redo log history (this can make a control file large in some cases), RMAN information, and so on.

Control files should be multiplexed either by hardware (RAID) or by Oracle when RAID or mirroring is not available. More than one copy should exist, and the copies should be stored on separate disks to avoid losing them in case you have a disk failure. It is not fatal to lose your control files—it just makes recovery that much harder.
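
If you need to add another copy, the usual approach is to change the CONTROL_FILES parameter in the spfile, shut down, copy an existing control file to the new location at the OS level, and restart. A minimal sketch, assuming an spfile is in use and treating /u02/oradata/CDB/control03.ctl purely as an example path:

SQL> alter system set control_files =
     '/opt/oracle/oradata/CDB/control01.ctl',
     '/opt/oracle/oradata/CDB/control02.ctl',
     '/u02/oradata/CDB/control03.ctl' scope=spfile;
SQL> shutdown immediate
$ cp /opt/oracle/oradata/CDB/control01.ctl /u02/oradata/CDB/control03.ctl
SQL> startup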

Control files are something a developer will probably never have to actually deal with. To a DBA, they are an important part of the database, but to a software developer they are not really relevant.

Redo Log Files

Redo log files are crucial to the Oracle database. These are the transaction logs for the database. They are generally used only for recovery purposes, but they can be used for the following as well:

• Instance recovery after a system crash
• Media recovery after a datafile restore from backup
• Standby database processing
• Input into GoldenGate—redo log mining processes for information sharing (a fancy way of saying replication)
• Allow administrators to inspect historical database transactions through the Oracle LogMiner utility

Their main purpose in life is to be used in the event of an instance or media failure or as a method of maintaining a standby database for failover. If the power goes off on your database machine, causing an instance failure, Oracle will use the online redo logs to restore the system to exactly the point it was at immediately prior to the power outage. If your disk drive containing your datafile fails permanently, Oracle will use archived redo logs, as well as online redo logs, to recover a backup of that drive to the correct point in time. Additionally, if you “accidentally” drop a table or remove some critical information and commit that operation, you can restore a backup and have Oracle restore it to the point just before the accident using these online and archived redo log files.

Virtually every operation you perform in Oracle generates some amount of redo to be written to the online redo log files. When you insert a row into a table, the end result of that insert is written to the redo logs. When you delete a row, the fact that you deleted that row is written. When you drop a table, the effects of that drop are written to the redo log. The data from the table you dropped is not written; however, the recursive SQL that Oracle performs to drop the table does generate redo. For example, Oracle will delete a row from the SYS.OBJ$ table (and other internal dictionary objects), and this will generate redo, and if various modes of supplemental logging are enabled, the actual DROP TABLE statement will be written into the redo log stream.
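
You can watch this happen in your own session. A minimal sketch, assuming you have access to V$MYSTAT and V$STATNAME and using a throwaway scratch table T: sample the session statistic redo size, run a statement, and sample it again; the difference is approximately the redo that statement generated.

SQL> create table t as select * from all_objects where 1=0;

SQL> select b.name, a.value
       from v$mystat a, v$statname b
      where a.statistic# = b.statistic#
        and b.name = 'redo size';

SQL> insert into t select * from all_objects;

SQL> select b.name, a.value
       from v$mystat a, v$statname b
      where a.statistic# = b.statistic#
        and b.name = 'redo size';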

Some operations may be performed in a mode that generates as little redo as possible. For example, I can create an index with the NOLOGGING attribute. This means that the initial creation of the index data will not be logged, but any recursive SQL Oracle performed on my behalf will be. For example, the insert of a row into SYS.OBJ$ representing the existence of the index will be logged, as will all subsequent modifications of the index using SQL inserts, updates, and deletes. But the initial writing of the index structure to disk will not be logged.
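
A minimal sketch of such an operation, using a hypothetical table T and column X:

SQL> create index t_idx on t(x) nologging;

The index is fully usable afterward; the trade-off is that the index data itself cannot be reconstructed from the redo stream during a media recovery, so after such a recovery the index would have to be rebuilt.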

I’ve referred to two types of redo log file: online and archived. We’ll take a look at each in the sections that follow. In Chapter 9, we’ll take another look at redo in conjunction with undo segments, to see what impact they have on you as a developer. For now, we’ll just concentrate on what they are and what their purpose is.

Oct 22, 2022
Temp Files- Files

Temporary datafiles (temp files) in Oracle are a special type of datafile. Oracle will use temporary files to store the intermediate results of large sort operations and hash operations, as well as to store global temporary table data, or resultset data, when there is insufficient memory to hold it all in RAM. Temporary tablespaces can also hold the UNDO generated by operations performed on global temporary tables. Permanent data objects, such as a table or an index, will never be stored in a temp file, but the contents of a temporary table and its indexes would be. So, you’ll never create your application tables in a temp file, but you might store data there when you use a temporary table.

Temp files are treated in a special way by Oracle. Normally, every change you make to an object will be recorded in the redo logs; these transaction logs can be replayed at a later date to “redo a transaction,” which you might do during recovery from failure. Temp files are excluded from this process. Specifically, transactions in global temporary tables (located in temp files) never have REDO generated for them, although they can have UNDO generated. Thus, there may be REDO generated when working with temporary tables, since UNDO is always protected by REDO, as you will see in detail in Chapter 9. The UNDO generated for global temporary tables is to support rolling back work you’ve done in your session, either due to an error processing data or because of some general transaction failure. A DBA never needs to back up a temporary datafile, and, in fact, attempting to do so would be a waste of time, as you can never recover a temporary datafile.

Note In Oracle 12c and above, the UNDO generated for global temporary tables may be stored in the temporary tablespace. By default, UNDO will be generated into the permanent UNDO tablespace, just like prior releases. An init.ora system-level setting, or a TEMP_UNDO_ENABLED session-level settable parameter, may be set to TRUE to enable the UNDO generated for global temporary tables to be stored in a temp file. In this manner, no REDO will be generated for these operations. We will investigate this further in Chapter 9.
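
A minimal sketch of enabling this for your session (assuming Oracle 12c or above):

SQL> alter session set temp_undo_enabled = true;

With that set, the UNDO for global temporary table operations in this session goes to the temporary tablespace, and no REDO is generated to protect that UNDO.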

One of the nuances of true temp files is that if the OS permits it, the temporary files will be created sparse—that is, they will not actually consume disk storage until they need to. You can see that easily in this example (on Oracle Linux):

SQL> !df -h /tmp

Filesystem Size Used Avail Use% Mounted on
/dev/mapper/VolGroup-lv_root
50G 6.5G 41G 14% /
SQL> create temporary tablespace temp_huge tempfile '/tmp/temp_huge.dbf' size 2g;

Tablespace created.
SQL> !df -h /tmp
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/VolGroup-lv_root
50G 6.5G 41G 14% /
SQL> !ls -l /tmp/temp_huge.dbf
-rw-r-----. 1 oracle oinstall 2147491840 Feb 15 18:06 /tmp/temp_huge.dbf

Note The UNIX/Linux command df shows “disk free” space. This command showed that I have 41GB free in the file system containing /tmp before I added a 2GB temp file to the database. After I added that file, I still had 41GB free in the file system.

Apparently, it didn’t take much storage to hold that file. If we look at the ls output, it appears to be a normal 2GB file, but it is, in fact, consuming only a few kilobytes of storage currently. So we could actually create hundreds of these 2GB temporary files, even though we have roughly 41GB of disk space free. Sounds great—free storage for all! The problem is, as we start to use these temp files and they start expanding out, we would rapidly hit errors stating “no more space.” Since the space is allocated or physically assigned to the file as needed by the OS, we stand a definite chance of running out of room (especially if after we create the temp files, someone else fills up the file system with other stuff).

How to solve this differs from OS to OS. On UNIX/Linux, you can use dd to fill the file with data, causing the OS to physically assign disk storage to the file, or use cp to create a nonsparse file, for example:

SQL> !cp --sparse=never /tmp/temp_huge.dbf /tmp/temp_huge_not_sparse.dbf
SQL> !df -h /tmp

Filesystem Size Used Avail Use% Mounted on
/dev/mapper/VolGroup-lv_root
50G 8.5G 39G 19% /
SQL> drop tablespace temp_huge including contents and datafiles;

Tablespace dropped.

SQL> create temporary tablespace temp_huge tempfile '/tmp/temp_huge_not_sparse.dbf' reuse;
Tablespace created.

After copying the sparse 2GB file to /tmp/temp_huge_not_sparse.dbf and creating the temporary tablespace using that temp file with the REUSE option, we are assured that temp file has allocated all of its file system space, and our database actually has 2GB of temporary space to work with.

Note In my experience, Windows NTFS does do sparse files, and this applies to UNIX/Linux variants as well. On the plus side, if you have to create a 15GB temporary tablespace on a file system that supports sparse files, you’ll find it happens very fast (instantaneously); just make sure you have 15GB free and reserve it in your mind.

Sep 22, 2022
Online Redo Log- Files-2

So, when this message appeared, processing was suspended in the database while DBWn hurriedly finished its checkpoint. Oracle gave all the processing power it could to DBWn at that point in the hope it would finish faster.

This is a message you never want to see in a nicely tuned database instance. If you do see it, you know for a fact that you have introduced artificial, unnecessary waits for your end users. This can always be avoided. The goal (and this is for the DBA, not the developer necessarily) is to have enough online redo log files allocated so that you never attempt to reuse a log file before the checkpoint (initiated by the log switch) completes. If you see this message frequently, it means a DBA has not allocated sufficient online redo logs for the application, or that DBWn needs to be tuned to work more efficiently.

Different applications will generate different amounts of redo log. A decision support system (DSS, query only) or DW system will naturally generate significantly less online redo logging than an OLTP (transaction processing) system would, day to day. A system that does a lot of image manipulation in Binary Large Objects (BLOBs) in the database may generate radically more redo than a simple order entry system. An order entry system with 100 users will probably generate a tenth the amount of redo 1000 users would generate. Thus, there is no “right” size for your redo logs, although you do want to ensure they are large enough for your unique workload.

You must take many things into consideration when setting both the size of and the number of online redo logs. Many of them are beyond the scope of this book, but I’ll list some of them to give you an idea:

• Peak workloads: You’d like your system to not have to wait for Checkpoint not complete messages and to not become bottlenecked during your peak processing. You should size your redo logs not for average hourly throughput, but rather for your peak processing. If you generate 24GB of log per day, but 10GB of that log is generated between 9:00 am and 11:00 am, you’ll want to size your redo logs large enough to carry you through that two-hour peak. Sizing them for an average of 1GB per hour would probably not be sufficient.

• Lots of users modifying the same blocks: Here, you might want large redo log files. Since everyone is modifying the same blocks, you’d like to update them as many times as possible before writing them out to disk. Each log switch will fire a checkpoint, so you’d like to switch logs infrequently. This may, however, affect your recovery time.

• Mean time to recover: If you must ensure that a recovery takes as little time as possible, you may be swayed toward smaller redo log files, even if the previous point is true. It will take less time to process one or two small redo log files than a gargantuan one upon recovery. The overall system will perhaps run slower than it absolutely could day to day (due to excessive checkpointing), but the amount of time spent in recovery will be shorter. There are other database parameters that may also be used to reduce this recovery time, as an alternative to the use of small redo log files.
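
As a starting point for sizing, it helps to know how often you switch logs today under real load. A minimal sketch, assuming access to V$LOG_HISTORY, that counts log switches per hour over the last day:

SQL> select trunc(first_time,'HH24') hour, count(*) switches
       from v$log_history
      where first_time > sysdate - 1
      group by trunc(first_time,'HH24')
      order by 1;

Many switches per hour during your peak window suggests the logs are too small for that workload.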

Aug 22, 2022
Archived Redo Log- Files

The Oracle database can run in one of two modes: ARCHIVELOG mode and NOARCHIVELOG mode. The difference between these two modes is simply what happens to a redo log file when Oracle goes to reuse it. “Will we keep a copy of that redo or should Oracle just overwrite it, losing it forever?” is an important question to answer. Unless you keep this file, you can’t recover data from a backup to that point in time.

Say you take a backup once a week on Saturday. Now, on Friday afternoon, after you have generated hundreds of redo logs over the week, your hard disk fails. If you have not been running in ARCHIVELOG mode, the only choices you have right now are as follows:

• Drop the tablespace(s) associated with the failed disk. Any tablespace that had a file on that disk must be dropped, including the contents of that tablespace. If the SYSTEM tablespace (Oracle’s data dictionary) or some other important system-related tablespace like your UNDO tablespace is affected, you can’t do this. You will have to use the next option instead.

• Restore last Saturday’s data and lose all of the work you did that week.

Neither option is very appealing. Both imply that you lose data. If you had been executing in ARCHIVELOG mode, on the other hand, you simply would have found another disk and restored the affected files from Saturday’s backup onto it. Then, you would have applied the archived redo logs and, ultimately, the online redo logs to them (in effect replaying the week’s worth of transactions in fast-forward mode). You lose nothing. The data is restored to the point of the failure.

People frequently tell me they don’t need ARCHIVELOG mode for their production systems. I have yet to meet anyone who was correct in that statement. I believe that a system is not a production system unless it is in ARCHIVELOG mode. A database that is not in ARCHIVELOG mode will, someday, lose data. It is inevitable; you will lose data (not might, but will) if your database is not in ARCHIVELOG mode.
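
Checking and changing the mode is not hard; a minimal sketch (run as a suitably privileged account; the database must be mounted but not open to switch modes):

SQL> select log_mode from v$database;
SQL> shutdown immediate
SQL> startup mount
SQL> alter database archivelog;
SQL> alter database open;
SQL> archive log list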

“We are using RAID-5, so we are totally protected” is a common excuse. I’ve seen cases where, due to a manufacturing error, all disks in a RAID set froze, all at about the same time. I’ve seen cases where the hardware controller introduced corruption into the datafiles, so people safely protected corrupt data with their RAID devices. RAID also does not do anything to protect you from operator error, one of the most common causes of data loss. RAID does not mean the data is safe; it might be more available, it might be safer, but data solely on a RAID device will be lost someday; it is a matter of time.

“If we had the backups from before the hardware or operator error and the archives were not affected, we could have recovered.” The bottom line is that there is no excuse for not being in ARCHIVELOG mode on a system where the data is of any value. Performance is no excuse; properly configured archiving adds little to no overhead. This, and the fact that a fast system that loses data is useless, means that even if archiving added 100 percent overhead, you still need to do it. A feature is overhead if you can remove it and lose nothing important; overhead is like icing on the cake. Preserving your data and making sure you don’t lose your data isn’t overhead—it’s the DBA’s primary job!

Only a test or maybe a development system should execute in NOARCHIVELOG mode. Most development systems should be run in ARCHIVELOG mode for two reasons:

• This is how you will process the data in production; you want development to act and react as your production system would.
• In many cases, the developers pull their code out of the data dictionary, modify it, and compile it back into the database. The development database holds the current version of the code. If the development database suffers a disk failure in the afternoon, what happens to all of the code you compiled and recompiled all morning? It’s lost.

Don’t let anyone talk you out of being in ARCHIVELOG mode. You spent a long time developing your application, so you want people to trust it. Losing their data will not instill confidence in your system.

Note There are some cases in which a large DW could justify being in NOARCHIVELOG mode—if it made judicious use of READ ONLY tablespaces and was willing to fully rebuild any READ WRITE tablespace that suffered a failure by reloading the data.

Aug 22, 2022
Online Redo Log- Files-1

Every Oracle database has at least two online redo log file groups. Each redo log group consists of one or more redo log members (redo is managed in groups of members). The individual redo log file members of these groups are true mirror images of each other.

These online redo log files are fixed in size and are used in a circular fashion. Oracle will write to log file group 1, and when it gets to the end of this set of files, it will switch to log file group 2 and overwrite the contents of those files from start to end.

When it has filled log file group 2, it will switch back to log file group 1 (assuming we have only two redo log file groups; if we have three, it would, of course, proceed to the third group). This is shown in Figure 3-4.

Figure 3-4.  Writing to log file groups

The act of switching from one log file group to another is called a log switch. It is important to note that a log switch may cause a temporary “pause” in a poorly configured database. Since the redo logs are used to recover transactions in the event of a failure, we must be certain we won’t need the contents of a redo log file before we are able to use it. If Oracle isn’t sure that it won’t need the contents of a log file, it will suspend operations in the database momentarily and make sure that the data in the cache that this redo “protects” is safely written (checkpointed) onto disk. Once Oracle is sure of that, processing will resume and the redo file will be reused.
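
You can see your log file groups, their members, and which group Oracle is currently writing to by querying V$LOG and V$LOGFILE; a minimal sketch:

SQL> select group#, sequence#, bytes, members, status from v$log;
SQL> select group#, member from v$logfile order by group#;

The group with a STATUS of CURRENT is the one being written to; ACTIVE groups are still needed for crash recovery (their checkpoint is not yet complete), and INACTIVE groups are safe to reuse.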

We’ve just started to talk about a key database concept: checkpointing. To understand how online redo logs are used, you’ll need to know something about checkpointing, how the database buffer cache works, and what a process called database block writer (DBWn) does. The database buffer cache and DBWn are covered in more detail later on, but we’ll skip ahead a little anyway and touch on them now.

The database buffer cache is where database blocks are stored temporarily. This is a structure in Oracle’s SGA. As blocks are read, they are stored in this cache, hopefully so we won’t have to physically reread them later.

The buffer cache is first and foremost a performance tuning device. It exists solely to make the very slow process of physical I/O appear to be much faster than it is.

When we modify a block by updating a row on it, these modifications are done in memory to the blocks in the buffer cache. Enough information to redo or to replay this modification is stored in the redo log buffer, another SGA data structure.

When we COMMIT our modifications, making them permanent, Oracle does not go to all of the blocks we modified in the SGA and write them to disk. Rather, it just writes the contents of the redo log buffer out to the online redo logs.

As long as that modified block is in the buffer cache and not on disk, we need the contents of that online redo log in case the database fails. If, at the instant after we committed, the power was turned off, the database buffer cache would be wiped out. If this happens, the only record of our change is in that redo log file.

Upon restart of the database, Oracle will actually replay our transaction, modifying the block again in the same way we did and committing it for us. So, as long as that modified block is cached and not written to disk, we can’t reuse (overwrite) that redo log file. 

This is where DBWn comes into play. This Oracle background process is responsible for making space in the buffer cache when it fills up and, more important, for performing checkpoints. A checkpoint is the writing of dirty (modified) blocks from the buffer cache to disk.

Oracle does this in the background for us. Many things can cause a checkpoint to occur, the most common being a redo log switch. As we filled up log file 1 and switched to log file 2, Oracle initiated a checkpoint.

At this point, DBWn started writing to disk all of the dirty blocks that are protected by log file group 1. Until DBWn flushes all of these blocks protected by that log file, Oracle can’t reuse (overwrite) it. If we attempt to use it before DBWn has finished its checkpoint, we’ll get a message like this in our database’s ALERT log: … 

Thread 1 cannot allocate new log, sequence 16
Checkpoint not complete
  Current log# 3 seq# 15 mem# 0: /opt/oracle/oradata/CDB/redo03.log
…
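
If you want to observe this mechanism on a test system, you can drive it by hand; a minimal sketch (both are standard commands, but forcing them repeatedly on a busy production system is not advisable):

SQL> alter system switch logfile;
SQL> alter system checkpoint;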

Jul 22, 2022
Password Files- Files

The password file is an optional file that permits the remote SYSDBA or administrator access to the database. When you attempt to start Oracle, there is no database available that can be consulted to verify passwords. When you start Oracle on the local system (i.e., not over the network, but from the machine the database instance will reside on), Oracle will use the OS to perform the authentication.

When Oracle was installed, the person performing the installation was asked to specify an OS group for the administrators. Normally, on UNIX/Linux, this group will be DBA by default, and ORA_DBA on Windows. It can be any legitimate group name on that platform, however. That group is “special,” in that any user in that group can connect to Oracle “as SYSDBA” without specifying a username or password.

However, what happens if you attempt to connect as SYSDBA over the network to a remote database:

$ sqlplus sys/foo@localhost:1521/CDB as sysdba

ERROR:

ORA-01017: invalid username/password; logon denied

OS authentication won’t work over the network for SYSDBA, even if the very unsafe (for security reasons) parameter REMOTE_OS_AUTHENT is set to true. So, OS authentication won’t work, and, as discussed earlier, if you’re trying to start up an instance to mount and open a database, then by definition there’s no database yet in which to look up authentication details. It is the proverbial chicken and egg problem.

Enter the password file. The password file stores a list of usernames and passwords that are allowed to remotely authenticate as SYSDBA over the network. Oracle must use this file to authenticate them, not the normal list of passwords stored in the database.

So, let’s correct our situation. First, verify that the REMOTE_LOGIN_PASSWORDFILE parameter is set to the default of EXCLUSIVE, meaning only one database uses a given password file:

SQL> show parameter remote_login_passwordfile

NAME                                 TYPE        VALUE
------------------------------------ ----------- ----------
remote_login_passwordfile            string      EXCLUSIVE

Note Other valid values for this parameter are NONE, meaning there is no password file (there are no remote SYSDBA connections), and SHARED (more than one database can use the same password file).

The next step is to use the command-line tool (on UNIX/Linux and Windows) named orapwd to create and populate the initial password file:

$ orapwd

Usage: orapwd file=<fname> entries=<users> force=<y/n> asm=<y/n>
       dbuniquename=<dbname> format=<format> sysbackup=<y/n> sysdg=<y/n>
       syskm=<y/n> delete=<y/n> input_file=<input fname>

Usage: orapwd describe file=<fname>

where

There must be no spaces around the equal-to (=) character.

The command we’ll use when logged into the operating system account that owns the Oracle software is

$ orapwd file=orapw$ORACLE_SID password=foo entries=20

This creates a password file named orapwCDB in my case (my ORACLE_SID is CDB). That’s the naming convention for this file on most UNIX/Linux platforms (see your installation/OS admin guide for details on the naming of this file on your platform), and it resides in the $ORACLE_HOME/dbs directory. On Windows, this file is named PW%ORACLE_SID%.ora, and it’s located in the %ORACLE_HOME%\database directory. You should navigate to the correct directory prior to running the command to create that file, or move that file into the correct directory afterward.

Now, currently the only user in that file is SYS, even if there are other SYSDBA accounts on that database (they are not in the password file yet). Using that knowledge, however, we can for the first time connect as SYSDBA over the network:

$ sqlplus sys/foo@localhost:1521/CDB as sysdba
SQL> show user
USER is "SYS"

Note If you experience an ORA-12505 “TNS:listener does not currently know of SID given in connect Descriptor” error during this step, that means that the database listener is not configured with a static registration entry for this server. The DBA has not permitted remote SYSDBA connections when the database instance is not up. You would need to configure static server registration in your listener.ora configuration file. Please search for “Configuring Static Service Information” (in quotes) on the OTN (Oracle Technology Network) documentation search page for the version of the database you are using for details on configuring this static service. If you encounter an ORA-12528 “TNS:listener: all appropriate instances are blocking new connections” error, you can also configure the tnsnames.ora file with the UR=A parameter that will allow you to connect to a blocked instance.

We have been authenticated, so we are in. We can now successfully start up, shut down, and remotely administer this database using the SYSDBA account. Now, we have another user, TKYTE, who has been granted SYSDBA, but will not be able to connect remotely yet:

$ sqlplus tkyte/foobar@PDB1 as sysdba

ERROR:

ORA-01017: invalid username/password; logon denied

The reason for this is that TKYTE is not yet in the password file. In order to get TKYTE into the password file, we need to “regrant” that account the SYSDBA privilege:

$ sqlplus / as sysdba
SQL> alter session set container=PDB1;
SQL> grant sysdba to tkyte;
Grant succeeded.

SQL> exit
$ sqlplus tkyte/foobar@PDB1 as sysdba
SQL>

This created an entry in the password file for us, and Oracle will now keep the password in sync. If TKYTE alters their password, the old one will cease working for remote SYSDBA connections, and the new one will start working. The same process is repeated for any user who was a SYSDBA but is not yet in the password file.
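
You can verify who is in the password file at any time by querying V$PWFILE_USERS; a minimal sketch, which after the grant above should list both SYS and TKYTE:

SQL> select username, sysdba, sysoper from v$pwfile_users;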

Jun 22, 2022
The Storage Hierarchy in an Oracle Database- Files-2

The relationship between segments, extents, and blocks is shown in Figure 3-1.

Figure 3-1.  Segments, extents, and blocks

A segment is made up of one or more extents, and an extent is a logically contiguous allocation of blocks. A database may have up to six different block sizes in it.

Note  This feature of multiple block sizes was introduced for the purpose of making transportable tablespaces usable in more cases. The ability to transport a tablespace allows a DBA to move or copy the already formatted datafiles from one database and attach them to another—for example, to immediately copy all of the tables and indexes from an Online Transaction Processing (OLTP) database to a data warehouse (DW). However, in many cases, the OLTP database might be using a small block size, such as 2KB or 4KB, whereas the DW would be using a much larger one (8KB or 16KB). Without support for multiple block sizes in a single database, you wouldn’t be able to transport this information. Tablespaces with multiple block sizes should be used to facilitate transporting tablespaces; they are not generally used for anything else.

There will be the database default block size, which is the size specified in the initialization file during the CREATE DATABASE command. The SYSTEM tablespace will have this default block size always, but you can then create other tablespaces with nondefault block sizes of 2KB, 4KB, 8KB, 16KB, and, depending on the operating system, 32KB. The total number of block sizes is six if and only if you specified a nonstandard block size (not a power of two) during database creation. Hence, for all practical purposes, a database will have at most five block sizes: the default size and then four other nondefault sizes.
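
A minimal sketch of creating a tablespace with a nondefault block size, assuming an 8KB default block size and an spfile in use; the cache size, file name, and sizes are just example values (a buffer cache of the matching block size must exist before the tablespace can be created):

SQL> alter system set db_16k_cache_size = 64m scope=both;
SQL> create tablespace ts_16k
     datafile '/opt/oracle/oradata/CDB/ts_16k_01.dbf' size 100m
     blocksize 16k;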

Any given tablespace will have a consistent block size, meaning that every block in that tablespace will be the same size. A multisegment object, such as a table with a LOB column, may have each segment in a tablespace with a different block size, but any given segment (which is contained in a tablespace) will consist of blocks of exactly the same size.

Most blocks, regardless of their size, have the same general format, which looks something like Figure 3-2.

Figure 3-2.  The structure of a block

Exceptions to this format include LOB segment blocks and hybrid columnar compressed blocks in Exadata storage, for example, but the vast majority of blocks in your database will resemble the format in Figure 3-2. The block header contains information about the type of block (table block, index block, and so on), transaction information when relevant (only blocks that are transaction managed have this information—a temporary sort block would not, for example) regarding active and past transactions on the block, and the address (location) of the block on the disk.

The next two block components are found on the most common types of database blocks, those of heap-organized tables. We’ll cover database table types in much more detail in Chapter 10, but suffice it to say that most tables are of this type.

The table directory, if present, contains information about the tables that store rows in this block (data from more than one table may be stored on the same block). The row directory contains information describing the rows that are to be found on the block.

This is an array of pointers to where the rows are to be found in the data portion of the block. These three pieces of the block are collectively known as the block overhead, which is space used on the block that is not available for your data, but rather is used by Oracle to manage the block itself.

The remaining two pieces of the block are straightforward: there may be free space on a block, and then there will generally be used space that is currently storing data.

Now that you have a cursory understanding of segments, which consist of extents, which consist of blocks, let’s take a closer look at tablespaces and then at exactly how files fit into the big picture.

May 22, 2022
The Storage Hierarchy in an Oracle Database- Files-1

A database is made up of one or more tablespaces. A tablespace is a logical storage container in Oracle that comes at the top of the storage hierarchy and is made up of one or more datafiles.

These files might be cooked files in a file system, raw partitions, ASM-managed database files, or files on a clustered file system. A tablespace contains segments, as described next.

Segments

Segments are the major organizational structure within a tablespace. Segments are simply your database objects that consume storage—typically objects such as tables, indexes, undo segments, and so on.

Most times, when you create a table, you create a table segment. When you create a partitioned table, you do not create a table segment; rather, you create a segment per partition.

When you create an index, you normally create an index segment, and so on. Every object that consumes storage is ultimately stored in a single segment. There are undo segments, temporary segments, cluster segments, index segments, and so on.

Note It might be confusing to read “Every object that consumes storage is ultimately stored in a single segment.” You will find many CREATE statements that create multisegment objects. The confusion lies in the fact that a single CREATE statement may ultimately create objects that consist of zero, one, or more segments!

For example, CREATE TABLE T ( x int primary key, y clob ) will create four segments: one for the TABLE T, one for the index that will be created in support of the primary key, and two for the CLOB (one segment for the CLOB is the LOB index, and the other segment is the LOB data itself). On the other hand, CREATE TABLE T ( x int, y date ) cluster MY_CLUSTER will create zero segments (the cluster is the segment in this case). We’ll explore this concept further in Chapter 10.
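
You can see this for yourself with a quick test. A minimal sketch, run in a schema where you can create a throwaway table (a row is inserted first because, as noted under Extents below, segment creation may be deferred until data arrives; the index and LOB segment names are system generated, so yours will differ):

SQL> create table t ( x int primary key, y clob );
SQL> insert into t values ( 1, 'hello' );
SQL> select segment_name, segment_type
       from user_segments
      order by segment_name;

You should see one TABLE segment, one INDEX segment (for the primary key), and a LOBSEGMENT and LOBINDEX pair for the CLOB.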

Extents

Segments consist of one or more extents. An extent is a logically contiguous allocation of space in a file. (Files themselves, in general, are not contiguous on disk; otherwise, we would never need a disk defragmentation tool! Also, with disk technologies such as Redundant Array of Independent Disks (RAID), you might find that a single file also spans many physical disks.) Traditionally, every segment starts with at least one extent. Oracle 11g Release 2 introduced the concept of a “deferred” segment: a segment that will not immediately allocate an extent, so in that release and going forward, a segment might defer allocating its initial extent until data is inserted into it.

When an object needs to grow beyond its initial extent, it will request another extent be allocated to it. This second extent will not necessarily be located right next to the first extent on disk—it may very well not even be allocated in the same file as the first extent.

The second extent may be located very far away from the first extent, but the space within an extent is always logically contiguous in a file. Extents vary in size from one Oracle data block (explained shortly) to gigabytes.

Blocks

Extents, in turn, consist of blocks. A block is the smallest unit of space allocation in Oracle. Blocks are where your rows of data, or index entries, or temporary sort results are stored. A block is what Oracle typically reads from and writes to disk. Blocks in Oracle are generally one of four common sizes: 2KB, 4KB, 8KB, or 16KB (although 32KB is also permissible in some cases; there are restrictions in place as to the maximum size by operating system).

Note Per Oracle’s database reference documentation, the minimum block size is 2048, and larger block sizes must be a multiple of the operating system physical block size.
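
A minimal sketch of checking the default block size of your database, and the block size of each tablespace:

SQL> show parameter db_block_size
SQL> select tablespace_name, block_size from dba_tablespaces;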

May 22, 2022
Datafiles- Files

Datafiles, along with redo log files, are the most important type of files in the database. This is where all of your data will ultimately be stored. Every database has at least one datafile associated with it and typically has many more than one.

You can view the datafiles for your database by querying the data dictionary. The names of the datafiles will vary depending on whether you’re using RAC (with datafiles on ASM disk) and/or Oracle Managed Files (OMF).

For example, here are the datafiles in a RAC container database, using ASM disks, and the OMF naming convention:

SQL> select name from v$datafile;

NAME
+DATA/CDB/DATAFILE/system.257.1064287113
+DATA/CDB/DATAFILE/sysaux.258.1064287137
+DATA/CDB/DATAFILE/undotbs1.259.1064287153

+DATA/CDB/86B637B62FE07A65E053F706E80A27CA/DATAFILE/system.265.1064287787
+DATA/CDB/86B637B62FE07A65E053F706E80A27CA/DATAFILE/sysaux.266.1064287787
+DATA/CDB/DATAFILE/users.260.1064287153
+DATA/CDB/86B637B62FE07A65E053F706E80A27CA/DATAFILE/undotbs1.267.1064287787
+DATA/CDB/DATAFILE/undotbs2.269.1064288035
+DATA/CDB/BB1C6AAC137C6A4DE0536638A8C06678/DATAFILE/system.274.1064288517
+DATA/CDB/BB1C6AAC137C6A4DE0536638A8C06678/DATAFILE/sysaux.275.1064288517

The prior output shows three SYSTEM datafiles (where the Oracle data dictionary is stored). One is for the root container database, and the other two are associated with pluggable databases.

The long random string in the directory path is the GUID (unique identifier) associated with the pluggable databases when using OMF files.

Listed next are the datafiles in a single instance container database (not using OMF):

SQL> select name from v$datafile;

NAME

/opt/oracle/oradata/CDB/system01.dbf
/opt/oracle/oradata/CDB/sysaux01.dbf
/opt/oracle/oradata/CDB/undotbs01.dbf
/opt/oracle/oradata/CDB/pdbseed/system01.dbf
/opt/oracle/oradata/CDB/pdbseed/sysaux01.dbf
/opt/oracle/oradata/CDB/users01.dbf
/opt/oracle/oradata/CDB/pdbseed/undotbs01.dbf
/opt/oracle/oradata/CDB/PDB1/system01.dbf
/opt/oracle/oradata/CDB/PDB1/sysaux01.dbf
/opt/oracle/oradata/CDB/PDB1/undotbs01.dbf
/opt/oracle/oradata/CDB/PDB1/users01.dbf
/opt/oracle/oradata/CDB/PDB2/system01.dbf
/opt/oracle/oradata/CDB/PDB2/sysaux01.dbf
/opt/oracle/oradata/CDB/PDB2/undotbs01.dbf
/opt/oracle/oradata/CDB/PDB2/users01.dbf

The datafiles in this directory /opt/oracle/oradata/CDB belong to the root container.
The datafiles in subdirectories belong to pluggable databases. Each database will usually (minimally) have the following datafiles:

• SYSTEM: Stores the Oracle data dictionary
• SYSAUX: Contains other non-dictionary objects
• UNDO: Stores the undo segments used for rollback operations
• USERS: A default tablespace to be used for application data (if no other tablespace is designated)
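
To see which file belongs to which tablespace, you can query DBA_DATA_FILES (or CDB_DATA_FILES from the root of a container database); a minimal sketch:

SQL> select tablespace_name, file_name
       from dba_data_files
      order by tablespace_name;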

After a brief review of file system types, we’ll discuss how Oracle organizes these files and how data is organized within them. To understand this, you need to know what tablespaces, segments, extents, and blocks are. These are the units of allocation that Oracle uses to hold objects in the database, and I describe them in detail shortly.

Apr 22, 2022
Tagging Trace Files- Files-2

PROBLEM_ID PROBLEM_KEY

LAST_INCIDENT LASTINC_TIME

There was recently an ORA-700 error in my database. I can now see what was affected by that error by issuing the show incident command:

adrci> show incident

INCIDENT_ID  PROBLEM_KEY                  CREATE_TIME
-----------  ---------------------------  ---------------------------------
          1  ORA 700 [pga physmem limit]  2021-01-24 19:33:31.863000 +00:00
       2402  ORA 700 [pga physmem limit]  2021-01-24 19:34:50.006000 +00:00
       4803  ORA 700 [pga physmem limit]  2021-01-24 19:35:20.619000 +00:00
       7204  ORA 700 [pga physmem limit]  2021-01-24 19:42:03.463000 +00:00
       9605  ORA 700 [pga physmem limit]  2021-01-24 19:49:46.391000 +00:00

I can see there are several incidents, and I can identify the information related to each incident via the show tracefile command:

adrci> show tracefile -I 2402
diag/rdbms/cdb/CDB/incident/incdir_2402/CDB_ora_5317_i2402.trc

This shows me the location of the trace file for the incident number listed. Further, I can see a lot of detail about the incident if I so choose:

adrci> show incident -mode detail -p "incident_id=2402"
ADR Home = /opt/oracle/diag/rdbms/cdb/CDB:

INCIDENT INFO RECORD 1
   INCIDENT_ID            2402
   STATUS                 ready
   PROBLEM_ID             1
   CLOSE_TIME
   FLOOD_CONTROLLED       none
   ERROR_FACILITY         ORA
   ERROR_NUMBER           700
   ERROR_ARG1             pga physmem limit

And, finally, I can create a “package” of the incident that is useful for support. The package will contain everything a support analyst needs to begin working on the problem.
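
A minimal sketch of doing that with ADRCI's Incident Packaging Service (the package number and the target directory are just examples; use the package number the create step reports):

adrci> ips create package incident 2402
adrci> ips generate package 1 in /tmp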

This section is not intended to be a full overview or introduction to the ADRCI utility, which is documented fully in the Oracle Database Utilities manual. Rather, I just wanted to introduce the existence of the tool—a tool that makes using trace files easy.

The database information is important to have when you go to http://support.oracle.com to file the service request or to search to see if what you are experiencing is a known problem. In addition, you can see the Oracle instance on which the error occurred. It is quite common to have many instances running concurrently, so isolating the problem to a single instance is useful.

Here’s another section of the trace file to be aware of:

*** SESSION ID:(266.55448) 2021-02-15T15:39:09.939349+00:00
*** CLIENT ID:() 2021-02-15T15:39:09.939354+00:00
*** SERVICE NAME:(SYS$USERS) 2021-02-15T15:39:09.939358+00:00
*** MODULE NAME:(sqlplus@localhost (TNS V1-V3)) 2021-02-15T15:39:09.939364+00:00
*** ACTION NAME:() 2021-02-15T15:39:09.939368+00:00
*** CLIENT DRIVER:(SQL*PLUS) 2021-02-15T15:39:09.939372+00:00
*** CONTAINER ID:(1) 2021-02-15T15:39:09.939376+00:00

This part of the trace file shows the session information available in the columns ACTION and MODULE from V$SESSION. Here, we can see that it was a SQL*Plus session that caused the error to be raised (you and your developers can and should set the ACTION and MODULE information; some environments such as Oracle Forms and APEX already do this for you).
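
Setting these values from your own code is simple; a minimal sketch using the supplied DBMS_APPLICATION_INFO package (the module and action names here are just examples):

SQL> begin
       dbms_application_info.set_module(
         module_name => 'ORDER_ENTRY',
         action_name => 'PROCESS_ORDER' );
     end;
     /

Once set, these values appear in V$SESSION and, as shown above, in any trace files your session generates.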

Additionally, we have the SERVICE NAME. This is the actual service name used to connect to the database—SYS$USERS, in this case—indicating we didn’t connect via a TNS service. If we logged in using user/pass@CDB, we might see

*** SERVICE NAME:(CDB) 2021-02-15T15:55:42.704845+00:00

Here, CDB is the service name (not the TNS connect string; rather, it’s the ultimate service registered in a TNS listener to which it connected). This is also useful in tracking down which process or module is affected by this error.

Lastly, before we get to the actual error, we can see the session ID (266 in this example), session serial number (55448 in this example), and related date/time information (all releases) as further identifying information:

*** SESSION ID:(266.55448) 2021-02-15T15:39:09.939349+00:00

From here, you can dig further into the trace file and try to determine what is causing the problem. The other important pieces of information are the error code (typically 600, 3113, or 7445) and other arguments associated with the error code. Using these, along with some of the stack trace information that shows the set of Oracle internal subroutines that were called in order, we might be able to find an existing bug (and workarounds, patches, and so on).

Typically, you’ll do a Google search of any relevant error messages. If you don’t readily find an answer, you can create a service request with Oracle Support and attach the trace files to the request. Oracle Support can help you identify if you’ve hit a bug or provide options on workarounds.
