Jul 22, 2024
Database Block Buffer Cache - Memory Structures

So far, we have looked at relatively small components of the SGA. Now we are going to look at one that is potentially huge in size. The block buffer cache is where Oracle stores database blocks before writing them to disk and after reading them in from disk.

This is a crucial area of the SGA for us. Make it too small and our queries will take forever to run. Make it too big and we’ll starve other processes (e.g., we won’t leave enough room for a dedicated server to create its PGA, and we won’t even get started).

We have three places to store cached blocks from individual segments in the SGA:

• Default pool: The location where all segment blocks are normally cached. This is the original—and, previously, the only—buffer pool.
• Keep pool: An alternate buffer pool where by convention you assign segments that are accessed fairly frequently, but still get aged out of the default buffer pool due to other segments needing space.
• Recycle pool: An alternate buffer pool where by convention you assign large segments that you access very randomly and which would therefore cause excessive buffer flushing of many blocks from many segments. There’s no benefit to caching such segments because by the time you wanted the block again, it would have been aged out of the cache. You would separate these segments out from the segments in the default and keep pools so they would not cause those blocks to age out of the cache.

Note that in the keep and recycle pool descriptions I used the phrase “by convention.” There is nothing in place to ensure that you use either the keep pool or the recycle pool in the fashion described. In fact, the three pools manage blocks in a mostly identical fashion; they do not have radically different algorithms for aging or caching blocks.
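By convention, you place a segment in one of these pools via its storage attributes. Here is a minimal sketch (the table names are hypothetical, and the keep and recycle pools must already be sized):

-- hypothetical tables, for illustration only
SQL> create table hot_lookup ( id number primary key, val varchar2(30) )
     storage ( buffer_pool keep );

SQL> alter table big_random_scan storage ( buffer_pool recycle );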

The goal here was to give the DBA the ability to segregate segments into hot, warm, and “do not care to cache” areas. The theory was that objects in the default pool would be hot enough (i.e., used enough) to warrant staying in the cache all by themselves.

The cache would keep them in memory since they were very popular blocks. If you had some segments that were fairly popular but not really hot, these would be considered the warm blocks. These segments’ blocks could get flushed from the cache to make room for blocks you used infrequently (the “do not care to cache” blocks).

To keep these warm segments’ blocks cached, you could do one of the following:

• Assign these segments to the keep pool, in an attempt to let the warm blocks stay in the buffer cache longer.

• Assign the “do not care to cache” segments to the recycle pool, keeping the recycle pool fairly small so as to let the blocks come into the cache and leave the cache rapidly (decrease the overhead of managing them all).

Having to do one of these two things increased the management work the DBA had to perform, as there were three caches to think about, size, and assign objects to. Remember also that there is no sharing among them, so if the keep pool has lots of unused space, it won’t give it to the overworked default or recycle pool.
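For example, each pool is sized with its own parameter (the values here are purely illustrative):

-- sizes are illustrative, not recommendations
SQL> alter system set db_keep_cache_size = 256m scope=both;
SQL> alter system set db_recycle_cache_size = 64m scope=both;

These allocations are in addition to DB_CACHE_SIZE (the default pool); setting them does not carve space out of the default pool.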

All in all, these pools were generally regarded as a very fine, low-level tuning device, only to be used after most other tuning alternatives had been looked at (if I could rewrite a query to do one-tenth the I/O rather than set up multiple buffer pools, that would be my choice).

There are up to four more optional caches, the DB_nK_CACHE_SIZE caches, to consider in addition to the default, keep, and recycle pools. These caches were added in support of multiple block sizes in the database. A database can have a default block size, which is the size of the blocks stored in the default, keep, or recycle pool, as well as up to four nondefault block sizes, as explained in Chapter 3.
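For example, to cache a 16KB block size tablespace in a database with an 8KB default block size, you would first configure the matching cache (the size and datafile path here are illustrative):

-- size and datafile path are illustrative
SQL> alter system set db_16k_cache_size = 128m scope=both;
SQL> create tablespace ts_16k
     datafile '/u01/oradata/ts_16k_01.dbf' size 100m
     blocksize 16k;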

The blocks in these buffer caches are managed in the same way as the blocks in the original default pool—there are no special algorithm changes for them either. Let’s now move on to see how the blocks are managed in these pools.

May 22, 2024
PGA and UGA Wrap-Up - Memory Structures

So far, we have looked at two memory structures: the PGA and the UGA. You should understand now that the PGA is private to a process. It is the set of variables that an Oracle dedicated or shared server needs to have independent of a session.

The PGA is a “heap” of memory in which other structures may be allocated. The UGA is also a heap of memory in which various session-specific structures may be defined.

The UGA is allocated from the PGA when you use a dedicated server to connect to Oracle and from the SGA under a shared server connection.

This implies that when using a shared server, you must size your SGA’s large pool to have enough space to cater to every possible user that will ever connect to your database concurrently.
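The large pool is sized via the LARGE_POOL_SIZE parameter; a quick sketch (the value is illustrative, not a recommendation):

SQL> show parameter large_pool_size
-- value is illustrative only
SQL> alter system set large_pool_size = 512m scope=both;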

So, the SGA of a database supporting shared server connections is generally much larger than the SGA for a similarly configured dedicated server mode-only database. We’ll cover the SGA in more detail next.

The System Global Area

Every Oracle instance has one big memory structure referred to as the System Global Area (SGA). This is a large, shared memory structure that every Oracle process will access at one point or another. It varies in size from dozens of megabytes on small test systems, to a few gigabytes on medium-to-large systems, up to hundreds of gigabytes for really big systems.

On UNIX/Linux, the SGA is a physical entity you can “see” from the OS command line. It is physically implemented as a shared memory segment—a stand-alone piece of memory to which processes may attach. It is possible to have an SGA on a system without having any Oracle processes; the memory stands alone. It should be noted, however, that if you have an SGA without any Oracle processes, this is an indication that the database crashed in some fashion. It is an unusual situation, but it can happen. This is what an SGA “looks like” on Oracle Linux, in the output of the ipcs shared memory report (the columns are the key, shmid, owner, perms, bytes, and nattch):

0x00000000 32768 oracle 600 9138176 116
0x00000000 32769 oracle 600 1560281088 58
0x00000000 32770 oracle 600 7639040 58
0x322379e0 32771 oracle 600 12288 58

One SGA is represented here, and the report shows the OS account that owns the SGA (oracle for all of these in this example) and the size of the SGA. On Windows, you really can’t see the SGA as a distinct entity the way you can in UNIX/Linux. Because on the Windows platform Oracle executes as a single process with a single address space, the SGA is allocated as private memory to the oracle.exe process. If you use the Windows Task Manager or some other performance tool, you can see how much memory oracle.exe has allocated, but you can’t see the SGA vs. any other piece of allocated memory.

Note Unless you have my parameter settings and you are running my exact same version of Oracle on my exact same OS, you will almost certainly see different numbers than I do. The SGA sizing is very version/OS/parameter dependent.

Within Oracle itself, you can see the SGA regardless of the platform, using another magic V$ view called V$SGASTAT. It might look as follows:

SQL> compute sum of bytes on pool
SQL> break on pool skip 1
SQL> select pool, name, bytes from v$sgastat order by pool, name;

The SGA is broken up into various pools. Here are the major ones you’ll see:

• Shared pool: The shared pool contains shared cursors, stored procedures, state objects, dictionary caches, and many dozens of other bits of data. If a user executes a SQL statement, Oracle will use the shared pool.
• Database buffer cache (block buffers): Data blocks read from disk as users query and modify data. Contains the most recently used data blocks.
• Fixed SGA: Contains internal housekeeping information regarding the state of the instance and database.
• Redo log buffer: A circular buffer that contains information regarding changes to the database. These changes are written to the online redo logs on disk. This information is used for database recovery.
• Java pool: The Java pool is a fixed amount of memory allocated for the JVM running in the database. The Java pool may be resized online while the database is up and running.
• Large pool: Optional memory area used by shared server connections for session memory, by parallel execution features for message buffers, and by RMAN backup for disk I/O buffers. This pool is resizable online.
• Streams pool: This is a pool of memory used by data sharing tools such as Oracle GoldenGate, Oracle Streams, and Data Pump. This pool is resizable online. If the Streams pool is not configured and you use the Streams functionality, Oracle will use up to ten percent of the shared pool for streams memory.
• Flashback buffer: Optional memory area used when Flashback Database is enabled. The recovery writer process copies modified blocks from the buffer cache to the flashback buffer; from there they are written to the Flashback Database logs on disk.
• Shared I/O pool: Used for I/O operations on SecureFile Large Objects. This area is used for SecureFile deduplication, encryption, and compression.
• In-memory area: Optional memory area that allows tables and partitions to be stored in a columnar format. Useful for analytic operations that operate on a few columns returning many rows (as opposed to an OLTP application that returns a few rows with many columns).
• Memoptimize pool: Optional memory component that optimizes key-based queries.
• Database Smart Flash Cache: Optional memory extension to the database buffer cache for Linux and Solaris systems. Resides on solid-state storage devices that use flash memory.
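Another convenient view here is V$SGAINFO, which lists these components along with whether each can be resized online; a quick look (output elided):

SQL> select name, bytes, resizeable from v$sgainfo order by name;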

A typical SGA might look as shown in Figure 4-1. The optional memory components are indicated by the dashed outlines.

Figure 4-1.  Typical SGA

Dec 22, 2023
Change Tracking File - Files

The change tracking file is an optional file for use with Oracle 10g Enterprise Edition and above. The sole purpose of this file is to track which blocks have been modified since the last incremental backup.

With this, the Recovery Manager (RMAN) tool can back up only the database blocks that have actually been modified without having to read the entire database.

As Oracle is running, and as blocks are modified, Oracle optionally maintains a file that tells RMAN what blocks have been changed. Creating this change tracking file is rather simple and is accomplished via the ALTER DATABASE command:

$ mkdir -p /opt/oracle/product/btc
$ sqlplus / as sysdba
SQL> alter database enable block change tracking using file '/opt/oracle/product/btc/changed_blocks.btc';
Database altered.

Caution I’ll say this from time to time throughout the book: please bear in mind that commands that set parameters, modify the database, or make fundamental changes should not be done lightly and definitely should be tested prior to performing them on your “real” system. The preceding command will, in fact, cause the database to do more work. It will consume resources.

To turn off and remove the block change tracking file, you’d use the ALTER DATABASE command once again:

SQL> alter database disable block change tracking;
Database altered.

Note that this command will erase the block change tracking file. It does not just disable the feature—it removes the file as well.
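You can confirm the state of the feature, and the file location, at any time by querying V$BLOCK_CHANGE_TRACKING (output elided):

SQL> select status, filename, bytes from v$block_change_tracking;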

Note On certain operating systems, such as Windows, you might find that if you run my example—creating a block change tracking file and then disabling it—the file appears to still exist. This is an OS-specific issue—it does not happen on many operating systems. It will happen only if you CREATE and DISABLE the change tracking file from a single session. The session that created the block change tracking file will leave that file open, and some operating systems will not permit you to erase a file that has been opened by a previous process (e.g., the session process that created the file). This is harmless; you just need to remove the file yourself later.

You can enable this new block change tracking feature in either ARCHIVELOG or NOARCHIVELOG mode. But remember, a database in NOARCHIVELOG mode, where the redo log generated daily is not retained, can’t recover all changes in the event of a media (disk or device) failure! A NOARCHIVELOG mode database will lose data someday. We will cover these two database modes in more detail in Chapter 9.
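A quick way to see which mode a database is currently in is to query V$DATABASE; it will report either ARCHIVELOG or NOARCHIVELOG:

SQL> select log_mode from v$database;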

Flashback Logs

Flashback logs are used to support the FLASHBACK DATABASE command. Flashback logs contain “before images” of modified database blocks that can be used to return the database to the way it was at some prior point in time.

Aug 22, 2023
The Process Global Area and User Global Area - Memory Structures

The PGA is a process-specific piece of memory. In other words, it is memory specific to a single operating system process or thread.

This memory is not accessible by any other process or thread in the system. It is typically allocated via the C runtime call malloc(), or via memory mapping with mmap(), and it may grow (or even shrink) at runtime.

The PGA is never allocated in Oracle’s SGA; it is always allocated locally by the process or thread—the P in PGA stands for process or program; it is not shared.

The UGA is, in effect, your session’s state. It is memory that your session must always be able to get to. The location of the UGA is dependent on how you connect to Oracle.

If you connect via a shared server, the UGA must be stored in a memory structure that every shared server process has access to—and that’s the SGA. In this way, your session can use any one of the shared servers, since any of them can read and write your session’s data.

On the other hand, if you are using a dedicated server connection, there’s no need for universal access to your session state, and the UGA becomes virtually synonymous with the PGA; it will, in fact, be contained in the PGA of your dedicated server.

When you look at the system statistics, you’ll find the UGA reported in the PGA in dedicated server mode (the PGA will be greater than or equal to the UGA memory used; the PGA memory size will include the UGA size as well).
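You can observe this yourself by querying your session’s own statistics; a small sketch (run from a dedicated server connection; output elided):

SQL> select a.name, b.value
       from v$statname a, v$mystat b
      where a.statistic# = b.statistic#
        and a.name like '%ga memory%';

The session pga memory figures will always be greater than or equal to the corresponding session uga memory figures.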

So, the PGA contains process memory and may include the UGA. The other areas of PGA memory are generally used for in-memory sorting, bitmap merging, and hashing. It would be safe to say that, besides the UGA memory, these are the largest contributors by far to the PGA.

There are two ways to manage memory in the PGA: manual and automatic. The manual method should not be used (unless you’re on an old version of Oracle and don’t have a choice).

The automatic PGA memory management is the recommended technique that you should use. The automatic method is much simpler and more efficient in managing memory. The manner in which memory is allocated and used differs greatly in each case, so we’ll discuss each in turn.
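Which method is in effect is controlled by the WORKAREA_SIZE_POLICY parameter (AUTO by default) together with PGA_AGGREGATE_TARGET, and both are easy to check:

SQL> show parameter workarea_size_policy
SQL> show parameter pga_aggregate_target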

Jul 22, 2023
Manual PGA Memory Management - Memory Structures

There are only a few scenarios that I can think of where manual PGA memory management may be appropriate. One is if you’re running an old version of Oracle and do not have the option to upgrade.

Another is when you run large batch jobs during periods when they are the only activity in the instance; in that case, manual PGA memory management may provide performance benefits.

Therefore, it’s possible you may find yourself in an environment where this type of PGA memory management is used. Other than those scenarios, the manual management style should be avoided.

In manual PGA memory management, the following are the parameters that have the largest impact on the size of your PGA, outside of the memory allocated by your session for PL/SQL tables and other variables:

• SORT_AREA_SIZE: The total amount of RAM that will be used to sort information before swapping out to disk (using disk space in the temporary tablespace the user is assigned to).

• SORT_AREA_RETAINED_SIZE: The amount of memory that will be used to hold sorted data after the sort is complete. That is, if SORT_AREA_SIZE is 512KB and SORT_AREA_RETAINED_SIZE is 256KB, your server process would use up to 512KB of memory to sort data during the initial processing of the query. When the sort is complete, the sorting area would “shrink” down to 256KB, and any sorted data that does not fit in that 256KB would be written out to the temporary tablespace.

• HASH_AREA_SIZE: The amount of memory your server process can use to store hash tables in memory. These structures are used during a hash join, typically when joining a large set with another set. The smaller of the two sets would be hashed into memory, and anything that didn’t fit in the hash area region of memory would be stored in the temporary tablespace by the join key.

Note Before using the parameters in the prior list, you must set the WORKAREA_SIZE_POLICY parameter to MANUAL.

These parameters control the amount of space Oracle will use to sort or hash data in memory before using the temporary tablespace on disk, and how much of that memory segment will be retained after the sort is done. The SORT_AREA_SIZE (minus SORT_AREA_RETAINED_SIZE) calculated value is generally allocated out of your PGA, and the SORT_AREA_RETAINED_SIZE value will be in your UGA.
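Putting the preceding note into practice, a session configured for manual PGA memory management might look like the following minimal sketch (the sizes are purely illustrative):

SQL> alter session set workarea_size_policy = manual;
-- sizes below are illustrative only
SQL> alter session set sort_area_size = 524288;
SQL> alter session set sort_area_retained_size = 262144;
SQL> alter session set hash_area_size = 1048576;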

Here are the important things to remember about using the *_AREA_SIZE parameters:

• These parameters control the maximum amount of memory used by a SORT, HASH, or BITMAP MERGE operation.

• A single query may have many operations taking place that use this memory, and multiple sort/hash areas could be created. Remember that you may have many cursors opened simultaneously, each with its own SORT_AREA_RETAINED needs. So, if you set the sort area size to 10MB, you could use 10, 100, 1000, or more megabytes of RAM in your session. These settings are not session limits; rather, they are limits on a single operation, and your session could have many sorts in a single query or many queries open that require a sort.

• The memory for these areas is allocated on an “as needed” basis. If you set the sort area size to 1GB, it doesn’t mean you’ll allocate 1GB of RAM. It only means that you’ve given the Oracle process permission to allocate up to that much memory for a sort or hash operation.

Now that we’ve briefly reviewed manual PGA memory management, let’s move on to what you should be using, automatic PGA memory management.

Jun 22, 2023
Tablespaces - Files

As noted earlier, a tablespace is a container—it holds segments. Each segment belongs to exactly one tablespace. A tablespace may have many segments within it. All of the extents for a given segment will be found in the tablespace associated with that segment. Segments never cross tablespace boundaries.

A tablespace itself has one or more datafiles associated with it. An extent for any given segment in a tablespace will be contained entirely within one datafile. However, a segment may have extents from many different datafiles.

Graphically, a tablespace might look like Figure 3-3.

Figure 3-3.  A tablespace containing two datafiles, three segments, and four extents

Figure 3-3 shows a tablespace named USER_DATA. It consists of two datafiles, user_data01.dbf and user_data02.dbf. It has three segments allocated to it: T1, T2, and I1 (probably two tables and an index).

The tablespace has four extents allocated in it, and each extent is depicted as a logically contiguous set of database blocks. Segment T1 consists of two extents, one extent in each file. Segments T2 and I1 each have one extent depicted. If we need more space in this tablespace, we could either resize the datafiles already allocated to the tablespace or we could add a third datafile to it.
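Either approach is a single DDL statement, for example (the paths and sizes here are hypothetical):

-- paths and sizes are hypothetical
SQL> alter database datafile '/u01/oradata/user_data01.dbf' resize 200m;
SQL> alter tablespace user_data add datafile '/u01/oradata/user_data03.dbf' size 100m;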

A tablespace is a logical storage container in Oracle. As developers, we will create segments in tablespaces. We will never get down to the raw file level—we don’t specify that we want our extents to be allocated in a specific file (we can, but in general we don’t). Rather, we create objects in tablespaces and Oracle takes care of the rest. If at some point in the future, the DBA decides to move our datafiles around on disk to more evenly distribute I/O, that is OK with us. It will not affect our processing at all.

Storage Hierarchy Summary

In summary, the hierarchy of storage in Oracle is as follows:

1. A database is made up of one or more tablespaces.

2. A tablespace is made up of one or more datafiles. These files might be cooked files in a file system, raw partitions, ASM-managed database files, or files on a clustered file system. A tablespace contains segments.

3. A segment (TABLE, INDEX, and so on) is made up of one or more extents. A segment exists in a tablespace, but may have data in many datafiles within that tablespace.

4. An extent is a logically contiguous set of blocks on disk. An extent is in a single tablespace and, furthermore, is always in a single file within that tablespace.

5. A block is the smallest unit of allocation in the database. A block is the smallest unit of I/O used by a database on datafiles.
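You can see this hierarchy in the data dictionary. For example, the following query (using the USER_DATA tablespace from Figure 3-3; output elided) lists each extent along with the file and starting block it occupies:

SQL> select segment_name, extent_id, file_id, block_id, blocks
       from dba_extents
      where tablespace_name = 'USER_DATA'
      order by segment_name, extent_id;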

Nov 22, 2022
Control Files - Files

Control files are fairly small files (usually megabytes in size) that contain a directory of the other files Oracle needs.

The parameter file tells the instance where the control files are, and the control files tell the instance where the database and online redo log files are. You can view the location of the control files via the following:

SQL> show parameter control_files

NAME                      TYPE        VALUE
------------------------- ----------- ------------------------------
control_files             string      /opt/oracle/oradata/CDB/control01.ctl,
                                      /opt/oracle/oradata/CDB/control02.ctl

The control files also tell Oracle other things, such as information about checkpoints that have taken place, the name of the database (which should match the db_name parameter in the parameter file), the timestamp of the database as it was created, an archive redo log history (this can make a control file large in some cases), RMAN information, and so on.

Control files should be multiplexed either by hardware (RAID) or by Oracle when RAID or mirroring is not available. More than one copy should exist, and the copies should be stored on separate disks to avoid losing them in case you have a disk failure. It is not fatal to lose your control files—it just makes recovery that much harder.
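From within the database, V$CONTROLFILE lists each copy, and multiplexing is simply a matter of naming more than one file in the CONTROL_FILES parameter. A sketch (the paths are hypothetical; the change takes effect at the next restart, after you have copied the file to the new location):

SQL> select name from v$controlfile;
-- paths are hypothetical
SQL> alter system set control_files =
     '/u01/oradata/CDB/control01.ctl',
     '/u02/oradata/CDB/control02.ctl' scope=spfile;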

Control files are something a developer will probably never have to actually deal with. To a DBA, they are an important part of the database, but to a software developer they are not really relevant.

Redo Log Files

Redo log files are crucial to the Oracle database. These are the transaction logs for the database. They are generally used only for recovery purposes, but they can be used for the following as well:

• Instance recovery after a system crash
• Media recovery after a datafile restore from backup
• Standby database processing
• Input into GoldenGate—redo log mining processes for information sharing (a fancy way of saying replication)
• Allowing administrators to inspect historical database transactions through the Oracle LogMiner utility

Their main purpose in life is to be used in the event of an instance or media failure or as a method of maintaining a standby database for failover. If the power goes off on your database machine, causing an instance failure, Oracle will use the online redo logs to restore the system to exactly the point it was at immediately prior to the power outage. If your disk drive containing your datafile fails permanently, Oracle will use archived redo logs, as well as online redo logs, to recover a backup of that drive to the correct point in time. Additionally, if you “accidentally” drop a table or remove some critical information and commit that operation, you can restore a backup and have Oracle restore it to the point just before the accident using these online and archived redo log files.

Virtually every operation you perform in Oracle generates some amount of redo to be written to the online redo log files. When you insert a row into a table, the end result of that insert is written to the redo logs. When you delete a row, the fact that you deleted that row is written. When you drop a table, the effects of that drop are written to the redo log. The data from the table you dropped is not written; however, the recursive SQL that Oracle performs to drop the table does generate redo. For example, Oracle will delete a row from the SYS.OBJ$ table (and other internal dictionary objects), and this will generate redo, and if various modes of supplemental logging are enabled, the actual DROP TABLE statement will be written into the redo log stream.

Some operations may be performed in a mode that generates as little redo as possible. For example, I can create an index with the NOLOGGING attribute. This means that the initial creation of the index data will not be logged, but any recursive SQL Oracle performed on my behalf will be. For example, the insert of a row into SYS.OBJ$ representing the existence of the index will be logged, as will all subsequent modifications of the index using SQL inserts, updates, and deletes. But the initial writing of the index structure to disk will not be logged.
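For example (the table and index names here are hypothetical):

-- t and t_idx are hypothetical names
SQL> create index t_idx on t(x) nologging;

Only the bulk write of the index blocks skips redo; the dictionary changes describing the new index are still fully logged.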

I’ve referred to two types of redo log file: online and archived. We’ll take a look at each in the sections that follow. In Chapter 9, we’ll take another look at redo in conjunction with undo segments, to see what impact they have on you as a developer. For now, we’ll just concentrate on what they are and what their purpose is.

Oct 22, 2022
Temp Files - Files

Temporary datafiles (temp files) in Oracle are a special type of datafile. Oracle will use temporary files to store the intermediate results of large sort operations and hash operations, as well as to store global temporary table data, or resultset data, when there is insufficient memory to hold it all in RAM. Temporary tablespaces can also hold the UNDO generated by operations performed on global temporary tables. Permanent data objects, such as a table or an index, will never be stored in a temp file, but the contents of a temporary table and its indexes would be. So, you’ll never create your application tables in a temp file, but you might store data there when you use a temporary table.

Temp files are treated in a special way by Oracle. Normally, every change you make to an object will be recorded in the redo logs; these transaction logs can be replayed at a later date to “redo a transaction,” which you might do during recovery from failure. Temp files are excluded from this process. Specifically, transactions in global temporary tables (located in temp files) never have REDO generated for them, although they can have UNDO generated. Thus, there may be REDO generated working with temporary tables since UNDO is always protected by REDO, as you will see in detail in Chapter 9. The UNDO generated for global temporary tables is to support rolling back work you’ve done in your session, either due to an error processing data or because of some general transaction failure. A DBA never needs to back up a temporary datafile, and, in fact, attempting to do so would be a waste of time, as you can never recover a temporary datafile.

Note In Oracle 12c and above, the UNDO generated for global temporary tables may be stored in the temporary tablespace. By default, UNDO will be generated into the permanent UNDO tablespace, just as in prior releases. The TEMP_UNDO_ENABLED parameter, settable at the system level (init.ora) or at the session level, may be set to TRUE to cause the UNDO generated for global temporary tables to be stored in a temp file. In this manner, no REDO will be generated for these operations. We will investigate this further in Chapter 9.
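Enabling it for a single session is a one-liner:

SQL> alter session set temp_undo_enabled = true;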

One of the nuances of true temp files is that if the OS permits it, the temporary files will be created sparse—that is, they will not actually consume disk storage until they need to. You can see that easily in this example (on Oracle Linux):

SQL> !df -h /tmp
Filesystem                    Size  Used Avail Use% Mounted on
/dev/mapper/VolGroup-lv_root   50G  6.5G   41G  14% /

SQL> create temporary tablespace temp_huge tempfile '/tmp/temp_huge.dbf' size 2g;
Tablespace created.

SQL> !df -h /tmp
Filesystem                    Size  Used Avail Use% Mounted on
/dev/mapper/VolGroup-lv_root   50G  6.5G   41G  14% /

SQL> !ls -l /tmp/temp_huge.dbf
-rw-r-----. 1 oracle oinstall 2147491840 Feb 15 18:06 /tmp/temp_huge.dbf

Note The UNIX/Linux command df shows “disk free” space. This command showed that I have 41GB free in the file system containing /tmp before I added a 2GB temp file to the database. After I added that file, I still had 41GB free in the file system.

Apparently, it didn’t take much storage to hold that file. If we look at the ls output, it appears to be a normal 2GB file, but it is, in fact, consuming only a few kilobytes of storage currently. So we could actually create hundreds of these 2GB temporary files, even though we have roughly 41GB of disk space free. Sounds great—free storage for all! The problem is, as we start to use these temp files and they start expanding out, we would rapidly hit errors stating “no more space.” Since the space is allocated or physically assigned to the file as needed by the OS, we stand a definite chance of running out of room (especially if after we create the temp files, someone else fills up the file system with other stuff).

How to solve this differs from OS to OS. On UNIX/Linux, you can use dd to fill the file with data, causing the OS to physically assign disk storage to the file, or use cp to create a nonsparse file, for example:

SQL> !cp --sparse=never /tmp/temp_huge.dbf /tmp/temp_huge_not_sparse.dbf
SQL> !df -h /tmp
Filesystem                    Size  Used Avail Use% Mounted on
/dev/mapper/VolGroup-lv_root   50G  8.5G   39G  19% /

SQL> drop tablespace temp_huge including contents and datafiles;
Tablespace dropped.

SQL> create temporary tablespace temp_huge tempfile '/tmp/temp_huge_not_sparse.dbf' reuse;
Tablespace created.

After copying the sparse 2GB file to /tmp/temp_huge_not_sparse.dbf and creating the temporary tablespace using that temp file with the REUSE option, we are assured that the temp file has allocated all of its file system space, and our database actually has 2GB of temporary space to work with.

Note In my experience, Windows NTFS does not do sparse files; the sparse-file behavior described here applies to the UNIX/Linux variants. On the plus side, if you have to create a 15GB temporary tablespace on UNIX/Linux with sparse temp file support, you’ll find it happens very fast (instantaneously); just make sure you have 15GB free and remember to reserve it in your mind.

Aug 22, 2022
Archived Redo Log - Files

The Oracle database can run in one of two modes: ARCHIVELOG mode and NOARCHIVELOG mode. The difference between these two modes is simply what happens to a redo log file when Oracle goes to reuse it. “Will we keep a copy of that redo or should Oracle just overwrite it, losing it forever?” is an important question to answer. Unless you keep this file, you can’t recover data from a backup to the current point in time.

Say you take a backup once a week on Saturday. Now, on Friday afternoon, after you have generated hundreds of redo logs over the week, your hard disk fails. If you have not been running in ARCHIVELOG mode, the only choices you have right now are as follows:

• Drop the tablespace(s) associated with the failed disk. Any tablespace that had a file on that disk must be dropped, including the contents of that tablespace. If the SYSTEM tablespace (Oracle’s data dictionary) or some other important system-related tablespace like your UNDO tablespace is affected, you can’t do this. You will have to use the next option instead.

• Restore last Saturday’s data and lose all of the work you did that week.

Neither option is very appealing. Both imply that you lose data. If you had been executing in ARCHIVELOG mode, on the other hand, you simply would have found another disk and restored the affected files from Saturday’s backup onto it. Then, you would have applied the archived redo logs and, ultimately, the online redo logs to them (in effect replaying the week’s worth of transactions in fast-forward mode). You lose nothing. The data is restored to the point of the failure.
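Switching a database into ARCHIVELOG mode is straightforward, although it requires a brief outage; a minimal sketch, run as SYSDBA:

SQL> shutdown immediate
SQL> startup mount
SQL> alter database archivelog;
SQL> alter database open;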

People frequently tell me they don’t need ARCHIVELOG mode for their production systems. I have yet to meet anyone who was correct in that statement. I believe that a system is not a production system unless it is in ARCHIVELOG mode. A database that is not in ARCHIVELOG mode will, someday, lose data. It is inevitable; you will lose data (not might, but will) if your database is not in ARCHIVELOG mode.

“We are using RAID-5, so we are totally protected” is a common excuse. I’ve seen cases where, due to a manufacturing error, all of the disks in a RAID set froze at about the same time. I’ve seen cases where the hardware controller introduced corruption into the datafiles, so people safely protected corrupt data with their RAID devices. RAID also does nothing to protect you from operator error, one of the most common causes of data loss. RAID does not mean the data is safe: it might be more available, and it might be safer, but data stored solely on a RAID device will be lost someday; it is only a matter of time.

“If we had the backups from before the hardware or operator error and the archives were not affected, we could have recovered.” The bottom line is that there is no excuse for not being in ARCHIVELOG mode on a system where the data is of any value. Performance is no excuse; properly configured archiving adds little to no overhead. This, and the fact that a fast system that loses data is useless, means that even if archiving added 100 percent overhead, you still need to do it. A feature is overhead if you can remove it and lose nothing important; overhead is like icing on the cake. Preserving your data and making sure you don’t lose your data isn’t overhead—it’s the DBA’s primary job!

Only a test or maybe a development system should execute in NOARCHIVELOG mode. Most development systems should be run in ARCHIVELOG mode for two reasons:

• This is how you will process the data in production; you want development to act and react as your production system would.
• In many cases, the developers pull their code out of the data dictionary, modify it, and compile it back into the database. The development database holds the current version of the code. If the development database suffers a disk failure in the afternoon, what happens to all of the code you compiled and recompiled all morning? It’s lost.

Don’t let anyone talk you out of being in ARCHIVELOG mode. You spent a long time developing your application, so you want people to trust it. Losing their data will not instill confidence in your system.

Note There are some cases in which a large DW could justify being in NOARCHIVELOG mode—if it made judicious use of READ ONLY tablespaces and was willing to fully rebuild any READ WRITE tablespace that suffered a failure by reloading the data.

Jul 22, 2022
Password Files - Files

The password file is an optional file that permits the remote SYSDBA or administrator access to the database. When you attempt to start Oracle, there is no database available that can be consulted to verify passwords. When you start Oracle on the local system (i.e., not over the network, but from the machine the database instance will reside on), Oracle will use the OS to perform the authentication.

When Oracle was installed, the person performing the installation was asked to specify an OS group for the administrators. Normally, on UNIX/Linux, this group will be DBA by default, and ORA_DBA on Windows. It can be any legitimate group name on that platform, however. That group is “special,” in that any user in that group can connect to Oracle “as SYSDBA” without specifying a username or password.

However, what happens if you attempt to connect as SYSDBA over the network to a remote database:

$ sqlplus sys/foo@localhost:1521/CDB as sysdba

ERROR:
ORA-01017: invalid username/password; logon denied

OS authentication won’t work over the network for SYSDBA, even if the very unsafe (for security reasons) parameter REMOTE_OS_AUTHENT is set to true. So, OS authentication won’t work, and, as discussed earlier, if you’re trying to start up an instance to mount and open a database, then by definition there’s no database yet in which to look up authentication details. It is the proverbial chicken and egg problem.

Enter the password file. The password file stores a list of usernames and passwords that are allowed to remotely authenticate as SYSDBA over the network. Oracle must use this file to authenticate them, not the normal list of passwords stored in the database.

So, let’s correct our situation. First, verify that the REMOTE_LOGIN_PASSWORDFILE parameter is set to the default of EXCLUSIVE, meaning only one database uses a given password file:

SQL> show parameter remote_login_passwordfile
NAME TYPE VALUE
remote_login_passwordfile string EXCLUSIVE

Note Other valid values for this parameter are NONE, meaning there is no password file (there are no remote SYSDBA connections), and SHARED (more than one database can use the same password file).

The next step is to use the command-line tool (on UNIX/Linux and Windows) named orapwd to create and populate the initial password file:

$ orapwd

Usage: orapwd file=<fname> entries=<users> force=<y/n> asm=<y/n>
       dbuniquename=<dbunique name> format=<legacy/12> sysbackup=<y/n>
       sysdg=<y/n> syskm=<y/n> delete=<y/n> input_file=<input fname>

Usage: orapwd describe file=<fname>

where

There must be no spaces around the equal-to (=) character.

The command we’ll use when logged into the operating system account that owns the Oracle software is

$ orapwd file=orapw$ORACLE_SID password=foo entries=20

This creates a password file named orapwCDB in my case (my ORACLE_SID is CDB). That’s the naming convention for this file on most UNIX/Linux platforms (see your installation/OS admin guide for details on the naming of this file on your platform), and it resides in the $ORACLE_HOME/dbs directory. On Windows, this file is named PW%ORACLE_SID%.ora, and it’s located in the %ORACLE_HOME%\database directory. You should navigate to the correct directory prior to running the command to create that file, or move that file into the correct directory afterward.

Now, currently the only user in that file is SYS, even if there are other SYSDBA accounts on that database (they are not in the password file yet). Using that knowledge, however, we can for the first time connect as SYSDBA over the network:

$ sqlplus sys/foo@localhost:1521/CDB as sysdba
SQL> show user
USER is “SYS”

Note If you experience an ORA-12505 “TNS:listener does not currently know of SID given in connect Descriptor” error during this step, that means that the database listener is not configured with a static registration entry for this server. The DBA has not permitted remote SYSDBA connections when the database instance is not up. You would need to configure static server registration in your listener.ora configuration file. Please search for “Configuring Static Service Information” (in quotes) on the OTN (Oracle Technology Network) documentation search page for the version of the database you are using for details on configuring this static service. If you encounter an ORA-12528 “TNS:listener: all appropriate instances are blocking new connections” error, you can also configure the tnsnames.ora file with the UR=A parameter that will allow you to connect to a blocked instance.

We have been authenticated, so we are in. We can now successfully start up, shut down, and remotely administer this database using the SYSDBA account. Now, we have another user, TKYTE, who has been granted SYSDBA, but will not be able to connect remotely yet:

$ sqlplus tkyte/foobar@PDB1 as sysdba

ERROR:
ORA-01017: invalid username/password; logon denied

The reason for this is that TKYTE is not yet in the password file. In order to get TKYTE into the password file, we need to “regrant” that account the SYSDBA privilege:

$ sqlplus / as sysdba
SQL> alter session set container=PDB1;
SQL> grant sysdba to tkyte;
Grant succeeded.

SQL> exit
$ sqlplus tkyte/foobar@PDB1 as sysdba
SQL>

This created an entry in the password file for us, and Oracle will now keep the password in sync. If TKYTE alters their password, the old one will cease working for remote SYSDBA connections, and the new one will start working. The same process is repeated for any user who was a SYSDBA but is not yet in the password file.
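You can verify which accounts are currently in the password file by querying V$PWFILE_USERS (output elided):

SQL> select username, sysdba, sysoper from v$pwfile_users;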
