Data Dictionary > FPSADMIN Views

# FPSADMIN Views
## FPSADMIN.ADM_ACCESS

This view shows which sessions are accessing certain objects. For some reason, V$LOCKED_OBJECT often does not show when objects are being accessed in a way that prevents DDL changes, but this view does. It must be filtered on access_object; otherwise it is miserably slow.
## FPSADMIN.ADM_ALL_COLORS

This view shows all colors used by all tables.
## FPSADMIN.ADM_ALL_INFO_COMPARED_IF_DIFF

Use this view to compare this instance of the FPS Data Warehouse with one which is on a different version to see what changed. Unlike ADM_ALL_INFO_COMPARED_IF_SAME, which compares absolutely everything, this view ignores column positions, column width differences, and whether the column is nullable. It also sorts differences in the number of characters in the view/constraint/source at the end. This view only works if the linked DWH is 5.6 or above because we made improvements to the ADM_ALL_INFO_TO_COMPARE view itself in 5.6. If the linked DWH is before 5.6, use the commented-out lines we include in the first two sections.
## FPSADMIN.ADM_ALL_INFO_COMPARED_IF_SAME

Use this view to compare two instances of the FPS Data Warehouse to ensure they match. This view compares all objects in all FPS schemas along with columns, dependencies, and even the number of characters in views and constraints. It is a bit too much to compare the full PL/SQL code of functions, procedures, and triggers, but this catches most discrepancies.
## FPSADMIN.ADM_ALL_INFO_TO_COMPARE

Use this view to compare two instances of the FPS Data Warehouse to ensure they match. This view gets all objects in all FPS schemas along with columns, dependencies, and even the number of characters in views and constraints. It is not possible to compare the full PL/SQL code of functions, procedures, and triggers, but this catches most discrepancies. Here are two ways to compare your database to the standard version. This is recommended after a major upgrade.

EASY METHOD: if you are comparing to another database accessible over a database link:

1. Log in as FPSADMIN in Toad or SQL Developer.
2. Run the query "select * from chk_all" and make sure there are no errors and all objects are valid.
3. Execute the procedure adm_prepare_to_compare. This takes 5-10 minutes. It compiles statistics on FPSADMIN objects and counts the number of non-whitespace characters in each view and constraint, which we use in the comparison.
4. Run the following query. If no rows are returned, the two databases match. If there are rows, go through each difference and evaluate what caused it. This is often quite tricky. Start with the history comments if applicable, as they will tell you if you are missing a newer version. Ask Cabe if you have questions.

        with old as (select * from adm_all_info_to_compare),
             new as (select * from adm_all_info_to_compare@yourdblink)
        select *
        from (select 'new' as source, a.* from new a
              minus
              select 'new', a.* from old a
              union
              select 'old' as source, a.* from old a
              minus
              select 'old', a.* from new a)
        order by 2, 3, 4, 5, 6, source desc;

COMPLEX METHOD: if you are comparing your database to the standard version not accessible over a database link:

1. Log in as FPSADMIN in Toad. You can do this in SQL Developer too, but you have to choose the appropriate options in step 7 to match what is below for Toad.
2. Run the query "select * from chk_all" and make sure there are no errors and all objects are valid.
3. Execute the procedure adm_prepare_to_compare. This takes 5-10 minutes. It compiles statistics on FPSADMIN objects and counts the number of non-whitespace characters in each view and constraint, which we use in the comparison.
4. Run "select * from adm_all_info_to_compare where is_custom_input = 'N'".
5. The is_custom_input = 'N' filter removes custom APD/REF views, custom LOAD/APF/RTDLOAD objects, and CHGLOG tables/triggers in FPSINPUT, which are different for each client. This leaves only the standard FPS objects in FPSADMIN/INPUT/BASE/APP, which should match exactly at each site. If you wish to compare two instances at the same client, like production and development, then omit the is_custom_input filter.
6. Right-click on the data and choose Export Dataset.
7. Choose a format of Delimited Text, output of File to your preferred location, check only "Display all results" and "Include column headers" with others unchecked, don't quote strings or numbers, and a delimiter of Pipe. Then click OK to save the file.
8. If you are comparing two instances at the same site, like production and development, then repeat steps 5-7 for the other instance and compare the two files to each other in steps 9 and 10 rather than to the standard. This is recommended after a release to production.
9. Compare your file with the standard for the version saved in the Team -> DWH -> Compare folder. I prefer using the UNIX/Linux diff function, so I copy (or remote copy) both files to a UNIX/Linux machine. This might be a server at the client or a Mac if you have one. Word or WinMerge are options too. I recommend saving the differences output to a new file which you can then view and edit separately.
10. Your differences file should be empty. If not, go through each difference and evaluate what caused it. This is often quite tricky. Start with the history comments if applicable, as they will tell you if you are missing a newer version. Ask Cabe if you have questions.
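The old/new symmetric-difference pattern in the query above can be sketched in a portable form with SQLite, whose EXCEPT operator plays the role of Oracle's MINUS (both have equal precedence and evaluate left to right). The table names and columns here are illustrative, not the real shape of ADM_ALL_INFO_TO_COMPARE.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
create table old_info (owner text, object_name text, object_type text);
create table new_info (owner text, object_name text, object_type text);
insert into old_info values ('FPSADMIN', 'ADM_ACCESS', 'VIEW'),
                            ('FPSADMIN', 'ADM_ALL_COLORS', 'VIEW');
insert into new_info values ('FPSADMIN', 'ADM_ACCESS', 'VIEW'),
                            ('FPSADMIN', 'ADM_RUNNING', 'VIEW');
""")

# Tag each side with its source, subtract the other side, and union the
# leftovers: only objects present on exactly one side survive.
diff = con.execute("""
    select 'new' as source, * from new_info
    except
    select 'new', * from old_info
    union
    select 'old' as source, * from old_info
    except
    select 'old', * from new_info
    order by 3
""").fetchall()
```

Rows common to both sides cancel out, so an empty `diff` means the two instances match.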
## FPSADMIN.ADM_APD_LOAD_DAILY_HIST

Summary of ADM_LOAD_HIST grouped by day. This allows us to purge ADM_LOAD_HIST and still have past results for the entire life of the database. Obj_owner is allowed to be null in ADM_LOAD_HIST, but it is part of the PK of this table, so we have to apply an nvl.
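The nvl requirement can be sketched with SQLite, where coalesce plays the same role as Oracle's nvl. The table layout and the '(none)' placeholder below are illustrative, not the real ADM_LOAD_HIST columns.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
create table adm_load_hist (obj_owner text, obj_name text, load_sec real);
insert into adm_load_hist values
    ('FPSADMIN', 'WIP_EVENT_HIST', 12.0),
    (null,       'WIP_EVENT_HIST', 8.0),
    (null,       'WIP_EVENT_HIST', 4.0);
""")

# coalesce maps null owners to a placeholder so the daily summary can keep
# obj_owner as a non-null primary-key column.
rows = con.execute("""
    select coalesce(obj_owner, '(none)') as obj_owner,
           obj_name,
           count(*)      as num_loads,
           avg(load_sec) as avg_sec
    from adm_load_hist
    group by coalesce(obj_owner, '(none)'), obj_name
    order by 1
""").fetchall()
```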
## FPSADMIN.ADM_APD_SPEED_DAILY_HIST

Summary of ADM_SPEED_HIST grouped by day. This allows us to purge ADM_SPEED_HIST and still have past results for the entire life of the database.
## FPSADMIN.ADM_CHK_ALL

This view uses the function CHK_ALL_TO_TABLE to first refresh all DBF tables and then query all of the CHK and SYSMNTR views at once. Because it refreshes the tables first, the records returned from this view are realtime, but this takes about a minute. To simply bring up the results without refreshing, query CHK_ALL_NO_REFRESH.
## FPSADMIN.ADM_COLUMNS_TO_IGNORE

Get all ignored columns from both cfg_columns_to_ignore and gen_cols_updated_by_trigger and set appropriate information to use in all ADM procedures and CHK views.
## FPSADMIN.ADM_DEV_ONLY_OBJECTS

This view shows all objects which are dev only. These objects should eventually be deleted when their development is finished.
## FPSADMIN.ADM_ETL_JOBS_AFTER_UPGRADE
## FPSADMIN.ADM_FK_VALUE_LISTS

This view is used in BLD_STARTUP_WORKSHEET and can be created automatically with this query:

    select 'select ''' || r_constraint_name || ''' as r_constraint_name, '''
           || table_name || ''' as table_name, ''' || column_name || ''' as column_name, '
           || 'listagg(' || column_name || ', '', '') within group (order by ' || column_name || ') as value_list '
           || 'from fpsadmin.' || table_name
           || case when column_name = 'ETP_STATE' then ' where etp_state like ''PRD%''' end
           || line_end as q
    from (
        select distinct r_constraint_name, cc.table_name, cc.column_name,
               case when cc.table_name = max(cc.table_name) over () then ';' else ' union all ' end as line_end
        from dbf_constraints c
        join dbf_cons_columns cc
          on r_owner = cc.owner and r_constraint_name = cc.constraint_name
        where constraint_type = 'R'
          and r_owner = 'FPSADMIN'
          and c.owner = 'FPSINPUT'
          and r_constraint_name not in ('CFG_SCHED_PK', 'GEN_COLORS_PK', 'GEN_SITE_INC_BATCH')
    )
    order by table_name, column_name
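The generate-one-branch-per-FK-column pattern above can be sketched in plain Python. The constraint and table names below are made up, and the emitted SQL mirrors the line_end logic in the query (';' on the last branch, ' union all ' otherwise).

```python
# Illustrative FK metadata: (r_constraint_name, table_name, column_name).
fk_columns = [
    ("GEN_COLORS_FK", "WIP_LOT", "COLOR"),
    ("GEN_STATES_FK", "WIP_LOT", "ETP_STATE"),
]

def build_value_list_query(fk_columns):
    parts = []
    for i, (r_constraint, table, column) in enumerate(fk_columns):
        # Terminate the last branch with ';', chain the rest with UNION ALL.
        line_end = ";" if i == len(fk_columns) - 1 else " union all "
        # Special-case filter, as in the original query's CASE expression.
        where = " where etp_state like 'PRD%'" if column == "ETP_STATE" else ""
        parts.append(
            f"select '{r_constraint}' as r_constraint_name, "
            f"'{table}' as table_name, '{column}' as column_name, "
            f"listagg({column}, ', ') within group (order by {column}) as value_list "
            f"from fpsadmin.{table}{where}{line_end}"
        )
    return "".join(parts)

query = build_value_list_query(fk_columns)
```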
## FPSADMIN.ADM_LARGE_OBJECTS

This query shows all non-system segments in the database sorted by the largest first. It includes dev only objects and objects in the recycle bin.
## FPSADMIN.ADM_LAST_DDL_OBJECTS

This query shows all FPS objects in the database sorted by last_ddl_time. It excludes jobs, synonyms, and database links, whose last_ddl_time is either blank or irrelevant. It also excludes our temporary types starting with R_ or T_ used in ADM_TABLE_MATCH_COMMIT. It includes dev only objects and objects in the recycle bin.
## FPSADMIN.ADM_LOAD_COMBINED

This view is used by the ADM_LOAD_DWH procedure.
## FPSADMIN.ADM_LOAD_STATUS_ACTIVE

ADM_LOAD_STATUS includes objects which are not actively loading, and we often want to query the status of only the objects which are actively loading. This view does that, and although it does include objects which are disabled, at least it sorts them to the bottom. We use this view in SYSMNTR_ERRORS and ADM_TURN_ETL_OFF, plus we frequently query it directly.
## FPSADMIN.ADM_LOCKED_HIST_NOW

This view is the most used query against these tables, combining the current information unioned with the recent history. Another useful query (not quite useful enough to merit its own view) shows the highest temp tablespace usage in the last two days:

    select ts, temp_used_mb,
           listagg(object_name, ',') within group (order by object_name) as obj
    from (select distinct ts, temp_used_mb, object_name from adm_locked_hist_now)
    group by ts, temp_used_mb
    order by temp_used_mb desc;
## FPSADMIN.ADM_LOCKED_STATUS

This view is a snapshot of queries which have locked a table longer than one minute. It shows the temp space usage, undo space usage, progress, and other metrics. We record this in ADM_LOCKED_HIST every minute to help us track our database stress over time.
## FPSADMIN.ADM_LOCKED_SUMMARY

This view gives information about the space available in the TEMP and UNDO tablespaces. It is formatted all in one row so that it can be cross joined to insert into log files. This query helps diagnose the cause when we encounter the dreaded "ORA-01652: unable to extend temp segment by 128 in tablespace TEMP" error, plus the comments contain some more details and information.
## FPSADMIN.ADM_LOG_SWITCHES

The number of redo log switches per day and hour, including the total size in GB for the entire redo log of each day. Once a redo log is completely filled, Oracle switches to the next and starts over again. Log switches can also be triggered by a fixed time interval or manually. The redo log is necessary to recover data, and it contains all changes made to the data, i.e. both committed and uncommitted. Each transaction is synchronously written to the redo log buffer, after which the log writer process (LGWR) writes the entries into the online redo log file, of which only one is active at any time. See the Oracle Concepts Manual for more details. The view includes an ORDER BY clause, so additional ordering by the user is not necessary; note that any further ordering causes the sort to be performed again. The following query can be used as an alternative which allows for easier querying:

    SELECT hist.log_day
          ,hist.log_hour
          ,hist.num_switches
          ,round(hist.num_switches * avg_bytes/1024/1024) AS size_mb
    FROM (SELECT TO_CHAR(first_time, 'YYYY-MM-DD') AS log_day,
                 TO_CHAR(first_time, 'HH24') AS log_hour,
                 COUNT(1) AS num_switches
          FROM v$log_history
          GROUP BY ROLLUP(TO_CHAR(first_time, 'YYYY-MM-DD'), TO_CHAR(first_time, 'HH24'))) hist
        ,(SELECT AVG(bytes) avg_bytes FROM v$log) lg
    ORDER BY hist.log_day DESC, hist.log_hour;
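The aggregation in that query (switches per day and hour, redo volume estimated as switches times the average log size) can be sketched in Python. The switch timestamps and the 512 MB average log size below are made-up stand-ins for v$log_history and v$log.

```python
from collections import Counter
from datetime import datetime

# Illustrative stand-in for v$log_history.first_time values.
switch_times = [
    datetime(2024, 5, 1, 9, 12), datetime(2024, 5, 1, 9, 48),
    datetime(2024, 5, 1, 10, 5), datetime(2024, 5, 2, 3, 30),
]
avg_redo_bytes = 512 * 1024 * 1024  # assumed average online redo log size

# Count switches per (day, hour), then estimate the redo volume written in
# each bucket as switches * average log size, like the size_mb column.
switches = Counter((t.strftime("%Y-%m-%d"), t.strftime("%H")) for t in switch_times)
report = [
    (day, hour, n, round(n * avg_redo_bytes / 1024 / 1024))
    for (day, hour), n in sorted(switches.items())
]
```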
## FPSADMIN.ADM_MONITORING_INSTRUCTIONS

Currently it is only necessary to monitor the emails. If we get no emails then I'm pretty confident that everything is going well. There are four types of emails, all with different subjects starting with FPSLOAD, so it is easy to add an Outlook rule to separate these from regular emails. Here they are in descending order of severity:

1. FPSLOAD ALERT: Every 20-30 minutes we run a separate check on the health of the DWH. This checks the views CHK_INVALID and CHK_DISABLED and the procedure ADM_CHK_TABLE_UPDATES. CHK_INVALID shows invalid objects and tables which had issues with their last load. CHK_DISABLED shows constraints, triggers, and loads which are disabled. ADM_CHK_TABLE_UPDATES queries to see if any key tables like WIP_EVENT_HIST or EQP_EVENT_HIST have no recent data. The ADM_CHK_TABLE_UPDATES email is often sent to key employees of the client, including the help desk, because this is our most severe alert. All other messages typically just go to FPS or perhaps one or two of our close technical colleagues.
2. FPSLOAD ERROR: Sent for any load error in the Update_by_RTD scripts. The most common error is "SQL errors", which means that the query had errors like PK violated or excess length or whatever. The body of the email includes the query attempted and the error received.
3. FPSLOAD WARNING: Also sent by the Update_by_RTD scripts for things like "Input file garbled" or "No rows updated." These should be addressed but are less serious than the errors.
4. FPSLOAD MESSAGE: These are messages sent from within the XXX_QUERY report. An example is when the report has duplicate records for the PK. In this case, we only insert one of the two rows but we send a message that the report had duplicates. The syntax within the report is to write a query to DELETE FROM FPSLOAD_ERROR WHERE MESSAGE = 'Write message here' and then the script translates this to email the message.

When we get an email, we use ADM_LOAD_STATUS and ADM_LOAD_HIST to research the details. Update_by_RTD.log is the log file on the UNIX/Linux server which is basically a copy of ADM_LOAD_HIST, but there is no need to use this since ADM_LOAD_HIST has more information and is easier to query. In fact, I can't think of a reason why most of us other than me would need to log in to the Linux server at all. Finally, our programming team has plans to develop the Factory Forum into a site we can use for monitoring the health of the DWH, so we could have some significant improvements once we move to using that site. But this is the situation for now.
## FPSADMIN.ADM_OBJECTS_IN_VIA_APD

ADM_LOAD_EVENT_HIST_VIA_APD includes calls to several objects that are required to load along with the realtime history tables. These objects are not included in CFG_SCHED, so we need to know the list so we can check that they are omitted from CFG_SCHED only if the site is using ADM_LOAD_EVENT_HIST_VIA_APD. We also use this during startup. Please note that this view must use DBF_xxx views if permissions are granted to FPSAPP for SYSMNTR, because FPSAPP does not see all objects when querying DBF_xxx views since it does not have permission on everything. Since the only reason to use DBF_xxx is to grant to other users, we just use DBF_xxx.
## FPSADMIN.ADM_RESEARCH_MUTATING_TRIGGERS

ORA-04091 is one of the most frustrating errors in Oracle. It happens when a row-level trigger attempts to query the table that is currently being modified by the statement that fired that trigger. But frequently we actually want to do this, and there are cases where it actually is allowed, so the frustrating piece is figuring out when it is acceptable to query the table and when it will result in this error. We figured this out finally with an assist from the article linked below. The latter two cases are why we have to use the DBPI method to loop through the APD view and insert one row at a time when a row-level insert BEFORE trigger exists on the table:

- Statement-level triggers are allowed to query the table. CFG_SCHED_CHK_GDD and RTG_SMP_LINK_STEPS_CHECK are examples of this.
- Row-level update triggers are never allowed to query the table.
- Row-level delete triggers are never allowed to query the table.
- Row-level insert AFTER triggers are never allowed to query the table.
- Row-level insert BEFORE triggers are allowed to query the table if only one row is inserted.
- Row-level insert BEFORE triggers are not allowed to query the table if multiple records are inserted.

http://www.databasejournal.com/features/oracle/article.php/3329121/The-Mutation-error-in-Oracle-Database-Triggers.htm
## FPSADMIN.ADM_RTDLOAD_FINAL

This view is used by the Load_FPS_DWH.sh script for loading tables via APF/RTD.
## FPSADMIN.ADM_RUNNING

Frequently used query which just filters the ADM_LOAD_STATUS_ACTIVE view for objects and jobs that are currently running. When this view returns no rows, it indicates that ETL has successfully been turned off. Because this uses V$SESSION, permissions cannot be granted, so this cannot be used in SYSMNTR.
## FPSADMIN.ADM_SCHEDULER_JOB_LOG

Oracle provides both DBA_SCHEDULER_JOB_LOG and DBA_SCHEDULER_JOB_RUN_DETAILS. The latter provides more detail so we use that one, but we like the shorter name of the former.
## FPSADMIN.ADM_SESSIONS

This view shows full info for all sessions on the FPS schemas. Sessions where problem is not null will also appear in CHK_SESSIONS.
## FPSADMIN.ADM_SESSIONS_ACTIVE

This view shows full info for all sessions which are not inactive. In other words, sessions which are active or killed.
## FPSADMIN.ADM_SESSIONS_BIND_VARS

This view shows info for all sessions on the FPS schemas with the bind variables of all queries from V$SQL_BIND_CAPTURE. Please note that while this Oracle system view lists all bind variables, it does not capture all values, only selected ones. If the trace level is set above basic then it will capture all values, but this is not common and requires SYSDBA access.
## FPSADMIN.ADM_SESSIONS_MINE

This view shows full info for all sessions for the same osuser as the current session. In other words, all of "my" sessions.
## FPSADMIN.ADM_SESSIONS_PROBLEM

This view shows full info for any locks or hung sessions or long queries on the FPS schemas. This should be empty if things are running well.
## FPSADMIN.ADM_SESSION_ACTIVE_HIST

This view stores a useful query to V$ACTIVE_SESSION_HISTORY that lists all sessions which are ACTIVE each second. This means that queries which take longer than one second will be listed multiple times, so you might want to use distinct and/or group by appropriately when querying this view. The V$ view goes back several days, but this query just defaults to the last hour for speed. The related view ADM_SESSION_LOGON_LOGOFF_HIST might be useful too in situations where we want to research connections. Please note that I tried to bring in v$sql to get the sql_text, but this took a very long time, so it would be best to query v$sql separately for the sql_id which is causing problems.
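Because each row is a one-second sample, counting rows per sql_id approximates the seconds of active time for each statement, which is a common way to use this kind of active-session data. A sketch with made-up sample rows:

```python
from collections import Counter

# Illustrative one-second samples in the style of ADM_SESSION_ACTIVE_HIST.
ash_rows = [
    {"sample_time": "10:00:01", "session_id": 51, "sql_id": "abc123"},
    {"sample_time": "10:00:02", "session_id": 51, "sql_id": "abc123"},
    {"sample_time": "10:00:03", "session_id": 51, "sql_id": "abc123"},
    {"sample_time": "10:00:02", "session_id": 60, "sql_id": "xyz789"},
]

# One sample per active second, so the count per sql_id approximates the
# seconds each statement spent active in the window.
active_seconds = Counter(r["sql_id"] for r in ash_rows)
top = active_seconds.most_common()
```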
## FPSADMIN.ADM_SESSION_LOGON_LOGOFF_HIST

This view stores a useful query to dba_audit_session that gives the logon and logoff time for every session. This table is only populated if "audit connect;" is executed by the DBA on the database. The related view ADM_SESSION_ACTIVE_HIST always has data and might be sufficient if audit connect has not been enabled.
## FPSADMIN.ADM_SHRINK_DATAFILES
## FPSADMIN.ADM_SQL_CURRENT_XPLAN

This view shows the explain plans used for all active queries that have been running for longer than 5 seconds. You can also use this view as a template to see the explain plan for any active or recent query.
## FPSADMIN.ADM_SQL_HIST

The view DBA_HIST_SQLTEXT is a powerful diagnostic tool for investigating top queries if we have access. This is a basic query of how to use it, which should be used with a filter on begin_interval_time.

- BEGIN_INTERVAL_TIME: The start time of the snapshot.
- BUFFER_GETS_PER_EXEC: The number of buffer gets per execution during the snapshot. A buffer get is a block read, either from memory or from disk; the latter requires physical I/O.
- CPU_TIME_SEC_PER_EXEC: The CPU time (in seconds) per execution during the snapshot.
- DATABASE_NAME: The database instance name.
- DBID: The database instance identifier.
- DISK_READS_PER_BUFFER_GET: The number of disk reads for each buffer get during the snapshot. A high number indicates that a buffer get often goes to the disk and thus causes a lot of physical I/O.
- DISK_READS_PER_EXEC: The number of disk reads per execution during the snapshot.
- ELAPSED_TIME_SEC_PER_EXEC: The elapsed time (in seconds) per execution during the snapshot.
- END_INTERVAL_TIME: The end time of the snapshot.
- EXECUTIONS: The number of executions during the snapshot.
- IO_WAIT_SEC_PER_EXEC: The I/O wait time (in seconds) per execution during the snapshot. A high number may indicate a large number of disk reads, for instance because of cursor sharing of statements without bind variables.
- MODULE: The module through which the code is executed.
- PARSE_CALLS_PER_EXEC: The number of parse calls per execution during the snapshot. A high number (i.e. around 1) indicates that a SQL statement or PL/SQL module has been flushed from the cache and thus requires parsing, which adds a few milliseconds. Code that is executed frequently and also requires parsing can be a bottleneck.
- PARSING_SCHEMA_NAME: The parsing schema name, i.e. the schema from which the code is executed.
- PHYS_READ_REQS_PER_EXEC: The number of physical read requests per execution during the snapshot.
- PHYS_WRITE_REQS_PER_EXEC: The number of physical write requests per execution during the snapshot.
- PLSQL_EXEC_TIME_SEC_PER_EXEC: The PL/SQL execution time (in seconds) per execution during the snapshot.
- PLSQL_SQL_TIME_RATIO: The ratio of PL/SQL execution to CPU time, which is more or less the fraction of the time Oracle spends inside the PL/SQL engine instead of the SQL engine. A high ratio may indicate that there may be many context switches affecting the performance.
- ROWS_PROCESSED_PER_EXEC: The number of rows processed per execution during the snapshot.
- SQL_ID: The SQL identifier of the parent cursor.
- SQL_TEXT: The full statement for which the metrics are shown.
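The *_PER_EXEC columns above are just snapshot deltas divided by the execution count. A sketch with hypothetical DBA_HIST_SQLSTAT-style numbers (the time deltas are assumed here to be in microseconds):

```python
# Hypothetical snapshot deltas; names and values are illustrative.
snapshot = {
    "executions": 40,
    "buffer_gets": 120_000,
    "disk_reads": 6_000,
    "cpu_time_us": 8_000_000,      # assumed microseconds
    "elapsed_time_us": 20_000_000, # assumed microseconds
}

def per_exec_metrics(s):
    n = s["executions"]
    return {
        # Each delta divided by executions gives the per-execution figure.
        "buffer_gets_per_exec": s["buffer_gets"] / n,
        "disk_reads_per_exec": s["disk_reads"] / n,
        "disk_reads_per_buffer_get": s["disk_reads"] / s["buffer_gets"],
        "cpu_time_sec_per_exec": s["cpu_time_us"] / 1_000_000 / n,
        "elapsed_time_sec_per_exec": s["elapsed_time_us"] / 1_000_000 / n,
    }

m = per_exec_metrics(snapshot)
```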
## FPSADMIN.ADM_SQL_SCORES

Displays the SQL and PL/SQL code that potentially impacts the database performance the most. The top entries are likely candidates for improvements and may need to be investigated further. The impact_score is the product of the scores for executions, parse calls, disk reads per execution, buffer gets per execution, physical read requests per execution, physical write requests per execution, CPU time per execution, I/O wait time per execution, and the PL/SQL-to-SQL ratio. Each of these scores is in the range [0,10]. The higher the number, the more impact it has on the performance of the database.
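The impact_score arithmetic described above can be sketched as a product of per-dimension scores clipped to [0, 10]. The dimension names and raw values below are illustrative; the view's actual raw-to-score scaling is not shown here.

```python
def clip(x, lo=0.0, hi=10.0):
    # Keep each dimension score inside the [0, 10] range described above.
    return max(lo, min(hi, x))

def impact_score(scores):
    # The overall score is the product of the clipped dimension scores.
    product = 1.0
    for s in scores.values():
        product *= clip(s)
    return product

scores = {
    "executions": 7.0,
    "parse_calls": 1.0,
    "disk_reads_per_exec": 4.0,
    "buffer_gets_per_exec": 12.0,  # out of range, clipped to 10
}
score = impact_score(scores)
```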
## FPSADMIN.ADM_SQL_TOP_LAST_DAY

The view DBA_HIST_SQLTEXT is a powerful diagnostic tool for investigating top queries if we have access. We adapted this query from the internet page named Ultimate Top 50 SQL / AWR Query. This is identical to ADM_SQL_TOP_SNAPSHOT but for the time filter in the snap with clause.
## FPSADMIN.ADM_SQL_TOP_SNAPSHOT

The view DBA_HIST_SQLTEXT is a powerful diagnostic tool for investigating top queries if we have access. We adapted this query from the internet page named Ultimate Top 50 SQL / AWR Query. This is identical to ADM_SQL_TOP_LAST_DAY but for the time filter in the snap with clause.
## FPSADMIN.ADM_SQL_W_RECENT_TREND

Useful query to compare history from ADM_LOAD_HIST with the SQL statistics from DBA_HIST_SQLSTAT for a particular object. The elapsed time from SQLSTAT should closely match the avg_sec from ADM_LOAD_HIST for each hour. Executions_delta should match num_loads as well. SQLSTAT goes by hour, so unlike the other ADM_RECENT_TRENDS views you cannot edit this one to group by day; however, you can (and must) edit the name of the object to be evaluated. You can also extend the range longer than two days. There is a configuration in Oracle for how many days of history to save in SQLSTAT, which is often set to 7, and we could modify this to extend it if we wish.
|
FPSADMIN.ADM_SYSMETRICS_HIST |
Shows important database system metrics for the last 100 days, as available in DBA_HIST_SYSMETRIC_SYMMARY, in a row-based format that is best suited to in-depth (statistical) analyses to track down performance issues. Comments about each column: ACTIVE_PARALLEL_SESSIONS: The number of active parallel sessions. ACTIVE_SERIAL_SESSIONS: The number of active serial sessions. AVG_ACTIVE_SESSIONS: The mean number of active sessions, both serial and parallel. AVG_SYNC_BLCK_R_LATENCY: The mean latency (in ms) of synchronous single-block reads. Synchronous single-block reads are a reasonably accurate way of assessing the performance of the storage subsystem. High latencies are typically caused by a high I/O request load. Excessively high CPU load can also cause the latencies to increase. BCKGRD_CHKPTS_PER_SEC: The number of background checkpoints per second. A checkpoint is the writing by the DBWR process of all modified buffers in the SGA buffer cache to the database data files. Data file headers are also updated with the latest checkpoint SCN, even if the file had no changed blocks, as well as the control files. Checkpoints occur after every redo log switch and also at intervals specified by initialization parameters. BRANCH_NODE_SPLITS_PER_TXN: The number of branch node splits per transaction. It is the number of times an index branch block was split because of the insertion of an additional value. BUFFER_CACHE_HIT_RATIO: The percentage of times the data block requested by the query is already/still in memory. Effective use of the buffer cache can greatly reduce the I/O load on the database. If the buffer cache is too small, frequently accessed data will be flushed from the buffer cache too quickly which forces the information to be re-fetched from disk. Since disk access is much slower than memory access, application performance will suffer. 
In addition, the extra burden imposed on the I/O subsystem could introduce a bottleneck at one or more devices that would further degrade performance. CELL_PHYS_IO_ICONN_BYTES: The amount of bytes for disk-to-database-host interconnect traffic. CONS_R_CHG_PER_TXN: The number of times per transaction a user process has applied rollback entries to perform a consistent read on the block. Oracle always enforces statement-level read consistency, which guarantees that data returned by a single query is committed and consistent with respect to a single point in time. CONS_R_GETS_PER_TXN: The number of times per transaction a consistent read was requested for a block. CPU_USG_PER_TXN: The CPU usage (in ms) per transaction. Note that the default unit in DBA_HIST_SYSMETRIC_SUMMARY is centiseconds but this view converts it to milliseconds. CR_BLCK_CREATED_PER_TXN: The number of current blocks per transaction cloned to create consistent read (CR) blocks. Read consistency is used when competing processes are reading and updating the data concurrently. CR_UNDO_RECS_PER_TXN: The number of undo records applied for consistent read per transaction. Read consistency is used when competing processes are reading and updating the data concurrently and undo blocks are used to retain the prior image of the rows. CURR_LOGONS_COUNT: The current number of logons. CURR_OPEN_CURS_COUNT: The current number of opened cursors. CURR_OS_LOAD: The current number of processes running. CURS_CACHE_HIT_RATIO: The ratio of the number of times an open cursor was found to the number of times a cursor was sought. This is influenced by the SESSION_CACHED_CURSORS parameter. The SESSION_CACHED_CURSORS parameter is used to reduce the amount of parsing with SQL statements that use host variables. DBWR_CHKPTS_PER_SEC: The number of times per second the DBWR was asked to scan the cache and write all blocks marked for a checkpoint. The database writer process writes the contents of buffers to datafiles. 
The DBWR processes are responsible for writing modified (dirty) buffers in the database buffer cache to disk. When a buffer in the database buffer cache is modified, it is marked dirty. The primary job of the DBWR process is to keep the buffer cache clean by writing dirty buffers to disk. As user processes dirty buffers, the number of free buffers diminishes. If the number of free buffers drops too low, user processes that must read blocks from disk into the cache are not able to find free buffers. DBWR manages the buffer cache so that user processes can always find free buffers. DB_BLCK_CHG_PER_TXN: The total number of changes per transaction that were part of an update or delete operation that were made to all blocks in the SGA. DB_BLCK_GETS_PER_TXN: The number of times per transaction a current block was requested from the RAM data buffer. DB_CPU_TIME_RATIO: The CPU-to-DB time ratio. DB_WAIT_TIME_RATIO: The wait-to-DB time ratio. DISK_SORT_PER_TXN: The number of sorts going to disk per transactions for the sample period. For best performance, most sorts should occur in memory because sorts to disks are expensive to perform. If the sort area is too small, extra sort runs will be required during the sort operation. This increases CPU and I/O resource consumption. ENQUEUE_REQS_PER_TXN: The total number of table or row locks acquired per transaction. ENQUEUE_TIMEOUTS_PER_TXN: The total number of table and row locks (acquired and converted) per second that time out before they could complete. ENQUEUE_WAITS_PER_TXN: The total number of waits per transaction that occurred during an enqueue convert or get because the enqueue get was deferred. It''s important to remember when you take a look at the STATS$ENQUEUESTAT table that enqueue waits are a normal part of Oracle processing. It is only when you see an excessive amount of enqueue waits for specific processes that you need to be concerned in the tuning process. 
Oracle locks protect shared resources and allow access to those resources via a queuing mechanism. A large amount of time spent waiting for enqueue events can be caused by various problems, such as waiting for individual row locks or waiting for exclusive locks on a table. EXECS_PER_TXN: The number of executes per second. EXECS_WITHOUT_PARSE_RATIO: The percentage of statement executions that do not require a corresponding parse. A perfect system would parse all statements once and then execute the parsed statement over and over without reparsing. This ratio provides an indication as to how often the application is parsing statements as compared to their overall execution rate. A higher number is better. FULL_IDX_SCANS_PER_TXN: The number of index fast-full scans execution plans per second. HARD_PARSE_COUNT_PER_TXN: The number of hard parses per second during this sample period. A hard parse occurs when a SQL statement has to be loaded into the shared pool. In this case, the Oracle Server has to allocate memory in the shared pool and parse the statement. A soft parse is recorded when the Oracle Server checks the shared pool for a SQL statement and finds a version of the statement that it can reuse. HOST_CPU_UTIL: The percentage of CPU being used on the host. IO_MB_PER_SEC: The total I/O throughput of the database for both reads and writes in megabytes per second. A very high value indicates that the database is generating a significant volume of I/O data. IO_REQS_PER_SEC: The total number of I/O requests of the database for both reads and writes in megabytes per second. LEAF_NODE_SPLITS_PER_TXN: The number of times per transaction an index leaf node was split because of the insertion of an additional value. It is an important measure of the DML activity on an index and excessive DML can leave an index in a sub-optimal structure, necessitating a rebuild for optimal index performance. 
LIBRARY_CACHE_HIT_RATIO: The percentage of entries in the library cache that were parsed more than once (reloads) over the lifetime of the instance. Since you never know in advance how many SQL statements need to be cached, the Oracle DBA must set SHARED_POOL_SIZE large enough to prevent excessive re-parsing of SQL. LOGICAL_R_PER_TXN: The number of logical reads per transaction. The value of this statistic is zero if there have not been any write or update transactions committed or rolled back during the last sample period. If the bulk of the activity to the database is read only, the corresponding per second data item of the same name will be a better indicator of current performance. LOGONS_PER_TXN: The number of logons per transaction. LONG_TAB_SCANS_PER_TXN: The number of long table scans per transaction during sample period. A table is considered long if the table is not cached and if its high-water mark is greater than 5 blocks. MEMORY_SORTS_RATIO: The percentage of sorts (from ORDER BY clauses or index building) that are done to disk vs. in-memory. Disk sorts are done in the TEMP tablespace, which is hundreds of times slower than a RAM sort. The in-memory sorts are controlled by sort_area_size or by pga_aggregate_target. NETW_TRAFFIC_VOL_PER_SEC: The network traffic volume (in bytes) per second. OPEN_CURS_PER_TXN: The total number of cursors opened per transaction. One problem in performance is caused by multiple cursors with bind variables that open the cursor and execute it many times. PARSE_FAIL_COUNT_PER_TXN: The total number of parse failures per transaction. PGA_CACHE_HIT_RATIO: The total number of bytes processed in the PGA versus the total number of bytes processed plus extra bytes read/written in extra passes. PHYS_R_DRCT_PER_TXN: The number of direct physical reads per transaction. Direct reads are analogous to direct reads, done by bypassing the OS JFS cache. PHYS_R_PER_TXN: The number of disk reads per transaction. 
When a user performs a SQL query, Oracle tries to retrieve the data from the database buffer cache (memory) first, then goes to disk if it is not in memory already. Reading data blocks from disk is much more expensive than reading the data blocks from memory. The goal with Oracle should always be to maximize memory utilization. PHYS_R_TOT_BYTES_PER_SEC: Total amount of disk reads (in bytes) per second. PHYS_W_DRCT_PER_TXN: The number of direct physical disk writes per transaction. You can speed up disk writes in several ways, most notably the segregation of high-write datafiles to a small data buffer, and by using solid-state-disk (SSD) for high disk write files. PHYS_W_PER_TXN: The number of disk writes per transaction. PHYS_W_TOT_BYTES_PER_SEC: Total amount of disk writes (in bytes) per second. PQ_QC_SESSION_COUNT: The number of parallel query sessions. PQ_SLAVE_SESSION_COUNT: The number of slave sessions for parallelized queries. PX_OPS_DOWNGR_PER_SEC: The number of parallel execution operations per second downgraded (i.e. degree of parallelism reduced or serialized) due to the adaptive multiuser algorithm or the depletion of available parallel execution servers. PX_OPS_NOT_DOWNGR_PER_SEC: The number of parallel execution operations per second not downgraded (i.e. degree of parallelism not reduced) due to the adaptive multiuser algorithm or the depletion of available parallel execution servers. QUERIES_PARALLELIZED_PER_SEC: The number of SQL queries parallelized per second. RECURSIVE_CALLS_PER_TXN: The number of recursive calls per transaction. Sometimes, to execute a SQL statement issued by a user, Oracle must issue additional statements. Such statements are called recursive calls or recursive SQL statements. REDO_ALLOC_HIT_RATIO: The percentage of times users did not have to wait for the log writer to free space in the redo log buffer. Redo log entries contain a record of changes that have been made to the database block buffers. 
The log writer (LGWR) process writes redo log entries from the log buffer to a redo log file. The log buffer should be sized so that space is available in the log buffer for new entries, even when access to the redo log is heavy. When the log buffer is undersized, user processes will be delayed as they wait for the LGWR to free space in the redo log buffer. REDO_GENERATED_PER_TXN: The amount of redo, in bytes, generated per transaction during this sample period. The redo log buffer is a circular buffer in the SGA that holds information about changes made to the database. This information is stored in redo entries. Redo entries contain the information necessary to reconstruct, or redo, changes made to the database by INSERT, UPDATE, DELETE, CREATE, ALTER or DROP operations. Redo entries are used for database recovery, if necessary. REDO_W_PER_TXN: The number of redo write operations per transaction during this sample period. RESP_TIME_PER_TXN: The time spent (in ms) in database operations per transaction. It is derived from the total time that user calls spend in the database (DB time) and the number of commits and rollbacks performed. A change in this value indicates either that the workload has changed or that the database's ability to process the workload has changed because of either resource constraints or contention. Note that the default unit in DBA_HIST_SYSMETRIC_SUMMARY is centiseconds but this view converts it to milliseconds. ROWS_PER_SORT: The average number of rows per sort for all types of sorts performed. ROW_CACHE_HIT_RATIO: The percentage of data dictionary (row cache) lookups that find the needed entry already in the cache. Oracle recommends that if the free memory is close to zero and either the library cache hit ratio is less than 0.95 or the row cache hit ratio is less than 0.95, then the SHARED_POOL_SIZE parameter should be increased until the ratios stop improving. 
SESSION_COUNT: The total number of sessions. SHARED_POOL_FREE_PCT: The percentage of the Shared Pool that is currently marked as free. SOFT_PARSE_RATIO: The ratio of soft parses (SQL is already in library cache) to hard parses (SQL must be parsed, validated, and an execution plan formed). The library cache (as set by shared_pool_size) serves to minimize hard parses. Excessive hard parsing could be due to a too-small shared_pool_size or because of SQL with embedded literal values. SQL_SRVC_RESP_TIME: The time (in ms) for each call. Note that the default unit in DBA_HIST_SYSMETRIC_SUMMARY is centiseconds but this view converts it to milliseconds. TEMP_SPACE_USED: The total amount of bytes used in the TEMP tablespace. TOT_IDX_SCANS_PER_TXN: The total number of index scans per transaction. Physical disk speed is an important factor in weighing these costs. As disk access speed increases, the costs of a full-table scan versus single block reads can become negligible. TOT_PARSE_COUNT_PER_TXN: The total number of parses per transaction, both hard and soft. TOT_PGA_ALLOC: Total amount of bytes allocated to the PGA. TOT_PGA_SQL_WORKAREAS: Total amount of bytes of PGA used by SQL work areas. With manual configuration of the SGA, it is possible that compiled SQL statements frequently age out of the shared pool because of its inadequate size. This can increase the frequency of hard parses, leading to reduced performance. When automatic SGA management is enabled, the internal tuning algorithm monitors the performance of the workload, increasing the shared pool if it determines the increase will reduce the number of parses required. TOT_TAB_SCANS_PER_TXN: The number of long and short table scans per transaction. TXNS_PER_LOGON: The number of transactions per logon. USR_CALLS_PER_TXN: The number of user calls per transaction. USR_COMMITS_PCT: The percentage of user commits to total user transactions. USR_RBCK_PCT: The percentage of user rollbacks to total user transactions. 
USR_RBCK_UNDO_RECS_PER_TXN: The number of undo records applied to user-requested rollback changes per transaction. USR_TXN_PER_SEC: The number of user transactions per second. Abbreviations used in the column names: ALLOC - allocated/allocation BCKGRD - background BLCK - block(s) CHKPTS - checkpoints CHG - changes CONS - consistent CR - consistent read CURR - current CURS - cursor(s) DB - database DBWR - database writer DOWNGR - downgraded DRCT - direct EXECS - executions IDX - index ICONN - interconnect LIM - limit NETW - network PCT - percentage PHYS - physical PGA - program global area PQ - parallel query PX - parallel execution QC - query coordinator R - read(s) RBCK - rollbacks RECS - records REQS - requests RESP - response SRVC - service SYNC - synchronous TAB - table TOT - total TXN - transaction USG - usage USR - user UTIL - utilization VOL - volume W - write(s) More details about the metrics can be found in the Oracle Database Concepts guide. |
|
FPSADMIN.ADM_TEMP_FREE_SPACE |
This view is mostly instructions plus a copy of DBA_TEMP_FREE_SPACE so that we can find this information when we encounter the dreaded "ORA-01652: unable to extend temp segment by 128 in tablespace TEMP" error. DBA_TEMP_FILES is a basic view to list the files in the temporary tablespaces and their bytes: SELECT tablespace_name, file_name, bytes, autoextensible, maxbytes, status FROM dba_temp_files WHERE tablespace_name in (select tablespace_name from dba_tablespaces where contents = 'TEMPORARY'); DBA_TEMP_FREE_SPACE is the most important view. The columns are: -TABLESPACE_NAME - temporary tablespace name -TABLESPACE_SIZE - total size in bytes -ALLOCATED_SPACE - size in bytes for (allocated space in use) + (allocated space for reuse) -FREE_SPACE - size in bytes for (allocated space for reuse) + (unallocated space) Please note that allocated space for reuse is part of both allocated_space and free_space. Normally the entire tablespace_size will be allocated but most of it will be available to reuse and therefore in free space. If you want to shrink the file, meaning make the tablespace_size only equal to the allocated space in use, a DBA can do that with this statement. Assuming that autoextend is enabled, it will just expand as needed. alter tablespace YOUR_TEMP_TABLESPACE_NAME shrink space; Here are some example results when we did this: Before shrink: tablespace_size and allocated_size both 32 GB which is the maxbytes, free_size nearly zero when CTM_SUMMARY running After shrink: tablespace_size and allocated_size both about 30 MB (yes MB not GB), free_size most of it when nothing was running 30 seconds after starting CTM_SUMMARY: tablespace_size and allocated_size both about 170 MB, free_size 0 as it was autoextending After 60 seconds: 700 MB and 0 free After 20 minutes: 4 GB and 0 free After complete: 8 GB and all 8 GB free |
|
FPSADMIN.ADM_TEMP_UNDO_SPACE |
This view gives information about the space available in the TEMP and UNDO tablespaces. It is formatted all in one row so that it can be cross joined to insert into log files. This query helps diagnose the cause when we encounter the dreaded "ORA-01652: unable to extend temp segment by 128 in tablespace TEMP" error plus the comments contain some more details and information. DBA_TEMP_FILES is a basic view to list the files in the temporary tablespaces and their bytes: SELECT tablespace_name, file_name, bytes, autoextensible, maxbytes, status FROM dba_temp_files WHERE tablespace_name in (select tablespace_name from dba_tablespaces where contents = 'TEMPORARY'); DBA_TEMP_FREE_SPACE is the most important view. The columns are: -TABLESPACE_NAME - temporary tablespace name -TABLESPACE_SIZE - total size in bytes -ALLOCATED_SPACE - size in bytes for (allocated space in use) + (allocated space for reuse) -FREE_SPACE - size in bytes for (allocated space for reuse) + (unallocated space) Please note that allocated space for reuse is part of both allocated_space and free_space. Normally the entire tablespace_size will be allocated but most of it will be available to reuse and therefore in free space. If you want to shrink the file, meaning make the tablespace_size only equal to the allocated space in use, a DBA can do that with this statement. Assuming that autoextend is enabled, it will just expand as needed. 
alter tablespace YOUR_TEMP_TABLESPACE_NAME shrink space; Here are some example results when we did this: Before shrink: tablespace_size and allocated_size both 32 GB which is the maxbytes, free_size nearly zero when CTM_SUMMARY running After shrink: tablespace_size and allocated_size both about 30 MB (yes MB not GB), free_size most of it when nothing was running 30 seconds after starting CTM_SUMMARY: tablespace_size and allocated_size both about 170 MB, free_size 0 as it was autoextending After 60 seconds: 700 MB and 0 free After 20 minutes: 4 GB and 0 free After complete: 8 GB and all 8 GB free |
|
FPSADMIN.ADM_TOTAL_SIZE |
This simple query shows the total size of the database. |
|
FPSADMIN.ADM_VIEW_TO_EXECUTE |
This view must have one column named sql_query and you can write any insert, update, or delete query in this column then use the ADM_EXECUTE_FROM_VIEW procedure to run all queries in all rows of this view at once. Each sql_query value must end in a semicolon so you can copy/paste and run as a script to test before running with the procedure. |
|
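As a sketch of that workflow (the view body and the GEN_COLORS update below are hypothetical payloads, and we assume ADM_EXECUTE_FROM_VIEW takes no arguments, per the description above):

```sql
-- Redefine the view with the statements to run; each must end in a semicolon
create or replace view fpsadmin.adm_view_to_execute as
select 'update gen_colors set color = ''#FF0000'' where color = ''#F00'';' as sql_query from dual
union all
select 'delete from aaa_scratch#;' as sql_query from dual;

-- Then run every row at once
exec adm_execute_from_view;
```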
FPSADMIN.BLD_ALL_SOURCE |
|
|
FPSADMIN.BLD_APF_TO_LOAD_HELPER |
|
|
FPSADMIN.BLD_CALL_SHELL_SCRIPT |
This view saves the tricky syntax needed to call a shell script from DBMS_SCHEDULER. |
|
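For reference, the core of that syntax is a DBMS_SCHEDULER external job. This is a minimal sketch with a hypothetical job name and script path, not necessarily what the view emits:

```sql
begin
  dbms_scheduler.create_job(
    job_name            => 'FPSADMIN.RUN_SHELL_SCRIPT',  -- hypothetical job name
    job_type            => 'EXECUTABLE',
    job_action          => '/bin/sh',
    number_of_arguments => 1,
    enabled             => false);
  -- pass the script path as argument 1
  dbms_scheduler.set_job_argument_value('FPSADMIN.RUN_SHELL_SCRIPT', 1, '/path/to/your_script.sh');
  dbms_scheduler.enable('FPSADMIN.RUN_SHELL_SCRIPT');
end;
/
```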
FPSADMIN.BLD_CCFLAGS |
This view lists all of the conditional compilation flags used in our PL/SQL code along with the current value and a query to change the value of the flag. If the value is TRUE then the query sets it to FALSE; otherwise it sets it to TRUE. This is most commonly used with the is_speed_test_on flag but this reads all flags. |
|
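For orientation, conditional compilation flags are changed by recompiling with PLSQL_CCFLAGS. The package name below is a placeholder; the real flag-change queries come from BLD_CCFLAGS itself:

```sql
-- Set the flag for this session, then recompile the unit so it takes effect
alter session set plsql_ccflags = 'is_speed_test_on:TRUE';
alter package fpsadmin.your_pkg compile plsql_ccflags = 'is_speed_test_on:TRUE' reuse settings;
```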
FPSADMIN.BLD_CHECK_FPSAPP_HIST |
This view builds a select statement to check if DASH history tables have data. |
|
FPSADMIN.BLD_CHGLOG |
This view builds the script to create or alter the CHGLOG tables and WRLOG triggers. It feeds ADM_SET_CHGLOG so all of this is done automatically. This seems like a good place for a note about why we have the next_chg_ts field in our CHGLOG tables. There are two reasons: 1) The value of null in this field indicates the current record. We could have put an is_curr_record flag instead of next_timestamp for this but either way we need an additional field and having next_timestamp allows for #2. 2) The simple query below gives a snapshot of the table at the time ts in the past. We could get this using lag or lead but that would be significantly slower: select * from xxx_chglog where ts between chg_timestamp and nvl(next_timestamp, sysdate); |
|
FPSADMIN.BLD_CHK_BAD_CHGLOG |
This fancy query builds a long union all query to check all CHGLOG tables for bad data. This was specifically used to fix a bug in 5.7.1 but is a very useful template to query all CHGLOG tables at once for any reason in the future. If you have a bad case of bad CHGLOGs, get DRL to run the Python script that reads the output of this VIEW and uses it to automatically fix the bad CHGLOGs. |
|
FPSADMIN.BLD_COLORS_PREVIEW |
Copy the output of this view to your favorite HTML preview page like https://www.w3schools.com/html/tryit.asp?filename=tryhtml_intro and you can see all of the FPS colors defined in GEN_COLORS along with the text color chosen to accompany it. |
|
FPSADMIN.BLD_COLUMN_COMMENT_WORKSHEET |
The output of this view should be pasted into Excel to add and edit comments and then the query can be run as a script. |
|
FPSADMIN.BLD_COMMON_LOT_QUERIES |
We recommend saving these queries in your favorites because they are repeatedly useful. It is often helpful to pull up these then find/replace for xxx with the lot you are researching and then you can execute each query. select * from fpsinput.wip_event_hist where lot = 'xxx' order by inst desc, lot_seq_within_sec desc; select * from fpsinput.wip_lots_verify where lot = 'xxx'; select * from fpsbase.wip_step_hist where lot = 'xxx' order by inst desc, lot_seq_within_sec desc; select * from fpsbase.wip_events_curr_step where lot = 'xxx' order by inst desc, lot_seq_within_sec desc; select * from fpsbase.wip_lots_realtime where lot = 'xxx'; select * from fpsbase.wip_lots_plus where lot = 'xxx'; select * from fpsbase.wip_step_hist_and_future where lot = 'xxx' order by overall_lot_seq_num; select * from fpsbase.wip_step_future where lot = 'xxx' order by num_seq_away; select * from fpsbase.wip_step_future_plus where lot = 'xxx' order by num_seq_away; select * from fpsbase.wip_step_future_assignments where lot = 'xxx' order by num_steps_away; |
|
FPSADMIN.BLD_COMPARE |
This view builds a query to compare two tables/views/queries. This comparison query is really useful to test changes. For example, create a new version of a view as ABC_NEW and then run a comparison on ABC and ABC_NEW to confirm the differences are as expected. |
|
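The generated comparison is essentially a symmetric difference. A minimal hand-written version of the same idea, using the ABC/ABC_NEW names from the example (both assumed to have identical column lists), looks like:

```sql
-- Rows only in the old version, then rows only in the new version
select * from (
  select 'only_in_ABC' as source, a.* from ABC a
  minus
  select 'only_in_ABC', b.* from ABC_NEW b
)
union all
select * from (
  select 'only_in_ABC_NEW' as source, b.* from ABC_NEW b
  minus
  select 'only_in_ABC_NEW', a.* from ABC a
);
```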
FPSADMIN.BLD_COPY_DASH_C_TABLES |
Here are instructions on how to copy Dashboard configuration from a database at a different site. This is normally used only for a new site that is the first FPS site at the client. If we are starting up a new site at an existing client then we almost always want to copy the configuration from an existing site of that client which we can usually do easily via a database link. However if the sites cannot be connected via a database link then these instructions will work. ----- PART 1: ON THE SOURCE (EXISTING) DATABASE 1) Run this script with F5 (not F9) in SQL Developer or Toad. Fill in OLDFAC with the name of the facility. If the site has multiple facilities, pick the facility that you want to use for DASH_C. The insert syntax only works for up to 5000 rows at a time so DASH_C_CATEGORY_TABLE_COLS is split into multiple queries that should be less than 5000 rows each. Since this is in a comment, we cannot actually include slashes and asterisks so you have to find-replace the slashes: --These tables each have less than 5000 rows per facility select (slash)*insert*(slash) * from dash_c_columns order by sort_order; select (slash)*insert*(slash) * from dash_c_spotlights where facility = 'OLDFAC' order by web_version, category, sort_order; select (slash)*insert*(slash) * from dash_c_category_tables where facility = 'OLDFAC' order by web_version, category, table_id; --This table had 32k rows per facility as of 7.7.4 and we need to break it up into 5000 row chunks for (slash)*insert*(slash) so we do this 7 times plus one more just in case it grows select (slash)*insert*(slash) WEB_VERSION, FACILITY, CATEGORY, TABLE_ID, COLUMN_ID, COLUMN_WIDTH_PCT, COLUMN_ENTITY_MAX_NUM, OVR_SORT_ORDER, SHOULD_TOTAL, SHOULD_DISPLAY, SHOULD_AUTO_HIDE, SHOULD_EXPORT, SHOULD_DISPLAY_EXP, OVR_HEADER_DISPLAY from (select c.*, row_number() over (order by web_version, category, table_id, column_id) as rn from dash_c_category_table_cols c where facility = 'OLDFAC') where rn > 0 and rn <= 5000; select 
(slash)*insert*(slash) WEB_VERSION, FACILITY, CATEGORY, TABLE_ID, COLUMN_ID, COLUMN_WIDTH_PCT, COLUMN_ENTITY_MAX_NUM, OVR_SORT_ORDER, SHOULD_TOTAL, SHOULD_DISPLAY, SHOULD_AUTO_HIDE, SHOULD_EXPORT, SHOULD_DISPLAY_EXP, OVR_HEADER_DISPLAY from (select c.*, row_number() over (order by web_version, category, table_id, column_id) as rn from dash_c_category_table_cols c where facility = 'OLDFAC') where rn > 5000 and rn <= 10000; select (slash)*insert*(slash) WEB_VERSION, FACILITY, CATEGORY, TABLE_ID, COLUMN_ID, COLUMN_WIDTH_PCT, COLUMN_ENTITY_MAX_NUM, OVR_SORT_ORDER, SHOULD_TOTAL, SHOULD_DISPLAY, SHOULD_AUTO_HIDE, SHOULD_EXPORT, SHOULD_DISPLAY_EXP, OVR_HEADER_DISPLAY from (select c.*, row_number() over (order by web_version, category, table_id, column_id) as rn from dash_c_category_table_cols c where facility = 'OLDFAC') where rn > 10000 and rn <= 15000; select (slash)*insert*(slash) WEB_VERSION, FACILITY, CATEGORY, TABLE_ID, COLUMN_ID, COLUMN_WIDTH_PCT, COLUMN_ENTITY_MAX_NUM, OVR_SORT_ORDER, SHOULD_TOTAL, SHOULD_DISPLAY, SHOULD_AUTO_HIDE, SHOULD_EXPORT, SHOULD_DISPLAY_EXP, OVR_HEADER_DISPLAY from (select c.*, row_number() over (order by web_version, category, table_id, column_id) as rn from dash_c_category_table_cols c where facility = 'OLDFAC') where rn > 15000 and rn <= 20000; select (slash)*insert*(slash) WEB_VERSION, FACILITY, CATEGORY, TABLE_ID, COLUMN_ID, COLUMN_WIDTH_PCT, COLUMN_ENTITY_MAX_NUM, OVR_SORT_ORDER, SHOULD_TOTAL, SHOULD_DISPLAY, SHOULD_AUTO_HIDE, SHOULD_EXPORT, SHOULD_DISPLAY_EXP, OVR_HEADER_DISPLAY from (select c.*, row_number() over (order by web_version, category, table_id, column_id) as rn from dash_c_category_table_cols c where facility = 'OLDFAC') where rn > 20000 and rn <= 25000; select (slash)*insert*(slash) WEB_VERSION, FACILITY, CATEGORY, TABLE_ID, COLUMN_ID, COLUMN_WIDTH_PCT, COLUMN_ENTITY_MAX_NUM, OVR_SORT_ORDER, SHOULD_TOTAL, SHOULD_DISPLAY, SHOULD_AUTO_HIDE, SHOULD_EXPORT, SHOULD_DISPLAY_EXP, OVR_HEADER_DISPLAY from (select c.*, 
row_number() over (order by web_version, category, table_id, column_id) as rn from dash_c_category_table_cols c where facility = 'OLDFAC') where rn > 25000 and rn <= 30000; select (slash)*insert*(slash) WEB_VERSION, FACILITY, CATEGORY, TABLE_ID, COLUMN_ID, COLUMN_WIDTH_PCT, COLUMN_ENTITY_MAX_NUM, OVR_SORT_ORDER, SHOULD_TOTAL, SHOULD_DISPLAY, SHOULD_AUTO_HIDE, SHOULD_EXPORT, SHOULD_DISPLAY_EXP, OVR_HEADER_DISPLAY from (select c.*, row_number() over (order by web_version, category, table_id, column_id) as rn from dash_c_category_table_cols c where facility = 'OLDFAC') where rn > 30000 and rn <= 35000; select (slash)*insert*(slash) WEB_VERSION, FACILITY, CATEGORY, TABLE_ID, COLUMN_ID, COLUMN_WIDTH_PCT, COLUMN_ENTITY_MAX_NUM, OVR_SORT_ORDER, SHOULD_TOTAL, SHOULD_DISPLAY, SHOULD_AUTO_HIDE, SHOULD_EXPORT, SHOULD_DISPLAY_EXP, OVR_HEADER_DISPLAY from (select c.*, row_number() over (order by web_version, category, table_id, column_id) as rn from dash_c_category_table_cols c where facility = 'OLDFAC') where rn > 35000 and rn <= 40000; 2) Copy or save the output. If you will be able to connect to the new database with the same SQL Developer or Toad application then just copy/paste into a new tab. If you have to use Citrix or Remote Desktop then save to a file. 3) The output will have six sections. In between each section, replace these lines: 999 rows selected. REM INSERTING into DASH_C_XXX SET DEFINE OFF; With these lines: commit; end; (slash) begin At the top, start with just begin. At the bottom, end with just commit-end-slash. 
----- PART 2: ON THE NEW DATABASE 1) Turn ETL off but ignore the message that tells you to wait for running jobs: exec adm_turn_etl_off; 2) Disable some constraints: exec adm_disable_cons('FPSAPP.DASH_C_CATEGORY_TABLES_FK_FAC'); exec adm_disable_cons('FPSAPP.DASH_C_CAT_TABLE_COLS_C_FK'); exec adm_disable_cons('FPSAPP.DASH_C_SPOTLIGHTS_FK_FAC'); exec adm_disable_cons('FPSAPP.DASH_C_CAT_TABLE_COLS_TABLE'); 3) Delete the existing DASH_C tables: delete from dash_c_spotlights; delete from dash_c_category_tables; delete from dash_c_category_table_cols; delete from dash_c_columns; commit; 4) Run the script from part 1 using F5. Hopefully it will not have any errors but fix if necessary: 5) Update the facility to your facility: update dash_c_spotlights set facility = 'NEWFAC' where facility = 'OLDFAC'; update dash_c_columns set facility = 'NEWFAC' where facility = 'OLDFAC'; update dash_c_category_tables set facility = 'NEWFAC' where facility = 'OLDFAC'; update dash_c_category_table_cols set facility = 'NEWFAC' where facility = 'OLDFAC'; 6) Enable the disabled constraints: exec adm_enable_cons('FPSAPP.DASH_C_CATEGORY_TABLES_FK_FAC'); exec adm_enable_cons('FPSAPP.DASH_C_CAT_TABLE_COLS_C_FK'); exec adm_enable_cons('FPSAPP.DASH_C_SPOTLIGHTS_FK_FAC'); exec adm_enable_cons('FPSAPP.DASH_C_CAT_TABLE_COLS_TABLE'); 7) If your site has multiple facilities then you can duplicate the configuration with the STARTUP_ADD_FACILITY procedure. 8) When finished, turn ETL on: exec adm_turn_etl_on('Y'); |
|
FPSADMIN.BLD_CRASH_DIAG_QUERIES |
These queries are useful but not standard. Plus many of them require a time parameter so it is just easier to edit them rather than try to save as standard. |
|
FPSADMIN.BLD_DATABASE_LINK |
The syntax to create a database link is difficult to remember so we store it here so we do not have to search the internet every time we need to create a link. Here is an example of the syntax: create database link yourdblink connect to fpsxxx identified by "PASSWORD" using '(DESCRIPTION= (ADDRESS= (PROTOCOL=TCP) (HOST=server.host.com) (PORT=1521)) (CONNECT_DATA=(SID=FPSDWH)))'; To build the statement above based on DBA_DB_LINKS then use this: select 'create database link ' || db_link || ' connect to ' || username || ' identified by "PASSWORD" using ''' || host || ''';' from dba_db_links; Finally, creating a link for the database to point to itself can be useful. One common case is when you have three FPS databases and one of them is the single original source of specific client information. Let's say P1 is the source and D and P2 are the others. On D and P2 you create a link with the normal syntax above but on P1 you create a link of the same name with this syntax pointing to itself. Then you can reference the same link name in views and they will work on all three databases. Syntax is like this: create database link yourdblink connect to fpsxxx identified by "PASSWORD" using 'localhost/FPSDWH'; |
|
FPSADMIN.BLD_DBMS_SCHEDULER_JOB_SYNTAX |
This view saves the tricky syntax needed to create, modify, drop, run, and view a job from DBMS_SCHEDULER. For complete details, see the excellent documentation at: https://docs.oracle.com/cd/E11882_01/server.112/e25494/scheduse.htm#ADMIN034 It is important to note that the repeat_interval is based on the start time of the previous run. Therefore the next scheduled execution time might arrive while the job is still running. If so, the new instance of the job will wait to start until the current one completes. There is no easy way to schedule based on the end time of the previous run. |
|
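A sketch of the core calls (the job name, PL/SQL block, and schedule below are hypothetical examples, not output of the view):

```sql
-- Create a daily 02:00 job; repeat_interval is relative to the previous start time
begin
  dbms_scheduler.create_job(
    job_name        => 'FPSADMIN.MY_NIGHTLY_JOB',   -- hypothetical name
    job_type        => 'PLSQL_BLOCK',
    job_action      => 'begin my_proc; end;',       -- hypothetical procedure
    start_date      => systimestamp,
    repeat_interval => 'FREQ=DAILY; BYHOUR=2; BYMINUTE=0',
    enabled         => true);
end;
/

-- Run immediately, then drop
exec dbms_scheduler.run_job('FPSADMIN.MY_NIGHTLY_JOB');
exec dbms_scheduler.drop_job('FPSADMIN.MY_NIGHTLY_JOB');

-- View job state and run history
select job_name, state, next_run_date from dba_scheduler_jobs;
select job_name, status, actual_start_date from dba_scheduler_job_run_details;
```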
FPSADMIN.BLD_DEPENDENCY_PATH |
This view builds a query to get all of the objects dependent on a given object. It is way too slow to omit the START WITH line and filter afterwards and not worth it to make this a function returning a table so we just save this query in this view. You have to copy/paste it to use it. |
|
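The underlying pattern is a hierarchical query over the dependency dictionary view. A minimal sketch (the FPSBASE.WIP_LOTS_PLUS target is just an example; the saved query may differ in detail):

```sql
-- Everything that depends, directly or transitively, on the given object
select level, owner, name, type
  from dba_dependencies
 start with referenced_owner = 'FPSBASE'
        and referenced_name  = 'WIP_LOTS_PLUS'
connect by nocycle referenced_owner = prior owner
               and referenced_name  = prior name;
```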
FPSADMIN.BLD_DISABLE_FK_TRG_FOR_TABLE |
This view builds a query to disable all foreign keys and triggers for a given table. It is useful when we need to make major changes to a table and just want to do a mass delete and insert of all rows. |
|
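The generated statements follow this shape; a hand-rolled sketch for a placeholder table (the actual view may also handle foreign keys on other tables that reference this one):

```sql
-- Generates, but does not run, the disable statements for YOUR_TABLE
select 'alter table ' || owner || '.' || table_name ||
       ' disable constraint ' || constraint_name || ';' as sql_query
  from all_constraints
 where constraint_type = 'R' and table_name = 'YOUR_TABLE'
union all
select 'alter trigger ' || owner || '.' || trigger_name || ' disable;'
  from all_triggers
 where table_name = 'YOUR_TABLE';
```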
FPSADMIN.BLD_ETP_QUERIES |
We recommend saving these queries in your favorites because they are repeatedly useful for debugging ETP. It is often helpful to pull up these then find/replace for xxx with the tool you are researching and then you can execute each query. select * from etp_hist where tool = 'xxx' and etp_from_inst > trunc(sysdate) + 4/24 order by etp_from_inst, etp_hist_seq_within_sec, entity; select * from wip_hist_for_etp where tool = 'xxx' and inst > trunc(sysdate) + 4/24 and t = 'WWH' and is_etp = 'Y'; select * from wip_step_hist where tool = 'xxx' and inst > trunc(sysdate) + 4/24 order by inst desc; select * from dash_p_wip_wafer_hist where tool = 'xxx' and inst > trunc(sysdate) + 4/24 order by inst desc; select * from dash_p_gantt_entity_state where tool = 'xxx' and to_inst > trunc(sysdate) + 4/24 order by from_inst; select * from dash_p_gantt_tool_state where tool = 'xxx' and to_inst > sysdate - 4/24 order by from_inst; select * from dash_p_gantt_tool_lot where tool = 'xxx' and comp_inst > trunc(sysdate) + 4/24 order by lot_beg_inst_for_sort desc, lot, from_inst; select * from dash_b_h_etp_7d where tool = 'xxx' and to_inst > trunc(sysdate) + 4/24 order by from_inst; |
|
FPSADMIN.BLD_EXCEL_IMPORT |
This view returns an Excel formula to insert data from an Excel spreadsheet into an Oracle database table. It also includes a header row. In order to use this, you must have the data in the Excel spreadsheet for all columns in the database table except UPDATED_INST in the same order. Basically if your Excel spreadsheet looks exactly like the table then you can use this to import it. NOTE: This is currently not perfect! It does not properly handle columns which are DATE or TIMESTAMP. It will croak if there are any single tick marks (a.k.a. apostrophes) in any cell in the spreadsheet which you might be able to fix with SUBSTITUTE(cell, "'", "''"). So if you use this and can improve it with your next use then feel free! |
|
FPSADMIN.BLD_FACILITY_CHG_INSTRUCTIONS |
This view stores a script that helps an expert user change the facility and/or split out a new facility. This is an extremely painful process that is not recommended and best avoided by configuring facility correctly at startup. If your site has less than a couple of months of data then resetting the entire history and starting from scratch at initialize_inst is likely easier. But if we have to do it then this script should help. Keep in mind this is likely to take a full day or more. |
|
FPSADMIN.BLD_FAVORITE_QUERIES |
This view stores frequently used queries that we typically store in our Toad personal queries for easy access. If using SQL Developer or a new copy of Toad we can access them from this view. |
|
FPSADMIN.BLD_FIND_DUPLICATES |
This view builds queries to help resolve issues with duplicate primary/unique keys in tables. It produces two SQL queries for each primary/unique key selected: FIND_DUPLICATE_KEYS -- this query selects only the key records which are duplicated in the ref-view FIND_ROWS_W_DUPLICATE_KEYS -- selects all records from the ref-view ordered with duplicates at the beginning Note that the Oracle message looks like "ORA-00001: unique constraint (FPSAPP.TABLE_NAME_PK) violated" so we include the owner_dot_constraint column so you can just copy/paste. Example usage from Oracle error message (one row): select * from bld_find_duplicates where owner_dot_constraint = 'OWNERCONSTRAINT'; Example usage by table (one row for each unique key): select * from bld_find_duplicates where table_name = 'TABLE'; If you would like to build a query to remove the duplicates, see the BLD_REMOVE_DUPLICATES view. |
|
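The generated FIND_DUPLICATE_KEYS query follows the standard group-by pattern; this sketch assumes a hypothetical table with a key of (FACILITY, LOT):

```sql
-- Key values that appear more than once
select facility, lot, count(*) as num_copies
  from your_table
 group by facility, lot
having count(*) > 1;
```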
FPSADMIN.BLD_FIRST_VALUE_EXAMPLE |
This view has a useful query to figure out which of the many options to use to get the desired behavior from the first_value and last_value window functions. |
|
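The usual gotcha that query illustrates: last_value defaults to a window of RANGE BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW, so without an explicit windowing clause it just returns the current row's value. A sketch (the lot/tool/inst columns are illustrative):

```sql
select lot,
       first_value(tool) over (partition by lot order by inst) as first_tool,
       -- extend the window or last_value only sees up to the current row
       last_value(tool)  over (partition by lot order by inst
                               rows between unbounded preceding
                                        and unbounded following) as last_tool
  from fpsbase.wip_step_hist;
```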
FPSADMIN.BLD_POPULATE_COLORS |
When we have a new client we will want to populate the color field in modules, technologies, customers, routes, etc. with random values and this view builds a script of queries to do this. Potentially we could make this a procedure and have an insert trigger which automatically populates the color. |
|
FPSADMIN.BLD_PRIVS_AND_SYNONYMS |
This view compares the desired privileges for FPS schemas with the privileges which are currently granted and then builds grant and revoke queries to get us to the ideal state. This view is used by ADM_SET_PRIVILEGES to do this automatically when FPSADMIN has the "grant any object privilege" system privilege. Otherwise the privileges will have to be granted/revoked by each schema. These queries can be run blindly without review as the logic is all built in this view. *** CRITICAL: This view must not refer to any objects in FPS schemas besides FPSADMIN without an *** explicit prefix. If it does, it will introduce a circular dependency that will *** break the DWH_build process. |
|
FPSADMIN.BLD_REMOVE_DUPLICATES |
This view builds a query to update a column which is part of a primary or unique key. It disables the constraint, does the update, removes the duplicates, and enables the constraint. If you are just looking for the query template to remove duplicate rows manually, which is useful if you are trying to drop a column from a unique constraint, here it is: delete from TABLE where rowid in ( select rowid from ( select rowid, count(*) over (partition by COLUMN) as n, min(rowid) over (partition by COLUMN) as m from TABLE ) where n > 1 and rowid != m ); |
|
FPSADMIN.BLD_SAILBOAT_MULTIPLE_TABLES |
This view saves a useful semi-automated sailboat racing technique where we create a temp table and then insert rows into it within a begin/end statement so we know we have to query each row and we are not affected by connection speed. The concern with this technique is that we are only selecting a few columns and Oracle is smart about this so it might be slower to select all columns. --create empty table create table aaa_sailboat# as select lot, carrier, prd as table_name, prd as filter_by, systimestamp as ts, est_end_tool_reason as col1, process_state_inst as col2 from wip_lots_plus where 0=1; --generate the query: use the query in this view and copy the first 30 lines of output to paste into the next part --Paste first 30 lines (5 of each) wrapped by begin/start and end/end begin delete from aaa_sailboat#; insert into aaa_sailboat# values ('START', 'START', 'START', 'START', systimestamp, null, null); paste 30 lines here; insert into aaa_sailboat# values ('END', 'END', 'END', 'END', systimestamp, null, null); commit; end; --Summarize the results select table_name, filter_by, avg(d), min(d), max(d), count(*) from ( select timestamp_delta_sec(ts, lead(ts) over (order by ts)) as d, t.* from aaa_sailboat# t ) where table_name not in ('START','END') group by table_name, filter_by order by 1, 2; --Drop table when finished drop table aaa_sailboat#; |
|
FPSADMIN.BLD_SAILBOAT_SINGLE_TABLE |
This view saves a useful semi-automated sailboat racing technique where we create a temp table and then insert rows into it within a begin/end statement so we know we have to query each row and we are not affected by connection speed. As opposed to the multiple table version, this technique queries all non-empty columns so it is a test of the real speed and is not affected by Oracle being smart about selecting only a few columns. --get list of non-empty columns select get_tab_col_list('wip_lots_plus', 'Y') from dual; --create empty table create table aaa_sailboat# as select systimestamp as ts, paste_non_empty_col_list_here from wip_lots_plus where 0=1; --generate the query: use the query in this view and copy the first 30 lines of output to paste into the next part --Paste first 30 lines (5 of each) wrapped by begin/start and end/end begin delete from aaa_sailboat#; insert into aaa_sailboat# values ('START', 'START', 'START', 'START', systimestamp, null, null); paste 30 lines here; insert into aaa_sailboat# values ('END', 'END', 'END', 'END', systimestamp, null, null); commit; end; --Summarize the results select table_name, filter_by, avg(d), min(d), max(d), count(*) from ( select timestamp_delta_sec(ts, lead(ts) over (order by ts)) as d, t.* from aaa_sailboat# t ) where table_name not in ('START','END') group by table_name, filter_by order by 1, 2; --Drop table when finished drop table aaa_sailboat#; |
|
FPSADMIN.BLD_TABLE_QUERIES |
This view builds the delete, insert, update, and select statements needed for the APF report XXX_QUERY used to populate table XXX by the Load_FPS_DWH.sh shell script. Simply paste the results of this view into the delete_query, insert_query, and/or update_query function blocks. Example use: select sql_query from bld_table_queries where table_name = 'RTG_PRDS' and query_type = 'INSERT'; Honestly this entire view is ridiculously tricky, but one particularly tricky part is the need to check whether NUMBER columns are allowed to be null. This is because APF will return "[Not Set]" if you convert a null number to a string, so we must check for null and only convert to string when not null. |
|
FPSADMIN.BLD_TABLE_QUERIES_SLOW |
This view builds the delete, insert, update, and select statements needed for the APF report XXX_QUERY used to populate table XXX by the Load_FPS_DWH.sh shell script. Simply paste the results of this view into the delete_query, insert_query, and/or update_query function blocks. Example use: select sql_query from bld_table_queries_slow where table_name = 'RTG_PRDS' and query_type = 'INSERT'; Honestly this entire view is ridiculously tricky, but one particularly tricky part is the need to check whether NUMBER columns are allowed to be null. This is because APF will return "[Not Set]" if you convert a null number to a string, so we must check for null and only convert to string when not null. |
|
FPSADMIN.BLD_TAB_COLS |
Shortcut view to use for table columns since all_tab_cols includes views too |
|
FPSADMIN.BLD_TEST_OTHER_FACILITY_LOGIC |
This is a sample script to test the other facility logic |
|
FPSADMIN.BLD_VS_PROC_CT_BY_ROUTE |
This useful view compares the Finished CT on three databases, the old method for Dynamic CT on all three databases, TCT (from our Throughput Calculations) on all three databases, and the new method for Dynamic CT on this database. It is critical to choose a set of filters that return only one set of records from CTM_SUMMARY. By one set, I mean an entire prd but just one record for each step. In other words, this should filter by the primary key except for step. To use this, just find/replace for FFF, PPP, GGG, and @yourdblink plus 'Prod' if your production lot_group is not Prod. |
|
FPSADMIN.BLD_VS_PROC_CT_BY_STEP |
This view is a drilldown of BLD_VS_PROC_CT_BY_ROUTE for each step. It compares the cycle time from the old WIP_STEP_HIST -> WIP_END_SHIFT_HIST -> CTM_SUMMARY method against the new ECT_HIST -> ECT_xxx_7D_2D_CW -> ECT_SUMMARY method. It is critical to choose a set of filters that return only one set of records from CTM_SUMMARY. By one set, I mean an entire prd but just one record for each step. In other words, this should filter by the primary key except for step. To use this, just find/replace for FFF, PPP, and GGG plus 'Prod' if your production lot_group is not Prod. |
|
FPSADMIN.BLD_WEBAPP_SPEED_TEST |
This series of queries is very effective for testing the queries used by a slow web application:
create table aaa_test_webapp# as select sysdate as inst, prev_exec_start, exec_time, sid, serial#, sql_fulltext, sql_text, machine, progress, target from adm_sessions where 0=1;
insert into aaa_test_webapp# select sysdate as inst, prev_exec_start, exec_time, sid, serial#, sql_fulltext, sql_text, machine, progress, target from adm_sessions where username = 'FPSWEB' and status = 'ACTIVE' and upper(sql_text) like '%WF%';
select prev_exec_start, max(inst), sid, serial#, sql_fulltext, sql_text from aaa_test_webapp# group by prev_exec_start, sid, serial#, sql_fulltext, sql_text order by 1, 3;
drop table aaa_test_webapp#; |
|
FPSADMIN.CHK_ALL |
This view uses the function CHK_ALL_TO_TABLE to first refresh all DBF tables and then query all of the CHK and SYSMNTR views at once. Because it refreshes the tables first, the records returned from this view are realtime, but this takes about a minute. To simply bring up the results without refreshing, just query CHK_ALL_NO_REFRESH. |
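The choice between the two entry points described here can be sketched as follows (both views appear in this dictionary):

```sql
-- Realtime check: refreshes the DBF tables first, then runs all CHK and
-- SYSMNTR views at once (takes about a minute)
select * from chk_all;

-- Instant check: same output shape, but from the last CHK_ALL run,
-- with no refresh of the DBF tables
select * from chk_all_no_refresh;
```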
|
FPSADMIN.CHK_ALL_NO_REFRESH |
This view uses the function CHK_ALL_TO_TABLE to query all of the CHK and SYSMNTR views at once. Unlike the normal CHK_ALL, this view does not refresh the DBF tables so it is very fast, but it only pulls up the last run of CHK_ALL rather than realtime updated information. |
|
FPSADMIN.CHK_CHGLOG |
This view checks for all of the CHGLOG tables and the triggers to populate them. We want CHGLOG for all FPSINPUT tables which are updated manually or have at least one column updated manually. We also want it for FPSADMIN.CFG_xxx tables and FPSAPP C and W tables. Finally, we always want EQP_PORTS and MHS_CARRIERS because port history and location history are critical to many applications. This view checks both for new CHGLOG tables that need to be created and existing ones that need to be modified or dropped. It also checks if the trigger should be recreated because the table had DDL changes since the trigger was updated. This view feeds BLD_CHGLOG and then ADM_SET_CHGLOG if IS_AUTO is set to Y, so most changes are done automatically when you select from CHK_ALL. |
|
FPSADMIN.CHK_COLUMNS |
This view looks for nullable columns that should never be nullable and manual not null columns with no default value. |
|
FPSADMIN.CHK_COLUMNS_FOR_APD_REF |
This view checks four things regarding APD and REF views: 1) Finds APD and REF views where the corresponding table does not exist or is not configured to update by APD or REF. 2) Finds tables which should be updated by APD or REF where the view does not exist. 3) Checks if any columns in an APD or REF view have a column name or datatype different from the corresponding table which the view populates. Note that we would like to have the columns in the same order but it does not matter. 4) Checks that all columns in the LOOP table match the main table. Note this view should be able to use a full outer join but an Oracle bug prevents this. |
|
FPSADMIN.CHK_COLUMN_WIDTHS |
This view checks if a column of the same name in multiple tables has different types or column widths. Discrepancies within FPSADMIN/INPUT/BASE tables are listed first. Then come discrepancies involving columns which exist in a table in any of those three schemas; for the desired type/length it uses the mode type and max length from tables in FPSADMIN/INPUT/BASE. Finally come discrepancies involving columns which are only in FPSAPP; for those it uses the mode type and max length from FPSAPP tables. |
|
FPSADMIN.CHK_INVALID |
This view looks for invalid objects in all FPS schemas as well as errors in ADM_LOAD. We must use dba_xxx views here because dbf_objects does not include triggers and synonyms in other schemas and we need to check if those are invalid. Then because we must use dba_xxx views, and Oracle does not allow grant select on views using dba_xxx, and we need to grant permission on CHK_INVALID to other users, our solution is to refresh a table with this data which we can grant. |
|
FPSADMIN.CHK_LOAD_CFG |
This view checks that all FPSINPUT/BASE/APP tables are accounted for in CFG_TABLES. Tables loaded manually need to be listed as MANUAL. We also check if there are any records in CFG_TABLES for tables which do not exist. Please note that this view must use DBF_xxx views if permissions are granted to FPSAPP for SYSMNTR, because FPSAPP does not see all objects when querying the data dictionary directly since it does not have permission on everything. Since the only reason to avoid DBF_xxx would be to grant to other users, we just always use DBF_xxx. |
|
FPSADMIN.CHK_NAMING_CONVENTION |
This view checks that all objects are named according to our agreed upon conventions:
-- Underscore is the delimiter.
-- First token is the application like DASH, FF, SEC, TPUT, etc.
-- Second token is the use of the table. Currently this is one of the following:
-- P for page. All web pages query only from P tables/views (or W which are both written and read by web pages). P tables/views come from FPSINPUT, FPSBASE, B, C, and W. P tables/views must be preserved intact unless coordinated with a web page change.
-- B for base. These tables/views are the link between FPSINPUT/FPSBASE and the P tables/views. All data comes from FPSINPUT/FPSBASE plus other B tables/views and is transformed as an intermediate step to load the P tables/views which are actually used by the page. We know that we can make any modifications we want to B tables/views or even delete them as long as the P views remain valid.
-- C for config. C tables are also base tables and never used by pages. If we want to use the data from a C table on a page we just make a simple P view that selects from the C table. Like B, we can modify C tables as long as P views remain valid and include the same columns.
-- G for global. G tables are configuration tables which are global to all FPS sites so the data is the same everywhere similar to FPSADMIN. The insert statements to populate G tables will be in the create script.
-- W means the table is populated by a webpage. Most of the FF tables and all feedback tables are W tables since they are populated by user input on the webpage. Like P tables, W tables must be preserved intact unless coordinated with a web page change since our pages have insert/update/delete queries for these tables. Web pages are allowed to select from W tables.
-- E for example. These views are an example of a query from the website that we want to store within the database so that we can test it and ensure it remains valid.
-- REF for refresh. Views used to refresh tables with the ADM_TABLE_REFRESH or ADM_TABLE_MATCH procedure. For example, the REF view for DASH_B_WIP would be named DASH_REF_B_WIP.
-- APD for append. Views used to append data to historical tables with the ADM_TABLE_APPEND procedure. For example, the APD view for DASH_P_H_WIP would be named DASH_APD_P_H_WIP.
-- DEL for delete. DEL views are only used with the ADM_TABLE_MATCH procedure for faster deletes. These views only need to know what records exist for the PK and not the values so they can be much faster. For example, WIP_DEL_LOTS_REALTIME_T just needs to know the lots and can get that from WIP_LOTS_STATIC.
-- CHK for check views. These views are used to check and validate data used specifically for the application.
-- APF for reports. These views are only used by APF reports.
-- Third token can be H for history. We will have B_H, P_H, and W_H but we would never have C_H. We would also never have APD_H nor REF_H because the third token after APD or REF would also be B/C/W/P.
-- All remaining info (starting with the fourth token if the third is H, or the third token otherwise) is the description of the table. |
|
FPSADMIN.CHK_SCHEDULER |
This view is part of CHK_ALL. Like all FPSADMIN.CHK views, any row that is returned represents an error that should be fixed or a warning that should be investigated. The absence of any rows indicates perfection! For sites where Scheduler is turned off, this view will always return a single row indicating that but no other warnings. |
|
FPSADMIN.CHK_TABLES |
This view does a whole bunch of checks on all FPS tables and indexes. |
|
FPSADMIN.DBF_REF_ARGUMENTS |
Refresh SYS view DBA_ARGUMENTS into table with FPS only filter |
|
FPSADMIN.DBF_REF_COLUMNS_FROM_SOURCE |
This table extracts data from DBA_SOURCE for use in CHK views. Because the permissions on DBA views prevent views using them from being granted to others, we must use DBF tables in our CHK views instead of the DBA view directly. This table initially only includes one use but it has a generic name so we can use it for other future cases where we need to check DBA_SOURCE. |
|
FPSADMIN.DBF_REF_COL_COMMENTS |
Refresh SYS view DBA_COL_COMMENTS into table with FPS only filter |
|
FPSADMIN.DBF_REF_CONSTRAINTS |
Refresh SYS view DBA_CONSTRAINTS into table with FPS only filter |
|
FPSADMIN.DBF_REF_CONS_COLUMNS |
Refresh SYS view DBA_CONS_COLUMNS into table with FPS only filter |
|
FPSADMIN.DBF_REF_DEPENDENCIES |
Refresh SYS view DBA_DEPENDENCIES into table with FPS only filter |
|
FPSADMIN.DBF_REF_DEV_ONLY_OBJECTS |
We do not want dev only objects in our normal DBF tables including DBF_OBJECTS but we want to include those which are invalid in SYSMNTR_ERRORS and we want to store them in GrimRepo via DBF_SCHEMA_SCOPE. In order to be able to grant that view, we need to avoid querying any DBA views so we have to refresh these into a table. |
|
FPSADMIN.DBF_REF_ERRORS |
Refresh SYS view DBA_ERRORS into table with FPS only filter. We filter out FPS objects in the recyclebin but we do not filter out FPS objects which are development as we want to see those errors. |
|
FPSADMIN.DBF_REF_INDEXES |
Refresh SYS view DBA_INDEXES into table with FPS only filter |
|
FPSADMIN.DBF_REF_IND_COLUMNS |
Refresh SYS view DBA_IND_COLUMNS into table with FPS only filter |
|
FPSADMIN.DBF_REF_JOBS |
Refresh SYS view DBA_JOBS into table. We do not include the FPS only filter here because Oracle does not use DBA_JOBS for system jobs. |
|
FPSADMIN.DBF_REF_OBJECTS |
Refresh SYS view DBA_OBJECTS into table with FPS only filter |
|
FPSADMIN.DBF_REF_PROCEDURES |
Refresh SYS view DBA_PROCEDURES into table with FPS only filter |
|
FPSADMIN.DBF_REF_SCHEDULER_JOBS |
Refresh SYS view DBA_SCHEDULER_JOBS into table with FPS only filter |
|
FPSADMIN.DBF_REF_SEQUENCES |
Refresh SYS view DBA_SEQUENCES into table. |
|
FPSADMIN.DBF_REF_SESSIONS |
Because we must use dba_xxx views and Oracle does not allow grant select on views using dba_xxx and we need to grant permission on ADM_SESSIONS to FPSAPP for SYSMNTR, our solution is to refresh a table with this data which we can grant. |
|
FPSADMIN.DBF_REF_SESSION_COUNT |
Because Oracle does not allow grant select on views using v$xxx and we need to grant permission for SYSMNTR, our solution is to refresh a table with this data which we can grant. Note we make a special exception to our rule of not using REF views in other places and use this REF view in ADM_LOCKED_STATUS. |
|
FPSADMIN.DBF_REF_SOURCE_FOR_CHK |
Because we must use DBA_xxx views, and Oracle does not allow grant select on views using DBA_xxx, and we need to grant permission on various CHK and SYSMNTR views, we store the necessary information from DBA_xxx views into DBF tables. DBA_SOURCE is a huge table and there is no reason to store all of it, so we just store the lines that we need for CHK and SYSMNTR views and give the table the FOR_CHK suffix so that users do not expect it to contain all rows from the DBA view. |
|
FPSADMIN.DBF_REF_SYNONYMS |
Refresh SYS view DBA_SYNONYMS into table with FPS only filter. Distinct is used although rarely needed because it is possible to have duplicate synonyms with different values for origin_con_id in a container database. |
|
FPSADMIN.DBF_REF_TABLES |
Refresh SYS view DBA_TABLES into table with FPS only filter |
|
FPSADMIN.DBF_REF_TABLESPACES |
Refresh SYS view DBA_TABLESPACES into table with FPS only filter |
|
FPSADMIN.DBF_REF_TAB_COLUMNS |
Refresh SYS view DBA_TAB_COLUMNS into table with FPS only filter. We use this view rather than DBA_TAB_COLS because it does not include hidden columns. The three OVR columns are used by GrimRepo to override the real values for data_type, data_length, and column_id so that when we compare the snapshot branch to any other branch it appears to have these properties when in reality it does not. The reason why we need such overrides for data_type and data_length is because it is nearly impossible to shrink columns in large history tables and we changed IS_NONSTD so long after creation that we could not actually make the alter. The reason for column_id is because Oracle only allows new columns to be added to the end of the table and sites often add columns in different orders. We only do this when necessary since it is normally better to have GrimRepo match the real order. For the tables where we have differences we manually choose how many columns to lock (normally the original columns) then we sort by PK columns first then original columns to lock in their original order then alphabetically. |
|
FPSADMIN.DBF_REF_TAB_COMMENTS |
Refresh SYS view DBA_TAB_COMMENTS into table with FPS only filter |
|
FPSADMIN.DBF_REF_TAB_PARTITIONS |
Refresh SYS view DBA_TAB_PARTITIONS into table with FPS only filter including some absolutely crazy logic to convert the high_value column from a LONG into a DATE. |
|
FPSADMIN.DBF_REF_TRIGGERS |
Refresh SYS view DBA_TRIGGERS into table with FPS only filter |
|
FPSADMIN.DBF_REF_TYPES |
Refresh SYS view DBA_TYPES into table with FPS only filter |
|
FPSADMIN.DBF_REF_TYPE_ATTRS |
Refresh SYS view DBA_TYPE_ATTRS into table with FPS only filter |
|
FPSADMIN.DBF_REF_VIEWS |
Refresh SYS view DBA_VIEWS into table with FPS only filter |
|
FPSADMIN.DBF_SCHEMA_SCOPE |
This is the base query to define the scope of GrimRepo, our database schema scraper. All objects in this view except CHGLOG have their DDL (structure) recorded by GrimRepo with the folder determined by the DDL_SCOPE column. Our DDL_SCOPE values (and therefore our GrimRepo folders except CHGLOG) are CHGLOG, GLOBAL, INSTANCE, RPT, and SITE. For tables, the DML_SCOPE determines whether it will have its DML (data) recorded by GrimRepo:
- CFG, GLOBAL, and MANUAL tables are always recorded.
- CHGLOG, DWH, ETL, FPSRPT, and OFF are never recorded.
- WRITE_BY_APP ("written and maintained by application code") is determined by a flag in GEN_PREFIXES.
- ETL_W_MANUAL_COLS ("table records determined by ETL with at least one column updated manually") are currently not recorded but this could change in the future. |
|
FPSADMIN.DBF_TABLES_PLUS |
This view adds information from CFG_TABLES to DBF_TABLES. Having this as a separate view rather than refreshed into DBF_TABLES means that this view will immediately reflect changes from CFG_TABLES without having to refresh DBF_TABLES. |
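A minimal sketch of the pattern, assuming the join key is table_name; the load_type column is hypothetical, standing in for whatever configuration CFG_TABLES carries:

```sql
-- Sketch only: the join key and the CFG_TABLES column are assumptions,
-- not the real DDL of DBF_TABLES_PLUS
create or replace view dbf_tables_plus_sketch as
select t.*,
       c.load_type  -- hypothetical CFG_TABLES column
  from dbf_tables t
  left join cfg_tables c
    on c.table_name = t.table_name;
```

Because the join happens at query time, an edit to CFG_TABLES is visible immediately, with no need to re-refresh DBF_TABLES.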
|
FPSADMIN.DOC_HOW_TO_CHANGE_PRCS_OPER |
It would be nice if we could handle a change in the definition of process more gracefully, but because we use it as the critical field for our throughput data it is a bit complicated. We documented how to do it as gracefully as possible on our FPS Help Site at: http://help.finalphasesystems.com/display/DW/How+To+Change+Definition+of+Process Changing the definition of operation is not nearly as messy as changing process, but we wrote some queries to help adjust the oper moves in the history tables if you wish to make adjustments. These queries are also on our Help Site at: http://help.finalphasesystems.com/display/DW/How+to+Change+Definition+of+Operation |
|
FPSADMIN.DOC_HOW_TO_POPULATE_MANUAL |
The topic of how to populate data which is not stored in the MES is one of the most tricky topics of our DWH. Many of the columns in EQP_TYPES are excellent examples of this but we will use exp_cascade_qty which is an important column because it is a huge factor in calculating UPH on cascading tools. EQP_TYPES is definitely an automatically populated table but exp_cascade_qty is probably not stored in the MES. So how do we set the value of exp_cascade_qty manually? We have three options: 1) Hardcode the logic in the ETL (either APF report or REF view). In other words, just specify that exp_cascade_qty is "if eqp_type like 'IMP%' then 6 else if eqp_type like 'LITHO%' then 4" or something like this. The advantage of this method is that the value will be sure to update when we copy the ETL logic between development and production. The disadvantage is that it is rather awkward to update the value because you have to modify the ETL. 2) Configure the column to be set to manual within an automatically populated table using ADM_LOAD_CFG_MANUAL_COLS. This means the ETL will not update this column for existing records even when other columns change. The advantage is that it is easy to modify directly in the database. The disadvantage is that production and development can have different values since it is set manually. 3) Add a column or populate a user-defined column in the MES. This is the best way to store necessary information if possible. The disadvantages are that it might be impossible to do this and that it might require more change control than desired to modify the value in the future. |
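Option 1 above can be sketched as a hardcoded expression inside the ETL logic; the eqp_type patterns and quantities are just the illustrative values from the text, and the default of 1 is an assumption:

```sql
-- Sketch of option 1: hardcode exp_cascade_qty in the REF view / APF report
select eqp_type,
       case
         when eqp_type like 'IMP%'   then 6
         when eqp_type like 'LITHO%' then 4
         else 1  -- assumed default for non-cascading tools
       end as exp_cascade_qty
  from eqp_types;
```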
|
FPSADMIN.DOC_HOW_TO_QUERY_DBMS_OUTPUT |
|
|
FPSADMIN.DOC_HOW_TO_UNCOMPRESS |
If the table is compressed we get ORA-39726 when we attempt to drop a column. Just uncompress the table and then it works fine. |
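A minimal sketch of the workaround (table, column, and index names are placeholders); note that ALTER TABLE ... MOVE leaves the table's indexes unusable, so rebuild them afterwards:

```sql
-- Dropping a column from a compressed table raises ORA-39726
alter table fpsinput.some_table move nocompress;     -- uncompress first
alter table fpsinput.some_table drop column old_col; -- now this works
-- rebuild any indexes invalidated by the MOVE
alter index fpsinput.some_table_ix rebuild;
```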
|
FPSADMIN.DOC_MODIFY_CAL_TABLES |
Updating the CAL tables for times in the past is one of our least favorite things to do therefore we try extremely hard to get the CAL tables correct the first time. But sometimes it is necessary. Each update is different but this is a good example of how to do it. This script will update the start of various work months. In this example, the quarters and years do not change but we changed from 5-4-4 to 4-4-5 format.
--Run this first, it should return no rows meaning the current calendars have integrity
select * from chk_calendars;
--Disable unique constraints
exec adm_disable_cons('FPSINPUT.CAL_WORK_MONTHS_U');
exec adm_disable_cons('FPSINPUT.CAL_WORK_MONTHS_DISPLAY_U');
--Insert "new" months (i.e. you will have a second record for March 2021 with a different start date)
insert into cal_work_months (start_work_month, work_month, end_work_month, start_work_quarter, work_month_display) select distinct start_work_month, work_month, end_work_month, start_work_quarter, work_month_display from load_cal_shifts where work_year >= 2021 and start_work_month not in (select start_work_month from cal_work_months);
--Update months to "new" months
merge into fpsinput.cal_work_weeks a using (select distinct start_week, start_work_month from load_cal_shifts where work_year >= 2021) b on (a.start_week = b.start_week) when matched then update set a.start_work_month = b.start_work_month where a.start_work_month != b.start_work_month;
--Update end_work_month of previous months
merge into fpsinput.cal_work_months a using (select distinct start_work_month, end_work_month from load_cal_shifts where work_year >= 2021) b on (a.start_work_month = b.start_work_month) when matched then update set a.end_work_month = b.end_work_month where a.end_work_month != b.end_work_month;
--Update corresponding_start_work_month
merge into cal_plan_months a using ( select * from ( select distinct start_plan_month, corresponding_start_work_month, first_value(start_work_month) over (partition by start_plan_month order by abs(start_work_month - start_plan_month)) as new_cswm from cal_plan_months cross join cal_work_months ) where corresponding_start_work_month != new_cswm ) b on (a.start_plan_month = b.start_plan_month) when matched then update set a.corresponding_start_work_month = b.new_cswm;
--Delete all records from PERF table
delete from fpsapp.perf_p_h_comp_wip_month where start_month >= (select min(start_work_month) from cal_work_months where start_work_month >= to_date('2021', 'YYYY') and start_work_month not in (select start_work_month from load_cal_shifts));
--This one is slow, took almost 10 minutes on D
merge into fpsapp.perf_p_h_otd_shift a using (select * from load_cal_shifts where work_year >= 2021) b on (a.start_shift = b.start_shift) when matched then update set a.start_month = b.start_work_month, a.month = b.work_month where a.start_month != b.start_work_month;
--Finally delete the "old" months
delete from cal_work_months where start_work_month >= to_date('01/01/2021', 'MM/DD/YYYY') and start_work_month not in (select distinct start_work_month from load_cal_shifts where work_year >= 2021);
--Enable the constraints
exec adm_enable_cons('FPSINPUT.CAL_WORK_MONTHS_U');
exec adm_enable_cons('FPSINPUT.CAL_WORK_MONTHS_DISPLAY_U');
--In case you deleted any months then reload them
exec adm_load_object('PERF_P_H_COMP_WIP_MONTH');
--Run this last, it should return no rows meaning the new calendars have integrity
select * from chk_calendars; |
|
FPSADMIN.DOC_PARTITION_INSTRUCTIONS |
Instructions and information on partitioned tables is on the Help Site at: https://help.inficonims.com/display/DW/Partitioned+tables |
|
FPSADMIN.DOC_WHY_WE_DO_NOT_USE_KEY |
Many databases use a key value rather than the real values for many columns which enables easy renaming. If we used a key value for tool, for example, then the column tool with the tool name would only be stored in EQP_TOOLS along with tool_key and then all other tables like WIP_EVENT_HIST and EQP_EVENT_HIST and EQP_STATUS would use tool_key. If the tool name changed, we would simply update the value in the tool column in EQP_TOOLS and that would do it. We chose not to do this for two main reasons: 1) When a tool name changes it may signify a change to the tool that could impact its speed, yield, etc. so tracking its data separately can be beneficial. If the tool moves and that's it, it could still be impacted in some way such as running less lots, running more lots of different routes, etc. It's useful to easily see this and not have it be hidden within a key value. 2) It's easier to follow what's going on and thus write queries and create reports when using actual object names instead of keys. The DWH is already complicated enough and we didn't want to add this layer of complexity to its usage. For example, you cannot just do select * from wip_event_hist if you use key values for important columns like tool and this is really frustrating. So while we made the easy choice to use real values, it's worth discussing how a tool name change affects throughput tracking. If tool A is renamed to B and the EQP_TYPE remains the same then they will show together in reports. It's useful to see the switch and then if someone really wants to backfill B with A's data it can be done, but I wouldn't want to just show B with all of the data without it being clear that B was previously A. If B changes EQP_TYPE then it is a different tool and A's data does not apply. Having the history by key would be nice since you could see the rename in the key's history, but in our case you will still see the change in the CHGLOG table even though it may show up as a delete of A and insert of B. 
Using a key, for each report you have to decide whether to show the name of the tool at the time the data was recorded (which would show like above) or to show the current name of the tool for all of the data (which means you have to know that all name changes will not signify an event that impacts the data for the tool). |
|
FPSADMIN.GEN_COLORS_PLUS |
Fun with colors |
|
FPSADMIN.RECENT_DATA_DATE_HIST |
Detailed history on data_date recorded for the GEN_DATA_DATE object |
|
FPSADMIN.RECENT_HIST_EVENTS_BY_HOUR |
This is a nice query to track if the key FPSINPUT history tables are loading properly. |
|
FPSADMIN.RECENT_INACTIVE_HIST |
This is a quick shortcut to view the recent activity regarding the RTG_ROUTE_STEPS_INACTIVE table. These records are always logged to ADM_SPEED_HIST regardless of the setting of any CC flag. |
|
FPSADMIN.RECENT_LOAD_ERRORS |
This view shows all errors, failures, warnings from ADM_LOAD_HIST for the last day, adds a duration column, and sorts descending. |
|
FPSADMIN.RECENT_LOAD_HIST |
This view simply shows ADM_LOAD_HIST for the last 12 hours, adds a duration column, and sorts descending. There are identical views for the last 3 days and last 7 days. |
|
FPSADMIN.RECENT_LOAD_HIST_3D |
This view simply shows ADM_LOAD_HIST for the last 3 days, adds a duration column, and sorts descending. There are identical views for the last 12 hours and last 7 days. |
|
FPSADMIN.RECENT_LOAD_HIST_7D |
This view simply shows ADM_LOAD_HIST for the last 7 days, adds a duration column, and sorts descending. There are identical views for the last 12 hours and last 3 days. |
|
FPSADMIN.RECENT_LOAD_SINCE_DDL |
This view shows the recent load history for a table since the table was modified or since the view/procedure that loads it was modified. It uses RECENT_LOAD_HIST so it only goes back 3 days and therefore only includes tables which have been modified, or had their load view/procedure modified, in the last three days. |
|
FPSADMIN.RECENT_REALTIME_BY_HOUR |
Useful query to calculate the job metrics by hour including the average seconds per insert for recent WEH loads. This measures the performance of the RealTime in general and the WIP_EVENT_HIST triggers specifically. Because we use C on WIP_EVENT_HIST, this should work for both DB and APF ETL clients. Run this query and then copy to Excel and choose Charts -> Scatter -> Marked Scatter for the first five columns to generate a graph of recent refresh intervals for the critical data loads. You might want to extend the query window beyond 2 days for the graph. |
|
FPSADMIN.RECENT_REALTIME_HIST |
Simple view to quickly get recent load history for the Realtime job only. |
|
FPSADMIN.RECENT_REALTIME_SPEED |
Summarize speed from recent WEH, WWH, and EEH inserts using the delta on inserted_time. The view basically accounts for every second in the last full hour. |
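The inserted_time delta technique the view is based on can be sketched like this, assuming inserted_time is a DATE column on WIP_EVENT_HIST:

```sql
-- Gap in seconds between consecutive inserts over the last hour (sketch only)
select inserted_time,
       round((lead(inserted_time) over (order by inserted_time)
              - inserted_time) * 86400, 1) as gap_sec
  from wip_event_hist
 where inserted_time > sysdate - 1/24;
```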
|
FPSADMIN.RECENT_SLOWEST_INST |
Lists all jobs which were in progress at the time when the 3min load was the slowest. This uses RECENT_WEH_LOAD_HIST to determine this slowest_inst so as written this view applies to APF ETL clients. However it is easy to simply replace slowest_inst with a specific time that you want to investigate. |
|
FPSADMIN.RECENT_SPEED_BY_INSERTED_TIME |
Summarize speed data using deltas of inserted_time from events inserted into our three FPSINPUT history tables. This view omits ADM_SPEED_TEST_HIST which provides more details but also divides events in a way that we do not always want. If you want to include ADM_SPEED_TEST_HIST then use RECENT_SPEED_TESTS view which is similar to this one. |
|
FPSADMIN.RECENT_SPEED_HIST |
Simple view to quickly get data from recent speed tests. For summarization and analysis of speed tests on the 3min/Realtime job please use RECENT_SPEED_TESTS which combines data from recent speed tests with inserted_time from three FPSINPUT history tables. Here is a query for summarizing speed tests specifically on the critical UPDATE_WIP_STEP_FUTURE procedure which is not quite useful enough to merit its own view:
select round_ten_min, speed_group, count(*) as num_total,
count(case when seconds > 0.5 and seconds < 1 then 1 end) as num_half_sec,
count(case when seconds >= 1 and seconds < 2 then 1 end) as num_one_sec,
count(case when seconds > 2 then 1 end) as num_two_plus_sec,
round(count(case when seconds > .5 then 1 end) / count(*) * 100, 1) as pct_over_half_sec,
sum(seconds) as total_sec,
round(sum(case when seconds > .5 then seconds end) / sum(seconds) * 100, 1) as pct_of_total_sec_from_slow,
round(sum(seconds) / count(*) * 1000) as avg_millisec,
round(sum(case when seconds <= .5 then seconds end) / count(case when seconds <= .5 then 1 end) * 1000) as avg_millisec_under_half,
round(sum(case when seconds > .5 then seconds end) / count(case when seconds > .5 then 1 end) * 1000) as avg_millisec_over_half
from ( select trunc((cast(ts as date) - trunc(sysdate)) * 144) / 144 + trunc(sysdate) as round_ten_min, h.* from recent_speed_hist h where speed_class = 'UPDATE_WIP_STEP_FUTURE' and ts > trunc(sysdate, 'HH') - 1/24 )
group by round_ten_min, speed_group
order by speed_group desc, round_ten_min desc; |
|
FPSADMIN.RECENT_SPEED_TESTS |
Summarize data from recent speed tests logged in ADM_SPEED_HIST. Please note this view takes about a minute to run for the last 12 hours if we have been logging speed test history for WIP_EVENT_HIST -- which by the way you can turn on/off in GEN_SITE. |
|
FPSADMIN.RECENT_TREND_BY_EVENT |
Useful query to compare times of recent WIP_EVENT_HIST, EQP_EVENT_HIST, and WIP_WAFER_HIST events over time along with other objects in the same load. This query can be easily modified to compare two periods or even two databases through a link. You can also modify to group by event if desired. |
|
FPSADMIN.RECENT_TREND_BY_OBJECT |
Useful query to plot trends of each object in the 3min job over a period of time. We suggest that you copy paste this directly into Excel and make a PivotTable with object_name on the left side and round_time along the top. For a longer time period, remove the HH from the trunc to group by day instead of hour. |
|
FPSADMIN.RECENT_TREND_SPECIFIED_OBJECTS |
Useful query to plot trends of specific objects over a period of time. Use it like this:

select * from recent_trend_specified_objects where object_name = 'WIP_CURR_LOTS_TOOLS';
select * from recent_trend_specified_objects where object_name in ('WIP_LOTS_STATIC','WIP_LOTS_VERIFY');

We suggest that you copy and paste this directly into Excel and make a PivotTable with object_name on the left side and round_time along the top. For a longer time period, remove the 'HH' from the trunc to group by day instead of hour. |
|
FPSADMIN.RECENT_VARIATION_BY_OBJECT |
Useful query to rank the objects whose load time is highly variable. For now this is just a saved query to run when interested. Because this uses the same base query as SYSMNTR_OOC_AND_REPEATED, perhaps we might add some automatic alerts within that view if we gain confidence in these results. |
|
FPSADMIN.RECENT_WEH_HIST |
This view is for DB ETL sites only, which use ADM_LOAD_EVENT_HIST_VIA_APD. It queries the WEH+EEH_WWH record from ADM_LOAD_HIST and records basic information plus two extra calculated columns: seconds behind at start and seconds behind at complete. |
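The two derived columns are simple date arithmetic. A hypothetical sketch of the idea only -- the ADM_LOAD_HIST column names below are assumptions, not the view's actual source:

```sql
-- Oracle DATE subtraction yields days; multiply by 86400 to get seconds.
select h.*,
       round((start_time    - last_event_time) * 86400) as sec_behind_at_start,
       round((complete_time - last_event_time) * 86400) as sec_behind_at_complete
from adm_load_hist h
where object_name = 'WEH+EEH_WWH';
```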
|
FPSADMIN.RECENT_WEH_LOAD_HIST |
Lists all information about each recent WIP_EVENT_HIST load, including the average seconds per insert. This is useful on its own and is the base for other RECENT_xxx views. As written it only applies to APF ETL clients, although it could easily be modified to work for DB clients. |
|
FPSADMIN.STARTUP_COLUMN_WORKSHEET |
This view is our worksheet to use for the initial startup of the DWH at a new site. It includes only tables which are required for startup, and only columns in those tables which are not null or which are specifically required for startup even though nullable. The output of this view should be pasted into Excel, where we populate the GEN_EXPLANATIONS columns and then write them back to the DWH when done. |
|
FPSADMIN.STARTUP_COPY_NONHIST_DATA |
DWH_build.sh populates data in all tables where DML_scope is GLOBAL, SITE, or CFG. This view builds a script to copy all data not already populated by DWH_build.sh which is not history or status data, to prepare to start loading data. Typically this is used on the second database at a startup site, after the first database (which has limited history) is proven to be working. We use DWH_build.sh to create the second database, use this view to copy the data, and then start the load back quite a long time, like three months. I considered making this a procedure, but I really want to watch this step by step, so I would rather have it be a view that creates a script to run manually. |
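The script-generation pattern here is plain string building in SQL. A minimal sketch under assumed names -- GEN_TABLES and its columns are illustrative, and the real view applies more filters:

```sql
-- Emit one INSERT-over-a-link statement per qualifying table.
select 'insert into ' || table_name ||
       ' select * from ' || table_name || '@yourdblink;' as copy_stmt
from gen_tables
where dml_scope not in ('GLOBAL', 'SITE', 'CFG')  -- already handled by DWH_build.sh
  and table_name not like '%\_HIST' escape '\'    -- skip history data
order by table_name;
```

The output is then reviewed and run statement by statement, which matches the stated preference for a view over a procedure.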
|
FPSADMIN.STARTUP_CREATE_DBMS_JOBS |
When we start up a new facility using database links for ETL, we must create the DBMS_SCHEDULER jobs that call ADM_LOAD_DWH to load the data. These jobs should be reasonably standard at most link-based sites, so this view stores a template to use for startup at any site. Please review the script, particularly the hour to run the StWk job, to ensure that you schedule them at the correct time for your site. It is important to note that the repeat_interval is based on the start time of the previous run. Therefore the next scheduled execution time might arrive while the job is still running. If so, the new instance of the job will wait to start until the current one completes. There is no easy way to schedule based on the end time of the previous run. We use the FREQ syntax rather than the SYSTIMESTAMP syntax for repeat_interval because we have observed some strange behavior with the SYSTIMESTAMP syntax where it shows the next_run_time in the past. The FREQ syntax gives the same behavior and seems a bit safer. For some good examples of using the repeat_interval field creatively, see this link: https://docs.oracle.com/cd/E11882_01/server.112/e25494/scheduse.htm#ADMIN10040

BEGIN
  SYS.DBMS_SCHEDULER.CREATE_JOB (
    job_name        => 'FPSADMIN.ADM_LOCKED_HIST_ONE_MIN'
   ,repeat_interval => 'FREQ=MINUTELY;INTERVAL=1'
   ,end_date        => NULL
   ,job_class       => 'DEFAULT_JOB_CLASS'
   ,job_type        => 'PLSQL_BLOCK'
   ,job_action      => 'ADM_LOCKED_HIST_INSERT;'
   ,enabled         => true
   ,comments        => 'This records any rows from ADM_LOCKED_STATUS into ADM_LOCKED_HIST every minute.'
  );
END;
/

BEGIN
  SYS.DBMS_SCHEDULER.CREATE_JOB (
    job_name        => 'FPSADMIN.DWH_ETL_REALTIME'
   ,repeat_interval => 'FREQ=SECONDLY;INTERVAL=59'
   ,end_date        => NULL
   ,job_class       => 'DEFAULT_JOB_CLASS'
   ,job_type        => 'PLSQL_BLOCK'
   ,job_action      => 'ADM_LOAD_DWH(''RealTime'');'
   ,enabled         => false
   ,comments        => 'This is the real time job which just runs repeatedly. The default here is to run every 59 seconds (and this means to start every 59 seconds) but we might want to set it more frequent in production.'
  );
END;
/

BEGIN
  SYS.DBMS_SCHEDULER.CREATE_JOB (
    job_name        => 'FPSADMIN.DWH_ETL_5MIN'
   ,repeat_interval => 'FREQ=MINUTELY;INTERVAL=5'
   ,end_date        => NULL
   ,job_class       => 'DEFAULT_JOB_CLASS'
   ,job_type        => 'PLSQL_BLOCK'
   ,job_action      => 'ADM_LOAD_DWH(''5min'');'
   ,enabled         => false
   ,comments        => 'This is the 5 minute job which usually contains WIP_LOTS_STATIC and VERIFY which we want to refresh frequently but not in parallel with the RealTime job.'
  );
END;
/

BEGIN
  SYS.DBMS_SCHEDULER.CREATE_JOB (
    job_name        => 'FPSADMIN.DWH_ETL_10MI'
   ,repeat_interval => 'FREQ=MINUTELY;INTERVAL=11'
   ,end_date        => NULL
   ,job_class       => 'DEFAULT_JOB_CLASS'
   ,job_type        => 'PLSQL_BLOCK'
   ,job_action      => 'ADM_LOAD_DWH(''10mi'');'
   ,enabled         => false
   ,comments        => 'This is the 10 minute job which usually contains DASH tables like GANTT and LINE_HOLDS which take a bit longer to refresh.'
  );
END;
/

BEGIN
  SYS.DBMS_SCHEDULER.CREATE_JOB (
    job_name        => 'FPSADMIN.DWH_ETL_30MI'
   ,repeat_interval => 'FREQ=MINUTELY;INTERVAL=31'
   ,end_date        => NULL
   ,job_class       => 'DEFAULT_JOB_CLASS'
   ,job_type        => 'PLSQL_BLOCK'
   ,job_action      => 'ADM_LOAD_DWH(''30mi'');'
   ,enabled         => false
   ,comments        => 'This is the 30 minute job which usually contains RTG and EQP updates.'
  );
END;
/

BEGIN
  SYS.DBMS_SCHEDULER.CREATE_JOB (
    job_name        => 'FPSADMIN.DWH_ETL_EASH'
   ,repeat_interval => 'FREQ=DAILY; BYHOUR=1'
   ,end_date        => NULL
   ,job_class       => 'DEFAULT_JOB_CLASS'
   ,job_type        => 'PLSQL_BLOCK'
   ,job_action      => 'ADM_LOAD_DWH(''EaSh'');'
   ,enabled         => false
   ,comments        => 'This script is called EaSh for each shift but we generally run daily starting at 01:00 since it is really slow due mainly to CTM_SUMMARY.'
  );
END;
/ |
|
FPSADMIN.STARTUP_SET_CTR_BANKS |
This is a helpful query to update the ctr_bankN columns in GEN_FACILITIES to the top 5 banks by WIP. If nothing else, this will allow you to enable the foreign keys. You could also run this after startup to get sample queries to update these fields. |
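Ranking banks by WIP and pivoting the top five into an UPDATE is the core trick. A hypothetical sketch only -- every name below other than GEN_FACILITIES is an assumption, not the view's actual source:

```sql
-- Rank banks by total WIP within each facility, then pivot ranks 1-5
-- into the five ctr_bankN columns of GEN_FACILITIES.
update gen_facilities f
set (ctr_bank1, ctr_bank2, ctr_bank3, ctr_bank4, ctr_bank5) = (
  select max(decode(rnk, 1, bank)), max(decode(rnk, 2, bank)),
         max(decode(rnk, 3, bank)), max(decode(rnk, 4, bank)),
         max(decode(rnk, 5, bank))
  from (
    select facility, bank,
           row_number() over (partition by facility
                              order by sum(qty) desc) as rnk
    from wip_lots            -- assumed WIP source table
    group by facility, bank
  ) w
  where w.facility = f.facility
    and w.rnk <= 5
);
```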
|
FPSADMIN.STARTUP_TABLE_WORKSHEET |
This view is our second worksheet to use for initial startup of the DWH at a new site. This one has one line for each table so it is a better overview than BLD_STARTUP_WORKSHEET. It includes only tables which are required for startup. The output of this view should be pasted into Excel. |
|
FPSADMIN.SYSMNTR_DDL_CHANGES |
This view summarizes objects whose DDL changed in the last 24 hours by owner and object_type. This is basically a GrimRepo summary. |
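The underlying check can be done straight from the data dictionary. A minimal sketch of the idea (the real view may filter schemas differently):

```sql
-- Count objects whose DDL changed in the last 24 hours, by owner and type.
select owner, object_type, count(*) as num_changed
from dba_objects
where last_ddl_time > sysdate - 1
group by owner, object_type
order by owner, object_type;
```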
|
FPSADMIN.SYSMNTR_DELAYS |
This view looks for jobs and tables where the load is delayed longer than a configured limit. |
|
FPSADMIN.SYSMNTR_DISABLED |
This view looks for disabled objects in all FPS schemas and loads. |
|
FPSADMIN.SYSMNTR_ERRORS |
This view looks for invalid objects in all FPS schemas as well as errors in ADM_LOAD. We must use dbf_xxx views here because dbf_objects does not include triggers and synonyms in other schemas, and we need to check whether those are invalid. Then, because we must use dbf_xxx views and Oracle does not allow granting select on views that use dbf_xxx, yet we need to grant permission on CHK_INVALID to other users, our solution is to refresh a table with this data, which we can grant. |
|
FPSADMIN.SYSMNTR_HEALTH |
We want CHK_ALL to show us everything we need to know about the DWH in one query. The other SYSMNTR views show us if anything is wrong, but we also want to check on the load intervals and trends, so this view adds some important information:
- Always return one row listing the DWH version.
- Always return one row summarizing the performance of the job including WIP_EVENT_HIST, with average last hour, max last hour, and average last day.
- Always return one row summarizing the performance of the job including MHS_CARRIERS. |
|
FPSADMIN.SYSMNTR_MISS_EVENTS |
This view summarizes the count of DIFF and MISS events in the last 24 hours for both WIP and EQP, which indicate less-than-perfect ETL. |
|
FPSADMIN.SYSMNTR_OOC_AND_REPEATED |
This view has two parts which are in the same view because they use the exact same data, so it is faster to run them together.

PART 1: OUT OF CONTROL FROM YESTERDAY
This checks for extreme shifts in the number of updates/deletes/inserts made by each load object, as well as extreme shifts in the duration each load object takes to run. This view will highlight problems with loads that run without error but which have changed significantly. Obviously this change might be expected after a DWH change, but if not, then something is probably wrong despite no actual errors. One example of this was MHS_CARRIERS, which was updating about 50 rows per load. Then we made a change to the trigger and we immediately jumped to updating 2000 rows per load. This jump was not expected and indicated a problem with how the change was implemented. This problem did not cause the trigger to error, so we would not have found it without individual QA except for this view. Another likely scenario would be an object that consistently updates 50 rows per load and then starts updating 0 rows. Even though there are no errors, this means that the load is likely not working. Of course it might also mean a shutdown or a company meeting, but we should know about this and be able to ignore this warning when appropriate. This view also checks the duration of the load. We had an issue with ADM_FIX_WIP_STEP_FUTURE where it was consistently taking around 5 seconds per run and then, after a change, it jumped to 90 seconds per run. This was pure "sailboat racing" where Oracle just parsed the new query in a way that was not optimal, but the procedure still worked without errors, so only this view will find this change automatically.

PART 2: REPEATED ATTEMPTS
This checks for loads that attempted the same number of updates or deletes repeatedly. Sometimes these are just random. For example, a table might update 2 rows on three consecutive runs and make it into this view. But this often indicates that the queries are not successful. For example, a table might show that it deleted exactly 16 rows on every run for a long time. This means that it attempted the same 16 deletes each time and they failed to delete. Note there is no error; it just "successfully" deletes 0 rows. Also, this failure might occasionally delete more, like 16-16-16-25-16-16-30-16: the same 16 rows failed, but 9 other rows successfully deleted on the 4th run and 14 on the 7th run. If you look at the ADM_LOAD_HIST table, you can see the number of rows updated/deleted on each run, and it is usually clear whether it is random or a failure. Note that failed inserts always result in an error, so we do not need to check repeated inserts.

Previewing the APF report on the Preview Here block usually shows the cause of the failure, which is usually either:
1) Foreign characters or other illegal characters that APF interprets differently than SQL, so the where clause doesn't match. The solution is to use the German_chars or Bad_chars macro to translate.
2) The where clause in the APF update_query or delete_query block does not match the primary/unique key of the table. This happens when we make table changes and forget to change the report -- or in the case where we have a unique key which could be null instead of a primary key. The solution here is to simply fix the query in the report. Use the BLD_TABLE_QUERIES view unless the table has a UK instead of a PK. |
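The "repeated attempts" detection can be sketched with analytic functions over the load history -- a standard gaps-and-islands pattern. This is a hypothetical version; the ADM_LOAD_HIST column names are assumptions:

```sql
-- Flag objects that reported the identical non-zero delete count on
-- several consecutive runs -- often the same rows failing each time.
select object_name, rows_deleted, count(*) as consecutive_runs
from (
  select object_name, rows_deleted, start_time,
         -- Consecutive runs with the same count share this group key.
         row_number() over (partition by object_name order by start_time)
         - row_number() over (partition by object_name, rows_deleted
                              order by start_time) as grp
  from adm_load_hist
  where start_time > sysdate - 1
    and rows_deleted > 0
)
group by object_name, rows_deleted, grp
having count(*) >= 3
order by consecutive_runs desc;
```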
|
FPSADMIN.SYSMNTR_SESSIONS |
View to check for problem sessions using the logic in the ADM_SESSIONS view. |
|
FPSADMIN.SYSMNTR_SLOW |
This view highlights objects which are taking too long to load. We consider an object to be slow if both the average and the last run took longer than the warning limit. If the current run is in progress, then we use only the average. If we do not have an average (no successful completion in the last three days), then we use only the last/current run. |
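The decision rule above reduces to a CASE expression. A minimal sketch under assumed names -- slow_candidates and the avg_sec/last_sec/in_progress/warn_limit columns are illustrative, not the view's actual source:

```sql
select object_name,
       case
         -- current run still in progress: judge on the average alone
         when in_progress = 'Y' and avg_sec > warn_limit then 'SLOW'
         -- no average available (no completion in 3 days): use last run only
         when avg_sec is null and last_sec > warn_limit then 'SLOW'
         -- normal case: both the average and the last run must exceed the limit
         when avg_sec > warn_limit and last_sec > warn_limit then 'SLOW'
         else 'OK'
       end as slow_status
from slow_candidates;
```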
|
FPSADMIN.SYSMNTR_TABLESPACES |
|
|
FPSADMIN.WIP_TESTS_DIAG_SUMMARY |
This view gets all of the data to compare shift completes between this database and the yourdblink database link. The desired behavior is for all six records of each shift to have the same qty_comp. If any of the rows do not match, then we have a failed match. If WSH differs from WESH within the same database, use the WIP_TESTS_DIAG_WSH_VS_WESH view to get the details. If WSH and WESH match but this database differs from the link, use WIP_TESTS_DIAG_WSH_VS_LINK to get the details. |
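The six-way match can be checked by counting distinct totals per shift. A hypothetical sketch of that check (the grouping column names are assumptions):

```sql
-- A shift passes only if all six source rows agree on qty_comp.
select shift_date, shift_name,
       count(distinct qty_comp) as distinct_totals,
       case when count(distinct qty_comp) = 1 then 'MATCH'
            else 'FAILED' end as result
from wip_tests_diag_summary
group by shift_date, shift_name
order by shift_date desc;
```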
|
FPSADMIN.WIP_TESTS_DIAG_WSH_VS_LINK |
This view builds a query to compare WIP_STEP_HIST for recent shifts between this database and the link. It is used to support the WIP_TESTS_DIAG_SUMMARY view when the match fails. |
|
FPSADMIN.WIP_TESTS_DIAG_WSH_VS_WESH |
This view builds a query to compare WIP_STEP_HIST to WIP_END_SHIFT_HIST for recent shifts. It is used to support the WIP_TESTS package when the test fails. |