Data Dictionary

FPSBASE Views

FPSBASE.BLD_EQP_CLUSTER_DIAG

This view allows us to do research on the tool_pct returned by our cluster tool calculations. This view should almost always be queried with filters on multiple num_xxx columns. For example, to see examples where the cluster tool had at least one chamber processing and at least one chamber unscheduled and no chambers in engineering then you would do this query:

select * from fpsbase.bld_eqp_cluster_diag where num_prd > 0 and num_udt > 0 and num_eng = 0;

FPSBASE.CAL_MONTHS_LAST_N

Get last n months where n=0 is the current month, n=1 is last month, etc. The DISTINCT is an attempt to force the database to run this view first during execution.
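As a quick usage sketch (the n column name here is an assumption based on the description above, not confirmed by this dictionary):

```sql
-- Hypothetical example: fetch the current month plus the previous two.
-- Assumes the view exposes a column named N as described above.
select *
  from fpsbase.cal_months_last_n
 where n <= 2
 order by n;
```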

FPSBASE.CAL_PLAN_DAYS_PLUS

Combine our plan_days table with weeks/months/quarters/years for a useful view.

FPSBASE.CAL_PLAN_MONTHS_PLUS

Combine our months/quarters/years tables into a useful view.

FPSBASE.CAL_QUARTERS_LAST_N

Get last n quarters where n=0 is the current quarter, n=1 is last quarter, etc. The DISTINCT is an attempt to force the database to run this view first during execution.

FPSBASE.CAL_SHIFTS_LAST_N

Get last n shifts where n=0 is the current shift, n=1 is last shift, etc. The DISTINCT is an attempt to force the database to run this view first during execution. Using (END_SHIFT - START_SHIFT) allows this view to work for any length shift.

FPSBASE.CAL_SHIFTS_NEXT_N

Get next n shifts where n=0 is the current shift, n=1 is next shift, etc. The DISTINCT is an attempt to force the database to run this view first during execution. Using (END_SHIFT - START_SHIFT) allows this view to work for any length shift.

FPSBASE.CAL_SHIFTS_PLUS

Combine our shifts/days/weeks/months/quarters/years tables into a useful view.

FPSBASE.CAL_SHIFT_CURR

Get one line of information for the current shift. All of the max functions are unnecessary but they tell the compiler that there is only one row, which makes queries using this view faster.

FPSBASE.CAL_SHIFT_CURR_AND_LAST

Get two lines of information for the current and last shifts. The DISTINCT is an attempt to force the database to run this view first during execution. Using (END_SHIFT - START_SHIFT) allows this view to work for any length shift.

FPSBASE.CAL_SHIFT_LAST

Get one line of information for the last shift. All of the max functions are unnecessary but they tell the compiler that there is only one row, which makes queries using this view infinitely faster. Using (END_SHIFT - START_SHIFT) allows this view to work for any length shift.

FPSBASE.CAL_WEEK_CURR_AND_LAST

Get two lines of information for the current and last weeks. The DISTINCT is an attempt to force the database to run this view first during execution. This logic works for any length of week.

FPSBASE.CAL_WEEK_LAST

Get one line of information for the last week. All of the max functions are unnecessary but they tell the compiler that there is only one row, which makes queries using this view infinitely faster. Using (end_week - start_week) allows this view to work for any length week, although if a customer has weeks of lengths other than 7 days that would be rather interesting.

FPSBASE.CAL_WORK_DAYS_PLUS

Combine our days/weeks/months/quarters/years tables into a useful view.

FPSBASE.CAL_WORK_MONTHS_PLUS

Combine our months/quarters/years tables into a useful view.

FPSBASE.CAL_WORK_WEEKS_LAST_N

Get last n weeks where n=0 is the current week, n=1 is last week, etc. The DISTINCT is an attempt to force the database to run this view first during execution.

FPSBASE.CAL_WORK_WEEKS_NEXT_N

Get next n weeks where n=0 is the current week, n=1 is next week, etc. The DISTINCT is an attempt to force the database to run this view first during execution.

FPSBASE.CAL_WORK_WEEKS_PLUS

Combine our weeks/months/quarters/years tables into a useful view.

FPSBASE.CHK_CALENDARS

This view combines many sanity checks on the CAL tables that were originally check constraints into a CHK view. The constraints were annoying because they would prevent insert and update, making it really difficult to figure out what was wrong and fix it. After creating or modifying the calendars you should check this view; it should have no rows.
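Per the description above, the post-modification sanity check is simply (no assumptions beyond the view name):

```sql
-- Should return no rows after creating or modifying the calendars.
select * from fpsbase.chk_calendars;
```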

FPSBASE.CHK_ECT_HIST

View to check the integrity of the ECT_HIST table.

FPSBASE.CHK_EQP_CHAMBERS

This view checks the EQP_CHAMBERS table to ensure that there is at least one non-assist chamber for each tool, plus a number of additional required checks. Each one is described in its error message so we will not list them all in the comments. This was originally a trigger on EQP_CHAMBERS update but it was tricky to fix with the trigger enabled so we made this a check instead.

FPSBASE.CHK_EQP_STATUS

This view finds differences between FPSINPUT.EQP_STATUS_VERIFY and FPSBASE.EQP_STATUS. This view is used by the FIX_EQP_STATUS procedure to write an event to EQP_EVENT_HIST to adjust the entity to the correct state.

FPSBASE.CHK_ETP_HIST

This view does some basic sanity checks on FPSBASE.ETP_HIST_NOW which is a view that combines ETP_STATUS and ETP_HIST. If there are errors then they will cause errors with the shift/week tables which are checked separately in CHK_ETP_SHIFT_AND_WEEK.

FPSBASE.CHK_ETP_SHIFT_AND_WEEK

This view does some basic sanity checks on all EQP and ETP shift and week tables. It checks things like whether the sum of seconds for each entity in each shift equals the number of seconds in the shift (and the same for each week) and whether data is present for all shifts for all tools. The root cause of these errors is nearly always a problem with ETP_HIST, which is checked in the CHK_ETP_HIST view.

FPSBASE.CHK_FACILITY_SEGMENTS

This view checks the RTG_COMMON_STEPS table to ensure that facility_segment is contiguous. See the error message for details. This was originally a trigger on RTG_COMMON_STEPS update but it was tricky to fix with the trigger enabled so we made this a check instead.

FPSBASE.CHK_INVALID_EQP_ETP_STATES

We do not want foreign keys on the EQP/ETP_STATE_DIAGRAM tables for locking purposes, but we have serious issues if any eqp_state or etp_state is not in the diagram, so we check with this view, which is used in SYSMNTR_ERRORS.

FPSBASE.CHK_JOBS_P1_GAPS

First of two CHK views to check our job/batch logic. This view checks that lots with the same event logged within same_job_if_within_sec are in the same job. It also checks the converse: lots which do not have the same event logged within same_job_if_within_sec + 30 seconds must not be in the same job. This means there is a 30-second gray area where we allow the grouping to go either way, which is fine.
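The gray-area rule above can be sketched as follows (a minimal illustration with sample values; gap_sec and same_job_if_within_sec here are hypothetical names standing in for the actual columns):

```sql
-- Minimal sketch of the 30-second gray area using sample values.
with sample as (select 45 as gap_sec, 30 as same_job_if_within_sec from dual)
select case
         when gap_sec <= same_job_if_within_sec      then 'must be in the same job'
         when gap_sec >  same_job_if_within_sec + 30 then 'must be in different jobs'
         else 'gray area: either grouping is acceptable'
       end as expected_grouping
  from sample;
```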

FPSBASE.CHK_JOBS_P2_COUNTS

Second of two CHK views to check our job/batch logic. This view checks that the information in WIP_CURR_JOBS matches the counts and sums of all lots in those jobs from WIP_EVENTS_CURR_STEP.

FPSBASE.CHK_LOTS_IN_TABLES

All active lots should be in WIP_LOTS_STATIC, WIP_LOTS_VERIFY, and WIP_EVENTS_CURR_STEP. Lots should not be in just one or two of these tables; they should be in all three or none. In addition, those lots which are not TW and not in bank should be in WIP_STEP_FUTURE. This view checks all of these and reports where we have problems.

FPSBASE.CHK_LOT_SHIFT_CTM

This view checks that all lots which exist at the start and end of the shift have a total cycle time equal to the length of the shift. This is another validation for cycle time.

FPSBASE.CHK_MAX_BATCH_SIZE

This view highlights eqp_types for which the max_batch_size_carriers might be set incorrectly. This is a critical field for throughput.

FPSBASE.CHK_MAX_INSTS

This convenient query quickly gives the maximum inst from our key history tables. This is particularly useful after executing ADM_DELETE_ALL_HIST.

FPSBASE.CHK_OPER_MOVES

Please refer to https://help.finalphasesystems.com/display/DW/Guide+to+Moves+Matching for a complete guide to Oper Moves Matching. This is a template of a view to use for moves matching. If your site populates WIP_OVR_OPER_MOVES_HIST then you can actually use this as is. But at most sites it will need to be adapted with some site specifics and saved as a new view named FPSINPUT.LOAD_CHK_OPER_MOVES. This is fully described in the guide on the Help Site.

FPSBASE.CHK_OPER_MOVE_CFG

Please refer to https://help.finalphasesystems.com/display/DW/Guide+to+Moves+Matching for a complete guide to Oper Moves Matching. A convenient view used for oper moves debugging which shows the values of all of the configurable columns in GEN_FACILITIES and WIP_LOT_GROUPS that affect the oper moves calculations. It also checks a couple of fields in RTG_OPERATIONS and RTG_OPERATION_FAMILIES which can be set. The related view CHK_OPER_MOVE_COMMENTS provides an easy reference of the column comments for all of these columns which are relevant to oper moves configuration.

FPSBASE.CHK_OPER_MOVE_COMMENTS

Please refer to https://help.finalphasesystems.com/display/DW/Guide+to+Moves+Matching for a complete guide to Oper Moves Matching. This view provides an easy reference of the column comments for all columns which are relevant to oper moves configuration. The related view CHK_OPER_MOVE_CFG shows the values of most of these columns.

FPSBASE.CHK_PRD_ETP_VS_WIP

This view checks that the current lots processing according to lot status matches the current tools processing according to ETP_STATUS.

FPSBASE.CHK_PROCESS_SEGMENT_UNIQUE

We should be able to say that a given process is in the same segment whenever it appears on any route in a given route_family. It can repeat on a route within the same segment but it should not be on different segments on different routes. This view checks for that. If there are repeats then it lists them in this view and the refresh will get a PK violation. So we will see the error and then simply query the contents of this view to see the culprits. I think checking by route_family and route_segment is the correct way to do this since this is how we group in the Line Viewer, but it gets some errors so this view is by route and route_segment instead. To check by route_family and route_segment just replace route (whole word) with route_family except in the using clause.

FPSBASE.CHK_PROC_CARRIER_LOCATIONS

This view identifies problems with consistency or synchronization between two fundamental components of our real-time ETL:

- FPSINPUT.WIP_EVENT_HIST: single source of truth for what LOTs (and, optionally, DURABLEs) are processing on what TOOLs at DATA_DATE.
- FPSINPUT.MHS_CARRIERS: single source of truth for the current physical location of LOTs and, in conjunction with FPSINPUT.RTG_DURABLES, DURABLEs.

In FPS location terminology, if a (LOT|DURABLE) is currently processing on a TOOL, then the CARRIER holding that (LOT|DURABLE) must have a STATION matching the TOOL: MHS_CARRIERS.LOCATION -> MHS_LOCATIONS.LOCATION, whose STATION must equal WIP_LOT_PLUS.TOOL. If these are mismatched, it indicates a lack of consistency or synchronization in the ETL. Some degree of lag is unavoidable, because: (a) there may be a lag in the underlying factory systems; (b) we (currently) load WIP_EVENT_HIST in a batched stream on a finite interval, while MHS_CARRIERS and RTG_DURABLES refresh on a finite interval. Due to (a) and/or (b) we may, for example, load a begin-processing event for some (TOOL, LOT, DURABLE) before we load the updated locations of the lot-carrier or durable-carrier. In the interim, we will have a logical inconsistency insofar as we believe that the (LOT, DURABLE) are processing on TOOL but physically located elsewhere.

This view attempts to describe such inconsistencies for currently-processing WIP. When the STATION of a (LOT|DURABLE)-carrier's LOCATION does not match the TOOL where that (LOT|DURABLE) is currently processing:

- (LOT|DUR)_CARRIER_LOCATION_PROBLEM: describes the problem qualitatively.
- (LOT|DUR)_CARRIER_LOCATION_LAG_MIN: the lag, in minutes, between the UPDATED_INST of the CARRIER's LOCATION and the BEG_PROC_INST on the TOOL. When this value is small and positive, e.g. 0 < lag_min < 3 * GREATEST(WEH_update_interval, carrier_location_update_interval), then the lag may be due to one of the "unavoidable" reasons above. However, if the value is >>0 or <<0, it indicates a more serious problem.
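The location-consistency rule can be sketched as a query; the join keys and column names here are assumptions for illustration, not the actual view definition:

```sql
-- Sketch: the STATION of the carrier's LOCATION should match the TOOL
-- where the lot is currently processing. Rows returned are mismatches.
-- CARRIER, UPDATED_INST, and BEG_PROC_INST placements are assumed.
select w.lot, w.tool, c.location, l.station,
       (l.updated_inst - w.beg_proc_inst) * 24 * 60 as lag_min
  from fpsbase.wip_lot_plus   w
  join fpsinput.mhs_carriers  c on c.carrier  = w.carrier
  join fpsinput.mhs_locations l on l.location = c.location
 where l.station <> w.tool;
```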

FPSBASE.CHK_REF_WSF_LOTS_TO_FIX

This table stores lots where WIP_SMP_FUTURE or RTG_ROUTE_STEPS or RTG_TOOL_ASSIGNMENTS_LOT has been updated after WIP_STEP_FUTURE in a way that requires an update to WSF. This query can be slow so we refresh this table outside of the RealTime job and then use the results in two sections of the ADM_FIX_WIP_STEP_FUTURE procedure, which must run in the RealTime job since it updates WIP_STEP_FUTURE. The ALL section calls the UPDATE_WIP_STEP_FUTURE procedure to fully refresh all records for the lot. The SINGLE section goes in a merge statement to update individual records.

FPSBASE.CHK_REWORK_START_END

Checks that the rework start and end behave properly regarding route, main_route, and next_route.

FPSBASE.CHK_RTG_ACTIVE_THD_LOTS

View to check that RTG_ACTIVE_THREADING_LOTS is correct.

FPSBASE.CHK_SHARED_PARENT_WARNINGS

This view summarizes recent warnings where the WIP_EVENT_HIST_INSERT_BEF trigger is unable to determine the tool for an event logged to the shared_parent_tool. This happens because either no tools are allowed or multiple tools are allowed for the process. The ETL should be reviewed in these cases but we have agreed that it is not quite worth sending out errors in SYSMNTR_ERRORS, so we log these as I rather than R in ADM_LOAD_HIST and provide this view.

FPSBASE.CHK_SHIFT_MOVES

Useful query to compare moves by shift to the raw data in WIP_STEP_HIST. This view should be used as a base for modification when shift moves do not match between different versions of DWH and/or Dashboard, to see the root cause of the discrepancy. Once you prove that the numbers from the Dashboard, the numbers from the DASH query, and the numbers from the WIP_STEP_HIST_PLUS query all match, then you can drill down into the WIP_STEP_HIST_PLUS query by grouping by more columns like event_type or prd or whatever. Frequently it is helpful to union this query to another copy of this query with all tables pointing to the other DWH via link.

FPSBASE.CHK_THP_BY_CH_TYPE_CNT

Test for Jeff to review

FPSBASE.CHK_THP_OVER_SCH_CAP

This view shows records where the automatically calculated throughput values are over the caps set in EQP_TYPES. These records either indicate problems with our throughput calculation or a cap which is set too low.

FPSBASE.CHK_WIP_END_SHIFT_HIST_LAST

This view validates data from last shift in WIP_END_SHIFT_HIST such that the endwip is the expected result given the activity during the shift. We expect that startwip + arrival - moves = endwip but we must adjust for create, terminate, split, merge, scrap, etc. The logic in this view is identical to CHK_WIP_END_SHIFT_HIST_REAL but this checks the table rather than the view.
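The balance rule above might be expressed as follows (column names such as qty_arrival and qty_moves are assumptions; the real check also adjusts for create, terminate, split, merge, scrap, etc.):

```sql
-- Hedged sketch: startwip + arrivals - moves should equal endwip before
-- adjustments. Rows returned need to be explained by those adjustments.
select facility, route, step,
       qty_startwip + qty_arrival - qty_moves as naive_endwip,
       qty_endwip
  from fpsbase.wip_end_shift_hist
 where qty_startwip + qty_arrival - qty_moves <> qty_endwip;
```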

FPSBASE.CHK_WIP_EVENT_TYPES

We use the EVENT_PARMx and EVENT_INSTx columns in WIP_EVENT_HIST to store data specific to just one event, and this view checks that we configure the meaning of each PARM or INST for each event in WIP_EVENTS.

FPSBASE.CHK_WIP_EVENT_VS_STEP_HIST

Ensure that all moves and qty delta events in WIP_EVENT_HIST match exactly with all events in WIP_STEP_HIST.

FPSBASE.CHK_WIP_LOTS_REALTIME

This view finds lots in WIP_LOTS_REALTIME which do not match the info in WIP_LOTS_VERIFY for step, route, qty, hold, and related columns. This view is used by the ADM_FIX_WIP_LOTS_REALTIME procedure to write an event to WIP_EVENT_HIST to adjust the lot to the correct info.

FPSBASE.CHK_WIP_STEP_FUTURE_PLUS

This view checks the WIP_STEP_FUTURE_PLUS view for logical errors.

FPSBASE.CHK_WIP_STEP_HIST_LAG

This view checks that all "from" properties of each event in WIP_EVENT_HIST and WIP_STEP_HIST match the "to" properties of the previous event for that lot. We do check this automatically to some degree in the WIP_EVENT_HIST_INSERT_BEF trigger and insert DIFF events in WIP_STEP_HIST to fix some differences real-time. Therefore this view ignores any discrepancies in WIP_EVENT_HIST where a DIFF event has been logged. Then we also ignore any MISS or DIFF events so this view is a comprehensive summary of any inconsistencies with WIP_EVENT_HIST logging and a great check for our ETL.

FPSBASE.CHK_WSF_CURR_STEP

This view finds discrepancies in WIP_STEP_FUTURE as compared to WIP_LOTS_REALTIME. The results of this view are fed to ADM_FIX_WSF_CURR_STEP procedure which loops through the lots and calls the normal UPDATE_WIP_LOTS_REALTIME and/or UPDATE_WIP_STEP_FUTURE_ONE_LOT procedures. Because we normally only update WIP_STEP_FUTURE when a lot moves and we do not insert rows if the route/step is not in RTG_ROUTE_STEPS_PLUS, these discrepancies normally occur after RTG_ROUTE_STEPS_PLUS is refreshed with a new record for the current route/step. However if we see fixes made after each lot even when RRSP did not refresh that would indicate a problem. Please note that checks on the entire WIP_STEP_FUTURE table including future steps are done in ADM_FIX_WIP_STEP_FUTURE.

FPSBASE.CTM_APD_START_LOT_HIST

A record for each non-TW lot started showing the cycle time expectations at the time of the start.

FPSBASE.CTM_APD_SUMMARY_WEEK_HIST

Weekly snapshot of CTM_SUMMARY for all production lots which are active during that week.

FPSBASE.CTM_APD_WEEK_HIST

This view summarizes data from WIP_END_SHIFT_HIST for the previous week. The logic here is similar to the logic in CTM_REF_FINAL and this table feeds that view, so any changes will likely need to be made to both views. This view has only the raw data for the specific grouping, without filling in the best available data.

FPSBASE.CTM_COMPARE

Summary of CTM and related info over entire routes and banks with selected examples representing various CTM cases. This should give us a good representation of the data in CTM_SUMMARY in a reasonable format to compare. For example, use the query below to compare the new logic of CTM_SUMMARY in this instance to the old logic of CTM_SUMMARY in the database at yourdblink to confirm the differences are as expected:

with new as (select * from fpsbase.ctm_compare)
select 'new' as which, facility, why_compare, prd, planprd, route, bank, prty_ctm_group, lot_group,
       total_avg_wip_full, sum_target_step_days, sum_commit_step_days, sum_step_full_wavg_days,
       sum_step_7d_wavg_days, sum_step_7d_med_days, sum_hold_7d_wavg_days, sum_nohold_wsf_days,
       avg_smp_pct_full, num_steps, never_skipped, rarely_skipped, often_skipped, always_skipped,
       is_nonstd, is_bank, is_tw, is_main_route, is_mostly_fillin
  from new
union all
select 'old' as which, facility, new.why_compare, prd, planprd, route, bank, prty_ctm_group, lot_group,
       c.total_avg_wip_full, c.sum_target_step_days, c.sum_commit_step_days, c.sum_step_full_wavg_days,
       c.sum_step_7d_wavg_days, c.sum_step_7d_med_days, c.sum_hold_7d_wavg_days, c.sum_nohold_wsf_days,
       c.avg_smp_pct_full, c.num_steps, c.never_skipped, c.rarely_skipped, c.often_skipped, c.always_skipped,
       is_nonstd, c.is_bank, c.is_tw, c.is_main_route, c.is_mostly_fillin
  from fpsbase.ctm_compare@yourdblink c
  join new using (facility, prd, planprd, route, bank, prty_ctm_group, lot_group, is_nonstd)
 order by 2, 3, 4, 5, 6, 7, 8, 9, which desc;

FPSBASE.CTM_REF_FINISHED_PERIODS

This view generates time periods and their sorting order for the graph. If the entire database is less than 10 weeks old then we use weekly buckets. Once we have more than 10 weeks of data then we switch to our normal logic of last few months then last few quarters then years. Note that the trigger CTM_FINISHED_PERIODS_DELETE takes care of the switch automatically if the periods get larger, which they normally do as time goes on. However if you ever want to manually change the logic then this simple merge query will update all rows with the new periods:

merge into ctm_finished_lot_hist h
using ctm_finished_periods p
on (h.inst >= p.start_period and h.inst < p.end_period)
when matched then update set h.finished_period = p.finished_period;

FPSBASE.CTM_REF_FOR_ROUTE_STEPS

This table exists purely to speed the load of RTG_ROUTE_STEPS_PLUS. Since CTM_SUMMARY loads much less frequently than RRSP we can refresh this table after CTM_SUMMARY and then each RRSP load is significantly faster.

FPSBASE.CTM_REF_SUMMARY

This series of views summarizes data from WIP_END_SHIFT_HIST into 2D, 7D, LONG, and FULL periods. The base logic here is similar to CTM_APD_WEEK_HIST so any changes to the base logic will likely need to be made to both views. This view has a record for all steps on all active prds/routes with all active combinations of lot_group, prty_ctm_group, and is_nonstd. Missing data is filled in with our best estimates from similar valid data so that we have reasonable values for all combinations. This complex fill-in logic is documented separately. This view also includes rather complex logic to calculate planned cycle times (target and commit) for each step to add up to the total specified in RTG_PRDS. This view used to be combined and was REALLY long! At about 120k characters it was four times longer than any other view in the FPS DWH. We split it into six separate refreshed tables mostly to reduce the temp tablespace used at one time. In addition, these smaller views are short enough that if you have an error in the view it will take you to the position of the error like normal. Previously it just told you that there was an error and you had to find it yourself. At one point we could not do "create table aaa as select * from ctm_ref_summary" for testing even with filters. It would hang forever and could not even be killed. Instead we created an empty table with "create table aaa as select * from ctm_summary where 0=1" and then populated it with "insert into aaa select * from ctm_ref_summary". We have not tested whether this is still an issue with the six split tables but we hope not.

FPSBASE.CTM_REF_SUMMARY_P1_MEDIANS

This view feeds CTM_SUMMARY but it is independent and complicated enough to split into its own view for improved understanding, documentation, and testing. The complicated part of this is the logic for moves for lots which have either split or merged earlier in the step. Basically we want the original step_ent_inst of the units which are in the lot which moved out. If the lot was split then we want the step_ent_inst of the parent lot from which it was split. This is normally s2.step_ent_inst (h.step_ent_inst is the time of the split, sm.split_lot is the parent, s2.step_ent_inst is the step_ent_inst of the parent) but sometimes it is just sm.step_ent_inst. If the lot was merged, we want the latest of the step_ent_inst values for each of the two lots which were merged together (sm.step_ent_inst is the "to" lot which is moving out while m2.step_ent_inst is the "from" lot which no longer exists after the merge).

FPSBASE.CTM_REF_SUMMARY_P2_BASE

This view collects the data from WIP_END_SHIFT_HIST and CTM_WEEK_HIST which is used to generate CTM_SUMMARY.

FPSBASE.CTM_REF_SUMMARY_P3_FILLIN_G1

One of nine levels of fillin logic for CTM_SUMMARY. See full details in CTM_REF_SUMMARY.

FPSBASE.CTM_REF_SUMMARY_P3_FILLIN_G2

One of nine levels of fillin logic for CTM_SUMMARY. See full details in CTM_REF_SUMMARY.

FPSBASE.CTM_REF_SUMMARY_P3_FILLIN_G3

One of nine levels of fillin logic for CTM_SUMMARY. See full details in CTM_REF_SUMMARY.

FPSBASE.CTM_REF_SUMMARY_P3_FILLIN_G4

One of nine levels of fillin logic for CTM_SUMMARY. See full details in CTM_REF_SUMMARY.

FPSBASE.CTM_REF_SUMMARY_P3_FILLIN_G5

One of nine levels of fillin logic for CTM_SUMMARY. See full details in CTM_REF_SUMMARY.

FPSBASE.CTM_REF_SUMMARY_P3_FILLIN_G6

One of nine levels of fillin logic for CTM_SUMMARY. See full details in CTM_REF_SUMMARY.

FPSBASE.CTM_REF_SUMMARY_P3_FILLIN_G7

One of nine levels of fillin logic for CTM_SUMMARY. See full details in CTM_REF_SUMMARY.

FPSBASE.CTM_REF_SUMMARY_P3_FILLIN_G8

One of nine levels of fillin logic for CTM_SUMMARY. See full details in CTM_REF_SUMMARY.

FPSBASE.CTM_REF_SUMMARY_P3_FILLIN_G9

One of nine levels of fillin logic for CTM_SUMMARY. See full details in CTM_REF_SUMMARY.

FPSBASE.CTM_REF_SUMMARY_P4_VALUES

This is part 4 of a series of refreshed tables used to populate CTM_SUMMARY while using as little temp tablespace as possible. Speed is secondary but this split strategy does slightly improve speed. See comments in CTM_REF_SUMMARY for complete details.

FPSBASE.CTM_REF_SUMMARY_TOTALS

This is the total cycle time for all steps in the route (or for the bank) for each variety of prd, planprd, priority, and lot group. Refreshing this into a table which is far smaller than CTM_SUMMARY speeds up the several places where we use this and ensures that all places are using the same calculations.

FPSBASE.CTM_REF_SUMMARY_VALUES_L1

One of five tables for fill-in logic used in GET_CTM_VALUES function.

FPSBASE.CTM_REF_SUMMARY_VALUES_L2

One of five tables for fill-in logic used in GET_CTM_VALUES function.

FPSBASE.CTM_REF_SUMMARY_VALUES_L3

One of five tables for fill-in logic used in GET_CTM_VALUES function.

FPSBASE.CTM_REF_SUMMARY_VALUES_L4

One of five tables for fill-in logic used in GET_CTM_VALUES function.

FPSBASE.CTM_REF_SUMMARY_VALUES_L5

One of five tables for fill-in logic used in GET_CTM_VALUES function.

FPSBASE.CTM_SUMMARY_PLANPRD_ROUTE_STEP

This view summarizes data from CTM_SUMMARY by planprd/route/step (or planprd/bank) using weighted averages. This is specifically for GP.

FPSBASE.CTM_SUMMARY_PLUS

Adds calculated columns to the CTM_SUMMARY table. Commented and organized similar to the select query in CTM_REF_SUMMARY.

FPSBASE.CTM_SUMMARY_PRD_ROUTE_STEP

This view summarizes data from CTM_SUMMARY by prd/route/step (or prd/bank) using weighted averages.

FPSBASE.CTM_SUMMARY_PRD_RT_STEP_FAST

This view summarizes data from CTM_SUMMARY by prd/route/step (or prd/bank) but instead of using weighted averages it just uses the most common lot_group/prty_ctm_group/planprd/is_nonstd. We agree that weighted average is the better approach however we have proven this method is much faster when used in complex queries. Also this one can be used to get target and commit since those values are the same for a given prd.

FPSBASE.CTM_SUMMARY_ROUTE_STEP

This view summarizes data from CTM_SUMMARY by route/step (or bank) using weighted averages.

FPSBASE.CTM_SUMMARY_ROUTE_STEP_FAST

This view summarizes data from CTM_SUMMARY by route/step (or bank) but instead of using weighted averages it just uses the most common prd/lot_group/prty_ctm_group/planprd/is_nonstd. We agree that weighted average is the better approach however we have proven this method is much faster when used in complex queries. Please note there is some cycle time information in RTG_ROUTE_STEPS_PLUS which uses weighted average and that would be the fastest method.

FPSBASE.CTM_SUMMARY_TOTALS_BY_PRD_BANK

Frequently we need the total cycle time for the prd rather than the full CTM_SUMMARY combination including planprd, lot_group, prty_ctm_group. The CTM_SUMMARY_TOTALS table is fast enough and this view ensures that all places are using the same calculations.

FPSBASE.CTM_SUMMARY_TOTALS_BY_RT_BANK

Frequently we need the total cycle time for the route rather than the full CTM_SUMMARY combination including planprd, lot_group, prty_ctm_group. The CTM_SUMMARY_TOTALS table is fast enough and this view ensures that all places are using the same calculations.

FPSBASE.CTM_TO_EOL_DIAG

This view is useful to quickly review the total route CTM for the highest running prd-routes at the site. It is particularly helpful to decide which CTM value is appropriate to use for future arrivals in WIP_STEP_FUTURE.

FPSBASE.CTM_WEEK_HIST_PLUS

This view shows the weekly data in the same format as CTM_SUMMARY with wavg. The table stores data in total_sec to use in the CTM_REF_SUMMARY calculations.

FPSBASE.ECT_APD_MOVES_WEEK_HIST

This table summarizes data from WIP_STEP_HIST for the previous week. This view has only the raw data for the specific grouping, without any fill-in logic, so it does not have all steps on all routes. NOTE: Since this view only looks at WIP_STEP_HIST and not at lots in the WIP at the end of the week, it does not have total_step_sec or qty_startwip or qty_endwip. Those values are in ECT_STATE_WEEK_HIST which uses ECT_LOT_HIST which does include current WIP. To get total_step_sec or qty_startwip or qty_endwip for the grouping, we need to sum the values for each ect_state from ECT_STATE_WEEK_HIST.

FPSBASE.ECT_APD_STATE_WEEK_HIST

ECT_STATE_WEEK_HIST is a weekly history table with the ect_state column added to the standard primary key of facility, prd, planprd, route, step, bank, prty_ctm_group, lot_group. This means that instead of having separate columns for total_proc_sec, total_hold_sec, total_wait_sec, etc. we have separate rows for each ect_state and a single column named total_unit_seconds. This total_unit_seconds column can be divided by the total seconds in the week to get the average WIP in that ect_state. It can also be divided by the total moves for the standard primary key in the week to get the cycle time.

For example, if one 25 wafer lot spends 2 days in the ect_state and another 5 wafer lot spends 3 days in the ect_state then the total_unit_seconds for that ect_state will be 25*2*86400 + 5*3*86400 = 5616000 unit-seconds. To get average WIP, divide this by the total seconds in the week: 5616000/604800 = 9.29 wafers in this ect_state on average. To get cycle time, divide by the total moves for the standard primary key of facility, prd, planprd, route, step, bank, prty_ctm_group, lot_group, which we will get from the CTM_WEEK_HIST table which has the primary key of just these standard columns without ect_state. If these two lots were the only two lots of this key for the week and both moved then we would have 30 moves, so 5616000/30 = 187200 seconds = 2.17 days. If there was another 20 wafer lot that moved which did not spend any time in this ect_state then we would have 50 moves and the cycle time would go down: 5616000/50 = 112320 seconds = 1.3 days. On the other hand, if no other lots moved and the 5 wafer lot remained in the WIP at the end of the week then we would have just 25 moves and the cycle time would go up: 5616000/25 = 224640 seconds = 2.6 days.
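The unit-seconds arithmetic for the two-lot example (25 wafers for 2 days, 5 wafers for 3 days) can be sanity-checked directly; this is pure arithmetic and only dual is Oracle-specific:

```sql
-- Worked numbers for the two-lot example.
select 25*2*86400 + 5*3*86400                       as total_unit_seconds,  -- 5616000
       round((25*2*86400 + 5*3*86400) / 604800, 2)  as avg_wip_wafers,      -- 9.29
       (25*2*86400 + 5*3*86400) / 30                as ct_sec_30_moves,     -- 187200 sec ~ 2.17 days
       (25*2*86400 + 5*3*86400) / 50                as ct_sec_50_moves,     -- 112320 sec ~ 1.3 days
       (25*2*86400 + 5*3*86400) / 25                as ct_sec_25_moves      -- 224640 sec ~ 2.6 days
  from dual;
```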

FPSBASE.ECT_APD_SUMMARY_WEEK_HIST

Weekly snapshot of ECT_SUMMARY for all production lots which are active during that week.

FPSBASE.ECT_HIST_PLUS

Set SHUTD, OFFRT, and eventually MHS which are virtual process states for ECTA and ECTFL.

FPSBASE.ECT_REF_FOR_ROUTE_STEPS

This table exists purely to speed the load of RTG_ROUTE_STEPS_PLUS. Since CTM_SUMMARY loads much less frequently than RRSP we can refresh this table after CTM_SUMMARY and then each RRSP load is significantly faster.

FPSBASE.ECT_REF_MOVES_7D_2D_CW

This view summarizes data from WIP_STEP_HIST for the last seven days, last two days, and the current week. This view has only the raw data for the specific grouping, without any fill-in logic, so it does not have all steps on all routes. NOTE: Since this view only looks at WIP_STEP_HIST and not at lots in the current WIP, it does not have total_step_sec or qty_startwip or qty_endwip. Those values are in ECT_STATE_7D_2D_CURRWK which uses ECT_LOT_HIST and does include current WIP. To get total_step_sec or qty_startwip or qty_endwip for the grouping, we need to sum the values for each ect_state from ECT_STATE_7D_2D_CURRWK.

FPSBASE.ECT_REF_STATE_7D_2D_CW

ECT_STATE_7D_2D_CURRWK has the current data with the ect_state column added to the standard primary key of facility, prd, planprd, route, step, bank, prty_ctm_group, lot_group. This means that instead of having separate columns for total_proc_sec, total_hold_sec, total_wait_sec, etc. we have separate rows for each ect_state and columns named total_unit_seconds_xxx. These total_unit_seconds columns can be divided by the total seconds in the period to get the average WIP in that ect_state, or by the total moves for the standard primary key in the period to get the cycle time.

FPSBASE.ECT_REF_STATE_DIAGRAM

This view combines ECT_STATE and PROCESS_STATE to get the full information.

FPSBASE.ECT_REF_SUMMARY

This table is the base table for all non-trend Cycle Time Analyzer data using the ECT logic. Here we get all of the steps for routes where we have data for at least one step, and we include all steps with avg WIP so that the avg WIP totals will be correct. This logic uses CTM_SUMMARY for fill-in logic, target/commit/fifo, sampling percentage, and more, which obviously means that it does not replace CTM_SUMMARY. If we are going to continue down the path of replacing it with ECT logic then we would have to include the logic for all of those columns here. First and foremost would be better fill-in logic. Please note that the ECTA queries must filter module, process family, and process at the very end in order to weight properly by route. This means that there is no reason to have indexes on any of these columns.

FPSBASE.ECT_REF_SUMMARY_P1_WEEKS

Part of a sequence of views to build ECT_SUMMARY in smaller pieces to avoid using too much PGA memory and temp tablespace.

FPSBASE.ECT_REF_SUMMARY_P2_STATE

Part of a sequence of views to build ECT_SUMMARY in smaller pieces to avoid using too much PGA memory and temp tablespace.

FPSBASE.ECT_REF_SUMMARY_P3_MOVES

Part of a sequence of views to build ECT_SUMMARY in smaller pieces to avoid using too much PGA memory and temp tablespace.

FPSBASE.ECT_REF_SUMMARY_P4_PERIODS

Part of a sequence of views to build ECT_SUMMARY in smaller pieces to avoid using too much PGA memory and temp tablespace.

FPSBASE.ECT_REF_SUMMARY_P5_BASE

Part of a sequence of views to build ECT_SUMMARY in smaller pieces to avoid using too much PGA memory and temp tablespace.

FPSBASE.ECT_STATE_IF_DISP_BY_TOOL

This view is used in the WIP_LOTS_REALTIME_UPDATE_BEF trigger and in the UPDATE_ECT_STATE_IF_WAIT procedure.

FPSBASE.ECT_SUMMARY_G3_FILLIN

This view provides the group 3 fill-in logic for the new ECT process states, to be used in ECT_SUMMARY if the minimum lot count is not met for the given time period.

FPSBASE.ECT_SUMMARY_G4_FILLIN

This view provides the group 4 fill-in logic for the new ECT process states, to be used in ECT_SUMMARY if the minimum lot count is not met for the given time period.

FPSBASE.EQP_APD_ENTITY_E10_PD_HIST

See detailed comments in EQP_APD_TOOL_E10_SHIFT_HIST.

FPSBASE.EQP_APD_ENTITY_E10_PW_HIST

See detailed comments in EQP_APD_TOOL_E10_SHIFT_HIST

FPSBASE.EQP_APD_ENTITY_E10_SHIFT_HIST

See detailed comments in EQP_APD_TOOL_E10_SHIFT_HIST.

FPSBASE.EQP_APD_ENTITY_E10_WEEK_HIST

See detailed comments in EQP_APD_TOOL_E10_SHIFT_HIST

FPSBASE.EQP_APD_ENTITY_WEEK_HIST

Adds rows for the next week not yet added to EQP_ENTITY_WEEK_HIST. This can be called at any time and repeatedly because if the previous week is already in the table then no rows are returned. This table does not have the cluster tool calculations, so it should not be summed over a tool or any grouping of tools; for that use EQP_TOOL_SHIFT_HIST. This is only to be used for an individual entity. We debated whether to write grouping columns like eqp_type and mnt_family with their values at the end of the week but decided that we will usually use the current values, and we can look up in CHGLOG if we want the values at the time.

FPSBASE.EQP_APD_SETUP_CHG_TOOL_WEEK_HI

Append summary by tool and week to EQP_SETUP_CHG_TOOL_WEEK_HIST

FPSBASE.EQP_APD_SETUP_CHG_WEEK_HIST

Append summary by eqp_type and week to EQP_SETUP_CHG_WEEK_HIST

FPSBASE.EQP_APD_TOOL_E10_PD_HIST

See detailed comments in EQP_APD_TOOL_E10_SHIFT_HIST.

FPSBASE.EQP_APD_TOOL_E10_PW_HIST

See detailed comments in EQP_APD_TOOL_E10_SHIFT_HIST

FPSBASE.EQP_APD_TOOL_E10_SHIFT_HIST

This table is used for the E10 Equipment metrics and for Equipment Dashboard spotlights including the Tool pages. Includes:
- the 6 basic E10 states (PRD, SBY, ENG, SDT, UDT, NST) [ Unit: seconds ]
- combined states (MFG, EQP UP, EQP DOWN, OPER, TOTAL) [ Unit: seconds ]
- UDT without waits/delays and SDT without waits/delays [ Unit: seconds ]
- number of times a UDT episode begins (e.g. for MTBF)
- number of times an SDT episode begins (e.g. for MTTS/MTTPM)
- number of times continuous downtime begins (e.g. for MTOL)
- _p columns are the subset of the group that specifically occurred during production
- _e columns are the subset of the group that is Equipment related
- mUxx metrics denote Unit metrics
- mTxx metrics denote Time metrics
- e10_x_entity denotes seconds logged to all entities for the tool; e10_x_entity is necessary for the Equipment Maintainability (8.4) metrics specifically.
You might wonder if we really need both EQP and ETP versions considering that the only difference between EQP and ETP is the PRD and SBY time. The short answer is that we need both. The long answer for a specific example follows. Let's take a look at what metrics use the two columns e10_prd and e10_sby:
- Mean time between failures during productive time (mtbf_p)
- Mean Productive Time between Equipment-related Failures (e_mtbf_p)
- Total Utilization (utu)
- Operational Utilization (uou)
Additionally there is a slight difference in some of the classifications of down episodes. The main difference is looking at the e10_state before the episode. The difference between PRD and SBY in the e10_state_before affects the following episode classifications:
- n_udt_p_episodes
- n_e_udt_p_episodes
- n_udt_p_episodes_entities
- n_sdt_p_episodes, n_sdt_p_episodes_entities
- n_cdt_episodes_entities
And it could change the values in the following metrics:
- Mean time between failures during productive time (mtbf_p)
- Mean failure duration during productive time (mfd_p)
- Mean Productive Time between Equipment-related Failures (e_mtbf_p)
- Mean time off line (mtol)
The quality of both the Eqp State Diagram and automated PRD/SBY logging at the site will determine how close the EQP metrics are to the ETP metrics.
IMPORTANT NOTE: The major differences between all of the ExP_APD_y_E10_z_HIST views are: entity vs. tool, ETP vs. EQP, and shift-based vs. plan-based. Entity vs. tool has important logical differences, but ETP vs. EQP and shift-based vs. plan-based are nearly identical. When making changes, just edit one of the two; then it is safe to find/replace EQP and ETP for the first three sections only, with three exceptions: EQP_UNITS_plan_day_HIST and ETP_MNT_EPISODE_HIST are table names in both views, plus the column is_eqp_related_event is in both. Fortunately those will error if you find/replace, so it is actually fine to find/replace everything in these first three sections and then fix errors on these two tables and one column. Everything below that, starting with episode_sums, is identical in both views and should be copied directly. All EQP/ETP must stay.

FPSBASE.EQP_APD_TOOL_E10_WEEK_HIST

See detailed comments in EQP_APD_TOOL_E10_SHIFT_HIST

FPSBASE.EQP_APD_TOOL_WEEK_HIST

Adds rows for the next WEEK not yet added to EQP_TOOL_WEEK_HIST. This can be called at any time and repeatedly because if the previous WEEK is already in the table then no rows are returned. We debated whether to write grouping columns like eqp_type and mnt_family with their values at the end of the week but decided that we will usually use the current values, and we can look up in CHGLOG if we want the values at the time.

FPSBASE.EQP_APD_UNITS_PLAN_DAY_HIST

FPSBASE.EQP_APD_UNITS_SHIFT_HIST

FPSBASE.EQP_CURR_DURABLES

A simple view to show which durables are currently on each tool

FPSBASE.EQP_ENTITIES

Combine EQP_TOOLS and EQP_CHAMBERS to get full list of entities

FPSBASE.EQP_ENTITIES_PLUS

A view to join all EQP tables to get all info about each entity. This is the base for all EQP views in all FPSAPP applications, either directly or through EQP_TOOLS_PLUS which uses this.

FPSBASE.EQP_ENTITY_SHIFT_HIST_NOW

Unions EQP_TOOL_SHIFT_HIST table with the realtime data for the current shift which is not yet in the history. IMPORTANT NOTE: This view uses DASH_B_H_ETP_7D table for speed but that table only has data for either EQP or ETP whichever one is active on the Dashboard. Therefore this view will return data for the previous and current shift only if EQP is active! This works perfectly for the Dashboard as long as we are still using the flag. In a future version, we will show both EQP and ETP together and at that time we will have to populate two different DASH_B_H_xxx_7D tables and modify this view accordingly.

FPSBASE.EQP_LOGGED_ENTITIES_ALL_ENTS

This view shows all entities which apply to a logged entity. For example, if you filter this for the main tool you get a row for each chamber.

FPSBASE.EQP_MNT_FUTURE_ALL_INSTANCES

Adds future instances of frequent time-based PM's up to 31 days out based on the interval_sec. Adds up to 31 future instances so daily PM's will go 31 days out.
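A minimal sketch of how such future instances could be generated with a row generator. The source table name and the pm_id, next_due_inst, and interval_sec columns are assumptions for illustration only:

```sql
-- Generate up to 31 future instances per time-based PM, capped at 31 days out.
select pm.pm_id,
       pm.next_due_inst + (gen.n * pm.interval_sec / 86400) as future_due_inst
  from fpsbase.eqp_mnt_future pm
  cross join (select level as n from dual connect by level <= 31) gen
 where pm.next_due_inst + (gen.n * pm.interval_sec / 86400) <= sysdate + 31;
```

This matches the behavior described above: a daily PM (interval_sec = 86400) yields all 31 generated instances, while less frequent PMs are cut off by the 31-day window.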

FPSBASE.EQP_MNT_FUTURE_PLUS

Adds estimated times for counter-based PM's using rate of change, is_early/due/late, counter_remaining, and other columns.

FPSBASE.EQP_PORTS_PLUS

This view was renamed to EQP_TOOL_PORTS because of the possibility that a port on a shared parent can exist multiple times here, once for each tool which is sharing the parent. This view now exists only for backwards compatibility with the Scheduler which uses it in several places but all with reference to tool so the multiples do not cause problems. Once Scheduler is modified to query from EQP_TOOL_PORTS then we can drop this.

FPSBASE.EQP_PORT_HIST_PLUS

At some point we will include port mode changes in EQP_EVENT_HIST and EQP_STATUS, but until then we build the HIST table with a trigger that captures changes to EQP_PORTS. Note we include the same fast join with EQP_PORT_MODES as is in EQP_TOOL_PORTS, but we do not join with EQP_TOOLS_PLUS since it is possible the tool values might have changed.

FPSBASE.EQP_REF_ALL_SETUPS_BY_TOOL

This refreshed table is used in EQP_REF_CH_ALLOWED_STATUS only to increase the speed. Even though this table takes just a second or two to refresh, when this query was embedded in the REF view, EQP_CH_ALLOWED_STATUS was taking over 30 seconds at one site. Using this refreshed table it dropped to under a second as expected. This data will change when tools are added/dropped and when setups are added/dropped. The latter will happen more frequently but the former is more critical, so by default we add this job after the refresh of EQP_TOOLS_PLUS.

FPSBASE.EQP_REF_CAP_ENTITIES

This view calculates cap_entity based on the info in EQP_ENTITIES. It is fast enough as a view for CM, but since we use this in the GET_THP_VALUES function we need to refresh it into a table. Even though it uses ETP_STATUS, this table rarely changes because it only uses ETP_STATUS to determine which chambers are NST. Therefore we can create the GET_CH_TYPE_OF_CAP_ENTITY function with result_cache using only this table and it will get that information in memory for GET_THP_VALUES.

FPSBASE.EQP_REF_CH_ALLOWED_STATUS

This refreshed table is a replacement for the GET_CH_ALLOWED_STATUS function which was too slow. It is used in WIP_STEP_FUT_ASSIGNMENTS where it is critical for speed of the WIP_CURR_LOTS_TOOLS table refresh. IMPORTANT: This table does not check the status of the main tool because we already do that in WIP_STEP_FUT_ASSIGNMENTS, which simplifies and speeds this function. This query returns a delimited list of ch_type values which are included in the ch_allowed run path but do not have at least one chamber up. If this query returns null then the ch_allowed has a valid run path. If it returns anything then it cannot run because there is no run path for one or more of the listed ch_types. For a tool that has chambers A+B of ch_type ET and C+D of ch_type DEP, where A+C are MFG and B+D are NONMFG, ch_allowed ABCD would return null because it requires both chamber types and it can run on a chamber of each type. AC, ABC, and ACD would also return null for the same reason. A would also return null because it does not require a DEP ch_type and it has an ET ch_type to use. Similarly C would return null. ABD would return DEP, as would AD and D. BCD would return ET, as would BC and B. BD would return ET,DEP because it requires both types but has neither available.
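A usage sketch in the spirit of the example above. The missing_ch_types column name is a guess at how the delimited list is exposed, not a confirmed name:

```sql
-- ch_allowed values that currently have no valid run path on their tool.
select facility, tool, ch_allowed, missing_ch_types
  from fpsbase.eqp_ref_ch_allowed_status
 where missing_ch_types is not null;
```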

FPSBASE.EQP_REF_CH_PATH_STATUS

This refreshed table is a replacement for the GET_CH_PATH_STATUS function which was too slow. It is used in WIP_STEP_FUT_ASSIGNMENTS where it is critical for speed of the WIP_CURR_LOTS_TOOLS table refresh. IMPORTANT: This table does not check the status of the main tool because we already do that in WIP_STEP_FUT_ASSIGNMENTS, which simplifies and speeds this function. This query is similar to GET_CH_ALLOWED_STATUS but it only accepts the path if all chambers in the path are MFG. This is slightly different from the other query, which accepts the path if at least one chamber of each ch_type is MFG. For the paths, we assume that all allowed paths will be specified. Let's take a tool that has chambers A+B of ch_type ET and C+D of ch_type DEP. If we used ch_allowed then it would just be ABCD, which we interpret as one of each ch_type. At sites where we use path, we would specify each allowable path, so we would list ABCD, ABC, ABD, ACD, BCD, AC, AD, BC, and BD. This query is much simpler than the other query because it just checks that all chambers are MFG and will return a value if not.

FPSBASE.EQP_REF_LOGGED_ENTITIES

This view shows all possible logged entities since one may log a tool or a subtool or an entity or a port. This table can be used for the result_cache function GET_TOOL_FOR_LOGGED_ENTITY because even though the port_mode in EQP_PORTS is constantly changing, those changes will not be reflected in this view. Therefore the EQP_LOGGED_ENTITIES table will rarely change so the result_cache will not need to refresh. The "not in" should be unnecessary since we should not have a port which is also an entity. But this can happen either by a mistake in the ETL or by a simple race condition when an entity changes to a port or vice versa. We get an error in SYSMNTR_ERRORS to fix it, but here we still want this to succeed and not violate the PK of the table.

FPSBASE.EQP_REF_STATE_DIAGRAM

This view summarizes all levels of equipment state information into one table. This version is for the client state model as configured in EQP_L6_DETAILED_STATES. We almost always use EQP_STATE_DIAGRAM rather than the EQP_Lx_xxx tables.

FPSBASE.EQP_REF_TOOLS_MPCT_STATE

MPCT (Multi-Path Cluster Tool) state is the most representative overall state of the tool so these values are used in applications where we need to show just one state of a tool with multiple chambers.

FPSBASE.EQP_REF_TOOLS_PLUS

This view joins all EQP tables to get all info about each entity. EQP_TOOLS_PLUS is the base for all EQP views in all applications and is one of the most frequently used tables in the DWH.
For Scheduler, each TOOL has a PROCESS_GROUP, then each PROCESS_GROUP has a SCHED_GROUP. All tools in the same PROCESS_GROUP do roughly the same group of processes, and PROCESS_GROUP is used for most tool-based Scheduler configuration. All processes and all tools in the same SCHED_GROUP are scheduled together and there may not be any crossover with any other SCHED_GROUP. Also each PROCESS_GROUP has a PROCESS_CLASS and all tools in the same PROCESS_CLASS have the same general purpose like Process or Metrology or Nonwafer. These classes are listed in FPSADMIN.RTG_PROCESS_CLASSES and are the same for all sites.
For Equipment Dashboard and maintenance, each EQP_TYPE has a MNT_FAMILY. All tools in the same MNT_FAMILY are similar and on the same maintenance schedule. Each MNT_FAMILY is assigned to a MNT_MODULE which is the module responsible for maintaining them.
For Operations Dashboard, each TOOL is assigned to a PROCESS_FAMILY by some relatively complex DWH logic. Then each PROCESS_FAMILY is assigned a PROCESS_MODULE which is the module responsible for operating the tools. See comments in GEN_MODULES for more details. All tools in the same family do roughly the same group of processes but crossover is allowed for alternates. See the comments on RTG_PROCESSES for details on how processes get assigned to PROCESS_FAMILY. The PROCESS_GROUP column is no longer in RTG_PROCESS_FAMILIES as this link is obsolete. We get PROCESS_GROUP from EQP_TOOLS for tools and from RTG_PROCESSES for processes.
Different tools which do the same processes (perhaps an older and a newer model of similar tools) would have a different EQP_TYPE and probably a different MNT_FAMILY, but all of the tools would be in the same PROCESS_GROUP and all of these tools should be assigned to the same PROCESS_FAMILY. Tools which normally do different processes but occasionally serve as alternates for each other (perhaps a long setup is required to do the other process so we do not expect to use them unless in an emergency) would have a different PROCESS_FAMILY and probably the same PROCESS_GROUP but definitely the same SCHED_GROUP. Tools which are identical but do different processes (perhaps an Etch sink and a Diffusion sink) might have the same EQP_TYPE although it is not required. Please note that while we differentiate between EQP_TYPE, PROCESS_FAMILY, MNT_FAMILY, and PROCESS_GROUP, it will be common for all four to be the same.
This view mysteriously gives an "ORA-00600: internal error" when selected from in FPSDEMO but seems to work fine everywhere else, so we are just going to keep an eye on this.

FPSBASE.EQP_REF_TOOL_PROCESSFAMS

This determines the primary process family for each tool as well as the list of subfamilies to which each tool belongs. This logic is similar to RTG_REF_PROCESS_EQPTYPES and these should often be modified together.

FPSBASE.EQP_REF_TYPE_CTC_MULTIPLIERS

Instead of just the records entered in the FPSINPUT.EQP_TYPE_CTC_MULTIPLIERS_OVR table, this table has ALL of the possible combinations for each eqp_type. This makes it much easier to debug what has happened because you just query this table and see the multipliers, and if you do not like a value then you enter it in the OVR table! This table includes the basic ch_type_cnt combinations like 1ETCH to 3ETCH where a throughput value from 1ETCH is multiplied by 3 to be used for 3ETCH (multiplied for UPH, divided for MPU). It also extends to records with multiple ch_types if the only difference is one of the first two types listed. That means 1ET+1CLN to 2ET+1CLN will be multiplied by 2 since the only difference is the first type. Also 1ET+1CLN to 1ET+2CLN will be multiplied by 2 since the only difference is the second type. But 1ET+1CLN to 2ET+2CLN will not be multiplied since both are different and we just do not know. And 1ET+1CLN+1TI to 1ET+1CLN+2TI will not be multiplied since we only check the first two types listed.

FPSBASE.EQP_REF_TYPE_MPU_MULTIPLIERS

These multipliers are used in GET_THP_VALUES to calculate the interval and first unit seconds when only MPU is provided in THP_MANUAL. They are simply a weighted average of the valid calculated values for the eqp_type which depends on exp_cascade_qty and so on.

FPSBASE.EQP_SETUP_CHG_FINAL

This view calculates the estimated setup change time to use for dispatching and scheduling. It combines the manual entries with the 25th percentile of historical data.
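A sketch of how a 25th percentile of historical change times can be computed in Oracle with PERCENTILE_CONT. The grouping columns and the setup_chg_sec duration column are assumptions for illustration:

```sql
-- 25th percentile setup change time per tool and setup transition.
select facility, tool, from_setup, to_setup,
       percentile_cont(0.25) within group (order by setup_chg_sec) as p25_setup_chg_sec
  from fpsbase.eqp_setup_chg_hist
 group by facility, tool, from_setup, to_setup;
```

Using a low percentile rather than the mean keeps a few unusually long historical changes from inflating the estimate.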

FPSBASE.EQP_SETUP_CHG_HIST

History of each setup change from WIP_STEP_HIST. We filter for first_lot_in_job so multi-lot jobs are only counted once. The inst column is necessary so we can filter on inst, which is fast, rather than beg_proc_inst, which has no index and is slow.

FPSBASE.EQP_TOOL_PLAN_DAY_HIST_NOW

Unions EQP_TOOL_PLAN_DAY_HIST table with the realtime data for all days not yet in the history. The previous day is never in the history since we do not load until one day has passed to allow for post-updates of the eqp_state. This view includes the partial data for the current shift. IMPORTANT NOTE: This view uses DASH_B_H_ETP_7D table for speed but that table only has data for either EQP or ETP whichever one is active on the Dashboard. Therefore this view will return data for the previous and current shift only if ETP is active! This works perfectly for the Dashboard as long as we are still using the flag. In a future version, we will show both EQP and ETP together and at that time we will have to populate two different DASH_B_H_xxx_7D tables and modify this view accordingly.

FPSBASE.EQP_TOOL_PORTS

This view shows additional information about ports as well as duplicating those assigned to a shared parent tool rather than a normal tool. When ports are shared by multiple tools on the same shared parent tool, those ports will actually be listed once here for each tool. So the effective PK of this view is (facility, location, tool) or (facility, port, tool), but *not* facility-location as with EQP_PORTS. It is important to note that we only show shared_parent_tool when port is shared. Each port on a shared parent tool can be dedicated to a single tool by setting EQP_PORTS.TOOL to the tool or shared by all tools which share the parent tool by setting EQP_PORTS.TOOL to the shared parent. Previously we had a foreign key on EQP_PORTS.TOOL pointing to EQP_TOOLS but we had to disable this when setting to a shared parent so we removed this foreign key. The view was renamed from EQP_PORTS_PLUS to EQP_TOOL_PORTS when this change was made.

FPSBASE.EQP_TOOL_SHIFT_HIST_NOW

Unions EQP_TOOL_SHIFT_HIST table with the realtime data for the current shift which is not yet in the history. IMPORTANT NOTE: This view uses DASH_B_H_ETP_7D table for speed but that table only has data for either EQP or ETP whichever one is active on the Dashboard. Therefore this view will return data for the previous and current shift only if EQP is active! This works perfectly for the Dashboard as long as we are still using the flag. In a future version, we will show both EQP and ETP together and at that time we will have to populate two different DASH_B_H_xxx_7D tables and modify this view accordingly.

FPSBASE.EQP_WIP_COMP_TGTS

Base logic of WIP and completes targets for NextMove Bay Performance pages. Targets are derived from wip_goal_est_shift by route step. Allowed tools (Rank P or A) are joined from RTAL where the route-step exists, or from RTA where the route-step does not exist in RTAL. Completes targets for the day are simply twice those of the shift. WIP targets for the day are equal to those of the shift. Customer override values are joined from Wip_metics_goal_ovr.

FPSBASE.ETP_APD_ENTITY_E10_PD_HIST

See detailed comments in EQP_APD_TOOL_E10_SHIFT_HIST.

FPSBASE.ETP_APD_ENTITY_E10_PW_HIST

See detailed comments in EQP_APD_TOOL_E10_SHIFT_HIST

FPSBASE.ETP_APD_ENTITY_E10_SHIFT_HIST

See detailed comments in EQP_APD_TOOL_E10_SHIFT_HIST.

FPSBASE.ETP_APD_ENTITY_E10_WEEK_HIST

See detailed comments in EQP_APD_TOOL_E10_SHIFT_HIST

FPSBASE.ETP_APD_ENTITY_WEEK_HIST

Adds rows for the next week not yet added to ETP_ENTITY_WEEK_HIST. This can be called at any time and repeatedly because if the previous week is already in the table then no rows are returned. This table does not have the cluster tool calculations, so it should not be summed over a tool or any grouping of tools; for that use ETP_TOOL_WEEK_HIST. This is only to be used for an individual entity. We debated whether to write grouping columns like eqp_type and mnt_family with their values at the end of the week but decided that we will usually use the current values, and we can look up in CHGLOG if we want the values at the time.

FPSBASE.ETP_APD_TOOL_E10_PD_HIST

See detailed comments in EQP_APD_TOOL_E10_SHIFT_HIST.

FPSBASE.ETP_APD_TOOL_E10_PW_HIST

See detailed comments in EQP_APD_TOOL_E10_SHIFT_HIST

FPSBASE.ETP_APD_TOOL_E10_SHIFT_HIST

See detailed comments in EQP_APD_TOOL_E10_SHIFT_HIST.

FPSBASE.ETP_APD_TOOL_E10_WEEK_HIST

See detailed comments in EQP_APD_TOOL_E10_SHIFT_HIST

FPSBASE.ETP_APD_TOOL_WEEK_HIST

Adds rows for the next WEEK not yet added to ETP_TOOL_WEEK_HIST. This can be called at any time and repeatedly because if the previous WEEK is already in the table then no rows are returned. We debated whether to write grouping columns like eqp_type and mnt_family with their values at the end of the week but decided that we will usually use the current values, and we can look up in CHGLOG if we want the values at the time.

FPSBASE.ETP_ENTITY_SHIFT_HIST_NOW

Unions ETP_ENTITY_SHIFT_HIST table with the realtime data for all shifts not yet in the history. The previous shift is never in the history since we do not load until one shift has passed to allow for post-updates of the etp_state. This view includes the partial data for the current shift. IMPORTANT NOTE: This view uses DASH_B_H_ETP_7D table for speed but that table only has data for either EQP or ETP whichever one is active on the Dashboard. Therefore this view will return data for the previous and current shift only if ETP is active! This works perfectly for the Dashboard as long as we are still using the flag. In a future version, we will show both EQP and ETP together and at that time we will have to populate two different DASH_B_H_xxx_7D tables and modify this view accordingly.

FPSBASE.ETP_HIST_NOW

ETP_HIST does not include the current status since we obviously do not know the end time. So to get the full history up to the current time we need to union history with the current status. We set TO_INST to one second after curr_date so that "curr_date < to_inst" queries will work, specifically in ETP_STATUS_AT_INST. Columns are sorted in this view based on their usefulness for ETP debugging, and the order by has the most useful sort order, which makes this view quite good for queries to debug ETP. Here is a good example: select * from etp_hist_now where facility = 'FAC' and tool = 'TOOL' and etp_to_inst > data_date - 6/24 order by orig_updated_time desc;

FPSBASE.ETP_MNT_EPISODE_CURR

FPSBASE.ETP_REF_STATE_DIAGRAM

This view summarizes all levels of ETP state information into one refreshed table. It combines enhanced states for PRD/SBY from ETP_L6_STATES with nonmfg states from EQP_L6_DETAILED_STATES. We almost always use ETP_STATE_DIAGRAM rather than the individual ETP_Lx_xxx tables. If this refresh fails due to a violation of ETP_STATUS_FK_ETP_STATE then some states changed from NONMFG to MFG and you need to execute this script to fix this:
alter trigger fpsbase.etp_status_after disable;
update fpsbase.etp_status set etp_state = 'SBY-NONE' where etp_state in (select ETP_STATE from FPSBASE.ETP_STATE_DIAGRAM minus select ETP_STATE from FPSBASE.ETP_REF_STATE_DIAGRAM);
alter trigger fpsbase.etp_status_after enable;
If this refresh fails due to a constraint violation of either ETP_ST_DIAG_PRECEDENCE_TRANS or ETP_ST_DIAG_U_SBY_PRECEDENCE or ETP_ST_DIAG_U_OVERALL_SORT, this is because we have a duplicate value for sort_within_transition. This column comes from rank_sort if the etp_state is from a rank, or from sby_precedence if it is an additional standby state. We cannot enforce a unique constraint across multiple tables so we enforce it in this table. The solution is to modify the values accordingly so that this query has no duplicates:
select 'AUTO' as s, rank, rank_display, rank_sort, sby_transition_state, use_in_sched, proc_state_if_wait from rtg_auto_ranks
union all select 'PERM', rank, rank_display, rank_sort, sby_transition_state, use_in_sched, proc_state_if_wait from rtg_perm_ranks
union all select 'SBY', sby_or_csc_state_suffix, sby_or_csc_display_suffix, sby_precedence, sby_transition_state, null, null from etp_l6_addl_sby_states a
order by rank_sort;

FPSBASE.ETP_SBY_STATE

This view is used by the UPDATE_ETP_STATUS_SBY procedure. This view could be built into the procedure but the logic is complex and for debugging and reference purposes it is nice to be able to query it as a view.

FPSBASE.ETP_TOOL_FAILURE_HIST

We define a failure as when the tool_mfg_pct for the entire tool is 0% and at least one entity is UDT. If we have a mix of UDT, SDT, ENG, NST then we still consider this a failure as long as at least one entity is UDT. We may also have entities that are SBY but contributing 0% to the tool_pct. However if we have any PRD/SBY entities which contribute more than 0% then it is not a failure. We get this info from ETP_MNT_EPISODE_HIST for speed purposes, then we add current failures from ETP_STATUS since they will not appear in EPISODE_HIST until the entity comes up and the episode ends.
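A hedged example of querying this view for recent failures on one tool; the column name failure_start_inst is an assumption for illustration, following the naming conventions used elsewhere in this dictionary:

```sql
-- Hypothetical example query: recent failures for one tool.
-- failure_start_inst is an assumed column name, not confirmed.
select *
  from fpsbase.etp_tool_failure_hist
 where facility = 'FAC'
   and tool = 'TOOL'
   and failure_start_inst > sysdate - 7
 order by failure_start_inst desc;
```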

FPSBASE.ETP_TOOL_PLAN_DAY_HIST_NOW

Unions the ETP_TOOL_PLAN_DAY_HIST table with the realtime data for all days not yet in the history. The previous day is never in the history since we do not load until one day has passed, to allow for post-updates of the etp_state. This view includes the partial data for the current day. IMPORTANT NOTE: This view uses the DASH_B_H_ETP_7D table for speed, but that table only has data for either EQP or ETP, whichever one is active on the Dashboard. Therefore this view will return data for the previous and current day only if ETP is active! This works perfectly for the Dashboard as long as we are still using the flag. In a future version, we will show both EQP and ETP together and at that time we will have to populate two different DASH_B_H_xxx_7D tables and modify this view accordingly.

FPSBASE.ETP_TOOL_SHIFT_HIST_NOW

Unions the ETP_TOOL_SHIFT_HIST table with the realtime data for all shifts not yet in the history. The previous shift is never in the history since we do not load until one shift has passed, to allow for post-updates of the etp_state. This view includes the partial data for the current shift. IMPORTANT NOTE: This view uses the DASH_B_H_ETP_7D table for speed, but that table only has data for either EQP or ETP, whichever one is active on the Dashboard. Therefore this view will return data for the previous and current shift only if ETP is active! This works perfectly for the Dashboard as long as we are still using the flag. In a future version, we will show both EQP and ETP together and at that time we will have to populate two different DASH_B_H_xxx_7D tables and modify this view accordingly.

FPSBASE.GEN_CONTIGUOUS_SHUTDOWNS

Some sites enter separate records in GEN_FACILITY_SHUTDOWNS for each shift that is part of the shutdown. For example, at a facility with 12 hour shifts, there will be 14 separate records making up a 7 day shutdown. We want a single record for the entire shutdown and this view provides that.
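One common way to collapse per-shift records into a single contiguous record is the standard gaps-and-islands pattern. This is only a sketch of that general technique, not the view's actual definition; the schema of GEN_FACILITY_SHUTDOWNS and the column names start_inst/end_inst are assumptions:

```sql
-- Sketch of a gaps-and-islands merge: a record whose start matches the
-- previous record's end joins the same "island"; each island becomes
-- one contiguous shutdown. Table/column names are assumed.
with marked as (
  select facility, start_inst, end_inst,
         case when start_inst = lag(end_inst)
                over (partition by facility order by start_inst)
              then 0 else 1 end as new_grp
    from fpsinput.gen_facility_shutdowns
), grouped as (
  select facility, start_inst, end_inst,
         sum(new_grp) over (partition by facility
                            order by start_inst) as grp
    from marked
)
select facility,
       min(start_inst) as shutdown_start,
       max(end_inst)   as shutdown_end
  from grouped
 group by facility, grp;
```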

FPSBASE.GEN_CURRENT_SHUTDOWNS

List upcoming shutdowns in the next 60 days including combining adjacent shutdowns listed in separate records.

FPSBASE.GEN_MODULES_PLUS

Add module_sort_order to basic module information

FPSBASE.GEN_REF_DATA_DATE

This caches DATA_DATE into a table for faster querying since DATA_DATE is used so frequently. It also includes the shift and week information associated with DATA_DATE. See comments in the CURR_DATE function for more information. Using (end_shift - start_shift) allows this view to work for any length shift. Same for (end_week - start_week), although if a customer has weeks of lengths other than 7 days that would be rather interesting. We use max in this view for two reasons. First, to tell the compiler there is only one row. Second, so that we will always get a row returned even if data_date_value is not in CAL_SHIFTS or CAL_WORK_WEEKS, which would otherwise result in a not-null error. Also, for some strange reason this version is much faster with max(inst) from WIP_EVENT_HIST repeated three times rather than in a with clause. Sailboat racing! Finally, we use data_date_value so as to avoid confusion with the DATA_DATE function.
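A hedged sketch of the single-row max() pattern described above; the column names are assumptions, not the view's actual text:

```sql
-- Sketch: max() over a left-joined calendar row guarantees exactly one
-- row comes back even if data_date_value matches no CAL_SHIFTS row.
-- Column names (inst, start_shift, end_shift) are assumed.
select max(d.data_date_value) as data_date_value,
       max(s.start_shift)     as start_shift,
       max(s.end_shift)       as end_shift
  from (select max(inst) as data_date_value
          from fpsbase.wip_event_hist) d
  left join fpsbase.cal_shifts s
    on d.data_date_value >= s.start_shift
   and d.data_date_value <  s.end_shift;
```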

FPSBASE.GEN_REF_UPCOMING_SHUTDOWNS

List upcoming shutdowns in the next 60 days including combining adjacent shutdowns listed in separate records.

FPSBASE.MHS_ALL_POSSIBLE_LOCATIONS

This view adds the appropriate facilities to MHS_LOCATIONS for all uses where we want to filter on facility including Scheduler and NextMove.

FPSBASE.MHS_ALL_POSSIBLE_RACKS

This view adds the appropriate facilities to MHS_LOCATIONS for all uses where we want to filter on facility including Scheduler and NextMove.

FPSBASE.MHS_ALL_POSSIBLE_STATIONS

This view adds the appropriate facilities to MHS_STATIONS for all uses where we want to filter on facility including Scheduler and NextMove.

FPSBASE.MHS_ALL_POSSIBLE_VEHICLES

This view adds the appropriate facilities to MHS_VEHICLES for all uses where we want to filter on facility including Scheduler and NextMove.

FPSBASE.MHS_BAY_TO_BAY_MATRIX_AUTO

Automatically calculate bay to bay transit times based on carrier location history.

FPSBASE.MHS_BAY_TO_BAY_MATRIX_PLUS

Transit info calculated based on the bay coordinates to fill in if the table is not specifically populated.

FPSBASE.MHS_CARRIERS_PLUS

Add info about lots and durables to basic carrier info. This version is unfinished: it only includes one day of history on the last tool, which is nowhere near long enough, but it is fast for testing.

FPSBASE.MHS_REF_LOCATIONS

This view gives all possible locations for a carrier including location/station, port/tool, and vehicle. It is important that we do not change the values of location_type and station_type because these are used in the NextMove web application.

FPSBASE.MHS_STATIONS

Grouping of MHS_LOCATIONS by station.

FPSBASE.MHS_STATIONS_PLUS

A simple view to add station capacities from MHS_LOCATIONS.

FPSBASE.MHS_STOCKERS_PLUS

A view to show all information about stockers. This view is more complex and slower than MHS_STOCKERS which is why we have both.

FPSBASE.MSO_RULE_PROCESS_LINKS_ALL

View combines all different sources for MSO process links to output a single list of all process links by rule.

FPSBASE.MSO_RULE_PROCESS_LINKS_COMBO

View combines all different sources for MSO process links to output a single list of all process links by rule. It guarantees the uniqueness of the mso_rule_process_links PK (facility, smp_rule_id, from_mso_process_na, to_mso_process_na)

FPSBASE.MSO_RULE_PROCESS_SUBS_ALL

View combines all different sources for MSO process subs to output a single list of all process subs by rule.

FPSBASE.MSO_RULE_PROCESS_SUBS_COMBINED

View combines all different sources for MSO process subs to output a single list of all process subs by rule. It guarantees the uniqueness of the mso_rule_process_subs PK (facility, smp_rule_id, to_mso_process)

FPSBASE.RTG_ACTIVE_PRD_ROUTES

This view augments the contents of RTG_ACTIVE_PRD_ROUTES_BASE with all the products that are not active in their own right, but are on the flush-forward path from another product that *is* active in its own right. One of the goals for this view is that it should be very fast at single-facility sites, where RTG_PRD_FACILITY_NEXT is presumably empty.

FPSBASE.RTG_END_PRDS

This view shows all objects that may actually be finished, shipping products. These are products which appear as destinations but NOT as sources in RTG_PRD_FACILITY_NEXT, and which consist of dies, not wafers.
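The destinations-minus-sources idea can be sketched with a MINUS query; this is an illustration only, omits the die/wafer filter, and assumes the source/destination column names:

```sql
-- Sketch: products that appear as destinations but never as sources.
-- Column names are assumptions; the real view also filters to dies.
select dest_facility as facility, dest_prd as prd
  from fpsbase.rtg_prd_facility_next
minus
select source_facility, source_prd
  from fpsbase.rtg_prd_facility_next;
```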

FPSBASE.RTG_EST_TRANSIT

FPSBASE.RTG_PROCESS_ALL_FAC_SORT

This somewhat complex query gets a sort order for each process. This is imperfect because processes can be in different orders on different routes but this is about as close as possible since it uses a weighted average by wip. There is a separate view for route_section which is identical.

FPSBASE.RTG_PROCESS_SUBFAMILY_COMMENT

The best solution to avoid processes in multiple process families or tools in multiple process families is to make the process families wider so that it is rare to have this problem. Within each process family we have a concept called process subfamily which covers shared tools. Here is a short example of how this works:

* Each tool should be in one primary process family which includes the processes it runs regularly. This is not used by scheduling/dispatching, so alternates which are rarely used can be in a different family and will still be scheduled/dispatched when appropriate, although these rarely used alternates will still suffer from the problem described in this note.
* Similarly, each process should be in one primary process family which includes the tools where it runs regularly.
* This means merging eqp types where there is significant crossover into a single process family.
* Each process can be in only one process subfamily within the process family. Note that subfamily is optional and is only used for tools with crossover like we are discussing here.
* Tools will be in multiple subfamilies and this is fully expected (and in fact automatically generated by what they are capable of running).
* Goal Planner has a capacity for each family which is the capacity of all of the tools -- then it has a capacity for each subfamily based on the tools in each subfamily. Because of the nature of subfamily and how it is shared, the total capacity of the subfamilies will always be greater than the capacity of the family. If this was not true then the subfamilies would not be shared and we would just make them separate families!

Are you confused yet? Let's go to the simplest example:

* Family has two tools T1 and T2 and three processes P1, P2A, and P2B.
* P1 can only run on T1. P2x can run on both T1 and T2. P1 is in subfamily S1 and P2x are in S2.
* Processes run at the same throughput which is 10 WPH and both tools have utilization of 75% or 6 hours per shift.
* The capacity of the family is 12 hours per shift which is 120 wafers.
* The capacity of the subfamily S1 is 6 hours/60 wafers because it can only use T1.
* The capacity of the subfamily S2 is 12 hours/120 wafers because it can use either tool.

In this example, GP cannot ask for more than 120 wafers total and no more than 60 of S1. It could ask for 120 of S2 and 0 of S1. Or 90 of S2 and 30 of S1. Or 60 of each. But it cannot ask for 90 of S1 and 30 of S2 because that exceeds the capacity of the S1 subfamily. So how does the dashboard handle this:

* Currently not particularly well but this will be improved soon.
* Main page and operations will still show process family as now, which will be the larger combined group of tools and processes when configured properly.
* The process family page will have a section which breaks down WIP/Moves/Coming by subfamily. This will look similar to the family breakdown on the module page except that tools will appear more than once.

What you will note is still missing is totals by tool within smaller groups of tools, but there is a reason why this is missing and will probably remain missing, at least on the dashboard. It's because it is impossible to set goals like this. You can only set goals where each process is assigned to one grouping, and we can do this by subfamily and by family. But you can't set goals for each tool by any group smaller than family because you cannot set goals by groups which are shared.

FPSBASE.RTG_PROCESS_SUBFAMS_W_TOOLS

Basic information on the number of tools up and down for each process subfamily which is used in queue timer logic and will eventually be used in Bottleneck Score and perhaps in est_move_sec. We agreed that this should use mfg_group (PRD+SBY) rather than availability (PRD+SBY+ENG). We have two versions of the number of tools for subfamily. Weighted assumes that the subfamily will continue to use the tools at the same rate as the last 7 days leaving expected capacity to be used by other subfamilies. Unweighted assumes that the subfamily could fully use all of its tools if necessary leaving no capacity for other subfamilies. NOTE: This view is only used by the RTG_QUEUE_MULTI_AREA logic. It is not used by the Dashboard subfamily logic. To get all subfamilies, we must query RTG_ROUTE_STEPS_PLUS so that subfamilies without tools are included.

FPSBASE.RTG_QUEUE_TIMERS_ALL_STEPS

This view has the effective primary key of timer_id, facility, route, and step. It only works for timers which start and end on the same route.

FPSBASE.RTG_QUEUE_TIMERS_LOT_START

This view shows either FPSINPUT.RTG_QUEUE_TIMERS_LOT_MANUAL if that table has any rows or FPSBASE.RTG_QUEUE_TIMERS_LOT_AUTO if the FPSINPUT table is empty.
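This "manual if populated, else auto" behavior is a common union pattern; the following is only a sketch of that pattern under the assumption that both tables share the same columns, not the view's actual definition:

```sql
-- Sketch: return the MANUAL rows when any exist, otherwise the AUTO rows.
select * from fpsinput.rtg_queue_timers_lot_manual
union all
select * from fpsbase.rtg_queue_timers_lot_auto a
 where not exists (select 1 from fpsinput.rtg_queue_timers_lot_manual);
```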

FPSBASE.RTG_QUEUE_TIMERS_PLUS

This view adds the queue timer start and end step sequence numbers, in addition to other useful route/step columns, as a plus view. The end step sequence number will be the step's sequence number on the end route if different from the start route, in which case it cannot be used in relation to the start step sequence number. If you need a query that includes all steps on the timer with one row for each step then use RTG_QUEUE_TIMERS_ALL_STEPS although that view only works for timers that start and end on the same route.

FPSBASE.RTG_REF_ACTIVE_PRD_ROUTES_BASE

This view determines all of the (FACILITY, PRD, PLANPRD, ROUTE) tuples that should be considered active in their own right. From highest to lowest priority, the reasons that a particular tuple should be considered active are:

* moves within last 3 days
* active lots in WIP_LOTS_VERIFY or WIP_LOTS_REALTIME
* off-main-route lots in WIP_LOTS_REALTIME
* starts planned
* demand for this PLANPRD
  * relevant for multi-facility sites where we only have starts for first facility
  * will only add records for a single PRD with a corresponding DEFAULT_PLANPRD
  * will only add records if this (FACILITY, PLANPRD) is not active for any other reason
* the same (FACILITY, PRD, PLANPRD, ROUTE) has been active recently
  * the REF-view recursively looks at the contents of the table generated from it
  * these rows are removed from the output after GEN_FACILITIES.DAYS_TO_KEEP_PRD_ACTIVE days
* CM scenario exists
* force keep-active (FPSINPUT.RTG_PRDS.KEEP_ACTIVE='Y')

Where there are multiple reasons to consider a tuple active, this view outputs only the highest priority reason. The view RTG_ACTIVE_PRD_ROUTES expands on this one by also including tuples which are in the flush-forward path of a tuple considered active according to this view.

FPSBASE.RTG_REF_ALL_PRD_SPLIT_30DAY

This view estimates the fraction of a given source (facility, prd, bank) that continues into each of multiple destinations (facility, prd, bank). Estimation is based on moves over the last 30 days. Originally this view included a ranking of the likelihood that a move from a given source will go to each of multiple possible destinations, or that a move to a given destination will come from each of multiple possible sources. With DWH-2704, we moved this to RTG_PRD_FAC_NEXT_W_BANK so this is now just the raw data. Please note that this includes all moves to a new facility, prd, or bank including moves that are not mapped in RTG_PRD_FACILITY_NEXT so it is left joined in RTG_PRD_FAC_NEXT_W_BANK.
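The fraction-per-destination estimate described above maps naturally to Oracle's RATIO_TO_REPORT analytic function; this is a sketch of the technique only, with assumed table and column names (the real source of move data is not named here):

```sql
-- Sketch: fraction of each source (facility, prd, bank) flowing to each
-- destination over the last 30 days. wip_bank_move_hist and all column
-- names are assumptions for illustration.
select source_facility, source_prd, source_bank,
       dest_facility, dest_prd, dest_bank,
       ratio_to_report(sum(move_qty)) over (
         partition by source_facility, source_prd, source_bank) as split_pct
  from fpsbase.wip_bank_move_hist
 where move_inst > sysdate - 30
 group by source_facility, source_prd, source_bank,
          dest_facility, dest_prd, dest_bank;
```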

FPSBASE.RTG_REF_COMMON_STEPS_PLUS

This view adds facility_segment_sort to RTG_COMMON_STEPS. Facility_segment must be contiguous when sorted by common_step_sort, which is checked by RTG_FACILITY_SEGMENT, therefore using min_sort gives the desired result. This query is fast but we refresh it into a table so that we can use an index when filtering or joining by a specific common step as we do in INSERT_NEW_ROUTE_STEP.

FPSBASE.RTG_REF_MISSING_LOT_TOOL_ASGN

For records where the lot-route-step has at least one tool in RTAL/RTALP, this view lists all tools which are in RTG_TOOL_ASSIGNMENTS for the process but are not included in RTAL/RTALP. It is critical that these tools are not allowed for the lot-route-step so that only the tools in RTAL/RTALP are considered. For example, if RTA has ET11, ET12, ET13 as three primary tools and RTA_LOT has just ET11 as a primary tool for lot L1, then we must consider ET11 as the only allowed tool for this lot. Please note that for another lot which has no tools specified in RTAL/RTALP we would consider all three tools as primary. Initially we included these "extra" tools (ET12 and ET13) in WIP_STEP_FUT_ASSIGNMENTS with rank I and the nice perm_rank_comment which still exists in this logic below. We found many cases where there could be a bunch of these "extra" tools and that including them slowed down our WSFA logic considerably for little benefit. Now these "extra" tools are in WSFA_UNIONS only and are then filtered out in WSFA_FILTERED. Therefore it does not really matter what we set as the perm_rank here, but we set I or V for debugging purposes when previewing in WSFA_UNIONS. We refresh this separate table simply for speed purposes. By unioning this table in WSFA_UNIONS with the other four RTA tables we can partition by lot-route-step-tool which makes a filter by tool fast. Without this table we would have to partition by lot-route-step to check all tools, which would make a filter by tool very slow because it would have to query all records.

FPSBASE.RTG_REF_PATH_PRDS

This view shows all the products that go into a given product, taking *only* the products with the highest SOURCE_PRD_NUM_PREFERENCE for each (DEST_FACILITY, DEST_PRD, SOURCE_PRD_NUM). Each row shows a possible combination of starting, current, AND ending product. Note that current product can be the same as starting product and/or ending product.

FPSBASE.RTG_REF_PRDS_PLUS

IMPORTANT NOTE: This view is an exception to our normal rule that PLUS objects have the same rows as their parent object. In this case, RTG_PRDS includes all prds in the MES regardless of whether or not they are active. This is because we want all prds in the supply chain of an active prd to be active. But we only want to use active prds in all other uses so this view joins on RTG_ACTIVE_PRD_ROUTES so that inactive prds are filtered out. The other key logic here is when prd_to_use_for_ct is set that we get the cycle time from that other prd. Therefore any time we are pulling commit and target cycle time we need to pull from this view.

FPSBASE.RTG_REF_PRD_FACILITY_FLUSH

This view builds the future facility/prd/bank path from a given facility/prd/bank. The logic is considerably simplified with the new RTG_PRD_FAC_NEXT_W_BANK view. This will be used in the multi-facility, multi-product WIP_FLUSH.

FPSBASE.RTG_REF_PRD_FACILITY_REVERSE

This view uses connect by to build the past facility/prd path from a given facility/prd. Similar logic is used in the RTG_PRD_FACILITY_FLUSH for the future path. See note in SUM_FROM_STRING function about its usage here.

FPSBASE.RTG_REF_PRD_FAC_NEXT_W_BANK

RTG_PRD_FACILITY_NEXT describes the product routing through multiple facilities including assembly. Its PK is both the source and destination (facility, prd). In between facilities, WIP also moves through end banks defined for each (facility, prd) and listed in RTG_PRD_END_BANKS. Normally these end-banks are sequential, so that WIP is routed through each of them in sequence in between the source (facility, prd) and the destination (facility, prd). This ref-view includes the movement through the end banks. If there is a route from (fac1, prd1) -> (fac2, prd2) in RTG_PRD_FACILITY_NEXT, and there are 2 end-banks (bk1, bk2) in between, then the following 3 rows will appear in this view:

(S_F  S_P  S_B)  (D_F  D_P  D_B)  FIRST_BANK
 ---  ---  ---    ---  ---  ---   ----------
 fac1 prd1 NA     fac1 prd1 bk1   bk1
 fac1 prd1 bk1    fac1 prd1 bk2   bk1
 fac1 prd1 bk2    fac2 prd2 NA    bk1

The FIRST_BANK column is also included to account for the special case of non-sequential/optional end-banks, which are used at Bosch in the PKG facility. In this special case, all the END_BANK_SEQ_NUM are 1 for this (source_facility, source_prd). This ref-view is used to draw the edges and nodes of a directed graph describing multi-product routing, including end-banks.

FPSBASE.RTG_REF_PRD_STEP_EQPTYPES

FPSBASE.RTG_REF_PROCESS_FAMILIES_PLUS

Basic information on process_families plus the Bottleneck Score. Bottleneck Score is the percentage of the goal expected to be achieved by the end of the shift assuming that all tools in the family which are currently up remain up and tools which are currently down remain down. A score under 100% means we do not expect to meet the goal while a score over 100% means that we expect to exceed the goal. Note that we group by process_family rather than by the full GP grouping so this will work even with est goals.

FPSBASE.RTG_REF_PROCESS_RCP_EQPTYPES

This view groups RTG_ASSIGNMENTS by process/est_machine_recipe/eqp_type and adds the standard throughput calculations. This has all possible process/est_machine_recipe/eqptype combinations and uses fill-in logic to estimate the throughput where we have no actual data nor a manual entry. We may set FILLIN_RECIPE in RTG_PROCESSES to SLOW, MED, FAST to use as default for new recipes and then we would manually populate SLOW, MED, FAST est_machine_recipe data for each EQP_TYPE in this table. Note that FIRST_unit_SEC, unit_INT_SEC, BATCH_INT_SEC allow us to calculate throughput for variable wafer sizes. Then we calculate end-to-end (EET) or theoretical (TCT) from these three values plus the wafer qty. The logic to determine pri_process_family is almost identical to EQP_REF_TOOL_PROCESSFAMS so these should often be modified together.

FPSBASE.RTG_REF_PROCESS_RCP_TOOL_7D

Base query for RTG_PROCESS_RCP_TOOL_BASE in a separate table for speed. That's right! This is the base of the base! This normally runs daily while RTG_PROCESS_RCP_TOOL_BASE and related tables are part of the 30min job.

FPSBASE.RTG_REF_PROCESS_RCP_TOOL_BASE

This gets both assignments and recent usage grouped by facility, process, est_machine_recipe, and tool. It is used in both RTG_REF_PROCESS_RCP_EQPTYPES and EQP_REF_TOOL_PROCESSFAMS.

FPSBASE.RTG_REF_PROCESS_RCP_T_UNIONS

This gets both assignments and recent usage grouped by facility, process, est_machine_recipe, and tool. It is used in both RTG_REF_PROCESS_RCP_EQPTYPES and EQP_REF_TOOL_PROCESSFAMS.

FPSBASE.RTG_REF_PROCFAM_FAST_FROM_RRSP

This base view only exists to speed RTG_PROCESS_FAMILIES which refreshes in the RealTime job. This refresh takes a couple of seconds and can refresh at a less frequent interval.

FPSBASE.RTG_REF_QUEUE_MULTI_AREA_BASE

Queue timers can start at either DISPATCH, BEGIN, END, or COMP of the start step. END is preferred and is the default. The timer always ends with the BEGIN at the end step. This logic first calculates when the timer started (queue_start_inst) and then filters for lots where indeed the timer has started as well as not yet ended. These are the lots which we consider to be in a queue timer meaning: 1) We will consider increasing the priority to avoid missing the queue timer in RTG_QUEUE_MULTI_AREA_PRTY. This view includes some logic for this which is described in comments of that view. 2) We assume these lots will run ahead of any lot which we are considering whether to start into a new timer in RTG_QUEUE_MULTI_AREA_START.

FPSBASE.RTG_REF_QUEUE_MULTI_AREA_PATH

This checks all steps in the queue timer for each lot which are not skippable metrology to see if they have a run path. If not then we are not allowed to start into the timer. Unlike RTG_QUEUE_MULTI_AREA_WIP there is no intelligent calculation for when we might be able to start into the timer or any ability to trickle lots into the timer. We simply have no path so we cannot start.

FPSBASE.RTG_REF_QUEUE_MULTI_AREA_PRTY

This logic gives a curr_priority to lots in a queue timer if the X-Factor required to finish the timer before it expires is less than the threshold configured for that priority. For example, let's say QUEUE2X priority is configured to have a threshold of 2x processing time and QUEUE3X priority is configured at 3x. We have a lot which is 3 steps from the end of the queue timer and those three steps have 1, 2, and 2 hours of processing time, respectively. The queue timer will expire in 10 hours but all timers end when the lot begins processing at the end step. In order to calculate our X-Factor correctly, we need to add the 2 hours processing at the end step to the expire inst. So we get (10 + 2) / (1 + 2 + 2) = 12 / 5 = 2.4. This X-Factor Required of 2.4 is not less than the threshold for QUEUE2X so it does not get that highest priority, but it is less than the threshold for QUEUE3X so it gets that priority. This priority of QUEUE3X is fed to the Dashboard for display and to the Scheduler for calculation and is used by all components of the DWH. The logic to calculate the xfactor_rqd is in RTG_QUEUE_MULTI_AREA_BASE but it makes sense to comment on it here since this is the key to setting the curr_priority.
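The threshold comparison above can be sketched as a case expression; this is an illustration only, and every name below (lots_in_timers, hours_to_expire, end_step_proc_hours, remaining_proc_hours) is a hypothetical placeholder, not the actual column set:

```sql
-- Sketch of the X-Factor check: required X-Factor = (hours until the
-- timer expires + processing hours at the end step) / remaining
-- processing hours. For the worked example, (10 + 2) / 5 = 2.4,
-- which is >= 2 but < 3, so the lot gets QUEUE3X.
select lot, timer_id,
       (hours_to_expire + end_step_proc_hours) / remaining_proc_hours
         as xfactor_rqd,
       case
         when (hours_to_expire + end_step_proc_hours)
              / remaining_proc_hours < 2 then 'QUEUE2X'
         when (hours_to_expire + end_step_proc_hours)
              / remaining_proc_hours < 3 then 'QUEUE3X'
       end as curr_priority
  from lots_in_timers;  -- hypothetical source of per-lot timer data
```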

FPSBASE.RTG_REF_QUEUE_MULTI_AREA_STEPS

This table has each step in the timer sequence after the start step for every lot and timer_id. It is used as a base for both RTG_QUEUE_MULTI_AREA_WIP and RTG_QUEUE_MULTI_AREA_PATH. We use a left join on WIP_STEP_FUTURE so we calculate WIP for all timers even with no lots coming. This is because the timer could be full of lots inside even with none coming, and we want to start any lot that unexpectedly arrives at the start step. Our logic to use this will first look for the specific lot+route+start_step, which allows us to set wait_sec_to_sched by lot so we can slowly release lots based on WIP conditions. But if we have no information on the specific lot then we will stop any lot.

FPSBASE.RTG_REF_QUEUE_MULTI_AREA_WIP

This is checking if any step in the queue sequence already has too much WIP for us to want to start more lots into the timer. This logic only applies if the timer spreads over multiple sched_groups (counting not scheduled as a group) because Scheduler will handle it if the timer is entirely within one sched_group. The logic here says that any lot which is already in a queue timer will need to run before the lots that we are considering whether or not we should start at the queue start step. In addition, we assume that lots without queue timers will run at their current step if not on hold, unless they have been sitting for over seven days in which case we assume they will continue to sit. This logic is certainly not perfect but it is reasonable. For all lots that are at or coming to a step that is the start step for a timer over multiple sched groups, we use this logic for WIP ahead of this lot and only block the lot from starting if ALL of the following are true for one of the steps in the timer:

1) WIP per tool ahead of this lot by process family indicates that the lot cannot process before the queue timer will expire
2) WIP per tool ahead of this lot by process subfamily indicates that the lot cannot process before the queue timer will expire
3) Conservative subfamily logic, assuming the subfamily only gets 50% of the tool capacity in the family that it used in the last week, does not indicate that it can still make the timer
4) Sched_start is after the queue timer will expire (or lot is not scheduled at the step)

Consider #1 as the baseline logic and #2, #3, and #4 as exceptions (#2 in one direction and #3+#4 in the other) to cover cases where all tools in the family cannot run all processes. #2 covers the case where the process family overall has capacity but the specific tools that can run this subfamily are backed up. For this check, we assume that the subfamily could use all of the capacity of its allowed tools since we know the process family overall has capacity. In this case, we want to block the lot from starting. The opposite of this is #3, where the process family overall does not have capacity but the subfamily does. For this one we certainly cannot assume the subfamily can use all of the capacity of its allowed tools. It is tricky to even use the historical weighted capacity because we know the family overall is backed up and other subfamilies will likely take more of the capacity than normal. So we conservatively assume that the subfamily will be able to use 50% of the historical weighted capacity, which will at least allow a few lots of each subfamily to trickle in to the timer. #4 covers a similar case as #3 where the process family overall is over capacity but it is weighted towards other tools, and the tools where this lot can run are able to run before the timer expires. This logic says that if the Scheduler found a spot for the lot to run before the queue timer expires, then far be it from this multi area queue logic, which is not as smart as Scheduler, to say that it cannot run. This is better than #3, but since Scheduler only schedules 12-24 hours in advance we still need #3 to cover queue timers which are longer than the schedule windows of 12-24 hours.

FPSBASE.RTG_REF_QUEUE_TIMERS_LOT_CURR

Status of lots currently subject to queue timers. This is refreshed into a table even though it is fast because it is used in WIP_REF_LOTS_COLS_W_DD_PRTN and it was strangely slow as a view even with a having clause.

FPSBASE.RTG_REF_QUEUE_TIMERS_RT_STEP

This is a refreshed table with the primary key of facility, route, step to use in UPDATE_WIP_STEP_FUTURE. Even with filtering for the specific route, we still needed to join three times on RTG_ROUTE_STEPS_PLUS, which was taking 500 ms instead of less than 50 using this smaller table. The ALL_STEPS view only contains timers which start and end on the same route, but this is fine since the columns in WIP_STEP_FUTURE are just for reference.

FPSBASE.RTG_REF_QUEUE_TIMER_STEP_PCT

This table has the primary key of timer_id, facility, route, and step and calculates the timer start percentage for each step in the queue timer. It only works for timers which start and end on the same route.

FPSBASE.RTG_REF_RANKS

A simple view to combine current and permanent ranks, which are separated so that we can set a foreign key on RTG_PERM_RANKS. The rank_sort is used for two different purposes which are critical. First is to determine the curr_rank of a particular lot on a particular tool. For this, we want the "worst" rank so the highest rank_sort is used. For example, if the tool is down (sort 12) and is inhibited (sort 33) and requires a setup (sort 5) then the rank is I for inhibited since that is the worst option as determined by the highest rank_sort. Second is to determine which tool is the best for the lot. For this, the lowest rank_sort is used. For example, if tool T1 is down (sort 12) and tool T2 is inhibited (sort 33) and tool T3 requires a setup (sort 5) then the "best" tool is T3 as determined by the lowest rank_sort. Since we agree that lots which are both DOWN and BLOCK should have the process state of BLOCK, we must ensure that all ranks where process_state is BLOCK have a higher rank_sort than ranks R and D where process state is DOWN. Similarly, all ranks where process state is WAIT should have a lower rank_sort than R and D. We set the rank_sort for all ranks in FPSADMIN tables so we have full control over this in our DWH and are not dependent on FPSINPUT.
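The "worst rank wins" rule maps to Oracle's aggregate KEEP clause; the following is only a sketch of that idea over a hypothetical per-reason rank table (lot_tool_ranks and its columns are assumptions):

```sql
-- Sketch: for each lot-tool, the curr_rank is the rank with the HIGHEST
-- rank_sort (worst wins). lot_tool_ranks is a hypothetical table with
-- one row per lot, tool, and rank reason.
select lot, tool,
       max(rank) keep (dense_rank last order by rank_sort) as curr_rank
  from lot_tool_ranks
 group by lot, tool;
```

Picking the best tool for a lot is the mirror image: take the tool whose curr_rank has the lowest rank_sort across the lot's candidate tools.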

FPSBASE.RTG_REF_ROUTES_PLUS

A view to add extra information for each ROUTE. Using inner joins and group by makes this fast.

FPSBASE.RTG_REF_ROUTE_STEPS_PLUS

A view to combine all information about the route-step and its associated process. This is refreshed because we use this information frequently -- particularly WIP_MODULE and EQP_TYPE/FAMILY. The view does left joins on all tables to absolutely ensure that all records in RTG_ROUTE_STEPS are in RTG_ROUTE_STEPS_PLUS. Logically we should be able to do inner joins on several related tables including RTG_ROUTES, RTG_OPERATIONS, RTG_PROCESSES, and several RTG_PROCESS_xxx tables. All of these tables have foreign keys and not null constraints set up appropriately to ensure all records. However if the constraint is disabled or you load objects out of order then we could lose rows so we just use left join for all. This logic uses RTG_PROCESS_RCP_EQPTYPES so you must build RTG_ROUTE_STEPS then RTG_PROCESS_RCP_EQPTYPES then RTG_ROUTE_STEPS_PLUS.

FPSBASE.RTG_REF_RS_MODE_BY_BUILDING

This view populates the mode_building by route and step in order to support Dashboard reporting by building. It will effectively choose one building per route and step until DWH 8.0, when the PK of WIP_END_SHIFT_HIST will include the building.

FPSBASE.RTG_REF_START_PRDS

This view shows the starting products (products with no further sources) which go into a given product, taking *only* the products with the highest SOURCE_PRD_NUM_PREFERENCE for each (DEST_FACILITY, DEST_PRD, SOURCE_PRD_NUM). In other words, it tries to show the current most recent version of the starting products that go into an arbitrary product.
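
The "highest preference wins" selection can be sketched with an analytic rank; an illustrative query only, where source_products is a hypothetical name for the underlying source exposing the columns named in the comment:

```sql
-- Illustrative sketch: keep only the row(s) with the highest
-- SOURCE_PRD_NUM_PREFERENCE per (DEST_FACILITY, DEST_PRD, SOURCE_PRD_NUM).
select *
  from (select s.*,
               rank() over (partition by dest_facility, dest_prd, source_prd_num
                            order by source_prd_num_preference desc) as pref_rank
          from source_products s)  -- hypothetical source name
 where pref_rank = 1;
```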

FPSBASE.RTG_REF_TOOL_ASGN_RATIOS

This view is used to populate the table RTG_TOOL_ASGN_RATIOS.

FPSBASE.RTG_ROUTE_STEPS_DUPLICATED

Quick reference list of route-steps that are duplicated to support shared facilities.

FPSBASE.RTG_RS_TO_KEEP_INACTIVE_FAST

Please read all about inactive steps on FPS Help Site at https://help.finalphasesystems.com/display/DW/All+about+inactive+steps.

FPSBASE.RTG_RS_TO_KEEP_INACTIVE_SLOW

Please read all about inactive steps on FPS Help Site at https://help.finalphasesystems.com/display/DW/All+about+inactive+steps.

FPSBASE.RTG_SECTION_ALL_FAC_SORT

This somewhat complex query gets a sort order for each route_section. This is imperfect because sections can be in different orders on different routes but this is about as close as possible since it uses a weighted average by wip. There is a separate view for process which is identical.

FPSBASE.RTG_STEP_TOOL_BALANCE_COMBINED

View that combines the RTG_STEP_BALANCE and RTG_COMMON_STEP_BALANCE configuration to provide data for WEH trigger to evaluate on COMP event types and log the tool balance tables.

FPSBASE.RTG_STEP_TOOL_BALANCE_RATIOS

View that returns the list of steps and tools that have start step balance ratios applied based on WIP qty in the balance window.

FPSBASE.THP_APD_EQPTYPE_WEEK_HIST

This important view uses WIP_STEP_HIST to generate the throughput for the week by all of the possible categories. It is nearly identical to THP_APD_TOOL_WEEK_HIST in terms of logic, and differs only in the groupings over which the logic is applied. See the function FPSBASE.THP_APD_HIST for the THP calculation details.

FPSBASE.THP_APD_TOOL_WEEK_HIST

This complex and important view uses WIP_STEP_HIST to generate the throughput for the week by all of the possible categories. The throughput calculation logic is found in the function FPSBASE.THP_APD_HIST.

FPSBASE.THP_REF_EQPTYPE_AUTO

This view calculates the throughput for each eqp_type, process, and est_machine_recipe on each chamber combination. It includes all data even if there is only one data point. This is in contrast to THP_TOOL_AUTO which has only valid data and additionally groups by tool and actual_machine_recipe. Both use the same THP_REF_AUTO function as all of the logic is identical. There is no fill-in logic here as we trust the GET_THP_VALUES function to do that.

FPSBASE.THP_REF_EQPTYPE_SUMMARY

This helper view compares manual and external THP values to the auto-calculated ones in THP_EQPTYPE_AUTO, and returns these results along with all of the default columns of THP_EQPTYPE_AUTO. The logic here is similar to that in THP_P_SVTL_EXTL_MANL_AUTO_COMP. If one of these views changes, the other likely should as well.

FPSBASE.THP_REF_TOOL_AUTO

This view calculates the throughput for each tool, process, est_machine_recipe, and actual_machine_recipe on each chamber combination. It only includes valid data. This is in contrast to THP_EQPTYPE_AUTO which groups only by eqp_type, process, and est_machine_recipe. Both use the same THP_REF_AUTO function as all of the logic is identical. There is no fill-in logic here as we trust the GET_THP_VALUES function to do that.

FPSBASE.THP_REF_VALUES_LOOKUP

This table caches the results of the GET_THP_VALUES function for all combinations of input parameters. It is used in the very beginning of the GET_THP_VALUES function when i_use_values_from_table is Y so the function is extremely fast when it can use the record from this table.

FPSBASE.THP_TOOL_SUMMARY

This helper view compares manual and external THP values to the auto-calculated values in THP_TOOL_AUTO and returns the results along with all of the default columns of THP_TOOL_AUTO. The logic here is similar to that in THP_P_SUM_EXTL_MANL_AUTO_COMP. If one of these views changes, the other likely should as well.

FPSBASE.WIP_APD_DEMAND_HIST

This view loads the current demand plan into history.

FPSBASE.WIP_APD_GOALS_PER_SHIFT_HIST

This view appends the current shift goals to the history table if not appended yet. It also checks for previous shifts with missing goals and appends the most recent goals as the goals for those missing shifts. This matches what really happened because when we do not update the goals for new shifts we just continue to use the previous shift's goals.

FPSBASE.WIP_APD_STARTS_HIST

This view loads the current starts plan into history.

FPSBASE.WIP_DEL_CURR_LOTS_TOOLS

Faster view to delete records from this table without the complex logic.

FPSBASE.WIP_DEL_LOTS_COLS_NO_DD_PRTN

This is a fast view to delete records from this table without the complex logic of the REF view. Please note that the REF view joins WIP_LOTS_REALTIME with WIP_LOTS_STATIC but we use only WIP_LOTS_STATIC here for speed reasons. It is possible that a lot could be deleted from WIP_LOTS_REALTIME by an event before it is deleted from WIP_LOTS_STATIC by ETL. In this case, the recently deleted lot would not be deleted from WIP_LOTS_COLS_NO_DD_PRTN but that is acceptable because this table is only used in the WIP_LOTS_PLUS view where it is joined with WIP_LOTS_REALTIME. Then the record will soon be deleted on an upcoming refresh after it is deleted from WIP_LOTS_STATIC.

FPSBASE.WIP_DEL_LOTS_COLS_W_DD_PRTN

Duplicate of WIP_DEL_LOTS_COLS_NO_DD_PRTN. See comments in that view.

FPSBASE.WIP_DEMAND_BY_WEEK

This view summarizes our demand by plan week for use in WIP_FLUSH. The key feature of this view is that it subtracts what has already shipped from the current week demand.

FPSBASE.WIP_END_SHIFT_HIST_PLUS

This view does some useful calculations like adding the ENDWIP columns to get num_lots_endwip and qty_endwip and similarly for num_lots_moved and qty_moved. It also calculates the total columns used in the CTM views. We should almost always use this view rather than the WIP_END_SHIFT_HIST table.

FPSBASE.WIP_GOAL_SUM_SHIFT_HIST

This view summarizes lot level goals for the shift.

FPSBASE.WIP_HIST_FOR_ETP

A convenient view used for ETP debugging only which unions WIP_EVENT_HIST and WIP_WAFER_HIST events that are relevant to ETP. Please note this view does not have any filtering on inst so it is essential that you add this filter otherwise your query will do a full table scan on both WEH and WWH! We almost always want a filter on tool as well. Furthermore we often want to filter on is_entity which filters out ports or is_event which filters only BEG/END or is_etp which does both. So a sample query: select * from wip_hist_for_etp where inst > data_date - 0.5 and tool = 'TTT' and is_etp = 'Y';

FPSBASE.WIP_HOLD_TYPES_PLUS

This is a simple view to combine hold tables but this view is a good place to store our definitions for holds:
- Current hold: A lot that is currently on hold. The cause could have been from a line hold or from a lot hold.
- Future hold: A lot that we know will go on hold in the future when it reaches a particular step. Again the cause could be from a line hold or from a lot hold.
- Line hold: A systematic hold that affects a defined group of lots. Line holds must include step and may also specify a route or product or lot type or other grouping. For example, you might place a line hold on any lot that arrives at step S100. Lots which are at this step and on hold would be considered current hold due to this line hold while lots which will arrive at this step soon would be considered future hold at this step due to line hold.
- Lot hold: A hold that is specific to a particular lot id.
A couple notes about these four definitions for clarity:
- All holds are either current or future -- never both nor neither -- and a future hold changes to a current hold when the lot arrives at the particular step.
- All holds are either line or lot -- never both nor neither.
- Combining the two above gives that each hold is either current lot, current line, future lot, or future line.

FPSBASE.WIP_LOTS_PLUS

The four tables WIP_LOTS_REALTIME, WIP_LOTS_STATIC, WIP_LOTS_COLS_NO_DD_PRTN, and WIP_LOTS_COLS_W_DD_PRTN are joined together in this final view with inner joins on lot. This view is used by nearly everything including DASH_P_WIP_LOTS and WIP_STEP_FUTURE_PLUS. It includes what we consider useful columns rather than including every column in the four tables. The columns omitted are used to populate other columns or other tables, especially ECT_HIST. Here is a query that shows all omitted columns: select w1.table_name, w1.column_name from dbf_tab_columns w1 left join dbf_tab_columns w2 on w1.column_name = w2.column_name and w2.table_name = 'WIP_LOTS_PLUS' where w1.table_name in ('WIP_LOTS_REALTIME','WIP_LOTS_COLS_NO_DD_PRTN','WIP_LOTS_COLS_W_DD_PRTN', 'WIP_LOTS_STATIC') and w2.column_name is null order by w1.table_name, w1.column_name;

FPSBASE.WIP_LOTS_TOOLS_REALTIME

This view includes the critical logic for determining beg_proc_inst and end_proc_inst which goes into WIP_STEP_HIST. For the beg_proc_inst, we look for the following events in this order:
1) Automatically logged begin event (event_type BEGIN) for the lot from WIP_EVENT_HIST.
2) Automatically logged begin event (event_type BEGIN) for the first wafer from WIP_WAFER_HIST. This is the first_wfr_beg_inst column regardless.
3) Manually logged begin event (event_type BEGIN_M) for the lot from WIP_EVENT_HIST.
For the end_proc_inst (which is also modified in WIP_LOTS_REALTIME_UPDATE_BEF when we know the COMP event):
1) Automatically logged end event (event_type END) for the lot from WIP_EVENT_HIST.
2) Automatically logged end event (event_type END) for the last wafer from WIP_WAFER_HIST. This is the last_wfr_end_inst column regardless.
3) Manually logged end event (event_type END_M) for the lot from WIP_EVENT_HIST.
4) Complete event for the lot (event_type COMP which moves to next step) from WIP_EVENT_HIST only if this event has a tool logged.
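
The priority order for beg_proc_inst amounts to a coalesce over the candidate timestamps. This is a sketch of the idea, not the view's actual SQL; the lot_begin_inst and lot_begin_m_inst column names are hypothetical (first_wfr_beg_inst is the column named above):

```sql
-- Illustrative fragment only; lot_begin_inst and lot_begin_m_inst are
-- hypothetical names for the candidate timestamps described above.
coalesce(lot_begin_inst,      -- 1) BEGIN for the lot (WIP_EVENT_HIST)
         first_wfr_beg_inst,  -- 2) BEGIN for the first wafer (WIP_WAFER_HIST)
         lot_begin_m_inst)    -- 3) BEGIN_M for the lot (WIP_EVENT_HIST)
```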

FPSBASE.WIP_LOTS_TOOLS_WEH_REALTIME

Uses WIP_EVENTS_CURR_STEP to get current information about each lot+tool which is perfectly in sync with WIP_EVENT_HIST. Note that if a lot has events logged to multiple tools at the same step, it will have multiple records in this view.

FPSBASE.WIP_LOTS_TOOLS_WWH_REALTIME

Uses WIP_WAFER_EVENTS_CURR_STEP to get current wafer information about each lot+tool which is perfectly in sync with WIP_WAFER_HIST. Note that if a lot has wafer events logged to multiple tools at the same step, it will have multiple records in this view. This view is used in WIP_LOTS_TOOLS_REALTIME which is used in the WIP_EVENT_HIST_UPDATE_BEF trigger. It is important to do that for accurate population of WIP_STEP_HIST. However the trigger does not get updated with each wafer event so it is also important to use this view directly in WIP_REF_LOTS_COLS_NO_DD_PRTN to get the updated values of num_wfrs_ended and wfr_int_sec_for_est_end.

FPSBASE.WIP_LOTS_TOOL_FOR_WSH

This view is only called before the insert to WIP_STEP_HIST to choose the best tool in the case when multiple tools were logged during the step. Therefore we can use the if_comp fields for end_proc_inst.

FPSBASE.WIP_LOTS_TO_KEEP_IN_HIST

Used in ADM_PURGE_TABLE_AUTO to keep active lots as recently finished lots in WIP_STEP_HIST, WIP_LOT_HIST, and ECT_HIST until their record is inserted in CTM_FINISHED_LOT_HIST and ECT_FINISHED_LOT_HIST. This view is either used in a merge statement to set next_inst_to_purge or used in a delete statement to prevent deleting active lots like this: where inst < t1 and (lot, facility) not in (select lot, facility from wip_lots_to_keep_in_hist) We tested that the inst is the limiting factor and that adding the filter with this view does not add any time to the delete. It is also not any faster to refresh this into a table.

FPSBASE.WIP_LOT_HIST_FOR_SNAPSHOT

The columns listed here are grouped and sorted alphabetically within each group. All columns which are in both WIP_LOT_HIST and WIP_LOTS_REALTIME are included except for those in groups 7 and 8:
1) Important fields listed first
2) Next_xxx value applies after the event if populated so we use nvl(next_xxx, xxx)
3) xxx_ent_inst columns need to be evaluated
4) Value applies after the event and therefore is used in this view
5) Value applies before is_next event and therefore is cleared or set to default
6) Value applies before is_in_wsh event and therefore is cleared or set to default
7) Only associated with the event so unnecessary for snapshot context but included to match normal WLRT behavior
8) Value applies to the route-step and should be looked up from RRS_PLUS therefore omitted

FPSBASE.WIP_LOT_TYPES_PLUS

A simple view to combine these tables so we can get LOT_GROUP from LOT_TYPE which is done frequently.

FPSBASE.WIP_OPER_MOVES_LAST_DAY

Quick view to get oper moves for the previous day and current day to use for comparison to client moves in CHK_OPER_MOVES view and to use in ADM_FIX_OPER_MOVE_AND_COMP procedure.

FPSBASE.WIP_OPER_MOVE_HIST

A convenient view used for oper moves debugging which shows all moves for all lots with all of the columns necessary to show whether each move was counted as an oper move. It also limits the columns and sorts in a logical order. Remember the logic is to check for any reason why the lot should NOT be counted as an oper move; if we find no reason, we count it. By default this view shows the previous shift although it can be easily copied then edited to filter on any range. Sample queries: select * from wip_oper_move_hist where lot = 'LOT1'; select * from wip_oper_move_hist where (prd = 'PRD1' or route = 'Sailboat'); Also we did not make a separate view but you can query WIP_END_SHIFT_HIST or WIP_PLAN_DAY_HIST and see oper moves like this: select start_shift, end_shift, route, process, bank, qty_comp, qty_oper_moves, operation from fpsbase.wip_end_shift_hist where (prd = 'PRD1' or route = 'Sailboat') and start_shift > sysdate - 1 order by route, step;

FPSBASE.WIP_PLAN_DAY_HIST_PLUS

This view does some useful calculations like adding the ENDWIP columns to get num_lots_endwip and qty_endwip and similarly for num_lots_moved and qty_moved. It also calculates the total columns used in the CTM views. This view is a copy of WIP_END_SHIFT_HIST_PLUS and we should almost always use it rather than the WIP_PLAN_DAY_HIST table.

FPSBASE.WIP_PRIORITIES_PLUS

A simple view to combine these tables so we can get prty_ctm_factor from priority which is done frequently.

FPSBASE.WIP_REF_CURR_LOTS_TOOLS

Refreshed table to store the rank, standby state, and block reason for lots at the current step for faster access. This is not part of the 3min load so it is not as realtime as the rest of the DWH because it is slow. It is used for blocked lots on the Dashboard and standby state for ETP but should not be used for scheduling or NextMove. This should be joined on lot/step/tool even though lot/tool is the PK since step might be different. The filter on tool is not null is necessary because WSFA has all rows in WSF including those without tools assigned.

FPSBASE.WIP_REF_FLUSH

The FPS WIP Flush estimates when all of the production lots currently in line plus planned starts will exit the current facility and all future facilities using different methods including historical, plan, and FIFO. It also calculates the estimated quantity at the exit of each facility based on line and sort yield and calculates when each lot needs to exit each facility to meet the demand for each product. It includes all of the necessary information about the lot like product, priority, lot type, qty, parent state, etc. as well as information about its current state like hold, rework, bank, step, etc. In short, our goal is for this table to include everything about every lot that is necessary for planning. Refer to http://help.finalphasesystems.com/display/WF/WIP+Flush+Logic for all WIP Flush application documentation as the Help Site has the latest version of the comments that were originally in this view. WIP_REF_FLUSH is now split into three parts (BASE, ADD_SHUTD, and the final WIP_REF_FLUSH) simply because Oracle refuses to compile the single combined view. We observed similar behavior with WIP_GOAL_EST_SHIFT and CTM_SUMMARY and we solve this just by splitting up the logic and refreshing each section into a table. The side benefit of this is that it allows us to preview the subsequent logic after the BASE view rather quickly. Refer to header comments in WIP_FLUSH_REORDER_DEMAND for details about our demand logic.

FPSBASE.WIP_REF_FLUSH_ADD_SHUTD

See comments in WIP_REF_FLUSH view for the entire WIP_FLUSH series. The logic in this particular view is pretty crazy but mostly mechanical. In the future, it could potentially be replaced by a recursive program but for now we just do the same logic for each of the next n upcoming shutdowns at the site. We must react to the first shutdown and adjust all times for all facilities and then move on to the second shutdown using the adjusted information. And so on for each of the n shutdowns. So what is n? Honestly, n is whatever Oracle will tolerate: Oracle hung and would not compile this view if we tried too many loops. At one site, seven was fine but it hung at eight. At another site, three was fine, six barely succeeded but took 90 seconds to create the view, and seven hung. At sites that normally run 24x7 three is plenty (in fact one or rarely two would be enough) but at sites with standard weekly shutdowns (i.e. facility does not work weekends) more loops are needed. Seven is the max but that still means that only the next seven weeks are perfectly accurate. If we ever get a site with nightly shutdowns (i.e. facility only operates during daytime) then we will need to review a whole bunch of logic in DWH including this.

FPSBASE.WIP_REF_FLUSH_BASE

See comments in WIP_REF_FLUSH view.

FPSBASE.WIP_REF_FLUSH_CURR_FAC

This refreshed table exists only to speed up the load of WIP_LOTS_ADDL_COLS. After extensive testing, we found that creating this smaller table with only the current facility and a primary key of lot resulted in significant savings at sites with multiple facilities.

FPSBASE.WIP_REF_GOALS_CURR_SHIFT

This view gets the completes and moves goals for the current shift from both EST and LOT tables. To refresh all goal logic on the Dashboard after making any input changes run this:
exec adm_load_object('LOAD_GOALS_or_DASH_W_TARGETS_or_something');
-- For LB/DR based goals
exec adm_load_object('GP_B_ROUTE_STEPS');
exec adm_load_object('GP_P_ROUTE_STEPS');
exec adm_load_object('GP_P_PRIORITY_LINE_BALANCE');
exec adm_load_object('GP_P_PRIORITY_DRUM');
-- end of LB/DR based goals
exec adm_load_object('WIP_GOALS_PER_SHIFT');
exec adm_load_object('WIP_GOALS_CURR_SHIFT');
-- This table has custom logic to only refresh when it is a new day unless empty
truncate table fpsbase.wip_goals_curr_plan_day;
exec adm_load_object('WIP_GOALS_CURR_PLAN_DAY');

FPSBASE.WIP_REF_HOLD_ESTIMATES

This view calculates the estimated_hold_sec for each process_family, hold_type, lot_group based on data from the last 28 days. These are the default values for estimated hold seconds and the values for remaining hold seconds are half of these estimated values. If you wish to use historical hold time by prd, route, step, and lot_group instead of these values then you set the use_ctm_bef_est_for_release flag in GEN_FACILITIES. Either way, the defaults can be overridden by ovr_estimated_hold_sec and ovr_remaining_hold_sec in WIP_HOLD_TYPES or more specifically by ovr_release_inst in WIP_LOTS_VERIFY. Please see the column comment on ovr_remaining_hold_sec for an example of how estimated and remaining are used to calculate the estimated release inst of all lots currently on hold.

FPSBASE.WIP_REF_LOTS_COLS_NO_DD_PRTN

This is the first refreshed table in our WIP_LOTS hierarchy. This view includes only columns which do not depend on data_date and which do not use a "partition by" clause. That means that when we refresh this table repeatedly in the RealTime job, only records for lots which had some activity will be updated. This table has many columns but because of the small number of records updated each time, it should refresh relatively quickly. All columns which depend on data_date or which use a "partition by" clause are in WIP_LOTS_COLS_W_DD_PRTN. The four tables WIP_LOTS_REALTIME, WIP_LOTS_STATIC, WIP_LOTS_COLS_NO_DD_PRTN, and WIP_LOTS_COLS_W_DD_PRTN are joined together in the WIP_LOTS_PLUS view which is used by nearly everything including DASH_P_WIP_LOTS.

FPSBASE.WIP_REF_LOTS_COLS_W_DD_PRTN

This is the second refreshed table in our WIP_LOTS hierarchy. This view includes columns which either depend on data_date or which use a "partition by" clause. That means that when we refresh this table repeatedly in the RealTime job, nearly all records will be updated, even those for lots which had no activity. Fortunately this table has a small number of columns so it should refresh relatively quickly. All columns which do not depend on data_date and which do not use a "partition by" clause are in WIP_LOTS_COLS_NO_DD_PRTN. Since nearly every record in this table will be different on each refresh, there is a question of whether to use DBPR or DBPM. We found that DBPR is faster (13 sec vs 18 sec) but also uses quite a bit more CPU according to ADM_TOP_SQL_SNAPSHOT. We generally use DBPR but either is acceptable, especially since we improved ADM_LOAD_EVENT_HIST_VIA_APD to call this with ADM_TABLE_MATCH_COMMIT in DWH-2755. The four tables WIP_LOTS_REALTIME, WIP_LOTS_STATIC, WIP_LOTS_COLS_NO_DD_PRTN, and WIP_LOTS_COLS_W_DD_PRTN are joined together in the WIP_LOTS_PLUS view which is used by nearly everything including DASH_P_WIP_LOTS. The initial logic in this view gives a rough estimate of when lots which have not started processing will complete their current step. This rough estimate only applies to lots which do not have an estimated completion from an external reservation nor from Scheduler. Obviously when we have either of those more accurate estimates we will use them. This rough estimate is not expected to be perfect; the main goal is to smooth out WIP bubbles so that we do not estimate that all lots waiting at a step will arrive at the next step at the same time. Here is how this rough estimate logic works:
1) We find all lots which have estimated complete times. These include lots which are processing, which have an external estimate, or which have a Scheduler end time.
2) We find the latest of these estimated complete times for the process family and consider that time to be the starting point for lots without estimates.
3) We estimate the interval at which lots will move out of this process family. Note this has nothing to do with tools because we do not know the tool. In fact, this estimated move interval for the process family is approximately the estimated complete interval for each tool divided by the number of tools. Then we adjust this estimate specifically for each lot based on its throughput and on the current availability of tools in the process family. This is not perfect nor is it intended to be but it is reasonable.
4) We cumulate this estimated move interval for all lots in the process family sorted by status (dispatched first, hold last), priority, and age.
5) We add this cumulative interval after the last estimate from #1.
EXAMPLE: Lot A1 is processing on tool A and estimated to complete at 1:00. Lot A2 is also processing on tool A estimated to complete at 2:00. Lot B1 is processing on tool B and estimated to complete at 1:30. Lot B2 is dispatched to tool B with no estimated complete. Lots 4 and 5 are waiting. The last estimated complete for the process family is 2:00. The estimated move interval for the process family is 25 minutes per lot so the cumulated interval is 25 for lot B2, 50 for lot 4, and 75 for lot 5. We add this cum to the last estimated complete of 2:00 so we get an estimated complete of 2:25 for lot B2, 2:50 for lot 4, and 3:15 for lot 5. Note how we do not care which tool lots B2, 4, and 5 will use. In fact, B2's estimate is based on the last estimated complete of 2:00 from tool A even though we know the lot is dispatched to B.
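
The cumulate-and-add logic of this rough estimate can be sketched with an analytic running sum. Illustrative only; waiting_lots, est_move_interval_min, last_est_comp_inst, and status_sort are hypothetical names for the inputs described above:

```sql
-- Illustrative sketch of the cumulate-and-add steps (not the view's exact SQL).
-- waiting_lots, est_move_interval_min, last_est_comp_inst, and status_sort
-- are hypothetical names.
select lot,
       last_est_comp_inst
       + sum(est_move_interval_min)
           over (partition by process_family
                 order by status_sort, priority, age desc)
         / (24 * 60) as rough_est_comp_inst  -- Oracle DATE arithmetic is in days
  from waiting_lots;
```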

FPSBASE.WIP_REF_PACE_ADJ_RATIO

This ref view calculates the adjustment to a linear pace that should be made based on historical performance by hour relative to final shift or plan day performance. For example, if historically we are at 45% of total moves 50% into a shift, then the pace calculation should be adjusted to compensate for what is expected to be an increase in completes in the last half of the shift.

FPSBASE.WIP_REF_PROCFAM_BANK_ENDWIP

These rates are used to split the cycle time accumulated by lots waiting in WIP at the end of a shift. Immediate skip is not included here because lots waiting have by definition already not immediately skipped. This is refreshed into a table for fast use in WIP_END_SHIFT_HIST_PLUS view.

FPSBASE.WIP_REF_STEP_FUT_DURABLE_ASGN

This query for durables was originally in WIP_STEP_FUT_ASSIGNMENTS but we found it to be slightly too slow for queries filtered by lot because lot is an optional column in RDA. Since it uses RTAL+RTALP with RTG_DURABLE_ASSIGNMENTS, it is acceptable to refresh it frequently into a table rather than query it each time in WSFA. This is heavily tested for speed which is why it uses the base tables rather than the WIP_STEP_FUT_ASGN_UNIONS view. We only include the lot-based assignment tables here which is appropriate because any site where we have durable logic will have lot assignments for those tools. We do not include is_in_prty_gantt but this does not make a big delta to the speed of the DASH_P_GANTT_PRIORITY_LOTS refresh.

FPSBASE.WIP_REWORK_HIST

Simply filters WIP_EVENT_HIST to get only rework events and the relevant info to rework.

FPSBASE.WIP_SCRAP_HIST

Filters WIP_STEP_HIST to get only scrap events and the relevant info to scrap. Note this view should always be used with an inst filter similar to WIP_EVENT_HIST. Also note that scrap_module is set in WIP_STEP_HIST_INSERT_BEF trigger and the logic is described there.

FPSBASE.WIP_SNAPSHOT_EXTENDED_HIST

This view gets a snapshot of the WIP by process family used to build the WIP profile chart for periods longer than 8 days.

FPSBASE.WIP_SPLIT_MERGE_HIST

This view is an attempt to list the basic history of all lots that make up a given current lot including lots that merged into this lot and lots from which this lot was split. Please note that this is not recursive so it only goes one level back but it does list the basic desired info. It is critical that this view be used with a filter on this_lot like this: select * from wip_split_merge_hist where this_lot = 'LOT1' order by inst desc;

FPSBASE.WIP_STARTS_BY_LOT

When WIP_STARTS has a record with a qty greater than 25 wafers (note 1) then this view distributes those starts into individual "lots" of 25 wafers each (note 2) with start times ranging evenly throughout the week (note 3). This view gets used by WIP_FLUSH and WIP_STEP_FUTURE so both of those objects get the distributed starts by lot as desired. For example, if we have 160 wafers recorded to start in a particular week then we would show 25 on each day Mon-Sat and then the remaining 10 on Sunday.
Note 1: Technically speaking we use the max_qty_per_carr value from GEN_FACILITIES which is usually 25 for wafer facilities but for backend facilities measuring qty in die it is obviously much larger.
Note 2: The last lot will have less than 25 wafers unless the start qty is an exact multiple of 25.
Note 3: Most sites record the starts in WIP_STARTS by week but technically this works for any period.
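
The 160-wafer example can be reproduced with a simple row generator; an illustrative sketch, not the view's actual SQL:

```sql
-- Split a 160-wafer start into lots of at most 25 wafers each:
-- produces lot quantities 25, 25, 25, 25, 25, 25, 10.
select level as lot_seq,
       least(25, 160 - (level - 1) * 25) as lot_qty
  from dual
 connect by level <= ceil(160 / 25);
```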

FPSBASE.WIP_STEP_FUTURE_COMBINED

This view combines WIP_STEP_FUTURE and WIP_STEP_FUTURE_SCHED for fast querying. Once we can commit after each WEH event then we can merge these two tables back into one WIP_STEP_FUTURE table and replace all uses of this view with WIP_STEP_FUTURE.

FPSBASE.WIP_STEP_FUTURE_PLUS

This view combines WIP_STEP_FUTURE with other tables to get all of the relevant information needed for goal planning and scheduling. The current step is included along with all upcoming steps. This view is dependent on the current time.

FPSBASE.WIP_STEP_FUT_ASGN_FILTERED

We want all records that match the minimum tab_sort which will include all paths if specified otherwise just one record. If the "best" table is RTG_MISSING_LOT_TOOL_ASGN then we do not want the record at all but it is critical to filter out records from the table after we determine the min_source_asgn_tab_sort. The common case that we care about here is where the lot-step-route-tool has a rank I from RTG_MISSING_LOT_TOOL_ASGN (tab_sort 3) and a rank P from the generic RTG_TOOL_ASSIGNMENTS (tab_sort 5). The min_source_asgn_tab_sort will be 3 so we will filter out the record from RTG_TOOL_ASSIGNMENTS since 5 is not equal to the min of 3. Plus we will filter out the record from RTG_MISSING_LOT_TOOL_ASGN by name even though it is the min of 3. That gets us the desired result of no records for this lot-step-route-tool.
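
The two-stage filter can be sketched with an analytic min; illustrative only, where asgn_unions and source_asgn_table are hypothetical names standing in for the UNIONS source and its table-name column:

```sql
-- Illustrative sketch of the min tab_sort filtering (hypothetical names).
-- Stage 1: compute the min tab_sort per lot-step-route-tool.
-- Stage 2: keep matching rows, then drop RTG_MISSING_LOT_TOOL_ASGN by name.
select *
  from (select a.*,
               min(source_asgn_tab_sort)
                 over (partition by lot, step, route, tool)
                 as min_source_asgn_tab_sort
          from asgn_unions a)
 where source_asgn_tab_sort = min_source_asgn_tab_sort
   and source_asgn_table <> 'RTG_MISSING_LOT_TOOL_ASGN';
```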

FPSBASE.WIP_STEP_FUT_ASGN_FOR_SCH

See complete comments on the WIP_STEP_FUT_ASGN series of views in the WIP_STEP_FUT_ASSIGNMENTS view.
Effective primary key: lot, facility, route, step, tool
Alternate unique key: lot, num_steps_away, tool
Must filter by one of: lot, tool, facility, step, route, num_steps_away, is_in_nmv_pick_list, is_in_prty_gantt, sched_group, process_group

FPSBASE.WIP_STEP_FUT_ASGN_PATHS

See complete comments on the WIP_STEP_FUT_ASGN series of views in the WIP_STEP_FUT_ASSIGNMENTS view.
Effective primary key: lot, facility, route, step, tool, ch_path
Alternate unique key: lot, num_steps_away, tool, ch_path
Must filter by one of: lot, tool, facility, step, route, num_steps_away, is_in_nmv_pick_list, is_in_prty_gantt, sched_group, process_group

FPSBASE.WIP_STEP_FUT_ASGN_PATHS_SCH

See complete comments on the WIP_STEP_FUT_ASGN series of views in the WIP_STEP_FUT_ASSIGNMENTS view.
Effective primary key: lot, facility, route, step, tool, ch_path
Alternate unique key: lot, num_steps_away, tool, ch_path
Must filter by one of: lot, tool, facility, step, route, num_steps_away, is_in_nmv_pick_list, is_in_prty_gantt, sched_group, process_group

FPSBASE.WIP_STEP_FUT_ASGN_UNIONS

See complete comments on the WIP_STEP_FUT_ASGN series of views in the WIP_STEP_FUT_ASSIGNMENTS view. Unlike other views, this view lacks an effective primary key so we must always expect it to have duplicates but it also lacks a "partition by" clause so it can be filtered on any field. The key to this view is that it generates the official rqd_setup value without querying WIP_STEP_FUTURE_SCHED so we can use this in the WIP_STEP_FUTURE_SCHED_BEF trigger.

FPSBASE.WIP_STEP_FUT_ASGN_W_TOOL_LIST

See complete comments on the WIP_STEP_FUT_ASGN series of views in the WIP_STEP_FUT_ASSIGNMENTS view.
Effective primary key: lot, facility, route, step, tool
Alternate unique key: lot, num_steps_away, tool
Must filter by one of: lot, facility, step, route, num_steps_away, is_in_nmv_pick_list, is_in_prty_gantt, sched_group, process_group
Do not filter by: tool

FPSBASE.WIP_STEP_FUT_ASSIGNMENTS

These extended comments describe the entire series of WIP_STEP_FUT_ASGN views. Having these as views saves a whole lot of triggered updates and eliminates what would be a huge table. Speed is critical because these views are used in the WIP_EVENT_HIST_INSERT_BEF trigger and the WIP_REF_CURR_LOTS_TOOLS view, both of which are part of our RealTime job, as well as in DASH_P_GANTT_LOT_STEPS, DASH_P_GANTT_PRIORITY_LOTS, and SCH_P_CURR_TOOL_ASSIGNMENTS. These views have several "partition by" clauses which all include the following columns, which means that filtering on combinations of these columns will be reasonably fast: lot, tool, facility, step, route, num_steps_away, is_in_nmv_pick_list, is_in_prty_gantt, sched_group, process_group.

Here is the series of views:

WIP_STEP_FUT_ASGN_UNIONS: Has only the five unions that start the series of WIP_STEP_FUT_ASGN views. The main reason that we need this view is that it generates the official rqd_setup value without querying WIP_STEP_FUTURE_SCHED, so we can use it in the WIP_STEP_FUTURE_SCHED_BEF trigger. This view does not filter for the best row, so it will have duplicates for the same WIP_STEP_FUTURE record when rows from multiple RTG_TOOL_ASSIGNMENTS tables are applicable and when multiple paths are listed.

WIP_STEP_FUT_ASGN_FILTERED: Filters only the best record for each lot-tool-step-route combination.

WIP_STEP_FUT_ASGN_PATHS: Filters UNIONS for only the best table for each record. Therefore the effective primary key is lot, facility, route, step, tool, ch_path.

WIP_STEP_FUT_ASSIGNMENTS: Approximates chamber-specific run path logic using the functions GET_CH_PATH_STATUS and GET_CH_ALLOWED_STATUS. It checks setups, lot type, qual states, and a few other cases, but it is still reasonably fast because of the functions. This view is used in two places where speed is critical. In the WIP_EVENT_HIST_INSERT_BEF trigger it is filtered by lot/route/step, and this must take milliseconds because it is called for every tool event. Then in WIP_CURR_LOTS_TOOLS it is filtered for all lots at the current step, which can take a few seconds but not much longer since this refreshes frequently. This view is also reasonably fast when filtered for a particular tool since all partition by clauses include tool.

WIP_STEP_FUT_ASGN_W_TOOL_LIST: Adds the allowed tool list as well as which tool is the "best" one to use for the BLOCK logic and Gantt charts. Since this takes a bit longer, we do not want it included where it is not needed. Also, because this view uses a partition by that does not include tool (i.e. it gets information about all tools for a lot-route-step), it cannot reasonably be filtered for a tool.

WIP_STEP_FUT_ASGN_PATHS_SCH: The Scheduler does not need curr_rank, so this view skips all of that logic, including the two chamber functions, and instead gets only the simpler eff_rank used by the Scheduler. In addition, it gets throughput information which is required by the Scheduler but not by other uses. In order to reduce the amount of data, this view includes all lots that are eligible to be scheduled based on eff_rank plus all lots which are already DISP/PROC/ENDED, and it filters out lots/tools whose rank is not used by the Scheduler. Finally, this view lists a separate record for each chamber path (in other words, it has the same effective primary key as PATHS). It is almost always queried with a filter on sched_group.

WIP_STEP_FUT_ASGN_FOR_SCH: Built on PATHS_SCH, so everything above about PATHS_SCH is true here too. The only difference is that this view filters for the best path of each tool, so its effective primary key is lot, facility, route, step, tool.
Here is a basic flow chart with their dependencies:
------------------------------------------------------------
                            -> WSFA -> W_TOOL_LIST
UNIONS -> FILTERED -> PATHS <
                            -> PATHS_SCH -> FOR_SCH
------------------------------------------------------------
Then here is a list of their most important uses:
WIP_STEP_FUT_ASGN_UNIONS -> UPDATE_WIP_STEP_FUTURE
WIP_STEP_FUT_ASGN_UNIONS -> RTG_STEP_TOOL_BALANCE_RATIOS
WIP_STEP_FUT_ASGN_UNIONS -> WIP_STEP_FUTURE_SCHED_BEF
WIP_STEP_FUT_ASGN_FILTERED -> WIP_DEL_CURR_LOTS_TOOLS
WIP_STEP_FUT_ASSIGNMENTS -> WIP_EVENT_HIST_INSERT_BEF
WIP_STEP_FUT_ASSIGNMENTS -> RTG_REF_QUEUE_MULTI_AREA_PATH
WIP_STEP_FUT_ASSIGNMENTS -> NMV_P_PICK_LIST_CARRIERS
WIP_STEP_FUT_ASGN_W_TOOL_LIST -> DASH_P_GANTT_LOT_STEPS
WIP_STEP_FUT_ASGN_W_TOOL_LIST -> DASH_REF_P_GANTT_PRIORITY_LOTS
WIP_STEP_FUT_ASGN_W_TOOL_LIST -> WIP_REF_CURR_LOTS_TOOLS
WIP_STEP_FUT_ASGN_PATHS_SCH -> future Scheduler use
WIP_STEP_FUT_ASGN_FOR_SCH -> SCH_P_CURR_TOOL_ASGNS_STD
WIP_STEP_FUT_ASGN_FOR_SCH -> SCH_QUEUE_MGR
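
To tie the fast-filter guidance above to a concrete query: since every "partition by" includes lot, tool, facility, step, route, and the other listed columns, a lookup like the one the WIP_EVENT_HIST_INSERT_BEF trigger performs can be sketched as below. The literal values are hypothetical placeholders, not real lots or steps:

```sql
-- Mirrors the trigger's lot/route/step filter, which must return in
-- milliseconds; all values here are placeholders.
select *
  from fpsbase.wip_step_fut_assignments
 where lot = 'LOT12345'
   and route = 'ROUTE_A'
   and step = 'STEP_100';
```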

FPSBASE.WIP_STEP_HIST_AND_FUTURE

View past, present, and future steps for each lot. There are three ways to filter this view to make it fast:
1) Filter by lot. Both WIP_STEP_HIST and WIP_STEP_FUTURE have indexes on lot.
2) Filter hist by expected_comp_inst and future by num_steps_away. WSH has an index on inst and WSF has an index on num_steps_away:
where is_hist = 'Y' and expected_comp_inst > data_date - 1 or is_hist = 'N' and num_steps_away < 10
3) Filter hist by expected_comp_inst and future by curr_comp_to_expected_arrv_sec. This is the closest approximation to filtering by time, because data_date + curr_comp_to_expected_arrv_sec will also be sooner than expected_step_ent_inst, and WSF has an index on this column:
where is_hist = 'Y' and expected_comp_inst > data_date - 1 or is_hist = 'N' and curr_comp_to_expected_arrv_sec < 43200
If you need the overall_seq_num or any current values, you have to get them after you filter, like this:
rank() over (partition by lot order by expected_step_ent_inst, is_hist desc, lot_seq_within_sec) as overall_lot_seq_num,
max(case when is_curr = 'Y' then step else null end) over (partition by lot) as curr_step,
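
Combining filter option 3 with the post-filter analytic columns gives a query shaped like the sketch below. The inline view and its alias are mine; the columns are those named in the comment above:

```sql
-- Filter inside the inline view first so the indexed columns can be used,
-- then compute the sequence number and current step over the reduced set.
select f.*,
       rank() over (partition by lot
                    order by expected_step_ent_inst, is_hist desc,
                             lot_seq_within_sec) as overall_lot_seq_num,
       max(case when is_curr = 'Y' then step else null end)
         over (partition by lot) as curr_step
  from (select *
          from fpsbase.wip_step_hist_and_future
         where (is_hist = 'Y' and expected_comp_inst > data_date - 1)
            or (is_hist = 'N' and curr_comp_to_expected_arrv_sec < 43200)) f;
```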

FPSBASE.WIP_STEP_HIST_PLUS

This view provides all of the information we know about every move for the current shift. It will be used as a base for all moves information on the Dashboard as well as in every other application where we need lot information. We did some speed testing, and it proved more efficient to combine all columns in this big refreshed table rather than joining later (for example, including plan_priority_display here rather than joining on WIP_PRIORITIES). The added information here is similar to WIP_REF_LOTS_PLUS. With DWH-2056, the WSH_INSERT_BEF trigger writes step_xxx_sec as well as prev_qty, is_rework_move, and is_finish_event. It is critical for any query using these columns that they have been populated. The upgrade script with DWH-2056 populates the last 35 days, and then inside CTM_FINISHED_LOT_HIST_CUSTOM we populate any older records for the lot. This should be sufficient for normal use; however, if you are doing research on old data, then you might need to update manually. To do that: update wip_step_hist set step_sec_debug_note = 'Reset' where step_sec_debug_note is null and YOUR_CRITERIA = Y.
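
As a sketch of that manual backfill, with a hypothetical lot filter standing in for the YOUR_CRITERIA placeholder (substitute and verify your own research criteria before running an update like this):

```sql
-- 'lot = ...' is a placeholder criterion, not a real lot; only rows whose
-- debug note is still null are flagged for repopulation.
update wip_step_hist
   set step_sec_debug_note = 'Reset'
 where step_sec_debug_note is null
   and lot = 'LOT12345';
```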