The system includes a set of service instances. A service instance has a unique Instance Name and is made up of one or more processes.
Some service instances require that you have the applicable module licensed and enabled.
Process Name: ACCRUAL
Default Schedule: None
Module Required: Balance Accrual
The ACCRUAL service creates person balance records, adds to these balances, and performs carryover from one balance accrual period to the next. At the end of the balance period, the service will close out the old balance.
The Balance Policy determines how and when the balances should be accrued. The ACCRUAL service will run for a Balance Policy that has Auto. Accrual checked in the Accrual tab. The service uses the Accrual Ruleset attached to this Balance Policy to calculate the amount to accrue to the balance.
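The accrual decision described above can be sketched as follows. This is a minimal illustration, not the product's implementation; the `BalancePolicy` fields and the ruleset callable are hypothetical stand-ins for the Balance Policy's Accrual tab settings and the attached Accrual Ruleset:

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class BalancePolicy:
    # Hypothetical stand-ins for the Accrual tab settings
    auto_accrual: bool                                   # "Auto. Accrual" checkbox
    accrual_ruleset: Optional[Callable[[dict], float]]   # returns hours to accrue

def run_accrual(policy: BalancePolicy, person: dict, balance: float) -> float:
    """Accrue to a person's balance only when Auto Accrual is enabled."""
    if not policy.auto_accrual or policy.accrual_ruleset is None:
        return balance  # the service skips this Balance Policy
    return balance + policy.accrual_ruleset(person)

# Example: a ruleset that accrues 8 hours per run
policy = BalancePolicy(auto_accrual=True, accrual_ruleset=lambda person: 8.0)
new_balance = run_accrual(policy, {"id": 1}, 40.0)
```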
When the ACCRUAL service updates a person’s balance record, the update will appear in the Balance Transactions tab of the Balances form.
Note: If a balance period is based on an employee’s hire date, rehire date, or service date, and the ACCRUAL service cannot find these date values, the accrual will fail. Make sure these dates are specified in the employee’s Person record. See Effectivity tab (Balance Policy form) for more information.
Process Name: ANALYTICS
Default Schedule: None
Module Required: Analytics
The ANALYTICS service is used to calculate the data that is displayed in the KPI Dashboard and the KPI Report. The KPI Dashboard displays charts with Key Performance Indicators: Efficiency, Productivity, and Utilization. You can display this data for one or more persons. The KPI Report displays the same data in a report format.
In order to display this data, the ANALYTICS service must process Ruleset Profiles. A Ruleset Profile is assigned to specific events in a Process Policy (via the Events tab in the Process Policy form). When a person posts the event, the system checks the person’s Process Policy to see which Ruleset Profile is assigned to the event. This Ruleset Profile will become the event’s Process Name. The ANALYTICS service (which is configured to run for specific Ruleset Profiles) will process events that also have this Ruleset Profile as their Process Name.
You can also specify which Person Groups the ANALYTICS service will process.
RULESET_PROFILE: This parameter determines the Ruleset Profiles that will be processed by the ANALYTICS service and the order in which they will be processed. Select the Ruleset Profiles you want the ANALYTICS service to process. Move the profile from the Available column to the Selected column. Use the up and down arrows to place the Selected profiles in the order you want them to be processed; the profile at the top of the list will be processed first.
Process Name: ATTENDANCE
Default Schedule: None
Module Required: Attendance Management
The ATTENDANCE service automatically posts non-labor transactions for the current date. These transactions include holiday, paid and unpaid time off, early departure, clock outs for persons who failed to clock out, short day, no-show events, and day worked for persons who are automatic or who only report elapsed transactions for exceptions. The service looks for these events in the Attendance Policy. If the event applies for that day and time, the service will post it.
In the Attendance Policy form you configure rules that determine which non-labor events can post to timecards via the ATTENDANCE service. The rules include posting holiday and penalty events, and using balances to cover penalty events. The Attendance Policy rules you create are assigned to a person or person group via the Person Setting/Person Group Setting forms. Only the events that are configured in a person's Attendance Policy will post to the person's timecard when the service runs.
For example, the service may be scheduled to run at 7 AM on June 1. The service first checks the person's Holiday Calendar. If June 1 is a holiday and the person is working the 7 AM shift, the service posts a holiday event to the person's timecard. If June 1 is not a holiday, the service checks Event Scheduling for any “auto-post” events such as vacation or scheduled time-off. The service will then post the applicable vacation or time-off events for the person. If the person is not on vacation, the service may post a tardy, day-worked, no report/no clock out, or short day event, depending on the time of day.
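The check order in the example above can be sketched as a simple decision function. This is a hypothetical simplification (event names, the grace period, and the inputs are invented for illustration); the real service drives each step from the person's Attendance Policy:

```python
def choose_attendance_event(is_holiday: bool, scheduled_events: list,
                            clocked_in: bool, minutes_late: int,
                            tardy_grace: int = 5):
    """Simplified decision order from the example: holiday first,
    then auto-post scheduled events, then exception events."""
    if is_holiday:
        return "HOLIDAY"
    for ev in scheduled_events:          # e.g. auto-post vacation/time off
        if ev in ("VACATION", "TIME_OFF"):
            return ev
    if not clocked_in:
        return "NO_SHOW"
    if minutes_late > tardy_grace:
        return "TARDY"
    return None                          # nothing to post for this person
```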
The ATTENDANCE service will select which dates to process based on the MODE parameter. If you want to specify another date, you can do so in the Service Parameters form. The ATTENDANCE service will post events based on the configuration in the person's Attendance Policy.
The ATTENDANCE service includes the tasks listed below. When you add or modify an instance of the ATTENDANCE service (via the Service Instance form), you can select which of these tasks the service should perform.
In order for the ATTENDANCE service to perform a task, it must be enabled in the person's Attendance Policy. For example, the service will perform the ATTENDANCE_HOLIDAY task if that task is Selected for the service instance and Enable Holiday is checked in the person's Attendance Policy Holiday record.
ATTENDANCE_HOLIDAY
The ATTENDANCE service will post holiday events based on the person’s Holiday Calendar. Note that a Holiday Calendar and a Normal Schedule must be in place in order for a Holiday event to post. See Holiday Tab – Attendance Policy for more information.
The service will first check to see if the person has a Holiday Calendar Override setting and, if so, the service will use the Holiday Calendar in that setting. If the person does not have a Holiday Calendar Override, the service will use the Holiday Calendar specified in the person’s Attendance Policy/Holiday tab.
ATTENDANCE_TIME_OFF
This task of the ATTENDANCE service will post approved time off requests to the person’s timecard, provided the person's Attendance Policy has Time Off enabled. The ATTENDANCE service will also look at the person's Attendance Policy Time Off settings to see what rules, if any, should apply to time off postings.
ATTENDANCE_DAY_WORKED
The ATTENDANCE service will post a day-worked event based on the person's Attendance Policy Day Worked settings.
AUTOMATIC_CLOCKOUT
The AUTOMATIC_CLOCKOUT task looks for a clock in without a matching clock out for each person and post date it processes. When the service finds a missing clock out, it then checks for the person's schedule on that post date.
If the person does not have a schedule, the service checks to see if the current time is after the person's last recorded activity plus the Max Missing Clk Out hours in the person's Clock Policy. If this condition is met, the service posts an automatic clock out. The timestamp of the automatic clock out will be the timestamp of the last recorded activity.
If the person does have a schedule, the ATTENDANCE service will check to see if the current time is after the scheduled end time plus the Max Missing Clk Out hours. If this condition is met, the service will post an automatic clock out. The timestamp of the automatic clock out will be based on the Sch OT ClockOut, Gap OT ClockOut, or ClockOut Time setting in the person's Clock Policy (depending on the kind of schedule the person has). For example, a person has a Normal, non-overtime schedule. In the Clock Policy, the ClockOut Time setting is At Last Activity. The ATTENDANCE service will check to see if the current time is after the scheduled end time plus the Max Missing Clk Out hours. If this condition is met, the service will post an automatic clock out with the timestamp of the last recorded activity.
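The two conditions above (no schedule vs. schedule) can be sketched as follows. This is an illustrative simplification that assumes the Clock Policy's ClockOut Time setting is At Last Activity; the real service also handles the Sch OT ClockOut and Gap OT ClockOut settings:

```python
from datetime import datetime, timedelta

def auto_clockout(now, last_activity, scheduled_end, max_missing_hours):
    """Return the timestamp for an automatic clock out, or None if the
    Max Missing Clk Out window has not yet elapsed. scheduled_end is
    None when the person has no schedule for the post date."""
    # With a schedule, the window is measured from the scheduled end time;
    # without one, from the last recorded activity.
    anchor = scheduled_end if scheduled_end is not None else last_activity
    if now <= anchor + timedelta(hours=max_missing_hours):
        return None              # window not yet elapsed; do nothing
    return last_activity         # post the clock out at last recorded activity
```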
When an automatic clock out is posted, you can configure a warning flag to appear in the timecard. Both the supervisor and the employee can also receive a message when an automatic clock out is posted. The warning flag and message will alert the supervisor of the automatic clock out, in case any adjustments need to be made. For example, an employee may forget to clock out and an automatic clock out posts at the employee's scheduled end time. The supervisor sees the warning on the employee's timecard and knows the employee stayed late and worked overtime that day. The supervisor is able to correct the employee's timecard accordingly.
ATTENDANCE_EARLY_DEPARTURE
The ATTENDANCE service will post an early departure event based on the person's Attendance Policy Early Departure settings.
ATTENDANCE_NO_SHOW
The ATTENDANCE service will post a no-show event based on the person's Attendance Policy No Show settings.
ATTENDANCE_SHORT_DAY
The ATTENDANCE service will post a short day event based on the person's Attendance Policy Short Day settings.
ATTENDANCE_SIGN_DAY
When the person's Sign Policy is set to Automatic, the ATTENDANCE service will apply employee and/or supervisor signatures to the person's timecard, including on gap days. If the person's Sign Policy is set to Exception, the ATTENDANCE service will sign the timecard (including on gap days) when there is no schedule and the person has performed no sign actions.
DISCIPLINARY_LEVELS
This task will process the Ruleset Name specified in the Attendance Levels tab of the person’s Attendance Policy to determine a person’s point level and which supervisors will be notified of the point and level changes.
Note: Some Attendance tasks post immediately upon certain actions. A Clock In triggers several rulesets to fire:
Late Arrivals
Outside of Clocks Gap
Holidays
Vacations and Other Time Off
Day Worked if configured to Post Immediately
MODE: Determines the range of days for which the service will run.
TODAY - The service will run for the current date.
CURRENT_PAY_PERIOD - The service will run for the pay period that is in effect on the current date. The service determines a person's pay period based on their Pay Policy.
PREVIOUS_PAY_PERIOD - The service will run for the pay period before the pay period that is in effect on the current date.
NEXT_PAY_PERIOD - The service will run for the pay period after the pay period that is in effect on the current date. NEXT_PAY_PERIOD may be used to post holiday and vacation events in advance.
DATE_RANGE - The service will run for the dates in the START_DATE and END_DATE range.
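The MODE values above resolve to a range of post dates. The sketch below is a hypothetical illustration: it assumes fixed-length pay periods anchored at a known start date, whereas the real service derives periods from each person's Pay Policy:

```python
from datetime import date, timedelta

def pay_period_for(d, period_start, period_days=7):
    """Hypothetical helper: the fixed-length pay period containing d."""
    offset = (d - period_start).days // period_days
    start = period_start + timedelta(days=offset * period_days)
    return start, start + timedelta(days=period_days - 1)

def resolve_mode(mode, today, period_start, start_date=None, end_date=None):
    """Translate the MODE parameter into (first, last) post dates."""
    if mode == "TODAY":
        return today, today
    if mode == "DATE_RANGE":
        return start_date, end_date          # from START_DATE / END_DATE
    cur_start, cur_end = pay_period_for(today, period_start)
    if mode == "CURRENT_PAY_PERIOD":
        return cur_start, cur_end
    if mode == "PREVIOUS_PAY_PERIOD":
        return pay_period_for(cur_start - timedelta(days=1), period_start)
    if mode == "NEXT_PAY_PERIOD":
        return pay_period_for(cur_end + timedelta(days=1), period_start)
    raise ValueError(mode)
```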
GID: Group Identifier.
DID: Device Identifier.
START_DATE: If you select DATE_RANGE in the MODE field, you must enter a START_DATE. The START_DATE specifies the first day that the service should include in the instance.
END_DATE: If you select DATE_RANGE in the MODE field, you must enter an END_DATE. The END_DATE specifies the last day that the service should include in the instance.
Process Name: ATTENDANCE_REWARD
Default Schedule: None
Module Required: Attendance Reward
The ATTENDANCE_REWARD service uses the Attendance Reward Ruleset in the employee’s Attendance Policy to determine whether the employee has had any attendance violations or if the employee is eligible for an attendance reward.
An Attendance Reward is used to award balance hours to employees who do not have any attendance violations in a specified period of time. The award the employee receives is a specified number of hours in a balance. For example, you may want to award 4 hours of vacation time to employees who have not had any No Show, Late Arrival, or Early Departure events in the last three months.
Attendance Rewards are granted by a supervisor using the Attendance Reward form. The records in the Attendance Reward form are generated by the ATTENDANCE_REWARD service.
The ATTENDANCE_REWARD service first checks the person’s Attendance Policy to see if it includes an Attendance Reward Ruleset. If the Attendance Policy does not have an Attendance Reward Ruleset, the service will not process this person.
If the person has an Attendance Reward Ruleset, the service deletes the person’s existing eligibility and violation records. Award records are not deleted.
The service then uses the person’s Anchor Date (set on the Employee form) and the Range Amount and Range Type parameters in the Attendance Reward rules to create the eligibility and/or violation records. This range will usually begin on the date after the person’s Anchor Date and continue for the specified Range Amount and Range Type. However, if you select Calendar Months or Calendar Weeks as your Range Type, the range will begin on the start of the calendar month or week that is after the Anchor Date. For example, if the Anchor Date is 01/02/2015 and the Range Type is Calendar Months, the range will begin on 02/01/2015. If the Range Type is Months, the range will begin on 01/03/2015.
Note that if the person has no Anchor Date, the service looks at the person’s Employment Profile to find the most recent date range when the person was active. The start of that range will be used as the Anchor Date.
The service continues processing a person until one of two conditions is met: the range starts later than the current day, or the range starts today or earlier, extends beyond today, and has no violations through today.
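The range-start rules above can be sketched as follows. This is a hedged illustration (the Range Type names and week convention are assumptions; weeks are assumed here to start on Monday):

```python
from datetime import date, timedelta

def range_start(anchor, range_type):
    """Start of the reward evaluation range for a given Anchor Date.
    Months/Weeks start the day after the anchor; the Calendar variants
    snap forward to the start of the next calendar month or week."""
    if range_type == "CALENDAR_MONTHS":
        # first day of the month after the anchor
        if anchor.month == 12:
            return date(anchor.year + 1, 1, 1)
        return date(anchor.year, anchor.month + 1, 1)
    if range_type == "CALENDAR_WEEKS":
        return anchor + timedelta(days=7 - anchor.weekday())  # next Monday
    return anchor + timedelta(days=1)  # plain MONTHS / WEEKS / DAYS
```

This reproduces the example in the text: an Anchor Date of 01/02/2015 yields a range start of 02/01/2015 for Calendar Months, but 01/03/2015 for Months.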
Once the service finishes running, you can use the Attendance Reward form to view the eligibility and violation records and to grant awards to eligible employees.
This service has no parameters.
Process Name: BADGE_RESET
Default Schedule: Run every hour on the hour, indefinitely
Default Schedule Enabled: No
Module Required: Badge Management
The BADGE_RESET service looks for any Active badges that were issued to a person record at some point in time, but whose assignment is no longer valid (i.e., the end date of the record in the Badge tab of the Employee form has expired, or the record was deleted from the Badge tab of the Employee form and is no longer considered issued). These badges will get a Badge State of Available (meaning that the badge is now available for use). The updated Badge State will be reflected in the Badge tab of the Badge Group form.
Process Name: BATCH
Default Schedule: None
Module Required: None
The BATCH service is used to bundle service instances that need to be run dependently in a specified order. For example, you may need the OUT_EBS_WIP_COST service to run after LABOR_ALL_MT and RECALCULATION, so that the correct records are exported.
When the BATCH service runs, it runs the individual service instances in the batch in the order they are listed. If one of the individual service instances fails, the BATCH service will stop running. If an individual service instance has errors but is able to finish running, the BATCH will continue until all the services have run. See “View Batch Service Errors” below.
If you have individual services scheduled to run that are also included in a BATCH service instance, the individual service will stop running until the BATCH service is complete.
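The batch behavior described above (run in order, stop on a hard failure, continue through non-fatal errors) can be sketched as follows. The `run_instance` callable is a hypothetical stand-in for launching a service instance:

```python
def run_batch(instances, run_instance):
    """Run service instances in their listed order. A failed instance
    stops the batch; instances that finish with errors do not.
    run_instance(name) returns (succeeded: bool, error_count: int)."""
    total_errors = 0
    for name in instances:
        ok, errors = run_instance(name)
        total_errors += errors           # surfaced in the Error Log Count
        if not ok:
            return ("FAILED", name, total_errors)   # batch stops here
    return ("FINISHED", None, total_errors)
```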
You need to define which service instances to include in the BATCH. You can run the BATCH service manually (in the Service Monitor form) or you can create a schedule for it.
In the Service Instance form, select the BATCH service and click Modify. Use the Instance Names parameter to select the individual services to include in the batch. Move a service from the Available column to the Selected column to include it in the batch. Use the up/down controls next to the Selected column to place the services in the order they will run (the service listed on top will run first).
The Error Log Count column in the Service Monitor form will indicate whether the BATCH service ran with any errors. In the following illustration, the service finished running but had 5 errors.
To find out which instance in the batch had the errors, use the Service Audit form. In the example below, the error occurred for the OUT_EBS_AP_EXPORT instance.
If the BATCH service fails, the Service Audit form will also show which instance in the batch caused it to stop processing. In the example below, the BATCH_ATTENDANCE service includes the ATTENDANCE, LABOR_ALL_MT, and RECALCULATION instances. The LABOR_ALL_MT instance was disabled, causing the BATCH service to stop processing.
Process Name: CALCULATE_RATES
Default Schedule: None
Module Required: Pay Rates
The Calculate Rates service will calculate the PAID_PERSON_PAY and PAID_PERSON_LABOR rates for a transaction. See Trans Rates for more information on the rates that are stored with a transaction.
The service calculates these rates based on the Pay Rate Ruleset and Labor Rate Ruleset defined in the employee’s Pay Policy.
Rate calculation is also performed by the LABOR_ALL_MT service.
RATE_TYPE: Select the type of rate you want the service to calculate. Select Pay to calculate Payroll Rates (PAID_PERSON_PAY rate). Select Labor to calculate Labor Rates (PAID_PERSON_LABOR rate). Select Both to calculate both Payroll and Labor Rates.
RUN_FOR: Select POLICIES if you want to calculate rates for employees with specific Pay Policies. Make sure you specify these Pay Policies in the Selected column of the PAY_POLICY parameter. Select TRANSACTIONS if you want to calculate rates for all transactions that have not yet had their rates calculated.
PAY_POLICY: If you selected POLICIES as your RUN_FOR parameter, use this section to indicate which Pay Policies should have their rates calculated. Move the Pay Policy from Available to Selected if you want it to be processed. If there are no Pay Policies in the Selected column, the service will calculate rates for all the Pay Policies.
Process Name: CALL_STORED_PROCEDURE
Instance Name: PURGE_SQLSERVER_SEQUENCES
Default Schedule: None
This service is only needed if your Shop Floor Time database version is Microsoft SQL Server.
This service runs a stored procedure on your Shop Floor Time database that purges the identity tables in a SQL Server database (tables for each sequence that hold an identity column). The stored procedure will delete all records from all tables starting with “seq_” (except the record with the highest value ID).
How often you run this service will depend on how quickly the sequence tables are growing. For example, you may want to run this service once a week.
The CALL_STORED_PROCEDURE service has one parameter called STORED_PROCEDURE_NAME. Do not change the value of this parameter. It identifies the stored procedure that will be executed.
Process Name: CLASSIFY_MANUAL
Default Schedule: None
Module Required: None
The CLASSIFY_MANUAL service will post the appropriate hours classifications to timecard transactions for the current or previous pay period. The service will classify the timecards of employees assigned to Pay Policies that have Time Classification set to MANUAL.
The CLASSIFY_MANUAL service will only affect transactions that are not yet classified. If a transaction already has an hours classification, the CLASSIFY_MANUAL service will not change it.
Manual time classification is often used as part of the EWT/Comp Time feature. Manual time classification allows you to wait and classify the timecard after all the hours have been worked and all EWT or Comp Time has been authorized. With this method, adjustments to the timecard will not affect the hours classifications, and you will not have to reprocess the payroll export unnecessarily.
The CLASSIFY_MANUAL service allows you to classify employee timecards without waiting for a supervisor review or signature. Manual time classification can also be performed by the Classify Period button on the timecard, or the CLASSIFY Sign Trigger in a Sign Policy.
The CLASSIFY_MANUAL service should be run prior to a Payroll Export.
PERIOD_OFFSET: Indicates whether the service will classify the PREVIOUS_PAY_PERIOD or CURRENT_PAY_PERIOD. The pay period is determined based on the person’s Pay Policy that is in effect on the date the service runs.
PAY_POLICY: The service will classify the timecards of persons assigned to the Pay Policies selected in this parameter. To determine a person’s Pay Policy, the service will look at the Pay Policy assigned to the person on the date the service runs.
Move the Pay Policy from Available to Selected if you want the CLASSIFY_MANUAL service to classify timecards for persons assigned to the policy. The Available column will only display the Pay Policies that are configured for Manual time classification (Time Classification is set to MANUAL in the Pay Policy). The CLASSIFY_MANUAL service will classify timecards for the Pay Policies in the Selected column.
Process Name: COMPLETE_OFFLINE_STATE_CONTROLLER
Default Schedule: None
Module Required: Terminal Operating State
The COMPLETE_OFFLINE_STATE_CONTROLLER service will change the Operating State of the terminals that have been assigned a Terminal Off Policy. You can use this service to establish a schedule when terminals that are operating in COMPLETE_OFFLINE mode will send their offline transactions to the application server for processing. See Permanent Offline Data Collection with Scheduled Data Pull for more information.
STATE: Select the Operating State to which the service will change the terminals. The available options are ONLINE, OFFLINE_QUEUED, OFFLINE_PROCESSING, and OFFLINE_NO_PUNCH_TRANSMISSION. However, the service can only change a terminal’s Operating State to certain allowed options, depending on what the terminal’s operating state is when the service runs. See Operating State for more information.
TERMINAL_OFFLINE_POLICY: Select the Terminal Off Policies that will have their Operating State changed to the specified STATE. The Terminal Off Policy is assigned to a Terminal Profile via the Terminal Profile Setting tab. If you want this instance to apply to specific Terminal Off Policies, move the policies from the Available column to the Selected column. If no Terminal Off Policies are in the Selected column, this instance will apply to all Terminal Off Policies.
Process Name: DELETE_FILES
Default Schedule: None
Module Required: Reporting
The Delete Files Service deletes import-related files or reporting-related files based on directory name and retain days values. If the time between the creation date of the records in the given category and the current date is equal to or greater than the given retain days value, the record will be deleted.
CATEGORY: Identifies the type of files you want to delete (IMPORTS or REPORTS).
IMPORTS - Applies to backup files generated via the IMPORT_FILES service. When selected, any import data backup files located in the Shop Floor Time\import directory (that are older than the given RETAIN_DAYS value) will be deleted. Subdirectories are also deleted.
REPORTS - Applies to reports generated via the Reporting menu. When selected, files located in Shop Floor Time's \Reporting\birt\design\tmp directory are deleted. Subdirectories are also deleted.
RETAIN_DAYS: This value is the number of days that you want the system to compare with the creation date of the records in the given category. If the time between the creation date of the records in the given category and the current date is equal to or greater than this value, the record will be deleted. The default value is 30.
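The retention check described above amounts to a simple age comparison, sketched here for illustration:

```python
from datetime import date

def should_delete(created, today, retain_days=30):
    """A record is deleted once the time between its creation date and
    the current date is equal to or greater than RETAIN_DAYS."""
    return (today - created).days >= retain_days
```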
Process Name: DISCIPLINE_BALANCE
Default Schedule: None
Module Required: Discipline Balance Policy forms
The DISCIPLINE_BALANCE service is used to accumulate points to a person's Discipline Balances, apply the appropriate level to a Discipline Balance, and reduce a Discipline Balance based on the person’s Discipline Balance history. These actions are each performed by a different task of the DISCIPLINE_BALANCE service.
Discipline Balances are used to manage a person’s attendance infractions when a number of these infractions occur in a specific time period. For example, a person can earn a point for having two Unexcused Absences in 30 days. Disciplinary levels and tickets can also be issued based on these points.
The DISCIPLINE_BALANCE service uses the rulesets and settings in a person’s Discipline Balance Policy to update the person’s discipline balances. See How the DISCIPLINE_BALANCE Service Works.
The DISCIPLINE_BALANCE service’s END_POST_DATE_OFFSET parameter defines the end of the processing range. The service will run for each post date since the last run, up until the date determined by this offset.
How the DISCIPLINE_BALANCE Service Works
DISCIPLINE_BALANCE Service Tasks
DISCIPLINE_BALANCE Service Parameters
The DISCIPLINE_BALANCE service looks at a person’s Discipline Balance Policy to see what Discipline Balance Codes it includes and how they are configured. For each Discipline Balance Code, the Discipline Balance Policy has settings and rulesets that are used to accumulate the balance, change its level, or reduce it. These rulesets correspond to the DISCIPLINE_BALANCE service’s tasks. For each Discipline Balance Code, the service processes the rules for these tasks in order (BALANCE_ACCUMULATION, BALANCE_LEVELS, and BALANCE_REDUCTION).
If the person’s Discipline Balance Policy has the Run Indicator set to Run Date, the DISCIPLINE_BALANCE service will process the rulesets once per person based on the day the service runs, regardless of the last time it ran. The service uses the END_POST_DATE_OFFSET parameter to determine which date to process. For example, if today is March 25, the END_POST_DATE_OFFSET parameter is -7, and the Run Indicator is Run Date, the DISCIPLINE_BALANCE service will only process March 18.
If the person’s Discipline Balance Policy has the Run Indicator set to Each Post Date, the DISCIPLINE_BALANCE service will run for each post date since the last time the service ran (indicated by the Last Accum. Date and Last Reduction Date in the person’s Discipline Balance record), up to the date determined by the END_POST_DATE_OFFSET parameter. For example, if END_POST_DATE_OFFSET is set to -1, the service will process all the post dates from the Last Accum. Date/Last Reduction Date up until yesterday.
The service will update the person’s Discipline Balance record accordingly with any changes in level or tickets.
When an adjustment is made to a person’s timecard (a transaction is added, modified, or deleted on the timecard), and the transaction has a discipline balance record, the DISCIPLINE_BALANCE service will recalculate the person’s discipline balance, levels, and tickets.
Note that if the DISCIPLINE_BALANCE service has to process an adjusted transaction, it may take longer to run the service because it has more data to process over a longer period of time.
Example
The following illustrations explain how the DISCIPLINE_BALANCE service works.
A company has a policy that a Tardy event (Late Arrival) in the last 30 days results in 1 discipline balance point. An employee’s initial Discipline Balance is 0. However, he has tardy events on 2/23, 3/1, 3/8, and 3/15 that have not yet been processed. The person's Discipline Balance Policy has the Run Indicator set to Each Post Date.
You can view the employee’s balance in the Discipline Balance tab of the Employee form.
On 3/17/2016, the DISCIPLINE_BALANCE service runs. It processes transactions from the Last Accum. Date and Last Reduction Date (1/1/2016) to the service’s END_POST_DATE_OFFSET parameter (3/16/2016).
After the DISCIPLINE_BALANCE service runs, the Balance Value changes to 3.
If you click View Transactions, you will see that the Balance Transactions are for the Tardy events on 3/1, 3/8, and 3/15.
The employee forgot to report he was late on 3/10/2016, so the supervisor manually adds a Balance Transaction of 1 point for this date.
The person’s Balance Value is now 4.
The employee tells the supervisor that the Tardy event on 3/8/2016 was an error. The supervisor adjusts the employee’s timecard for this date and the Tardy event is removed.
This adjustment causes the Last Accum. Date and Last Reduction Date in the person’s Discipline Balance record to change. They change to one day prior to the date of the adjusted transaction (in this case, 3/7/2016). These date changes will allow the DISCIPLINE_BALANCE service to reprocess the transactions to adjust for the one that was removed.
The DISCIPLINE_BALANCE service runs again. It processes records from the Last Accum. Date/Last Reduction Date of 3/7/2016 to 3/16/2016 (yesterday, as determined by the service’s END_POST_DATE_OFFSET parameter of -1). First, the service deactivates all the discipline balance transactions in this range except the ones added manually. In this case, the transactions on 3/8 and 3/15 get deactivated. Next, the service processes the same date range but this time it processes the post dates as it normally would, applying the rules and updating the balance. It is during this step that the transaction on 3/15/2016 is added back.
The employee’s Balance Value is now 3.
The updated Balance Transactions are shown below.
The transaction for 3/1 was not affected because it was outside the date range being processed by the DISCIPLINE_BALANCE service.
The transaction for 3/8 has been cancelled.
The transaction for 3/10 was not affected because it was added manually.
The transaction for 3/15 was cancelled but then added back.
Note that if the person’s Discipline Balance Policy has the Run Indicator set to Run Date, the DISCIPLINE_BALANCE service may not be able to reprocess timecard adjustments that affect a person’s discipline balance. For example, a supervisor removes an absence event on March 18 from a person’s timecard. This adjustment changes the Last Accum. Date and Last Reduction Date in the person’s Discipline Balance record to March 17. The DISCIPLINE_BALANCE service has an END_POST_DATE_OFFSET parameter of -1. The DISCIPLINE_BALANCE service runs on March 25. If the person’s Discipline Balance Policy has the Run Indicator set to Run Date, the service only processes records from yesterday (March 24) as determined by the END_POST_DATE_OFFSET. If the person’s Discipline Balance Policy had the Run Indicator set to Each Post Date, the service processes records from March 17 to March 24, and it can reduce the person’s discipline balance due to the timecard adjustment.
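The deactivate-then-reprocess behavior shown in the example can be sketched as follows. This is a hypothetical simplification: the `rules` mapping stands in for the Discipline Ruleset deciding, per post date, how many points the current timecard warrants:

```python
def reprocess(transactions, range_start, range_end, rules):
    """Reprocess discipline balance transactions after a timecard
    adjustment: cancel non-manual transactions inside the range, then
    re-apply the rules to each post date in the range."""
    # Step 1: deactivate everything in range except manual additions.
    kept = [t for t in transactions
            if t["manual"] or not (range_start <= t["date"] <= range_end)]
    # Step 2: re-run the rules over the same range; still-valid
    # violations (like the 3/15 Tardy in the example) are added back.
    for d in sorted(rules):
        if range_start <= d <= range_end:
            kept.append({"date": d, "points": rules[d], "manual": False})
    return kept, sum(t["points"] for t in kept)
```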
You can configure a different instance of the DISCIPLINE_BALANCE service to process each task, or you can include one or more of these tasks in a single instance. If you separate each task into its own instance, you can change the order in which the tasks run. If all the tasks are in a single instance, they will run in the following order: BALANCE_ACCUMULATION, BALANCE_LEVELS, and BALANCE_REDUCTION.
BALANCE_ACCUMULATION
This task processes the Discipline Ruleset in the Discipline Balance Policy. This task looks at the person’s timecard to find any violations (e.g., unexcused absences or late arrivals) and adds the appropriate number of points to the Discipline Balance Code.
BALANCE_LEVELS
This task processes the Level Ruleset Name in the Discipline Balance Policy. This task looks at the amount of points in a person’s Discipline Balance and determines what level, if any, to assign it.
BALANCE_REDUCTION
This task processes the Reduction Ruleset in the Discipline Balance Policy. This task examines a person’s Discipline Balance history to determine if the balance can be reduced.
END_POST_DATE_OFFSET: This parameter defines the end of the processing range. A value of 0 means today. The default value is -1 (yesterday), giving supervisors a chance to make adjustments to the current day before the DISCIPLINE_BALANCE service processes it.
The Run Indicator in the person’s Discipline Balance Policy determines how the END_POST_DATE_OFFSET is used.
If the person’s Discipline Balance Policy has the Run Indicator set to Run Date, the service uses the END_POST_DATE_OFFSET to determine which date to process. For example, if today is March 25, the END_POST_DATE_OFFSET is -7, and the Run Indicator is Run Date, the DISCIPLINE_BALANCE service will only process March 18.
If the person’s Discipline Balance Policy has the Run Indicator set to Each Post Date, the DISCIPLINE_BALANCE service will run for each post date since the last time the service ran (indicated by the Last Accum. Date and Last Reduction Date in the person’s Discipline Balance record), up to the date determined by the END_POST_DATE_OFFSET. For example, if END_POST_DATE_OFFSET is -1, the service will process all the post dates from the Last Accum. Date/Last Reduction Date up until yesterday.
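The two Run Indicator behaviors above can be sketched as a single date-resolution function. This is an illustrative sketch; the Run Indicator value names are assumptions:

```python
from datetime import date, timedelta

def dates_to_process(today, offset, run_indicator, last_accum=None):
    """Resolve END_POST_DATE_OFFSET into the post dates to process."""
    end = today + timedelta(days=offset)
    if run_indicator == "RUN_DATE":
        return [end]                      # a single date, e.g. offset -7
    # EACH_POST_DATE: every post date from the Last Accum./Reduction
    # Date up to the date determined by the offset.
    d, out = last_accum, []
    while d <= end:
        out.append(d)
        d += timedelta(days=1)
    return out
```

With the examples in the text: on March 25 with an offset of -7, Run Date yields only March 18; Each Post Date with a Last Accum. Date of March 17 and an offset of -1 yields March 17 through March 24.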
Process Name: EXCEPTION_MESSAGE_CREATION
Default Schedule: None
Module Required: Messaging
The EXCEPTION_MESSAGE_CREATION service creates messages when there is an “exception” with a service instance (e.g., the service has an error or runs too long) or when a particular Error Code is reported in the Error Log. These messages are created based on rules assigned to a Message Definition in the service’s parameter.
You must use the EXCEPTION_MESSAGE_CREATION service’s MESSAGE_NAME parameter to define which Message Definitions will be used when creating the messages.
You can configure different instances of the EXCEPTION_MESSAGE_CREATION service to create specific message types. For example, you can configure an instance specifically for service audit errors.
To create an Exception message, the EXCEPTION_MESSAGE_CREATION service processes the Message Definitions in its parameter. For example, when the service processes the SERVICE_AUDIT_ERROR message, it fires the ServiceAuditErrorRuleset. This ruleset checks whether the ATTENDANCE service (or any service configured in the rule) had any errors in the last 5 hours; if so, it sends a message to a Message Group.
MESSAGE_NAME: This parameter defines the Message Definitions for which the service will generate messages. The available options are Message Definitions with the EXCEPTION Message Type.
Move the Message Definitions from the Available column to the Selected column to enable them. The EXCEPTION_MESSAGE_CREATION service will only generate messages based on the Selected Message Definitions. If no options are in the Selected column, the service will not generate any messages.
Process Name: EXPIRE_OFFERS
Default Schedule: None
Module Required: Overtime Offer and Response Service and Forms
The EXPIRE_OFFERS service updates overtime offers that have a status of Offered or Acknowledged and have expired per the Cutoff Date field in the OT Offer and OT Response forms. The service changes the offer status to Refused After Cutoff. Note that offers with a status of Refused After Cutoff can still be accepted (the status will be Accepted After Cutoff).
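The status transition above can be sketched as a single rule. This is an illustration of the expiration logic only, assuming only Offered and Acknowledged offers are eligible to expire; the field and status names are taken from the text, but the function itself is hypothetical.

```python
from datetime import datetime

def expire_offer(status, cutoff, now):
    """Flip an overtime offer to Refused After Cutoff once its Cutoff
    Date has passed. Offers in any other status are left unchanged.
    """
    if status in ("Offered", "Acknowledged") and now > cutoff:
        return "Refused After Cutoff"
    return status

print(expire_offer("Offered", datetime(2024, 5, 1), datetime(2024, 5, 2)))
# Refused After Cutoff - though the text notes it can still be accepted later
```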
Process Name: EXPORT
Default Schedule: None
Module Required: Export
The EXPORT service can be used to run a specific Export Definition. The Export Definition you select will determine the type of data that will be exported, how the data will be formatted, and the destination for the exported data (file system, queue, or table).
You can schedule the EXPORT service to run automatically (via the Service Schedule tab) or you can run the service instance manually via the Service Monitor form.
Note: You can also run exports from the Exports form, provided the Export Destination is FILE or TABLE. When you run an export from the Exports form, you can include data that is entered in user-defined fields when you run the export. This data will not be included when you run the export via the EXPORT service instance. You can also specify whether to include Previously Exported Records and download exports that have already been generated.
The OUT_CONVERT and OUT_TEXT_CONVERT tasks are used when the Export Definition has Export Destination set to QUEUE. These tasks will transfer the export data from the Out XML Queue to the Interface Out Queue.
OUT_CONVERT: Select this Task if the Export Definition has Export Destination set to QUEUE and the Export Type is BCOREXML or XML. This Task takes the export data from the Out XML Queue and places it in the Interface Out Queue.
OUT_TEXT_CONVERT: Select this Task if the Export Definition has Export Destination set to QUEUE and the Export Type is CSV, CSV With Header, Fixed Length, or JSON. This Task takes the export data from the Out XML Queue and places it in the Interface Out Queue.
If the Export Definition has Export Destination set to QUEUE and neither of these Tasks are selected, the data will only be sent to the Out XML Queue.
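The Task selection rule above reduces to a lookup on the Export Type. A minimal sketch, using the type names from the text; the function name is illustrative.

```python
def queue_task_for(export_type):
    """Pick the conversion Task for a QUEUE-destination Export Definition.

    XML flavors use OUT_CONVERT; text flavors use OUT_TEXT_CONVERT.
    Returns None when neither applies, in which case the data stays in
    the Out XML Queue only.
    """
    if export_type in ("BCOREXML", "XML"):
        return "OUT_CONVERT"
    if export_type in ("CSV", "CSV With Header", "Fixed Length", "JSON"):
        return "OUT_TEXT_CONVERT"
    return None

print(queue_task_for("JSON"))  # OUT_TEXT_CONVERT
```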
Make sure you also configure the INTERFACE_NAME, TRANSACTION_NAME, and SENDER_NAME parameters so the data can be exported to the Interface Out Queue.
MODE: The MODE parameter determines the posting date of the records that will be exported. Options are ALL, TODAY, and DATE RANGE.
ALL causes the service to export all unprocessed records (i.e., records with a status of Ready) with the current date, a prior date, or a future date.
TODAY causes the service to export all unprocessed records (i.e., records with a status of Ready) up until today.
DATE RANGE causes the service to export all unprocessed records (i.e., records with a status of Ready) in a specific date range. If you select DATE RANGE, you must also specify a START DATE and END DATE.
START_DATE, END_DATE: If you select DATE RANGE as your MODE parameter, you must select the START_DATE and END_DATE of the date range of the records to export.
EXPORT_DEFINITION: Identifies which Exports record the instance should run for. Exports records are created in the Export form.
EXPORT_CREATOR: Identifies who the export belongs to. The default value is Admin User. Only the Export Creator has access to the output file in the Output form.
INTERFACE_NAME, TRANSACTION_NAME, and SENDER_NAME: If the EXPORT_DEFINITION you selected (above) has its Export Destination set to QUEUE, you need to set the INTERFACE_NAME, TRANSACTION_NAME, and SENDER_NAME parameters. Otherwise, an error will occur when you run the EXPORT service. You do not need to set these parameters if your EXPORT_DEFINITION has its Export Destination set to FILE or TABLE. These parameters are used to populate values in the Interface Out Queue when the export data is moved from the Out XML Queue to the Interface Out Queue. You need to select values for these parameters that match ones in the Distribution Model form; otherwise an error will occur when you try to run the service.
The POST_DATE parameter in your Export Definition is used to restrict the transactions that will be exported based on their Post Date. For example, you may want to prevent transactions with a Post Date in the future from being exported.
Make sure the MODE parameter for the EXPORT service coincides with the Export Definition’s POST_DATE parameter. If these settings are different, then the more restrictive setting will be used.
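"The more restrictive setting will be used" can be read as an intersection of the two date windows: on each side, the narrower bound wins. A sketch under that assumption; the representation of each setting as an inclusive (start, end) date pair is an illustration, not the product's internal model.

```python
from datetime import date

def effective_window(mode_window, post_date_window):
    """Intersect the EXPORT service's MODE range with the Export
    Definition's POST_DATE restriction.

    Each window is an inclusive (start, end) pair. Returns None when the
    windows do not overlap, i.e., nothing would be exported.
    """
    start = max(mode_window[0], post_date_window[0])
    end = min(mode_window[1], post_date_window[1])
    return (start, end) if start <= end else None

# MODE allows all of January; POST_DATE only allows Jan 15 onward
print(effective_window((date(2024, 1, 1), date(2024, 1, 31)),
                       (date(2024, 1, 15), date(2024, 2, 15))))
```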
Process Name: IMPORT_FILES
Default Schedule: None
Module Required: Import Data
The IMPORT_FILES service applies to the Import Data feature. The service reads data from the source you indicate in all of the enabled Import records (in the Import Definition form) and converts it to XML records in the application. The raw XML records can be viewed in the In XML Queue Detail form. The service also populates the tables that apply to the Context Names (i.e., Transaction Names) you define, such as PERSON and PERSON GROUP. For example, when source data is mapped to the PERSON context, the service imports the data and populates the person table.
The IMPORT_FILES service applies to the "Import" interface record.
IN_CONVERT: Converts the source data to XML format. The raw XML records can be viewed in the In XML Queue Detail form.
IN_XML_PROCESS: Populates the appropriate tables with the source data that was converted to XML format.
LOAD TYPE: You can choose to run an Incremental Load or Full Load.
Incremental Load: The service will add data to the applicable tables and update any records whose existing mandatory field name data matches new mandatory field name data. An example of mandatory field name data is FI_FIRST_NAME and FI_PERSON_NUM for Person.
Full Load: The service will update all applicable tables and make non-matching records inactive. A record is considered non-matching when the mandatory field name data in the existing record does not correspond with mandatory field name data in any of the source data.
Note: Full Load is not supported for Person Group. If you select Full Load, the service will run Incremental Load for Person Group records.
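The two load types can be sketched with the Full Load case, which subsumes the Incremental behavior and adds deactivation of non-matching records. This is an illustration only: records are modeled as plain dicts, the key field stands in for the mandatory field names, and the "active" flag is a stand-in for however the product marks records inactive.

```python
def full_load(existing, incoming, keys=("FI_PERSON_NUM",)):
    """Sketch of Full Load semantics: matching records are updated, new
    records are added, and existing records with no matching source
    record are made inactive.
    """
    by_key = {tuple(r[k] for k in keys): dict(r) for r in existing}
    seen = set()
    for rec in incoming:
        key = tuple(rec[k] for k in keys)
        seen.add(key)
        merged = dict(by_key.get(key, {}))
        merged.update(rec)           # update existing or build a new record
        merged["active"] = True
        by_key[key] = merged
    for key, rec in by_key.items():
        if key not in seen:
            rec["active"] = False    # non-matching existing record: deactivate
    return list(by_key.values())
```

Dropping the deactivation loop gives the Incremental Load behavior: adds and updates only, with unmatched existing records left untouched.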
IMPORT_NAME: The service will run only for the Import Names listed in the Selected box. The Import Names are created and configured in the Import Definition form.
SENDER_NAME: Identifies the sender of the data. Available options are Senders defined in the Interface Host form. The value selected here will populate the Sender Name column in the tables that are populated by the service instance. This value will also be used to populate the Sender Name column in the Distribution Model (interface_distribution_model) table.
TRANSACTION_GROUP: Transaction group is used to group transaction names. The table columns that are mapped to the Transaction Names that use the Transaction Group will be populated when the service runs. You can view the available Transaction Names and Transaction Groups for each interface in the Interface and Interface Trans forms.
Note: IMPORT is the default/valid transaction group value for the Import interface.
Process Name: IN_AT7
Default Schedule: Run every day at 1 AM, indefinitely
Default Schedule Enabled: No
Module Required: AutoTime Interfaces
The IN_AT7 service downloads data from the In XML Queue.
IN_CONVERT: Converts the source data to XML format. The raw XML records can be viewed in the In XML Queue Detail form.
IN_XML_PROCESS: Populates the appropriate tables with the source data that was converted to XML format.
Process Name: IN_DB_POLLING
Instance Name: IN_BAAN
Default Schedule: None
Module Required: None
Shop Floor Time uses a combination of database triggers and database polling to import charge element data from Baan. Triggers are used to detect add, update and delete actions on the Baan master tables. These triggers write to corresponding autotime_interface_tables created on the Baan database. If a record is added to or updated in the actual table (e.g., ttisfc001100), then a record gets written in the ttisfc001_interface table. When a charge element is deleted in Baan, a record is placed in the autotime_interface_delete table. The IN_BAAN service will poll these interface tables to find the charge elements to import or delete.
IN_DB_POLL: Polls the Baan database to find the data that can be downloaded.
IN_CONVERT: Converts the source data to XML format. The raw XML records can be viewed in the In XML Queue Detail form.
IN_XML_PROCESS: Populates the appropriate tables with the source data that was converted to XML format.
SENDER_NAME: Identifies the sender of the data. Select the name of the Baan instance you defined in the Interface Host form. Available options are Senders defined in the Interface Host form.
TRANSACTION_GROUP: A Transaction Group is used to group Transaction Names for a particular Interface. The table columns that are mapped to the Transaction Names that use the Transaction Group will be populated when the service runs. You can view the available Transaction Names and Transaction Groups for each interface in the Interface Trans tab of the Interface form.
The IN_BAAN service will poll the Baan interface tables to find charge elements to import or delete. The service uses its SENDER_NAME and TRANSACTION_GROUP parameters to look up the Import Name on the Distribution Model. This Import Name specifies the charge elements to import or delete.
Process Name: IN_ORACLE_EBS
Default Schedule: Run every day at 1 AM, indefinitely
Default Schedule Enabled: No
Module Required: AutoTime Interfaces
Receives Order, Operation, and Activity data from your external ERP.
IN_DB_POLL: Polls the Oracle EBS database to find the data that can be downloaded.
IN_CONVERT: Converts the source data to XML format. The raw XML records can be viewed in the In XML Queue Detail form.
IN_XML_PROCESS: Populates the appropriate tables with the source data that was converted to XML format.
SENDER_NAME: The value entered here will populate the Sender Name column in the tables that are populated by the service instance. The default value is ORACLE_EBS.
TRANSACTION_GROUP: Select the Transaction Group that will be processed by this instance of the IN_ORACLE_EBS service. For example, select ORACLE_EBS_PROJECT if you want this instance of the IN_ORACLE_EBS service to download projects and tasks from Oracle EBS.
If you do not select a TRANSACTION_GROUP, the service will attempt to process all the available Transaction Groups. It is therefore recommended that you create a separate instance of the IN_ORACLE_EBS service for each Transaction Group you want to process.
A Transaction Group is used to group transaction names. The table columns that are mapped to the Transaction Names that use the Transaction Group will be populated when the service runs. You can view the available Transaction Names and Transaction Groups for each interface in the Interface and Interface Trans forms.
Process Name: IN_QUEUE_TEXT
Instance Name: IN_AUTOTIME6
Default Schedule: None
Module Required: None
The IN_QUEUE_TEXT service is used to import AutoTime 6 API records into Shop Floor Time.
The service will take the AutoTime 6 API records from the interface_in_queue table (which can be viewed in the Interface In Queue form) and convert the records to a format that Shop Floor Time can read. It then places this data in the In XML Queue form (interface_in_xml_queue table). Finally, the IN_QUEUE_TEXT service imports the AutoTime 6 API data from the interface_in_xml_queue table into the appropriate Shop Floor Time tables.
Before running the IN_QUEUE_TEXT service, you will need to place your AutoTime 6 API records in the interface_in_queue table. See AutoTime 6 API Support for more information.
IN_TEXT_CONVERT: Converts the API data that is in the interface_in_queue table (Interface In Queue form) into a format that Shop Floor Time can read. The service will only process Interface In Queue records that have a Process Name of CONVERT_TEXT. It then places this data in the interface_in_xml_queue table (In XML Queue form).
IN_XML_PROCESS: Imports the AutoTime 6 API data from the interface_in_xml_queue table into the appropriate Shop Floor Time tables. To do so, the service uses its SENDER_NAME and TRANSACTION_GROUP parameters to look up a record in the Distribution Model that will indicate the Import Name to use.
SENDER_NAME: Identifies the sender of the data (in this case, AUTOTIME6). Available options are Senders defined in the Interface Host form.
TRANSACTION_GROUP: A Transaction Group is used to group Transaction Names for a particular Interface (in this case, AUTOTIME6). The table columns that are mapped to the Transaction Names that use the Transaction Group will be populated when the service runs. You can view the available Transaction Names and Transaction Groups for each interface in the Interface Trans tab of the Interface form.
The IN_QUEUE_TEXT service uses its SENDER_NAME and TRANSACTION_GROUP parameters to look up a record in the Distribution Model that will indicate the Import Name to use. The Import Name, defined in the Import Definition form, defines the API source data that will be imported from the interface_in_queue table.
Process Name: IN_SAP
Default Schedule: None
Module Required: SAP Interfaces
The IN_SAP service is used to import data from SAP. For more information, see SAP Interface.
You must create an instance of the IN_SAP service for each type of IDOC you are importing.
Shop Floor Time includes the pre-defined instances of the IN_SAP service shown below. It is recommended that you copy the system-defined service instances and use the duplicate versions, which can be modified as necessary.
IN_SAP_HRCC1DNWBSEL
IN_SAP_HRCC1DNCOSTC
IN_SAP_HRCC1DNINORD
IN_SAP_HRMD_A
IN_SAP_OPERA2
IN_SAP_OPERA3
IN_SAP_OPERA4
IN_SAP_WORKC2
IN_SAP_WORKC3
IN_SAP_WORKC4
The SAP Listener receives an IDOC from SAP and places the data in the Interface In Queue form (interface_in_queue table).
The IN_SAP service will convert the IDOC data that is in the Interface In Queue to a format that Shop Floor Time can read. It then places this data in the In XML Queue form (interface_in_xml_queue table).
Finally, the IN_SAP service will import the data from the In XML Queue to the appropriate Shop Floor Time table. To do so, the IN_SAP service uses its SENDER_NAME and TRANSACTION_GROUP parameters to look up a record in the Distribution Model that will indicate the Import Name to use. The Import Name, defined in the Import Definition form, defines the source data that will be imported from the In XML Queue to the appropriate Shop Floor Time table.
Note: To look up the record in the Distribution Model, the IN_SAP service first checks to see if the Interface In Queue record has a Transaction Name Alias. If the record does have a Transaction Name Alias, the IN_SAP service will use this alias to look up the record in the Distribution Model. If the record does not have a Transaction Name Alias, or if the record’s Transaction Name Alias does not match any records in the Distribution Model, the service will look up the record based on the Transaction Name.
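The alias-then-name fallback in the note above can be sketched as a two-step dictionary lookup. This is an illustration under the assumption that the Distribution Model can be treated as a simple mapping from names to records; the field names are illustrative, not the actual column names.

```python
def distribution_model_lookup(queue_record, model):
    """Resolve the Distribution Model record for an Interface In Queue
    entry: try the Transaction Name Alias first, then fall back to the
    Transaction Name when the alias is absent or has no match.
    """
    alias = queue_record.get("transaction_name_alias")
    if alias and alias in model:
        return model[alias]
    return model.get(queue_record["transaction_name"])

model = {"OPERA2": "import-operations"}
print(distribution_model_lookup(
    {"transaction_name": "OPERA2", "transaction_name_alias": None}, model))
```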
IN_CONVERT: Converts the source data to XML format. The raw XML records can be viewed in the In XML Queue Detail form.
IN_XML_PROCESS: Populates the appropriate tables with the source data that was converted to XML format.
SENDER_NAME: Identifies the sender of the data. Set this value to the name of the SAP system you defined in the Interface Host form. Available options are Senders defined in the Interface Host form.
TRANSACTION_GROUP: A Transaction Group is used to group Transaction Names for a particular Interface. The table columns that are mapped to the Transaction Names that use the Transaction Group will be populated when the service runs. You can view the available Transaction Names and Transaction Groups for each interface in the Interface Trans tab of the Interface form.
The IN_SAP service uses its SENDER_NAME and TRANSACTION_GROUP parameters to look up a record in the Distribution Model that will indicate the Import Name to use. The Import Name, defined in the Import Definition form, defines the source data that will be imported from the interface_in_queue tables into the charge_element table.
Each predefined instance of the IN_SAP service has its TRANSACTION_GROUP parameter set to the appropriate setting for the record being imported. In the Interface Trans tab of the Interface form, there is a Transaction Group defined for each of these Transactions. For example, the Transaction Name OPERA2 has the Transaction Group OPERA2.
Process Name: LABOR_ALL
Default Schedule: Run every day at 1 AM, indefinitely
Default Schedule Enabled: No
Module Required: AutoTime Interfaces
The LABOR_ALL service performs the tasks listed below. The service will perform each task on a single person, then process the next person.
Generate Supporting Events
The LABOR_ALL service posts the following events:
Scheduled Events (defined in the Event tab of the Person Schedule form)
Inside Gap Events (defined by the Gap Event Name in the employee's Pay Policy)
Late Arrival and Outside Gap events as defined in the employee's Attendance Policy
Holiday, Time Off, and Day Worked events as defined in the employee's Attendance Policy only if the employee clocks in on days that these events are to post. If the employee does not clock in, the Attendance service posts these events.
Early Departure events as defined in the employee's Attendance Policy only if the event is configured to post Immediately upon clock out.
Pair Formation
The LABOR_ALL service forms transaction pairs based on the values in the Action table. For example, if the Action table has a CLOCK_IN, then a MEAL_START, the Pair Formation service would create the action pair CLOCK_IN MEAL_START as one record.
When you post an event directly on the timecard, the Pair Formation service runs automatically. It is not necessary to run the service in order to update the timecard.
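Pair Formation can be sketched as pairing each action with the one that follows it, as in the CLOCK_IN / MEAL_START example above. This is a simplification: the real service works against the Action table and its rules, while this sketch just shows the pairing of an ordered action sequence.

```python
def form_pairs(actions):
    """Pair each action with the action that follows it, so each pair
    represents one interval record (e.g., CLOCK_IN to MEAL_START).
    """
    return [(actions[i], actions[i + 1]) for i in range(len(actions) - 1)]

print(form_pairs(["CLOCK_IN", "MEAL_START", "MEAL_END", "CLOCK_OUT"]))
```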
Labor Split
The LABOR_ALL service searches the trans_atomic table for closed atomic records that have crossed an action, a schedule, a shift, or a day boundary and splits the record at the boundary. The rounded timestamp is used in the creation and splitting of atomics. The trans_atomic table gets updated accordingly. For every atomic record created during the splitting, a new record is created in the trans_atomic_duration table.
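The splitting step can be sketched as cutting a closed interval at every boundary it crosses. A simplified illustration: the real service uses rounded timestamps and the boundary types listed above (action, schedule, shift, day), while this sketch just shows the geometric split.

```python
from datetime import datetime

def split_at_boundaries(start, end, boundaries):
    """Split a closed atomic interval at each boundary that falls
    strictly inside it; boundaries outside the interval are ignored.
    Returns the resulting list of (start, end) sub-intervals.
    """
    points = [b for b in sorted(boundaries) if start < b < end]
    edges = [start] + points + [end]
    return list(zip(edges, edges[1:]))

# An interval crossing midnight is split into two atomics at the day boundary
print(split_at_boundaries(datetime(2024, 4, 1, 22, 0),
                          datetime(2024, 4, 2, 2, 0),
                          [datetime(2024, 4, 2, 0, 0)]))
```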
Time Classification
The LABOR_ALL service posts the appropriate hours classifications on the timecard.
Distribute Labor
The LABOR_ALL service distributes labor atomics and adds time distribution to transactions. The amounts/values (e.g., number of hours, hours classification, and amounts going toward daily and weekly overtime) are displayed in the Transaction Duration tab of the Transaction Details form.
Calculate Rates
The LABOR_ALL service will calculate the PAID_PERSON_PAY and PAID_PERSON_LABOR rates for a transaction. See Transaction Rate for more information on the rates that are stored with a transaction.
The service calculates these rates based on the Pay Rate Ruleset and Labor Rate Ruleset defined in the employee’s Pay Policy.
The Calculate Rates task can also be configured as a separate service instance.
Distribute Labor Details
The LABOR_ALL service calculates transaction details such as shift premiums and hours class premiums. The amounts/values are displayed in the Transaction Duration Detail tab of the Transaction Details form. This information is stored in the trans_action_duration_dtl table.
Process Name: LABOR_ALL_MT
Default Schedule: Run every day at 1 AM, indefinitely
Default Schedule Enabled: No
Module Required: AutoTime Interfaces
The LABOR_ALL_MT (Multi Task) service performs the tasks listed below. The service will perform each task on a single person, then process the next person.
Generate Supporting Events
The LABOR_ALL_MT service posts the following events:
Scheduled Events (defined in the Event tab of the Person Schedule form)
Inside Gap Events (defined by the Gap Event Name in the employee's Pay Policy)
Late Arrival and Outside Gap events as defined in the employee's Attendance Policy
Holiday, Time Off, and Day Worked events as defined in the employee's Attendance Policy only if the employee clocks in on days that these events are to post. If the employee does not clock in, the Attendance service posts these events.
Early Departure events as defined in the employee's Attendance Policy only if the event is configured to post Immediately upon clock out.
Pair Formation
The LABOR_ALL_MT service forms transaction pairs based on the values in the Action table. For example, if the Action table has a CLOCK_IN, then a MEAL_START, the Pair Formation service would create the action pair CLOCK_IN MEAL_START as one record.
When you post an event directly on the timecard, the Pair Formation service runs automatically. It is not necessary to run the service in order to update the timecard.
Labor Split
The LABOR_ALL_MT service searches the trans_atomic table for closed atomic records that have crossed an action, a schedule, a shift, or a day boundary and splits the record at the boundary. The rounded timestamp is used in the creation and splitting of atomics. The trans_atomic table gets updated accordingly. For every atomic record created during the splitting, a new record is created in the trans_atomic_duration table.
Time Classification
The LABOR_ALL_MT service posts the appropriate hours classifications on the timecard.
Distribute Labor
The LABOR_ALL_MT service distributes labor atomics and adds time distribution to transactions. The amounts/values (e.g., number of hours, hours classification, and amounts going toward daily and weekly overtime) are displayed in the Transaction Duration tab of the Transaction Details form.
Calculate Rates
The LABOR_ALL_MT service will calculate the PAID_PERSON_PAY and PAID_PERSON_LABOR rates for a transaction. See Transaction Rate for more information on the rates that are stored with a transaction.
The service calculates these rates based on the Pay Rate Ruleset and Labor Rate Ruleset defined in the employee’s Pay Policy.
The Calculate Rates task can also be configured as a separate service instance.
Distribute Labor Details
The LABOR_ALL_MT service calculates transaction details such as shift premiums and hours class premiums. The amounts/values are displayed in the Transaction Duration Detail tab of the Transaction Details form. This information is stored in the trans_action_duration_dtl table.
Process Name: MESSAGE_CREATION
Default Schedule: None
Module Required: Messaging
The MESSAGE_CREATION service generates Dialog and System messages. The messages are generated based on rules in the Message Definitions and Message Policies that are assigned to a person.
You must use the service’s MESSAGE_POLICY and MESSAGE_NAME parameters to define which Message Policies and Message Definitions will be used when generating the messages.
You can configure different instances of the MESSAGE_CREATION service to generate specific message types. For example, you can configure an instance specifically for unsigned time card messages.
To create a System Message, the MESSAGE_CREATION service looks at the persons who are assigned to the Message Policy and Message Definition in the service's parameters. The service then processes the rulesets in the Message Definitions. For example, if the MESSAGE_CREATION service is processing the UNSIGNED_TIMECARD_REMINDER Message Definition, it fires the UnsignedTimecardReminderRuleset. This ruleset checks the person's time card in the current pay period and if this time card is unsigned and it is the last scheduled day of the period, a message is sent to the person reminding them to sign their time card.
To create a Dialog Message, the MESSAGE_CREATION service processes records for persons with the Message Policy/Message Definition specified in the service’s parameters. If a Message Definition for a Dialog message is included, the service will generate the dialog message accordingly. The Dialog Message's Message Definition settings will determine if the message displays after login or after a specific event is selected on the terminal.
The MESSAGE_NAMES (Message Definitions) you select must be included in one of the selected MESSAGE_POLICIES in order for the service to create messages based on the definition.
For example, you may have one Message Policy that contains both the Unsigned Time Card Reminder and Unsigned Time Card Warning Message Definitions. You want to configure one instance of the MESSAGE_CREATION service to process the Unsigned Time Card Reminder messages, and another instance to process the Unsigned Time Card Warning messages. Both instances will have the same MESSAGE_POLICY parameter, but one instance will have the Unsigned Time Card Reminder MESSAGE_NAME and one instance will have the Unsigned Time Card Warning MESSAGE_NAME.
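The requirement that a selected MESSAGE_NAME must also appear in one of the selected MESSAGE_POLICIES amounts to an intersection. A sketch under the assumption that a policy can be modeled as the set of Message Definitions it contains; the policy and definition names below are illustrative.

```python
def definitions_to_process(selected_policies, selected_names, policy_contents):
    """Return the Message Definitions this instance will actually
    process: those selected as MESSAGE_NAMEs that also appear in at
    least one selected MESSAGE_POLICY.
    """
    in_policies = set()
    for policy in selected_policies:
        in_policies.update(policy_contents.get(policy, ()))
    return [name for name in selected_names if name in in_policies]

# One policy containing both definitions; this instance handles only reminders
policies = {"HOURLY": ["UNSIGNED_TIMECARD_REMINDER", "UNSIGNED_TIMECARD_WARNING"]}
print(definitions_to_process(["HOURLY"], ["UNSIGNED_TIMECARD_REMINDER"], policies))
```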
MESSAGE_POLICY: Defines the Message Policies for which the service will generate messages. Move the policies from the Available column to the Selected column to enable them. The MESSAGE_CREATION service will only generate messages for persons assigned to these Message Policies. If no Message Policies are in the Selected column, this instance will not generate any Dialog or System messages.
MESSAGE_NAME: Defines the Message Definitions for which the service will generate messages. The available options are Message Definitions with the DIALOG or SYSTEM Message Type. Move the Message Definitions from the Available column to the Selected column to enable them. The MESSAGE_CREATION service will only generate messages based on the selected Message Definitions.
Process Name: MESSAGE_DELIVERY
Default Schedule: Run every hour, indefinitely
Default Schedule Enabled: No
Module Required: Messaging
The MESSAGE_DELIVERY service delivers email messages.
You can send Broadcast, Trigger, System, and Exception messages via email.
Broadcast Messages: The Send Email box must be checked. You can define the subject, header, message text, and trailer of the email message when you create the broadcast message.
Trigger Messages: The SEND_EMAIL_TO_USER and/or SEND_EMAIL_TO_MANAGER Trigger Settings must be True.
System and Exception Messages: The Create Message operands in the Message Definition’s ruleset must include Email in the Medium parameter.
The MESSAGE_DELIVERY service also sends email messages regarding terminal status changes. These messages are generated by the TERMINAL_MONITOR service. Note that the terminals you want to monitor must have the Monitor box checked in the Terminal form.
You must configure your system’s settings in order to send email. To receive an email message, a person must have an e-mail address defined on the Employee form.
RETRY_DELIVERY
When an email message delivery fails (for example, if the recipient does not have an email address defined in their Person record), an error log record is created. The RETRY_DELIVERY parameter indicates whether the MESSAGE_DELIVERY service will continue trying to send the email each time the service runs.
The default setting is TRUE, meaning the MESSAGE_DELIVERY service will continue trying to send the email each time the service runs. The message will still be marked as Ready even after the error log record is created. If the message delivery continues to fail, a new error log record will be created for each failed attempt.
If you change this setting to False, the message will be marked as Error after the first failed message delivery. The MESSAGE_DELIVERY service will no longer try to send the email when the service runs.
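The RETRY_DELIVERY behavior reduces to a small decision rule. A sketch only; the status strings follow the text, and the function name is illustrative.

```python
def message_status_after_attempt(retry_delivery, delivery_failed):
    """Message status after an email delivery attempt.

    With RETRY_DELIVERY=TRUE a failed message stays Ready, so the
    service retries on every run (logging a new error each time). With
    FALSE it is marked Error after the first failure and never retried.
    """
    if not delivery_failed:
        return "Sent"
    return "Ready" if retry_delivery else "Error"

print(message_status_after_attempt(True, True))   # Ready - retried next run
print(message_status_after_attempt(False, True))  # Error - no further attempts
```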
Process Name: OFFLINE_DATA_PROCESSOR
Default Schedule: None
Module Required: None
Sometimes terminals lose connectivity with the application server and go offline. When a terminal is offline, employees can continue to punch transactions and the terminal will store the data in offline files.
Once the connectivity to the application server is restored and the terminal is back online, the offline files are moved to the terminal_offline_queue table. You can view these files in the Offline Data Queue form.
The OFFLINE_DATA_PROCESSOR service moves the data from the terminal_offline_queue table to the terminal_offline_queue_dtl table. You can view these records in the Offline Data Records form. The records will be sorted and processed in the order of their transaction timestamps. If a transaction is successfully processed, its status will be C (Complete).
Some terminals with offline data will not go back online until the OFFLINE_DATA_PROCESSOR service runs (in order to ensure the time sequence integrity of the transactions). The status of these terminals will remain as Offline - Processing Offline Data until the service runs.
Note: If a terminal is a member of a Terminal Group, the service will process the offline records of all the terminals in a Terminal Group when the Terminal Group's status is Offline Processing or Online, regardless of the status of the individual group members. If the Terminal Group's status is Offline Queuing, the service will not process the offline records of any terminals in the group, regardless of the status of the individual group members. See Configuring Shop Floor Time to Use Multiple Data Collection Systems for more information.
The Offline Data Queue form contains the offline data file that has been downloaded from the terminal to the application server. Each offline data file may contain multiple transactions; you can view these transactions in the Offline Data tab at the bottom of the Offline Data Queue form. The Offline Data Queue form displays the data in the terminal_offline_queue table.
The Offline Data Records form contains individual transactions from each of the processed files in the Offline Data Queue form. These transactions have been processed by the OFFLINE_DATA_PROCESSOR service. This data is stored in the terminal_offline_queue_dtl table.
The Offline Records tab in the Offline Data Records form displays the actual content of each transaction. The Offline Record Error tab in the Offline Data Records form displays details about offline data records with the Record Status of Error.
If the OFFLINE_DATA_PROCESSOR service encounters an error when processing the offline data (e.g., an employee enters an invalid work order number), the service will record the error in the Error Log. You can view this error in the Offline Record Error tab of the Offline Data Records form.
A message can also be sent to the person whose offline transaction caused the error, this person's supervisor, and a system administrator. The message will contain details about the error, where it occurred, and when it occurred.
To create these messages, you will need to make sure the EXCEPTION_MESSAGE_CREATION service is configured to process the ERROR_LOGGED_BY_OFFLN_DATA_PROC_EXCPTN Message Definition. This Message Definition uses a Messaging Ruleset to create messages for the person and the supervisor. You will need to create a rule and add it to the ruleset for this Message Definition.
The OFFLINE_DATA_PROCESSOR service will process records in the terminal_offline_queue table in the order of their timestamps. You can use the GID, DID, and TERMINAL_PROFILE parameters to configure the service to process these records for specific terminals or a Terminal Profile.
GID, DID
GID is the terminal's Group ID number and DID is the terminal's Device ID number. These values are defined in the Terminal form.
If you want this instance of the OFFLINE_DATA_PROCESSOR service to only process the offline data of a specific terminal, enter the terminal's GID and DID.
If you specify only a GID or a DID, then this instance will process the offline data of all terminals with the specified GID or DID.
TERMINAL_PROFILE
If you want this instance of the OFFLINE_DATA_PROCESSOR service to only process the offline data of terminals with a specific Terminal Profile, move the Terminal Profile from the Available column to the Selected column.
If no Terminal Profiles are in the Selected column, and there is no GID/DID specified, this instance of the OFFLINE_DATA_PROCESSOR service will process all the offline data records in the terminal_offline_queue table.
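The GID/DID/TERMINAL_PROFILE selection rules above can be sketched as a single predicate. The combination of a GID/DID filter with a Terminal Profile filter is an assumption (the text does not state how they interact when both are set; this sketch treats them as cumulative restrictions).

```python
def instance_processes(terminal, gid=None, did=None, selected_profiles=()):
    """Decide whether this OFFLINE_DATA_PROCESSOR instance handles a
    terminal's offline data. With no filters set, everything is processed;
    specifying only GID (or only DID) matches all terminals with that value."""
    if gid is not None and terminal["gid"] != gid:
        return False
    if did is not None and terminal["did"] != did:
        return False
    if selected_profiles and terminal["profile"] not in selected_profiles:
        return False
    return True
```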
Process Name: OUT_EXPORT_TABLE
Instance Name: OUT_BAAN_HRA
Default Schedule: None
Module Required: None
The OUT_BAAN_HRA service exports work order transactions to the HRA staging table in Baan. This service selects transactions with a specific Sender Name, which is determined by a person’s Process Policy. Baan function servers then update the core Baan tables with the HRA records. The function servers then write a success or error message to a Baan table, and Shop Floor Time imports this error information to update the transaction and the error log.
SENDER_NAME: Identifies the sender of the data. Select the name of the Baan instance you defined in the Interface Host form. Available options are Receivers defined in the Interface Host form.
TRANSACTION_GROUP: A Transaction Group is used to group Transaction Names for a particular Interface. The table columns that are mapped to the Transaction Names that use the Transaction Group will be populated when the service runs. You can view the available Transaction Names and Transaction Groups for each interface in the Interface Trans tab of the Interface form.
EXPORT_NAME: Export Definition that will be used by the service.
Process Name: OUT_EXPORT_TABLE
Instance Name: OUT_BAAN_HRS
Default Schedule: None
Module Required: None
The OUT_BAAN_HRS service exports project transactions to the HRS staging table in Baan. This service selects transactions with a specific Sender Name, which is determined by a person’s Process Policy. Baan function servers then update the core Baan tables with the HRS records. The function servers then write a success or error message to a Baan table, and Shop Floor Time imports this error information to update the transaction and the error log.
SENDER_NAME: Identifies the sender of the data. Select the name of the Baan instance you defined in the Interface Host form. Available options are Receivers defined in the Interface Host form.
TRANSACTION_GROUP: A Transaction Group is used to group Transaction Names for a particular Interface. The table columns that are mapped to the Transaction Names that use the Transaction Group will be populated when the service runs. You can view the available Transaction Names and Transaction Groups for each interface in the Interface Trans tab of the Interface form.
EXPORT_NAME: Export Definition that will be used by the service.
Process Name: OUT_EXPORT_TABLE
Instance Name: OUT_BAAN_SFC
Default Schedule: None
Module Required: None
The OUT_BAAN_SFC service exports project and financial transactions to the TTMSFC100 table in Baan.
SENDER_NAME: Identifies the sender of the data. Select the name of the Baan instance you defined in the Interface Host form. Available options are Receivers defined in the Interface Host form.
TRANSACTION_GROUP: A Transaction Group is used to group Transaction Names for a particular Interface. The table columns that are mapped to the Transaction Names that use the Transaction Group will be populated when the service runs. You can view the available Transaction Names and Transaction Groups for each interface in the Interface Trans tab of the Interface form.
EXPORT_NAME: Export Definition that will be used by the service.
Process Name: OUT_COSTPOINTSF
Instance Name: OUT_COSTPOINTSF
Default Schedule: Run every 5 minutes, indefinitely
Default Schedule Enabled: Yes
Module Required: Costpoint Shop Floor Core
The OUT_COSTPOINTSF Service sends transaction data to Costpoint.
A separate instance of the OUT_COSTPOINTSF service must be configured for each Costpoint system that will be sent data. The service’s SENDER_NAME parameter is used to identify the Costpoint system.
OUT_READ_EXPORT: Reads data from the transaction tables.
OUT_CONVERT: Takes data from Out XML Queue and puts it in the Interface Out Queue.
OUT_SEND_WEB_SERVICE: Sends the data to Costpoint.
SENDER_NAME: Identifies the destination Costpoint system; posted transactions will be sent to this Costpoint server. Select the name of the Costpoint system you defined in the Interface Host form. Available options are Receivers defined in the Interface Host form.
TRANSACTION_GROUP: Transaction group is used to group transaction names. The table columns that are mapped to the Transaction Names that use the Transaction Group will be populated when the service runs. You can view the available Transaction Names and Transaction Groups for each interface in the Interface and Interface Trans forms.
NUMBER_PARALLEL_SEND: Number of interface records to send simultaneously.
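The effect of NUMBER_PARALLEL_SEND can be illustrated with a small concurrency sketch. This is an assumption about the parameter's behavior (a bounded pool of simultaneous sends), not the service's actual code; `send_fn` is a hypothetical stand-in for the call that transmits one interface record to Costpoint.

```python
from concurrent.futures import ThreadPoolExecutor

def send_parallel(records, send_fn, number_parallel_send=4):
    """Send interface records with up to NUMBER_PARALLEL_SEND concurrent
    requests. Results come back in the original record order."""
    with ThreadPoolExecutor(max_workers=number_parallel_send) as pool:
        return list(pool.map(send_fn, records))
```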
Process Name: OUT_EBS_AP_EXPORT
Default Schedule: None
Module Required: AutoTime Interfaces
The OUT_EBS_AP_EXPORT service can be used to run an AP Export for a specific Export Definition. The service will export any records that have been Payroll Locked and have not been previously exported. You can also define the number of days prior to the day the service runs for which the service will export records.
You can schedule the OUT_EBS_AP_EXPORT service to run automatically or you can run the service instance manually via the Service Monitor form.
You can also run an AP Export manually from the Exports form.
OUT_READ_EXPORT: Reads data from the transaction tables.
OFFSET_DAYS: Number of days prior to the day the service runs for which the service will export records. For example, if OFFSET_DAYS is set to 7 and the service runs on 5/17/2013, the service will export records from 5/11/2013 to 5/17/2013. The default value is 0, meaning the service will only export records with the current date (the date the service runs).
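The OFFSET_DAYS window can be sketched as date arithmetic. This follows one reading of the example above (a seven-day window ending on the run date: 5/11 through 5/17 for OFFSET_DAYS = 7), which is an interpretation rather than a documented formula.

```python
from datetime import date, timedelta

def export_window(run_date, offset_days=0):
    """Return the (start, end) date range the service exports.
    OFFSET_DAYS = 0 (the default) exports only the run date itself."""
    if offset_days <= 0:
        return run_date, run_date
    # Inclusive window of offset_days days ending on the run date,
    # matching the 5/11/2013-5/17/2013 example in the text.
    return run_date - timedelta(days=offset_days - 1), run_date
```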
SENDER_NAME: Indicates which external system will receive the exported data for this instance of the OUT_EBS_AP_EXPORT Service. Available options are Receivers defined in Interface Host form.
TRANSACTION_GROUP: Transaction group is used to group transaction names. The table columns that are mapped to the Transaction Names that use the Transaction Group will be populated when the service runs. You can view the available Transaction Names and Transaction Groups for each interface in the Interface and Interface Trans forms.
EXPORT_NAME: Identifies the Export Definition for which this instance should be run. Available options are Export Definitions defined in the Export form.
MODE: Select TODAY to export transactions with the current date or prior. Select ALL to export transactions with the current date, a prior date, or a future date.
The POST_DATE parameter in your Export Definition is used to restrict the transactions that will be exported based on their Post Date. For example, you may want to prevent transactions with a Post Date in the future from being exported.
Make sure the MODE parameter for the OUT_EBS_AP_EXPORT service is the same as the Export Definition’s POST_DATE parameter. If these settings are different, then the more restrictive setting will be used.
If the service’s MODE is TODAY, the Export Definition’s POST_DATE parameter should be set to Current. If the service’s MODE is ALL, then the Export Definition’s POST_DATE parameter should also be set to All.
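The "more restrictive setting wins" rule above reduces to a simple combination, sketched here as an assumption about how the two settings interact (the same rule applies to the other export services later in this section that share the MODE/POST_DATE pairing).

```python
def effective_filter(mode, post_date):
    """Combine the service MODE (TODAY/ALL) with the Export Definition's
    POST_DATE (Current/All). If either side restricts exports to the
    current date or prior, that restriction applies."""
    restrictive = mode.upper() == "TODAY" or post_date.upper() == "CURRENT"
    return "TODAY" if restrictive else "ALL"
```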
Process Name: OUT_EBS_AP_INVOICE
Default Schedule: None
Module Required: AutoTime Interfaces
The OUT_EBS_AP_INVOICE service should be run after the OUT_EBS_AP_EXPORT service. The OUT_EBS_AP_INVOICE service executes a stored procedure in your Oracle EBS database. This stored procedure takes the data that was exported from Shop Floor Time to Oracle EBS (by the OUT_EBS_AP_EXPORT service) and moves this data to Oracle EBS tables that store invoice data.
You can schedule the OUT_EBS_AP_INVOICE service to run automatically or you can run the service instance manually via the Service Monitor form.
OUT_EXECUTE_STORED_PROC: Executes a stored procedure (specified in the STORED_PROCEDURE_NAME parameter) in your Oracle EBS database that takes the data that was exported from Shop Floor Time to Oracle EBS (by the OUT_EBS_AP_EXPORT service) and moves this data to Oracle EBS tables that store invoice data.
SENDER_NAME: Indicates which external system will receive the exported data for this instance of the OUT_EBS_AP_INVOICE Service. Available options are Receivers defined in Interface Host form.
STORED_PROCEDURE_NAME: Name of the stored procedure that is executed by this service The default stored procedure, ORACLE_EBS_AP_INVOICE.sql, is located in the \db directory where Shop Floor Time is installed (\db\sql\scripts\schema\ORACLE\ERP\ORACLE_EBS).
Process Name: OUT_EBS_GL_EXPORT
Default Schedule: None
Module Required: AutoTime Interfaces
The OUT_EBS_GL_EXPORT service can be used to run a GL Export.
The OUT_EBS_GL_EXPORT service uses its SENDER_NAME and TRANSACTION_GROUP parameters to look up a record in the Distribution Model that will indicate the Export Name to use.
You can schedule the OUT_EBS_GL_EXPORT service to run automatically or you can run the service instance manually via the Service Monitor form. You can also run a GL Export manually from the Exports form.
The OUT_EBS_GL_EXPORT service will only export transactions with the Sender Name specified in the service’s parameter. To ensure your transactions have the correct Sender Name, make sure you configure your Process Policy correctly. This service instance should be included as a Process Name in your Process Policy (via the Process or Event tab) and the Sender Name you specify for this service instance should be assigned to that Process Name.
OUT_READ_EXPORT: Reads data from the transaction tables.
SENDER_NAME: Indicates which external system will receive the exported data for this instance of the OUT_EBS_GL_EXPORT Service. Available options are Receivers defined in Interface Host form.
TRANSACTION_GROUP: Transaction group is used to group transaction names. The table columns that are mapped to the Transaction Names that use the Transaction Group will be populated when the service runs. You can view the available Transaction Names and Transaction Groups for each interface in the Interface and Interface Trans forms.
MODE: Select TODAY to export transactions with the current date or prior. Select ALL to export transactions with the current date, a prior date, or a future date.
The POST_DATE parameter in your Export Definition is used to restrict the transactions that will be exported based on their Post Date. For example, you may want to prevent transactions with a Post Date in the future from being exported.
Make sure the MODE parameter for the OUT_EBS_GL_EXPORT service is the same as the Export Definition’s POST_DATE parameter. If these settings are different, then the more restrictive setting will be used.
If the service’s MODE is TODAY, the Export Definition’s POST_DATE parameter should be set to Current. If the service’s MODE is ALL, then the Export Definition’s POST_DATE parameter should also be set to All.
Process Name: OUT_EBS_PROJECT
Default Schedule: None
Module Required: AutoTime Interfaces
The OUT_EBS_PROJECT service sends project data to Oracle EBS.
You need to define an instance of this service for each Sender Name/Transaction Group that will be receiving the export data.
The OUT_EBS_PROJECT service will only export transactions with the Sender Name specified in the service’s parameter. To ensure your transactions have the correct Sender Name, make sure you configure your Process Policy correctly. This service instance should be included as a Process Name in your Process Policy (via the Process or Event tab) and the Sender Name you specify for this service instance should be assigned to that Process Name.
OUT_READ_EXPORT: Reads data from the transaction tables.
SENDER_NAME: Indicates which external system will receive the exported data for this instance of the OUT_EBS_PROJECT Service. Available options are Receivers defined in Interface Host form.
TRANSACTION_GROUP: Transaction group is used to group transaction names. The table columns that are mapped to the Transaction Names that use the Transaction Group will be populated when the service runs. You can view the available Transaction Names and Transaction Groups for each interface in Interface and Interface Trans.
MODE: Select TODAY to export project transactions with the current date or prior. Select ALL to export project transactions with the current date, a prior date, or a future date.
The POST_DATE parameter in your Export Definition is used to restrict the transactions that will be exported based on their Post Date. For example, you may want to prevent transactions with a Post Date in the future from being exported.
Make sure the MODE parameter for the OUT_EBS_PROJECT service is the same as the Export Definition’s POST_DATE parameter. If these settings are different, then the more restrictive setting will be used.
If the service’s MODE is TODAY, the Export Definition’s POST_DATE parameter should be set to Current. If the service’s MODE is ALL, then the Export Definition’s POST_DATE parameter should also be set to All.
Process Name: OUT_EBS_WIP_COST
Default Schedule: Run every day at 3 AM, indefinitely
Default Schedule Enabled: No
Module Required: AutoTime Interfaces
The OUT_EBS_WIP_COST service sends work order data to Oracle EBS to track production costs (work in process costs).
You need to define an instance of this service for each Sender Name/Transaction Group that will be receiving the export data.
The OUT_EBS_WIP_COST service will only export transactions with the Sender Name specified in the service’s parameter. To ensure your transactions have the correct Sender Name, make sure you configure your Process Policy correctly. This service instance should be included as a Process Name in your Process Policy (via the Process or Event tab) and the Sender Name you specify for this service instance should be assigned to that Process Name.
OUT_READ_EXPORT: Reads data from the transaction tables.
SENDER_NAME: Indicates which external system will receive the exported data for this instance of the OUT_EBS_WIP_COST Service.
TRANSACTION_GROUP: The Transaction Group you select will determine the data that is included in the export.
A Transaction Group is used to group Transaction Names. These values are defined in the Interface Trans form. In the Distribution Model form, the Transaction Group has a defined Export Name. The Export Definition form is used to define the data that will be included in the Export Name.
The application comes with a predefined Transaction Group (ORACLE_EBS_WIP_COST) and predefined Export Definition (OUT_EBS_WIP_COST) for WIP Cost. You can use these settings or copy them and modify the copies to create your own custom WIP Cost export.
MODE: Select TODAY to export transactions with the current date or prior. Select ALL to export transactions with the current date, a prior date, or a future date.
The POST_DATE parameter in your Export Definition is used to restrict the transactions that will be exported based on their Post Date. For example, you may want to prevent transactions with a Post Date in the future from being exported.
Make sure the MODE parameter for the OUT_EBS_WIP_COST service is the same as the Export Definition’s POST_DATE parameter. If these settings are different, then the more restrictive setting will be used.
If the service’s MODE is TODAY, the Export Definition’s POST_DATE parameter should be set to Current. If the service’s MODE is ALL, then the Export Definition’s POST_DATE parameter should also be set to All.
Process Name: OUT_FILE_WRITER
Default Schedule: None
Module Required: None
The OUT_FILE_WRITER service will convert data in the Interface Out Queue (interface_out_queue table) to an XML file. If your Export Definition is configured to export to queue, you can use this service to create an XML file from the data in the queue.
The OUT_FILE_WRITER service will read the records from interface_out_queue as specified by the INTERFACE_NAME, RECEIVER_NAME, and TRANSACTION_NAME parameters.
Each record the service reads will be output to a file in the folder specified by the DESTINATION_FOLDER parameter. The processed record will then be marked as complete in the Interface Out Queue.
RECEIVER_NAME: Indicates which external system will receive the exported data. Select the name of the external system you defined in the Interface Host form. Available options are Receivers defined in the Interface Host form.
INTERFACE_NAME: A unique identifier for the interface transaction record, as displayed in the Distribution Model form.
INTERFACE_NAME_ALIAS: (This field is not used in Shop Floor Time.) If you specified a custom interface name in the Distribution Model form (Interface Name Alias field) for this RECEIVER_NAME, select it from this field. Otherwise, you can leave this parameter blank.
TRANSACTION_NAME: This parameter is optional. Select a specific transaction to process. Available options are defined in the Interface Trans form.
TRANSACTION_NAME_ALIAS: If you specified a custom transaction name in the Distribution Model form (Trans Name Alias field) for this RECEIVER_NAME, select it from this field. Otherwise, you can leave this parameter blank.
OVERRIDE_FILE: Select how an existing output file should be handled. Select APPEND to add the new data to the existing file. Select DELETE to delete the existing output file before creating a new one. Select PREVENT to stop the processing and log an error; the record will remain in R status.
DESTINATION_FOLDER: Folder where the OUT_FILE_WRITER service will place the output file.
FILE_NAME: Enter the name for the XML file that will be created.
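The OVERRIDE_FILE behavior described above can be sketched as follows. This is an illustrative model of the APPEND/DELETE/PREVENT choices, not the service's actual file-writing code.

```python
import os

def write_output(path, xml_text, override_file="APPEND"):
    """Write one processed record's XML to the destination file,
    applying the OVERRIDE_FILE setting when the file already exists:
    APPEND adds to it, DELETE replaces it, PREVENT aborts with an error."""
    if os.path.exists(path):
        if override_file == "PREVENT":
            # The real service logs an error; the record stays in R status.
            raise RuntimeError("Output file exists; processing stopped")
        if override_file == "DELETE":
            os.remove(path)
    mode = "a" if override_file == "APPEND" else "w"
    with open(mode=mode, file=path, encoding="utf-8") as f:
        f.write(xml_text)
```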
Process Name: OUT_SAP
Default Schedule: None
Module Required: SAP Interfaces
The OUT_SAP service is used to export data from Shop Floor Time to SAP. For more information, see SAP Interface.
You must create an instance of the OUT_SAP service for each Transaction Group you are exporting. You can view the available Transaction Names and Transaction Groups for the SAP interface in the Interface Trans tab of the Interface form.
Shop Floor Time includes the pre-defined instances of the OUT_SAP service shown below. It is recommended that you copy the system-defined service instances and use the duplicate versions, which can be modified as necessary.
OUT_SAP_CONF21
OUT_SAP_CONF32
OUT_SAP_CONF42
OUT_SAP_ZCAN21
OUT_SAP_ZCAN32
OUT_SAP_ZCAN42
Each instance corresponds to the type of record that is being exported.
The OUT_SAP service uses its SENDER_NAME and TRANSACTION_GROUP parameters to look up a record in the Distribution Model that will indicate the Export Definition to use. The Export Definition has a PROCESS_NAME Export Parameter. The OUT_SAP service will select transactions that have this process name. The OUT_SAP service uses the Export Definition to select data from these transactions to export.
The service places this data in the interface_out_xml_queue table (Out XML Queue form). It then converts the Out XML Queue data into an IDOC format, and places the data in the Interface Out Queue form (interface_out_queue table). Finally, it takes the SAP records that have a Ready status in the Interface Out Queue form and sends them to SAP as IDOCs.
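The three-stage flow described above (transaction tables to Out XML Queue, conversion to IDOC format in the Interface Out Queue, then sending Ready records to SAP) can be sketched as a pipeline. The stage functions here are hypothetical placeholders for OUT_READ_EXPORT, OUT_CONVERT, and OUT_SEND_RFC_SAP; the queue representation is a simplification.

```python
def out_sap_pipeline(transactions, to_xml, to_idoc, send_to_sap):
    """Model of the OUT_SAP flow: export selected transactions into the
    Out XML Queue, convert each entry to IDOC format in the Interface
    Out Queue, then send every Ready record to SAP."""
    out_xml_queue = [to_xml(t) for t in transactions]            # OUT_READ_EXPORT
    interface_out_queue = [
        {"idoc": to_idoc(x), "status": "Ready"} for x in out_xml_queue
    ]                                                            # OUT_CONVERT
    for rec in interface_out_queue:                              # OUT_SEND_RFC_SAP
        if rec["status"] == "Ready":
            send_to_sap(rec["idoc"])
            rec["status"] = "Complete"
    return interface_out_queue
```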
OUT_READ_EXPORT: Reads data from the transaction tables.
OUT_CONVERT: Takes data from Out XML Queue and puts it in the Interface Out Queue.
OUT_SEND_RFC_SAP: Sends the records from Interface Out Queue (IDOCs) to SAP.
SENDER_NAME: Select the name of the SAP system you defined in the Interface Host form. Available options are Receivers defined in the Interface Host form.
TRANSACTION_GROUP: A Transaction Group is used to group Transaction Names for a particular Interface. The table columns that are mapped to the Transaction Names that use the Transaction Group will be populated when the service runs. You can view the available Transaction Names and Transaction Groups for each interface in the Interface Trans tab of the Interface form.
The OUT_SAP service uses its SENDER_NAME and TRANSACTION_GROUP parameters to look up a record in the Distribution Model that will indicate the Export Name to use. See "How the OUT_SAP Service Works" above for more information.
MODE: Select TODAY to export transactions with the current date or prior. Select ALL to export transactions with the current date, a prior date, or a future date.
OVERRIDE_RECEIVER_NAME: Use this parameter to override the default value of the RCVPRN field in the generated IDOC. By default, the RCVPRN field contains the Receiver Name of the Interface Out Queue record. If the OVERRIDE_RECEIVER_NAME parameter is blank, the RCVPRN field will contain this default value. If you select a value for the OVERRIDE_RECEIVER_NAME, this value will be used in the RCVPRN field in the generated IDOC instead.
NUMBER_PARALLEL_SEND: Use this setting if you want to send multiple concurrent records to SAP.
The POST_DATE parameter in your Export Definition is used to restrict the transactions that will be exported based on their Post Date. For example, you may want to prevent transactions with a Post Date in the future from being exported.
Make sure the MODE parameter for the OUT_SAP service is the same as the Export Definition’s POST_DATE parameter. If these settings are different, then the more restrictive setting will be used.
If the service’s MODE is TODAY, the Export Definition’s POST_DATE parameter should be set to Current. If the service’s MODE is ALL, then the Export Definition’s POST_DATE parameter should also be set to All.
Process Name: OUT_WIP_MOVE
Default Schedule: Run every day at 4 AM, indefinitely
Default Schedule Enabled: No
Module Required: AutoTime Interfaces
The OUT_WIP_MOVE service reads transactions from the move tables (created by Move and Work Order events) and sends them to OTL.
OUT_READ_MOVE: Reads transactions from the move tables (created by Move and Work Order events).
OUT_CONVERT: Takes data from Out XML Queue and puts it in the Interface Out Queue.
OUT_SEND_STORED_PROC: Takes data from Interface Out Queue and sends it to the Oracle EBS database.
INTERFACE_NAME: A unique identifier for the interface transaction record, as displayed in the Distribution Model form. The default value is ORACLE_WIP.
PROCESS_NAME: If a process name is listed, the service will move only action records from the Action Move Quantity table that use the indicated process name. The Process Name can be viewed in the Process Name column of the Process Status tab (action_process_status table): Main Menu > Administration > Data Collection > Action Move Quantity > Process Status. The default value is READ_ACTION_MOVE.
TRANSACTION_GROUP: Transaction group is used to group transaction names. The table columns that are mapped to the Transaction Names that use the Transaction Group will be populated when the service runs. You can view the available Transaction Names and Transaction Groups for each interface in Interface and Interface Trans. The default TRANSACTION_GROUP value is WIP_MOVE.
ENFORCE_FIFO:
TRUE: Stop processing if an error occurs. The raw data will display in the Out Transaction Data form (interface_out_queue table).
FALSE: Keep processing even if an error has occurred.
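The ENFORCE_FIFO choice can be sketched as a processing loop. This is an illustrative assumption about the behavior described above; `handle` is a hypothetical stand-in for sending one move record to the destination.

```python
def process_moves(records, handle, enforce_fifo=True):
    """Process move records in order. With ENFORCE_FIFO=TRUE, stop at
    the first error so later records are not sent out of sequence;
    with FALSE, record the error and keep going."""
    errors = []
    for rec in records:
        try:
            handle(rec)
        except Exception as exc:
            errors.append((rec, str(exc)))
            if enforce_fifo:
                break   # raw data stays visible in the Out Transaction Data form
    return errors
```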
Process Name: PAYROLL_EXPORT
Default Schedule: None
Module Required: Export
The PAYROLL_EXPORT service can be used to run a payroll export for a specific Export Definition. You can also define which Pay Groups and Pay Periods to include in the export.
The Export Definition you select will determine whether records must be employee signed, supervisor signed, or payroll locked in order to be exported. If your export includes split days from 9-80 schedules, the first and second half of a split day can be signed and locked independently. Only the half that meets the Export Definition’s payroll lock and sign requirements will be exported.
You can schedule the PAYROLL_EXPORT service to run automatically (via the Service Schedule tab) or you can run the service instance manually via the Service Monitor form.
You can also run a Payroll Export manually from the Exports form, provided the Export Destination is FILE or TABLE. When you run a Payroll Export from the Exports form, you can include data that is entered in user-defined fields when you run the export. This data will not be included when you run the Payroll Export via the PAYROLL_EXPORT service instance. You can also specify whether to include Previously Exported Records and download exports that have already been generated.
The OUT_CONVERT and OUT_TEXT_CONVERT tasks are used when the Export Definition has Export Destination set to QUEUE. These tasks will transfer the export data from the Out XML Queue to the Interface Out Queue.
OUT_CONVERT: Select this Task if the Export Definition has Export Destination set to QUEUE and the Export Type is BCOREXML or XML. This Task takes the export data from the Out XML Queue and places it in the Interface Out Queue.
OUT_TEXT_CONVERT: Select this Task if the Export Definition has Export Destination set to QUEUE and the Export Type is CSV, CSV With Header, Fixed Length, or JSON. This Task takes the export data from the Out XML Queue and places it in the Interface Out Queue.
If the Export Definition has Export Destination set to QUEUE and neither of these Tasks is selected, the data will only be sent to the Out XML Queue.
Make sure you also configure the INTERFACE_NAME, TRANSACTION_NAME, and SENDER_NAME parameters so the data can be exported to the Interface Out Queue.
EXPORT_DEFINITION: Identifies the payroll export definition for which this instance should be run. Available options are Export Definitions with the Payroll context.
PAY_POLICY: Identifies the Pay Groups (defined in the Pay Policy form) that will be considered for export by the PAYROLL_EXPORT service. The PAYROLL_EXPORT service will export the Pay Groups in the Selected column as long as these Pay Groups are included in the Export Definition you selected (Export Parameters tab).
PAY_PERIODS_BACK: If you want to include prior pay periods in the export, enter the number of prior pay periods in this field. The default value is 1. If you enter 0 (zero), then only the current pay period will be included; the current pay period is based on the Pay Policy in effect on the date the service runs. If the Export Definition has Payroll Lock Required checked, then every day that has a transaction in this pay period range must be payroll locked. Otherwise no data in this pay period range will be exported.
CREATOR_PERSON_NUM: Identifies the person who created the payroll export. If you are using Security Data Roles, make sure the CREATOR_PERSON_NUM has a Security Data Role with the PAY_GROUP_NAME item and that this item includes the Pay Groups listed in the selected Export Definition (Export Parameters tab).
INTERFACE_NAME, TRANSACTION_NAME, and SENDER_NAME: If the EXPORT_DEFINITION you selected (above) has its Export Destination set to QUEUE, you need to set the INTERFACE_NAME, TRANSACTION_NAME, and SENDER_NAME parameters. Otherwise, an error will occur when you run the EXPORT service. You do not need to set these parameters if your EXPORT_DEFINITION has its Export Destination set to FILE or TABLE. These parameters are used to populate values in the Interface Out Queue when the export data is moved from the Out XML Queue to the Interface Out Queue. You need to select values for these parameters that match ones in the Distribution Model form; otherwise an error will occur when you try to run the service.
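The QUEUE-destination requirement above can be sketched as a validation step when a queue entry is built. The field names here are simplified assumptions; the real service validates these parameters against the Distribution Model form and raises an error on a mismatch, which this sketch only approximates with a presence check.

```python
def queue_entry(export_row, interface_name, transaction_name, sender_name):
    """Populate the Interface Out Queue fields from the service
    parameters when Export Destination is QUEUE. All three parameters
    are required; missing values would cause the real service to fail."""
    for name, value in (("INTERFACE_NAME", interface_name),
                        ("TRANSACTION_NAME", transaction_name),
                        ("SENDER_NAME", sender_name)):
        if not value:
            raise ValueError(f"{name} is required when exporting to QUEUE")
    return {"interface_name": interface_name,
            "transaction_name": transaction_name,
            "sender_name": sender_name,
            "data": export_row}
```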
Process Name: PAYROLL_LOCK
Default Schedule: None
Module Required: Payroll
Payroll Lock will "freeze" attendance and labor transactions so that employees and supervisors cannot post any additional transactions/punches to the date. However, it may be possible to sign/unsign a payroll locked timecard, depending on the employee's Sign Policy settings.
You can use the PAYROLL_LOCK service to lock or unlock a person’s timecard. The service’s default setting is to lock timecards. If you want to use the PAYROLL_LOCK service to unlock timecards, it is recommended that you change the name of the service instance accordingly.
The PAYROLL_LOCK service will lock or unlock records for specific Pay Policies. A person must be assigned a Pay Policy in order to have his or her timecard locked or unlocked by the PAYROLL_LOCK service. The PAYROLL_LOCK service will not lock or unlock the timecards of Terminated employees.
You can configure the PAYROLL_LOCK service to lock or unlock the current or previous pay period or pay week plus additional periods/weeks prior to the current or previous one. You can also configure the service to check for employee and/or supervisor signatures before locking or unlocking a day.
The PAYROLL_LOCK service can be scheduled to run at different times for different Pay Policies.
Note: The Lock by Pay Policy tab of the Payroll Lock form can also be used to lock or unlock the timecards for entire Pay Policies.
To determine which employees can be payroll locked or unlocked, the PAYROLL_LOCK service looks at the INCLUDE_ALL_PAY_POLICIES, PAY_POLICY, FACILITY, and EXTERNAL_PAY_GROUP parameters. The service will select the employees that belong to the Pay Policies, Facilities, and External Pay Groups specified in these parameters. Note that the service will not select employees whose status is Terminated during the specified periods.
For each of these employees, the service selects the dates to lock or unlock based on the PERIOD_TYPE, PERIOD, and ADDTL_PERIODS_BACK parameters. The PAYROLL_LOCK service also selects these records based on the Timezone configured for the service instance.
If the PERIOD parameter is set to CURRENT, the PAYROLL_LOCK service selects the dates in the current pay period/week. The current pay period/week is determined by the Pay Policy in effect at the time the service is running.
If the PERIOD parameter is set to PREVIOUS, the PAYROLL_LOCK service first determines the current pay period/week based on the Pay Policy in effect at the time the service is running. To determine the previous pay period/week, the service looks at the Pay Policy in effect on the day prior to the current period/week’s Start Date. The service then selects the dates in the previous pay period/week.
If any ADDTL_PERIODS_BACK are specified, the service determines these pay periods/weeks based on the Pay Policy in effect on the day prior to the previous period/week’s Start Date.
The PAYROLL_LOCK service also selects these records based on the Timezone configured for the service instance. If the service instance does not have a configured time zone, then it will use the application server’s time zone.
For example, the PAYROLL_LOCK service instance has its Timezone set to Eastern Time and the application server’s host machine is set to Mountain Time. The service’s PERIOD parameter is CURRENT. The previous pay period is from 6/27/2016 to 7/3/2016 and the current pay period is from 7/4/2016 to 7/10/2016.
The PAYROLL_LOCK service runs at 10:30 p.m. Mountain Time on 7/3/2016, which is 12:30 a.m. Eastern Time on 7/4/2016. Because the service instance's Timezone is set to Eastern Time, it determines the current period is from 7/4/2016 to 7/10/2016 and processes these records accordingly. If the PAYROLL_LOCK service did not have a configured Timezone, it would select records based on the server’s time zone (Mountain Time, 10:30 p.m. on 7/3/2016) and would process the period from 6/27/2016 to 7/3/2016 instead.
The service then locks or unlocks the selected dates, depending on the setting of the LOCK_UNLOCK parameter.
The PAYROLL_LOCK service’s parameters determine which employees and pay periods/weeks the service will attempt to lock or unlock. The service looks at pay periods/weeks for employees that are assigned to specific Pay Policies, Facilities, and External Pay Groups. It will select the Current or Previous pay period/week and possibly additional pay periods/weeks that meet the employee and supervisor signature requirements. The service will not select employees whose status is Terminated during the specified periods/weeks.
Note that the PAYROLL_LOCK service also determines which records to lock or unlock based on the Timezone configured for the service instance. If the service instance does not have a configured time zone, then it will use the server’s time zone.
If a pay period/week cannot be locked or unlocked because it lacks a required employee or supervisor signature, the PAYROLL_LOCK service will skip that pay period/week and create an Error Log record (one error message for each employee and pay period/week). If the other pay periods/weeks specified by the service’s parameters can be locked or unlocked, then the service will lock or unlock them.
INCLUDE_ALL_PAY_POLICIES
TRUE or FALSE. Indicates whether all the Pay Policies will be considered when selecting employees to lock or unlock, regardless of which policies are Selected in the PAY_POLICY parameter. If you set this parameter to TRUE, then employees from all Pay Policies will be considered. If you set this parameter to FALSE, the service will only consider the policies that are Selected in the PAY_POLICY parameter. If you select FALSE and no policies are Selected in the PAY_POLICY parameter, then the service will not consider employees from any Pay Policy.
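The interaction between INCLUDE_ALL_PAY_POLICIES and the PAY_POLICY selection can be sketched in Python. This is an illustrative sketch only; the function name and list shapes are hypothetical:

```python
def pay_policies_considered(include_all: bool, selected_policies: list[str],
                            all_policies: list[str]) -> list[str]:
    """Which Pay Policies the service considers when selecting employees.

    TRUE ignores the PAY_POLICY selection entirely; FALSE uses only the
    Selected policies, so an empty selection yields no employees.
    """
    if include_all:
        return list(all_policies)
    return list(selected_policies)

print(pay_policies_considered(True, [], ["Weekly", "Biweekly"]))   # ['Weekly', 'Biweekly']
print(pay_policies_considered(False, [], ["Weekly", "Biweekly"]))  # []
```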
PAY_POLICY
Identifies the Pay Policies that will be locked or unlocked by the PAYROLL_LOCK service. The service will lock or unlock the Pay Policies in the Selected column. Move the Pay Policies you want to lock or unlock from the Available to the Selected column.
PERIOD_TYPE
Indicates whether the PAYROLL_LOCK service will process a pay WEEK or a pay PERIOD. The default value is PERIOD.
This setting is used by the PERIOD parameter (see below). For example, if PERIOD_TYPE is set to WEEK, and PERIOD is set to PREVIOUS, the service will process the previous pay week.
PERIOD
Indicates whether the PAYROLL_LOCK service will lock or unlock records for the PREVIOUS or CURRENT pay period or pay week (defined by the PERIOD_TYPE). The default value is PREVIOUS.
The pay week or pay period that the PAYROLL_LOCK service processes may include half of a split day in a 9-80 schedule. The PAYROLL_LOCK service will respect the split day boundaries. For example, a pay week may include a split day from the previous pay week for which the Second Half is part of the pay week you have selected. The service will therefore lock or unlock the Second Half hours on this split day, but not the First Half hours.
Note that the PAYROLL_LOCK service also determines which records to lock or unlock based on the Timezone configured for the service instance. If the service instance does not have a configured time zone, then it will use the server’s time zone.
ADDTL_PERIODS_BACK
Number of pay periods/weeks that the service will lock or unlock in addition to the CURRENT or PREVIOUS period/week (specified by the PERIOD parameter). You may want to use this setting to make sure all your previous pay periods/weeks are locked or unlocked before exporting them to payroll, especially if you adjusted any transactions in these prior periods.
Select a value from 0 to 52. If you select 0, then only the CURRENT or PREVIOUS period/week will be locked or unlocked (specified by the PERIOD_TYPE and PERIOD parameters). If you select a value from 1-52, the service will lock or unlock this number of periods/weeks prior to the CURRENT or PREVIOUS period. For example, if PERIOD is set to CURRENT and ADDTL_PERIODS_BACK is set to 3, then the service will lock or unlock a total of 4 periods/weeks – the current period/week plus the last 3 periods/weeks. Likewise, if PERIOD is set to PREVIOUS and ADDTL_PERIODS_BACK is set to 3, the service will lock or unlock the previous pay period/week and the 3 periods/weeks prior to it.
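The arithmetic behind PERIOD and ADDTL_PERIODS_BACK can be sketched in Python. This is an illustrative sketch only (the helper name is hypothetical); 0 means the current period/week, 1 the previous one, and so on:

```python
def periods_to_process(period: str, addtl_periods_back: int) -> list[int]:
    """Offsets (in periods back from the current one) that the service
    will lock or unlock: the CURRENT or PREVIOUS period/week plus
    ADDTL_PERIODS_BACK additional periods/weeks prior to it."""
    start = 0 if period == "CURRENT" else 1
    return list(range(start, start + 1 + addtl_periods_back))

print(periods_to_process("CURRENT", 3))   # [0, 1, 2, 3] -> 4 periods in total
print(periods_to_process("PREVIOUS", 3))  # [1, 2, 3, 4] -> previous plus the 3 prior
```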
FACILITY
Identifies the Facilities that will be locked or unlocked by the PAYROLL_LOCK service. The service will lock or unlock the Facilities in the Selected column. Move the Facilities you want to lock or unlock from the Available to the Selected column. If no Facilities are in the Selected column, the PAYROLL_LOCK service will run for all facilities.
EXTERNAL_PAY_GROUP
Identifies the External Pay Groups that will be locked or unlocked by the PAYROLL_LOCK service. The service will lock or unlock the External Pay Groups in the Selected column. Move the External Pay Groups you want to lock or unlock from the Available to the Selected column. If no External Pay Groups are in the Selected column, the PAYROLL_LOCK service will run for all External Pay Groups.
External Pay Groups are defined in the Charge Element form. The Charge Type is EXTERNAL_PAY_GROUP.
External Pay Groups are assigned to a person using a Person Assignment. The PAYROLL_LOCK service will only consider EXTERNAL_PAY_GROUP assignments with Level 0 (non-override records).
EMPLOYEE_SIGN
Indicates whether the PAYROLL_LOCK service will lock or unlock only those days that have been signed by the employee. If you select NOT_CONSIDERED, the service will lock or unlock all the days in the pay period, regardless of whether they are employee signed. If you select SIGNED, the service will check all the days in the pay period for an employee signature. If any of the days in the period do not have an employee signature, then the service will not lock or unlock any day in the period for that employee.
SUPERVISOR_SIGN
Indicates whether the PAYROLL_LOCK service will lock or unlock only those days that have been signed by the supervisor. If you select NOT_CONSIDERED, the service will lock or unlock all the days in the pay period, regardless of whether they are supervisor signed. If you select SIGNED, the service will check all the days in the pay period for a supervisor signature. If any of the days in the period do not have a supervisor signature, then the service will not lock or unlock any day in the period for that employee.
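The all-or-nothing signature check that EMPLOYEE_SIGN and SUPERVISOR_SIGN apply to a pay period can be sketched in Python. This is an illustrative sketch, not the product's implementation; the function name and the day-record shape are hypothetical:

```python
def can_lock_period(days, employee_sign="NOT_CONSIDERED",
                    supervisor_sign="NOT_CONSIDERED") -> bool:
    """True if every day in the pay period meets the signature
    requirements; one missing required signature blocks the whole
    period for that employee."""
    for day in days:
        if employee_sign == "SIGNED" and not day["employee_signed"]:
            return False
        if supervisor_sign == "SIGNED" and not day["supervisor_signed"]:
            return False
    return True

# Six signed days and one unsigned day: the period cannot be locked
# when EMPLOYEE_SIGN is SIGNED.
week = [{"employee_signed": True, "supervisor_signed": False}] * 6 + \
       [{"employee_signed": False, "supervisor_signed": False}]
print(can_lock_period(week, employee_sign="SIGNED"))  # False
print(can_lock_period(week))                          # True (signatures not considered)
```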
LOCK_UNLOCK
Indicates whether the PAYROLL_LOCK service will lock or unlock the selected records. The default setting is LOCK. If you change this setting to UNLOCK, it is recommended that you also change the name of the service instance accordingly.
Process Name: PTO_REQUEST_EXPIRE
Default Schedule: None
Module Required: None
The PTO_REQUEST_EXPIRE Service updates time off requests that have expired and assigns them a status of EXPIRED. An expired request has not been approved, disapproved, or cancelled before the Start Date of the request.
If you have any time off requests for dates in the past (prior to the current date), make sure the PTO_REQUEST_EXPIRE service does not mark these requests as Expired before a supervisor has a chance to approve them.
Process Name: PURGE
Default Schedule: Run every day at 1 AM, indefinitely
Default Schedule Enabled: No
Module Required: None
The PURGE service deletes records from database tables. You can define how long data should remain in the tables before it is purged. Use the Purge form to create these settings.
PURGE_CATEGORY: If you want to purge only one specific category, you can select the category in the PURGE_CATEGORY field in the Service Parameters form. Note that the category must be enabled in the Purge form. To enable a category, check the Enabled box in the Purge form. If you do not select a PURGE_CATEGORY, the service will run for all enabled categories.
Purge Categories:

ACTION (action, action_prompt, and action_process_status tables): Action data contains records of actions made at specific terminals. The data can be viewed in the Action, Action Prompts, and Process Status forms. The settings you apply here will affect all three types of data. Default Purge Days: 2557

AUDIT (audit_log table): Audit data is data from tables that you have enabled for auditing via the Audit Group form. Audit data can be viewed in the Audit Log and Audit Log Data forms. Default Purge Days: 2557

BIOMETRIC_TEMPLATE (biometric_template table): Deletes unenrolled fingerprint templates from the biometric_template table. These records can be viewed in the Biometric Assignment form. Default Purge Days: 2557

EMPLOYEE_LOAN (employee_loan and employee_loan_assignment tables): Deletes records from the employee_loan and employee_loan_assignment tables. These records can be viewed in the Employee Loan and Manage Employee Loan forms. The PURGE service will only delete employee loan records with an End Date that is less than or equal to the current date, minus the Purge Days. Default Purge Days: 2557

ERROR_LOG (error_log table): Deletes records from the error_log table. These error messages can be viewed in the Error Log form. Default Purge Days: 366

EXPORT (export_output, export_output_data, export_output_data_map, and export_output_parameter tables): Deletes export data, which is generated from exports that are run via the Exports form, the EXPORT service instance, or the PAYROLL_EXPORT service instance. Default Purge Days: 31

INTERFACE_IN (interface_in_queue, interface_in_xml_queue, and interface_in_xml_queue_dtl tables): Affects Interface In Queue and In XML Queue records. Default Purge Days: 7. In the interface_in_queue table (Interface In Queue form), the PURGE service will delete records with the Transaction Status C (Complete), S (Sync), or X (Cancelled). In the interface_in_xml_queue table (In XML Queue form), the service will delete records with the Queue Status C (Complete), W (Waiting), X (Cancelled), E (Error), or S (Sync). In the interface_in_xml_queue_dtl table (In XML Queue Dtl tab), the service will delete records with the Record Status C (Complete), X (Cancelled), or E (Error).

INTERFACE_OUT (interface_out_queue and interface_out_xml_queue tables): Affects Interface Out Queue and Out XML Queue records. Default Purge Days: 7. In the interface_out_queue table (Interface Out Queue form), the PURGE service will delete records with the Transaction Status C (Complete) or X (Cancelled). In the interface_out_xml_queue table (Out XML Queue form), the service will delete records with the Transaction Status C (Complete) or X (Cancelled).

MESSAGE (message and message_response tables): Deletes records in the message and message_response tables. This data can be viewed in the Message Log and My Messages forms on the web and in the Message View form on a client terminal. The PURGE service will delete expired messages (records with an end_date that is prior to the current date). If you do not want the PURGE service to delete a message, you can enable the DO_NOT_PURGE setting for the message. For Trigger Messages, this setting is enabled in the Trigger Setting tab of the Message Policy. For Dialog, System, and Exception Messages, this setting is enabled in the Message Settings tab of the Message Policy. You can also use the Set Message Attribute Do Not Purge operand in your Messaging Ruleset; the value of this operand overrides the message setting. Default Purge Days: 366

OFFLINE DATA (terminal_offline_queue and terminal_offline_queue_dtl tables): Offline data is data that has been downloaded from the terminal to the application server. This data can be viewed in the Offline Data Queue and Offline Data Records forms. The purge settings you apply here will affect both types of data. Default Purge Days: 366

PAYROLL_LOCK (sign and sign_audit tables): Deletes sign audit records that have been signed by Payroll Lock. The PAYROLL_LOCK category deletes records from the sign and sign_audit tables with a sign_category of PAYROLL. These records can be viewed in the Sign Audit form. Default Purge Days: 2557

PERSON_LOCK (person_posting_lock table): Deletes records from the person_posting_lock table. Default Purge Days: 366

PROCESS (process_audit table): Process data is available in the Service Audit form. It stores information about when a service instance started, ended, or failed. Default Purge Days: 31

PVE_SESSION_DATA (pve_session_data table): Deletes records from the pve_session_data table, which stores information created and used during event prompting sessions for individual persons. Once an event prompting session is no longer in use, its PVE Session Data can be deleted. Default Purge Days: 2

REPORTS (report_instance table): Deletes records from the report_instance table, which stores reports generated in the Reports form. Default Purge Days: 31

SIGN (sign and sign_audit tables): Deletes sign audit records that have been signed by an employee or supervisor. The SIGN category deletes records from the sign and sign_audit tables with a sign_category of EMPLOYEE or SUPERVISOR. These records can be viewed in the Sign Audit form. Default Purge Days: 2557

TERMINAL STATUS AUDIT (terminal_status_audit table): Terminal status audit data is stored in Status History. The Status History form allows you to view a specific terminal's status history. Default Purge Days: 31. Note: The PURGE service will always keep the most recent record in the terminal_status_audit table.

TRANS_ACTION (all trans_action tables): This data pertains to time spent on actions, such as time spent between a clock in and a clock out. Default Purge Days: 2557

TRANS_ATOMIC (trans_atomic table): This data pertains to time spent between events, such as time spent between clock in, break start, break end, and clock out. Default Purge Days: 2557
CHUNK_SIZE: Number of records the PURGE service will collect before it performs a delete on the database. For example, if the CHUNK_SIZE is set to 5000 and there are 11,000 records to purge, the service will delete the first 5000 records it identifies, then it will delete the next 5000, and finally it will delete the last 1000.
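The chunking behavior can be sketched in Python. This is an illustrative sketch only, not the service's actual code; the delete_batch callback is a hypothetical stand-in for a real DELETE statement:

```python
def purge_in_chunks(record_ids, delete_batch, chunk_size=5000) -> int:
    """Delete records in batches of chunk_size, mirroring the
    CHUNK_SIZE behavior described above.  Returns the total deleted."""
    deleted = 0
    for i in range(0, len(record_ids), chunk_size):
        batch = record_ids[i:i + chunk_size]
        delete_batch(batch)  # one DELETE per chunk in the real service
        deleted += len(batch)
    return deleted

# 11,000 purgeable records with CHUNK_SIZE 5000 -> deletes of 5000, 5000, 1000.
batches = []
purge_in_chunks(list(range(11_000)), batches.append, chunk_size=5000)
print([len(b) for b in batches])  # [5000, 5000, 1000]
```

Deleting in bounded chunks keeps each database transaction small, which is the usual motivation for this kind of setting.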
Process Name: RECALCULATION
Default Schedule: None
Module Required: Recalculation Licensing
The RECALCULATION service is used to bring an employee’s timecard into compliance with their pay rules. Non-compliance can occur in a variety of situations. For example, if a supervisor adjusts the timecard, the adjusted transaction may no longer comply with the pay rules. Or, the gap days at the beginning of a pay period may need to be processed after the scheduled days in order for overtime to be awarded correctly.
An employee's transactions will be checked for recalculation if the employee is assigned a Recalculation Policy that is enabled. The Process Order in the Recalculation Policy will be used to mark transactions that may need recalculation. When you run the RECALCULATION service, the employee's timecard will be brought into compliance with their pay rules based on the settings in the Recalculation Policy.
The RECALCULATION service parameters determine which employees will have their timecards checked for recalculation; whether recalculation will be forced; and the range of pay periods to check for recalculation. You can also configure the RECALCULATION service to run for values entered on a specific Terminal ID and prevent certain error messages from being logged for the RECALCULATION service.
See Also: How Often to Run the RECALCULATION Service
RECALCULATION_POLICY: If you select a Recalculation Policy, only employees assigned to this policy will have their timecards processed by the RECALCULATION service. If you do not select a Recalculation Policy, then the RECALCULATION service will process the timecards of employees assigned to all the available Recalculation Policies.
FORCE: Select Yes or No. The default value for the FORCE parameter is blank (No).
Set this parameter to Yes if you want to recalculate the selected records even if the service determines that recalculation is not necessary. Setting this parameter to Yes is the same as using the Recalculate button in the timecard and clicking OK to force recalculation. For example, you may want to set FORCE to Yes if the service will be processing timecards for employees that have the All Range Worked operand in their pay rules. This operand needs to be processed even when the service does not detect a need for recalculation (for example if the hours on each day were entered chronologically and no adjustments were made).
If you do set the FORCE parameter to Yes, it is recommended that you do so in a separate instance of the RECALCULATION service. You should also configure the PERIOD_BACK_START and PERIOD_BACK_END parameters so that only one pay week or pay period will be affected, and select a RECALCULATION_POLICY that only includes the persons who need forced recalculation. This instance of the RECALCULATION service should only be run once per pay week/pay period, after all adjustments have been made.
PERIOD_BACK_START: This is the number of periods prior to the current period where the RECALCULATION service will start searching for records that need recalculation. You can enter a value of zero or greater in this field. A value of zero represents the current period.
PERIOD_BACK_END: This is the number of periods prior to the current period where the RECALCULATION service will stop searching for records that need recalculation. You can enter a value of zero or greater in this field. A value of zero represents the current period.
If both the PERIOD_BACK_START and the PERIOD_BACK_END are left blank, then these parameters will not be used. The service will only use the Max Periods Back setting in the employee's Recalculation Policy.
If the PERIOD_BACK_START and PERIOD_BACK_END range spans beyond the Max Periods Back setting in the employee's Recalculation Policy, the service will only search up to the Max Periods Back. The range between the PERIOD_BACK_END and the PERIOD_BACK_START can begin no earlier than the start of the Max Periods Back and can end no later than today. For example, a Recalculation Policy has Max Periods Back set to 2, meaning the current period and the two periods prior to the current period will be checked. The RECALCULATION service's PERIOD_BACK_START and PERIOD_BACK_END parameters are set to 4 and 0 respectively, meaning the service should check the current period and the 4 periods prior to the current period. However, this range falls outside the policy's Max Periods Back setting. Therefore, the RECALCULATION service will only check the current period and the two periods prior to the current period.
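The clamping described above can be sketched in Python. This is an illustrative sketch only; the helper name and the (start, end) tuple convention, where 0 is the current period and larger values are further in the past, are hypothetical:

```python
def effective_period_range(period_back_start, period_back_end,
                           max_periods_back) -> tuple[int, int]:
    """Clamp the service's PERIOD_BACK range to the Recalculation
    Policy's Max Periods Back setting."""
    if period_back_start is None and period_back_end is None:
        # Parameters left blank: only the policy setting applies.
        return (max_periods_back, 0)
    start = min(period_back_start, max_periods_back)  # no earlier than Max Periods Back
    end = max(period_back_end, 0)                     # no later than the current period
    return (start, end)

# Service asks for 4 periods back through current, but the policy's
# Max Periods Back is 2, so only the current and two prior periods apply.
print(effective_period_range(4, 0, max_periods_back=2))  # (2, 0)
```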
Refer to the RECALCULATION Service Example – PERIOD_BACK Parameters for more information.
GID, DID: Group Identifier and Device Identifier. The GID/DID combination identifies a specific type of terminal. If you want to narrow the RECALCULATION service to run for values entered on a specific Terminal ID, enter the GID/DID values here.
SUPPRESS_ERROR_CODES: This parameter will prevent certain error messages from being logged for the RECALCULATION service. This parameter is designed to prevent excessive error messages from accumulating for the service. By default, no error codes are selected for this parameter. To select an error code and prevent it from being logged, move the specified error code from the Available to the Selected column.
A company has weekly pay periods. The current period and the previous four periods are shown below:

Week of October 30
Week of November 6
Week of November 13
Week of November 20
Week of November 27 (current period)

The Recalculation Policy has Max Periods Back set to 4, meaning the RECALCULATION service can search the current period and the previous 4 periods for transactions to recalculate.
The PERIOD_BACK_START and PERIOD_BACK_END parameters in the RECALCULATION service are used to select specific periods (within the last 4 periods) to recalculate.
To recalculate a specific pay period (within the last 4 periods), set the PERIOD_BACK_START and PERIOD_BACK_END parameters as follows:

Week of October 30: PERIOD_BACK_START = 4, PERIOD_BACK_END = 4
Week of November 6: PERIOD_BACK_START = 3, PERIOD_BACK_END = 3
Week of November 13: PERIOD_BACK_START = 2, PERIOD_BACK_END = 2
Week of November 20: PERIOD_BACK_START = 1, PERIOD_BACK_END = 1
Week of November 27 (current period): PERIOD_BACK_START = 0, PERIOD_BACK_END = 0
Process Name: RESTART_TERMINALS
Default Schedule: None
Module Required: Terminal Monitor
The RESTART_TERMINALS service will restart selected 9300, 9520, and 9540 terminals in Application or Service mode.
When a terminal is in Service Mode, you can configure its network parameters (Terminal IP, DNS Server Address, etc.) using the Service Browser. In Application Mode, the client software runs on the terminal and employees can use the login screen and other forms. See Terminal Monitoring for more information.
You can select specific terminals to restart and the mode to which they should be restarted. Refer to the Service Parameters below for more information.
RESTART_MODE: Select APPLICATION or SERVICE. When a terminal is in APPLICATION mode, the client software runs on the terminal and employees can use the login screen and other forms. When a terminal is in SERVICE mode, you can configure its network parameters (Terminal IP, DNS Server Address, etc.) using the Service Browser. See Terminal Monitoring for more information.
TERMINALS: To select a terminal to be restarted by the service, move it from the Available column to the Selected column. The Available terminals are 9300, 9520, and 9540 terminals defined in the Terminal form. If you do not select any terminals, then all the 9300, 9520, and 9540 terminals defined in the Terminal form will be restarted by the service. If you select one or more specific terminals to be restarted, then the service will only restart those terminals.
Note: The Monitor box in the Terminal form does not have to be checked for a terminal to be restarted by this service.
Process Name: SCHEDULE_GENERATION
Default Schedule: None
Module Required: Schedules
The SCHEDULE_GENERATION service generates schedules that have been created in the Schedule Cycle form.
You must assign a schedule to a person in order to have the SCHEDULE_GENERATION service generate the schedule. You can assign a schedule to a person via the Person Assignment form or the Assign button in the Schedule Cycle form.
You can run this service for specific Schedule Cycles, a Person Number, or specific Person Groups. Schedules can be generated for a strict range of dates or you can specify a date range with an offset.
When you configure the service to run for PEOPLE, the service will generate all the Schedule Cycles that have been assigned to the specified employees.
When you run the service for a specific CYCLE, any employees that are assigned to the specified Schedule Cycle will have a schedule generated.
Once a schedule is generated, it will display in the Person Schedule form.
Note: You cannot generate a schedule on a day that is payroll locked. The SCHEDULE_GENERATION service will skip payroll locked days and record an error in the Error Log form.
These parameters determine which schedules will be generated by the SCHEDULE_GENERATION service. Note that a schedule must be assigned to a person before the schedule can be generated. You can assign a schedule via the Person Assignment form or the Assign button in the Schedule Cycle form.
PERSON_NUM: This parameter applies only if you select PEOPLE in the RUN FOR parameter. If you want to generate a schedule for a specific person (in addition to any groups already selected in the Person Group Values column in the Service Instance form), enter the Person Number here.
CYCLE_NAME: Identifies the Schedule Cycles you want to generate. When you run the service for specific Schedule Cycles, any of the specified persons who are assigned to the selected cycles will have a schedule generated. Move the Schedule Cycle from the Available column to the Selected column if you want it to be generated by the SCHEDULE_GENERATION service. If no Schedule Cycles are in the Selected Column, the service will generate all Schedule Cycles assigned to the specified persons.
RANGE_TYPE: Select STRICT or OFFSET.
STRICT = The service runs for the dates listed in the START_DATE and END_DATE parameters, and any days in between.
OFFSET = The service runs for the number of days listed in the LENGTH parameter.
RUN FOR: Indicates whether the service should run for CYCLES or PEOPLE.
CYCLES: The service will generate the Schedule Cycles that are selected in the CYCLE_NAME parameter. Any persons who are assigned to the specified Schedule Cycles will have their assigned schedules generated. If you select this setting, neither the Person Group Value selection in the Service Instance form nor the PERSON_NUM parameter will be considered.
PEOPLE: The service will generate the CYCLE_NAME selections that have been assigned to any of the following:
The Groups listed in the Person Group Values field in the Service Instance form.
The Person Number listed in the PERSON_NUM parameter.
START_OFFSET: This setting applies only if your RANGE_TYPE is OFFSET. The value entered here determines on which day the service should begin applying the LENGTH parameter. For today, enter 0; for tomorrow, enter 1; for the day after tomorrow, enter 2; and so on.
LENGTH: This setting applies only if your RANGE_TYPE is OFFSET. Enter the number of days from the START_OFFSET for which the service will generate schedules.
START_DATE: This setting applies only if your RANGE_TYPE is STRICT. The START_DATE is the first date for which the service will generate schedules.
END_DATE: This setting applies only if your RANGE_TYPE is STRICT. The END_DATE is the last date for which the service will generate schedules.
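The STRICT and OFFSET date-range logic described above can be sketched in Python. This is an illustrative sketch only; the function name, argument shapes, and the assumption that LENGTH counts days starting at the offset day are hypothetical:

```python
from datetime import date, timedelta

def generation_range(range_type, *, start_date=None, end_date=None,
                     start_offset=0, length=1, today=None):
    """First and last dates the SCHEDULE_GENERATION service covers.

    STRICT uses START_DATE..END_DATE directly; OFFSET starts
    START_OFFSET days from today and spans LENGTH days.
    """
    if range_type == "STRICT":
        return start_date, end_date
    today = today or date.today()
    first = today + timedelta(days=start_offset)
    return first, first + timedelta(days=length - 1)

# OFFSET example: starting tomorrow, generate 7 days of schedules.
print(generation_range("OFFSET", start_offset=1, length=7,
                       today=date(2016, 7, 4)))
# (datetime.date(2016, 7, 5), datetime.date(2016, 7, 11))
```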
During Schedule Generation, if there is an overlap between day X (the day being generated) and day X+1 or X-1, the new schedule and the overlapping (old) schedule are examined before generating the schedule. The New Overlap Algorithm is as follows:
New schedule gets generated unless the old schedule is Protected.
The above is true regardless of the schedule types (Normal, Optional, Exclusion, Ad Hoc Placeholder or Availability), whether there are punches on the post date, or anything else.
Old schedule gets updated (if partial overlap) or deleted (if complete overlap – the Real range is completely consumed) so that the overlap condition is avoided.
Scheduled events in the old schedule either do not get updated at all (not affected by new schedule) or else they get deleted (because there's either partial or total overlap).
A schedule that is “Protected” cannot be modified or deleted. Any new schedule whose generation would overlap a protected schedule will fail to generate.
Schedule Overlap
Complete Schedule Overlap – A complete overlap is where the “real” part of the old schedule (start time stamp to end time stamp) is entirely consumed by the applicable range of the new schedule.
For example, say the schedule on Tuesday is 0000 to 0700, with applicable range 2000 Monday to 1100 Tuesday. A new schedule for Monday is generated and is 2000 Monday to 0400 Tuesday, with applicable range 1600 Monday to 0800 Tuesday.
In this scenario, the “real” parts of the two schedules partially overlap, as do the applicable ranges. However, the applicable range of the new schedule (1600-0800) extends beyond the real part of the old schedule (0000-0700). This counts as a complete overlap, so the old schedule is deleted, including any events in it.
Partial Schedule Overlap - A partial overlap is any overlap that isn’t a complete overlap in that at least some real part of the schedule isn’t overlapped. In this case, the applicable and, if necessary, real ranges of the old schedule are adjusted so that the schedule shrinks and there is no overlap.
For example, say the schedule on Tuesday is 0000 to 0700, with applicable range 2000 Monday to 1100 Tuesday. A new schedule for Monday is generated and is 1600 Monday to 0000 Tuesday, with applicable range 1200 Monday to 0400 Tuesday.
In this scenario, in order to make the schedules no longer overlap, the old schedule is modified so that the applicable start and real start times are both 0400, so that at no moment do both schedules apply. The real and applicable end times are not adjusted, as they are both outside the new schedule.
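The two overlap cases above can be sketched in Python. This is an illustrative classification only, not the product's algorithm; the function name and the encoding of times as hours since Monday 0000 are hypothetical:

```python
def classify_overlap(old_real, new_applicable) -> str:
    """Classify the overlap between an old schedule's real range and a
    new schedule's applicable range (each a (start, end) pair).

    'complete' when the old real range is entirely consumed by the new
    applicable range (old schedule and its events are deleted);
    'partial' for any other overlap (old schedule is shrunk)."""
    old_start, old_end = old_real
    new_start, new_end = new_applicable
    if new_start >= old_end or new_end <= old_start:
        return "none"
    if new_start <= old_start and old_end <= new_end:
        return "complete"
    return "partial"

# First example: old real 0000-0700 Tuesday (hours 24-31), new
# applicable 1600 Monday-0800 Tuesday (hours 16-32).
print(classify_overlap((24.0, 31.0), (16.0, 32.0)))  # complete
# Second example: new applicable 1200 Monday-0400 Tuesday (hours 12-28).
print(classify_overlap((24.0, 31.0), (12.0, 28.0)))  # partial
```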
Scheduled Event Overlap
If a scheduled event is completely or partially overlapped, then the scheduled event is deleted. This applies regardless of whether the event is AUTO, PUNCH, or OVERRIDE.
If the scheduled event is defined as OFFSET instead of STRICT (with time stamps), the start and end of the event are calculated from the unmodified old schedule, and if there is any overlap, the scheduled event is deleted.
In the case where the event is not overlapped, and the real start time of the schedule is being moved later (partial overlap), the offsets are changed such that the event does not move within the schedule. For example, if an event in a schedule that starts at 8 AM has offsets of 4 and 4.5 (12-12:30 PM), and that start time changes to 9:30 AM, then the event’s offsets must become 2.5 and 3 to remain 12-12:30 PM.
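The offset adjustment in this example can be sketched in Python. This is an illustrative sketch only; the helper name is hypothetical:

```python
def adjust_event_offsets(start_offset, end_offset,
                         old_start_hour, new_start_hour):
    """Shift an OFFSET event's offsets when the schedule's real start
    moves later, so the event keeps the same wall-clock time."""
    shift = new_start_hour - old_start_hour
    return start_offset - shift, end_offset - shift

# Event at offsets 4 and 4.5 from an 8 AM start (12:00-12:30 p.m.);
# the schedule start moves to 9:30 AM.
print(adjust_event_offsets(4.0, 4.5, old_start_hour=8.0,
                           new_start_hour=9.5))  # (2.5, 3.0)
```

The event still runs 12:00 to 12:30 p.m. relative to the new 9:30 AM start.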
If a schedule is being generated on the Spring Daylight Saving Time day, and the schedule has a timestamp (Start Timestamp, End Timestamp, Applicable Start, or Applicable End) between 2:00 a.m. and 3:00 a.m., an hour offset will be added to the timestamp.
This offset is to account for Daylight Saving Time beginning at 2:00 a.m. local time on the Spring Daylight Saving Time day (the second Sunday in March). The time between 2 a.m. and 3 a.m. on this day technically does not exist, as the time has sprung forward starting at 2 a.m. (1:59 a.m. jumps to 3:00 a.m.).
For example, a Schedule Cycle’s Applicable End is March 8, 2015 at 2:30 a.m. When the SCHEDULE_GENERATION service generates this schedule, the Applicable End will be March 8 at 3:30 a.m.
Note: If this offset creates an overlap with the next day’s schedule, the next day’s schedule will be adjusted accordingly.
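The spring-forward adjustment amounts to pushing any timestamp in the nonexistent hour forward by one hour. A minimal sketch, assuming naive local timestamps (the real service works against its own schedule records):

```python
from datetime import datetime, timedelta

def spring_forward_adjust(ts, dst_day):
    """Push a timestamp forward one hour if it falls in the nonexistent
    2:00-3:00 a.m. window on the spring DST transition day."""
    window_start = dst_day.replace(hour=2, minute=0, second=0, microsecond=0)
    window_end = window_start + timedelta(hours=1)
    if window_start <= ts < window_end:
        return ts + timedelta(hours=1)
    return ts

# Applicable End of March 8, 2015 at 2:30 a.m. becomes 3:30 a.m.
print(spring_forward_adjust(datetime(2015, 3, 8, 2, 30),
                            datetime(2015, 3, 8)))  # 2015-03-08 03:30:00
```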
Process Name: SCHEDULED_EVENT
Default Schedule: None
Module Required: None
The SCHEDULED_EVENT service is used to post Suspend events to employee timecards based on schedules defined in the Scheduled Events form. Suspend events can be used to post events such as fire drills or team meetings so employees do not have to post the event themselves.
When the Suspend event starts, the labor, break, or meal event that was in progress will be suspended. This activity will resume when the Suspend event stops. The employee does not have to start or stop the activity that was in progress when the Suspend event occurred.
If the Suspend event is still in progress when the employee uses the shop floor terminal, the employee can choose to stop the Suspend event or clock out. Stopping the Suspend event will resume whatever activity was already in progress.
The event schedule determines whether the service will post the event as a punch event or an elapsed event. For a punch event, the SCHEDULED_EVENT service can post both the start and end of the event, or the service can start the event and the employee can end it.
The SCHEDULED_EVENT service will select event schedules based on the Person Group of the supervisor or administrator who created the schedule. You will need to configure the service’s Person Group Values accordingly.
You may want to create a Person Group of type SERVICE_GROUP for an instance of the SCHEDULED_EVENT service. The members of this group will be the persons who created the event schedules that will be processed by the service instance.
Select the SCHEDULED_EVENT service in the Service Instance form (Main Menu > Configuration > Services > Service Instance) and click Modify. In the Person Group Values field, move a Person Group from the Available column to the Selected column if you want the service to process schedules created by its members. If there are no Person Groups in the Selected column, then no schedules will be processed.
The SCHEDULED_EVENT service selects the event schedules to process using the following criteria:
The SCHEDULED_EVENT service uses its Person Group Values to determine which event schedules to process. It will only process the schedules created by members of these groups.
The event schedule must be Enabled.
The Status of the Start Timestamp, End Timestamp, or Post Date that is being processed must be Ready. For example, if the service is posting a punch pair event, and the Start Status is Complete but the End Status is Ready, then the service only needs to post the End Timestamp for the event.
The Start Timestamp, End Timestamp, or Post Date that is being processed must be for a date in the past (prior to the date and time the service is running).
To determine which employees will have the scheduled event posted to their timecard, the SCHEDULED_EVENT service looks at the Person Groups configured for the event schedule. If the schedule has All Supervised checked, then the event schedule will apply to all persons over which the schedule creator has authority. Otherwise, the Scheduled Group tab in the Scheduled Event form will show the Person Groups for which this schedule applies.
The service also looks at the Start Status and End Status of each Person Group configured for the schedule. If the status is Ready, then the service still needs to post the event’s Start Timestamp, End Timestamp, or Post Date for that particular Person Group.
For example, the service is posting a punch pair event for Facility_A and Facility_B. For Facility_A, the Start Status is Complete but the End Status is Ready. For Facility_B, both the Start and End Status are Ready. The service only needs to post the End Timestamp for the event for Facility_A but it needs to post both the Start and End Timestamps for Facility_B.
Note: The service will only post the event for the members of the Person Group that are managed by the schedule creator. Using the above example, if the schedule creator only manages 12 employees in Facility_B, then only those 12 employees will have the scheduled event posted on their timecard.
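The per-group status check in the example above can be sketched as follows. This is a simplified model; the status values come from the text, but the function shape is an assumption:

```python
def timestamps_to_post(start_status, end_status):
    """For a punch pair event, decide which timestamps still need to be
    posted for a Person Group: anything whose status is 'Ready'."""
    to_post = []
    if start_status == "Ready":
        to_post.append("Start Timestamp")
    if end_status == "Ready":
        to_post.append("End Timestamp")
    return to_post

# Facility_A: start already Complete; Facility_B: both statuses Ready
print(timestamps_to_post("Complete", "Ready"))  # ['End Timestamp']
print(timestamps_to_post("Ready", "Ready"))     # ['Start Timestamp', 'End Timestamp']
```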
For each type of event posting (Punch Start, Punch Pair, or Elapsed), certain conditions must be met.
For the SCHEDULED_EVENT service to post a Punch Start, the person must be Clocked In; there can be no events posted later than the Start Timestamp of the Suspend event; and the person’s Entry Type (defined in the Employment Profile) must be PUNCHED.
When the SCHEDULED_EVENT service posts a Punch Pair, it is the same as using Add Punch Pair in the timecard. The start and end timestamps of the Suspend event must fall within an existing event. The person who is receiving the Suspend event must have an Entry Type of PUNCHED.
For the SCHEDULED_EVENT service to post an Elapsed event, the person’s Entry Type cannot be PUNCHED.
Note that the event’s timestamp or post date must be prior to the date and time when the service is running. The service will not future post an event.
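The posting-type conditions can be collected into one hedged sketch. The boolean flags are simplifications of the checks described above, not product fields:

```python
def can_post(posting_type, entry_type, clocked_in=False,
             has_later_events=False, within_existing_event=False):
    """Condition checks for each event posting type (simplified)."""
    if posting_type == "PUNCH_START":
        # Person must be clocked in, have no events later than the
        # Suspend event's Start Timestamp, and have a PUNCHED Entry Type.
        return entry_type == "PUNCHED" and clocked_in and not has_later_events
    if posting_type == "PUNCH_PAIR":
        # Start and end must fall within an existing event; Entry Type
        # must be PUNCHED.
        return entry_type == "PUNCHED" and within_existing_event
    if posting_type == "ELAPSED":
        # Elapsed posting applies only to non-PUNCHED Entry Types.
        return entry_type != "PUNCHED"
    return False

print(can_post("PUNCH_START", "PUNCHED", clocked_in=True))  # True
print(can_post("ELAPSED", "PUNCHED"))                       # False
```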
Process Name: SPECIAL_PAY
Default Schedule: None
Module Required: None
The SPECIAL_PAY service processes transactions that have a Premium Code for Special Pay, provided the employee’s Pay Policy has Special Pay enabled. Special Pay is additional pay awarded to employees who have worked on a special assignment. This additional pay recognizes the special conditions in which the employee was working. For example, an employee who is assigned to work in a place with hazardous conditions may receive Special Pay in addition to their base pay.
Make sure you run the SPECIAL_PAY service before you run your Payroll Export.
You can check to see whether a transaction has been processed by the SPECIAL_PAY service by checking the Transaction Process Status tab in the Transaction Details form. The Process Name SPECIAL_PAY will show a status of R if it is ready to be processed, and C when processing is complete.
When the service finishes processing, the Special Pay rates and amounts will display in the Transaction Duration Detail tab of the Transaction Details form.
The SPECIAL_PAY service selects transactions for employees assigned to the Selected Pay Policies in its PAY_POLICY parameter. If there are no Pay Policies in the Selected column, the service will run for all Pay Policies. Note that these Pay Policies must have Special Pay enabled.
The service will also select Pay Policies based on their Special Pay Limit setting and the MAXIMUM_TIME parameter. If MAXIMUM_TIME is UNLIMITED, the service will only process Pay Policies that have Special Pay Limit set to Unlimited. If MAXIMUM_TIME is LIMITED, the service will only process Pay Policies that have Special Pay Limit set to Flat Value or Pay Period Schedule Paid.
The date range of the transactions that will be processed depends on the PERIOD parameter and the Special Pay Range in the Pay Policy. If the PERIOD parameter is set to CURRENT, the SPECIAL_PAY service will select the transactions in the current pay period or pay week – as well as earlier periods or weeks. Likewise, if the PERIOD parameter is set to PREVIOUS, the service will select the transactions in the previous pay period or pay week – as well as earlier periods or weeks.
For the transactions that meet the above criteria, the SPECIAL_PAY service will use the transaction’s Premium Code to calculate the Special Pay rate and amount. The service will update the status of the transaction’s SPECIAL_PAY process as needed. You can view this process status in the Transaction Process Status tab of the Transaction Details form.
You can view the transaction’s Special Pay rate and amount in the Transaction Duration Detail tab of the Transaction Details form.
The SPECIAL_PAY service has the following parameters:
Identifies the Pay Policies that can be processed by the SPECIAL_PAY service. The SPECIAL_PAY service will process transactions posted by employees who are assigned to these Pay Policies. Note that the Pay Policies in the Selected column must also have Special Pay enabled in order to be processed by the SPECIAL_PAY service.
Move the Pay Policies you want to process from the Available to the Selected column. You should only select Pay Policies that have the Special Pay box checked.
If no Pay Policies are in the Selected column, the SPECIAL_PAY service will run for all Pay Policies that have Special Pay enabled.
The MAXIMUM_TIME parameter (see below) also determines which of your selected Pay Policies will be processed by the SPECIAL_PAY service. If MAXIMUM_TIME is UNLIMITED, the service will only process Pay Policies that have Special Pay Limit set to Unlimited. If MAXIMUM_TIME is LIMITED, the service will only process Pay Policies that have Special Pay Limit set to Flat Value or Pay Period Schedule Paid.
Indicates whether the SPECIAL_PAY service will process transactions in the PREVIOUS pay week or pay period or the CURRENT pay week or pay period (as well as earlier periods or weeks).
The service selects the dates to process based on the PERIOD parameter and the Special Pay Range – Period, Week, or Period Incremental Week – in the Pay Policy.
If the Special Pay Range is set to Period, the service selects the transactions in the current pay period or the previous pay period (depending on the PERIOD parameter setting).
The current pay period is determined by the Pay Policy in effect at the time the service is running.
To determine the previous pay period, the service looks at the Pay Policy in effect on the day prior to the current period’s Start Date. The service then selects the dates in the previous pay period.
Example: The pay period is from Monday 5/23/2016 to Sunday 6/5/2016. The service runs on 6/2/2016. Special Pay Range is set to Period. If the SPECIAL_PAY service’s PERIOD parameter is CURRENT, the service will process transactions in the current period (5/23 to 6/5). If the PERIOD parameter is PREVIOUS, the service will process transactions in the previous period (5/9 to 5/22). In both cases, the SPECIAL_PAY service will also process earlier periods.
If the Special Pay Range is set to Week, the service selects the dates in the current pay week or the previous pay week (depending on the PERIOD parameter setting) in a biweekly pay period.
Example: The biweekly pay period is from Monday 5/23/2016 to Sunday 6/5/2016. The service runs on 6/2/2016. Special Pay Range is set to Week. If the SPECIAL_PAY service’s PERIOD parameter is CURRENT, the service will process transactions in the current pay week (5/30 to 6/5). If the PERIOD parameter is PREVIOUS, the service will process transactions in the previous pay week (5/23 to 5/29). In both cases, the SPECIAL_PAY service will also process earlier pay weeks.
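For the Period and Week settings, the CURRENT/PREVIOUS selection amounts to stepping back one range of equal length. A minimal sketch, assuming back-to-back, equal-length ranges (the real service derives the previous range from the Pay Policy in effect on the day before the current range starts):

```python
from datetime import date, timedelta

def special_pay_range(period_param, cur_start, cur_end):
    """Latest date range the service will process (earlier ranges are
    processed as well)."""
    if period_param == "CURRENT":
        return (cur_start, cur_end)
    length = (cur_end - cur_start).days + 1
    prev_end = cur_start - timedelta(days=1)
    return (prev_end - timedelta(days=length - 1), prev_end)

# Biweekly period Mon 5/23/2016 - Sun 6/5/2016, PERIOD = PREVIOUS
print(special_pay_range("PREVIOUS", date(2016, 5, 23), date(2016, 6, 5)))
# → May 9 to May 22, 2016
```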
Special Pay Range = Period Incremental Week
If the Special Pay Range is set to Period Incremental Week, the service processes the transactions in the first pay week in the biweekly pay period or in the entire biweekly pay period, depending on when the service is run within the pay period and the value of the service’s PERIOD parameter.
This option may be useful if you need to process and export the first week of a biweekly period, instead of waiting for the second week to complete. You can then process and export the entire biweekly period as a single unit when the whole period is ready.
Special Pay Range in Pay Policy = Period Incremental Week

If the PERIOD Parameter is… | And the SPECIAL_PAY service runs on the following date… | The service will process transactions in…
PREVIOUS | After the end of the first week in the biweekly pay period, but still inside the biweekly pay period | The first week in the previous biweekly pay period (and earlier periods)
PREVIOUS | After the end of the biweekly pay period | The previous biweekly pay period (and earlier periods)
CURRENT | Within the first week in the biweekly pay period | The first week in the current biweekly pay period (and earlier periods)
CURRENT | Within the second week in the biweekly pay period | The current biweekly pay period (and earlier periods)
Indicates whether the SPECIAL_PAY service will process Pay Policies that have Special Pay Limit set to Unlimited or to Flat Value/Pay Period Schedule Paid (Limited).
If you select UNLIMITED, the service will only process Pay Policies that have Special Pay Limit set to Unlimited.
If you select LIMITED, the service will only process Pay Policies that have Special Pay Limit set to Flat Value or Pay Period Schedule Paid. A limited number of the hours (a Flat Value or the Pay Period Schedule Paid hours) that an employee reports with a Special Pay Premium Code will receive additional pay using the Special Pay Rate.
If this hours limit exceeds the number of hours in the pay period that have a Special Pay Premium Code, the Special Pay Rate will be prorated based on this limit.
See Limited and Unlimited Special Pay for more information.
Note: If your MAXIMUM_TIME setting is LIMITED, you should run the SPECIAL_PAY service at the end of the pay period, to ensure that the Special Pay Rate is prorated correctly.
Process Name: TERMINAL_MONITOR
Default Schedule: Run every five minutes, indefinitely
Default Schedule Enabled: No
Module Required: Terminal Monitor
The Terminal Monitor service checks the status of your shop floor terminals and updates information in the Terminal Monitor form (Main Menu > Administration > Terminal > Terminal Monitor). To enable the service, see Service Instance.
The service will check only the terminals that have the Monitor box checked in the Terminal form. The terminal status will not update until the service runs again.
Each terminal performs a status check with the application server at a predefined interval (the Check Online Time value in the Terminal Type form). The application updates the terminal_status table with the timestamp for this status check and displays it in the Last Comm To Server field in the Terminal Monitor form. To determine the status of the terminal, the Terminal Monitor service looks at the Last Comm To Server timestamp, the Check Online Time setting, and the existence of offline data. The Terminal Monitor Service then updates the terminal’s status. The updates are reflected in the Status column of the Terminal Monitor and Status History forms.
If you are using Terminal Groups as part of the configuration to use multiple data collection systems with Shop Floor Time, the Terminal Monitor service will update the individual status of the terminals in the Terminal Group and update the Terminal Group’s status.
The service also generates email messages for responsible parties, such as system administrators or managers. Messages are sent according to the configuration for the TERMINAL_STATUS_CHANGE trigger in the Message Policy form. Emails are sent when the MESSAGE_DELIVERY service runs. For details, see Terminal Monitoring Feature.
GRACE: Number of seconds that the Terminal Monitor service will add to a terminal's Check Online Time setting when determining if the terminal is offline. If a terminal is online and accepting punch transactions, it may not communicate with the application server before the Check Online Time Interval; in this case, the Terminal Monitor service may incorrectly determine that the terminal is offline. The Terminal Monitor service's GRACE period increases the length of time that the service will wait before determining if a terminal is offline.
For example, if the Check Online Time value is 60 seconds, and the GRACE period is 30 seconds, the Terminal Monitor service will check to see if the terminal is connecting to the server every 60 seconds. If a terminal is not connecting to the server after 60 seconds, the Terminal Monitor service will wait an additional 30 seconds before checking again. If the difference between the current time and the last time the terminal communicated with the server is greater than or equal to the Check Online Time plus the GRACE period, the terminal will be considered offline. In this example, if it has been 90 seconds or more since the terminal last communicated with the server, the terminal will be offline.
The default GRACE value is 30 seconds. You can increase this GRACE value in 10-second increments up to 180 seconds (30, 40, 50, etc.). Select a value from the GRACE drop-down list. If you do not change the GRACE value, the default value (30 seconds) will be used.
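The offline determination reduces to a single comparison. A sketch with times in seconds (names are illustrative):

```python
def is_offline(now_s, last_comm_s, check_online_time_s, grace_s=30):
    """A terminal is considered offline once the time since its last
    status check reaches Check Online Time plus the GRACE period."""
    return (now_s - last_comm_s) >= (check_online_time_s + grace_s)

# Check Online Time 60s, GRACE 30s: offline after 90s of silence
print(is_offline(now_s=200, last_comm_s=110, check_online_time_s=60))  # True
print(is_offline(now_s=200, last_comm_s=120, check_online_time_s=60))  # False
```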
TERMINAL_PROFILE: Select the Terminal Profiles which the Terminal Monitor service will check to see if the terminal status has changed. Move the Terminal Profile from the Available column to the Selected column if you want the Terminal Monitor service to check it. If there are no Terminal Profiles in the Selected column, the service will check the status of all the Terminal Profiles.
MESSAGE_POLICY: Select the Message Policy that will be used to configure the messages for the Terminal Monitor service. The available options are Message Policies with the TERMINAL_STATUS_CHANGE message trigger.
See Also: Configure Email Notification for Terminal Status Changes
Process Name: TIME_COMPLETION
Default Schedule: None
Module Required: Time Completion
The TIME_COMPLETION service runs for employees who have Time Completion enabled in their Attendance Policy. The service will post a Day Worked event with the number of hours needed to complete the timecard.
The TIME_COMPLETION service checks the employee’s Attendance Policy to see if Time Completion is enabled. If Time Completion is not enabled, the service stops processing for that employee. If Time Completion is enabled, the service continues.
The service again checks the Attendance Policy (Time Completion tab) to see if the previous week or pay period should be processed (Range Indicator is Week or Period). The service then looks for the last day of the previous week or period.
Next, the TIME_COMPLETION service must determine the number of hours the employee is required to work in the week or period. If the employee has an Assignment of MINIMUM_ELAPSED_HOURS_WEEK or MINIMUM_ELAPSED_HOURS_PERIOD, the service will use the value of this assignment as the minimum required hours. If the employee does not have these assignments, the service will sum the scheduled paid hours from the previous week or period to determine the minimum required hours.
The TIME_COMPLETION service will then sum the hours in the previous week or period. All hours classifications will be included in this sum. For example, if an employee posted 32.0 Regular hours and 8.0 Unpaid hours, the service will view this as 40.0 hours.
If the employee posted less than the minimum required hours, the TIME_COMPLETION service will post the difference in hours. The service will post these hours to the Event Name configured in the Time Completion tab of the Attendance Policy.
The service will post these hours on the last scheduled day of the previous week or period. If the last scheduled day is a split day in a 9-80 schedule, the service will post the hours on the first half of the day.
Note: If the Time Completion event has a MAXIMUM_DURATION setting, the TIME_COMPLETION service will ignore this value so that it can post the necessary hours to complete the timecard.
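The shortfall computation above can be sketched as follows (illustrative only; the hours come from the service's own transaction data):

```python
def hours_to_post(posted_hours, minimum_required):
    """Hours the TIME_COMPLETION service posts as the configured event:
    the shortfall between the sum of posted hours (all classifications)
    and the minimum required hours for the week or period."""
    return max(minimum_required - sum(posted_hours), 0.0)

# 32.0 Regular + 6.0 Unpaid posted against a 40-hour requirement
print(hours_to_post([32.0, 6.0], 40.0))  # 2.0
```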
If you have configured messaging to notify employees and supervisors when the TIME_COMPLETION service has posted hours, you may want to create an instance of the BATCH service that will run the TIME_COMPLETION and MESSAGE_CREATION services successively. When the TIME_COMPLETION service finishes running, the MESSAGE_CREATION service will run. You can also include the MESSAGE_DELIVERY service in this batch.
See “Configure Messaging for Time Completion” in the Time Completion topic for information on how to configure your messaging rules, policies, and other settings.
Process Name: TOTAL_TIME_ACCOUNTING
Default Schedule: None
Module Required: Total Time Accounting
The Total Time Accounting feature allows you to configure specific events to receive an adjusted Total Time Rate. This Total Time Rate is designed to make labor costs the same per hour for all transactions in a pay period, even when some of those transactions are unpaid. The Total Time Rate is calculated by the TOTAL_TIME_ACCOUNTING service.
The TOTAL_TIME_ACCOUNTING service will only process the transactions of employees assigned to Pay Policies that have Total Time Acct enabled. The transactions that will be processed will also depend on the TTA Range Type in the Pay Policy and the service’s PERIOD_OFFSET parameter. The Total Time Rate is calculated based on the events and hour classifications you configure in the Events, Event Types, and Hours Class tabs of the Pay Policy form.
You can check to see whether a transaction has been processed by the TOTAL_TIME_ACCOUNTING service by viewing the Transaction Process Status tab in the Transaction Details form. The Process Name TOTAL_TIME_ACCOUNTING will show a status of R if it is ready to be processed, and C when processing is complete.
When the service finishes processing, the Total Time Rate and Total Time Amount will display in the Transaction Duration Detail tab of the Transaction Details form.
The TOTAL_TIME_ACCOUNTING service will select transactions for employees that belong to a Pay Group in the service’s PAY_POLICY parameter. The Pay Policy must have Total Time Accounting enabled.
The date range for which the service will look for these records is determined by the TTA Range Type setting in the Pay Policy, the service’s PERIOD_OFFSET parameter, and the date the service runs. The service looks for transactions with the Process Name TOTAL_TIME_ACCOUNTING and the Process Status of R.
When the service finds a transaction that meets these conditions, the service determines the start and end of the pay period or pay week and calculates the Total Time Rate accordingly. It processes earlier weeks or periods in the same way.
See “PERIOD_OFFSET” below for more information on how the TOTAL_TIME_ACCOUNTING service selects which transactions to process.
The TOTAL_TIME_ACCOUNTING service has two parameters: PERIOD_OFFSET and PAY_POLICY.
The PERIOD_OFFSET parameter is used along with the TTA Range Type setting in the person’s Pay Policy to determine the date range for which the service will look for transactions to process. These settings determine whether the service will calculate the Total Time Rate based on all the transactions in a pay period or based on a single week of transactions in the pay period.
The PERIOD_OFFSET is the minimum number of days after the end of the pay period or pay week when the service will process transactions. The default value is zero (0). The PERIOD_OFFSET is calculated differently depending on the TTA Range Type setting - Period, Week, or Period Incremental Week - in the person’s Pay Policy.
If the TTA Range Type in the person’s Pay Policy is Period, the TOTAL_TIME_ACCOUNTING service’s PERIOD_OFFSET parameter is the minimum number of days after the end of the pay period when the service will process transactions. The default value is zero (0), indicating that the service will process the current period and earlier periods if it is run on the last day of the period (or later). If the PERIOD_OFFSET is 1, the service will process the current period and earlier periods when it is run on the first day past the end of the period (or later). Likewise, if the PERIOD_OFFSET is 2, the service will process the current period and earlier periods when it is run on the second day past the end of the period (or later).
Example 1: The pay period is from December 22, 2012 to January 4, 2013. The TTA Range Type is Period and the service’s PERIOD_OFFSET is 0.
If the service runs on December 26, 2012, it will process transactions from the previous pay period (which ended December 21).
If the service runs on January 4, 2013, it will process the current pay period (which ended January 4).
Example 2: Same pay period (December 22, 2012 to January 4, 2013). The TTA Range Type is Period and the service’s PERIOD_OFFSET is 1.
If the service runs on December 26, 2012, it will process transactions from the previous pay period (which ended December 21).
If the service runs on January 4, 2013 it will also process transactions from the previous pay period (which ended December 21).
If the service runs on January 5, 2013, it will process the current pay period (which ended January 4).
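The Period eligibility rule in the examples above can be sketched as follows, assuming back-to-back periods of fixed length (names are illustrative):

```python
from datetime import date, timedelta

def latest_period_to_process(run_date, period_end, length_days, offset):
    """End date of the most recent pay period the service will process:
    a period is eligible once run_date >= its end + PERIOD_OFFSET days."""
    end = period_end
    while run_date < end + timedelta(days=offset):
        end -= timedelta(days=length_days)
    return end

# Period Dec 22, 2012 - Jan 4, 2013 (14 days), PERIOD_OFFSET = 1
print(latest_period_to_process(date(2013, 1, 4), date(2013, 1, 4), 14, 1))
# → Dec 21, 2012 (previous period, as in Example 2)
print(latest_period_to_process(date(2013, 1, 5), date(2013, 1, 4), 14, 1))
# → Jan 4, 2013 (current period)
```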
If the TTA Range Type in the person’s Pay Policy is Week, the TOTAL_TIME_ACCOUNTING service’s PERIOD_OFFSET parameter is the minimum number of days after the end of the pay week when the service will process transactions. This configuration will calculate the Total Time Rate for each week in a biweekly pay period separately.
Example: Same biweekly pay period (December 22, 2012 to January 4, 2013). The TTA Range Type is Week and the service’s PERIOD_OFFSET is 1.
If the service runs on December 26, 2012, it will calculate the Total Time Rate based on the transactions in the previous pay week (December 15-21).
If the service runs on January 4, 2013 it will calculate the Total Time Rate based on the transactions in the previous pay week (December 22-28).
If the service runs on January 5, 2013, it will calculate the Total Time Rate for Week 2 (December 29 to January 4). If Week 1 has not yet been processed, it will also calculate the Total Time Rate for Week 1 (December 22 to 28).
TTA Range Type = Period Incremental Week
If the TTA Range Type in the person’s Pay Policy is Period Incremental Week, the TOTAL_TIME_ACCOUNTING service will process transactions for the first pay week in a biweekly pay period or for the entire biweekly pay period, depending on when the service is run within the pay period and the value of the service’s PERIOD_OFFSET parameter.
This option may be useful if you need to process and export the first week of a biweekly period, instead of waiting for the second week to complete. You can then process and export the entire biweekly period as a single unit when the whole period is ready.
TTA Range Type in Pay Policy = Period Incremental Week

If the PERIOD_OFFSET is… | And the TOTAL_TIME_ACCOUNTING service runs on the following date… | The service will process transactions in…
1 or greater | PERIOD_OFFSET days after the end of the first week in the biweekly pay period, but still inside the biweekly pay period | The first week in the biweekly pay period (and earlier periods)
1 or greater | PERIOD_OFFSET days after the end of the biweekly pay period | The entire biweekly pay period (and earlier periods)
0 | On the last day of the first week in the biweekly pay period (or later) but before the last day of the biweekly pay period | The first week in the biweekly pay period (and earlier periods)
0 | On the last day of the biweekly pay period | The entire biweekly pay period (and earlier periods)
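The Period Incremental Week decision can be sketched as follows, assuming a biweekly period starting on period_start (the return values are descriptive labels, not product output):

```python
from datetime import date, timedelta

def tta_incremental_range(run_date, period_start, offset):
    """Which range Period Incremental Week processes for one biweekly
    pay period (earlier periods are processed in any case)."""
    week1_end = period_start + timedelta(days=6)
    period_end = period_start + timedelta(days=13)
    if run_date >= period_end + timedelta(days=offset):
        return "entire biweekly period"
    if run_date >= week1_end + timedelta(days=offset):
        return "first week of the period"
    return "earlier periods only"

# Biweekly period Dec 22, 2012 - Jan 4, 2013, PERIOD_OFFSET = 1
print(tta_incremental_range(date(2012, 12, 29), date(2012, 12, 22), 1))
# → first week of the period
print(tta_incremental_range(date(2013, 1, 5), date(2012, 12, 22), 1))
# → entire biweekly period
```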
Note that when the service processes the entire biweekly period, it takes both weeks into account to calculate the Total Time Rate. For this reason, the Total Time Rate may be different when the TOTAL_TIME_ACCOUNTING service processes the first week of the period than when it processes both weeks of the period.
The PAY_POLICY parameter indicates which Pay Policies will have their transactions calculated by the TOTAL_TIME_ACCOUNTING service. The Pay Policy must also have the Total Time Acct box checked in order for the TOTAL_TIME_ACCOUNTING service to process transactions for it.
Move the Pay Policy from Available to Selected if you want it to be processed. If there are no Pay Policies in the Selected column, the service will calculate rates for all the Pay Policies that have Total Time Acct enabled.
To determine whether an employee is assigned to one of these Pay Policies, the service will look at the employee's Pay Policy on the date of the transaction.
Process Name: TRIGGER_OFFLINE_VALIDATION_DOC_TRANSFER
Default Schedule: Run every day at 1 AM, indefinitely
Default Schedule Enabled: No
Module Required: None
The TRIGGER_OFFLINE_VALIDATION_DOC_TRANSFER service is used to trigger the transfer of the latest offline validation documents and terminal relay schedules to specific Terminal Profiles.
An offline validation document serves as a local database of time reporters or charge elements. The document is stored locally on a terminal and is used for validation of data when the terminal is in offline mode. See Offline Person Validation and Offline Charge Validation for more information.
A terminal relay schedule is used to schedule one or more terminals to activate a relay signal which can perform an action such as sounding an alarm or opening a door. For example, a terminal may be configured to ring a bell for 10 seconds during the start of a shift (9:00 a.m.) every week day (Monday – Friday). See Terminal Relay Schedule Feature for more information.
The service’s parameters (see below) determine whether the service will trigger the transfer of offline person records, offline charge records, and/or relay schedules to specific Terminal Profiles.
If you are using this service, you must enable the OFFLINE_CHARGE_ELEMENT_VALIDATION, OFFLINE_PERSON_AUTHENTICATION, and/or RELAY_SCHEDULE settings. These settings can be enabled for a group of terminals in the Terminal Profile Setting form, or for an individual terminal in the Terminal Setting form. The Terminal Setting will override the Terminal Profile Setting.
If a terminal has the OFFLINE_PERSON_AUTHENTICATION setting enabled and the TRIGGER_OFFLINE_VALIDATION_DOC_TRANSFER service includes the Category for PERSON_RECORDS, the service will trigger the terminal to request the person records from the server.
If a terminal has the OFFLINE_CHARGE_ELEMENT_VALIDATION setting enabled and the TRIGGER_OFFLINE_VALIDATION_DOC_TRANSFER service includes the Category for CHARGE_RECORDS, the service will trigger the terminal to request the charge records from the server.
If a terminal has the RELAY_SCHEDULE setting enabled and the TRIGGER_OFFLINE_VALIDATION_DOC_TRANSFER service includes the Category for RELAY_SCHEDULE, the service will trigger the terminal to request the relay schedule from the server.
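The trigger decision combines the setting override rule with the service's CATEGORY selection. A sketch in which dictionaries stand in for the Terminal Profile Setting and Terminal Setting forms:

```python
def will_trigger(category, selected_categories, profile_settings,
                 terminal_settings):
    """A terminal is triggered to request a document category when the
    matching setting is enabled (Terminal Setting overrides Terminal
    Profile Setting) and the category is selected on the service."""
    setting_for = {
        "PERSON_RECORDS": "OFFLINE_PERSON_AUTHENTICATION",
        "CHARGE_RECORDS": "OFFLINE_CHARGE_ELEMENT_VALIDATION",
        "RELAY_SCHEDULE": "RELAY_SCHEDULE",
    }
    setting = setting_for[category]
    # The Terminal Setting, when present, overrides the Terminal Profile Setting.
    enabled = terminal_settings.get(setting,
                                    profile_settings.get(setting, False))
    return enabled and category in selected_categories

# Profile disables person auth, but the terminal overrides it to enabled
print(will_trigger("PERSON_RECORDS", {"PERSON_RECORDS"},
                   {"OFFLINE_PERSON_AUTHENTICATION": False},
                   {"OFFLINE_PERSON_AUTHENTICATION": True}))  # True
```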
CATEGORY: Identifies the type of records that the service will trigger the terminal to request. The options are PERSON_RECORDS, CHARGE_RECORDS, and RELAY_SCHEDULE. To include a CATEGORY of records, move it from the Available column to the Selected column. The terminals in the TERMINAL_PROFILE parameter will be triggered to request the CATEGORIES in the Selected column when the TRIGGER_OFFLINE_VALIDATION_DOC_TRANSFER service runs.
TERMINAL_PROFILE: Indicates which terminals will be triggered to request the documents specified in the CATEGORY parameter when the TRIGGER_OFFLINE_VALIDATION_DOC_TRANSFER service runs. The available options are defined in the Terminal Profile form. To include a TERMINAL_PROFILE, move it from the Available column to the Selected column. The Terminal Profiles in the Selected column will be triggered to request the Selected CATEGORIES when the service runs.
You must select one or more items from both the CATEGORY and TERMINAL_PROFILE parameters for the service to trigger a request to send records to the terminals.