Configure the Payroll Extract integration
The Payroll Extract integration uses APIs that are optimized to extract payroll and accruals data directly for local processing and maintenance.

- The Payroll Extract integration greatly improves the speed and performance of processing payroll data. It can process large payroll data sets 10 times faster than previous payroll export integrations, for example, in less than 60 minutes for 250,000 employee records.
- It exports the payroll data to an extension table that is summarized by day. The payroll system can access this data directly from the extension table, or you can further summarize and sort the data by way of structured query language (SQL) commands. The extension table can store additional data to satisfy any data requirements that are not handled by the staging table.
- It reduces service hours to develop and implement scalable payroll data extractions.
- It pulls payroll data to meet payroll-extract service level agreements (SLA) for any organization regardless of size.
- It supports organizations that do not use a full payroll application.
- It provides quick processing of one-to-many pay groups.
This integration simplifies maintenance of payroll and timekeeping mappings by way of the following:
- Payroll to Paycode mapping (a paycode is a category of time or money that employees earn, for example, Regular Hours, Bonus, or Sick)
- Accrual Code mapping
- General Ledger mapping

The Payroll Extract service includes the following steps:
- Select employees by the attributes of pay groups:
  - Employee attributes
  - Worked attributes
  - Mapping definitions to use for processing
- Map paycodes, accruals, and the general ledger to efficiently pre-process, stage, and maintain mappings between payroll and timekeeping systems. You can use custom tables to handle post-staging data needs.
- Aggregate up to 130 data elements for payroll and accruals. You can use custom tables to summarize variances and make other comparisons.
- Define output options for file generation, headers, footers, and data formatting.

The integration has the following limitations:
- The focus is on payroll processing; analytics and reporting are not supported.
- Data and request limits:
  - 10 times the employee count per week, plus 1 times the employee count per day beyond that
  - Maximum of 5 queries per extract (execution ID)
  - 200 API calls per day for retrieval of custom tables
  - Query limit: 2 GB per call
- Custom table limits:
  - Maximum of 10 tables
  - Maximum of 20 columns in each table
  - Tables are retained for 90 days for staging
- Combined paycodes are not supported.
- Queries based on worked labor category or cost center are not supported.
- Multiple assignments (jobs) are not supported.

Before you configure the integration, complete the following:
- Configure Access to Integrations:
  - FAP for Integration Setup
  - FAP for Wages
- Configure Payroll to Paycode Mappings.
- Configure Accrual Code Mappings.
- Configure General Ledger Mappings.
- Get the URL, User, and Password for the APIGatewayServer.
- Deploy the PayrollExtractE2E integration; see Deploy Integrations.
Configure the integration

- Open the Integration Template Designer: Select Main Menu > Administration > Application Setup > Integrations Setup > Design Integration Templates. Note: If prompted, enter your Username and Password, then click or tap Log in.
- Select the Manage tab > Atom Management.
- Select your environment.
- In Administration, click or tap Environment Extensions.
- In Process Filter, click or tap the magnifying glass icon.
- Scroll to and select the integration: Payroll Extract > PayrollExtractE2E.

Caution: If you select Use Default for the connection settings and process properties, ensure that Value is blank. If Value is not blank, that value overrides the default value whether Use Default is selected or cleared. Example: If the default value is abc but Value shows xyz, the integration uses xyz regardless of the Use Default setting.
- Select Connection Settings.
- From the Connection dropdown list, select and configure the following settings:

APIGatewayServer (Required). To change the default API gateway server:
- Clear Use Default.
- Enter the URL to the server. Example: <tenantURL>/api

SFTPServer (Required). The SFTP server setting defines the connection to the file that contains the records. Integrations access only the internal SFTP account. To change the default SFTP server parameters:
- For each field, clear Use Default.
- Enter the following values:
  - Host: Enter the name of the internal host.
  - Port: Enter the port number for the internal SFTP account.
  - User: Enter the username for the internal SFTP account.
  - Password: Select <Encrypted> and enter the new password for the internal SFTP account.
- Click or tap Apply.

- Select Process Properties.
- Caution: Do not edit the default values of the AuthenticationProperties. By default, cookies are enabled and set the values for the authentication properties. Edit these properties only while you test or design integration templates, to connect to the authentication server and get the access token to execute APIs.
- From the Process Property dropdown list, select AuthenticationProperties.
- In GatewayDefaultPort, clear Use Default, and enter the port for the API gateway.
- Note: You no longer need an AppKey to call API operations. If one is defined, it is ignored.
- From the Process Property dropdown list, select PayrollExtractE2E_Control to set the process properties that must be configured before the integration can run.

Caution: If you select Use Default for the connection settings and process properties, ensure that Value is blank. If Value is not blank, that value overrides the default value whether Use Default is selected or cleared. Example: If the default value is abc but Value shows xyz, the integration uses xyz regardless of the Use Default setting.

HyperfindAndLocations (Not required). Default = 1 (shown as blank), which indicates All Home (a query that returns a list of employees associated with a manager's employee group; it finds people who are active employees or active users as of today) and includes all active employees.
To select another hyperfind (a search engine that filters and selects groups of employees through queries that specify conditions or locations) and locations:
- Clear Use Default.
- (Required) Enter the ID of a single hyperfind, or the IDs of one or more locations, each separated by a comma (,) or number sign (#).
Caution: If you do not enter an ID for the Template Parameter, the integration cannot identify the hyperfind and the integration run fails.
Note: Ad-hoc hyperfinds are not supported.
All Home does not include terminated and inactive employees even if they have totals during the period. To include these totals, configure a hyperfind that includes terminated and inactive employees for the organization, and select that hyperfind in this process property.
The maximum number of employees in a hyperfind is 3500. To process more employee records, divide the population into smaller hyperfinds that run sequentially.

IncludeEmployee (Not required). Default = include all employees.
To process data for only a limited group of employees:
- Clear Use Default.
- Enter the person numbers, as defined in the source system, each separated by a comma (,) with no spaces.
Example: 13997,15556,20012

InitiateStaging (Required). Default = true. To change the default:
- Clear Use Default.
- In Value, select false.

RunPayrollExport (Required). Default = true. To change the default:
- Clear Use Default.
- In Value, select false.

RunAccrualExport (Not required). Default = false. To change the default:
- Clear Use Default.
- In Value, select true.

PollInterval (Not required). How long to wait between checks of the payroll system for changes. Default = 20 ms. To change the wait time:
- Clear Use Default.
- Select the time in milliseconds (ms).

MaxWaitTime (Not required). Default = 6000 ms (6 seconds). To change the wait time:
- Clear Use Default.
- Enter the wait time in milliseconds (ms).

FileLoadWaitTime (Not required). How long to wait for the input file to load. Default = 60 ms. To change the wait time:
- Clear Use Default.
- Enter the wait time in milliseconds (ms).
- From the Process Property dropdown list, select PayrollExtractE2E_SFTPProperties.

SourceDirectory (Required). The directory path on the SFTP server where the integration process reads the file. Default = the /Inbound directory.
To define a different directory path:
- Clear Use Default.
- Enter the path to the directory.
Caution: Make sure that this directory is configured on the SFTP server.

_OutputDirectory (Not required). The directory on the SFTP server where the destination file is archived after processing. Default = the /Outbound directory.
To define a different directory path:
- Clear Use Default.
- Enter the path to the directory.
Caution: Make sure that this directory is configured on the SFTP server.

_MoveProcessedFileToDirectory (Not required). The directory on the SFTP server to which the destination file is moved after data is successfully uploaded. Default = true; use the default directory.
To define a different directory path:
- Clear Use Default.
- Enter the path to the directory.
Caution: Make sure that this directory is configured on the SFTP server.

_DeleteSourceFileAfterReading (Not required). Default = false; do not delete the source file after processing.
To delete the source file but archive it to _DestinationDirectoryPath:
- Clear Use Default.
- Select Value.

Cross-reference tables (CRT) are the lookup tables that the integrations use to translate parameter values: one or more data values from the source system can be used to assign one or more parameters in the destination system. A cross-reference table translates values as follows:
- It organizes data values into rows and columns (maximums: 20 columns, 10,000 rows).
- It can combine values from multiple columns to determine a single output value.
- If more than one row matches a reference value, the first match is the output value.
- If no match is found, the output value can be null, or the integration can produce errors.
- Select Cross Reference.
- From the Cross Reference dropdown list, select one of the following tables. You cannot change the names of the tables.

PayrollExtractE2E_Paygroup: A pay group consolidates a logical set of employees within a company to facilitate payroll processing. It is based on shared characteristics such as employee type, pay frequency, and country or location.
Column headers:
PayGroup,Location,Fixed Rule,Worker Type,Labor Category 1,Labor Category 2,Labor Category 3,Labor Category 4,Labor Category 5,Labor Category 6,Custom Field 1,Custom Field 2,Custom Field 3,Custom Field 4,Custom Field 5,Custom Field 6,Custom Field 7,Custom Field 8,Custom Field 9,Custom Field 10

PayrollExtractE2E_TimezoneGroup:
Column headers:
PayGroup,TimeZone
(See the sample rows after these steps.)
- Select Override to:
  - Download the tables when you run the integration.
  - Edit the table cells in Extensions.
- Repeat for other tables.
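As an illustration, a PayrollExtractE2E_TimezoneGroup table might contain rows such as the following. The pay group names and time zones here are hypothetical examples, not required values:

PayGroup,TimeZone
US-Hourly-Weekly,America/New_York
US-Hourly-Biweekly,America/Chicago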

(Optional) You can set thresholds to limit the number of processes or the volume of data during integration runs.
Note: These settings apply only to the current tenant; you must configure these settings for each tenant.
- Log in to the tenant management system (TMS).
- Select Main Menu > Administration > Application Setup > System Configuration > System Settings.
- Select the Payroll Extract tab.
- Configure the following:
- payroll.extract.staging.run.concurrent.limit: Enter the maximum number of payroll-extract requests that can run at the same time. Default = 3.
- payroll.extract.staging.request.daterange.limit: Enter the maximum number of days to allow in the date range of a Payroll Extract integration run. Default = 90.
- payroll.extract.staging.employee.populations.multiplier: Enter the maximum number of employee records that the Payroll Extract integration can export. Default = 10.
- Click or tap Save.

Payroll data
The Payroll Extract integration populates a staging table in the extension database with the employee data.
The employee data is grouped by person_id, staging_request_id, apply_date, paycode_id, is_historical_correction (edits that occur after timecards are approved and signed off), worked_job, worked_labor_level1_id, worked_labor_level2_id, worked_labor_level3_id, worked_labor_level4_id, worked_labor_level5_id, worked_labor_level6_id, and worked_cost_center_id.
To access or process the data, see the Extension Tables topic.
- Format: Comma delimited
- Frequency: On-demand or scheduled
- Header Row: Yes, 1 row
- Output Folder: /Outbound
prm_payroll_staging_data columns | Data type | Data source | Comments |
---|---|---|---|
id | | — | Unique ID for this table |
staging_request_id | | — | ID for this request |
staging_datetime | | current datetime | Current date and time |
custom_parameter_1 | | parameter value | From the API request: the integration can map other indicators such as paygroup or timezone. |
custom_parameter_2 | | parameter value | From the API request: the integration can map other indicators such as paygroup or timezone. |
person_id | | Person.personId | |
person_num | varchar(15) | Person.personnum | |
full_name | varchar(64) | Person.fullnm | |
first_name | varchar(30) | Person.firstnm | |
last_name | varchar(30) | Person.lastnm | |
middle_name | varchar(1) | Person.middleinitialnm | |
range_start_date | | Passed-in parameter | Start date for apply_date range |
range_end_date | | Passed-in parameter | End date for apply_date range |
apply_date | | actual_total.apply_date | — |
amount_type | varchar(50) | paycodeamttype.paycodeamttypenm | |
hours_amount | bigint | actual_total.duration_in_seconds | — |
days_amount | numeric(19,6) | actual_total.duration_in_days | — |
money_amount | numeric(19,6) | actual_total.money_amount | — |
wage_multiplier | numeric(19,6) | paycode.wagemultiply | actual_total.paycode_id = paycode.paycodeId |
wage_addition | numeric(19,6) | paycode.wageaddition | actual_total.paycode_id = paycode.paycodeId |
effective_wage_rate | numeric(19,6) | actual_total.money_amount/(actual_total.duration_in_seconds/3600) | Cannot divide by 0. Ignore if amount_type is money. |
paycode_id | | actual_total.paycode_id | — |
paycode_name | varchar(50) | paycode.name | actual_total.paycode_id = paycode.paycodeId |
paycode_mapped_output1 | varchar(50) | — | Paycode Payroll Mapping |
paycode_mapped_output2 | varchar(50) | — | Paycode Payroll Mapping |
paycode_mapped_output3 | varchar(50) | — | Paycode Payroll Mapping |
is_historical_correction | Boolean | actual_total.apply_date <> actual_total.corrected_apply_date | — |
original_date | | actual_total.corrected_apply_date | — |
original_payperiod_start | | payperiod.start_date | payperiod.payruleid = effective payruleid for employee on original_date |
original_payperiod_end | | payperiod.end_date | payperiod.payruleid = effective payruleid for employee on original_date |
include_in_totals | numeric(1,0) | actual_total.already_paid (switch 0 to 1 and 1 to 0) | Take the opposite value of already_paid to get the correct value for include_in_totals. |
worked_job | text | | |
worked_job_mapped | text | — | General Ledger Mapping for worked_job |
worked_location_job_type_path | text | org_effective_node_type.full_name for each level of path, with / between each name | |
worked_labor_level1_name | varchar(250) | laboracct.laborlev1nm | actual_total.labor_account_id = laboracct.laboracctid |
worked_labor_level1_id | bigint | laboracct.laborlev1id | actual_total.labor_account_id = laboracct.laboracctid |
worked_labor_level2_name | varchar(250) | laboracct.laborlev2nm | actual_total.labor_account_id = laboracct.laboracctid |
worked_labor_level2_id | bigint | laboracct.laborlev2id | actual_total.labor_account_id = laboracct.laboracctid |
worked_labor_level3_name | varchar(250) | laboracct.laborlev3nm | actual_total.labor_account_id = laboracct.laboracctid |
worked_labor_level3_id | bigint | laboracct.laborlev3id | actual_total.labor_account_id = laboracct.laboracctid |
worked_labor_level4_name | varchar(250) | laboracct.laborlev4nm | actual_total.labor_account_id = laboracct.laboracctid |
worked_labor_level4_id | bigint | laboracct.laborlev4id | actual_total.labor_account_id = laboracct.laboracctid |
worked_labor_level5_name | varchar(250) | laboracct.laborlev5nm | actual_total.labor_account_id = laboracct.laboracctid |
worked_labor_level5_id | bigint | laboracct.laborlev5id | actual_total.labor_account_id = laboracct.laboracctid |
worked_labor_level6_name | varchar(250) | laboracct.laborlev6nm | actual_total.labor_account_id = laboracct.laboracctid |
worked_labor_level6_id | bigint | laboracct.laborlev6id | actual_total.labor_account_id = laboracct.laboracctid |
worked_cost_center_name | varchar(250) | laboracct.laborlev7nm | actual_total.labor_account_id = laboracct.laboracctid |
worked_cost_center_id | bigint | laboracct.laborlev7id | actual_total.labor_account_id = laboracct.laboracctid |
worked_labor_level1_mapped | varchar(50) | — | General Ledger Mapping for worked_labor_level1 |
worked_labor_level2_mapped | varchar(50) | — | General Ledger Mapping for worked_labor_level2 |
worked_labor_level3_mapped | varchar(50) | — | General Ledger Mapping for worked_labor_level3 |
worked_labor_level4_mapped | varchar(50) | — | General Ledger Mapping for worked_labor_level4 |
worked_labor_level5_mapped | varchar(50) | — | General Ledger Mapping for worked_labor_level5 |
worked_labor_level6_mapped | varchar(50) | — | General Ledger Mapping for worked_labor_level6 |
worked_cost_center_mapped | varchar(50) | — | General Ledger Mapping for worked_cost_center |
signoffthrudtm | timestamp without time zone | person.mgrsignoffthrudtm | — |
employment_status | varchar(30) | employmentstat.shortnm | personstatusmm.personid, employmentstatid; applydate between effective dates |
primary_job | text | | |
primary_job_mapped | text | — | — |
home_location_job_type_path | text | | |
home_labor_level1_name | varchar(250) | laboracct.laborlev1nm | |
home_labor_level1_id | bigint | laboracct.laborlev1id | |
home_labor_level2_name | varchar(250) | laboracct.laborlev2nm | |
home_labor_level2_id | bigint | laboracct.laborlev2id | |
home_labor_level3_name | varchar(250) | laboracct.laborlev3nm | |
home_labor_level3_id | bigint | laboracct.laborlev3id | |
home_labor_level4_name | varchar(250) | laboracct.laborlev4nm | |
home_labor_level4_id | bigint | laboracct.laborlev4id | |
home_labor_level5_name | varchar(250) | laboracct.laborlev5nm | |
home_labor_level5_id | bigint | laboracct.laborlev5id | |
home_labor_level6_name | varchar(250) | laboracct.laborlev6nm | |
home_labor_level6_id | bigint | laboracct.laborlev6id | |
home_cost_center_name | varchar(250) | laboracct.laborlev7nm | |
home_cost_center_id | bigint | laboracct.laborlev7id | |
home_labor_level1_mapped | varchar(50) | — | — |
home_labor_level2_mapped | varchar(50) | — | — |
home_labor_level3_mapped | varchar(50) | — | — |
home_labor_level4_mapped | varchar(50) | — | — |
home_labor_level5_mapped | varchar(50) | — | — |
home_labor_level6_mapped | varchar(50) | — | — |
home_cost_center_mapped | varchar(50) | — | — |
base_wage_rate | numeric(16,6) | basewagerthist.basewagehourlyamt | |
currency_code | varchar(10) | currency.currencycd | |
payrule | varchar(50) | payruleids.name | assignpayrule.employeeid, payruleid; applydate between effective dates |
employment_term | varchar(50) | groupschedule.shortnm | empgrpschedmm.employeeid, grpscheduleid; apply date between effective dates; groupschedule.contractsw = 1 |
hire_date | timestamp without time zone | person.companyhiredtm | — |
badge_number | varchar(50) | badgeassign.badgenum | Person.personid = badgeassign.personid and actual_total.apply_date between badgeassign effective dates |
worker_type | varchar(50) | workertype.workertypenm | |
timezone | varchar(30) | timezone.english | |
last_total_time | timestamp without time zone | totaleventts.last_totaltime | Person.personid = totaleventts.personid |
last_total_change_time | timestamp without time zone | totaleventts.last_chngtime | Person.personid = totaleventts.personid |
person_custom_field1_name | varchar(30) | customdatadef.shortnm | customdatadef.customdatadefid = personcstmdata.customdatadefid |
person_custom_field1_value | varchar(2000) | personcstmdata.personcstdatatxt | customdatadef.customdatadefid = personcstmdata.customdatadefid |
person_custom_field2_name | varchar(30) | customdatadef.shortnm | customdatadef.customdatadefid = personcstmdata.customdatadefid |
person_custom_field2_value | varchar(2000) | personcstmdata.personcstdatatxt | customdatadef.customdatadefid = personcstmdata.customdatadefid |
person_custom_field3_name | varchar(30) | customdatadef.shortnm | customdatadef.customdatadefid = personcstmdata.customdatadefid |
person_custom_field3_value | varchar(2000) | personcstmdata.personcstdatatxt | customdatadef.customdatadefid = personcstmdata.customdatadefid |
person_custom_field4_name | varchar(30) | customdatadef.shortnm | customdatadef.customdatadefid = personcstmdata.customdatadefid |
person_custom_field4_value | varchar(2000) | personcstmdata.personcstdatatxt | customdatadef.customdatadefid = personcstmdata.customdatadefid |
person_custom_field5_name | varchar(30) | customdatadef.shortnm | customdatadef.customdatadefid = personcstmdata.customdatadefid |
person_custom_field5_value | varchar(2000) | personcstmdata.personcstdatatxt | customdatadef.customdatadefid = personcstmdata.customdatadefid |
person_custom_field6_name | varchar(30) | customdatadef.shortnm | customdatadef.customdatadefid = personcstmdata.customdatadefid |
person_custom_field6_value | varchar(2000) | personcstmdata.personcstdatatxt | customdatadef.customdatadefid = personcstmdata.customdatadefid |
person_custom_field7_name | varchar(30) | customdatadef.shortnm | customdatadef.customdatadefid = personcstmdata.customdatadefid |
person_custom_field7_value | varchar(2000) | personcstmdata.personcstdatatxt | customdatadef.customdatadefid = personcstmdata.customdatadefid |
person_custom_field8_name | varchar(30) | customdatadef.shortnm | customdatadef.customdatadefid = personcstmdata.customdatadefid |
person_custom_field8_value | varchar(2000) | personcstmdata.personcstdatatxt | customdatadef.customdatadefid = personcstmdata.customdatadefid |
person_custom_field9_name | varchar(30) | customdatadef.shortnm | customdatadef.customdatadefid = personcstmdata.customdatadefid |
person_custom_field9_value | varchar(2000) | personcstmdata.personcstdatatxt | customdatadef.customdatadefid = personcstmdata.customdatadefid |
person_custom_field10_name | varchar(30) | customdatadef.shortnm | customdatadef.customdatadefid = personcstmdata.customdatadefid |
person_custom_field10_value | varchar(2000) | personcstmdata.personcstdatatxt | customdatadef.customdatadefid = personcstmdata.customdatadefid |
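As a sketch of how you can further summarize the staged data, the following retrieve-data request body (see the API operations later in this topic) sums hours and money by person and paycode. The backticked staging-table name and the column aliases are hypothetical placeholders; hours_amount is stored in seconds (per the table above), so dividing by 3600 yields hours:

{
  "query": "SELECT person_num, paycode_name, SUM(hours_amount)/3600 AS total_hours, SUM(money_amount) AS total_money FROM `payroll_<stagingTableId>` GROUP BY person_num, paycode_name",
  "exportConfig": {
    "format": "csv",
    "filename": "paycode_summary.csv",
    "delimiter": ","
  }
}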
Accruals data
The Payroll Extract integration populates a staging table in the extension database with the accruals data.
To access or process the data, see the Extension Tables topic.
- Format: Comma delimited
- Frequency: On-demand or scheduled
- Header Row: Yes, 1 row
- Output Folder: /Outbound
prm_accruals_staging_data columns | Data type | Data source | Comments |
---|---|---|---|
id | | — | Unique ID for this table |
staging_request_id | | — | ID for this request |
staging_datetime | | current datetime | Current date and time |
custom_parameter_1 | varchar(50) | parameter value | From the API request: the integration can map other indicators such as paygroup or timezone. |
custom_parameter_2 | varchar(50) | parameter value | From the API request: the integration can map other indicators such as paygroup or timezone. |
person_id | | Person.personId | |
person_num | varchar(15) | Person.personnum | |
range_start_date | | Passed-in parameter | Start date for apply_date range |
range_end_date | | Passed-in parameter | End date for apply_date range |
accrual_code_name | | accrualcode.name | |
accrual_amount_type | | accrualcode.type_sw ("1" = "HOUR"; "2" = "DAY"; "3" = "MONEY") | — |
available_starting_balance | numeric(16,6) | | duration_in_seconds, money_amount, or duration_in_days based on type_sw; range_start_date = encumbered_accrual_balance.balance_date |
available_ending_balance | numeric(16,6) | | duration_in_seconds, money_amount, or duration_in_days based on type_sw; range_end_date = encumbered_accrual_balance.balance_date |
vested_starting_balance | numeric(16,6) | | duration_in_seconds, money_amount, or duration_in_days based on type_sw; range_start_date = accrual_balance.balance_date |
vested_ending_balance | numeric(16,6) | | duration_in_seconds, money_amount, or duration_in_days based on type_sw and balance_date; range_end_date = accrual_balance.balance_date |
probationary_starting_balance | numeric(16,6) | | duration_in_seconds, money_amount, or duration_in_days based on type_sw; range_start_date = accrual_balance.balance_date |
probationary_ending_balance | numeric(16,6) | | duration_in_seconds, money_amount, or duration_in_days based on type_sw; range_end_date = accrual_balance.balance_date |
balance_code_mapped | varchar(50) | — | From the new mapping table based on accrual code and the employee's accrual profile, if configured. |
accrual_takings_sum | varchar(50) | accrualtran.amount | Sum of all accrualtran.amounts for an accrual code where the accrualtran.type is configured in prm_accrual_transaction_mapping to include in takings, and accrualtrans.effectivedate is between the range_start_date and range_end_date for the accrualtrans.personid. |
accrual_earnings_sum | varchar(50) | accrualtran.amount | Sum of all accrualtran.amounts for an accrual code where the accrualtran.type is configured in prm_accrual_transaction_mapping to include in earnings, and accrualtrans.effectivedate is between the range_start_date and range_end_date for the accrualtrans.personid. |
accrual_reporting_period_earnings_sum | numeric(16,6) | accrualtran.amount | Same as accrual_earnings_sum except accrualtran.effectivedate is between trackingperiod.startdtm and trackingperiod.enddtm, and the range_start_date is between trackingperiod.startdtm and trackingperiod.enddtm. |
accrual_reporting_period_takings_sum | numeric(16,6) | accrualtran.amount | Same as accrual_takings_sum except accrualtran.effectivedate is between trackingperiod.startdtm and trackingperiod.enddtm, and the range_start_date is between trackingperiod.startdtm and trackingperiod.enddtm. |
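In the same spirit, here is a hedged sketch of a retrieve-data query that computes each employee's net available-balance change over the extract range from the staged accruals. The backticked staging-table name and the net_change alias are hypothetical placeholders:

{
  "query": "SELECT person_num, accrual_code_name, available_ending_balance - available_starting_balance AS net_change FROM `accrual_<stagingTableId>`",
  "exportConfig": {
    "format": "csv",
    "filename": "accrual_net_change.csv",
    "delimiter": ","
  }
}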

If employees work multiple jobs or positions under separate employer identification numbers (EIN), the integration uses pay group attributes to separate the employees' payroll totals into multiple payroll groups based on the job or position that is worked. The payroll manager selects these employees based on their Worked Job or position, rather than on the Primary Job or position that is used for employees who work a single job or position.
Accruals data are exported based on employees' Primary Job or position.
To access or process the data, see the Extension Tables topic.
- Format: Comma delimited
- Frequency: On-demand or scheduled
- Header Row: Yes, 1 row
- Output Folder: /Outbound
Column | Data type | Data source | Comments |
---|---|---|---|
Position Name | | — | |
Exempt | | | |
Position Location/Job | | — | Effective dated |
Position Status | | — | — |
Hired Date | | — | Person-specific for the position |
Seniority Date | | | Person-specific for the position |
Other Dates | | — | Person-specific for the position |
Position ID | | | Unique identifier for the position |
Custom data
(Optional) You can use the custom staging table in the extension database to aggregate or map any data.
To access or process the data, see the Extension Tables topic.
- File Name:
- Format: Comma delimited
- Frequency: On-demand or scheduled
- Header Row: Yes, 1 row
- Output Folder: /Outbound
prm_custom_data columns | Data type | Data source | Comments |
---|---|---|---|
id | | — | Unique ID for this table |
custom_data_1 | varchar(50) | — | — |
custom_data_2 | varchar(50) | — | — |
... | ... | — | — |
custom_data_50 | varchar(50) | — | — |
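For example, you could populate a custom rollup from the staged payroll data with a Custom table request (see the API operations below). This is a sketch only; the table name, the column aliases, and the backticked source-table name are hypothetical placeholders:

{
  "query": "SELECT person_num AS custom_data_1, SUM(money_amount) AS custom_data_2 FROM `payroll_<stagingTableId>` GROUP BY person_num",
  "tableName": "payroll_money_rollup"
}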

The integration uses the following API operations:
Purpose | API endpoint | Operation | Description |
---|---|---|---|
Stage request | /v1/commons/payroll/staging/async | POST | Stage the pay data. |
Stage request | /v1/commons/payroll/{requestId}/status, /v1/commons/payroll/{requestId}/status?exportType=payroll, /v1/commons/payroll/{requestId}/status?exportType=accrual | GET | Retrieve the status of a submitted process. |
Stage request | /v1/commons/payroll/{requestId}/details, /v1/commons/payroll/{requestId}/details?exportType=payroll, /v1/commons/payroll/{requestId}/details?exportType=accrual | GET | Get the details for an execution of the asynchronous operation. |
Custom table | /v1/commons/payroll/tables/apply_upsert | POST | Create a table to stage and persist the transactional data. |
Custom table | /v1/commons/payroll/tables/apply_create | POST | Create a table for external applications to process the data. |
Retrieve data | /v1/commons/payroll/export/ | POST | Retrieve the data by the specified attributes. |
Sample code
Stage request:
{"where": {
"employeeIn": [
{
"payGroup": {
"orgPath": [ {"qualifier": "QUALIFIER"} ],
"workerType": [{ "qualifier": "QUALIFIER"}],
"laborCategory": [{"qualifier": "QUALIFIER"}],
"payRule": [{"qualifier": "QUALIFIER"}],
"costCenter": [{"qualifier": "QUALIFIER"}],
"customField": [
{
"label": {"qualifier": "QUALIFIER"},
"values": []}]
}}]
},
"employees": {"qualifiers": []},
"dateRange": {
"startDate": "",
"endDate": "",
"symbolicPeriod": {"qualifier": "" }
},
"customParameter1": "parameter value",
"customParameter2": "parameter value",
"ignoreSignOff": true,
"includePayCodes": [{"qualifier": "QUALIFIER"}],
"excludePayCodes": [{"qualifier": "QUALIFIER"}],
"processBasedOn": "PRIMARY/WORKED",
"exportType": "Payroll/Accrual/Both",
"processTerminatedEmployeesOnly": false,
"payCodeMappingTable": {"qualifier": ""},
"historicalCorrections": "Included in Totals/Exclude in Totals/Correction only",
"executionId": "EXECUTIONID"
}
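To check on a staged request, poll the documented status endpoint; for example (the exportType value shown is one of the documented options):
GET /v1/commons/payroll/{requestId}/status?exportType=payroll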
Custom table:
--data-raw '{
  "query": "SELECT month FROM `lite_20210621093800_c6f49658-ca27-4177-b45a-0188423c4f3b`",
  "tableName": "payroll_rollup",
  "partitionConfig": {
    "column": "payroll_start_date",
    "granularity": "DAY",
    "expirationHours": 168
  }
}'
'file=@"/Users/Payroll/EffectiveWageLookup.csv"'
Retrieve data:
{
  "headers": [
    {
      "prefix": "header_prefix: ",
      "queries": [
        "SELECT employee_id FROM `payroll_20210621093800_c6f49658-ca27-4177-b45a-0188423c4f3b`",
        "SELECT month FROM `payroll_20210621093800_c6f49658-ca27-4177-b45a-0188423c4f3b`"
      ]
    },
    {
      "queries": [
        "SELECT aa_count FROM `payroll_20210621093800_c6f49658-ca27-4177-b45a-0188423c4f3b`"
      ]
    }
  ],
  "footers": [
    {
      "prefix": "footer prefix: ",
      "queries": [
        "SELECT f01_gt_rand FROM `payroll_20210621093800_c6f49658-ca27-4177-b45a-0188423c4f3b`",
        "SELECT sumd FROM `payroll_20210621093800_c6f49658-ca27-4177-b45a-0188423c4f3b`"
      ]
    }
  ],
  "query": "SELECT * FROM `payroll_20210621093800_c6f49658-ca27-4177-b45a-0188423c4f3b`",
  "exportConfig": {
    "format": "csv",
    "filename": "report.dat",
    "delimiter": ","
  }
}