Job Scheduler Configuration
Introduction
The Job Scheduler encapsulates scheduling functions for background jobs that run at specific intervals or at specified times.
The purpose of this document is to describe:
• Configuration of Job Scheduler
• Architecture of Job Scheduler
Installation and operation of Job Scheduler requires general knowledge of how to install and run an application in Comflow.
Architecture
• The Job Scheduler is built as an ordinary process within the process framework of Comflow.
• The Job Scheduler utilizes the standard date & time events of the process framework. The date & time events in turn build on the third-party product ‘Quartz’ for the time scheduling.
• The Job Scheduler process is a generic process (base line process) for background jobs and can easily be adapted for a specific organization (stream line process). Although it is mainly a ‘background’ process with little user interaction, the Job Scheduler can easily be extended with, for example, work items (integrated with the work list), e.g. in case of job exceptions.
Installation
The installation covers database setup and Sitedef settings.
Installing the Quartz/Scheduler database
A scheduler database must be set up in order to support persistent schedules. The Job Scheduler is based on a scheduling framework called Quartz, which provides a database persistence configuration. You install the database by running a SQL script. The script varies depending on the RDBMS software, so select the right script for the RDBMS you have chosen. This guide covers the most common ones, such as Microsoft SQL Server and IBM DB2 (System-I and Derby).
You find the provided SQL-scripts in Comflow Studio, in the project folder /net.comactivity.core.jobscheduler/Docs/Quartz_2_3_0_DbTables.
- For DB2 on System-I or Derby, use tables_db2_v95.sql.
- For SQL Server, use tables_sqlServer.sql.
The scripts will generate the tables in a schema called "QUARTZ". If you want to change that, you have to alter the scripts before running them.
The scripts will generate tables prefixed “COREJS_”.
Older setups of Comflow can have the tables in the CACORE schema. They shall now be replaced with these new tables in the new Quartz schema.
• Make sure that the Quartz Scheduler tables are present. They belong to the standard installation of Comflow, but can be missing in some older installations. They are usually found in the CORE schema and prefixed “COREJS_”.
- If not present, the scripts for installing these tables can be found under plug-in project: net.comactivity.AsynchWorkflow / Docs / Install / Quartz_1_4_2_DbTables /… (Select the one suitable for your DB; DB2 or MS SQL Server is recommended).
- (The persistent scheduler is used by other functions in the platform than the Job Scheduler. That is why the installation scripts are not stored inside the Job Scheduler project.)
Site definition settings
Scheduler to be used by the Process Framework
All properties with default values are given below. You only need to specify the values that differ from default values in your sitedef.xml. The values below can also be found and copied from the MasterSitedef.xml. The scheduler specified below is the scheduler that will be used by the process framework (and thus also the Job Monitor process).
<ServerInfo>
…
<Properties>
<!-- _PERSIST_TX (for persistent settings after restart via QUARTZ DB) or _Batch (primary memory, default value) -->
<Property name="asynchwf.scheduler" value="_Batch"/>
</Properties>
…
For TEST, the _Batch scheduler is usually enough. The _Batch scheduler is primary-memory based, and thus does not remember the scheduled state of the jobs. The jobs will therefore not be automatically re-activated on system restart, which is usually a good thing during development or test. The jobs will instead carry an indication that they have to be manually re-activated.
For PROD the _PERSIST_TX scheduler should be used. It keeps the state of the jobs in DB tables and guarantees re-activation of jobs on system restart.
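In a production sitedef, that means overriding the default value of the same property shown above:

```xml
<Properties>
    <!-- Persistent scheduler: job schedules survive a system restart -->
    <Property name="asynchwf.scheduler" value="_PERSIST_TX"/>
</Properties>
```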
Start of the Persistent Scheduler
The site configuration below specifies that the _PERSIST_TX scheduler is to be started. The properties above only specify which scheduler is to be used by the process framework, not the startup of the scheduler.
<!-- enabled="true" starts the persistent scheduler -->
<Schedulers>
<Scheduler name="_PERSIST_TX" enabled="false">
<Properties>
<!-- thread pool -->
<Property name="threadPool.threadCount" value="8"/><!-- Default 5 -->
<Property name="jobStore.dataSource" value="CACORE"/>
<!-- If DB is MS SQL Server -->
<!-- Property name="jobStore.driverDelegateClass" value="org.quartz.impl.jdbcjobstore.MSSQLDelegate"/-->
</Properties>
</Scheduler>
</Schedulers>
If more than 8 jobs are triggered at the same time with a thread pool of 8 threads (as above), then some jobs have to wait until one of the running jobs has finished and a worker thread is available. So, to guarantee that a job will run at the time you have given it, make sure that the pool has enough threads to match the maximum number of jobs that can run at the same time.
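The queueing behavior can be illustrated with a plain-Java fixed thread pool (a simplified stand-in for the Quartz worker pool; the class and method names below are made up for this sketch and are not Comflow or Quartz API):

```java
import java.util.concurrent.*;
import java.util.concurrent.atomic.AtomicInteger;

public class PoolDemo {
    /** Submits 'jobs' tasks to a pool of 'poolSize' threads and
        returns the highest number of tasks that ran concurrently. */
    static int runJobs(int poolSize, int jobs) throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(poolSize);
        AtomicInteger running = new AtomicInteger();
        AtomicInteger maxConcurrent = new AtomicInteger();
        CountDownLatch done = new CountDownLatch(jobs);
        for (int i = 0; i < jobs; i++) {
            pool.submit(() -> {
                int now = running.incrementAndGet();
                maxConcurrent.accumulateAndGet(now, Math::max);
                try { Thread.sleep(100); } catch (InterruptedException ignored) { }
                running.decrementAndGet();
                done.countDown();
            });
        }
        done.await();
        pool.shutdown();
        return maxConcurrent.get();
    }

    public static void main(String[] args) throws InterruptedException {
        // With 4 simultaneous jobs but only 2 worker threads,
        // at most 2 jobs run at once; the rest wait for a free thread.
        System.out.println("max concurrent jobs: " + runJobs(2, 4));
    }
}
```

With a pool of 2 and 4 simultaneous jobs, two of the jobs are queued and start late, which is exactly why the pool should be sized for the worst-case number of concurrent jobs.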
Navigation tree
Add the following to get the Job Scheduler Control View.
<Navigation name="JobMonitor.navtree"/>
Logging
To see logs of the Job Scheduler, use both the Job Scheduler log tag (jobmon) and the log tag of the Process Framework (asynchwf). When the asynchwf tag is used without the jobmon tag, logs from all processes except the Job Scheduler process will be shown. This makes it possible to suppress recurring background logs when they are not wanted.
Log level ‘INFO’ is usually enough to locate mistakes during development.
<category name="jobmon" log-level="ERROR">
<log-target id-ref="stream"/>
</category>
<category name="asynchwf" log-level="ERROR">
<log-target id-ref="stream"/>
</category>
Repository
Add the repository of Job Scheduler, right above CACOREAWF in the repository order.
<Repository position="35" name="JobMonitor">
<Adapter class="net.comactivity.core.repository.adapters.zip.ZipRepositoryAdapter">
<Parameters>
<parameter name="file" value="${approot}/WEB-INF/reps/net.comactivity.core.jobmonitor.zip"/>
</Parameters>
</Adapter>
</Repository>
Robustness
To configure the Job Scheduler for maximum robustness, use the DB re-trial flags below along with a timeout on each job.
DB re-trial
The flags below give re-trials on DB access failure (for example in unstable networks).
<!-- Number of times to try the database access. Value 1 means no re-try, value 2 means re-try once etc. -->
<Property name="asynchwf.db.trial.max" value="1"/>
<!-- Number of milliseconds until re-trying -->
<Property name="asynchwf.db.trial.delay" value="1000"/>
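The contract of the two flags can be sketched in plain Java. This is an illustration of the semantics only, not the framework's implementation; DbRetry and withRetry are hypothetical names:

```java
import java.util.concurrent.Callable;

public class DbRetry {
    /** Calls 'dbAccess' up to trialMax times, sleeping trialDelayMs between
        attempts. trialMax = 1 means no re-try, 2 means re-try once, etc. */
    static <T> T withRetry(Callable<T> dbAccess, int trialMax, long trialDelayMs)
            throws Exception {
        if (trialMax < 1) throw new IllegalArgumentException("trialMax must be >= 1");
        Exception last = null;
        for (int attempt = 1; attempt <= trialMax; attempt++) {
            try {
                return dbAccess.call();
            } catch (Exception e) {
                last = e;                          // remember the failure
                if (attempt < trialMax) Thread.sleep(trialDelayMs);
            }
        }
        throw last;                                // all trials exhausted
    }
}
```

For example, with trialMax = 3 a DB access that fails twice and succeeds on the third attempt returns normally; only when all trials are exhausted does the failure propagate.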
Timeout
Set a timeout on each job to make sure it goes back to state ‘Active’ after a certain period of time, even if the job rule has not returned. Whether it is better to stop on failure, or to time out, go back to ‘Active’, and try again, is job specific. If both minutes and seconds are given, the sum is used as the timeout.
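The timeout semantics can be sketched in plain Java: the job gets (minutes * 60 + seconds) to finish, and if it has not returned by then it is interrupted and reported as ‘Active’ again. JobTimeoutDemo and runWithTimeout are hypothetical names for this sketch, not Comflow API:

```java
import java.util.concurrent.*;

public class JobTimeoutDemo {
    /** Runs 'job' but gives up after the timeout (minutes and seconds are
        summed, as described above) and reports "Active" again instead of
        waiting forever. A sketch of the contract, not the framework's code. */
    static String runWithTimeout(Runnable job, int minutes, int seconds)
            throws InterruptedException {
        long timeoutMs = (minutes * 60L + seconds) * 1000L;
        ExecutorService ex = Executors.newSingleThreadExecutor();
        Future<?> f = ex.submit(job);
        try {
            f.get(timeoutMs, TimeUnit.MILLISECONDS);
            return "Done";
        } catch (TimeoutException e) {
            f.cancel(true);              // interrupt the hanging job
            return "Active";             // job goes back to 'Active'
        } catch (ExecutionException e) {
            return "Exception";          // job failed instead of hanging
        } finally {
            ex.shutdownNow();
        }
    }
}
```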
Basic test of the installation
(If you need a detailed explanation of how to run the Job Scheduler before proceeding with the basic test, please read the document CA BPP – Job Scheduler Step by Step).
- Start the portal.
- Activate the Job Scheduler process (if not already activated).
- Create a test job.
- Configure the test job to run every minute on current day of the week.
- Check in the console that the job has run as it should at least twice. The console should have printouts like the ones below.
TestRule: working...(14)
TestRule: working...(13)
TestRule: working...(12)
… etc down to…
TestRule: working...(1)
Now your basic installation is OK and you can start to develop your own job (an integration job, for example).
Job Implementation
The Job Scheduler is a state machine that typically spends most of its time in state ‘Active’ where it waits for time events to occur (see chapter Operation below). When a time event occurs for a certain job, the “Work Service” of the job will be invoked and do its job.
Do the work in a Rule or in a Process?
The “Work Service” of a job is the place where the actual work of the business logic is performed. You have two options for which type of “Work Service” to use: a “Work Rule” or a “Work Process”. Use a “Work Rule” when the business logic is suitable to write in a single class. If the logic requires state, and/or is suitable to divide into several classes, then it is preferable to visualize the different steps of the business logic as a process and at the same time utilize the state machine of a process.
When using a process for the business logic it is also possible to build logic that mixes user interaction and background job execution. The only type of ‘Work Service’ that can be referred to directly from the current version of the Job Scheduler is a “Work Rule”. A “Work Rule” is a Java class that extends JobMonRule.java. Even though a “Work Rule” is the only type of job that the Job Scheduler can start directly, it is still possible to indirectly start other types of jobs, for example a “Work Process”. This is done by letting the Job Scheduler rule be just a “starting rule” whose only purpose is to start an instance of a work process. The work process is the one that actually does the job, and it reports back to the Job Scheduler when done.
Creating a “Work Rule”
1. Create a java class anywhere in your customer project and let the class extend JobMonRule.java in the Job Scheduler plug-in project. (The lazy way is to just copy the example rule TestRule.java from the Job Scheduler plug-in project).
2. Implement your job logic inside the execute method of the Job Scheduler rule that you just created.
3. Decide whether your job should explicitly report back to the Job Scheduler process that the job is done, or if the job should be considered done when the execute method returns. The default behavior is autoContinue = ‘true’. If you are using a “Work Process”, you need autoContinue = ‘false’ (see “Creating a ‘Work Process’” below). To change the behavior, just change the return value of the autoContinue() method in your Job Scheduler rule (see the test rule for an example).
4. Decide if the Job Scheduler should try to execute your job again after a failure, or if your job should go into state ‘Exception’. The default behavior is continueOnException = ‘true’; it can be changed by changing the return value of the continueOnException() method in your Job Scheduler rule (see the test rule for an example).
5. Override the WorkService.column of the Job Scheduler plug-in project by copying the column to a suitable customer project. Create a constant for the name of your job and add a row that refers to the constant in the WorkService.column. This will give you a new choice in the Work Service drop-down box at runtime.
6. Deploy your customer project, restart, and create a job with your new Job Scheduler rule as the work service. Note: The work rule should be written so that it is interruptible; otherwise it cannot be manually inactivated through the Job Scheduler interface.
See the test rule below as an example:
public class TestRule extends JobMonRule {
public boolean execute(String point) {
//Implement Work Rule logic and complete it with done().
try {
int k = 15;
AsynchProcessInstance procInst = getProcessInstance("JobMonitor");
int procInstId = procInst.getProcessInstId();
// AsynchUtil.logInfo("JobMon: " + "TestRule: " + mapData.getVariable("Param_1"));
//setWorkTimeout(0, 7);
AsynchUtil.logInfo(JobMonUtil.getLogMessageInit(procInstId) + "Timeout: " + getWorkTimeoutMinutes() + ":" + getWorkTimeoutSeconds());
for (int i=k; i > 0; i--) {
System.out.println(JobMonUtil.getLogMessageInit(procInstId) + "TestRule: " + "working...(" + i + ")");
Thread.sleep(1000);
}
done();
} catch (Exception ie) {
AsynchUtil.logInfo("JobMonitor: " + "Interrupt!");
WatchDog.handleFail(this.getClass());
// If you want an explicit done
done();
// Go to 'Active' on interrupt
return true;
}
return true;
}
}
Creating a “Work Process”
1. Create a process with the services, states and rules needed for your business logic. (The details of how to create a process are outside the scope of this guide. See the documentation for the Process Framework instead.)
2. Create a Job Scheduler rule as described above. The rule will not have any business logic, but only work as a starting rule for a “Work Process”.
3. In the execute method of the rule, start/call your work process:
a) Option: start a new process instance each time:
startProcessInstance("MyProcess");
b) Option: start a new process instance each time, but delete the old one:
String processId = "MyProcess";
boolean deleteOld = false; // delete old instance before starting a new one?
startProcessInstance(processId, deleteOld);
c) Option: start an instance the first time, and call the same singleton instance after that:
String processId = "MyProcess";
String startParam = "start"; // start event param. in process
String continueParam = "continue"; // continue event param. in process
startProcessInstanceSingleton(processId, startParam, continueParam);
Considerations
• Note: Never start several persistent quartz schedulers towards the same tables!
• Often, there is no need to have a persistent scheduler in TEST.
• If possible, use separate DBs for TEST and PROD.
• If you have to use the same DB, use different schemas.
• If it is not possible to use different schemas, then use different prefixes for the quartz tables (see MasterSitedef.xml for how to configure this).
• The behavior if the rules above are not followed is unpredictable. Typically, events are lost (they go to the “other” scheduler).
• ‘Started by’ log: Use the PersistTX start log to check that no more than one persistent scheduler is started towards the same quartz tables.
• Performance: The Job Scheduler has an overhead of a couple of hundred milliseconds. That is the drawback of the flexibility of having the Job Scheduler implemented as a process. This gives a limit somewhere around a couple of seconds as the lowest interval you should use for jobs.
Trouble-shooting
Quartz tables
The following quartz tables are used:
COREJS_BLOB_TRIGGERS - Not used
COREJS_CALENDARS - Not used
COREJS_CRON_TRIGGERS - For date/time triggering. One record per trigger
COREJS_FIRED_TRIGGERS - Not used
COREJS_JOB_DETAILS - For trigger-related job info. One record per trigger
COREJS_LOCKS - Not used
COREJS_PAUSED_TRIGGER_GRPS - Not used
COREJS_SCHEDULER_STATE - Not used
COREJS_SIMPLE_TRIGGERS - For time triggering. One record per trigger
COREJS_SIMPROP_TRIGGERS - Not used
COREJS_TRIGGERS - For the relation between trigger and job information. One record per trigger
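When trouble-shooting, it can help to inspect the trigger table directly to see what the scheduler intends to fire next. A sketch (the column names are standard Quartz 2.x columns and are an assumption about this installation; adjust schema and prefix to yours):

```sql
-- One row per trigger; NEXT_FIRE_TIME is an epoch timestamp in milliseconds
select TRIGGER_NAME, TRIGGER_STATE, NEXT_FIRE_TIME
from QUARTZ.COREJS_TRIGGERS
```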
A job runs too often
In some cases, a scheduled job can run too often. This is probably because records in the scheduler table are not deleted properly when the jobs are deactivated. To correct this, do the following in a SQL editor (note that in this example the Quartz tables are in the QUARTZ schema and have the COREJS_ prefix, as in the table CRON_TRIGGERS):
1. Deactivate all scheduled jobs.
2. Check if you still have records in the date/time triggering table:
select * from QUARTZ.COREJS_CRON_TRIGGERS
3. Delete the records in the table:
delete from QUARTZ.COREJS_CRON_TRIGGERS
4. Verify that there are no records in the table:
select * from QUARTZ.COREJS_CRON_TRIGGERS
5. Start the scheduled jobs again.
6. Verify that there are new records in the table:
select * from QUARTZ.COREJS_CRON_TRIGGERS
Other scheduling problems
When you convert from one version to another, you can get "unfinished" records in the Quartz tables that cause problems for the Job Monitor. In that case, inactivate all jobs in the Job Scheduler and then clear all records in all Quartz tables. When that is done, you can activate all jobs again.