One of the most useful applications of SimpleWorker is to replace Cron with a dependable "Cloud Scheduler." To do this, you use the schedule method rather than queuing the worker directly. Let's start with the ReportWorker from our Creating a Report Worker example, and instead of queuing it for one-time immediate delivery, let's schedule it for recurring delivery. Create a file containing the scheduling code:
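Here is a minimal sketch of what that file might contain. The file name, the require paths, and the assumption that ReportWorker needs no extra attributes set are all illustrative; only the schedule call with :start_at and :run_every comes from the examples that follow.

```ruby
# schedule_report.rb -- illustrative file name
require 'simple_worker'
require 'active_support/core_ext'   # time helpers like 1.hour.since
require_relative 'report_worker'    # assumes ReportWorker is defined alongside this file

# SimpleWorker credentials are assumed to be configured elsewhere (e.g. an initializer)
worker = ReportWorker.new

# first run in an hour, then repeat every hour (:run_every is in seconds)
worker.schedule(:start_at => 1.hour.since, :run_every => 60 * 60)
```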
Run it and that's it. Your fearless user will get this report in an hour, every hour!

Other examples

```ruby
# to run at a higher priority
worker = DataCrunchWorker.new
worker.schedule(:start_at => 15.minutes.since, :priority => 1)

# to schedule a job to recur every day
worker = DailySomethingWorker.new
worker.schedule(:start_at => 1.day.from_now.change(:hour => 3), :run_every => 24 * 60 * 60)
```

Note: For the time helpers used above (for example, 15.minutes.since), you need ActiveSupport:

```ruby
require 'active_support/core_ext' # includes only what's needed ('active_support/all' will work as well, but it includes many additional files)
```

Arguments

Scheduling arguments can include:

Required:
Optional:
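As a rough illustration of how these arguments combine, here is a sketch that uses only the options appearing in the examples above (:start_at, :run_every, and :priority); it makes no claim about which other arguments exist or which are required.

```ruby
# combine a start time, a recurrence interval, and a priority in one call
worker = DailySomethingWorker.new
worker.schedule(
  :start_at  => 1.day.from_now.change(:hour => 3),  # first run at 3 AM tomorrow
  :run_every => 24 * 60 * 60,                       # repeat every 24 hours (seconds)
  :priority  => 1                                    # run each occurrence at a higher priority
)
```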
Caution: Number of Scheduled Jobs

SimpleWorker limits basic accounts to 100 scheduled jobs per project. (Note this is just for scheduled jobs; queued jobs are unlimited.) If you're using good worker patterns, you should be well under the limit (see below for tips). If you do need more, though, please contact us at support@simpleworker.com and we can raise the limit.

Scheduling Patterns

Managing lots of scheduled jobs within an application can create administrative issues. For this reason, we recommend adopting certain design patterns around scheduled jobs. One in particular is a master/slave pattern whereby one or more master scheduled jobs come off schedule and create and queue worker jobs to perform the actual work (ideally with each worker job performing a number of individual tasks so as to amortize the worker setup steps). This is a far better approach than maintaining scheduled jobs for each user or task.

Here's an example where you might have one "master" run each day to do something for each user. The only thing that master worker does is go through your user table and queue up workers to perform each user's task. Some pseudo code for this "master" worker might look like this:

```ruby
class DailyUserWorker < SimpleWorker::Base
  merge_worker "task_worker", "TaskWorker"

  def run
    users = User.all
    users.each do |u|
      task = TaskWorker.new
      task.users_id = u.id
      task.queue
    end
  end
end
```

The TaskWorker would be the one that performs a task for each user or client. (If the processing time for each task is in seconds, you might want to bundle several users/clients together to amortize worker setup.) You can see a worker calling another worker in the "batch" example in the SimpleWorker examples on GitHub.

You can also read more on this pattern (and the corresponding anti-pattern) here:

Pattern: Creating Task-Level Workers at Runtime
Anti-Pattern: Lots of scheduled jobs

Extra Credit

One tenet of cloud computing is the idea of Disposable Infrastructure. In short, this means architecting for servers to fail. Building schedulers brings its own set of problems: if you're running Cron jobs on local servers, what process is monitoring them, and what happens if the server fails? This is where SimpleWorker comes in. Running scheduled jobs in the cloud lets you monitor them and verify they're performing correctly.

Note

When scheduling jobs in SimpleWorker, you want to do so either outside of your application code or with checks to see if the jobs are already scheduled. Otherwise, you may schedule a duplicate job every time your application starts up.
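One simple way to keep scheduling out of the application boot path is to put it in a one-off script or rake task that you run deliberately. The sketch below assumes a Rails-style rake setup; the task name, file location, and the DailyUserWorker arguments are illustrative, and this is only one way to follow the note above.

```ruby
# lib/tasks/scheduler.rake -- illustrative location and names
namespace :scheduler do
  desc "Register the recurring DailyUserWorker (run once per environment)"
  task :setup => :environment do
    worker = DailyUserWorker.new
    # first run at 3 AM tomorrow, then every 24 hours
    worker.schedule(:start_at => 1.day.from_now.change(:hour => 3),
                    :run_every => 24 * 60 * 60)
  end
end
```

Running the task once per environment registers the recurring job; restarting the application afterwards does not create duplicates, because the application itself never calls schedule.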