Releases: agronholm/apscheduler
3.11.0
- Dropped support for Python 3.6 and 3.7
- Added support for `ZoneInfo` time zones and deprecated support for pytz time zones
- Added `CalendarIntervalTrigger`, backported from the 4.x series
- Added the ability to export and import jobs via `scheduler.export_jobs()` and `scheduler.import_jobs()`, respectively
- Removed the dependency on `six`
- Changed `ProcessPoolExecutor` to spawn new subprocesses from scratch instead of forking on all platforms
- Fixed `AsyncIOScheduler` inadvertently creating a defunct event loop at start, leading to the scheduler not working at all
- Fixed `ProcessPoolExecutor` not respecting the passed keyword arguments when a broken pool was being replaced
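The pytz deprecation above mostly changes how timezone-aware datetimes are built. A minimal stdlib-only sketch of the `ZoneInfo` style (no APScheduler API involved; trigger timezone parameters would accept such objects):

```python
from datetime import datetime, timedelta
from zoneinfo import ZoneInfo  # stdlib since Python 3.9

# With ZoneInfo, an aware datetime is built directly via tzinfo=,
# instead of pytz's timezone(...).localize(naive_dt) pattern.
tz = ZoneInfo("America/New_York")
run_time = datetime(2024, 7, 1, 9, 30, tzinfo=tz)

print(run_time.utcoffset() == timedelta(hours=-4))  # True: EDT in July
```

Unlike pytz objects, `ZoneInfo` instances also handle DST transitions correctly when passed straight to the `datetime` constructor.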
4.0.0a5
- BREAKING Added the `cleanup()` scheduler method and a configuration option (`cleanup_interval`). A corresponding abstract method was added to the `DataStore` class. This method purges expired job results and schedules that have exhausted their triggers and have no more associated jobs running. Previously, schedules were deleted instantly once their triggers could no longer produce any fire times.
- BREAKING Made publishing `JobReleased` events the responsibility of the `DataStore` implementation, rather than the scheduler, for consistency with the `acquire_jobs()` method
- BREAKING The `started_at` field was moved from `Job` to `JobResult`
- BREAKING Removed the `from_url()` class methods of `SQLAlchemyDataStore`, `MongoDBDataStore` and `RedisEventBroker` in favor of the ability to pass a connection URL to the initializer
- Added the ability to pause and unpause schedules (PR by @WillDaSilva)
- Added the `scheduled_start` field to the `JobAcquired` event
- Added the `scheduled_start` and `started_at` fields to the `JobReleased` event
- Fixed large parts of `MongoDBDataStore` still calling blocking functions in the event loop thread
- Fixed JSON serialization of triggers that had been used at least once
- Fixed dialect name checks in the SQLAlchemy job store
- Fixed JSON and CBOR serializers being unable to serialize enums
- Fixed infinite loop in `CalendarIntervalTrigger` with UTC timezone (PR by unights)
- Fixed scheduler not resuming job processing when `max_concurrent_jobs` had been reached and a job then completed, making job processing possible again (PR by MohammadAmin Vahedinia)
- Fixed the shutdown procedure of the Redis event broker
- Fixed `SQLAlchemyDataStore` not respecting a custom schema name when creating enums
- Fixed skipped intervals with overlapping schedules in `AndTrigger` (#911; PR by Bennett Meares)
- Fixed implicitly created client instances in data stores and event brokers not being closed along with the store/broker
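To make the new `cleanup()` responsibility concrete, here is a hypothetical in-memory sketch of what the release describes: purging job results whose retention has lapsed. The `MemoryStore` and `JobResult` names are illustrative only, not APScheduler's actual `DataStore` interface:

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta, timezone

@dataclass
class JobResult:
    job_id: str
    expires_at: datetime

@dataclass
class MemoryStore:
    results: list = field(default_factory=list)

    def cleanup(self) -> None:
        # Drop results whose retention period has lapsed; a real store
        # would also purge finished schedules with no jobs still running.
        now = datetime.now(timezone.utc)
        self.results = [r for r in self.results if r.expires_at > now]

store = MemoryStore()
now = datetime.now(timezone.utc)
store.results = [
    JobResult("done-long-ago", now - timedelta(hours=1)),
    JobResult("still-fresh", now + timedelta(hours=1)),
]
store.cleanup()
print([r.job_id for r in store.results])  # ["still-fresh"]
```

The point of `cleanup_interval` is that this purge now runs periodically instead of schedules vanishing the instant their triggers are exhausted.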
4.0.0a4
- BREAKING Renamed any leftover fields named `executor` to `job_executor` (this breaks data store compatibility)
- BREAKING Switched to using the timezone-aware timestamp column type on Oracle
- BREAKING Fixed a precision issue with interval columns on MySQL
- BREAKING Fixed datetime comparison issues on SQLite and MySQL
- BREAKING Worked around a datetime microsecond precision issue on MongoDB
- BREAKING Renamed the `worker_id` field to `scheduler_id` in the `JobAcquired` and `JobReleased` events
- BREAKING Added the `task_id` attribute to the `ScheduleAdded`, `ScheduleUpdated` and `ScheduleRemoved` events
- BREAKING Added the `finished` attribute to the `ScheduleRemoved` event
- BREAKING Added the `logger` parameter to `DataStore.start()` and `EventBroker.start()` to make both use the scheduler's assigned logger
- BREAKING Made the `apscheduler.marshalling` module private
- Added the `configure_task()` and `get_tasks()` scheduler methods
- Fixed out-of-order delivery of events delivered using worker threads
- Fixed schedule processing not setting job start deadlines correctly
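The MongoDB precision workaround above stems from BSON datetimes carrying millisecond precision while Python datetimes carry microseconds. A stdlib sketch of the truncation idea (illustrative, not APScheduler's actual code):

```python
from datetime import datetime, timezone

def truncate_to_millis(ts: datetime) -> datetime:
    # Round down to whole milliseconds so a value round-tripped through
    # a millisecond-precision store still compares equal afterwards.
    return ts.replace(microsecond=ts.microsecond // 1000 * 1000)

ts = datetime(2024, 1, 1, 12, 0, 0, 123456, tzinfo=timezone.utc)
print(truncate_to_millis(ts).microsecond)  # 123000
```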
4.0.0a3
- BREAKING The scheduler classes were moved to be importable (only) directly from the `apscheduler` package (`apscheduler.Scheduler` and `apscheduler.AsyncScheduler`)
- BREAKING Removed the "tags" field in schedules and jobs (this will be added back when the feature has been fully thought through)
- BREAKING Removed the `JobInfo` class in favor of just using the `Job` class (which is now immutable)
- BREAKING Workers were merged into schedulers. As the `Worker` and `AsyncWorker` classes have been removed, you now need to pass `role=SchedulerRole.scheduler` to the scheduler to prevent it from processing due jobs. The worker event classes (`WorkerEvent`, `WorkerStarted`, `WorkerStopped`) have also been removed.
- BREAKING The synchronous interfaces for event brokers and data stores have been removed. Synchronous libraries can still be used to implement these services through the use of `anyio.to_thread.run_sync()`.
- BREAKING The `current_worker` context variable has been removed
- BREAKING The `current_scheduler` context variable is now specified to only contain the currently running instance of a synchronous scheduler (`apscheduler.Scheduler`). The asynchronous scheduler instance can be fetched from the new `current_async_scheduler` context variable, which will always be available when a scheduler is running in the current context, while `current_scheduler` is only available when the synchronous wrapper is being run.
- BREAKING Changed the initialization of data stores and event brokers to use a single `start()` method that accepts an `AsyncExitStack` (and, depending on the interface, other arguments too)
- BREAKING Added a concept of "job executors". This determines how the task function is executed once picked up by a worker. Several data structures and scheduler methods have a new field/parameter for this, `job_executor`. This addition requires database schema changes too.
- Dropped support for Python 3.7
- Added support for Python 3.12
- Added the ability to run jobs in worker processes, courtesy of the `processpool` executor
- Added the ability to run jobs in the Qt event loop via the `qt` executor
- Added the `get_jobs()` scheduler method
- The synchronous scheduler now runs an asyncio event loop in a thread, acting as a façade for `AsyncScheduler`
- Fixed the `schema` parameter in `SQLAlchemyDataStore` not being applied
- Fixed SQLAlchemy 2.0 compatibility
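The single `start()`-plus-`AsyncExitStack` initialization described above follows a common pattern: the component registers its own teardown on the stack the caller passes in, so one stack unwinds everything. A stdlib-only sketch with a hypothetical component (not the actual `DataStore`/`EventBroker` signatures):

```python
import asyncio
from contextlib import AsyncExitStack

class DummyBroker:
    # Hypothetical stand-in for a component using the start(exit_stack) pattern.
    def __init__(self) -> None:
        self.started = False

    async def start(self, exit_stack: AsyncExitStack) -> None:
        self.started = True
        # Register teardown with the caller's stack instead of exposing
        # a separate stop() method the caller has to remember to call.
        exit_stack.callback(self._stop)

    def _stop(self) -> None:
        self.started = False

async def main() -> bool:
    broker = DummyBroker()
    async with AsyncExitStack() as stack:
        await broker.start(stack)
        assert broker.started  # active inside the stack's scope
    return broker.started  # teardown has already run here

print(asyncio.run(main()))  # False
```

The design choice this illustrates: the scheduler can hold one exit stack and start any number of stores and brokers on it, then tear them all down in reverse order with a single `aclose()`.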