From 7744f3212d85411026184be258c80750773715f0 Mon Sep 17 00:00:00 2001
From: git_admin
Date: Mon, 27 Apr 2026 08:46:13 +0000
Subject: [PATCH] Tower: upload queue_job 16.0.2.12.0 (via marketplace)

---
 addons/queue_job/README.rst | 707 ++++++++++++++++++++++++++++++++++++
 1 file changed, 707 insertions(+)
 create mode 100644 addons/queue_job/README.rst

diff --git a/addons/queue_job/README.rst b/addons/queue_job/README.rst
new file mode 100644
index 0000000..f22fd7b
--- /dev/null
+++ b/addons/queue_job/README.rst
@@ -0,0 +1,707 @@
+.. image:: https://odoo-community.org/readme-banner-image
+   :target: https://odoo-community.org/get-involved?utm_source=readme
+   :alt: Odoo Community Association
+
+=========
+Job Queue
+=========
+
+..
+   !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
+   !! This file is generated by oca-gen-addon-readme !!
+   !! changes will be overwritten.                   !!
+   !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
+   !! source digest: sha256:b92d06dbbf161572f2bf02e0c6a59282cea11cc5e903378094bead986f0125de
+   !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
+
+.. |badge1| image:: https://img.shields.io/badge/maturity-Mature-brightgreen.png
+   :target: https://odoo-community.org/page/development-status
+   :alt: Mature
+.. |badge2| image:: https://img.shields.io/badge/license-LGPL--3-blue.png
+   :target: http://www.gnu.org/licenses/lgpl-3.0-standalone.html
+   :alt: License: LGPL-3
+.. |badge3| image:: https://img.shields.io/badge/github-OCA%2Fqueue-lightgray.png?logo=github
+   :target: https://github.com/OCA/queue/tree/16.0/queue_job
+   :alt: OCA/queue
+.. |badge4| image:: https://img.shields.io/badge/weblate-Translate%20me-F47D42.png
+   :target: https://translation.odoo-community.org/projects/queue-16-0/queue-16-0-queue_job
+   :alt: Translate me on Weblate
+.. |badge5| image:: https://img.shields.io/badge/runboat-Try%20me-875A7B.png
+   :target: https://runboat.odoo-community.org/builds?repo=OCA/queue&target_branch=16.0
+   :alt: Try me on Runboat
+
+|badge1| |badge2| |badge3| |badge4| |badge5|
+
+This addon adds an integrated Job Queue to Odoo.
+
+It allows postponing method calls so they are executed asynchronously.
+
+Jobs are executed in the background by a ``Jobrunner``, each in its own
+transaction.
+
+Example:
+
+.. code-block:: python
+
+    import logging
+
+    from odoo import models
+
+    _logger = logging.getLogger(__name__)
+
+
+    class MyModel(models.Model):
+        _name = 'my.model'
+
+        def my_method(self, a, k=None):
+            _logger.info('executed with a: %s and k: %s', a, k)
+
+
+    class MyOtherModel(models.Model):
+        _name = 'my.other.model'
+
+        def button_do_stuff(self):
+            self.env['my.model'].with_delay().my_method('a', k=2)
+
+
+In the snippet of code above, when we call ``button_do_stuff``, a job **capturing
+the method and arguments** will be postponed. It will be executed as soon as the
+Jobrunner has a free bucket, which can be instantaneous if no other job is
+running.
+
+
+Features:
+
+* Views for jobs; jobs are stored in PostgreSQL
+* Jobrunner: executes the jobs, highly efficient thanks to PostgreSQL's NOTIFY
+* Channels: give a capacity to the root channel and its sub-channels and
+  segregate jobs among them. This allows, for instance, restricting heavy jobs
+  to run one at a time while light ones run four at a time.
+* Retries: ability to retry jobs by raising a specific type of exception
+* Retry Pattern: for the first 3 tries, retry after 10 seconds; for the next 5
+  tries, retry after 1 minute; ...
+* Job properties: priorities, estimated time of arrival (ETA), custom
+  description, number of retries
+* Related Actions: link an action on the job view, such as opening the record
+  concerned by the job
+
+**Table of contents**
+
+.. contents::
+   :local:
+
+Installation
+============
+
+Be sure to have the ``requests`` library installed.
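The mechanism shown in the introduction (a job capturing the method and its
arguments, performed later by the runner) can be sketched in plain Python.
The class and attribute names below are illustrative only; this is not
queue_job's internal job model:

```python
# Plain-Python sketch of the idea behind with_delay(): the method name and
# arguments are captured now, and the call is performed later, when a worker
# picks the job up. All names here are illustrative, NOT queue_job's API.
class CapturedJob:
    def __init__(self, func, *args, **kwargs):
        self.method_name = func.__name__
        self.args = args
        self.kwargs = kwargs
        self._func = func

    def perform(self):
        # Executed later, typically in its own transaction.
        return self._func(*self.args, **self.kwargs)


def my_method(a, k=None):
    return "executed with a: %s and k: %s" % (a, k)


# At "delay" time, only the call is recorded...
job = CapturedJob(my_method, "a", k=2)
# ...and the actual execution happens later, in the job runner.
result = job.perform()
```

In the real addon the captured call is serialized into the ``queue_job``
PostgreSQL table rather than kept in memory, which is what lets a separate
worker process pick it up.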
+
+Configuration
+=============
+
+* Using environment variables and the command line:
+
+  * Adjust environment variables (optional):
+
+    - ``ODOO_QUEUE_JOB_CHANNELS=root:4`` or any other channels configuration.
+      The default is ``root:1``.
+
+    - if ``xmlrpc_port`` is not set: ``ODOO_QUEUE_JOB_PORT=8069``
+
+  * Start Odoo with ``--load=web,queue_job``
+    and ``--workers`` greater than 1. [1]_
+
+* Keep in mind that the number of workers should be greater than the number of
+  channels. ``queue_job`` reuses normal Odoo workers to process jobs; it
+  does not spawn its own workers.
+
+* Using the Odoo configuration file:
+
+.. code-block:: ini
+
+  [options]
+  (...)
+  workers = 6
+  server_wide_modules = web,queue_job
+
+  (...)
+  [queue_job]
+  channels = root:2
+
+* Environment variables have priority over the configuration file.
+
+* Confirm the runner is starting correctly by checking the Odoo log file:
+
+.. code-block::
+
+  ...INFO...queue_job.jobrunner.runner: starting
+  ...INFO...queue_job.jobrunner.runner: initializing database connections
+  ...INFO...queue_job.jobrunner.runner: queue job runner ready for db <dbname>
+  ...INFO...queue_job.jobrunner.runner: database connections ready
+
+* Create jobs (e.g. using ``base_import_async``) and observe that they
+  start immediately and in parallel.
+
+* Tip: to enable debug logging for the queue job, use
+  ``--log-handler=odoo.addons.queue_job:DEBUG``
+
+.. [1] It works with the threaded Odoo server too, although this way
+   of running Odoo is obviously not for production purposes.
+
+* Jobs that remain in ``enqueued`` or ``started`` state (because, for
+  instance, their worker has been killed) will be automatically re-queued.
+
+Usage
+=====
+
+To use this module, you need to:
+
+#. Go to the ``Job Queue`` menu
+
+Developers
+~~~~~~~~~~
+
+Delaying jobs
+-------------
+
+The fastest way to enqueue a job for a method is to use ``with_delay()`` on a
+record or model:
+
+.. code-block:: python
+
+    def button_done(self):
+        self.with_delay().print_confirmation_document(self.state)
+        self.write({"state": "done"})
+        return True
+
+Here, the method ``print_confirmation_document()`` will be executed
+asynchronously as a job. ``with_delay()`` can take several parameters to
+define more precisely how the job is executed (priority, ...).
+
+All the arguments passed to the method being delayed are stored in the job and
+passed to the method when it is executed asynchronously, including ``self``, so
+the current record is maintained during the job execution (warning: the context
+is not kept).
+
+Dependencies can be expressed between jobs. To start a graph of jobs, use
+``delayable()`` on a record or model. The following is the equivalent of
+``with_delay()``, but using the long form:
+
+.. code-block:: python
+
+    def button_done(self):
+        delayable = self.delayable()
+        delayable.print_confirmation_document(self.state)
+        delayable.delay()
+        self.write({"state": "done"})
+        return True
+
+Methods of a Delayable return the Delayable itself, so calls can be chained in
+a builder pattern, which in some cases allows building the jobs dynamically:
+
+.. code-block:: python
+
+    def button_generate_simple_with_delayable(self):
+        self.ensure_one()
+        # Introduction of a delayable object, using a builder pattern
+        # allowing to chain jobs or set properties. The delay() method
+        # on the delayable object actually stores the delayable objects
+        # in the queue_job table.
+        (
+            self.delayable()
+            .generate_thumbnail((50, 50))
+            .set(priority=30)
+            .set(description=_("generate xxx"))
+            .delay()
+        )
+
+The simplest way to define a dependency is to use ``.on_done(job)`` on a
+Delayable:
+
+.. code-block:: python
+
+    def button_chain_done(self):
+        self.ensure_one()
+        job1 = self.browse(1).delayable().generate_thumbnail((50, 50))
+        job2 = self.browse(1).delayable().generate_thumbnail((50, 50))
+        job3 = self.browse(1).delayable().generate_thumbnail((50, 50))
+        # job 3 is executed when job 2 is done, which is executed when
+        # job 1 is done
+        job1.on_done(job2.on_done(job3)).delay()
+
+Delayables can be chained to form more complex graphs using the ``chain()`` and
+``group()`` primitives. A chain represents a sequence of jobs to execute in
+order; a group represents jobs which can be executed in parallel. Using
+``chain()`` has the same effect as using several nested ``on_done()`` but is
+more readable. Both can be combined to form a graph: for instance, a group [A]
+of jobs can block another group [B] of jobs, so the jobs of group [B] are
+executed when, and only when, all the jobs of group [A] are done. The code
+would look like:
+
+.. code-block:: python
+
+    from odoo.addons.queue_job.delay import group, chain
+
+    def button_done(self):
+        group_a = group(self.delayable().method_foo(), self.delayable().method_bar())
+        group_b = group(self.delayable().method_baz(1), self.delayable().method_baz(2))
+        chain(group_a, group_b).delay()
+        self.write({"state": "done"})
+        return True
+
+When a failure happens in a graph of jobs, the execution of the jobs that
+depend on the failed job stops. They remain in a ``wait_dependencies`` state
+until their "parent" job is successful. This can happen in two ways: either
+the parent job retries and is successful on a second try, or the parent job is
+manually "set to done" by a user. In both cases, the dependency is resolved
+and the graph continues to be processed. Alternatively, the failed job and all
+its dependent jobs can be canceled by a user. The other jobs of the graph that
+do not depend on the failed job continue their execution in any case.
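The ordering guarantee described above can be sketched in plain Python. This
models only the semantics of ``chain()`` and ``group()``; it is not
queue_job's actual scheduler, and the function and job names are illustrative:

```python
# Illustrative sketch of chain()/group() ordering semantics. Jobs inside a
# group have no ordering between them (they may run in parallel), but a
# chain guarantees that every job of one stage finishes before the next
# stage starts.
executed = []


def perform(job_name):
    # Stand-in for running one job.
    executed.append(job_name)


def run_chain(*groups):
    # Each element is a "group": all of its jobs must be done before the
    # next group is started.
    for jobs in groups:
        for job_name in jobs:
            perform(job_name)


group_a = ["method_foo", "method_bar"]
group_b = ["method_baz_1", "method_baz_2"]
run_chain(group_a, group_b)

# Every group_a job ran before any group_b job.
assert executed.index("method_bar") < executed.index("method_baz_1")
```

The failure semantics follow from the same picture: if a group_a job fails,
the group_b jobs stay blocked (``wait_dependencies``) until the failed job
eventually succeeds or is set to done.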
+
+Note: ``delay()`` must be called on the delayable, chain, or group which is at
+the top of the graph. In the example above, if it were called on ``group_a``,
+then ``group_b`` would never be delayed (but a warning would be shown).
+
+It is also possible to split a job into several jobs, each one processing a
+part of the work. This can be useful to avoid very long jobs, to parallelize
+some tasks, and to get more specific errors. Usage is as follows:
+
+.. code-block:: python
+
+    def button_split_delayable(self):
+        (
+            self  # Can be a big recordset, let's say 1000 records
+            .delayable()
+            .generate_thumbnail((50, 50))
+            .set(priority=30)
+            .set(description=_("generate xxx"))
+            .split(50)  # Split the job into 20 jobs of 50 records each
+            .delay()
+        )
+
+The ``split()`` method takes a ``chain`` boolean keyword argument. If set to
+True, the jobs will be chained, meaning that the next job will only start when
+the previous one is done:
+
+.. code-block:: python
+
+    def button_increment_var(self):
+        (
+            self
+            .delayable()
+            .increment_counter()
+            .split(1, chain=True)  # Will execute the jobs one after the other
+            .delay()
+        )
+
+
+Enqueuing Job Options
+---------------------
+
+* priority: default is 10; the closer it is to 0, the sooner the job will be
+  executed
+* eta: Estimated Time of Arrival of the job. It will not be executed before
+  this date/time
+* max_retries: default is 5, the maximum number of retries before giving up
+  and setting the job state to 'failed'. A value of 0 means infinite retries.
+* description: human description of the job. If not set, the description is
+  computed from the function doc or method name
+* channel: the complete name of the channel to use to process the function. If
+  specified, it overrides the one defined on the function
+* identity_key: key uniquely identifying the job; if specified and a job with
+  the same key has not yet been run, the new job will not be created
+
+Configure default options for jobs
+----------------------------------
+
+In earlier versions, jobs could be configured using the ``@job`` decorator.
+This is now obsolete; they can be configured using optional
+``queue.job.function`` and ``queue.job.channel`` XML records.
+
+Example of channel:
+
+.. code-block:: XML
+
+    <record id="channel_sale" model="queue.job.channel">
+        <field name="name">sale</field>
+        <field name="parent_id" ref="queue_job.channel_root" />
+    </record>
+
+Example of job function:
+
+.. code-block:: XML
+
+    <record id="job_function_sale_order_action_done" model="queue.job.function">
+        <field name="model_id" ref="sale.model_sale_order" />
+        <field name="method">action_done</field>
+        <field name="channel_id" ref="channel_sale" />
+    </record>
+
+The general form for the ``name`` is: ``<model.name>.method``.
+
+The channel, related action and retry pattern options are optional; they are
+documented below.
+
+When writing modules, if two or more modules add a job function or channel
+with the same name (and parent, for channels), they'll be merged in the same
+record, even if they have different xmlids. On uninstall, the merged record is
+deleted when all the modules using it are uninstalled.
+
+
+**Job function: model**
+
+If the function is defined in an abstract model, you cannot write
+``<field name="model_id" ref="model_abstract_model"/>``
+but you have to define a function for each model that inherits from the
+abstract model.
+
+
+**Job function: channel**
+
+The channel where the job will be delayed. The default channel is ``root``.
+
+**Job function: related action**
+
+The *Related Action* appears as a button on the Job's view.
+The button will execute the defined action.
+
+The default one is to open the view of the record related to the job (form
+view when there is a single record, list view for several records). In many
+cases, the default related action is enough and doesn't need customization,
+but it can be customized by providing a dictionary on the job function:
+
+.. code-block:: python
+
+    {
+        "enable": False,
+        "func_name": "related_action_partner",
+        "kwargs": {"name": "Partner"},
+    }
+
+* ``enable``: when ``False``, the button has no effect (default: ``True``)
+* ``func_name``: name of the method on ``queue.job`` that returns an action
+* ``kwargs``: extra arguments to pass to the related action method
+
+Example of related action code:
+
+.. code-block:: python
+
+    class QueueJob(models.Model):
+        _inherit = 'queue.job'
+
+        def related_action_partner(self, name):
+            self.ensure_one()
+            model = self.model_name
+            partner = self.records
+            action = {
+                'name': name,
+                'type': 'ir.actions.act_window',
+                'res_model': model,
+                'view_type': 'form',
+                'view_mode': 'form',
+                'res_id': partner.id,
+            }
+            return action
+
+
+**Job function: retry pattern**
+
+When a job fails with a retryable error type, it is automatically retried
+later. By default, the retry happens 10 minutes later.
+
+A retry pattern can be configured on the job function. A pattern represents
+"from try number X on, postpone the retry by Y seconds". It is expressed as a
+dictionary where keys are try numbers and values are the number of seconds to
+postpone, as integers:
+
+.. code-block:: python
+
+    {
+        1: 10,
+        5: 20,
+        10: 30,
+        15: 300,
+    }
+
+Based on this configuration, we can tell that:
+
+* the first 4 retries are postponed 10 seconds
+* retries 5 to 9 are postponed 20 seconds
+* retries 10 to 14 are postponed 30 seconds
+* all subsequent retries are postponed 5 minutes
+
+**Job Context**
+
+The context of the recordset of the job, or of any recordset passed in the
+arguments of a job, is transferred to the job according to an allow-list.
+
+The default allow-list is ``("tz", "lang", "allowed_company_ids",
+"force_company", "active_test")``. It can be customized in
+``Base._job_prepare_context_before_enqueue_keys``.
+
+**Bypass jobs on running Odoo**
+
+When you are developing (e.g. connector modules) you might want
+to bypass the queue job and run your code immediately.
+
+To do so you can set ``QUEUE_JOB__NO_DELAY=1`` in your environment.
+
+**Bypass jobs in tests**
+
+When writing tests on job-related methods, it is always tricky to deal with
+delayed recordsets. To make your testing life easier,
+you can set ``queue_job__no_delay=True`` in the context.
+
+Tip: you can do this at test case level like this:
+
+.. code-block:: python
+
+    @classmethod
+    def setUpClass(cls):
+        super().setUpClass()
+        cls.env = cls.env(context=dict(
+            cls.env.context,
+            queue_job__no_delay=True,  # no jobs thanks
+        ))
+
+Then all your tests execute the job methods synchronously
+without delaying any jobs.
+
+Testing
+-------
+
+**Asserting enqueued jobs**
+
+The recommended way to test jobs, rather than running them directly and
+synchronously, is to split the tests in two parts:
+
+* one test where the job is mocked (trap jobs with ``trap_jobs()``) and the
+  test only verifies that the job has been delayed with the expected arguments
+* one test that only calls the method of the job synchronously, to validate
+  the proper behavior of this method only
+
+Proceeding this way means that you can prove that jobs will be enqueued
+properly at runtime, and it ensures your code does not have a different
+behavior in tests and in production (because running your jobs synchronously
+may have a different behavior, as they are in the same transaction / in the
+middle of the method). Additionally, it gives more control over the arguments
+you want to pass when calling the job's method (synchronously, this time, in
+the second type of tests), and it makes tests smaller.
+
+The best way to run such assertions on the enqueued jobs is to use
+``odoo.addons.queue_job.tests.common.trap_jobs()``.
+
+Inside this context manager, instead of being added to the database's queue,
+jobs are pushed into an in-memory list. The context manager then provides
+useful helpers to verify that jobs have been enqueued with the expected
+arguments. It can even run the jobs of its list synchronously! Details in
+``odoo.addons.queue_job.tests.common.JobsTester``.
+
+A very small example (more details in ``tests/common.py``):
+
+.. code-block:: python
+
+    # code
+    def my_job_method(self, name, count):
+        self.write({"name": " ".join([name] * count)})
+
+    def method_to_test(self):
+        count = self.env["other.model"].search_count([])
+        self.with_delay(priority=15).my_job_method("Hi!", count=count)
+        return count
+
+    # tests
+    from odoo.addons.queue_job.tests.common import trap_jobs
+
+    # the first test only checks the expected behavior of the method and the
+    # proper enqueuing of jobs
+    def test_method_to_test(self):
+        with trap_jobs() as trap:
+            result = self.env["model"].method_to_test()
+            expected_count = 12
+
+            trap.assert_jobs_count(1, only=self.env["model"].my_job_method)
+            trap.assert_enqueued_job(
+                self.env["model"].my_job_method,
+                args=("Hi!",),
+                kwargs=dict(count=expected_count),
+                properties=dict(priority=15)
+            )
+            self.assertEqual(result, expected_count)
+
+
+    # the second test validates the behavior of the job unitarily
+    def test_my_job_method(self):
+        record = self.env["model"].browse(1)
+        record.my_job_method("Hi!", count=12)
+        self.assertEqual(record.name, "Hi! Hi! Hi! Hi! Hi! Hi! Hi! Hi! Hi! Hi! Hi! Hi!")
+
+If you prefer, you can still test the whole thing in a single test, by calling
+``trap.perform_enqueued_jobs()`` in your test.
+
+.. code-block:: python
+
+    def test_method_to_test(self):
+        with trap_jobs() as trap:
+            result = self.env["model"].method_to_test()
+            expected_count = 12
+
+            trap.assert_jobs_count(1, only=self.env["model"].my_job_method)
+            trap.assert_enqueued_job(
+                self.env["model"].my_job_method,
+                args=("Hi!",),
+                kwargs=dict(count=expected_count),
+                properties=dict(priority=15)
+            )
+            self.assertEqual(result, expected_count)
+
+            trap.perform_enqueued_jobs()
+
+            record = self.env["model"].browse(1)
+            record.my_job_method("Hi!", count=12)
+            self.assertEqual(record.name, "Hi! Hi! Hi! Hi! Hi! Hi! Hi! Hi! Hi! Hi! Hi! Hi!")
+
+**Execute jobs synchronously when running Odoo**
+
+When you are developing (e.g. connector modules) you might want
+to bypass the queue job and run your code immediately.
+
+To do so you can set ``QUEUE_JOB__NO_DELAY=1`` in your environment.
+
+.. WARNING:: Do not do this in production
+
+**Execute jobs synchronously in tests**
+
+You should use ``trap_jobs``, really, but if for any reason you cannot use
+it, and still need to have job methods executed synchronously in your tests,
+you can do so by setting ``queue_job__no_delay=True`` in the context.
+
+Tip: you can do this at test case level like this:
+
+.. code-block:: python
+
+    @classmethod
+    def setUpClass(cls):
+        super().setUpClass()
+        cls.env = cls.env(context=dict(
+            cls.env.context,
+            queue_job__no_delay=True,  # no jobs thanks
+        ))
+
+Then all your tests execute the job methods synchronously without delaying
+any jobs.
+
+In tests you'll have to mute the logger like:
+
+.. code-block:: python
+
+    @mute_logger('odoo.addons.queue_job.models.base')
+
+.. NOTE:: in graphs of jobs, the ``queue_job__no_delay`` context key must be
+   in the env of at least one job of the graph for the whole graph to be
+   executed synchronously
+
+
+Tips and tricks
+---------------
+
+* **Idempotency** (https://www.restapitutorial.com/lessons/idempotency.html):
+  jobs should be idempotent so that they can be retried several times without
+  impact on the data.
+* **The job should test its relevance at the very beginning**: the moment the
+  job will be executed is unknown by design. So the first task of a job should
+  be to check whether the related work is still relevant at the moment of
+  execution.
+
+Patterns
+--------
+
+Over time, two main patterns have emerged:
+
+1. For data exposed to users, a model should store the data and that model
+   should be the creator of the job. The job is kept hidden from the users.
+2. For technical data that is not exposed to the users, it is generally fine
+   to create jobs directly, with the data passed as arguments to the job,
+   without an intermediary model.
+
+Known issues / Roadmap
+======================
+
+* After creating a new database or installing ``queue_job`` on an
+  existing database, Odoo must be restarted for the runner to detect it.
+
+* When Odoo shuts down normally, it waits for running jobs to finish.
+  However, when the Odoo server crashes or is otherwise force-stopped,
+  running jobs are interrupted while the runner has no chance to know
+  they have been aborted. In such situations, jobs may remain in
+  ``started`` or ``enqueued`` state after the Odoo server is halted.
+  Since the runner has no way to know if they are actually running or
+  not, and does not know for sure if it is safe to restart the jobs,
+  it does not attempt to restart them automatically. Such stale jobs
+  therefore fill the running queue and prevent other jobs from starting.
+  You must therefore requeue them manually, either from the Jobs view,
+  or by running the following SQL statement *before starting Odoo*:
+
+.. code-block:: sql
+
+    update queue_job set state='pending' where state in ('started', 'enqueued')
+
+Changelog
+=========
+
+.. [ The change log. The goal of this file is to help readers
+   understand changes between versions. The primary audience is
+   end users and integrators. Purely technical changes such as
+   code refactoring must not be mentioned here.
+
+   This file may contain ONE level of section titles, underlined
+   with the ~ (tilde) character. Other section markers are
+   forbidden and will likely break the structure of the README.rst
+   or other documents where this fragment is included.
+   ]
+
+Next
+~~~~
+
+* [ADD] Run the jobrunner as a worker process instead of a thread in the
+  main process (when running with ``--workers > 0``)
+* [REF] ``@job`` and ``@related_action`` are deprecated; any method can be
+  delayed and configured using ``queue.job.function`` records
+* [MIGRATION] from 13.0 branched at rev. e24ff4b
+
+Bug Tracker
+===========
+
+Bugs are tracked on `GitHub Issues <https://github.com/OCA/queue/issues>`_.
+In case of trouble, please check there if your issue has already been
+reported. If you spotted it first, help us to smash it by providing a
+detailed and welcomed feedback.
+
+Do not contact contributors directly about support or help with technical
+issues.
+
+Credits
+=======
+
+Authors
+~~~~~~~
+
+* Camptocamp
+* ACSONE SA/NV
+
+Contributors
+~~~~~~~~~~~~
+
+* Guewen Baconnier
+* Stéphane Bidoul
+* Matthieu Dietrich
+* Jos De Graeve
+* David Lefever
+* Laurent Mignon
+* Laetitia Gangloff
+* Cédric Pigeon
+* Tatiana Deribina
+* Souheil Bejaoui
+* Eric Antones
+* Simone Orsi
+
+Maintainers
+~~~~~~~~~~~
+
+This module is maintained by the OCA.
+
+.. image:: https://odoo-community.org/logo.png
+   :alt: Odoo Community Association
+   :target: https://odoo-community.org
+
+OCA, or the Odoo Community Association, is a nonprofit organization whose
+mission is to support the collaborative development of Odoo features and
+promote its widespread use.
+
+.. |maintainer-guewen| image:: https://github.com/guewen.png?size=40px
+   :target: https://github.com/guewen
+   :alt: guewen
+
+Current `maintainer <https://github.com/guewen>`__:
+
+|maintainer-guewen|
+
+This module is part of the `OCA/queue <https://github.com/OCA/queue/tree/16.0/queue_job>`_ project on GitHub.
+
+You are welcome to contribute. To learn how, please visit
+https://odoo-community.org/page/Contribute.