Sets the maximum number of workers that can be started by a single Gather or Gather Merge node. Parallel workers are taken from the pool of processes established by max_worker_processes, limited by max_parallel_workers. Note that the requested number of workers may not actually be available at run time. If this occurs, the plan will run with fewer workers than expected, which may be inefficient. The default value is 2. Setting this value to 0 disables parallel query execution.
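As a minimal sketch of how the setting is used in practice (the table name `big_table` is hypothetical, and actual plans depend on table size and other settings):

```sql
-- Allow up to 4 workers per Gather node, for this session only
SET max_parallel_workers_per_gather = 4;

-- If the planner chooses a parallel plan, the output should include a
-- Gather node with "Workers Planned: 4"; fewer workers may actually
-- launch at run time if the max_parallel_workers pool is exhausted.
EXPLAIN SELECT count(*) FROM big_table;

-- Setting the value to 0 disables parallel query for this session
SET max_parallel_workers_per_gather = 0;
```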
Note that parallel queries may consume very substantially more resources than non-parallel queries, because each worker process is a completely separate process which has roughly the same impact on the system as an additional user session. This should be taken into account when choosing a value for this setting, as well as when configuring other settings that control resource utilization, such as work_mem. Resource limits such as work_mem are applied individually to each worker, which means the total utilization may be much higher across all processes than it would normally be for any single process. For example, a parallel query using 4 workers may use up to 5 times as much CPU time, memory, I/O bandwidth, and so forth as a query which uses no workers at all.
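A rough worked illustration of the multiplier effect described above, as a postgresql.conf fragment (the figures are illustrative examples, not recommendations from the source):

```
# Illustrative values only
work_mem = 64MB                      # limit per sort/hash operation, per process
max_parallel_workers_per_gather = 4  # one leader plus up to 4 workers

# Because work_mem applies to each process individually, a single
# parallelized sort can use up to (4 workers + 1 leader) x 64MB = 320MB.
```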
For more information on parallel query, see the Parallel Query chapter of the documentation.