We are in the process of setting up a new production database and would greatly appreciate some guidance on establishing a solid baseline for our postgresql.conf file.
Our Environment
PostgreSQL Version: 17.6
Server Hardware: 8 GB RAM, 8 CPU cores, SSD storage
Operating System: Linux (Ubuntu 22.04)
Application Stack: A suite of Spring Boot microservices using JDBC.
Workload Profile: The application is a B2B platform. This means we expect a moderate load with a relatively low number of concurrent users (e.g., 50-200 active connections at peak). The queries will be primarily transactional (OLTP), but some can be moderately complex.
Ultimately a PostgreSQL cluster should be tuned for your specific application, but the following general guidelines work well for most low-to-moderate workloads:
shared_buffers - 25% of the RAM available to the PostgreSQL cluster. On an 8GB machine dedicated to the database that is 2GB; if the 8GB is shared between the database and the application (say 4GB each), then 1GB, and so on.
work_mem - 25% of the RAM available to the PostgreSQL cluster, divided by max_connections; in this case 2GB / 200 = ~10MB. Keep in mind that work_mem is a per-sort/per-hash limit, not per connection, so a single complex query can consume several multiples of it.
effective_cache_size - 75% of total RAM. This is only a hint to the planner about how much data the OS can cache; it does not allocate memory.
random_page_cost = 1.01 - this is generally one of the first parameters to change if you are using SSDs; the default of 4 is geared toward spinning (HDD) disks.
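The sizing arithmetic above can be sketched as a small helper. This is just an illustration of the rules of thumb (the function name and the 8GB/200-connection figures are examples, not part of any PostgreSQL API):

```python
# Baseline sizing per the rules above:
#   shared_buffers        = 25% of the RAM dedicated to PostgreSQL
#   work_mem              = that same 25% divided by max_connections
#   effective_cache_size  = 75% of total RAM
def baseline_settings(db_ram_mb: int, total_ram_mb: int, max_connections: int) -> dict:
    shared_buffers_mb = db_ram_mb // 4
    return {
        "shared_buffers_mb": shared_buffers_mb,
        "work_mem_mb": shared_buffers_mb // max_connections,  # rounded down
        "effective_cache_size_mb": total_ram_mb * 3 // 4,
        "random_page_cost": 1.01,  # SSD storage
    }

# 8 GB machine dedicated to the database, 200 connections at peak:
print(baseline_settings(8192, 8192, 200))
# {'shared_buffers_mb': 2048, 'work_mem_mb': 10, 'effective_cache_size_mb': 6144, 'random_page_cost': 1.01}
```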
It is then recommended to review the cluster after the first week or the first month and fine-tune the parameters as needed, based on actual usage.
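Put together, a baseline postgresql.conf for an 8 GB machine dedicated to the database might look like the fragment below. These are starting values derived from the rules of thumb above, not measured recommendations:

```ini
# Baseline for 8 GB RAM dedicated to PostgreSQL, SSD storage
max_connections = 200
shared_buffers = 2GB            # 25% of RAM
work_mem = 10MB                 # ~25% of RAM / max_connections (2GB / 200)
effective_cache_size = 6GB      # 75% of RAM (planner hint, not an allocation)
random_page_cost = 1.01         # SSD; default of 4 assumes spinning disks
```

Changing shared_buffers or max_connections requires a server restart; the other settings can be applied with a reload.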