Puppet Tip 113 - Managing Puppet Enterprise - Part 1 - Services
Puppet Enterprise (PE) is Puppet’s commercial offering. It’s based on the Open Source core and provides various enterprise features, like the powerful and utterly useful Console to manage and visualize the whole infrastructure from a Web interface.
Puppet Enterprise can be configured as an All In One installation (AIO for short), where all the PE components are installed on a single node, or with the components distributed across different nodes.
An AIO PE server runs the following services:
- pe-puppetserver. The core Puppet Server service responsible for communication with clients and compilation of their catalogs
- pe-puppetdb. The PuppetDB service, responsible for handling all the data produced by Puppet
- pe-console-services. The PE web interface we can access with a browser
- pe-nginx. An NGINX reverse proxy for the PE console
- pe-postgresql. A PostgreSQL instance that stores the data generated, used or handled by PuppetDB and the PE Console
- pe-orchestration-services. Responsible for handling Puppet Jobs (such as Tasks, Plans and remote Puppet runs)
All the Puppet clients (and also the Masters, which are clients of themselves) run the following services:
- puppet. The Puppet agent service: it requests the catalog from the server and applies it locally. Runs as root.
- pxp-agent. It allows the remote execution of Puppet runs, tasks and plans from the PE server. A quick check of both services is shown after this list.
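On a managed node, a minimal check of both agent-side services looks like this (assuming a systemd-based system; puppet agent -t simply triggers an on-demand run to verify communication with the server):
# Check the status of the agent-side services
systemctl status puppet
systemctl status pxp-agent
# Trigger an on-demand Puppet run to verify communication with the server
puppet agent -t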
Let’s review these services in detail.
- Puppet Server service
- PuppetDB service
- Console Services service
- NGINX service
- Orchestration Services service
- PostgreSQL service
- Global checks
Puppet Server service
It’s the main Puppet server service. It takes care of:
- Receiving catalogs requests from clients
- Handling Puppet CA and clients’ certificates (a certificate listing example is shown after this list)
- Compiling catalogs for the clients and sending them back
- Receiving Puppet runs reports from clients and submitting them to PuppetDB
- Receiving facts from clients and submitting them to PuppetDB (besides using them when compiling catalogs)
- Handling Puppet code and data deployments via the Code Manager component
- Syncing code between HA Master and Replica servers
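As an example of the CA role, the certificates known to the Puppet CA can be listed on the Master. This is just a sketch: the exact command depends on the Puppet version in use, since puppet cert has been deprecated in favor of puppetserver ca in more recent releases:
# List all the certificates known to the Puppet CA (newer versions)
puppetserver ca list --all
# Same, on older versions where the puppet cert subcommand is still available
puppet cert list --all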
It’s a Clojure application running as the pe-puppet user inside a JVM.
Its configuration files are under the directory /etc/puppetlabs/puppetserver.
Its log files are under the directory /var/log/puppetlabs/puppetserver.
It listens on ports 8140 (used for the communication with clients) and 8170 (used by the Code Manager component).
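A quick way to verify that the service is answering on port 8140 is to query its status API. This is just a sketch: the simple status endpoint is available in recent Puppet Server versions and may need adjustments depending on your hostname and authorization rules:
# Query the Puppet Server status API; it prints "running" when the service is healthy
curl --insecure https://$(hostname -f):8140/status/v1/simple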
The process looks like this:
pe-pupp+ 29018 1 5 2018 ? 2-02:00:52 /opt/puppetlabs/server/bin/java -Xms2048m -Xmx2048m -Djava.io.tmpdir=/opt/puppetlabs/server/apps/puppetserver/tmp -XX:ReservedCodeCacheSize=512m -XX:+PrintGCDetails -XX:+PrintGCDateStamps -Xloggc:/var/log/puppetlabs/puppetserver/puppetserver_gc.log -XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=16 -XX:GCLogFileSize=64m -Djava.security.egd=/dev/urandom -XX:OnOutOfMemoryError=kill -9 %p -cp /opt/puppetlabs/server/apps/puppetserver/puppet-server-release.jar:/opt/puppetlabs/server/apps/puppetserver/jruby-9k.jar:/opt/puppetlabs/server/data/puppetserver/jars/* clojure.main -m puppetlabs.trapperkeeper.main --config /etc/puppetlabs/puppetserver/conf.d --bootstrap-config /etc/puppetlabs/puppetserver/bootstrap.cfg --restart-file /opt/puppetlabs/server/data/puppetserver/restartcounter
To check its status:
systemctl status pe-puppetserver
To stop and start the service (since it runs inside a JVM, startup takes some seconds):
systemctl stop pe-puppetserver
systemctl start pe-puppetserver
If PE is configured in High Availability mode, this service runs on both the Primary Master and the Primary Master Replica servers. In case of failure on the Primary Master:
- The compile master component still works on the Replica (existing clients can still request, fetch, apply and report back their catalogs)
- The CA component is not able to provision new client certificates (new nodes cannot be added)
- Code Manager can’t deploy new code (Puppet code and data can’t be updated)
To re-establish full functionality, the Puppet server service has to run correctly on the Primary Master or the Primary Master Replica has to be promoted to Primary Master (and the existing Primary Master server must be decommissioned).
In case of failure of the Puppet server service on the Primary Master Replica, all the above activities still work, but when Code Manager deploys new code, it is not synced to the Replica (it will be synced as soon as the Puppet server service is re-established there).
PuppetDB service
It’s the component that takes care of storing (using a PostgreSQL backend) all the data generated by a Puppet run:
- the list of facts of each node
- the last catalog compiled for a node
- all the reports of all the Puppet runs of all the nodes (old reports are regularly purged)
- Puppet exported resources
It’s a Clojure application running as the pe-puppetdb user inside a JVM.
Its configuration files are under the directory /etc/puppetlabs/puppetdb.
Its log files are under the directory /var/log/puppetlabs/puppetdb.
It listens on ports:
- 127.0.0.1:8080 (for http traffic)
- 0.0.0.0:8081 (for https traffic)
It typically communicates only with the Puppet Server (and PostgreSQL for data storage).
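From the PE server itself, a couple of quick checks over the local http port can confirm that PuppetDB is responding (a sketch using the standard PuppetDB status and query API endpoints):
# Check the PuppetDB status endpoint on the local http port
curl http://localhost:8080/status/v1/services/puppetdb-status
# List all the nodes known to PuppetDB (JSON output)
curl http://localhost:8080/pdb/query/v4/nodes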
The process looks like this:
pe-pupp+ 29254 1 0 2018 ? 05:00:11 /opt/puppetlabs/server/bin/java -Xmx512m -Xms512m -XX:+PrintGCDetails -XX:+PrintGCDateStamps -Xloggc:/var/log/puppetlabs/puppetdb/puppetdb_gc.log -XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=16 -XX:GCLogFileSize=64m -Djava.security.egd=/dev/urandom -XX:OnOutOfMemoryError=kill -9 %p -cp /opt/puppetlabs/server/apps/puppetdb/puppetdb.jar clojure.main -m puppetlabs.puppetdb.main --config /etc/puppetlabs/puppetdb/conf.d --bootstrap-config /etc/puppetlabs/puppetdb/bootstrap.cfg --restart-file /opt/puppetlabs/server/data/puppetdb/restartcounter
To check its status:
systemctl status pe-puppetdb
To stop and start the service (since it runs inside a JVM, startup takes some seconds):
systemctl stop pe-puppetdb
systemctl start pe-puppetdb
In HA mode, this service runs on both the Primary Master and the Primary Master Replica servers; in case of failure on the Primary Master, the service is guaranteed by the Primary Master Replica.
Console Services service
It’s the web application that presents the Web interface for Puppet Enterprise.
It’s a Clojure application running as the pe-console-services user inside a JVM.
Its configuration files are under the directory /etc/puppetlabs/console-services.
Its log files are under the directory /var/log/puppetlabs/console-services.
It listens on ports:
- 127.0.0.1:4430 (The web application listens here over http, proxied by the local Nginx server which terminates https connections on port 443)
- 0.0.0.0:4431 (Web application over https)
- 127.0.0.1:4432 (Used for local status checks)
- 0.0.0.0:4433 (Node classifier / console services API endpoint)
Puppet Server communicates with Console services over port 4433.
The Nginx proxy communicates over port 4430 and serves clients (users’ browsers) over port 443.
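To verify which of these ports are actually bound on the Master, a quick check with ss (or netstat) is enough:
# Show the console-services listeners on their expected ports
ss -tlnp | grep -E '4430|4431|4432|4433'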
The process looks like this:
pe-cons+ 19234 1 0 2018 ? 02:59:15 /opt/puppetlabs/server/bin/java -Xmx256m -Xms256m -XX:+PrintGCDetails -XX:+PrintGCDateStamps -Xloggc:/var/log/puppetlabs/console-services/console-services_gc.log -XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=16 -XX:GCLogFileSize=64m -Djava.security.egd=/dev/urandom -XX:OnOutOfMemoryError=kill -9 %p -cp /opt/puppetlabs/server/apps/console-services/console-services-release.jar clojure.main -m puppetlabs.trapperkeeper.main --config /etc/puppetlabs/console-services/conf.d --bootstrap-config /etc/puppetlabs/console-services/bootstrap.cfg --restart-file /opt/puppetlabs/server/data/console-services/restartcounter
To check its status:
systemctl status pe-console-services
To stop and start the service (since it runs inside a JVM, startup takes some seconds):
systemctl stop pe-console-services
systemctl start pe-console-services
This service runs only on the Primary Master. If it fails there, the web console is no longer available, but classification via the node classifier API still works (catalogs are compiled normally and the classes defined via the Web Console for a node are still included).
To be able to use the Web console on the Replica, the Replica server has to be promoted.
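Promotion is done from the Replica itself with the puppet infrastructure tooling. This is just a sketch: check the PE documentation for your version before running it, since promotion is a one-way operation:
# Run on the Primary Master Replica to promote it to Primary Master
puppet infrastructure promote replica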
NGINX service
It’s a normal Nginx instance that acts as a reverse proxy to the Web console.
Its configuration files are under the directory /etc/puppetlabs/nginx.
Its log files are under the directory /var/log/puppetlabs/nginx.
It starts as root, then forks the worker processes that communicate with clients as the pe-webserver user.
It listens on ports:
- 0.0.0.0:80 (http port, connections here are redirected to https port)
- 0.0.0.0:443 (https port, used by all the clients)
The Nginx service communicates over port 4430 with the Console and serves clients (Users’ browsers) on port 443.
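A quick external check of the console through Nginx can be done with curl (puppet.example.com is a placeholder for your PE console hostname; --insecure is used because the certificate is signed by the PE internal CA):
# Port 80 should answer with a redirect to https
curl --head http://puppet.example.com/
# Port 443 should serve the console over https
curl --insecure --head https://puppet.example.com/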
The process looks like this:
root 18533 1 0 2018 ? 00:00:00 nginx: master process /opt/puppetlabs/server/bin/nginx -c /etc/puppetlabs/nginx/nginx.conf
pe-webs+ 18534 18533 0 2018 ? 00:00:59 nginx: worker process
To check its status:
systemctl status pe-nginx
To stop and start the service:
systemctl stop pe-nginx
systemctl start pe-nginx
This service runs only on the Primary Master; if it fails there, the web console is no longer available to users.
Orchestration Services service
It manages the orchestration services (the ability to trigger Puppet runs, tasks and plans from the web console or a CLI command like puppet job).
It’s a Clojure application running as the pe-orchestration-services user inside a JVM.
Its configuration files are under the directory /etc/puppetlabs/orchestration-services.
Its log files are under the directory /var/log/puppetlabs/orchestration-services.
It listens on ports:
- 0.0.0.0:8142 (Used to accept inbound traffic and responses from clients’ pxp-agents)
- 0.0.0.0:8143 (Used by PCP brokers and by the orchestrator CLI client)
All managed servers, via their local pxp-agent service, communicate using port 8142.
When puppet job commands are used from the CLI (from management workstations or from the PE server itself), they communicate over port 8143.
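As a reference, a couple of orchestrator CLI invocations that go through port 8143 look like the following (node names are placeholders):
# Trigger a Puppet run on specific nodes via the orchestrator
puppet job run --nodes node01.example.com,node02.example.com
# Run a built-in task on a node via the orchestrator
puppet task run package action=status name=openssl --nodes node01.example.com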
The process looks like this:
pe-orch+ 18694 1 0 2018 ? 03:17:10 /opt/puppetlabs/server/bin/java -Xmx704m -Xms704m -XX:+PrintGCDetails -XX:+PrintGCDateStamps -Xloggc:/var/log/puppetlabs/orchestration-services/orchestration-services_gc.log -XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=16 -XX:GCLogFileSize=64m -Djava.security.egd=/dev/urandom -XX:OnOutOfMemoryError=kill -9 %p -cp /opt/puppetlabs/server/apps/orchestration-services/orchestration-services-release.jar clojure.main -m puppetlabs.trapperkeeper.main --config /etc/puppetlabs/orchestration-services/conf.d --bootstrap-config /etc/puppetlabs/orchestration-services/bootstrap.cfg --restart-file /opt/puppetlabs/server/data/orchestration-services/restartcounter
To check its status:
systemctl status pe-orchestration-services
To stop and start the service (since it runs inside a JVM, startup takes some seconds):
systemctl stop pe-orchestration-services
systemctl start pe-orchestration-services
This service runs only on the Primary Master; if it fails there, the orchestration services (the ability to trigger remote Puppet runs or run tasks or plans) don’t work.
To be able to use the Orchestration services on the Replica, the Replica server has to be promoted.
PostgreSQL service
It’s a normal PostgreSQL instance that stores all the data handled by PuppetDB, the configurations done on the Web Console (such as node classification, RBAC settings and auditing of user activities on the web interface) and the jobs operated by the Orchestration services.
Databases and configurations are under /opt/puppetlabs/server/data/postgresql/11/data.
Its log files are under the directory /var/log/puppetlabs/postgresql.
It runs as pe-postgres user.
It listens on port:
- 0.0.0.0:5432 (Default PostgreSQL port)
The PostgreSQL service communicates over port 5432 with PuppetDB, the Console services and the Orchestration services.
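To connect directly to the local instance, for example to list its databases, psql can be run as the pe-postgres user (the psql path below is an assumption and may differ between PE versions):
# List the databases of the PE PostgreSQL instance
sudo -u pe-postgres /opt/puppetlabs/server/apps/postgresql/bin/psql -c '\l'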
The process looks like this:
pe-post+ 30002 1 0 2018 ? 00:17:05 /opt/puppetlabs/server/apps/postgresql/bin/postgres -D /opt/puppetlabs/server/data/postgresql/9.6/data -c log_directory=/var/log/puppetlabs/postgresql
pe-post+ 30003 30002 0 2018 ? 00:01:36 postgres: logger process
pe-post+ 30005 30002 0 2018 ? 00:03:00 postgres: checkpointer process
pe-post+ 30006 30002 0 2018 ? 00:00:59 postgres: writer process
pe-post+ 30007 30002 0 2018 ? 00:03:34 postgres: wal writer process
pe-post+ 30008 30002 0 2018 ? 00:05:02 postgres: autovacuum launcher process
pe-post+ 30009 30002 0 2018 ? 00:10:11 postgres: stats collector process
pe-post+ 30010 30002 0 2018 ? 00:00:01 postgres: bgworker: pglogical supervisor
pe-post+ 30015 30002 0 2018 ? 00:00:05 postgres: bgworker: pglogical manager 16397
[...]
pe-post+ 32574 30002 0 09:42 ? 00:00:00 postgres: pe-puppetdb pe-puppetdb 10.29.130.135(53026) idle
pe-post+ 7622 30002 0 10:24 ? 00:00:00 postgres: pe-rbac-write pe-rbac 10.29.130.135(55262) idle
pe-post+ 7697 30002 0 10:24 ? 00:00:00 postgres: pe-classifier-write pe-classifier 10.29.130.135(55274) idle
pe-post+ 8174 30002 0 2018 ? 00:01:49 postgres: wal sender process pe-ha-replication 10.29.130.136(41438) idle
[...] Several similar processes for the various databases
To check its status:
systemctl status pe-postgresql
To stop and start the service:
systemctl stop pe-postgresql
systemctl start pe-postgresql
This service runs on both the Primary Master and the Replica; if one fails, the other can keep working.
Global checks
To check the overall health of the PE infrastructure, there’s the very handy puppet infrastructure status command. In an HA setup, its output looks as follows:
Notice: Contacting services for status information...
Code Manager: Running on Primary Master, https://puppet01.example.com:8170/
File Sync Storage Service: Running on Primary Master, https://puppet01.example.com:8140/
File Sync Client Service: Running on Primary Master, https://puppet01.example.com:8140/
Puppet Server: Running on Primary Master, https://puppet01.example.com:8140/
Classifier: Running on Primary Master, https://puppet01.example.com:4433/classifier-api
RBAC: Running on Primary Master, https://puppet01.example.com:4433/rbac-api
Activity Service: Running on Primary Master, https://puppet01.example.com:4433/activity-api
Orchestrator: Running on Primary Master, https://puppet01.example.com:8143/orchestrator
PCP Broker: Running on Primary Master, wss://puppet01.example.com:8142/pcp
PCP Broker v2: Running on Primary Master, wss://puppet01.example.com:8142/pcp2
PuppetDB: Running on Primary Master, https://puppet01.example.com:8081/pdb
Info: Last sync successfully completed 65 seconds ago (at 2019-01-09T11:06:54.963Z)
File Sync Client Service: Running on Primary Master Replica, https://puppet02.example.com:8140/
Puppet Server: Running on Primary Master Replica, https://puppet02.example.com:8140/
Classifier: Running on Primary Master Replica, https://puppet02.example.com:4433/classifier-api
RBAC: Running on Primary Master Replica, https://puppet02.example.com:4433/rbac-api
Activity Service: Running on Primary Master Replica, https://puppet02.example.com:4433/activity-api
PuppetDB: Running on Primary Master Replica, https://puppet02.example.com:8081/pdb
Info: Last sync successfully completed 79 seconds ago (at 2019-01-09T11:06:41.380Z)
2019-01-09 11:08:00 +0000
17 of 17 services are fully operational.
Some of what has been described here for Puppet Enterprise also applies to the open source Puppet Server and PuppetDB services.
In the next post, we are going to see where to find all the PE related logs and how to use them.
Stay awake.
Alessandro Franceschi