Wednesday 14 January 2015

Installing Squid Proxy in Clustered CentOS 7 using Corosync, Pacemaker and PCS

Here I am describing how to set up a Squid proxy server in a clustered environment using Pacemaker, Corosync and pcs. You can use the same approach for other clustered services, such as httpd, as well.

In this write-up I am using a single NIC on each server:

node1 IP address: 172.16.1.11/24
node2 IP address: 172.16.1.12/24
virtual_ip: 172.16.1.10/24

1) Install CentOS 7 minimal using the default settings.
Configure the NIC by editing its ifcfg file (/etc/sysconfig/network-scripts/ifcfg-ens32 in my case) with your favorite editor.
Here is my configuration:
Node1
TYPE=Ethernet
BOOTPROTO=none
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_FAILURE_FATAL=no
NAME=ens32
UUID=ba623de6-cad5-4fc3-a8cc-bf92ca099b52
ONBOOT=yes
HWADDR=00:50:56:9E:5E:65
IPADDR0=172.16.1.11
PREFIX0=24
GATEWAY0=172.16.1.254
IPV6_PEERDNS=yes
IPV6_PEERROUTES=yes

Node2

TYPE=Ethernet
BOOTPROTO=none
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_FAILURE_FATAL=no
NAME=ens32
UUID=d01dd909-cc00-4c0c-9712-5c79b1a0e0d6
ONBOOT=yes
HWADDR=00:50:56:9E:02:1C
IPADDR0=172.16.1.12
PREFIX0=24
GATEWAY0=172.16.1.254
IPV6_PEERDNS=yes
IPV6_PEERROUTES=yes
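
After saving the ifcfg file on each node, restart networking (or simply reboot) so the new address takes effect, and verify it with ip addr; these are standard CentOS 7 commands:

systemctl restart network    //reload the network configuration
ip addr show ens32           //verify the static address is assigned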

2) Update the system and install corosync, pacemaker and pcs:
yum install -y corosync pcs pacemaker
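
The step above also mentions updating the system; if you want to do that, a plain yum update before (or after) installing the packages is enough:

yum update -y    //update all installed packages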

3) Change the SELinux mode to permissive:
nano /etc/sysconfig/selinux
and set the mode to permissive as below:
SELINUX=permissive
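
The setting in /etc/sysconfig/selinux only takes effect after a reboot; to switch to permissive mode immediately you can also run:

setenforce 0    //switch SELinux to permissive for the running system
getenforce      //should now report Permissive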

4) Add firewall rules to accept the ports used by pacemaker, corosync and the Squid proxy.
Here are my firewall rules:

firewall-cmd --permanent --zone=internal --change-interface=ens32    //move the NIC (ens32 here) from the public zone to the internal zone

firewall-cmd --zone=internal --add-service=ssh --permanent
firewall-cmd --zone=internal --add-service=http --permanent
firewall-cmd --zone=internal --add-service=https --permanent
firewall-cmd --zone=internal --add-port=3126/tcp --permanent
firewall-cmd --zone=internal --add-port=3127/tcp --permanent
firewall-cmd --zone=internal --add-port=3128/tcp --permanent
firewall-cmd --zone=internal --add-port=5404/udp --permanent
firewall-cmd --zone=internal --add-port=5405/udp --permanent
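
pcs also needs to reach pcsd on TCP port 2224 between the nodes, and the permanent rules above only become active after a reload, so you should add the following as well:

firewall-cmd --zone=internal --add-port=2224/tcp --permanent    //pcsd
firewall-cmd --reload    //activate the permanent rules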

Note: if you experience any issues with the firewall, you can simply disable it using:

systemctl disable firewalld    //remove the firewall from startup
systemctl stop firewalld       //stop the firewall service

5) After that, install the net-tools package. This is very important for Squid HA: a default CentOS 7 minimal install does not include the netstat command, but the ocf:heartbeat:Squid resource agent uses netstat to check the Squid service on both nodes.
yum install net-tools -y    //network tools (otherwise the Squid HA resource agent will not start)

6) Configure the node names in /etc/hosts on both nodes; pacemaker and corosync will address the nodes by name only.
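
With the addresses used in this lab, /etc/hosts on both nodes would contain entries like:

172.16.1.11    node1
172.16.1.12    node2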

7) Then set the password for the hacluster user. This account is created during the pacemaker/pcs installation.
passwd hacluster    //the hacluster password must be the same on both nodes

Then start the pcsd service and enable it at boot:
systemctl start pcsd     //start the pcsd service
systemctl enable pcsd    //add it as a startup service

Up to here you must run all commands on both nodes.
From this point on, run the commands on a single node only.
Starting the cluster configuration on one node:

pcs cluster auth node1 node2    //execute this on one node only to authenticate the nodes with the hacluster user
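
The command prompts for the hacluster user and the password you set earlier; on success the output looks something like this (it may vary slightly between pcs versions):

Username: hacluster
Password:
node1: Authorized
node2: Authorized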

Set up the cluster with the name squid_clu:
pcs cluster setup --name squid_clu node1 node2    //set up the cluster with the cluster name squid_clu

Then start the cluster services:

pcs cluster start --all     //start the cluster on all nodes
pcs cluster enable --all    //add it as a startup service

The commands below are useful for monitoring and troubleshooting:

pcs status cluster
pcs status nodes
corosync-cmapctl | grep members
pcs status corosync

8) Disabling quorum and STONITH

In this lab I am using only two nodes, which is why I am disabling the quorum policy and STONITH.
You can read more about these settings in the Pacemaker documentation.

pcs property set stonith-enabled=false      //disable STONITH
pcs property set no-quorum-policy=ignore    //ignore loss of quorum
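
You can verify that both properties were applied with:

pcs property list    //shows the cluster properties that differ from the defaults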

9) Create the virtual IP resource with the address 172.16.1.10:

pcs resource create virtual_ip ocf:heartbeat:IPaddr2 ip=172.16.1.10 cidr_netmask=24 op monitor interval=30s meta target-role="Started" 

To check the virtual IP status, use the command below:
pcs status | grep virtual_ip
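
You can also confirm that the address is really bound on the active node:

ip addr show ens32 | grep 172.16.1.10    //the virtual IP should appear on the node running the resource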

Installing Squid Proxy

10) Install the squid package on both nodes:
yum install -y squid    //install the Squid proxy

Then start the Squid service:
systemctl start squid        //start the squid service  (you must run this on both nodes)
systemctl enable squid    //start the squid service after every boot (you must run this on both nodes)
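
If you change squid.conf, it is worth validating the syntax on both nodes before the cluster takes over the service:

squid -k parse    //parse squid.conf and report any configuration errors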

11) Add the Squid service as a cluster resource using the ocf:heartbeat:Squid agent. I am also naming the resource squid in the command below; you can change the name if you like.

pcs resource create squid ocf:heartbeat:Squid squid_exe="/usr/sbin/squid" squid_conf="/etc/squid/squid.conf" squid_pidfile="/var/run/squid.pid" squid_port="3128" squid_stop_timeout="30" op start interval="0" timeout="60s" op stop interval="0" timeout="120s" op monitor interval="20s" timeout="30s" meta target-role="Started"
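
To confirm that the agent accepted these parameters you can display the resource definition:

pcs resource show squid    //show the configured parameters and operations of the squid resource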

12) Bind/group the virtual IP and squid together; otherwise the virtual IP may start on node1 while the Squid service starts on the other node, or vice versa.

pcs resource group add ProxyAndIP virtual_ip squid

pcs resource meta ProxyAndIP target-role="Started"

13) Configure the start order so that the virtual IP starts first and Squid starts after it:

pcs constraint order virtual_ip then squid
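
You can list the constraints to confirm the ordering was created:

pcs constraint    //shows ordering, colocation and location constraints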

Then restart all cluster services and monitor the result:

pcs cluster stop --all && pcs cluster start --all
crm_mon    //monitor the cluster

If everything works fine you will see output like the following:

Last updated: Wed Jan 14 11:19:30 2015
Last change: Mon Jan 12 16:27:53 2015 via cibadmin on pcltsquvt01
Stack: corosync
Current DC: pcltsquvt02 (2) - partition with quorum
Version: 1.1.10-32.el7_0.1-368c726
2 Nodes configured
2 Resources configured


Online: [ node1 node2 ]

 Resource Group: ProxyAndIP
     virtual_ip (ocf::heartbeat:IPaddr2):       Started node1
     squid      (ocf::heartbeat:Squid): Started node1
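
At this point you can also test the failover by putting the active node in standby and watching crm_mon; the whole group should move to the other node (these are standard pcs commands):

pcs cluster standby node1      //move the resources off node1
crm_mon                        //watch the group start on node2
pcs cluster unstandby node1    //bring node1 back as an active cluster member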

If you run systemctl status squid on both nodes you will see that the systemd unit shows as failed. This is expected, because the ocf:heartbeat:Squid resource agent starts and monitors Squid itself; in the journal you can still see that the Squid parent process started and launched one kid process, as below:

squid.service - Squid caching proxy
   Loaded: loaded (/usr/lib/systemd/system/squid.service; enabled)
   Active: failed (Result: signal) since Mon 2015-01-12 16:30:02 GMT; 1 day 18h ago
  Process: 2372 ExecStop=/usr/sbin/squid -k shutdown -f $SQUID_CONF (code=exited, status=0/SUCCESS)
  Process: 883 ExecStart=/usr/sbin/squid $SQUID_OPTS -f $SQUID_CONF (code=exited, status=0/SUCCESS)
  Process: 869 ExecStartPre=/usr/libexec/squid/cache_swap.sh (code=exited, status=0/SUCCESS)
 Main PID: 914 (code=killed, signal=KILL)
   CGroup: /system.slice/squid.service

Jan 12 16:29:27 pcltsquvt01 squid[914]: Squid Parent: will start 1 kids
Jan 12 16:29:27 pcltsquvt01 systemd[1]: Started Squid caching proxy.
Jan 12 16:29:27 pcltsquvt01 squid[914]: Squid Parent: (squid-1) process 919 started
Jan 12 16:30:02 pcltsquvt01 systemd[1]: squid.service: main process exited, code=killed, status=9/KILL
Jan 12 16:30:02 pcltsquvt01 systemd[1]: Unit squid.service entered failed state.

Troubleshooting:
Check the firewall configuration.
Make sure that SELinux is properly configured.
Make sure that net-tools is installed.
Make sure Squid is installed on both nodes and that squid.conf is identical.
Make sure Squid is listening on the right port.
Make sure Squid is writing its PID file to the right location.
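
A few standard pacemaker/pcs commands that help with the checks above:

pcs status --full                //detailed cluster and resource status, including failed actions
pcs resource debug-start squid   //run the Squid resource agent by hand and print why it fails
journalctl -u pacemaker          //pacemaker logs on the local node
netstat -lntp | grep 3128        //confirm Squid is listening on the configured port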


8 comments:

  1. hi,
    how about configuring resource on different ips?

    virtual ip > node1 > squid1
    > node2 > squid2

    how do we pcs resource create squid ocf:heartbeat:Squid ?

    Replies
    1. Hi Rizman,

      Sorry for the late reply, can you please see point number 11 above.

  2. If I add an additional Virtual IP (virtual_ip2) to the cluster with below commands, can it be used in squid proxy? How to add it in squid.conf?

    # pcs resource create virtual_ip2 ocf:heartbeat:IPaddr2 ip=xx.xx.xx.xx cidr_netmask=24 op monitor interval=1s meta target-role="Started"
    # pcs resource group add ProxyAndIP virtual_ip2
    # pcs constraint order virtual_ip then virtual_ip2 then squid
    # pcs cluster stop --all && sudo pcs cluster start --all

  3. Is it possible to have an active/active configuration where squid is running on both servers, but the virtual IP moves between hosts without starting/stopping squid?

    Replies
    1. I figured it out... I just didn't add squid to the resource group and it works great.

    2. Hi Jason,
      if the squid service goes down on one node, how does the cluster work? Does the connection go to the alive node?

    3. I don't have the cluster monitoring the squid service. I just created a script that will monitor the internet connection and move the cluster IP between the primary and backup Internet connection, and then fail back if the primary Internet connection is back online. We have 2 nodes, in different data centers connected to different Internet connections.

  4. I added the Squid resource but it fails with the error below:
    SquidProxy (ocf::heartbeat:Squid): FAILED hvsquid02.fushan.fihnbb.com

    Failed Actions:
    * SquidProxy_start_0 on hvsquid02.fushan.fihnbb.com 'unknown error' (1): call=34, status=Timed Out, exitreason='squid:Pid unmatch',
    last-rc-change='Mon Nov 19 14:11:55 2018', queued=0ms, exec=60004ms


    can somebody help to troubleshoot?
    thanks.
