GLPI Troubleshooting

Pod in CrashLoopBackOff state

Problem

The pod does not start.

Symptoms

The pod status is CrashLoopBackOff.

Get all objects in the namespace:
$ kubectl -n glpi get all
NAME                                         READY   STATUS             RESTARTS   AGE
pod/glpi-75f944445c-ncjss                    1/1     Running            1          41d
pod/glpi-mariadb-0                           0/1     CrashLoopBackOff   739        41d
pod/phpmyadmin-deployment-74fc9dd457-dljld   1/1     Running            1          44d

NAME                         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)          AGE
service/glpi-service         ClusterIP   xxx.xxx.xxx.xxx    <none>        80/TCP           41d
service/mariadb              NodePort    xxx.xxx.xxx.xxx    <none>        3306:31538/TCP   41d
service/phpmyadmin-service   NodePort    xxx.xxx.xxx.xxx    <none>        80:30100/TCP     44d

NAME                                    READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/glpi                    1/1     1            1           41d
deployment.apps/phpmyadmin-deployment   1/1     1            1           44d

NAME                                               DESIRED   CURRENT   READY   AGE
replicaset.apps/glpi-75f944445c                    1         1         1       41d
replicaset.apps/phpmyadmin-deployment-74fc9dd457   1         1         1       44d

NAME                            READY   AGE
statefulset.apps/glpi-mariadb   0/1     41d
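
The pod events and the logs from the previous crash usually narrow down the cause. A minimal check, assuming the failing pod is glpi-mariadb-0 as above:

$ kubectl -n glpi describe pod glpi-mariadb-0
$ kubectl -n glpi logs glpi-mariadb-0 --previous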

Check the folders that back the pod persistent volume (PV).
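
One way to locate them, sketched here assuming PV_NAME is a placeholder for the volume bound to the MariaDB claim and that the brick path matches the gluster output below:

$ kubectl -n glpi get pvc
$ kubectl describe pv PV_NAME
# On a gluster node, check that the brick directory exists and contains data.
$ ls -l /data/gluster/brick1/gv0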

Check the gluster volume status.

# gluster volume status gv0
Status of volume: gv0
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick xxx1.xxx.xxx:/data/glus
ter/brick1/gv0                              49153     0          Y       3420
Brick xxx2.xxx.xxx:/data/glu
ster/brick1/gv0                             N/A       N/A        N       N/A
Self-heal Daemon on localhost               N/A       N/A        Y       1101
Self-heal Daemon on xxx.xxx.xxx.xxx            N/A       N/A        Y       1096

Task Status of Volume gv0
------------------------------------------------------------------------------
There are no active volume tasks

Problem

One of the gluster bricks is not Online:

Brick xxx2.xxx.xxx:/data/glu
ster/brick1/gv0                             N/A       N/A        N       N/A

Solution

Force-start the gluster volume on the running server to bring the offline brick back up.

# gluster volume start gv0 force
volume start: gv0: success

Check that the volume is now Online.

# gluster volume status gv0
Status of volume: gv0
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick xxx1.xxx.xxx:/data/glus
ter/brick1/gv0                              49153     0          Y       3420
Brick xxx2.xxx.xxx:/data/glu
ster/brick1/gv0                             49153     0          Y       1792
Self-heal Daemon on localhost               N/A       N/A        Y       1096
Self-heal Daemon on xxx2.trikorasolutions
.net                                        N/A       N/A        Y       1101

Task Status of Volume gv0
------------------------------------------------------------------------------
There are no active volume tasks

Restart the GLPI deployment and check that all pods are Running.

$ kubectl -n glpi rollout restart deployment glpi
$ kubectl -n glpi get pods
NAME                                     READY   STATUS    RESTARTS   AGE
glpi-747f9f7cc9-vm5qm                    1/1     Running   0          102s
glpi-mariadb-0                           1/1     Running   743        42d
phpmyadmin-deployment-74fc9dd457-dljld   1/1     Running   1          44d
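
The MariaDB pod is managed by a StatefulSet, so if it does not recover on its own once the volume is back, it can be restarted the same way (assuming the StatefulSet is named glpi-mariadb as above):

$ kubectl -n glpi rollout restart statefulset glpi-mariadb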

Blank screen when adding or updating a ticket

Problem

When executing some actions (apparently the ones that send notifications), a blank screen is shown.

Note
In DEBUG mode the following error is shown:
Uncaught Exception RuntimeException: You must create a security key, see glpi:security:change_key command. in /var/www/html/inc/glpikey.class.php at line 120

Cause

After recreating the GLPI container, the security key has to be created again.
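
The error message points at the glpi:security:change_key console command. A minimal sketch for regenerating the key inside the running container, assuming the GLPI console is at /var/www/html/bin/console and the deployment is named glpi as above:

$ kubectl -n glpi exec -it deployment/glpi -- php /var/www/html/bin/console glpi:security:change_key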

Cannot delete PVC

This happens when the PersistentVolumeClaim is protected by a finalizer. You can verify this as follows:

Command:

$ kubectl describe pvc PVC_NAME | grep Finalizers

Output:

Finalizers:    [kubernetes.io/pvc-protection]

You can fix this by clearing the finalizers with kubectl patch:

$ kubectl patch pvc PVC_NAME -p '{"metadata":{"finalizers": []}}' --type=merge
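
If the delete was already issued, the claim should disappear once the finalizer is cleared; a quick check:

$ kubectl get pvc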