Need help running a Kubernetes cluster

Hello! I am trying to set up a ComposeDB server on DigitalOcean Kubernetes as described in the doc (Running in the Cloud | Ceramic documentation). I followed the instructions, but something does not seem to be working: the pods are still Pending, not Running, after several hours. Are the instructions in the doc complete, or are there other things to configure? Are there other resources about running a ComposeDB server?

```
(base) bruno@gram:~/Documents/Ceramic/simpledeploy$ kubectl get pods --watch --namespace ceramic
NAME          READY   STATUS    RESTARTS   AGE
composedb-0   0/1     Pending   0          129m
ipfs-0        0/1     Pending   0          129m
postgres-0    0/1     Pending   0          129m
```

@3ben could you take a look at this?

Hi brunolune!

Can you check the output of the describe command on the pods?
That should give some indication of why they’re failing.

Try:
```shell
kubectl describe pod composedb-0
kubectl describe pod ipfs-0
kubectl describe pod postgres-0
```

The workloads are pending on some condition; we just need to figure out which one.
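If the describe output is long, it can help to pull just the scheduling events. A quick sketch using stock kubectl (pod names taken from the output above):

```shell
# Show only the Events section of the pod description
kubectl describe pod composedb-0 --namespace ceramic | grep -A 10 Events

# Or list all recent events in the namespace, oldest first
kubectl get events --namespace ceramic --sort-by=.metadata.creationTimestamp
```

For Pending pods, the scheduler's reason (insufficient resources, unbound volume claims, taints, etc.) almost always appears here.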

Hi!
It looks like I had messed things up with the doctl auth context. So I deleted the Kubernetes cluster and reinstalled doctl on my machine, then recreated a cluster and followed the instructions to launch simpledeploy. I got a different output this time, but it looks like it is failing again:
```
(base) bruno@gram:~/Documents/Ceramic/simpledeploy$ kubectl get pods --watch --namespace ceramic
NAME          READY   STATUS              RESTARTS   AGE
composedb-0   0/1     Pending             0          17s
ipfs-0        0/1     Pending             0          16s
postgres-0    0/1     ContainerCreating   0          16s
postgres-0    0/1     Pending             0          74s
postgres-0    0/1     Terminating         0          74s
postgres-0    0/1     Terminating         0          75s
postgres-0    0/1     Terminating         0          76s
composedb-0   0/1     Pending             0          77s
postgres-0    0/1     Terminating         0          76s
ipfs-0        0/1     Pending             0          76s
postgres-0    0/1     Terminating         0          76s
postgres-0    0/1     Pending             0          0s
postgres-0    0/1     Pending             0          0s
^C(base) bruno@gram:~/Documents/Ceramic/simpledeploy$ kubectl get pods --watch --namespace ceramic
NAME          READY   STATUS    RESTARTS   AGE
composedb-0   0/1     Pending   0          26m
ipfs-0        0/1     Pending   0          26m
postgres-0    0/1     Pending   0          25m
^C(base) bruno@gram:~/Documents/Ceramic/simpledeploy$ kubectl describe pod composedb-0
Error from server (NotFound): pods "composedb-0" not found
(base) bruno@gram:~/Documents/Ceramic/simpledeploy$ kubectl describe pod postgres-0
Error from server (NotFound): pods "postgres-0" not found
(base) bruno@gram:~/Documents/Ceramic/simpledeploy$ kubectl describe pod ipfs-0
Error from server (NotFound): pods "ipfs-0" not found
```

I chose a fixed-size basic Kubernetes cluster:

I checked operational readiness on the Kubernetes dashboard; the 4 issues are:

Also worth noting that @kammerdiener runs a node-hosting service called hirenodes.io, if that’d be helpful to you.

Ah, ok. The describe command also needs the namespace flag, --namespace ceramic.

So:
```shell
kubectl describe pod composedb-0 --namespace ceramic
kubectl describe pod ipfs-0 --namespace ceramic
kubectl describe pod postgres-0 --namespace ceramic
```

The problem is probably resources: that node is very light, and we’ll need a few CPUs and more memory to run the full stack on Kubernetes.
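To confirm it is a capacity problem, you can compare what the pods request against what the node can actually offer. A sketch with stock kubectl:

```shell
# How much CPU/memory is already requested and limited on each node
kubectl describe nodes | grep -A 8 "Allocated resources"

# What each node can allocate in total (capacity minus system reservations)
kubectl get nodes -o custom-columns=NAME:.metadata.name,CPU:.status.allocatable.cpu,MEMORY:.status.allocatable.memory
```

If the sum of the pods' CPU requests exceeds the allocatable CPU shown here, the scheduler will leave them Pending.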

If you’re new to Kubernetes, I’d suggest running a node on a VM first to get used to the process before diving into Kubernetes!

Ok, thanks, I got this output:
```
(base) bruno@gram:~/Documents/Ceramic/simpledeploy$ kubectl describe pod composedb-0 --namespace ceramic
Name:             composedb-0
Namespace:        ceramic
Priority:         0
Service Account:  default
Node:             <none>
Labels:           app=composedb
                  apps.kubernetes.io/pod-index=0
                  controller-revision-hash=composedb-68c4c4f469
                  statefulset.kubernetes.io/pod-name=composedb-0
Annotations:      <none>
Status:           Pending
IP:
IPs:              <none>
Controlled By:    StatefulSet/composedb
Init Containers:
  init-composedb-config:
    Image:      ceramicnetwork/composedb:dev
    Port:       <none>
    Host Port:  <none>
    Command:
      /bin/bash
      -c
      /composedb-init/compose-init.sh
    Limits:
      cpu:                250m
      ephemeral-storage:  1Gi
      memory:             512Mi
    Requests:
      cpu:                250m
      ephemeral-storage:  1Gi
      memory:             512Mi
    Environment Variables from:
      composedb-env-c899225m8b  ConfigMap  Optional: false
    Environment:
      CERAMIC_ADMIN_PRIVATE_KEY:     <set to the key 'private-key' in secret 'ceramic-admin'>  Optional: false
      CERAMIC_INDEXING_DB_USERNAME:  <set to the key 'username' in secret 'postgres-auth'>     Optional: false
      CERAMIC_INDEXING_DB_PASSWORD:  <set to the key 'password' in secret 'postgres-auth'>     Optional: false
    Mounts:
      /composedb-init from composedb-init (rw)
      /config from config-volume (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-vs6pm (ro)
Containers:
  composedb:
    Image:      ceramicnetwork/composedb:dev
    Port:       7007/TCP
    Host Port:  0/TCP
    Command:
      /js-ceramic/packages/cli/bin/ceramic.js
      daemon
      --config
      /config/daemon-config.json
    Limits:
      cpu:                250m
      ephemeral-storage:  1Gi
      memory:             512Mi
    Requests:
      cpu:                250m
      ephemeral-storage:  1Gi
      memory:             512Mi
    Liveness:   http-get http://:7007/api/v0/node/healthcheck delay=60s timeout=30s period=15s #success=1 #failure=3
    Readiness:  http-get http://:7007/api/v0/node/healthcheck delay=0s timeout=30s period=15s #success=1 #failure=3
    Environment Variables from:
      composedb-env-c899225m8b  ConfigMap  Optional: false
    Environment:
      CERAMIC_STATE_STORE_PATH:   /ceramic-data
      CERAMIC_ADMIN_PRIVATE_KEY:  <set to the key 'private-key' in secret 'ceramic-admin'>  Optional: false
    Mounts:
      /ceramic-data from ceramic-data (rw)
      /config from config-volume (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-vs6pm (ro)
Conditions:
  Type           Status
  PodScheduled   False
Volumes:
  ceramic-data:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  ceramic-data-composedb-0
    ReadOnly:   false
  config-volume:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:
    SizeLimit:  <unset>
  composedb-init:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      composedb-init-hkmg9fm4gg
    Optional:  false
  kube-api-access-vs6pm:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   Guaranteed
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason             Age                      From                Message
  ----     ------             ----                     ----                -------
  Warning  FailedScheduling   20m (x58 over 5h5m)      default-scheduler   0/1 nodes are available: 1 Insufficient cpu. preemption: 0/1 nodes are available: 1 No preemption victims found for incoming pod.
  Normal   NotTriggerScaleUp  4m45s (x306 over 5h11m)  cluster-autoscaler  pod didn't trigger scale-up:
```

So indeed it complains about insufficient CPU…
The doc says 1 vCPU is enough (“to follow this guide you can start with a 1GB RAM and 1vCPU cluster.”), but what are the actual resources needed?
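A rough floor can be read off the describe output above: the composedb container requests 250m CPU, and if the ipfs and postgres pods request similar amounts (an assumption; check their describe output), the stack alone wants roughly 750m, before counting the system DaemonSets (CNI, kube-proxy, etc.) that DigitalOcean runs on every node. That is why a 1 vCPU node cannot schedule everything. One option is a node with 2+ vCPUs; another is to shrink the requests. A hypothetical patch, with example values that are not the simpledeploy defaults:

```yaml
# Example only: lower the composedb container's resource requests so it
# can schedule on a small node, trading performance for cost.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: composedb
  namespace: ceramic
spec:
  template:
    spec:
      containers:
        - name: composedb
          resources:
            requests:
              cpu: 100m      # down from 250m in the describe output
              memory: 512Mi
```

This could be applied with `kubectl patch --namespace ceramic` or folded into the kustomize overlay that simpledeploy uses.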

Indeed, I am not familiar with Kubernetes. At this point I just want the ComposeDB server to be accessible by anyone using my app, for development purposes.

Is there any existing documentation to guide me in running a node on a VM using Postgres and IPFS?

If you just want a basic node to use during development, you can install the @ceramicnetwork/cli package via npm and run ceramic daemon. The node it starts won’t be suitable for production use and won’t have strong data durability guarantees, but it will be sufficient to have the APIs available to start building against.
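A minimal sketch of that workflow (package name as given above; 7007 is the daemon’s default API port):

```shell
# Install the Ceramic CLI globally
npm install -g @ceramicnetwork/cli

# Start a local development node; the HTTP API listens on port 7007
ceramic daemon
```

State is kept locally, so this is fine for iterating on composites but not for anything you need to keep.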

Hello,
I managed to get the Kubernetes ComposeDB server running and could deploy my composites on it (I started with 2 vCPUs and allowed the cluster to scale; for deploying the composites I used Mark’s example from here: ceramic-delegate-profiles/run.mjs at main · mzkrasner/ceramic-delegate-profiles · GitHub). It works fine locally, but I ran into an issue accessing the server on DigitalOcean from my app deployed on Vercel, because the server endpoint uses plain HTTP rather than HTTPS. I tried to figure out how to change that. In the doc (Running in the Cloud | Ceramic documentation), the following command is used to expose the node endpoint to the internet:
```shell
kubectl apply -f k8s/base/composedb/do-lb.yaml
```
It creates a load balancer and defines the HTTP port forwarding. By the way, only the second spec in the manifest (do-lb.yaml) takes effect:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: composedb-lb
  namespace: ceramic
  labels:
    app: composedb
spec:
  ports:
    - port: 7007
      targetPort: 7007
      protocol: TCP
      name: composedb
  selector:
    app: composedb
  type: NodePort

spec:
  type: LoadBalancer
  selector:
    app: composedb
  ports:
    - name: api
      protocol: TCP
      port: 7007
      targetPort: 7007
```

Finally, after a lot of searching, I used this command to create the load balancer, modified the port-forwarding rules as explained in How to Configure SSL Termination :: DigitalOcean Documentation, and added the SSL certificate associated with my domain dbrains.cloud.

The forwarding rule in the DigitalOcean control panel ended up as:

- HTTPS, port 443, SSL certificate dbrains.cloud → HTTP, port 31799

And it works: I can now reach the Kubernetes ComposeDB server via https://dbrains.cloud:443 from my app on Vercel.

For the moment I will use the DigitalOcean Kubernetes cluster, especially because I got $200 of credit for 2 months to run it thanks to DigitalOcean’s free trial offer (see How To Use Doctl, the Official DigitalOcean Command-Line Client | DigitalOcean).
But I would be interested in running a ComposeDB server on a basic node, as you seem to say is possible.


The solution I proposed seemed to work but was not stable, and the load balancer I created was removed by DigitalOcean. I did more research, and here is the proper way to set up an HTTPS connection for a Kubernetes load balancer on DigitalOcean:
In simpledeploy, I modified the do-lb.yaml file to be the following:

```yaml
kind: Service
apiVersion: v1
metadata:
  name: composedb-lb
  namespace: ceramic
  labels:
    app: composedb
  annotations:
    service.beta.kubernetes.io/do-loadbalancer-protocol: "https"
    service.beta.kubernetes.io/do-loadbalancer-certificate-id: "535ee7db-79f5-49a5-b1b5-883e032394ae"
    service.beta.kubernetes.io/do-loadbalancer-disable-lets-encrypt-dns-records: "false"
    service.beta.kubernetes.io/do-loadbalancer-healthcheck-path: "/api/v0/node/healthcheck"
spec:
  type: LoadBalancer
  selector:
    app: composedb
  ports:
    - name: api
      protocol: TCP
      port: 7007
      targetPort: 7007
    - name: https
      protocol: TCP
      port: 443
      targetPort: 7007
```

The load balancer is created with this command:
```shell
kubectl apply -f ./k8s/base/composedb/do_ssl_lb.yaml
```

Both port definitions are needed in the manifest: the first is for internal communication within the cluster, and the second is for external HTTPS traffic.
The health check also needed to be redefined (load balancers only forward requests to nodes that pass health checks).
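To verify the whole chain (DNS, certificate, load balancer, health check), a quick probe against the endpoint from the post:

```shell
# Expect an HTTP 200 once the load balancer reports the node healthy;
# the path matches the healthcheck annotation in the manifest above.
curl -i https://dbrains.cloud/api/v0/node/healthcheck
```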

It seems to work well with this setup now.


Awesome! Glad to hear you were able to get everything working!

Hi @brunolune

Were you able to get this running as a high availability cluster?

I won’t spend too much time explaining HA (figured you’d know if you’re trying to run in k8s), but I want to clarify what we want out of this:

Availability & Sync: any Ceramic node’s data is available on any other node, so if one node fails, the same data is available on another. Right now we can run multiple Ceramic nodes, but each has its own data and they do not sync with each other.

We also want to snapshot the storage backend (not the nodes), in case all nodes fail (i.e., when invoking disaster recovery procedures). Have you sorted this out in k8s as well?

Would really appreciate it :pray: if you could comment on any of this. BTW, I have a post about my ask (except we do not want to do this in k8s): Setting up HA Ceramic cluster plus node snapshots

Cheers,
Fadi