Quick Testnet Node for Indexing ComposeDB models


I’m looking at using Ceramic ComposeDB at EthLisbon, and I am thinking about deploying the dApp and having the models indexed. Is there a quick way of having the models indexed without having to set up my own Ceramic node? If I do have to set up a node, is there a reliable way to do this for development / demo purposes?

Please could someone get back to me soon, as I need to make a decision about whether to work with Ceramic ComposeDB based on this.

Thank you

@mattdavis0351.eth maybe you could provide some guidance here?

Thanks for the tag @spencer!

@mkc I’m happy to help with this. Because of indexing I’d recommend you run your own node. This leaves you with two options.

  1. Run the node locally in development mode. We have a guide that can help you. This is recommended if you are a one-dev team or have really small, easy-to-replicate data that your team members can set up on their local nodes as well. You can connect your node to testnet so that your whole team can access the same streams if need be.
  2. Run a node with a cloud provider. I would recommend AWS, since we already have tooling and templates built that can help with this. This is also useful if you want to run a single node and have your entire team connect to it during development. There is additional overhead when it comes to managing this node, since team members may need the ability to SSH into it to make configuration updates.
    • Here is a template repo that has a GitHub Actions workflow in it. It will configure a node inside of AWS for you. It configures the node in production mode, meaning IPFS and Ceramic are decoupled and data persistence is also set up. If you run it as-is, it will only use resources that are included in the AWS free tier.
    • If you’d like to run a node in AWS configured in development mode, which is the same as running locally but gives your entire team access to the same node, you can follow our guide on running nodes in AWS for development. I will say that if you are using a newer version of Ubuntu when you follow this guide, you may need to disable AppArmor to get access to a Docker container using a non-standard port. Here is a guide to disable AppArmor.
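As a rough sketch of option 1, the commands below start a local node joined to the Clay testnet, and show the blunt AppArmor workaround mentioned for newer Ubuntu hosts. Package and flag names reflect the CLI version I’ve used, so double-check with `ceramic daemon --help` before relying on them:

```shell
# Install the Ceramic CLI and start a local node connected to the Clay testnet
npm install -g @ceramicnetwork/cli
ceramic daemon --network testnet-clay

# On newer Ubuntu, AppArmor can block access to a Docker container on a
# non-standard port; stopping/disabling it is one workaround (see the guide)
sudo systemctl stop apparmor
sudo systemctl disable apparmor
```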

All of these are reliable ways to run a node for dev/demo purposes!


Thank you for this!

I will take a look at setting up on AWS as the instructions seem easy to follow.

Hey @mattdavis0351.eth

I have this set up now; however, I wondered where I can find the daemon.config.json to add my models to? I’m struggling to find it in the $HOME directory.

Thank you

No worries, I’ve now found it in the ec2-user directory.
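For anyone else hunting for this file: once located, the relevant parts look roughly like the sketch below. The DID and model stream ID are placeholders, and field names can differ between Ceramic versions, so compare against the file your own node generated:

```
{
  "http-api": {
    "admin-dids": ["did:key:<your-admin-did>"]
  },
  "indexing": {
    "models": ["<your-model-stream-id>"]
  }
}
```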

Apologies for following up so many times.

I have set up everything on my EC2 node and I’m trying to access the models I created locally. To do this I have added the admin DID and the models to the indexing config. However, when I switch to the EC2 server and try to mutate data I get the following:

Error: Failed to resolve did:3:kjzl6…?version-id=0#…: invalidDid, TypeError: Network request failed

I can read data so I know the server is working and indexing correctly. The DID changes as expected when I change the ETH account.

Make sure you have firewall rules that allow traffic on 7007 in an AWS security group. If you followed the guide, then I’ll assume this is in place.
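If anyone needs to add that rule by hand, a sketch with the AWS CLI is below. The security group ID is a hypothetical placeholder, and this opens port 7007 to the whole internet, which is only reasonable for a short-lived demo:

```shell
# Allow inbound TCP 7007 on the instance's security group.
# sg-0123456789abcdef0 is a placeholder; use your own group ID,
# and tighten the CIDR for anything beyond a hackathon demo.
aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp --port 7007 --cidr 0.0.0.0/0
```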

If you followed the guide, I’ll also assume you are using Amazon Linux, in which case you will need to disable selinux on the host machine or else it will not allow traffic on non-standard ports.

Your network requests are most likely failing because selinux is blocking them even though you have the proper firewall rules in place.
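To check and disable selinux, something like the following should work on Amazon Linux (the sed edit assumes the stock config file layout):

```shell
# Check the current mode; "Enforcing" means selinux may be blocking 7007
getenforce

# Switch it off for the running session (does not survive a reboot)
sudo setenforce 0

# To make it permanent, set SELINUX=disabled in the config and reboot
sudo sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config
```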

Hey @mattdavis0351.eth

I checked the selinux settings and they were already disabled, i.e. /etc/selinux/config already had SELINUX=disabled.


I’ve also followed the guide you put together, so the security groups are correctly set up. I can also see a successful GET request to port 7007 on the server. Confusingly, the error I shared above goes away when I spin up my local node alongside the remote EC2 ceramic daemon; however, my code is:

export const composeDbClient = new ComposeClient({
  // ceramic: "http://localhost:7007",
  ceramic: "",
});

As you can see, the local node is commented out and is not being used, but for some reason the EC2 node will not write data unless the local node is running :man_shrugging: Is this something to do with me initially creating the DID for the account on the local node before trying to use it on another node?

We have EthLisbon starting tomorrow, so any guidance would be greatly appreciated; otherwise we will struggle to deploy a publicly accessible version of our dApp.

To confirm the source of the issue: the first screenshot shows the mutation, which is run on localhost:7007

And then the request straight afterwards is run on the EC2 instance

Seeing this, I can say with confidence that the network error is down to the localhost daemon not being found when running a mutation, and not the EC2 instance’s firewall rules. Somehow the queries are run on the remote node and the mutations locally. A little strange - any help would be greatly appreciated!

Apologies, me again! :crazy_face: A bonus for us would be a pointer in the right direction for keeping the ceramic daemon up and running after the SSH connection to the EC2 instance is terminated. I have tried a few things, like supplying ‘user data’ on startup and working with cloud-init, but I’m struggling to get ceramic running on startup / in the background. Any pointers would be very helpful for our demo.

Thank you

You’ll have to look into bash a bit more to get Ceramic to keep running once the SSH connection is terminated. Maybe configure a startup script or cron job that starts ceramic when your EC2 instance starts.
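Two sketches along those lines, assuming the ceramic binary is on ec2-user’s PATH and the daemon picks up its config from the default location; adjust users and paths for your install:

```shell
# Quick and dirty: detach the daemon from the SSH session so it survives logout
nohup ceramic daemon > ~/ceramic.log 2>&1 &

# More durable: a systemd unit so the daemon starts on boot and restarts on failure
sudo tee /etc/systemd/system/ceramic.service > /dev/null <<'EOF'
[Unit]
Description=Ceramic daemon
After=network-online.target

[Service]
User=ec2-user
ExecStart=/usr/bin/env ceramic daemon
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

sudo systemctl daemon-reload
sudo systemctl enable --now ceramic
```

The systemd route is the one worth doing for a demo: `Restart=on-failure` also brings the daemon back if it crashes, not just after a reboot.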

There is definitely a threshold for the support we can provide; we are a pretty small team. The assumption is that if you are running a node in the cloud, you have some understanding of cloud and Linux administration.

Running a node locally is still possible, as you are seeing with your mutations. If that is the easier way for you to do something at a hackathon, then that is what I’d recommend. Unfortunately, you’ll need a fair amount of cloud/Linux knowledge to get up and running in AWS.

My first two suggestions could work. I am curious about what you have tried to get the daemon to keep running when you close the SSH session. That info would go a long way toward my being able to help you, so that I don’t tell you to do a bunch of things you’ve already tried.