ComposeDB schema stitching

Hey all,

Do you have an idea of how we could make GraphQL schema stitching work in our app? At the moment we have a client using Relay to interact with the Ceramic node.
We want to add a backend to our application and also query it with Relay, so I was thinking about using schema stitching on our backend so the two schemas are merged.
Do you know what would be the best way to do so?

Thanks!

Hey, @leo , thanks for reaching out :slight_smile: !

Let me play it back to you so that I can make sure I understood correctly:

  1. You’d like to have two GraphQL schemas in your client
  2. One schema is for ComposeDB
  3. The other schema would be for your backend
  4. You’d like to run queries against both of these schemas at the same time

Did I understand correctly?

If I did, can you share a bit more details about your setup and use case? Specifically, I’d love to know:
a. Do you run your own Ceramic node for ComposeDB? Where do you run it?
b. Where do you want to run your backend app?
c. Who is signing ComposeDB requests in your client? Do you ask your customers to sign in with their wallets?

Hey @anon94983028,

a. Yes, I am running my node on a cloud provider (fly.io).
b. My backend app, along with my frontend code, is a Next.js application where one of the API routes is a custom GraphQL server. It is deployed on Vercel.
c. Yes, users sign in with MetaMask and the did-session is stored in local storage.

To explain a bit more: my application is a Next.js website that fetches data using Relay as a client. At the moment I am interacting with Ceramic via Relay and the ComposeDB client. Now that my application is working fine like this, I want to add a more centralised part to it (required for some features) and create my own GraphQL endpoint that exposes my app’s queries/mutations and talks to a separate PostgreSQL database just for my app.
Relay only supports one client on the frontend, meaning one endpoint. One way I have solved this in the past is GraphQL schema stitching. That way the request is made to only one endpoint (in this case, I guess, my app) and, based on the query or mutation, the request is forwarded to the Ceramic node before the response is returned to the client.
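
To make it concrete, here is roughly the kind of gateway I have in mind. This is just a minimal sketch using @graphql-tools, assuming the Ceramic GraphQL API were reachable over HTTP; the endpoint URL and the app schema below are placeholders:

    // Minimal stitching sketch: merge a remote Ceramic GraphQL API with my own app schema.
    // The endpoint URL and the app schema are placeholders.
    import { stitchSchemas } from '@graphql-tools/stitch';
    import { schemaFromExecutor } from '@graphql-tools/wrap';
    import { buildHTTPExecutor } from '@graphql-tools/executor-http';
    import { makeExecutableSchema } from '@graphql-tools/schema';

    const ceramicExecutor = buildHTTPExecutor({ endpoint: 'http://localhost:5005/graphql' });

    export async function buildGatewaySchema() {
      // Remote subschema: the ComposeDB GraphQL API
      const ceramicSubschema = {
        schema: await schemaFromExecutor(ceramicExecutor),
        executor: ceramicExecutor,
      };

      // Local subschema: my app's own queries/mutations backed by my Postgres database
      const appSubschema = {
        schema: makeExecutableSchema({
          typeDefs: `
            type Query {
              appHealth: String!
            }
          `,
          resolvers: {
            Query: { appHealth: () => 'ok' },
          },
        }),
      };

      // One merged schema exposed on a single endpoint, so Relay only talks to my app
      return stitchSchemas({ subschemas: [ceramicSubschema, appSubschema] });
    }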

Some use cases this solution would enable:

  1. Extend query responses - the client would send a standard query to the Ceramic node, but before the response is returned, my server would augment it with data we can find on-chain or elsewhere. An example, querying the CeramicAccount object (a sketch of the resolver for this is below, after the list):
     query profilePagePostsListQuery {
       # Provided by the Ceramic node
       viewer {
         id

         # My GraphQL server would resolve this part
         ensData {
           name
           imageUrl
         }
       }
     }
  2. Create custom queries - a GraphQL resolver would make a request to the Ceramic node’s Postgres database to filter some data before returning it. As the current Ceramic node GraphQL queries are limited, this is needed to improve the user experience, e.g. filtering a list of posts by status “published”.
  3. Create app queries/mutations - a GraphQL resolver would make a request to my app’s Postgres database and update the user data. For this one, authentication could be done with next-auth, for example.
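
For use case 1, the stitched schema would add a field on top of the Ceramic subschema, roughly like this. Again just a sketch: CeramicAccount comes from the Ceramic schema, while lookupEns and the EnsData fields are made up for the example.

    // Sketch of use case 1: extend the Ceramic-provided CeramicAccount type with ENS data.
    // ceramicSubschema/appSubschema come from the gateway sketch above; lookupEns is a
    // hypothetical helper resolving on-chain ENS data for an account.
    import { stitchSchemas } from '@graphql-tools/stitch';
    import type { SubschemaConfig } from '@graphql-tools/delegate';

    declare const ceramicSubschema: SubschemaConfig;
    declare const appSubschema: SubschemaConfig;
    declare function lookupEns(accountId: string): Promise<{ name: string; imageUrl: string } | null>;

    const gatewaySchema = stitchSchemas({
      subschemas: [ceramicSubschema, appSubschema],
      typeDefs: `
        type EnsData {
          name: String
          imageUrl: String
        }
        extend type CeramicAccount {
          ensData: EnsData
        }
      `,
      resolvers: {
        CeramicAccount: {
          ensData: {
            // ask the Ceramic subschema for the account id, then resolve the ENS data locally
            selectionSet: '{ id }',
            resolve: (account) => lookupEns(account.id),
          },
        },
      },
    });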

Let me know if it’s not clear :slight_smile:

Hey again, @leo .

Yes, this is very helpful!

I think there’s one important detail about how ComposeDB processes GraphQL queries as of now: currently, they are parsed on the client side and converted into calls to the Ceramic REST API.

We’ve recently merged the first version of a feature that enables parsing of GraphQL queries on the backend, but it has two caveats:

  1. It’s not very well documented yet - this is something we’ll improve in a couple of weeks
  2. It only works for queries, not mutations. The reason for that is that the underlying Ceramic protocol operates on atomic updates of particular streams, and each of these updates needs to be signed by the stream’s controller. In your case, the controllers are the did:pkh accounts of your users, who sign a DID Session with their wallets. These sessions are then stored on the client and need to be used for signing updates. Therefore, the client converts GraphQL mutations into atomic updates, signs each of these updates with the DID Session and sends them to the node (see the sketch below).
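
To make that flow concrete, here is roughly what happens in the browser today. This is only a sketch: the node URL and the createPost mutation are placeholders that would depend on the models in your composite.

    // Runs in the browser: the DID Session lives there and signs every stream update,
    // which is why mutations have to be executed on the client for now.
    // The node URL and the createPost mutation are placeholders.
    import { ComposeClient } from '@composedb/client';
    import { DIDSession } from 'did-session';
    import { EthereumWebAuth, getAccountId } from '@didtools/pkh-ethereum';
    import { definition } from './__generated__/definition.js'; // compiled composite

    const compose = new ComposeClient({ ceramic: 'https://your-node.fly.dev', definition });

    async function signInAndCreatePost(ethProvider: any, text: string) {
      // Authorize a did:pkh session with the user's wallet (e.g. MetaMask)
      const addresses = await ethProvider.request({ method: 'eth_requestAccounts' });
      const accountId = await getAccountId(ethProvider, addresses[0]);
      const authMethod = await EthereumWebAuth.getAuthMethod(ethProvider, accountId);
      const session = await DIDSession.get(accountId, authMethod, {
        resources: compose.resources,
      });
      compose.setDID(session.did);

      // The GraphQL mutation is parsed by the client, converted into a stream update,
      // signed with the DID Session and sent to the node's REST API
      return compose.executeQuery(
        `mutation CreatePost($input: CreatePostInput!) {
          createPost(input: $input) { document { id } }
        }`,
        { input: { content: { text } } },
      );
    }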

Did I describe it clearly enough, @leo ?

Technically, it is possible to serialize a DID Session, send it to a backend, restore it there and use it for signing Ceramic updates, but this is not supported by the ComposeDB tools and we currently strongly advise against it.

So I think that, at this moment, if you want to use both ComposeDB and your own GraphQL server, you’re going to have to run two GraphQL clients in your Next.js app. Or at least, I don’t see any other solution as of now.
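
Just to illustrate what I mean, a rough sketch of the two-client setup with Relay (the node URL and the API route are placeholders):

    // Two Relay environments in the Next.js app: one backed by the ComposeDB client,
    // one pointing at the app's own GraphQL API route. URLs/paths are placeholders.
    import { Environment, Network, RecordSource, Store } from 'relay-runtime';
    import { ComposeClient } from '@composedb/client';
    import { definition } from './__generated__/definition.js';

    const compose = new ComposeClient({ ceramic: 'https://your-node.fly.dev', definition });

    // Environment #1: ComposeDB queries/mutations, executed by the ComposeDB client
    export const ceramicEnvironment = new Environment({
      network: Network.create(async (operation, variables) => {
        return (await compose.executeQuery(operation.text ?? '', variables)) as any;
      }),
      store: new Store(new RecordSource()),
    });

    // Environment #2: the app's own GraphQL server (Next.js API route on Vercel)
    export const appEnvironment = new Environment({
      network: Network.create(async (operation, variables) => {
        const response = await fetch('/api/graphql', {
          method: 'POST',
          headers: { 'Content-Type': 'application/json' },
          body: JSON.stringify({ query: operation.text, variables }),
        });
        return response.json();
      }),
      store: new Store(new RecordSource()),
    });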

Okay, this is what I assumed based on the client calls, so yes, in that case it will be really hard to do.
Looking forward to the backend accepting GraphQL queries! After that, I guess stitching could work for queries only, and mutations would be sent to the node directly.