Proposal - Oamo - Data Network helping individuals monetize their data

Grant/Project Name: Oamo

Proposer’s contact info (please provide a reliable point of contact for the project): yannick@oamo.io

Grant Category: Tooling/Analytics

ELI5 Project Summary: Oamo is creating a platform that helps individuals own and monetize their data by granting companies access to it in exchange for rewards. Once they acquire user data, companies gain user behavior insights and can send targeted promotions and surveys to users.

Project Description:

Oamo is building a Data Marketplace. Its suite of tools helps users monetize their on-chain and off-chain data by granting companies access to it in exchange for rewards.

Oamo’s data ecosystem consists of a Decentralized Identity Portal for users to manage and monetize their data, and a Data Management Platform for companies looking to acquire and analyze datasets. Both stakeholders are connected by Data Pools, which aggregate datasets and provide access to them, and by Oamo’s Messaging System, which opens communication lines between users and companies.

Companies can create data pools with specific participation criteria (e.g. smart contract activity, token holdings, demographic profile) to aggregate user data anonymously and leverage the resulting insights, as sketched below.
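
As a rough illustration only (these field names are hypothetical, not Oamo’s actual API), a pool’s participation criteria could be modeled like this:

```ts
// Hypothetical shape of a data pool's participation criteria.
interface TokenHolding {
  token: string     // token contract address
  minAmount: number
}

interface DataPoolCriteria {
  contractsInteractedWith?: string[] // on-chain: contracts the wallet must have used
  tokenHoldings?: TokenHolding[]     // on-chain: minimum balances
  countries?: string[]               // off-chain: demographic profile
  ageRange?: [number, number]
}
```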

They can then communicate with participants directly via the platform, sharing user-specific promotions and surveys that complement those insights.

Relevant links:

Ceramic Ecosystem Value Proposition:

  • What is the problem statement this proposal hopes to solve for the Ceramic ecosystem? We are looking to increase the number of Ceramic users and the amount of data stored on Ceramic.

  • How does your proposal offer a value proposition solving the above problem? Oamo onboards individuals to Ceramic via a DID, to which users connect their wallets and social profiles. Users then store their on-chain and off-chain data on Ceramic and manage access to this data.

Companies onboard to Ceramic via our Data Management Platform where they can gain access to aggregated and anonymized user data in the form of insights. They can then send promotions and surveys to specific user segments by communicating with their DID.

  • Why will this solution be a source of growth for the Ceramic ecosystem? If successful in our goal to decentralize the data brokerage industry, Oamo will onboard millions of users and thousands of companies to the Ceramic ecosystem. They will store personal data, analyze aggregated data, and communicate via their Ceramic-based DIDs.

Funding requested (DAI/USDC): 40,000 USDC

How much are you applying for? Make sure to break the amount requested down by milestones

Milestones:

  • Milestone #1: Complete our data permissioning layer (2 months) - [$20,000]
    • Kick off the database architecture and development environment setup. 50 hours @ $50/h
    • Build a data aggregator that digests specific contracts and events and indexes them in a format that allows our data science team to generate meaningful insights. 180 hours @ $50/h
    • Build the smart contract that governs the data pool and create the user flow to customize such a contract. 150 hours @ $50/h
  • Milestone #2: Launch the DID MVP (3 months) - [$20,000]
    • Create the CI/CD and all the SysOps setup needed to have a staging and a production environment. 80 hours @ $50/h
    • Initial DID setup and user onboarding flow. 70 hours @ $50/h
    • Build the ComposeDB schema and the ingestion app that will push all the relevant data from our SQL database to Ceramic once the user is onboarded. 120 hours @ $50/h
    • Build the user dashboard that will allow a user to accept pool onboarding requests, navigate the offers, and see their progression on the platform. 130 hours @ $50/h

I understand that I will be required to provide additional KYC information to 3Box Labs to receive this grant: Yes


Hi @YFolla, thank you for your proposal! We will be in touch with an update once we have completed our initial review (1-2 weeks).

One quick question: Is parallelization possible in your milestones #1/#2? This initial round of grants is trying to focus on funding proposal roadmaps through ETH Denver (~3 months), and I am noticing that your milestones would take ~5 months if they are indeed sequential.

Hi Sam,

Noted, I didn’t know we needed to aim for ETH Denver deliveries so we’ll update the milestones by tomorrow.

Best,

Yannick

@0x_Sam I don’t think I can edit the initial post, unless I’m mistaken, so here is the updated milestone breakdown to be delivered by ETH Denver. We’ll hire an extra engineer to complete it on time.

  • Milestone #1: ETL pipeline development ($10,660 USDC)
    • Kick off the database architecture and development environment setup. 80 hours @ $41/h
    • Build an ETL pipeline for extracting web3 and web2 data that will allow our data science team to generate meaningful insights and data owners to check their data pool eligibility. 180 hours @ $41/h
  • Milestone #2: Smart contract development ($8,200 USDC)
    • Build the smart contract that governs the data pool and create the user flow to customize such a contract. 200 hours @ $41/h
  • Milestone #3: Launch the DID MVP ($20,500 USDC)
    • Create the CI/CD and all the SysOps setup needed to have a staging and a production environment. 120 hours @ $41/h
    • Initial DID setup using Self.ID and the user onboarding flow. 150 hours @ $41/h
    • Build and deploy the ComposeDB data models. 120 hours @ $41/h
    • Build the user dashboard that will allow a user to opt in to data pools. 130 hours @ $41/h
    • Build the data buyer dashboard that will allow for the creation of data pools. 130 hours @ $41/h

Hi @YFolla, thank you for your grant proposal.

The team has reviewed your proposal and we are excited to award you a Ceramic Sculptors Grant :tada:

We will follow up shortly with more details via email.

[ Update 01/07 - v0.0.1 ]

Here’s our first update as part of the grant program:

Overview

During the last few weeks of work, we have been de-risking the architecture choices we made, and during the next sprint we will start implementing the core features of our app. For now, our application consists of:

  • A Next.js application that talks to our API, a GraphQL endpoint served by Hasura
  • A few hooks that link the data between Hasura and ComposeDB
  • A Postgres DB that stores the synced data from our indexer
  • Business logic split across serverless functions triggered by Hasura actions (see the sketch after this list)
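
For illustration, one of those Hasura-triggered serverless functions could look roughly like this; the request payload shape is Hasura’s documented action-webhook format, while `syncWalletHandler` and `indexWallet` are hypothetical names:

```ts
import type { Request, Response } from 'express'

// Hypothetical business logic, stubbed out for this sketch.
declare function indexWallet(userId: string, address: string): Promise<{ count: number }>

export async function syncWalletHandler(req: Request, res: Response) {
  // Hasura posts { action, input, session_variables } to the action webhook.
  const { input, session_variables } = req.body
  const userId = session_variables['x-hasura-user-id']

  try {
    const result = await indexWallet(userId, input.address)
    // The response shape must match the action's declared output type.
    res.json({ status: 'ok', indexedEvents: result.count })
  } catch (err) {
    // Hasura expects errors as { message } with a 4xx status code.
    res.status(400).json({ message: (err as Error).message })
  }
}
```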

We have spun up our Ceramic node and mapped out data flows from Web2 (Twitter, Google, Facebook, Discord) and Web3 data sources to ComposeDB.

We have defined the data models required for our implementation and will be developing custom schemas to link Hasura to ComposeDB.
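
As a sketch of what such a schema can look like (the `OamoProfile` model and its fields are hypothetical, not our actual data models), ComposeDB models are written in GraphQL SDL and can be deployed programmatically:

```ts
import { CeramicClient } from '@ceramicnetwork/http-client'
import { Composite } from '@composedb/devtools'

// Hypothetical profile model; the real Oamo schemas differ.
const schema = `
  type OamoProfile @createModel(accountRelation: SINGLE, description: "Basic Oamo profile") {
    displayName: String! @string(maxLength: 100)
    twitterHandle: String @string(maxLength: 50)
  }
`

const ceramic = new CeramicClient('http://localhost:7007')
// Creating a composite requires the Ceramic client to be authenticated with an admin DID.
const composite = await Composite.create({ ceramic, schema })
```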

What’s next

Our focus for the next week of the sprint will be to deploy our instance of ComposeDB in our staging environment and develop the user authentication flow, so users can log in to the app with an Ethereum wallet to instantiate their DID.
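
For reference, the wallet-to-DID flow we have in mind looks roughly like the following, based on the `did-session` and `@didtools/pkh-ethereum` packages as documented at the time (treat the details as a sketch, not our final implementation):

```ts
import { DIDSession } from 'did-session'
import { EthereumWebAuth, getAccountId } from '@didtools/pkh-ethereum'

// The app's ComposeDB client, created elsewhere from the compiled composite.
declare const compose: import('@composedb/client').ComposeClient

const ethProvider = (window as any).ethereum
const addresses = await ethProvider.request({ method: 'eth_requestAccounts' })
const accountId = await getAccountId(ethProvider, addresses[0])
const authMethod = await EthereumWebAuth.getAuthMethod(ethProvider, accountId)

// The session's DID is authorized to write to the models listed in `resources`.
const session = await DIDSession.authorize(authMethod, { resources: compose.resources })
compose.setDID(session.did)
```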

Thereafter, we will start fetching Web2 and Web3 data and develop our implementation of Verifiable Credentials to store specific credentials related to each data source in ComposeDB.

Blockers

No blockers for now.

[ Update 01/27 - v0.0.1 ]

Here’s our second update as part of the grant program:

Overview

During the last week, we dove into the integration between ComposeDB and our own infrastructure. While we ran into a number of hurdles, described under Blockers, we progressed quite a bit towards a successful integration:

  • After extensive research into ComposeDB, we decided to wrap the ComposeDB GraphQL API under our Hasura GraphQL API as a remote schema, so that our backend only talks to one endpoint (see the sketch after this list). The Hasura team has been supportive in helping us implement the remote schema and connect it to ComposeDB.
  • We started indexing web3 data using Transpose and storing the data in Hasura. We will then connect Hasura to ComposeDB to store and encrypt user data with their DID.
  • We started connecting web2 data sources (Twitter done; Discord, Google, and Facebook next) and started building our own implementation of Verifiable Credentials, which we call “qualifiers”.
  • We built a Dockerized version of ComposeDB to streamline dev work and staging deployment. We will need to explore how to secure our deployment; as of now, there is no JWT protecting the GraphQL endpoint.
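
As a sketch of the remote-schema wiring (the URLs, ports, and schema name are placeholders), Hasura’s metadata API can register the ComposeDB GraphQL server like this:

```ts
// Register ComposeDB's GraphQL server as a Hasura remote schema.
const res = await fetch('http://localhost:8080/v1/metadata', {
  method: 'POST',
  headers: {
    'content-type': 'application/json',
    'x-hasura-admin-secret': process.env.HASURA_ADMIN_SECRET!,
  },
  body: JSON.stringify({
    type: 'add_remote_schema',
    args: {
      name: 'composedb',
      definition: {
        url: 'http://composedb-server:5005/graphql', // placeholder ComposeDB endpoint
        forward_client_headers: true,
        timeout_seconds: 60,
      },
    },
  }),
})
```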

What’s next

We will spend most of next week implementing more Web2 data sources and expanding the number of smart contracts we index. We will also need to figure out an encryption strategy, as we are planning to write sensitive user data on Ceramic and have yet to find out how to permission it (see the Blockers section for more details).

Blockers

It took us a while to understand and circumvent the fact that ComposeDB’s permission model does not provide any ability to permission the data itself. We decided to use Lit Protocol to encrypt specific data, but this removed the ability to query that data. We would like to see what the roadmap could look like for ComposeDB to implement privacy parameters for certain sensitive data in the long run.
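
For context, the Lit flow we are evaluating looks roughly like this under the v2 SDK (the access control condition, network name, and wallet address are illustrative):

```ts
import * as LitJsSdk from '@lit-protocol/lit-node-client'

const client = new LitJsSdk.LitNodeClient({ litNetwork: 'serrano' })
await client.connect()

const authSig = await LitJsSdk.checkAndSignAuthMessage({ chain: 'ethereum' })

// Illustrative condition: only the data owner's wallet may decrypt.
const accessControlConditions = [{
  contractAddress: '',
  standardContractType: '',
  chain: 'ethereum',
  method: '',
  parameters: [':userAddress'],
  returnValueTest: { comparator: '=', value: '0x1234567890123456789012345678901234567890' },
}]

// Encrypt the payload, then store the key with the Lit network under the conditions.
const { encryptedString, symmetricKey } = await LitJsSdk.encryptString('sensitive user data')
const encryptedSymmetricKey = await client.saveEncryptionKey({
  accessControlConditions,
  symmetricKey,
  authSig,
  chain: 'ethereum',
})
// `encryptedString` + `encryptedSymmetricKey` would then be written to ComposeDB.
```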


Great feedback, tagging in @avi from Product for visibility

Update 02/10

Here’s our third update as part of the grant program:

Overview

During this sprint, we mostly worked on getting an ACL and encryption strategy for the data we store in Ceramic streams. We are going to start implementing Lit Protocol next week. We are also having some issues with the development environment, as compiling and deploying a new model on Ceramic is a pretty tedious process. We believe the process should be simplified, and we are considering building some dev tools around it. Finally, we decided to run our ComposeDB instance on AWS and allow access to the GraphQL endpoint from the public domain; the issue is that there is no security or auth strategy around that yet.

What’s next

Mostly going to integrate Lit Protocol. We also need to decide what data we will index before it gets encrypted, and how, likely using some key-value strategy to index it.

Blockers

Most of our issues are with the ComposeDB CLI: working with it has been a tedious process, and errors are not documented at all.


Update 02/26

Here’s our fourth update as part of the grant program:

Overview

Most of our focus this week was on closing our first live user flow. A user is now able to connect to Oamo using a wallet and instantiate their DID with this wallet. This also creates and updates their data model in ComposeDB (a sketch of the mutation follows). The main issue we ran into was the 0.4.0 SDK update: we were unaware of the rollout, and the day before our sprint closed, we needed to update all packages and related software.
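
For illustration, the ComposeDB write behind this flow looks roughly like the following; `OamoUser` and its fields are hypothetical stand-ins for our actual model:

```ts
// `compose` is a ComposeClient whose DID was set via did-session.
declare const compose: import('@composedb/client').ComposeClient

// Create the user's document after DID authentication; ComposeDB generates
// a create<ModelName> mutation for each model in the composite.
const result = await compose.executeQuery(
  `
  mutation CreateUser($input: CreateOamoUserInput!) {
    createOamoUser(input: $input) {
      document { id }
    }
  }
  `,
  { input: { content: { createdAt: new Date().toISOString() } } },
)
```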

What’s next

We are investigating the implementation of Lit Protocol for our app. We are also thinking about using ZK proofs to validate user eligibility for data pools; this way, we won’t need to decrypt the data every time we need to calculate a user’s eligibility for a given pool. We are also exploring the idea of creating a new DID for each wallet added by a user and associating them in a ComposeDB Family type. This needs to be investigated this week.

Blockers

Mostly the state of the documentation, which slows down the implementation of new features.

Update 03/17

Here’s our fifth update as part of the grant program:

Overview

Our focus was on implementing a multi-wallet DID framework, which we have successfully deployed. A user can now sign up to Oamo with a wallet, which instantiates their DID, and then add multiple wallets across EVM chains to their account via a sub-DID model that maps each DID-wallet pair to the main one (a sketch of the mapping follows).
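
As an illustration of the mapping (the model name and fields are hypothetical; our actual schema is more involved), the DID-wallet pairs could be expressed as a ComposeDB model:

```ts
// Hypothetical ComposeDB model linking a wallet's sub-DID to the primary DID.
const walletLinkSchema = `
  type WalletLink @createModel(accountRelation: LIST, description: "Maps a sub-DID/wallet pair to a primary DID") {
    primaryDid: DID!
    subDid: DID!
    chainId: String! @string(maxLength: 32)
    address: String! @string(maxLength: 64)
  }
`
```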

Work on the Lit Protocol implementation was temporarily paused to onboard three new engineers to Oamo, who will split responsibilities across the frontend, backend, and smart contract implementations.

What’s next

We will be implementing the data buyer data model as well as the criteria engine, so that qualifiers can be created for each wallet as it is added to a DID, identifying a user’s on-chain behavior across wallet holdings and smart contract transactions. A sketch of one such check follows.
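
A minimal sketch of one such qualifier check, assuming ethers v6 and a hypothetical `Qualifier` shape (this is not our actual criteria engine):

```ts
import { JsonRpcProvider, formatEther } from 'ethers'

// Hypothetical qualifier record produced by the criteria engine.
interface Qualifier {
  wallet: string
  criterion: string
  met: boolean
}

// Example criterion: the wallet holds at least `minEth` ETH.
async function checkEthBalance(wallet: string, minEth: number): Promise<Qualifier> {
  const provider = new JsonRpcProvider(process.env.RPC_URL) // placeholder RPC endpoint
  const balance = await provider.getBalance(wallet)
  return {
    wallet,
    criterion: `eth_balance>=${minEth}`,
    met: Number(formatEther(balance)) >= minEth,
  }
}
```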

Blockers

Mostly unblocked at this point thanks to great conversations with the Ceramic team in Denver.

Update 03/31

Here’s our sixth update as part of the grant program:

Overview

We have successfully implemented Lit Protocol to encrypt user data stored on ComposeDB. We have also deployed our data pool smart contract on testnet so we can start testing the deployment of data pools and participation based on credentials stored in ComposeDB.
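
For context, exercising pool participation on testnet looks roughly like the following; the ABI fragment, function signature, and addresses are placeholders rather than our deployed contract’s actual interface:

```ts
import { Contract, JsonRpcProvider, Wallet } from 'ethers'

// Illustrative ABI fragment; the real data pool contract differs.
const abi = ['function joinPool(uint256 poolId, bytes credential)']

const provider = new JsonRpcProvider(process.env.TESTNET_RPC_URL)
const signer = new Wallet(process.env.TEST_PRIVATE_KEY!, provider)

// Placeholder address for the deployed data pool contract.
const pool = new Contract('0x0000000000000000000000000000000000000000', abi, signer)

// Join pool #1, passing a serialized credential sourced from ComposeDB.
const tx = await pool.joinPool(1n, '0x')
await tx.wait()
```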

What’s next

We will continue the work on our criteria engine, which generates credentials for a DID based on the on-chain activities of its connected wallets and the off-chain activities of its connected social profiles (e.g. Twitter, Discord).

Blockers

Firstly, the Terraform script seems to be outdated and requires an update in order to function properly. Secondly, we encountered several errors while building or running the container due to a lack of proper documentation on how to use the Docker image as a base. This made the process much more difficult than it needed to be, and we had to spend a significant amount of time troubleshooting and researching possible solutions. In our opinion, improving the documentation would greatly benefit both current and future users of the project, as it would save them time and effort in the long run.


Yannick, apologies for the long delay in responding and for the hassle you had in setting up a node. We realize this is a pain point and have been investing in an easy, cloud-agnostic k8s node setup script.

We’re currently testing this internally here: GitHub - 3box/ceramic-infra-scripts: Scripts for deploying Ceramic nodes and we’re working on a guide & video on how to run it here: Running in the cloud k8s do by 3benbox · Pull Request #115 · ceramicstudio/js-composedb · GitHub

We’re expecting to officially release this May 11, and it’ll replace the Terraform AWS templates. We will actively maintain the k8s script - part of the challenge of maintaining the Terraform templates was that there were many of them, one for each cloud setup.

Besides the May 1 feedback session with our team, happy to set up 1:1 time with our devops engineers and Ahmed to make sure you have an easy, working solution going forward.

Thanks for the follow-up Avi. Let’s start with the meeting on May 1st and then happy to set up a follow-up call as needed.
