Proposal - Oamo - Data Network helping individuals monetize their data

Grant/Project Name: Oamo

Proposer’s contact info (please provide a reliable point of contact for the project): yannick@oamo.io

Grant Category: Tooling/Analytics

ELI5 Project Summary: Oamo is creating a platform that helps individuals own and monetize their data by granting companies access to it in exchange for rewards. Once they acquire user data, companies gain user behavior insights and can send targeted promotions and surveys to users.

Project Description:

Oamo is building a Data Marketplace. Its suite of tools helps users monetize their data by granting companies access to their on-chain and off-chain data in exchange for rewards.

Oamo’s data ecosystem consists of a Decentralized Identity Portal for users to manage and monetize their data, and a Data Management Platform for companies looking to acquire and analyze datasets. Both stakeholders are connected by Data Pools to aggregate and access datasets, and Oamo’s Messaging System to open communication lines between users and companies.

Companies can create data pools with specific participation criteria (e.g. smart contract activity, token holdings, demographic profile) to aggregate user data anonymously and leverage insights.
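As a rough illustration, participation criteria of this kind could be modeled as a simple data structure; the sketch below is purely hypothetical, and all names are ours rather than Oamo’s actual schema:

```typescript
// Hypothetical sketch of data pool participation criteria.
// All field and type names are illustrative, not Oamo's actual schema.
interface DataPoolCriteria {
  // On-chain requirements
  contractsInteractedWith?: string[]; // contract addresses the user must have called
  minTokenHoldings?: { tokenAddress: string; minBalance: string }[];
  // Off-chain / demographic requirements
  demographics?: { ageRange?: [number, number]; regions?: string[] };
}

interface DataPool {
  id: string;
  reward: { tokenAddress: string; amountPerParticipant: string };
  criteria: DataPoolCriteria;
}
```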

They can then communicate with participants directly via the platform to share user-specific promotions and surveys to complement insights.

Relevant links:

Ceramic Ecosystem Value Proposition:

  • What is the problem statement this proposal hopes to solve for the Ceramic ecosystem? We are looking to increase the number of Ceramic users and the amount of data stored on Ceramic.

  • How does your proposal offer a value proposition solving the above problem? Oamo onboards individuals to Ceramic via its Decentralized Identity Portal, where users create a DID and connect their wallets and social profiles. Users then store their on-chain and off-chain data on Ceramic and manage access to this data.

Companies onboard to Ceramic via our Data Management Platform where they can gain access to aggregated and anonymized user data in the form of insights. They can then send promotions and surveys to specific user segments by communicating with their DID.

  • Why will this solution be a source of growth for the Ceramic ecosystem? If successful in our goal to decentralize the data brokerage industry, Oamo will onboard millions of users and thousands of companies to the Ceramic ecosystem. They will store personal data, analyze aggregated data, and communicate via their Ceramic-based DIDs.

Funding requested (DAI/USDC): 40,000 USDC

How much are you applying for? Make sure to break the amount requested down by milestones

Milestones:

  • Milestone #1: Complete our data permissioning layer (2 months) - [$20,000]
    • Kick off the database architecture and development environment setup. 50 hours @ $50/h
    • Build a data aggregator that digests specific contracts and events and indexes them in a format that allows our data science team to generate meaningful insights. 180 hours @ $50/h
    • Build the smart contract that governs the data pool and create the user flow to customize it. 150 hours @ $50/h
  • Milestone #2: Launch the DID MVP (3 months) - [$20,000]
    • Create the CI/CD and sysops setup needed to run staging and production environments. 80 hours @ $50/h
    • Initial DID setup and user onboarding flow. 70 hours @ $50/h
    • Build the ComposeDB schema and ingestion app that will push all the relevant data from our SQL database to Ceramic once the user is onboarded. 120 hours @ $50/h
    • Build the user dashboard that will allow a user to accept pool onboarding requests, browse offers, and see their progression on the platform. 130 hours @ $50/h

I understand that I will be required to provide additional KYC information to 3Box Labs to receive this grant: Yes


Hi @YFolla, thank you for your proposal! We will be in touch with an update once we have completed our initial review (1-2 weeks).

One quick question: Is parallelization possible in your milestones #1/#2? This initial round of grants is trying to focus on funding proposal roadmaps through ETH Denver (~3 months), and I am noticing that your milestones would take ~5 months if they are indeed sequential.

Hi Sam,

Noted; I didn’t know we needed to aim for ETH Denver delivery, so we’ll update the milestones by tomorrow.

Best,

Yannick

@0x_Sam I don’t think I can edit the initial post (unless I’m mistaken), so here is the updated milestone breakdown to be delivered by ETH Denver. We’ll hire an extra engineer to complete it on time.

  • Milestone #1: ETL pipeline development ($10,660 USDC)
    • Kick off the database architecture and development environment setup. 80 hours @ $41/h
    • Build an ETL pipeline for extracting web3 and web2 data, allowing our data science team to generate meaningful insights and data owners to check their data pool eligibility. 180 hours @ $41/h
  • Milestone #2: Smart contract development ($8,200 USDC)
    • Build the smart contract that governs the data pool and create the user flow to customize it. 200 hours @ $41/h
  • Milestone #3: Launch the DID MVP ($20,500 USDC)
    • Create the CI/CD and sysops setup needed to run staging and production environments. 120 hours @ $41/h
    • Initial DID setup using self.id and the user onboarding flow. 150 hours @ $41/h
    • Build and deploy the ComposeDB data models. 120 hours @ $41/h
    • Build the user dashboard that will allow a user to opt in to data pools. 130 hours @ $41/h
    • Build the data buyer dashboard that will allow for the creation of data pools. 130 hours @ $41/h

Hi @YFolla, thank you for your grant proposal.

The team has reviewed your proposal and we are excited to award you a Ceramic Sculptors Grant :tada:

We will follow up shortly with more details via email.

[ Update 01/07 - v0.0.1 ]

Here’s our first update as part of the grant program:

Overview

During the last few weeks, we have been de-risking our architecture choices, and during the next sprint we will start implementing the core features of our app. For now, our application consists of:

  • A Next.js application that talks to our API, a GraphQL endpoint served by Hasura
  • A few hooks that link the data between Hasura and ComposeDB
  • A Postgres DB that stores the synced data from our indexer
  • Business logic split into serverless functions triggered by Hasura actions (see the sketch below)
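To make the last point concrete, a Hasura action forwards a GraphQL mutation to an HTTP handler. A minimal sketch of such a serverless function follows; the payload fields match Hasura’s action conventions, but the handler and business logic are illustrative:

```typescript
// Minimal sketch of a serverless function triggered by a Hasura action.
import type { VercelRequest, VercelResponse } from '@vercel/node';

export default async function handler(req: VercelRequest, res: VercelResponse) {
  // Hasura wraps the mutation arguments in `input` and forwards the caller's
  // session variables (e.g. x-hasura-user-id) in `session_variables`.
  const { input, session_variables } = req.body;

  // ...business logic here, e.g. validating `input` and writing to Postgres...

  // The response shape must match the action's declared output type in Hasura.
  res.status(200).json({ ok: true, userId: session_variables?.['x-hasura-user-id'] });
}
```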

We have spun up our Ceramic node and mapped out data flows from Web2 (Twitter, Google, Facebook, Discord) and Web3 data sources to ComposeDB.

We have defined the data models required for our implementation and will be developing custom schemas to link Hasura to ComposeDB.
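For reference, ComposeDB models are defined in GraphQL SDL and compiled into a composite before deployment. A minimal sketch using the devtools API is below; the model itself is illustrative, not our final schema:

```typescript
// Sketch: defining a ComposeDB model and creating a composite from it.
// The model below is illustrative, not our final schema.
import { CeramicClient } from '@ceramicnetwork/http-client';
import { Composite } from '@composedb/devtools';

const schema = `
  type SocialProfile @createModel(accountRelation: SINGLE, description: "Linked web2 profile") {
    platform: String! @string(maxLength: 50)
    handle: String! @string(maxLength: 100)
  }
`;

// The Ceramic client must be authenticated with an admin DID to create models.
const ceramic = new CeramicClient('http://localhost:7007');
const composite = await Composite.create({ ceramic, schema });
```

The same can also be done from the command line with the composedb CLI (composite:create / composite:deploy).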

What’s next

Our focus for the next sprint week will be to deploy our ComposeDB instance in our staging environment and develop the user authentication flow, so users can log in to the app with an Ethereum wallet to instantiate their DID.
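For the authentication step, the standard pattern with Ceramic’s did-session library should look roughly like this (the wildcard resource scope below is a placeholder):

```typescript
// Sketch: authenticating a user's DID from an Ethereum wallet with did-session.
import { DIDSession } from 'did-session';
import { EthereumWebAuth, getAccountId } from '@didtools/pkh-ethereum';

// Assumes an injected Ethereum provider (e.g. MetaMask) in the browser.
const ethProvider = (window as any).ethereum;
const addresses = await ethProvider.request({ method: 'eth_requestAccounts' });
const accountId = await getAccountId(ethProvider, addresses[0]);
const authMethod = await EthereumWebAuth.getAuthMethod(ethProvider, accountId);

// Authorize a session; in practice the resources should be scoped to the
// specific ComposeDB models the app writes to, not a wildcard.
const session = await DIDSession.authorize(authMethod, {
  resources: ['ceramic://*'],
});
// session.did can now be attached to the Ceramic/ComposeDB client.
```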

Thereafter, we will start fetching Web2 and Web3 data and develop our implementation of Verifiable Credentials to store credentials specific to each data source in ComposeDB.

Blockers

No blockers for now.

[ Update 01/27 - v0.0.1 ]

Here’s our second update as part of the grant program:

Overview

During the last week, we dove into the integration between ComposeDB and our own infrastructure. While we ran into a number of hurdles, described in Blockers, we progressed quite a bit towards a successful integration:

  • After extensive research into ComposeDB, we decided to wrap the ComposeDB GraphQL endpoint under our Hasura GraphQL endpoint, so that our backend only has to talk to a single endpoint. The Hasura team has been supportive in showing us how to implement the remote schema and connect it to ComposeDB (see the sketch after this list).
  • We started indexing web3 data using Transpose and storing the data in Hasura. We will then connect Hasura to ComposeDB to store and encrypt user data with their DID.
  • We started connecting web2 data sources (Twitter done; Discord, Google, and Facebook next) and started building our own implementation of verifiable credentials, which we call “qualifiers”.
  • We built a Dockerized version of ComposeDB to streamline dev work and staging deployment. We will need to explore how to secure this deployment; as of now, no JWT is required to talk to the GraphQL endpoint.
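For the first point above, registering ComposeDB’s GraphQL server as a Hasura remote schema can be done through Hasura’s metadata API; a sketch follows, where the URLs, secret, and schema name are ours to define:

```typescript
// Sketch: adding the ComposeDB GraphQL server as a Hasura remote schema
// via Hasura's metadata API. URLs and names are illustrative.
const res = await fetch('https://our-hasura-instance/v1/metadata', {
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
    'x-hasura-admin-secret': process.env.HASURA_ADMIN_SECRET!,
  },
  body: JSON.stringify({
    type: 'add_remote_schema',
    args: {
      name: 'composedb',
      definition: {
        url: 'http://composedb-server:5005/graphql', // our ComposeDB endpoint
        forward_client_headers: true,
        timeout_seconds: 60,
      },
    },
  }),
});
if (!res.ok) throw new Error(`Failed to add remote schema: ${await res.text()}`);
```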

What’s next

We will spend most of next week implementing more Web2 data sources and expanding the number of smart contracts we index. We will also need to figure out an encryption strategy, as we plan to write sensitive user data to Ceramic and have yet to find a way to permission it (see the Blockers section for more details).

Blockers

It took us a while to understand and work around the fact that ComposeDB’s permission model does not provide any ability to permission the data itself. We decided to use Lit Protocol to encrypt specific data, but this removes the ability to query that data. We would like to see what the roadmap could look like for ComposeDB to implement privacy parameters for certain sensitive data in the long run.
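For context, the workaround we are exploring looks roughly like the following: encrypt the sensitive field with Lit before writing it to ComposeDB, gating decryption on access control conditions. This sketch follows the Lit JS SDK’s string-encryption flow; exact names may differ across SDK versions, and the access control condition is a placeholder:

```typescript
// Sketch: encrypting a sensitive field with Lit Protocol before storing it
// in ComposeDB. Based on the lit-js-sdk string-encryption flow; the access
// control condition below is illustrative only.
import LitJsSdk from 'lit-js-sdk';

const client = new LitJsSdk.LitNodeClient();
await client.connect();

const chain = 'ethereum';
const authSig = await LitJsSdk.checkAndSignAuthMessage({ chain });

// Placeholder condition: any address holding ETH may decrypt.
const accessControlConditions = [{
  contractAddress: '',
  standardContractType: '',
  chain,
  method: 'eth_getBalance',
  parameters: [':userAddress', 'latest'],
  returnValueTest: { comparator: '>=', value: '0' },
}];

// Encrypt the field, then persist the symmetric key to the Lit network.
const { encryptedString, symmetricKey } = await LitJsSdk.encryptString('sensitive user data');
const encryptedSymmetricKey = await client.saveEncryptionKey({
  accessControlConditions,
  symmetricKey,
  authSig,
  chain,
});
// We would store `encryptedString` and `encryptedSymmetricKey` in ComposeDB;
// since only ciphertext lands on Ceramic, the field can no longer be queried.
```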