Updates:
04-Jan-2021
- Updated the logic of the conditional rewards calculation to account for the total percentage of the allocation used
- Added explanations on the calculations to the tables
This proposal describes the initial functionality of the Jet platform, the processes around it, and the voting mechanics. Readers are expected to be familiar with the Aurora staking proposal.
General considerations
As previously stated by the DAO, 200M AURORA tokens are devoted to the community treasury. AURORA token holders should be able to decide on the distribution of these tokens to initiatives and projects that are planning to launch on Aurora. To simplify interaction with this treasury, the Jet platform is being developed. Currently (30-Dec-2021) Jet is under development and around 20% ready, so many flows can already be shown, although mechanics may still be updated or added.
Note: the previous DAO voting made it clear that the platform should be created, so Aurora Labs commenced the development.
The basic idea is to reuse the mechanics of crowdfunding platforms like Kickstarter. However, since users are not transferring their AURORA tokens to the projects but voting on distribution from the community treasury, it was decided to introduce periodicity in the allocations. So Jet operates in seasons. Each season is a complete cycle of applications, voting, and distribution of the community grants. A season lasts approximately 2 months, with the application and voting phases each lasting around 1 month. Season mechanics are tightly coupled with AURORA staking.
Roles
There are three main roles for people interacting with Jet:
- Project. This role represents a user or set of users willing to receive an allocation from the community treasury.
- User. This role represents an AURORA token holder with VOTE tokens available for voting for projects in the voting phase.
- Curator. This is a technical role of the maintainers of the Jet platform.
Curators exist to simplify project and user journeys within the Jet platform. Their goal is to guide projects through the process of applying and updating statuses, and to help users obtain relevant information about the projects. Curators are also responsible for providing aggregated information about seasons and winning/losing projects, and for producing educational materials.
Project flow
Project flow consists of the following stages:
- Application. The project should collect all the relevant info about the proposal and submit it through the submission form.
- Curators’ review. After submission, the project is not automatically published on Jet. On the contrary, it is delivered to Curators in draft form. Curators help the project update the submission with all the knowledge they have. The goal of the review is to help projects create a meaningful application with enough detail.
- Curators’ approval. Once the application is in good shape, curators approve it and it will be listed during the voting phase.
- Voting. At this stage the project owners await the community vote.
- Execution. If the voting is successful, the project receives a grant and is able to execute its proposed roadmap.
- Reporting. At specific milestones, set by the project at the application stage, the project owners should report on progress. Together with the curators, the project owners review the progress and publish a report. Curators are responsible for assigning scores to the chosen KPIs.
Note: A large inflow of submissions to Jet is expected. It would be extremely complicated for users to assess submissions made in an unmoderated way, which is why step 3 is introduced in the plan above. Over time, Jet is expected to become fully permissionless, to guarantee the absence of censorship. This will only be possible once profound criteria for submissions are developed.
Project application
The project’s application should consist of a substantial amount of information:
- Description of the project in free form, with an explanation of how the funding will be applied and all the relevant materials. Images and videos are supported.
- Grant structure. The distribution of the funding between different buckets, see below. Grant structure should be specified for each milestone, including the post-payment.
- Milestones of the project and KPIs for each milestone. A project can request funding for an arbitrary period of time; however, a project should have milestones at least once per 4 months. At every milestone, the project should submit updates and curators should assess the KPIs.
Grant structure
To support thoughtful applications, grants from the community treasury have a bucket structure with different purposes. Each bucket has a weighting coefficient that is applied when calculating the total cost of the project. Some of the buckets may be allocated in NEAR tokens through the partnership with NEAR Foundation or Proximity Labs.
- Operational budget. Allocated in NEAR or AURORA, weighting coefficient 1.5. The part of the grant that is to be spent for salaries, servers, tools and licenses.
- Marketing budget. Allocated in AURORA staked for 1 year, weighting coefficient 0.5. The part of the grant that is to be used by the project to do staked AURORA drops to their users. This can be implemented by depositing AURORA into the staking contract on behalf of another user.
- Long-term incentives. Allocated in locked AURORA, weighting coefficient 0.5. The part of the grant that is to be allocated to the project owners in return for the (future) project tokens deposited in the AURORA staking contract as a new stream (see AURORA staking).
- Protocol-specific incentives. Allocated in AURORA, weighting coefficient 1. The part of the grant that is to be used as incentives (for example, double farming for AMM protocols). The difference with the operational budget is in the final recipient of the AURORA: in this case it is the Aurora community, not the project team.
- Liquidity. Allocated in NEAR or AURORA, weighting coefficient 0.25 for each 3 months. The part of the grant that is to be used by the project to feed AMM pools or set up its own operations.
Let’s consider an example of a project requesting funding and the calculation of the weighted size of the grant.
The project was just born at a hackathon and requires some additional work before the release of its main features. The project team proposes one milestone (3 months for development) and splits the grant into two parts: the first focused on finalising the development and the second on the rollout.
| Bucket | Pre-payment, thousands USD | Weighted pre-payment, thousands USD | Post-payment, thousands USD | Weighted post-payment, thousands USD |
|---|---|---|---|---|
| Operational budget | 200 | 300 | 100 | 150 |
| Marketing budget | 10 | 5 | 30 | 15 |
| Long-term incentives | 100 | 50 | 0 | 0 |
| Protocol-specific incentives | 0 | 0 | 50 | 50 |
| Liquidity | 500, 3 months | 125 | 1000, 3 months | 250 |
| Weighted budget | | 480 | | 465 |
| TOTAL | | | | 945 |
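The weighted-budget calculation above can be sketched in a few lines of Python. This is an illustrative sketch only, not part of the Jet implementation; the bucket names are shorthand for the buckets defined in this section, and amounts are in thousands of USD.

```python
# Weighting coefficients per bucket, as defined in the grant structure.
WEIGHTS = {
    "operational": 1.5,
    "marketing": 0.5,
    "long_term": 0.5,
    "protocol": 1.0,
    "liquidity": 0.25,  # applied per 3-month period
}

def weighted_budget(request: dict) -> float:
    """Sum of bucket amounts (thousands USD) multiplied by their weights."""
    total = 0.0
    for bucket, amount in request.items():
        weight = WEIGHTS[bucket]
        if bucket == "liquidity":
            amount, periods = amount  # (amount, number of 3-month periods)
            weight *= periods
        total += amount * weight
    return total

# The example project's two payments from the table above.
pre = weighted_budget({"operational": 200, "marketing": 10,
                       "long_term": 100, "liquidity": (500, 1)})
post = weighted_budget({"operational": 100, "marketing": 30,
                        "protocol": 50, "liquidity": (1000, 1)})
print(pre, post, pre + post)  # 480.0 465.0 945.0
```

Running this reproduces the 480 / 465 / 945 figures from the table.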
Though the described grant structure is more complicated than the conventional approach, it brings the goal of decentralised ecosystem building much closer: projects spend a bit of additional time thinking through the actual budget, and users are able to understand how a project would use the allocated funds.
Depending on the total weighted budget of the project and the voting results, a decision is made on financing the project.
Voting results calculation
The allocation of funding from the community treasury to the projects depends on the percentage of VOTEs allocated by the users to the specific project.
Let’s consider this rule in detail with the following example:
Imagine the community treasury allocating a weighted $1M for grants in the current season. Only three projects have applied and been approved by curators, and 6600 VOTE tokens are used for voting (accounting for the decay of VOTE tokens, see AURORA staking).
| Projects | Project 1 | Project 2 | Project 3 | Comment |
|---|---|---|---|---|
| Requested weighted budget | 500 | 200 | 500 | - |
| Percentage of season allocation | 50% | 20% | 50% | (project budget) / (season allocation) |
| VOTE allocated | 100 | 3000 | 3500 | Voting results |
| TOTAL, VOTEs | 6600 | | | Sum of all VOTEs |
| Percentage of VOTE | 1.52% | 45.45% | 53.03% | (VOTEs for project) / (total VOTEs) |
| Funding decision | No | Yes | Yes | Yes if the VOTE percentage is higher than the percentage of the season allocation; No otherwise |
| TOTAL, allocated budget | $700k | | | Sum of the budgets for all projects that passed the voting |
NOTE: Depending on the actual budget structure of projects 2 and 3, the actual amount of money allocated to each project may differ from 200 and 500 thousand USD.
NOTE: In this example, the projects requested more in total than was allocated to the season. Thus, the limit of available funding introduces natural competition between projects, which improves their quality.
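The funding rule from the table can be sketched as follows. This is an illustrative sketch, not the Jet implementation; amounts are in thousands of USD, and the rule is the one stated above: a project is funded when its share of VOTEs exceeds its share of the season allocation.

```python
SEASON_ALLOCATION = 1000  # thousands USD, weighted

# name: (requested weighted budget, VOTEs received)
projects = {
    "Project 1": (500, 100),
    "Project 2": (200, 3000),
    "Project 3": (500, 3500),
}

total_votes = sum(v for _, v in projects.values())  # 6600

# Fund a project iff its VOTE share exceeds its share of the allocation.
funded = {
    name: budget
    for name, (budget, votes) in projects.items()
    if votes / total_votes > budget / SEASON_ALLOCATION
}
print(funded)                 # {'Project 2': 200, 'Project 3': 500}
print(sum(funded.values()))   # 700
```

This reproduces the funding decisions and the $700k allocated budget from the table.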
Reporting
As mentioned earlier, after each milestone the project should report its progress to the community. The project submits a draft report to the curators, and the report is validated together with them. Based on the report, curators assess the KPIs and assign scores. If the mean score of the KPIs is less than 30%, the funding of the project ends. The project’s KPI results influence the user rewards (see below).
Late submission of a report is treated as a breach of obligations to the community and means an immediate halt of funding (and, potentially, blacklisting of the respective stream).
User flow
Active participation of the users is required only during the voting phase. During this phase users are expected to research the projects approved by the curators and vote for those they deem most relevant to the development of the Aurora ecosystem.
Users are eligible for rewards for their activity on Jet. The rewards are split into 2 buckets: unconditional and conditional. Unconditional rewards constitute 20% of the user rewards pool; conditional rewards constitute 80%. Conditional rewards are split between projects in accordance with their requested weighted budgets and distributed with the milestone mean KPI as a weighting coefficient. All rewards are distributed between users pro rata to the amount of VOTE tokens used.
Unconditional rewards are distributed right after the end of the voting, while conditional rewards are distributed over the lifetime of the project, depending on its milestones and KPIs.
To make the approach clearer, let’s return to the voting results example and assume that $100k is allocated for user rewards this season. Let’s see how these rewards would be distributed among the users.
| Projects | Project 1 | Project 2 | Project 3 | Comment |
|---|---|---|---|---|
| Requested weighted budget | 500 | 200 | 500 | - |
| Percentage of season allocation | 50% | 20% | 50% | (project budget) / (season allocation) |
| VOTE allocated | 100 | 3000 | 3500 | Voting results |
| TOTAL, VOTEs | 6600 | | | Sum of all VOTEs |
| Percentage of VOTE | 1.52% | 45.45% | 53.03% | (VOTEs for project) / (total VOTEs) |
| Funding decision | No | Yes | Yes | Yes if the VOTE percentage is higher than the percentage of the season allocation; No otherwise |
| TOTAL, allocated budget | $700k | | | Sum of the budgets for all projects that passed the voting |
| Allocation efficiency | 70% | | | (allocated budget) / (season allocation) |
| TOTAL, unconditional rewards | $20k | | | 20% of $100k |
| Unconditional rewards | $0.30k | $9.09k | $10.61k | pro rata on the VOTE distribution |
| TOTAL, conditional rewards, max | $56k | | | (80% of $100k) × (allocation efficiency) |
| Conditional rewards, max | - | $16k | $40k | pro rata, based on requested funding |
| Project KPIs | - | 70% | 50% | |
| Conditional rewards, distributed | - | $11.2k | $20k | (conditional rewards, max) × (KPI score) |
| TOTAL, conditional rewards | $31.2k | | | |
NOTE: if a project has multiple milestones, conditional rewards are distributed pro rata to the weighted budgets of the milestones.
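The rewards split above can be sketched as follows. This is an illustrative sketch using the worked example’s numbers, not the actual Jet implementation; amounts are in thousands of USD.

```python
REWARDS_POOL = 100.0        # thousands USD, user rewards for the season
SEASON_ALLOCATION = 1000.0  # thousands USD, weighted

total_votes = 6600
votes = {"Project 1": 100, "Project 2": 3000, "Project 3": 3500}
funded_budgets = {"Project 2": 200, "Project 3": 500}
kpi_scores = {"Project 2": 0.70, "Project 3": 0.50}

# Unconditional rewards: 20% of the pool, pro rata on all VOTEs used.
unconditional_pool = 0.2 * REWARDS_POOL
unconditional = {p: unconditional_pool * v / total_votes
                 for p, v in votes.items()}

# Conditional rewards: 80% of the pool scaled by allocation efficiency,
# split pro rata on funded weighted budgets, then scaled by KPI score.
efficiency = sum(funded_budgets.values()) / SEASON_ALLOCATION  # 0.7
conditional_pool = 0.8 * REWARDS_POOL * efficiency             # 56.0
total_funded = sum(funded_budgets.values())
conditional = {
    p: conditional_pool * b / total_funded * kpi_scores[p]
    for p, b in funded_budgets.items()
}
print(conditional)  # {'Project 2': 11.2, 'Project 3': 20.0}
```

This reproduces the $56k maximum conditional pool and the $11.2k / $20k distributed amounts from the table.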
There are multiple mechanics in the approach above:
- Everyone who devotes their time to the voting process gets an unconditional reward.
- Users that correctly assess which projects will get funding receive additional conditional rewards. This mechanic mitigates lazy behaviour, where a user votes for a random project just to collect rewards.
- The conditional rewards pool is quite big, but the community is incentivised to vote carefully, since the more of the season allocation is distributed, the more rewards the users get. This mitigates the scenario in which everyone votes for a small, simple project that is sure to be delivered.
- User rewards depend on the project’s deliverables. This creates a strong bond between the project team and its voters: if the project performs well, the users are happy.
- Users that vote with the majority get lower rewards. Indeed, the conditional rewards distribution depends on the weighted budget. So in the example above, users that voted for project 2 in fact got a much lower amount of conditional rewards per VOTE (up to $3.73 per VOTE) than the voters for project 3 ($5.71 per VOTE), even though project 3 performed worse.
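The per-VOTE comparison in the last point can be checked with a short, illustrative snippet (numbers taken from the rewards table above; not part of the Jet implementation):

```python
# Distributed conditional rewards (thousands USD) and VOTEs per project,
# from the worked example above.
conditional = {"Project 2": 11.2, "Project 3": 20.0}
votes = {"Project 2": 3000, "Project 3": 3500}

# Reward per VOTE in USD (x1000 converts thousands USD to USD).
per_vote = {p: 1000 * conditional[p] / votes[p] for p in conditional}
print(per_vote)  # Project 2 ≈ 3.73 USD/VOTE, Project 3 ≈ 5.71 USD/VOTE
```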
Draft designs
Home page:
Project page:
User profile: