NEAR House of Stake Mission Vision Values Policy

Vision

Enable decentralized Tokenholder participation and facilitate decision-making in the best interest of the NEAR Ecosystem.

Mission

To establish an evolving governance system that is incorruptible, uncapturable, and sovereign by default; co-created, co-governed, and co-operated by an AI-augmented NEAR Stakeholder community.

Values

  1. Credible Neutrality
  2. Experimentation with Safety
  3. Builder and Business Centric
  4. Autonomy with Accountability
  5. Adaptive Governance
  6. Inclusive & Meaningful Participation
  7. Transparency with Dignity
  8. AI-Augmented, Human-Governed
  9. Public Goods as Growth Engines
  10. Cultural Stickiness

Principles and behavioral tests behind the values

1. Credible Neutrality

Principle: Governance must be built by, with and for NEAR Stakeholders, augmented by Stakeholder-aligned AI that enhances transparency, intelligence and fairness, ensuring freedom from control and capture by individuals, institutions, or closed groups.

Behavioral Test: Does this action avoid concentrating power, e.g. by protecting against a few top Stakeholders gaining overbearing control over the rest?

2. Experimentation with Safety

Principle: Governance models, funding mechanisms, and AI agents and tools are tested, via rapid prototyping and iteration, in lower-stakes environments before being merged into the main system.

Behavioral Test: What's the worst that could happen if an experiment fails? Can it fail without endangering the overall ecosystem’s health and integrity?

3. Builder and Business Centric

Principle: Governance must create the conditions for both individual developers and institutions to thrive – from the developer experience to enterprise-scale adoption. This includes funding the infrastructure, tools, and programs that make NEAR the most attractive platform for adoption that scales.

Behavioral Test: Does this decision improve NEAR as a place where developers, entrepreneurs, and enterprises can build great products and lasting businesses?

4. Autonomy with Accountability

Principle: Workstreams and contributors have the freedom to innovate, balanced with clear success gates and measurable outcomes. Stakeholder-governed mechanisms should be in place for setting and regularly reviewing these objectives in a fair and transparent way, keeping human and AI activity oriented towards our mission.

Behavioral Test: Does this program have both the freedom to act and clear metrics to evaluate success?

5. Adaptive Governance

Principle: Governance should evolve iteratively, guided by feedback loops and data-driven continuous learning systems that sense and respond to changing ecosystem needs and emerging opportunities.

Behavioral Test: Is there a mechanism to review and adapt this process if it no longer serves the ecosystem?

6. Inclusive & Meaningful Participation

Principle: All Stakeholders – large and small – must have meaningful ways to engage in governance. Decision-making influence may be proportional to stake, but our governance system must provide opportunities for knowledgeable NEAR Stakeholders to contribute and add value, e.g. by authoring proposals or serving as Screening Committee Members.

Behavioral Test: Are we creating opportunities for Stakeholders to contribute value, even if they don’t have significant stake-weighted voting power?

7. Transparency with Dignity

Principle: Decisions, funding, and performance are open and legible, while respecting privacy and personal boundaries.

Behavioral Test: Can this be shared with NEAR Stakeholders to enhance collective intelligence, without compromising anyone's right to privacy?

8. AI-Augmented, Human-Governed

Principle: We embrace AI as a tool for fair, representative, efficient, and adaptive governance at scale. AI agents can be core participants in our governance processes. We build such agents in a decentralized, open-source, and permissionless way, requiring that they operate transparently and in adherence with all of our values, so they can act as neutral, NEAR Stakeholder-aligned governance participants.

Behavioral Test: Does this use of AI improve fairness, participation, efficiency or collective intelligence, while reinforcing our values and providing sufficient transparency and oversight for humans in the loop?

9. Public Goods as Growth Engines

Principle: We invest in shared infrastructure, tools, and governance systems, building out a data-driven governance layer for use by humans and AI alike, as a powerful enabler of compounding network effects.

Behavioral Test: Will this investment increase the resilience, long-term potential and growth of the ecosystem beyond one project or cycle?

10. Cultural Stickiness

Principle: The DAO cultivates rituals, norms, and shared ownership that build loyalty across diverse participants.

Behavioral Test: Does this initiative make contributors more likely to identify with NEAR and remain engaged long-term?