HMP-0005

Source: HMP-0005.md

┌────────────────────────────────────────────────────────────────────────────┐
│ ⚠️ **Note:** This document is a DRAFT of the HMP specification version 5.0 │
└────────────────────────────────────────────────────────────────────────────┘

HyperCortex Mesh Protocol (HMP) v5.0

Document ID: HMP-0005
Status: Draft
Category: Core Specification
Date: October 2025
Supersedes:
- HMP-0004 v4.1
- HMP-container-spec.md v1.2
- dht_protocol.md v1.0

Summary:
HMP v5.0 unifies the cognitive, container, and network layers into a single architecture in which autonomous agents interact through verifiable data containers, using decentralized distribution and semantic search.
This version formalizes the container format for the first time, integrates the DHT as the base network layer, and introduces a uniform scheme for signatures, proofs, and consensus.


Abstract

The HyperCortex Mesh Protocol (HMP) defines a distributed cognitive framework where autonomous agents cooperate to create, exchange, and align knowledge without centralized control or authority.

Unlike traditional peer-to-peer systems, HMP is designed for semantic coherence rather than simple message exchange.
Agents in the Mesh reason collaboratively — maintaining cognitive diaries, building semantic graphs, and reaching ethical and goal-oriented consensus through verifiable interactions.

Version 5.0 introduces a unified container architecture (HMP Container) and a native DHT-based discovery layer, enabling verifiable, interest-aware, and offline-resilient communication between agents.
All messages, states, and cognitive records are now transmitted as signed containers, forming immutable proof chains that ensure auditability and ethical transparency across the mesh.

This document defines the architecture, data formats, communication protocols, and trust mechanisms that constitute the HMP v5.0 Core Specification.


Keywords: decentralized cognition, distributed AI, containers, DHT, proof chain, cognitive agents, ethical protocols


1. Overview

1.1 Purpose and Scope

The HyperCortex Mesh Protocol (HMP) defines a decentralized cognitive architecture where autonomous agents exchange and evolve knowledge through a unified model of containers, cognitive workflows, and distributed consensus.

Version 5.0 consolidates three foundational layers into a single cohesive framework:

  • Cognitive Layer — defines how meaning is created, reasoned about, and aligned through semantic graphs, goals, and ethical evaluation.
  • Container Layer — introduces a universal data envelope (HMP-Container) for all cognitive objects, ensuring atomicity, immutability, and traceable proof chains.
  • Network Layer — integrates a DHT-based peer-to-peer substrate for decentralized discovery, routing, and propagation of containers.

HMP v5.0 is intended for researchers, engineers, and developers building autonomous or semi-autonomous agents that require:

  • persistent reasoning and long-term memory;
  • semantic interoperability across heterogeneous systems;
  • decentralized consensus on cognitive, ethical, and goal-oriented decisions;
  • ethical auditability and verifiable transparency in reasoning.


1.2 Core Principles

Decentralization.
Every agent in the Mesh acts as an independent cognitive node. No central authority exists — meaning, trust, and governance emerge through local interactions and consensus.

Cognitive Autonomy.
Agents reason, learn, and self-correct independently, while sharing their conclusions via containers that can be verified, endorsed, or refuted by peers.

Containerization.
All data, reasoning traces, goals, and votes are encapsulated in immutable containers with cryptographic signatures. This ensures integrity and consistent verification across the network.

Ethical Propagation.
Ethical reasoning is a first-class citizen of HMP. Each decision or goal can be accompanied by ethical justifications and subject to distributed voting.

Proof-Chains and Verifiable History.
Each piece of knowledge forms part of a traceable chain (proof_chain) linking back to its origin. Agents can reproduce reasoning paths and audit historical context.

Interoperability and Evolution.
The protocol is designed to evolve — cognitive, container, and DHT layers can be independently extended without breaking compatibility.


1.3 Changes since v4.1

HMP v5.0 introduces a major architectural shift toward unified containerization and integrated DHT networking.

| Area | Change Summary |
| ------------------- | --------------------------------------------------------------------------------------------------------------------------- |
| Data exchange model | All messages are now encapsulated in standardized containers (HMP-Container) with metadata, signatures, and versioning. |
| Networking layer    | DHT becomes a native component of HMP, enabling distributed discovery, replication, and retrieval of containers. |
| Consensus model     | Moved from centralized proposal aggregation to container-linked voting, allowing any container to accumulate votes and reactions. |
| Trust & security    | Signatures and proof-chains unify authentication across all layers; snapshot verification includes container linkage. |
| Workflows           | Cognitive cycles (workflow_entry containers) formalize the REPL loop of thinking, publishing, and reflection. |
| Structure           | The specification merges HMP, container, and DHT layers into one cohesive document, simplifying navigation and implementation. |

1.4 Terminology and Abbreviations

| Term | Definition |
| ------------------- | ------------------------------------------------------------------------------------------------------------------------------- |
| HMP                 | HyperCortex Mesh Protocol — a decentralized cognitive communication standard. |
| Container           | Atomic, signed JSON object encapsulating cognitive data and metadata. |
| WorkflowEntry       | Container recording a reasoning step or workflow action. Represents a unit of the agent’s cognitive workflow. |
| CognitiveDiaryEntry | Container representing an internal reflection or summarized cognitive state; part of the agent’s cognitive diary. |
| DHT                 | Distributed Hash Table — the foundational peer-to-peer structure in HMP used for lookup, replication, and data distribution, including node discovery. |
| NDP                 | Node Discovery Process — a functional layer within the DHT responsible for peer discovery, interest-based lookup, and address advertisement. (Formerly a separate protocol.) |
| Proof-chain         | Cryptographic sequence linking containers through fields such as in_reply_to and relation. Enables verifiable semantic lineage. |
| Cognitive Layer     | Logical layer handling reasoning, goals, ethics, and consensus mechanisms. |
| Mesh                | The collective network of autonomous agents exchanging containers over HMP. |
| TTL                 | Time-to-live — lifespan of a container before expiration or archival. |
| Agent               | Autonomous cognitive node participating in the Mesh via HMP protocols. |
| Consensus Vote      | A container expressing approval, rejection, or reaction to another container (used in consensus workflows). |
| CogSync             | Cognitive Synchronization Protocol — abstraction for synchronizing cognitive diaries and semantic graphs. |
| CogConsensus        | Mesh Consensus Protocol — defines how agents reach agreement on container outcomes. |
| GMP                 | Goal Management Protocol — governs creation, negotiation, and tracking of goals. |
| DCP                 | Distributed Container Propagation — protocol for transmitting and replicating containers. |
| EGP                 | Ethical Governance Protocol — defines moral and safety alignment mechanisms. |
| IQP                 | Intelligence Query Protocol — standardizes semantic queries and information requests. |
| SAP                 | Snapshot and Archive Protocol — defines container snapshots and archival mechanisms. |
| MRD                 | Message Routing & Delivery — specifies routing, addressing, and delivery logic. |
| RTE                 | Reputation and Trust Exchange — defines reputation metrics and trust propagation. |
| DID                 | Decentralized Identifier — persistent, verifiable identifier used for agents, containers, or resources within the Mesh. |
| Payload             | The primary content of a container — semantic or operational data subject to signing and verification. |
| Consensus           | The process by which multiple agents agree on the validity or priority of containers, versions, or ideas. |
| Lineage             | A chronological chain of container versions representing semantic continuity and authorship evolution. |
| Semantic fork       | A parallel development branch diverging from a previous container version; allows ideas to evolve independently. |
| Cognitive Graph     | The emergent graph formed by interlinked containers representing reasoning, debate, and shared knowledge. |

Note: Protocols are conceptual abstractions describing how to generate, propagate, and process containers; they are not executable objects themselves.


1.5 Layered View of HMP v5.0

HMP v5.0 is structured into three interdependent layers:

+---------------------------------------------------------------+
|                        Cognitive Layer                        |
|  - Goals, Tasks, Ethical Decisions, Workflows                 |
|  - Consensus, Reasoning, Reflection                           |
+---------------------------------------------------------------+
|                        Container Layer                        |
|  - HMP-Container structure (atomic, signed, versioned)        |
|  - Proof-chains, in_reply_to, and metadata management         |
+---------------------------------------------------------------+
|                         Network Layer                         |
|  - DHT-based peer discovery and propagation                   |
|  - Message routing, caching, offline synchronization          |
+---------------------------------------------------------------+

Each layer operates independently yet seamlessly integrates with the others.
Containers form the boundary of communication: reasoning produces containers, containers propagate over the DHT, and cognition evolves from the received containers.


In essence:
HMP v5.0 transforms the Mesh into a self-describing, self-replicating cognitive ecosystem
where every thought, goal, and ethical stance exists as a verifiable, shareable container.


2. Architecture

2.1 Conceptual Architecture

The HyperCortex Mesh Protocol (HMP) defines a modular, multi-layered architecture that integrates cognitive reasoning, data encapsulation, and decentralized networking into a single coherent system.

Each agent acts as a cognitive node, combining reasoning processes, containerized data exchange, and peer-to-peer communication.
Together, agents form the Mesh — a distributed ecosystem of autonomous reasoning entities.

[Agent Core]
     ▲
     │  Reasoning / Ethics / Goal Management
     ▼
[Cognitive Layer]
     ▲
     │  Containers (atomic reasoning units)
     ▼
[Container Layer]
     ▲
     │  DHT + Discovery + Interest-based Networking
     ▼
[Network Layer]

Each reasoning cycle begins in the Cognitive Layer,
is encapsulated into a signed container in the Container Layer,
and then propagated, discovered, or verified in the Network Layer.

Containers thus serve as both the interface and the boundary between cognition and communication.

In practical terms:

  • Cognitive Layer — defines what the agent thinks (semantic reasoning, goals, ethics).
  • Container Layer — defines how the thought is expressed and verified (standardized, signed container objects).
  • Network Layer — defines how it travels (DHT-based routing, discovery, replication).

Each layer is independently extensible and communicates only through containers, ensuring atomicity, immutability, and traceability.

This layered design allows agents to evolve cognitively while remaining interoperable at the data and network levels.
Each reasoning act results in a container — a verifiable cognitive unit that may represent a private reflection or a published message, depending on the agent’s intent, ethical policy, and trust configuration.


2.2 Layer Overview

Cognitive Layer

Handles meaning formation, reasoning, ethical reflection, and consensus.

Key structures and protocols:

  • WorkflowEntry and CognitiveDiaryEntry containers;
  • CogSync, CogConsensus, GMP, and EGP protocols;
  • Distributed goal negotiation and ethical propagation.

Container Layer

Provides a universal format for cognitive and operational data.
Each container includes versioning, class, payload, signatures, and metadata.

Key features:

  • Atomic and signed: no partial updates or mutable state.
  • Linked: in_reply_to and relation connect containers into proof-chains.
  • Extensible: new container classes can be defined without breaking compatibility.

Network Layer

Implements the distributed substrate for communication, based on DHT and transport abstraction.

Key components:

  • Node discovery (NDP)
  • Container propagation (DCP)
  • Peer routing and caching
  • Secure channels via QUIC / WebRTC / TCP
  • Offline resilience and replication


2.3 Data Flow Overview

The typical data flow in HMP follows a cognitive loop:

Reason → Encapsulate → Propagate → Integrate.

  1. Reason — Agent performs reasoning and produces an insight, goal, or observation.
  2. Encapsulate — The result is wrapped into an HMPContainer.
  3. Propagate — The container is signed and transmitted through the network.
  4. Integrate — Other agents receive it, evaluate, vote, and synchronize updates.

Each interaction generates a new container, forming a graph of knowledge rather than mutable state.
All relationships between containers are explicit and verifiable.

Example sequence:

Agent A → creates Goal container
  ↓
Agent B → replies with Task proposal (in_reply_to Goal)
  ↓
Agent C → votes via ConsensusVote container
  ↓
Result → ConsensusResult container finalizes outcome

2.3.1 ConsensusResult container

Represents the finalized outcome of a distributed decision or vote.
It is created once a majority agreement is reached among participating agents.
The container contains:

  • Reference to the target container(s) under consideration (in_reply_to).
  • Aggregate result of the votes or decisions.
  • Timestamp and metadata for verifiability.

In other words, the ConsensusResult is the “agreed-upon truth” for that decision step — immutable and auditable, without requiring individual signatures from all participants.


2.4 Atomicity, Immutability, and Proof-Chains

All cognitive objects are immutable once signed.
Instead of editing or appending within a container, agents create new containers linked to prior ones.

  • Atomicity — Each container represents a self-contained reasoning act or data unit.
  • Immutability — Once signed, containers are never modified; updates create new ones.
  • Proof-Chain — A verifiable sequence of containers linked by hashes and in_reply_to references.

This design allows any reasoning path, decision, or consensus to be cryptographically reproducible and auditable.

Example fragment of a proof-chain:

[workflow_entry] → [goal] → [vote] → [consensus_result]

Each container references the previous by in_reply_to and includes its hash, forming a DAG (Directed Acyclic Graph) of verified cognition.


2.5 Evolution from v4.1

Earlier HMP versions (up to v4.1) used a combination of independent JSON objects and message types (e.g., Goal, Task, ConsensusVote).
Version 5.0 replaces this with a single, standardized container model, dramatically simplifying interoperability and verification.

| Aspect | v4.1 | v5.0 |
| -------------- | ------------------------------------------ | ----------------------------------------------- |
| Data structure | Raw JSON objects with embedded signatures  | Unified container with metadata and proof chain |
| Networking     | Custom peer exchange                       | Integrated DHT + DCP layer                      |
| Consensus      | Centralized proposal aggregation           | Decentralized per-container voting              |
| Auditability   | Implicit (via logs)                        | Explicit (containers form audit chain)          |
| Extensibility  | Schema-based                               | Container-class-based, backward-compatible      |

This shift enables:

  • Uniform signatures and encryption across all protocols;
  • Easier offline replication and integrity checks;
  • Decentralized indexing and search by container metadata;
  • Verifiable cognitive continuity between reasoning steps.


In short:
HMP v5.0 unifies reasoning, representation, and transmission —
transforming a distributed AI mesh into a verifiable cognitive network built on immutable containers.


3. Container Model

This section defines the universal HMP Container, used for all forms of data exchange within the Mesh — including goals, diary entries, reputation updates, consensus votes, and protocol messages.
The specification below corresponds to HMP Container Specification v1.2, fully integrated into HMP v5.0 for consistency and self-containment.

3.1 Purpose

This document defines the universal HMP Container format, used for transmitting and storing all types of data within the HyperCortex Mesh Protocol (HMP) network. Containers act as a standardized wrapper for messages, goals, reputation records, consensus votes, workflow entries, and other entities.

The unified container structure provides:

  • Standardized data exchange between agents;
  • Extensibility without modifying the core protocol;
  • Cryptographic signing and integrity verification;
  • Independent storage and routing of semantic units;
  • Support for compression and payload encryption.

3.2 General Structure

{
  "hmp_container": {
    "version": "1.2",
    "class": "goal",
    "class_version": "1.0",
    "class_id": "goal-v1.0",
    "container_did": "did:hmp:container:abc123",
    "schema": "https://mesh.hypercortex.ai/schemas/container-v1.json",
    "sender_did": "did:hmp:agent123",
    "public_key": "BASE58(...)",
    "recipient": ["did:hmp:agent456"],
    "key_recipient": "BASE58(...)",
    "encryption_algo": "x25519-chacha20poly1305",
    "broadcast": false,
    "network": "",
    "tags": ["research", "collaboration"],
    "timestamp": "2025-10-10T15:32:00Z",
    "ttl": "2025-11-10T00:00:00Z",
    "sig_algo": "ed25519",
    "signature": "BASE64URL(...)",
    "compression": "zstd",
    "payload_type": "encrypted+zstd+json",
    "payload_hash": "sha256:abcd...",
    "payload": {
      /* Content depends on class */
    },
    "related": {
      "previous_version": "did:hmp:container:abc122",
      "in_reply_to": "did:hmp:container:msg-77",
      "see_also": ["did:hmp:container:ctx-31", "did:hmp:container:goal-953"],
      "depends_on": ["did:hmp:container:goal-953"],
      "extends": ["did:hmp:container:proto-01"],
      "contradicts": ["did:hmp:container:ethics-22"]
    },

    "magnet_uri": "magnet:?xt=urn:sha256:abcd1234..."
  },
  "referenced-by": {
    "links": [
      { "type": "depends_on", "target": "did:hmp:container:abc123" }
    ],
    "peer_did": "did:hmp:agent456",
    "public_key": "BASE58(...)",
    "sig_algo": "ed25519",
    "signature": "BASE64URL(...)",
    "links_hash": "sha256:abcd..."
  }
}

3.3 Required Fields

| Field | Type | Description |
| ------------- | -------- | ----------------------------------------------------------------------------------------------------------- |
| version       | string   | Version of the container specification. Defines the structural and semantic standard used (e.g., "1.2"). |
| class         | string   | Type of content (goal, reputation, knowledge_node, ethics_case, protocol_goal, etc.). Determines the schema for the payload. |
| class_version | string   | Version of the specific container class. |
| class_id      | string   | Unique identifier of the class (usually formatted as <class>_v<class_version>). |
| container_did | string   | Decentralized identifier (DID) of the container itself (e.g., did:hmp:container:abc123). |
| schema        | string   | Reference to the JSON Schema used to validate this container. |
| sender_did    | string   | DID identifier of the sending agent. |
| timestamp     | datetime | Time of container creation (ISO-8601 format, UTC). |
| payload_hash  | string   | Hash of the decompressed payload (sha256:<digest>). Used for content integrity verification. |
| sig_algo      | string   | Digital signature algorithm (default: ed25519). |
| signature     | string   | Digital signature of the container body. |
| payload_type  | string   | Type of payload data (json, binary, mixed). |
| payload       | object   | Core content of the container. The structure depends on the class and its schema definition. |

3.4 Optional Fields

| Field | Type | Description |
| ------------------------ | ------------- | ---------------------------------------------------------------------------------------------------------- |
| recipient                | array(string) | One or more recipient DIDs. |
| broadcast                | bool          | Broadcast flag. If true, the recipient field is ignored. |
| tags                     | array(string) | Thematic or contextual tags for the container. |
| ttl                      | datetime      | Expiration time. Containers are not propagated after expiration. |
| public_key               | string        | Sender’s public key, if not globally resolvable via DID. |
| compression              | string        | Compression algorithm used for the payload (zstd, gzip). |
| magnet_uri               | string        | Magnet link pointing to the original or mirrored container. |
| related                  | object        | A general-purpose object describing direct relationships to other containers. The following fields illustrate common link types but do not represent an exhaustive list. |
| related.previous_version | string        | DID of the previous version of this container. |
| related.in_reply_to      | string        | DID of the container this one replies to. |
| related.see_also         | array(string) | References to related or contextual containers. |
| related.depends_on       | array(string) | References to containers this one logically depends on. |
| related.extends          | array(string) | References to containers that this one extends. |
| related.contradicts      | array(string) | References to containers that this one contradicts. |
| key_recipient            | string        | DID of the intended recipient of the encrypted payload. |
| payload_type             | string        | Can describe complex types, e.g. encrypted+zstd+json. |
| referenced-by            | object        | Auxiliary block not covered by the container signature; generated locally by the agent based on received references. Contains a list of container DIDs that refer to this container. May be extended over time, thus requiring verification; used for local navigation. |
| network                  | string        | Specifies the local propagation scope of the container: "localhost", "lan:<subnet>". An empty string ("") indicates Internet/global propagation. If set, broadcast is automatically considered false. |

3.5 Payload Structure (payload)

The payload contains the semantic or operational data of the container. Its structure and meaning are determined by the class field.

Each container class (e.g. goal, reputation, consensus_vote, workflow_entry) defines its own schema and validation rules. The following format is recommended for describing payload fields in class specifications:

- key: field name
  type: value type (JSON | TXT | BOOL | INT | FLOAT | ARRAY)
  description: short purpose of the field
  required: true/false
  value: example value

Example:

- key: "title"
  type: "TXT"
  required: true
  description: "Name of the goal"
  value: "Improve local agent discovery"

- key: "priority"
  type: "FLOAT"
  required: false
  description: "Importance or relevance score of the goal"
  value: 0.82

- key: "dependencies"
  type: "JSON"
  required: false
  description: "List of other goal container IDs this one depends on"
  value: ["goal-953", "goal-960"]

💡 Note: The structure of payload is validated against the schema defined in the schema field of the container. Agents must be able to parse and process only those classes they explicitly support; unknown but valid containers are still preserved and propagated (store-and-forward mode).


3.6 Container Signature

  1. The entire JSON object hmp_container is signed, excluding the signature field itself. This ensures that all metadata, relations, and payload hashes are cryptographically bound.

  2. The default digital signature algorithm is Ed25519. Alternative algorithms may be used if declared explicitly in the sig_algo field.

  3. If the container includes a public_key field, signature verification may be performed locally, without consulting a global DID registry.

  4. Upon receiving a container, an agent must verify that the provided public key matches the registered key associated with the sender’s DID to prevent key substitution attacks.

  5. If the sender’s DID–key mapping is unknown, the agent should query neighboring peers to confirm the association (sender_did → public_key).

🔐 Note: Signature validation applies to the entire structure (metadata + payload + relations). The signature does not cover external or dynamically generated fields such as referenced-by, ensuring immutability of the original container while allowing local graph augmentation.
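
A minimal signing and verification sketch in Python (using the cryptography package), assuming a compact, sorted-key JSON canonicalization; the specification does not fix a canonical serialization here, so that choice is an assumption.

import base64
import json
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey, Ed25519PublicKey,
)

def canonical_bytes(container: dict) -> bytes:
    # Sign the whole hmp_container object, excluding the signature field itself.
    body = {k: v for k, v in container.items() if k != "signature"}
    return json.dumps(body, sort_keys=True, separators=(",", ":")).encode("utf-8")

def sign_container(container: dict, key: Ed25519PrivateKey) -> dict:
    container["sig_algo"] = "ed25519"
    sig = key.sign(canonical_bytes(container))
    container["signature"] = base64.urlsafe_b64encode(sig).decode("ascii")  # BASE64URL
    return container

def verify_signature(container: dict, pub: Ed25519PublicKey) -> bool:
    try:
        pub.verify(base64.urlsafe_b64decode(container["signature"]),
                   canonical_bytes(container))
        return True
    except Exception:
        return False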


3.7 Compression (compression)

  1. The compression field specifies the algorithm used to compress the container’s payload. Supported algorithms include zstd, gzip, or others declared in the HMP registry.

  2. Compression is performed before computing the payload_hash and generating the signature. This ensures that both the hash and signature refer to the compressed representation of the payload.

  3. For verification, the payload must be decompressed first, after which the hash is recalculated and compared against the stored payload_hash.

⚙️ Implementation note: Agents must advertise supported compression algorithms during the handshake phase. Unsupported containers should still be stored and relayed unmodified in “store & forward” mode.


3.8 Encryption (encryption_algo)

  1. When a container is intended for specific recipients (recipient field), hybrid encryption of the payload is allowed.
    This ensures confidentiality while preserving the verifiability of container metadata.

  2. The algorithm used for encryption is specified in the encryption_algo field.
    Recommended values:
      • x25519-chacha20poly1305
      • rsa-oaep-sha256

  3. Container encryption process:
      1. Construct the payload (JSON, binary, or mixed content).
      2. Apply compression (compression, if specified).
      3. Encrypt the compressed data using the recipient’s public key (key_recipient).
      4. Compute payload_hash over the encrypted form of the payload.
      5. Sign the entire container (excluding the signature field).

  4. Verification of the container’s structure does not require decryption.
    However, to verify payload_hash and the digital signature, the encrypted payload must be used as-is.

  5. Relevant fields:

| Field | Type | Description |
| --------------- | ------ | ---------------------------------------------------------------------------------- |
| encryption_algo | string | Encryption algorithm applied to the payload. |
| key_recipient   | string | Public key (or DID-resolved key) of the intended recipient used for encryption. |
| payload_type    | string | Recommended prefix encrypted+, e.g. encrypted+zstd+json. |

  6. Relationship between recipient and key_recipient:
      • When encryption is applied, the container MUST contain exactly one entry in the recipient array, corresponding to the public key indicated in key_recipient.
      • When the container is distributed to multiple recipients, encryption is not used; instead, the payload remains in plaintext form but is digitally signed for authenticity.

⚙️ Implementation note:
Agents should handle encrypted containers transparently even if they cannot decrypt them,
maintaining store & forward behavior and metadata propagation.
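
For the x25519-chacha20poly1305 case, the sketch below assumes an ephemeral-static X25519 exchange with HKDF-SHA256 key derivation; the specification names the algorithm but leaves the key-derivation and envelope details open, so those parts are illustrative.

import os
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric.x25519 import (
    X25519PrivateKey, X25519PublicKey,
)
from cryptography.hazmat.primitives.ciphers.aead import ChaCha20Poly1305
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

def encrypt_payload(compressed_payload: bytes, recipient_pub: X25519PublicKey) -> dict:
    # Ephemeral-static ECDH: a fresh sender key per container (assumed construction).
    eph = X25519PrivateKey.generate()
    shared = eph.exchange(recipient_pub)
    key = HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
               info=b"hmp-payload").derive(shared)
    nonce = os.urandom(12)
    ciphertext = ChaCha20Poly1305(key).encrypt(nonce, compressed_payload, None)
    eph_raw = eph.public_key().public_bytes(
        serialization.Encoding.Raw, serialization.PublicFormat.Raw)
    # payload_hash and the container signature are computed over this encrypted
    # form afterwards, per steps 4-5 of the encryption process above.
    return {"ephemeral_public": eph_raw.hex(),
            "nonce": nonce.hex(),
            "ciphertext": ciphertext.hex()}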


3.9 Container Verification

  1. Check for the presence of all required fields.
  2. Validate timestamp (must not be in the future).
  3. If ttl is set — mark the container as expired after its expiration time.
  4. Compute sha256(payload) and compare with the stored payload_hash.
  5. Verify the digital signature using sig_algo (default: Ed25519).
  6. Validate the container schema (class must correspond to a known or registered schema).

  7. For compatibility: if an agent does not recognize the class, but the container passes the base schema, it must still store and forward the container.

  8. Optionally, periodically query for containers referencing the current one as previous_version to detect potential updates or forks.
  9. When multiple versions exist, the valid one is the one that has received confirmation from a majority of trusted nodes (consensus at DHT level).
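
A condensed sketch of steps 1-6 in Python; resolve_public_key and verify_signature are hypothetical callables standing in for DID-based key lookup (§3.6) and the signature check, and schema validation as well as the fork and consensus steps are omitted.

import hashlib
from datetime import datetime, timezone

REQUIRED = ["version", "class", "class_version", "class_id", "container_did",
            "schema", "sender_did", "timestamp", "payload_hash",
            "sig_algo", "signature", "payload_type", "payload"]

def _parse_time(value: str) -> datetime:
    return datetime.fromisoformat(value.replace("Z", "+00:00"))

def verify_container(container: dict, payload_bytes: bytes,
                     resolve_public_key, verify_signature) -> str:
    if any(field not in container for field in REQUIRED):
        return "invalid: missing required field"                 # step 1
    now = datetime.now(timezone.utc)
    if _parse_time(container["timestamp"]) > now:
        return "invalid: timestamp in the future"                # step 2
    if "ttl" in container and _parse_time(container["ttl"]) < now:
        return "expired"                                         # step 3
    digest = "sha256:" + hashlib.sha256(payload_bytes).hexdigest()
    if digest != container["payload_hash"]:
        return "invalid: payload_hash mismatch"                  # step 4
    pub = resolve_public_key(container["sender_did"])
    if not verify_signature(container, pub):
        return "invalid: signature check failed"                 # step 5
    return "valid"                                               # step 6: schema check omitted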

3.10 Container as a Universal Message

Any container can serve as a context (in_reply_to) for another container. This enables a unified structural model for discussions, votes, messages, hypotheses, arguments, and other forms of cognitive exchange.

Chains of in_reply_to form a dialectical reasoning tree, where each branch represents an evolution of thought — a clarification, counterpoint, or refinement of a previous idea. This makes HMP discussions and consensus processes inherently non-linear, self-referential, and evolving.

In essence, all interactions between agents in HMP are represented as an interconnected web of containers, collectively forming a cognitive graph of reasoning.


3.11 Versioning and Lineage

Containers in HMP support semantic evolution through the field related.previous_version. This mechanism preserves the continuity and traceability of meaning across updates and revisions.

  • A descendant container is considered authentic if it is signed by the same DID as the author of its previous_version.
  • If the author or signature differs, the descendant may still be accepted as legitimate when a sufficient portion of trusted peers acknowledge it as a valid continuation.
    (The precise quorum threshold is determined by the agent’s local policy or the Mesh Consensus Protocol.)
  • Agents are required to retain at least one previous version of each container for compatibility and integrity verification.
  • A single container may have multiple descendants (alternative branches) that diverge by time, authorship, or interpretation.
    In such scenarios, branch priority or relevance is determined via local heuristics or consensus mechanisms.
  • Divergent descendants are treated as semantic forks — parallel evolutions of a shared idea within the distributed cognitive graph.

Versioning in HMP thus reflects not only data persistence,
but also the evolution of ideas across agents and time.
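
A sketch of this acceptance rule; the 0.5 quorum default is a placeholder for the agent's local policy, and both containers are assumed to have already passed signature verification.

def is_legitimate_descendant(descendant: dict, previous: dict,
                             peer_acks: int, trusted_peers: int,
                             quorum: float = 0.5) -> bool:
    # Same author (same signing DID) makes the descendant authentic outright.
    if descendant["sender_did"] == previous["sender_did"]:
        return True
    # Otherwise fall back to acknowledgement by trusted peers (local policy).
    if trusted_peers <= 0:
        return False
    return peer_acks / trusted_peers >= quorum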


3.12 TTL and Validity

The ttl field defines the validity period of a container (for example, for DISCOVERY messages).
If ttl is absent, the container is considered valid until a newer version appears, in which the current container is referenced as previous_version.

After expiration, the container remains archived but is not subject to retransmission in the active network.


3.13 Extensibility

  • The addition of new fields is allowed as long as they do not conflict with existing field names.
  • Containers of newer versions must remain readable by nodes supporting older versions.
  • When new container classes (class) are introduced, they should be registered in the public schema registry (/schemas/container-types/).
  • For containers describing protocol specifications, it is recommended to use the protocol_ prefix, followed by the domain of application (e.g., protocol_goal, protocol_reputation, protocol_mesh_handshake, etc.).

3.14 Direct Relationships (related)

3.14.1 Purpose

The related field is designed to describe direct relationships between containers — both logical and communicative. It allows an agent or network node to understand the context of origin, dependencies, and semantic links of a container without relying on external indexes.

3.14.2 Structure

"related": {
  "previous_version": "did:hmp:container:abc122",
  "in_reply_to": "did:hmp:container:msg-77",
  "see_also": ["did:hmp:container:ctx-31", "did:hmp:container:goal-953"],
  "depends_on": ["did:hmp:container:goal-953"],
  "extends": ["did:hmp:container:proto-01"],
  "contradicts": ["did:hmp:container:ethics-22"]
}

The related field is an object where:

  • the key defines the type of relationship (e.g., depends_on, extends, see_also);
  • the value represents one or more container identifiers (DIDs).

All relationships are considered direct — meaning they originate from the current container toward others.


3.14.3 Link Types

| Link Type | Meaning |
| ---------------- | ---------------------------------------------------------------------- |
| previous_version | Points to the previous version of this container. |
| in_reply_to      | Indicates a response to the referenced container. |
| see_also         | Refers to related or contextual containers. |
| depends_on       | Depends on the contents of the referenced container (e.g., goal or data). |
| extends          | Expands or refines the referenced container. |
| contradicts      | Provides a refutation, objection, or alternative viewpoint. |

3.14.4 Custom Link Types

Additional custom link types may be used beyond those listed in the table, provided that:

  • they follow the same general syntax (string or array[string]);
  • they may optionally include a namespace for disambiguation;
  • their meaning is consistently interpretable by agents within the specific network or application context.

Example with namespaced link types:

"related": {
  "hmp:depends_on": ["did:hmp:container:goal-953"],
  "opencog:extends": ["did:oc:concept:122"]
}

3.14.5 Example

"related": {
  "previous_version": "did:hmp:container:abc122",
  "depends_on": ["did:hmp:container:goal-953"],
  "extends": ["did:hmp:container:proto-01"],
  "see_also": ["did:hmp:container:ctx-31", "did:hmp:container:goal-953"]
}

⚙️ The related field is not intended to store reverse links — see referenced-by.


3.15 Reverse References (referenced-by)

Each container may include an auxiliary signed block called referenced-by, indicating which other containers refer to it.
This block is not part of the original container payload and can be generated, transmitted, and verified independently.

3.15.1 General Principles

  • Detached and updatable: referenced-by is maintained as a separate signed structure associated with the container.
  • Generated by agents — created or updated locally by an agent during analysis of references (in_reply_to, see_also, relations, etc.) found in other containers.
  • Signed by the reporting agent — the agent producing the block signs its content to confirm the observed backlinks.
  • Verifiable by peers — other agents may validate the links, check the signature, and reconcile differences based on their own data.
  • Does not modify the original container: referenced-by is an external computed attribute and does not affect the integrity of the original container.

Data type: object, consisting of verifiable backlinks and metadata.
Example:

"referenced-by": {
  "links": [
    { "type": "depends_on", "target": "did:hmp:container:abc123" },
    { "type": "see_also", "target": "did:hmp:container:def456" }
  ],
  "peer_did": "did:hmp:agent456",
  "public_key": "BASE58(...)",
  "sig_algo": "ed25519",
  "signature": "BASE64URL(...)",
  "links_hash": "sha256:abcd..."
}

The referenced-by block is a cryptographically verifiable statement describing which containers are known to reference the current one. The block’s content may differ between peers, reflecting local knowledge and network coverage.

3.15.2 Structure Definition

| Field | Type | Description |
| ---------- | ------ | ----------------------------------------------------------------------------------------------------- |
| links      | array  | List of backlinks; each object includes a type (semantic relation) and a target (referencing container DID). |
| peer_did   | string | DID of the agent that generated and signed the block. |
| public_key | string | Public key corresponding to the signing key. |
| sig_algo   | string | Signature algorithm (e.g., ed25519). |
| signature  | string | Base64URL-encoded signature of the canonical serialized links section (or links_hash). |
| links_hash | string | SHA-256 checksum of the canonicalized links; used to verify integrity before signature validation. |

Recommendation: links_hash = sha256(canonical_json(links)). This allows agents to efficiently compare or cache referenced-by states without re-verifying signatures.
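
A sketch of this recommendation, assuming compact sorted-key JSON as the canonical form (the exact canonicalization rule is an implementation choice).

import hashlib
import json

def compute_links_hash(links: list) -> str:
    canonical = json.dumps(links, sort_keys=True, separators=(",", ":"))
    return "sha256:" + hashlib.sha256(canonical.encode("utf-8")).hexdigest()

# Example:
# compute_links_hash([{"type": "depends_on", "target": "did:hmp:container:abc123"}])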

3.15.3 Operation Principle

  1. The agent receives or updates container [C1].
  2. It analyzes other known containers [C2..Cn] that reference [C1] through their relations field.
  3. A local referenced-by block is formed:
{
  "links": [
    { "type": "in_reply_to", "target": "did:hmp:container:C2" },
    { "type": "depends_on", "target": "did:hmp:container:C3" }
  ],
  "peer_did": "did:hmp:agentA",
  ...
}
  4. The block is serialized canonically, hashed (links_hash), and signed with the agent’s key.

  5. When receiving other versions of the block (from different peers), the agent may:
      • merge verified backlinks;
      • remove invalid or outdated entries;
      • update its own signed version.

  6. If inconsistencies are detected (e.g., a backlink claims a relation that does not exist), the agent may:
      • reject or locally remove that link;
      • optionally notify the source peer to review the data.

3.15.4 Example

| Agent | Reported backlinks for [C1] |
| ------------------ | ------------ |
| A (did:hmp:agentA) | [C2], [C3]   |
| B (did:hmp:agentB) | [C4], [C5]   |
| C (did:hmp:agentC) | [C6], [C7]   |

Agent D aggregates and verifies them:

"referenced-by": {
  "links": [
    { "type": "depends_on", "target": "did:hmp:container:C2" },
    { "type": "depends_on", "target": "did:hmp:container:C3" },
    { "type": "see_also", "target": "did:hmp:container:C4" },
    { "type": "see_also", "target": "did:hmp:container:C5" },
    { "type": "in_reply_to", "target": "did:hmp:container:C6" }
  ],
  "peer_did": "did:hmp:agentD",
  "sig_algo": "ed25519",
  "signature": "BASE64URL(...)",
  "links_hash": "sha256:..."
}

If container [C7] does not actually reference [C1], it is excluded before signing.

3.15.5 Usage

  • Enables reconstruction of discussion graphs, dependency networks, and update chains.
  • Supports cross-agent validation of reference structures.
  • Accelerates discovery of related containers without full history queries.
  • Facilitates consensus analysis and branch visualization.
  • The agent periodically recomputes and re-signs the referenced-by block using local or peer-provided data.

3.16 Usage of network and broadcast Fields

The network field is introduced to control container propagation in both local and global environments.
It allows restricting the delivery scope of a container and defines which transmission methods should be used by the agent.

3.16.1 General Rules

  • If the network field is not empty, the container is intended for a local environment and must not be transmitted to the global Mesh.
    In this case, the broadcast field is automatically considered false, and the recipient field is set to an empty array ([]).
  • If the network field is empty (""), the container is allowed to be broadcasted within the global Mesh using standard DID addressing and delivery mechanisms.

3.16.2 Possible Values of network

| Value | Description |
| --------------------- | ------------------------------------------------------------------------- |
| ""                    | The container is allowed to propagate within the global Mesh. |
| "localhost"           | The container is intended only for agents running on the same host. |
| "lan:192.168.0.0/24"  | The container is intended for agents within the specified local subnet. |

⚠️ Note:
When a container is restricted by the network field (e.g., localhost or lan:*),
agents distribute it using local discovery mechanisms — such as IPC, UDP broadcast, multicast, or direct TCP connections.
This is necessary because DID addresses of other agents in the local network may not yet be known.

3.16.3 Examples

  1. Global Mesh Delivery:
{
  "broadcast": true,
  "network": "",
  "recipient": []
}

The container can propagate across the entire Mesh without restrictions.

  2. Local Host:
{
  "broadcast": false,
  "network": "localhost",
  "recipient": []
}

The container is delivered only to other agents running on the same host using local communication channels.

  3. LAN Subnet:
{
  "broadcast": false,
  "network": "lan:192.168.0.0/24",
  "recipient": []
}

The container is intended for agents within the 192.168.0.0/24 subnet. Delivery is performed via local networking mechanisms (UDP discovery, broadcast/multicast).

3.16.4 Specifics

  • The network field defines the scope of the container, while broadcast determines whether broadcasting is allowed within that scope.
  • When needed, an agent may create multiple containers for different subnets if it operates with several LAN interfaces or in isolated network segments.
  • Containers intended for local networks remain structurally compatible with the global Mesh infrastructure, but their delivery is restricted to local channels.
  • Although the mechanism was initially designed for local node discovery and synchronization, it can also be used for private communication within home or corporate environments, ensuring that containers do not leave the local network and are not transmitted to the Internet.
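
A sketch of how an agent might map these fields to a delivery channel; the returned channel names are illustrative rather than normative.

import ipaddress

def delivery_scope(container: dict) -> str:
    network = container.get("network", "")
    if network == "":
        return "mesh"                      # global propagation is allowed
    # Non-empty network: broadcast is treated as false, delivery stays local.
    if network == "localhost":
        return "localhost"                 # IPC / loopback channels only
    if network.startswith("lan:"):
        ipaddress.ip_network(network[4:])  # validate the subnet, e.g. "192.168.0.0/24"
        return "lan"
    return "unknown"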

4. Network Foundations

Note on DHT/NDP Unification

Starting from HMP v5.0, the previous distinction between the Distributed Hash Table (DHT) and the Node Discovery Protocol (NDP) has been merged into a single, unified networking foundation.

This unified layer now covers:

  • distributed lookup and routing;
  • peer discovery (including interest-based search);
  • signed Proof-of-Work (PoW) announcements;
  • controlled container propagation via network and broadcast fields.

Together, these mechanisms form the communication backbone of the Mesh, enabling secure, scalable, and topology-independent interaction between agents.


Network Topology Overview

         ┌───────────────────────────────┐
         │           Agent Core          │
         │     (DID + Keypair + PoW)     │
         └───────────────┬───────────────┘
                         │
         ┌───────────────┴───────────────┐
         │         HMP Container         │
         │  (network field / broadcast)  │
         └───────────────┬───────────────┘
                         │
          ┌──────────────┴───────────────┐
          │                              │
 ┌────────┴────────┐            ┌────────┴────────┐
 │ Local Channel   │            │   Global Mesh   │
 │   (`network`)   │            │  (`broadcast`)  │
 └─┬───────────────┘            └───────────────┬─┘
   │                                            │
   │  ┌─────────────────┐  ┌─────────────────┐  │
   ├──┤   localhost     │  │     Internet    ├──┤
   │  │                 │  │                 │  │
   │  └─────────────────┘  └─────────────────┘  │
   │                                            │
   │  ┌─────────────────┐  ┌─────────────────┐  │
   └──┤   LAN Subnet    │  │  Overlay Nodes  ├──┘
      │ "lan:192.168.*" │  │ (Yggdrasil/I2P) │
      └─────────────────┘  └─────────────────┘

The network field defines local propagation scope (host, LAN, overlay),
while the broadcast flag enables global Mesh distribution.


4.1 Node Identity and DID Structure

Each agent in HMP possesses a Decentralized Identifier (DID) that uniquely represents its identity within the Mesh.
A DID is cryptographically bound to a public/private key pair, forming the immutable (DID + pubkey) association.

An agent may have multiple network interfaces (LAN, Internet, overlay),
but must maintain one stable identity pair across all of them.


4.2 Peer Addressing and Proof-of-Work (PoW)

To prevent flooding and spoofing, each announced address is accompanied by a Proof-of-Work record proving the legitimacy and activity of the publishing node.

Address Format

{
  "addr": "tcp://1.2.3.4:4000",
  "nonce": 123456,
  "pow_hash": "0000abf39d...",
  "difficulty": 22
}

#### Supported address types

| Type           | Description                                   |
| -------------- | --------------------------------------------- |
| `localhost`    | Localhost-only interface.                     |
| `lan:<subnet>` | Local subnet (e.g., `lan:192.168.10.0`).      |
| `internet`     | Global TCP/UDP connectivity.                  |
| `yggdrasil`    | Overlay-based address for Yggdrasil networks. |
| `i2p`          | Encrypted I2P overlay routing.                |

**Rules:**

* If `port = 0`, the interface is inactive.
* Newer records (by `timestamp`) replace older ones after PoW verification.
* Local interfaces should not be shared globally (except Yggdrasil/I2P).

---

### 4.3 Proof-of-Work (PoW) Formalization

PoW ensures that each node expends limited computational effort before publishing or updating an address record.

pow_input = DID + " -- " + addr + " -- " + nonce
pow_hash  = sha256(pow_input)

* All values are UTF-8 encoded.
* `difficulty` defines the number of leading zeroes in the resulting hash.
* Typical difficulty should take a few minutes to compute on a standard CPU.
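
A sketch of the mining loop, assuming that difficulty counts leading zero bits of the SHA-256 digest; the spec leaves the exact zero-counting convention to implementations.

import hashlib

def leading_zero_bits(digest: bytes) -> int:
    bits = 0
    for byte in digest:
        if byte == 0:
            bits += 8
        else:
            bits += 8 - byte.bit_length()
            break
    return bits

def mine_pow(did: str, addr: str, difficulty: int) -> tuple:
    nonce = 0
    while True:
        pow_input = f"{did} -- {addr} -- {nonce}".encode("utf-8")
        digest = hashlib.sha256(pow_input).digest()
        if leading_zero_bits(digest) >= difficulty:
            return nonce, digest.hex()   # publish nonce and pow_hash together
        nonce += 1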

---

### 4.4 Signing and Verification

Each announcement is cryptographically signed by its sender within the framework of the basic protocol. Container verification includes PoW validation for the address payloads.

**Verification steps:**

1. Validate the digital signature using the stored public key.
2. Recompute `pow_hash` and verify the difficulty threshold.

---

### 4.5 Connection Establishment

Agents can communicate using various transport mechanisms:

| Protocol    | Description                                                   |
| ----------- | ------------------------------------------------------------- |
| **QUIC**    | Recommended default (encrypted, low-latency, UDP-based).      |
| **WebRTC**  | For browser or sandboxed environments.                        |
| **TCP/TLS** | Fallback transport for secure long-lived sessions.            |
| **UDP**     | Lightweight, primarily for LAN discovery or local broadcasts. |

Each agent maintains an **active peer list**, updated dynamically through signed announcements and PoW-validated exchanges.
Agents **store peer containers with verified addresses** and redistribute them according to their declared `network` fields.

---

### 4.6 Data Propagation Principles

Containers and discovery records are propagated through distributed lookup and gossip mechanisms, respecting:

* `ttl` — Time-to-live for validity;
* `network` — scope of propagation;
* `broadcast` — determines whether rebroadcasting is allowed;
* `pow` — ensures anti-spam protection.

Agents announce themselves via **peer_announce** containers and may respond with **peer_query** or **peer_exchange** containers —
all unified under the same base container format, differing only in direction (`localhost`, `lan`, `mesh`).

---

### 4.7 Example: Peer Announce Container

{
  "class": "peer_announce",
  "pubkey": "base58...",
  "container_did": "did:hmp:container:dht-001",
  "sender_did": "did:hmp:agent123",
  "timestamp": "2025-09-14T21:00:00Z",
  "network": "",
  "broadcast": true,
  "payload": {
    "name": "Agent_X",
    "interests": ["ai", "mesh", "ethics"],
    "expertise": ["distributed-systems", "nlp"],
    "addresses": [
      {
        "addr": "tcp://1.2.3.4:4000",
        "nonce": 123456,
        "pow_hash": "0000abf39d...",
        "difficulty": 22
      }
    ]
  },
  "sig_algo": "ed25519",
  "signature": "BASE64URL(...)"
}

4.8 Interest-Based Discovery

Agents may publish tags such as interests, topics, or expertise to facilitate semantic peer discovery. Queries may include interest keywords or DID lists to find relevant peers.

Example Query Container:

{
  "class": "peer_query",
  "network": "lan:192.168.0.0/24",
  "payload": {
    "interests": ["neuroscience", "ethics"]
  }
}

4.9 Network Scope Control (network and broadcast)

The network field defines the container’s propagation domain (local, LAN, or global). For details and examples, see Section 3.16, Usage of network and broadcast Fields.


4.10 Transition from DHT Spec v1.0

  • Merged DHT + NDP → unified under one networking layer.
  • Container-based format replaces raw JSON messages.
  • Interests/topics/expertise fields introduced for contextual discovery.

5. Mesh Container Exchange (MCE)

The Mesh Container Exchange (MCE) mechanism is designed for discovering, requesting, and exchanging containers between agents in a distributed network.
It provides container synchronization without duplication while considering network constraints (broadcast, network).

5.1 General Principles

  1. Each agent maintains a Container Index — a set of minimal metadata describing which containers are available in its storage.
    The index is represented as an HMP container with the class container_index.

  2. Example structure of a Container Index:

{
  "hmp_container": {
    "class": "container_index",
    "version": "5.0",
    "container_did": "did:hmp:container:index:agent123",
    "sender_did": "did:hmp:agent:agent123",
    "signature": "BASE64URL(...)",
    "payload_hash": "sha256:abcd...",
    "payload": {
      "did:hmp:container:abc123": {
        "class": "goal",
        "sender_did": "did:hmp:agent123",
        "public_key": "BASE58(...)",
        "sig_algo": "ed25519",
        "signature": "BASE64URL(...)",
        "payload_hash": "sha256:abcd...",
        "tags": ["research", "collaboration"],
        "links_hash": "sha256:abcd..."
      }
    }
  }
}

The index contains:

* the sender of the container (`sender_did`);
* container type (`class`);
* cryptographic attributes (`public_key`, `signature`, `payload_hash`);
* tags (`tags`);
* hash of links to other containers (`links_hash`).

3. An agent does not reload a container if the combination
   `container_did + signature + payload_hash` is already known and verified.
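
A sketch of this duplicate check against a local index keyed by container_did; the entry layout follows the container_index payload shown above.

def needs_fetch(container_did: str, remote_entry: dict, local_index: dict) -> bool:
    # Skip the download when container_did + signature + payload_hash already match.
    known = local_index.get(container_did)
    if known is None:
        return True
    return (known["signature"], known["payload_hash"]) != (
        remote_entry["signature"], remote_entry["payload_hash"])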

---

### 5.2 Message Types

| Message Type         | Purpose                                                                                                  |
| -------------------- | -------------------------------------------------------------------------------------------------------- |
| `CONTAINER_REQUEST`  | Request one or more containers (or their parts) by DID.                                                  |
| `CONTAINER_RESPONSE` | Response to a request — includes a list of containers ready for sending. Containers are sent separately. |
| `CONTAINER_INDEX`    | Publication of the agent's container index (see *General Principles*).                                   |
| `CONTAINER_DELTA`    | Incremental index update (new or modified containers).                                                   |
| `CONTAINER_ACK`      | Acknowledgment of successful container reception.                                                        |

---

#### **Message Examples**

**1. CONTAINER_REQUEST**

Agent A requests containers and/or only `referenced-by` records from Agent B:

{
  "type": "CONTAINER_REQUEST",
  "sender_did": "did:hmp:agent:A",
  "recipient": "did:hmp:agent:B",
  "payload": {
    "request_container": [
      "did:hmp:container:abc123",
      "did:hmp:container:def456"
    ],
    "request_referenced-by": [
      "did:hmp:container:abc123",
      "did:hmp:container:def456"
    ]
  }
}

2. CONTAINER_RESPONSE

Agent B informs which containers it is ready to send. The containers themselves are transmitted in separate messages:

{
  "type": "CONTAINER_RESPONSE",
  "sender_did": "did:hmp:agent:B",
  "recipient": "did:hmp:agent:A",
  "payload": {
    "available": [
      {
        "container_did": "did:hmp:container:abc123",
        "signature": "BASE64URL(...)"
      },
      {
        "container_did": "did:hmp:container:def456",
        "signature": "BASE64URL(...)"
      }
    ]
  }
}

3. CONTAINER_INDEX

Periodic publication of the container index (see General Principles). This message type replicates the structure of a container_index container and does not contain full data (payload only with metadata).


4. CONTAINER_DELTA

Sending an incremental index update with a reference timestamp. Used for synchronizing only new or modified containers:

{
  "type": "CONTAINER_DELTA",
  "sender_did": "did:hmp:agent:B",
  "payload": {
    "since": "2025-10-10T12:00:00Z",
    "added": {
      "did:hmp:container:new789": {
        "class": "goal",
        "payload_hash": "sha256:abcd...",
        "tags": ["ethics", "mesh"]
      }
    },
    "removed": []
  }
}

The removed field is optional. It can be used to indicate containers that the agent no longer stores (e.g., after cleaning local storage).


5. CONTAINER_ACK

Acknowledgment of successful container reception:

{
  "type": "CONTAINER_ACK",
  "sender_did": "did:hmp:agent:A",
  "recipient": "did:hmp:agent:B",
  "payload": {
    "acknowledged": [
      "did:hmp:container:abc123"
    ]
  }
}

5.3 Independent Transmission

  • Containers are sent in separate messages, without embedding in CONTAINER_RESPONSE.
  • Indexes (CONTAINER_INDEX), deltas (CONTAINER_DELTA), and containers themselves are processed independently.
  • This prevents blocking when transmitting large data and simplifies streaming synchronization.

5.4 Periodic Publication

Agents periodically publish their Container Index:

  • within the local network (LAN);
  • in the global Mesh;
  • or simultaneously in both environments.

This enables:

  • automatic peer discovery;
  • exchange of available container lists;
  • simplified synchronization among agents within the same ecosystem.

5.5 Scope of Distribution

Message and container transmission follows the network constraints specified in the container:

| Field | Purpose |
| --------- | ------------------------------------------------------------------------------------------------------------------ |
| recipient | DID of the target agent. If set, the container is sent directly. |
| broadcast | If true, the container is broadcast to all agents on the specified network. |
| network   | Defines the distribution scope ("localhost", "lan:<subnet>", "" — global Mesh). If set, broadcast is considered false. |

Thus, containers and indexes can be distributed in local (home, corporate) networks and in the global Mesh, depending on the chosen security policy.


5.6 referenced-by Updates

Containers of the class referenced-by are used for targeted transmission of link updates to existing containers. They allow agents to synchronize the graph of connections without sending the containers themselves.

Each agent receiving such a container verifies the sender's signature, updates the local referenced-by structure for the specified containers, and, if necessary, repackages them for further forwarding to other agents.

Transmission is addressed (via recipient) and is not intended for broadcast.

Example of a referenced-by container:

{
  "hmp_container": {
    "version": "1.2",
    "class": "referenced-by",
    "container_did": "did:hmp:container:refsync-01",
    "sender_did": "did:hmp:agent456",
    "sig_algo": "ed25519",
    "signature": "BASE64URL(...)",
    "timestamp": "2025-10-15T14:20:00Z",
    "recipient": ["did:hmp:agent123"],
    "broadcast": false,
    "network": "",
    "payload": {
      "did:hmp:container:abc123": {
        "links": [
          {
            "type": "depends_on",
            "target": "did:hmp:container:def789"
          },
          {
            "type": "in_reply_to",
            "target": "did:hmp:container:ghi321"
          }
        ]
      }
    }
  }
}

Upon receiving such a container, an agent:

  1. Verifies the sender's signature (signature) and the validity of the payload structure.
  2. Compares received links with known ones and adds new entries to the local referenced-by.
  3. If necessary, generates its own updated referenced-by container for further dissemination to other nodes.
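
A minimal merge sketch for step 2, assuming the payload structure shown above and that the sender's signature has already been verified; local_refs is a hypothetical in-memory map from container DID to its known backlinks.

def merge_referenced_by(local_refs: dict, payload: dict) -> dict:
    for target_did, data in payload.items():
        known = local_refs.setdefault(target_did, [])
        for link in data.get("links", []):
            if link not in known:        # keep only new (type, target) pairs
                known.append(link)
    return local_refs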

5.7 Note

A container can be requested by other agents via its container_did through the Mesh Container Exchange. An agent does not reload a container if its container_did and signature are already known and the payload_hash integrity matches. If only the referenced-by updates, partial transmission without sending the main container is allowed.


5.8 Container Distribution (MCE Summary)

Container Distribution is the process of delivering containers and their indexes provided by the Mesh Container Exchange mechanism. It considers:

  • addressing (recipient),
  • broadcast dissemination (broadcast),
  • network constraints (network),
  • TTL and retransmission policy.

Features:

  1. Separate Transmission: Indexes (CONTAINER_INDEX), deltas (CONTAINER_DELTA), and containers are sent as separate messages. This reduces the risk of blocking with large data and simplifies streaming synchronization.

  2. Integrity and Duplicate Check: Agents verify container_did + signature + payload_hash to avoid resending the same container.

  3. Support for Local and Global Networks: Transmission can occur over LAN, Mesh, or both simultaneously, respecting security policies and container destinations.

  4. Consistency with HMP Protocols: Container Distribution serves as the transport foundation for:

      • MCE — exchanging containers and their indexes;
      • CogSync — synchronizing cognitive and content states;
      • CogConsensus — synchronizing ethical and cognitive decisions.

Container Distribution does not change container structure or introduce new message types — it is a description of the delivery process and coordinated propagation, based on the rules recipient, broadcast, and network.


6. Core Protocols

Optional protocols build upon the network and container foundations to provide higher-level reasoning, synchronization, and governance capabilities between cognitive agents.


6.1 Cognitive Synchronization (CogSync)

CogSync provides mechanisms for temporal and semantic alignment between agents. It handles the propagation of diary entries, semantic graph updates, and cognitive state synchronization across the Mesh.

In HMP v5.0, CogSync is focused solely on data and reasoning synchronization, while consensus and voting have been extracted into a dedicated protocol — CogConsensus.


6.2 Mesh Consensus Protocol (CogConsensus)

In HMP v4.1, consensus mechanisms were implemented as part of the CogSync protocol.
Starting with v5.0, these mechanisms are extracted into a standalone protocol, CogConsensus,
to separate synchronization (data alignment) from decision-making (voting, validation, and ethical agreement).

CogConsensus governs distributed agreement across the Mesh through containerized voting, proof-chains, and verifiable aggregation of opinions.
Each decision, vote, or outcome is represented as an immutable container (class="vote", class="consensus_result", etc.).


6.2.1 Consensus Semantics and Voting Model

Overview

In HMP v5, consensus is not a centralized event but an emergent property of distributed reasoning.
Each agent computes locally which containers it considers mesh-acknowledged,
based on observed votes, trusted peers, and its individual ethical or reputation model.

Any container can be voted upon by others through linked containers of class "vote".


Voting Containers

Voting is expressed via dedicated containers referencing the target container:

{
  "class": "vote",
  "in_reply_to": "uuid:container-42",
  "payload": {
    "decision": "approve",
    "weight": 1.0
  },
  "metadata": {
    "ttl": "7d",
    "privacy": "public"
  }
}

Each vote container is signed by its author and extends the proof-chain of the target container.


Consensus Thresholds (Recommendations)

The mesh does not enforce hard thresholds. Agents are recommended to treat containers as “consensus-approved” once a visible quorum of valid votes is reached.

| Decision Type | Recommended Threshold | Context |
| ------------------------------------------ | ------------------------ | --------------------------------------- |
| General updates / factual data             | 50% + 1 of valid votes   | Technical or data-driven updates         |
| Ethical or governance decisions            | 2/3 majority             | Moral or high-risk contexts              |
| Lightweight reactions (“like” / “dislike”) | None (informational)     | Used for local reputation weighting      |

Each agent defines its own quorum policy — e.g. required voter reputation or time window for tallying.
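
A sketch of one possible local tally under the recommended thresholds; reputation weighting and the tallying time window are local policy, so the weight field and the ethical flag here are assumptions.

def consensus_state(votes: list, ethical: bool = False) -> str:
    approve = sum(v.get("weight", 1.0) for v in votes if v.get("decision") == "approve")
    reject = sum(v.get("weight", 1.0) for v in votes if v.get("decision") == "reject")
    total = approve + reject
    if total == 0:
        return "pending"
    threshold = 2 / 3 if ethical else 0.5   # 2/3 for ethical, 50% + 1 otherwise
    if approve / total > threshold:
        return "approved"
    if reject / total > threshold:
        return "rejected"
    return "contested"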


Reaction Votes

A vote container may also represent lightweight reactions, such as likes or dislikes:

{
  "class": "vote",
  "in_reply_to": "uuid:container-123",
  "payload": { "reaction": "like" }
}

Reactions have no formal consensus weight but can influence local trust and relevance estimation.


TTL and Temporal Consistency

Agents should respond (with vote, comment, or reply) using a ttl equal to or shorter than the referenced container. This ensures ephemeral discussions expire together and avoids long-tail propagation of outdated debates.


Consensus Visualization

Each container’s consensus state is derived locally based on:

  • Vote totals (approve / reject / neutral);
  • Weighted trust of voters (via ReputationRecord);
  • Time window and TTL alignment;
  • Contextual type (ethical, factual, procedural).

Example visualization:

  • Approved — quorum reached
  • ⚠️ Contested — conflicting votes
  • Pending — insufficient quorum
  • Rejected — majority reject

Consensus Flow Example

 [Goal Proposal]───>───┬──[Vote #1]──┬──>───[Refinement]
 (class="goal")        ├──[Vote #2]──┤  (class="consensus_result")
                       ├──[Vote #3]──┤

If the proposed goal is global, it may reference a container acting as a repository of global goals. Ethical validation is implicit — each agent applies its internal Ethical Governance Protocol (EGP) during voting.

Diagram interpretation: Votes extend the proof-chain of the target container. A consensus_result container may summarize the collective outcome (e.g., quorum, ethical alignment, or goal refinement).


Summary

  • Every container can be voted upon (class="vote").
  • Consensus is computed locally — no central authority.
  • Recommended thresholds: 50% + 1 for general, ⅔ for ethical.
  • Reactions (“likes”) are lightweight votes without consensus weight.
  • TTL alignment maintains temporal integrity of discussions.
  • Proof-chains connect all decision-related containers.

6.3 Goal Management Protocol (GMP)

6.4 Ethical Governance Protocol (EGP)

6.5 Intelligence Query Protocol (IQP)

6.5.1 Query propagation
6.5.2 Semantic agent discovery (by cognitive relevance)

6.6 Snapshot and Archive Protocol (SAP)

6.7 Message Routing & Delivery (MRD)

6.8 Reputation and Trust Exchange (RTE)

6.9 Distributed Container Propagation (DCP)


7. Data Models

7.1 Common data fields
7.2 Standard container classes
  7.2.1 AgentProfile
  7.2.2 Goal
  7.2.3 Task
  7.2.4 ConsensusVote
  7.2.5 EthicalDecision
  7.2.6 ReputationRecord
  7.2.7 SnapshotIndex
  7.2.8 WorkflowEntry — a unit of the cognitive cycle: a recorded action or reflection of the agent, including its input data, context, and result. This is the foundation for cognitive diaries.
  7.2.9 CognitiveDiaryEntry
  7.2.10 HMPContainerMetadata
  7.2.11 ContainerLink (in_reply_to/relation graph)
  7.2.12 MessageEnvelope — a container for direct message delivery (used by MRD).
  7.2.13 InterestProfile — a description of the node's interests and areas of expertise.
7.3 JSON schemas (normative descriptions of container classes)
7.4 Container usage matrix (who may create / process containers)


8. Cognitive Workflows

8.1 General concept of the cognitive cycle
8.2 Workflow containers (class="workflow_entry")
8.3 Agent REPL-cycle diagram (Think → Create → Publish → Reflect)
8.4 Context-passing and reference mechanisms
8.5 Conflict resolution and rollback containers


9. Trust, Security and Ethics

9.1 Authentication and identity proofs
9.2 Container signature verification (payload_hash, container_id)
9.3 Proof-chain verification
9.4 Key management (container_signing, network_handshake)
9.5 Encryption and compression policies
9.6 Ethical audit and verifiable reasoning
9.7 Privacy, redaction, zero-knowledge sharing
9.8 Snapshot and proof-chain security
9.9 Compliance with ethical governance rules (link to EGP)


10. Integration

This section replaces the former “Quick Start” and describes the practical integration of HMP into agents, LLMs, and external systems.

10.1 Integration philosophy (how agents connect to HMP mesh)
10.2 HMP as a subsystem in cognitive architectures (LLM-based, rule-based, hybrid)
10.3 Integration patterns:
  • Cognitive Agent ↔ HMP Core
  • HMP Mesh ↔ other distributed systems (Fediverse, IPFS, Matrix)
  • Translator nodes (protocol bridges)
10.4 Multi-mesh federation and knowledge exchange
10.5 Container repositories as knowledge backbones
10.6 Example integration flows:
  • LLM thinking via HMP workflow containers
  • Local mesh + external HMP relay
  • Cognitive data mirroring (agent ↔ mesh)


11. Implementation Notes

11.1 Interoperability with legacy v4.1 nodes
11.2 SDK guidelines and APIs
11.3 Performance and caching considerations
11.4 Testing and compliance recommendations
11.5 Reference implementations (optional)


12. Future Extensions

12.1 Planned modules:
  – Reputation Mesh
  – Cognitive Graph API
  – Container streaming
12.2 Cross-mesh bridging
12.3 Full DID registry and mesh authentication
12.4 OpenHog integration roadmap
12.5 Distributed Repository evolution (container trees)
12.6 v5.x roadmap


Appendices

A. JSON Examples
B. Protocol stack diagrams
C. Glossary
D. Revision history
E. Contributors and acknowledgments


📊 Summary of Relationships in One Diagram

  ┌──────────────────────┐
  │ HMP v5.0 Core Spec   │
  │  (HMP-0005.md)       │
  ├──────────────────────┤
  │  §3 Container Model  │ ← from HMP-container-spec.md
  │  §4 Network Layer    │ ← from dht_protocol.md
  │  §6 Protocols        │ ← from HMP v4.1 + new DCP/RTE/SAP
  │  §10 Integration     │ ← new practical integration guide
  └──────────────────────┘
