package tezos-protocol-environment

type ('ctxt, 'tree) ops = (module Tezos_protocol_environment__.Environment_context_intf.V5.S with type t = 'ctxt and type tree = 'tree)
type _ kind = private ..
type ('a, 'b) equality_witness
type cache

Abstract type of a cache. A cache is made of subcaches. Each subcache has its own size limit. The set of subcache limits is called the layout of the cache and can be initialized via the set_cache_layout function.

type t = private
  | Context : {
      kind : 'a kind;
      impl_name : string;
      ctxt : 'a;
      ops : ('a, 'b) ops;
      equality_witness : ('a, 'b) equality_witness;
      cache : cache;
    } -> t

A context is a purely functional description of a state. This state is used to interpret operations and, more generally, to validate blocks.

This type is private because a context must be constructed using make, which is a smart constructor.
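Although construction is restricted to make, the private constructor can still be deconstructed for read-only access. Below is a minimal sketch, assuming this module is in scope under the alias Context (an assumption about how it is opened or aliased).

let context_impl_name (ctxt : Context.t) : string =
  (* [t] is private: it cannot be built directly, but pattern matching is
     allowed, so read-only fields such as [impl_name] remain accessible. *)
  match ctxt with Context.Context { impl_name; _ } -> impl_name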

type key = string list

The type for context keys.

type value = bytes

The type for context values.

type tree

The type for context trees.
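As a quick illustration of these types, a key is a path (a list of segments) and a value is raw bytes. The path segments below are hypothetical, chosen only for the example, and the module is assumed to be aliased as Context.

(* Hypothetical key and value illustrating the shapes of [key] and [value]. *)
let counter_key : Context.key = ["hypothetical"; "counter"]
let counter_value : Context.value = Bytes.of_string "\x00\x2a"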

Getters

val mem : t -> key -> bool Lwt.t

mem t k is an Lwt promise that resolves to true iff k is bound to a value in t.

val mem_tree : t -> key -> bool Lwt.t

mem_tree t k is like mem but for trees.

val find : t -> key -> value option Lwt.t

find t k is an Lwt promise that resolves to Some v if k is bound to the value v in t and None otherwise.

val find_tree : t -> key -> tree option Lwt.t

find_tree t k is like find but for trees.
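A hedged usage sketch of the getters, combining mem and find under Lwt; the key and the Context alias are assumptions made for the example.

open Lwt.Syntax

(* Read a value if present, falling back to an empty default. *)
let read_or_default (ctxt : Context.t) : Context.value Lwt.t =
  let key = ["hypothetical"; "counter"] in
  let* present = Context.mem ctxt key in
  if not present then Lwt.return Bytes.empty
  else
    let* v = Context.find ctxt key in
    match v with
    | Some v -> Lwt.return v
    | None -> Lwt.return Bytes.empty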

val list : t -> ?offset:int -> ?length:int -> key -> (string * tree) list Lwt.t

list t key is the list of files and sub-nodes stored under key in t. The result order is not specified but is stable.

offset and length are used for pagination.
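For instance, pagination can be expressed as below; children_page, page and page_size are hypothetical names introduced for the sketch, and the module is assumed to be aliased as Context.

(* Fetch one page of the children of a node using offset/length. *)
let children_page (ctxt : Context.t) ~page ~page_size key :
    (string * Context.tree) list Lwt.t =
  Context.list ctxt ~offset:(page * page_size) ~length:page_size key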

val length : t -> key -> int Lwt.t

length t key is an Lwt promise that resolves to the number of files and sub-nodes stored under key in t.

It is equivalent to let+ l = list t key in List.length l but has constant-time complexity.
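The stated equivalence can be written as a small runtime check; length_agrees is a hypothetical helper, and the module is assumed to be aliased as Context.

open Lwt.Syntax

(* Check that [length] agrees with [List.length] of [list]. *)
let length_agrees (ctxt : Context.t) key : bool Lwt.t =
  let* n = Context.length ctxt key in
  let+ children = Context.list ctxt key in
  n = List.length children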

Setters

val add : t -> key -> value -> t Lwt.t

add t k v is an Lwt promise that resolves to c such that:

  • k is bound to v in c;
  • and c is similar to t otherwise.

If k was already bound in t to a value that is physically equal to v, the result of the function is a promise that resolves to t. Otherwise, the previous binding of k in t disappears.
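A sketch of this contract, with a hypothetical key and the Context alias assumed:

open Lwt.Syntax

(* After [add], [find] on the same key yields the new value; the input
   context is left untouched since contexts are purely functional values. *)
let set_flag (ctxt : Context.t) : Context.t Lwt.t =
  let key = ["hypothetical"; "flag"] in
  let v = Bytes.of_string "\001" in
  let* ctxt' = Context.add ctxt key v in
  let* found = Context.find ctxt' key in
  assert (found = Some v);
  Lwt.return ctxt'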

val add_tree : t -> key -> tree -> t Lwt.t

add_tree is like add but for trees.

val remove : t -> key -> t Lwt.t

remove t k is an Lwt promise that resolves to c such that:

  • k is unbound in c;
  • and c is similar to t otherwise.
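A sketch of this contract, mirroring the add example above (hypothetical key, Context alias assumed):

open Lwt.Syntax

(* After [remove], the key is unbound in the resulting context, while
   the original context keeps its binding, if any. *)
let delete_flag (ctxt : Context.t) : Context.t Lwt.t =
  let key = ["hypothetical"; "flag"] in
  let* ctxt' = Context.remove ctxt key in
  let* still_bound = Context.mem ctxt' key in
  assert (not still_bound);
  Lwt.return ctxt'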

Folding

val fold : ?depth:Tezos_context_sigs.Context.depth -> t -> key -> order:[ `Sorted | `Undefined ] -> init:'a -> f:(key -> tree -> 'a -> 'a Lwt.t) -> 'a Lwt.t

fold ?depth t root ~order ~init ~f recursively folds over the trees and values of t. The f callbacks are called with a key relative to root. f is never called with an empty key for values; i.e., folding over a value is a no-op.

The depth is 0-indexed. If depth is set (by default it is not), then f is only called when the condition described by the parameter holds:

  • Eq d folds over nodes and values of depth exactly d.
  • Lt d folds over nodes and values of depth strictly less than d.
  • Le d folds over nodes and values of depth less than or equal to d.
  • Gt d folds over nodes and values of depth strictly more than d.
  • Ge d folds over nodes and values of depth more than or equal to d.

If order is `Sorted (the default), the elements are traversed in lexicographic order of their keys. For large nodes, this is memory-consuming; use `Undefined for a more memory-efficient fold.
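A hedged sketch of a fold that collects keys; keys_under is a hypothetical helper, and the module is assumed to be aliased as Context.

(* Collect the keys (relative to [root]) of every subtree and value
   reachable under [root], in lexicographic order. *)
let keys_under (ctxt : Context.t) (root : Context.key) : Context.key list Lwt.t =
  Lwt.map List.rev
    (Context.fold ctxt root ~order:`Sorted ~init:[]
       ~f:(fun k _tree acc -> Lwt.return (k :: acc)))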

Configuration

config t is t's hash configuration.

module Tree : sig ... end
val set_protocol : t -> Tezos_crypto.Hashed.Protocol_hash.t -> t Lwt.t
val get_protocol : t -> Tezos_crypto.Hashed.Protocol_hash.t Lwt.t
val fork_test_chain : t -> protocol:Tezos_crypto.Hashed.Protocol_hash.t -> expiration:Tezos_base.TzPervasives.Time.Protocol.t -> t Lwt.t
module Proof : sig ... end
type tree_proof := Proof.tree Proof.t
type stream_proof := Proof.stream Proof.t
type ('proof, 'result) verifier := 'proof -> (tree -> (tree * 'result) Lwt.t) -> (tree * 'result, [ `Proof_mismatch of string | `Stream_too_long of string | `Stream_too_short of string ]) Stdlib.result Lwt.t
val verify_tree_proof : (tree_proof, 'a) verifier
val verify_stream_proof : (stream_proof, 'a) verifier
val make : kind:'a kind -> impl_name:string -> ctxt:'a -> ops:('a, 'b) ops -> equality_witness:('a, 'b) equality_witness -> t

make kind impl_name ctxt ops equality_witness builds a context value. In this context, the cache is uninitialized: one must call load_cache to obtain a context with a valid cache. Otherwise, the context cannot be used for protocol-level features that rely on the cache, e.g., smart contract execution.

type cache_key

A key uniquely identifies a cached value in some subcache.

type cache_value = ..

Abstract type for cached values. Cached values inhabit an extensible type.

This type is extensible because values stored in the cache are heterogeneous. Notice that the cache must be cleared during protocol stitching: the data constructors of this type are incompatible between two protocols, so if values built with a data constructor of an old protocol remain in the cache, the new protocol will find that some keys it is interested in hold unexploitable values.

module Cache : sig ... end

See Context.CACHE in sigs/v3/context.mli for documentation.

type block_cache = {
  context_hash : Tezos_crypto.Hashed.Context_hash.t;
  cache : cache;
}

A cache is a block-dependent value: to know whether a cache can be reused or recycled for a given block, we need to know which block produced it.

During its loading, a cache can be populated in two different ways:

  • values are computed immediately via the builder and inserted into the cache; or,
  • the computation of the values is delayed: each value is computed only when it is required.

The first mode is intended to be used after a node reboot, for example. Its main benefit is that it does not impact the validation time of a block, since the cache's values are reconstructed beforehand. The second mode is intended for RPCs, where reactivity is important: we do not want to recompute the full cache to execute the RPC, only the values that are necessary. A small sketch of choosing a source is given after the source_of_cache type below.

type source_of_cache = [
  | `Force_load
      Force the cache domain to be reloaded from the context.
  | `Load
      Load a cache by iterating over the keys of its domain and by building a cached value for each key.
      This operation can introduce a significant slowdown, proportional to the number of entries in the cache and depending on their nature. As a consequence, loading a cache from this source should be done when the system has no strict constraint on execution time, e.g., during startup.
  | `Lazy
      Same as `Load except that cached values are built on demand.
      This strategy makes load_cache run a lot faster, and the overall cost of loading the cache is only proportional to the number of entries actually used (and also depends on their nature).
      Notice that, contrary to the `Load source of cache, this loading mode may also introduce latencies when entries are actually used, since they are reconstructed on the fly.
      RPCs are a typical place where this lazy loading makes sense, since the number of entries used is generally low and the cache cannot be inherited (as in the next case).
  | `Inherited of block_cache * Tezos_crypto.Hashed.Context_hash.t
      When we already have some block_cache.cache in memory, coming from the validation of the block block_cache.context_hash, we can reuse or recycle its entries to reconstruct a cache to check some other block identified by a given Context_hash.t, which typically comes after block_cache.context_hash in the chain.
      This source is usually the most efficient way to build a cache in memory, since cache entries only change marginally from one block to one of its close descendants.
]
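The sketch below picks a source_of_cache for the situations described above; the helper names and inputs are hypothetical, and the module is assumed to be aliased as Context.

(* Choose a [source_of_cache] depending on the situation. *)
let source_for_rpc : Context.source_of_cache = `Lazy

let source_at_startup : Context.source_of_cache = `Load

let source_after_predecessor (prev : Context.block_cache)
    (hash : Tezos_crypto.Hashed.Context_hash.t) : Context.source_of_cache =
  `Inherited (prev, hash)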

To load a cache in memory, we need to iterate over its domain; for each key found in the domain, a builder produces the associated value.

load_cache predecessor ctxt source builder populates the in-memory cache with the values cached in the context during the validation of the predecessor block. To achieve that, the function uses the strategy described by source, exploiting the builder to create cached values that are not already available in memory.

The builder is assumed to never fail when evaluated on the keys of the cache domain. Indeed, if a key was bound to a value in the cache at some point in the past, it must have been a valid key. In other words, the construction of the cache should be reproducible. For this reason, an error in builder is fatal.
