Module Tezos_protocol_environment.Context
type ('ctxt, 'tree) ops =
(module Tezos_protocol_environment__.Environment_context_intf.V5.S
with type t = 'ctxt
and type tree = 'tree)
Abstract type of a cache. A cache is made of subcaches, each with its own size limit. The set of subcache limits is called a layout and can be initialized via the set_cache_layout function.
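The set_cache_layout function itself is not documented on this page; as a rough illustration under that caveat, a layout can be pictured as one size limit per subcache. The record and values below are hypothetical stand-ins, not part of this module.

(* Rough illustration only: the real cache and layout types are abstract here.
   A layout assigns a size limit to each subcache; set_cache_layout (not
   specified on this page) installs such limits in a context. *)
type subcache_layout = {subcache_index : int; size_limit : int}

let example_layout =
  [ {subcache_index = 0; size_limit = 100};      (* e.g. a small subcache *)
    {subcache_index = 1; size_limit = 10_000} ]  (* e.g. a large subcache *)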
type t = private
| Context : {
kind : 'a kind;
impl_name : string;
ctxt : 'a;
ops : ('a, 'b) ops;
equality_witness : ('a, 'b) equality_witness;
cache : cache;
} -> t
A context is a purely functional description of a state. This state is used to interpret operations and, more generally, to validate blocks.
This type is private because a context must be constructed using make, which is a smart constructor.
val fork_test_chain :
t ->
protocol:Tezos_crypto.Hashed.Protocol_hash.t ->
expiration:Tezos_base.TzPervasives.Time.Protocol.t ->
t Lwt.t
val set_hash_version :
t ->
Tezos_crypto.Hashed.Context_hash.Version.t ->
t Tezos_base.TzPervasives.tzresult Lwt.t
type tree_proof := Proof.tree Proof.t
type stream_proof := Proof.stream Proof.t
val make :
kind:'a kind ->
impl_name:string ->
ctxt:'a ->
ops:('a, 'b) ops ->
equality_witness:('a, 'b) equality_witness ->
t
make kind impl_name ctxt ops equality_witness builds a context value. In this context, the cache is uninitialized: one must call load_cache to obtain a context with a valid cache. Otherwise, the context cannot be used for the protocol-level features that rely on the cache, e.g., smart contract execution.
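As a minimal sketch of how these pieces fit together: My_kind, backend_ctxt, my_ops, my_witness, predecessor and builder below are hypothetical stand-ins for values a backend would provide; only make and load_cache come from this module.

(* Hedged sketch: build a context, then load its cache before using any
   cache-dependent protocol feature such as smart contract execution. *)
open Tezos_protocol_environment

let prepare_context () =
  let ctxt =
    Context.make
      ~kind:My_kind
      ~impl_name:"my-backend"
      ~ctxt:backend_ctxt
      ~ops:my_ops
      ~equality_witness:my_witness
  in
  (* The cache is uninitialized at this point. *)
  Context.load_cache predecessor ctxt `Lazy builder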
A key uniquely identifies a cached value in some subcache.
Abstract type for cached values.
This type is extensible since the values stored in the cache are heterogeneous. Notice that the cache must be cleared during protocol stitching because the data constructors of this type are incompatible between two protocols: if values built with a data constructor of the old protocol remain, the new protocol will find that some keys it is interested in carry unusable values.
Cached values inhabit an extensible type.
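The incompatibility described above can be illustrated with a self-contained extensible type; this is a stand-alone sketch and does not use the module's actual cached-value type, whose name is not shown on this page.

(* Stand-alone illustration of why stale cache entries become unusable
   across protocol stitching. *)
type value = ..

module Proto_old = struct
  type value += Contract_code of string
end

module Proto_new = struct
  type value += Contract_code of bytes
end

(* A value built with Proto_old.Contract_code never matches
   Proto_new.Contract_code: the new protocol falls through to the wildcard
   and cannot exploit the entry, hence the cache must be cleared when
   switching protocols. *)
let usable_by_new_protocol (v : value) =
  match v with Proto_new.Contract_code _ -> true | _ -> false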
A cache is a block-dependent value: to know whether a cache can be reused or recycled for a given block, we need to know the block that produced it.
During its loading, a cache can be populated in two different ways:
- values are computed immediately via the builder and inserted into the cache; or,
- the computation of the values is delayed and only performed when a value is actually required.
The first mode is intended to be used after a reboot of the node, for example. Its main benefit is that it does not impact the validation time of a block, since the cache's values are reconstructed beforehand. The second mode is intended for RPCs, where reactivity is important: we do not want to recompute the full cache to execute the RPC, but only the values that are necessary.
type source_of_cache = [
  | `Force_load
      (* Force the cache domain to be reloaded from the context. *)
  | `Load
      (* Load a cache by iterating over the keys of its domain and by building a cached value for each key.
         This operation can introduce a significant slowdown proportional to the number of entries in the cache, and depending on their nature. As a consequence, loading a cache from that source should be done when the system has no strict constraint on execution time, e.g., during startup. *)
  | `Lazy
      (* Same as `Load except that cached values are built on demand.
         This strategy makes load_cache run a lot faster, and the overall cost of loading the cache is only proportional to the number of entries actually used (and also depends on their nature). Notice that, contrary to the `Load source of cache, this loading mode may also introduce latencies when entries are actually used, since they are reconstructed on the fly.
         RPCs are a typical place where this `Lazy loading makes sense, since the number of entries used is generally low and the cache cannot be inherited (as in the next case). *)
  | `Inherited of block_cache * Tezos_crypto.Hashed.Context_hash.t
      (* When we already have some block_cache.cache in memory, coming from the validation of the block block_cache.context_hash, we can reuse or recycle its entries to reconstruct a cache to check some other block identified by a given Context_hash.t, which typically comes after block_cache.context_hash in the chain.
         This source is usually the most efficient way to build a cache in memory, since the cache entries only change marginally from one block to one of its close descendants. *)
]
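A hedged sketch of how a caller might pick a source_of_cache depending on the situation; only load_cache and the source_of_cache variants come from this module, and predecessor, ctxt, builder, prev_block_cache and target_context_hash are hypothetical stand-ins.

open Tezos_protocol_environment

(* After a node restart, pay the full cost up front so that block
   validation is not slowed down later. *)
let warm_cache_at_startup predecessor ctxt builder =
  Context.load_cache predecessor ctxt `Load builder

(* For an RPC, only build the entries that are actually consulted. *)
let cache_for_rpc predecessor ctxt builder =
  Context.load_cache predecessor ctxt `Lazy builder

(* During sequential validation, recycle the cache built while validating
   the previous block to check the block whose context hash is
   target_context_hash. *)
let cache_from_previous_block predecessor ctxt builder prev_block_cache
    target_context_hash =
  Context.load_cache predecessor ctxt
    (`Inherited (prev_block_cache, target_context_hash))
    builder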
To load a cache in memory (load_cache), we need to iterate over its domain; for each key found in the domain, a builder produces the associated value.
val load_cache :
Tezos_crypto.Hashed.Block_hash.t ->
t ->
source_of_cache ->
builder ->
t Tezos_base.TzPervasives.tzresult Lwt.t
load_cache predecessor ctxt source builder populates the in-memory cache with the values cached in the current context during the validation of the predecessor block. To achieve that, the function uses the strategy described by source, exploiting the builder to create cached values that are not already available in memory.
The builder is assumed never to fail when evaluated on the keys of the cache domain. Indeed, if a key had an associated value in the cache at some point in the past, it should have been a valid key. In other words, the construction of the cache should be reproducible. For this reason, an error in builder is fatal.
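A hedged sketch of a cache reload honouring that invariant; the concrete type of builder is not shown on this page, so we assume it maps a cache key to its reconstructed value in the tzresult Lwt monad, and recompute_value is a hypothetical helper that can rebuild any value that once lived in the cache.

(* Per the invariant above, the builder must succeed for every key of the
   cache domain: the construction of the cache is reproducible, so any
   error it reports is treated as fatal. *)
let builder key = recompute_value key

(* Force a full reload of the cache from the context, e.g. after a restart. *)
let reload_after_restart predecessor ctxt =
  Tezos_protocol_environment.Context.load_cache predecessor ctxt `Force_load builder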